
Joint entity recognition and relation extraction as a multi-head selection problem

Abstract

State-of-the-art models for joint entity recognition and relation extraction strongly rely on external natural language processing (NLP) tools such as POS (part-of-speech) taggers and dependency parsers. Thus, the performance of such joint models depends on the quality of the features obtained from these NLP tools. However, these features are not always accurate for various languages and contexts. In this paper, we propose a joint neural model which performs entity recognition and relation extraction simultaneously, without the need for any manually extracted features or the use of any external tool. Specifically, we model the entity recognition task using a CRF (Conditional Random Fields) layer and the relation extraction task as a multi-head selection problem (i.e., potentially identifying multiple relations for each entity). We present an extensive experimental setup to demonstrate the effectiveness of our method using datasets from various contexts (i.e., news, biomedical, real estate) and languages (i.e., English, Dutch). Our model outperforms previous neural models that use automatically extracted features, while performing within a reasonable margin of feature-based neural models, and in some cases even beating them.
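
To make the multi-head selection idea concrete, here is a minimal, illustrative PyTorch sketch of the relation-scoring part described in the abstract. It is not the authors' code: the layer names, dimensions, and relation count are invented for illustration, and the CRF layer used for the entity-recognition side is omitted.

```python
# Illustrative sketch (assumed dimensions, not the paper's implementation):
# every token is scored against every candidate head token for every relation
# label, and a sigmoid lets each token keep multiple (head, relation) pairs.
import torch
import torch.nn as nn

class MultiHeadSelection(nn.Module):
    def __init__(self, hidden_dim=128, rel_dim=64, num_relations=5):
        super().__init__()
        self.head_proj = nn.Linear(hidden_dim, rel_dim)   # projects candidate head tokens
        self.dep_proj = nn.Linear(hidden_dim, rel_dim)    # projects the current token
        self.scorer = nn.Linear(rel_dim, num_relations)   # one score per relation label

    def forward(self, token_states):
        # token_states: (batch, seq_len, hidden_dim), e.g. encoder/BiLSTM outputs.
        heads = self.head_proj(token_states).unsqueeze(1)  # (B, 1, T, rel_dim)
        deps = self.dep_proj(token_states).unsqueeze(2)    # (B, T, 1, rel_dim)
        scores = self.scorer(torch.tanh(heads + deps))     # (B, T, T, num_relations)
        # Sigmoid (not softmax) so each token may select several heads/relations.
        return torch.sigmoid(scores)

probs = MultiHeadSelection()(torch.randn(2, 6, 128))       # -> (2, 6, 6, 5)
```

Using a sigmoid over candidate heads instead of a softmax is what allows a single entity to take part in several relations at once, which is the point of the "multi-head selection" formulation.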

Read more »

PyTorch

Object Detection Basics

What Is Object Detection

Object detection, also called object extraction, is a form of image segmentation based on the geometric and statistical features of targets. It combines target segmentation and recognition into a single step, and its accuracy and real-time performance are key capabilities of the overall system. This matters especially in complex scenes where multiple targets must be processed in real time, making automatic target extraction and recognition particularly important.
With the development of computer technology and the widespread application of computer vision, research on real-time target tracking with image processing techniques has become increasingly popular. Dynamic, real-time tracking and localization of targets has broad application value in intelligent transportation systems, intelligent surveillance, military target detection, and the localization of surgical instruments in image-guided surgery.
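
As a concrete starting point, the sketch below runs an off-the-shelf pretrained detector with torchvision. The model choice (fasterrcnn_resnet50_fpn), the dummy input tensor, and the 0.5 score threshold are illustrative assumptions, not something prescribed by this post.

```python
# Minimal sketch of one detection forward pass, assuming torchvision is installed.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

image = torch.rand(3, 480, 640)            # stand-in for a real RGB image in [0, 1]
with torch.no_grad():
    (pred,) = model([image])               # list of images in, one dict per image out

keep = pred["scores"] > 0.5                # discard low-confidence detections
print(pred["boxes"][keep], pred["labels"][keep])
```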

Read more »

Libra R-CNN: Towards Balanced Learning for Object Detection

Abstract

Compared with model architectures, the training process, which is also crucial to the success of detectors, has received relatively little attention in object detection. In this work, we carefully revisit the standard training practice of detectors and find that detection performance is often limited by imbalance during training, which generally occurs at three levels: sample level, feature level, and objective level. To mitigate the adverse effects this causes, we propose Libra R-CNN, a simple but effective framework for balanced learning in object detection. It integrates three novel components: IoU-balanced sampling, a balanced feature pyramid, and balanced L1 loss, which reduce the imbalance at the sample, feature, and objective levels, respectively. Thanks to its overall balanced design, Libra R-CNN significantly improves detection performance. Without bells and whistles, it achieves 2.5 points and 2.0 points higher Average Precision (AP) than FPN Faster R-CNN and RetinaNet, respectively, on MSCOCO.
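
Of the three components, the balanced L1 loss is the easiest to show in a few lines. The sketch below follows the published formula with the paper's default hyper-parameters (alpha = 0.5, gamma = 1.5); it is an illustration written for this post, not the authors' reference implementation.

```python
# Balanced L1 loss sketch: below the threshold `beta` it uses a log-shaped
# branch that promotes the gradients of small (inlier) errors, above it the
# loss grows linearly like L1; `b` is chosen so the two branches meet at beta.
import math
import torch

def balanced_l1_loss(pred, target, alpha=0.5, gamma=1.5, beta=1.0):
    diff = torch.abs(pred - target)
    b = math.exp(gamma / alpha) - 1
    return torch.where(
        diff < beta,
        alpha / b * (b * diff + 1) * torch.log(b * diff / beta + 1) - alpha * diff,
        gamma * diff + gamma / b - alpha * beta,
    )

print(balanced_l1_loss(torch.tensor([0.2, 2.0]), torch.tensor([0.0, 0.0])))
```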

Read more »

Significance of Pre-trained Language Models

  1. Instead of training a model from scratch, you can take a pre-trained model as the basis and only fine-tune it to solve the specific NLP task.
  2. Using pre-trained models lets you reach the same or even better performance much faster and with much less labeled data (see the sketch after this list).
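
As a rough illustration of point 2, here is a minimal fine-tuning sketch. It assumes the Hugging Face transformers library and a tiny made-up binary sentiment task; the model name, data, and hyper-parameters are placeholders, not recommendations from this post.

```python
# Fine-tuning sketch: load a pre-trained encoder with a fresh classification
# head and adapt it on a handful of labeled examples.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

texts = ["a great movie", "a boring movie"]   # placeholder labeled data
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)  # small LR: we adapt, not retrain

model.train()
for _ in range(3):                              # a few passes are often enough
    outputs = model(**batch, labels=labels)     # loss is computed internally
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```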
Read more »

YOLO: You Only Look Once

YOLOv3: An Incremental Improvement

We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that’s pretty swell. It’s a little bigger than last time but more accurate. It’s still fast though, don’t worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL.
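
For readers unfamiliar with the ".5 IOU" metric mentioned above: a prediction counts as correct at mAP@50 when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. The small sketch below was written for this post (boxes as (x1, y1, x2, y2) tuples) and only shows how that overlap is computed.

```python
# IoU sketch: overlap area divided by the area of the union of two boxes.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 0, 15, 10)) >= 0.5)  # IoU = 1/3 -> False at AP50
```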

Read more »