ARElight: Context Sampling of Large Texts for Deep Learning Relation Extraction

Cited by: 0
Authors
Rusnachenko, Nicolay [1 ]
Liang, Huizhi [1 ]
Kalameyets, Maksim [1 ]
Shi, Lei [1 ]
Affiliations
[1] Newcastle Univ, Sch Comp, Newcastle Upon Tyne, Tyne & Wear, England
Funding
UK Research and Innovation (UKRI);
Keywords
Data Processing Pipeline; Information Retrieval; Visualisation;
DOI
10.1007/978-3-031-56069-9_23
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The escalating volume of textual data necessitates adept and scalable Information Extraction (IE) systems in the field of Natural Language Processing (NLP) to analyse massive text collections in detail. While most deep learning systems are designed to process textual information as-is, the interface between a document and annotations of its parts remains poorly supported. Concurrently, a major limitation of most deep learning models is their constrained input size, a consequence of architectural and computational specifics. To address this, we introduce ARElight(1), a system designed to efficiently manage and extract information from sequences of large documents by dividing them into segments containing mentioned object pairs. Through a pipeline comprising modules for text sampling, inference, optional graph operations, and visualisation, the proposed system transforms large volumes of text in a structured manner. Practical applications of ARElight are demonstrated across diverse use cases, including literature processing and social network analysis. ((1) https://github.com/nicolay-r/ARElight)
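The context-sampling step the abstract describes — dividing a long document into bounded segments that mention a pair of objects — can be illustrated with a minimal sketch. This is not ARElight's actual API; the function name, entity list, and sentence-splitting heuristic below are illustrative assumptions only.

```python
# Illustrative sketch of context sampling: split a long document into
# sentence-level segments and keep only those segments in which both
# members of an object (entity) pair are mentioned. The segment window
# bounds the input size passed to a downstream relation-extraction model.
import re
from itertools import combinations

def sample_contexts(text, entities, window=1):
    """Return (entity_a, entity_b, segment) triples for co-mentions.

    window: number of consecutive sentences merged into each segment.
    """
    # Naive sentence splitter on terminal punctuation (an assumption;
    # a real pipeline would use a proper tokeniser).
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    samples = []
    for i in range(0, len(sentences), window):
        segment = " ".join(sentences[i:i + window])
        mentioned = [e for e in entities if e in segment]
        # Emit one sample per entity pair co-mentioned in this segment.
        for a, b in combinations(mentioned, 2):
            samples.append((a, b, segment))
    return samples

doc = ("ARElight was developed at Newcastle University. "
       "The system samples contexts for relation extraction. "
       "Newcastle University hosts the research group behind the system.")

pairs = sample_contexts(doc, ["ARElight", "Newcastle University"], window=1)
```

Only the first sentence mentions both objects, so only that segment becomes a sample; segments with fewer than two mentions are discarded, which is what keeps the sampled contexts small enough for a fixed-input-size model.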
Pages: 229-235
Page count: 7
Related Papers
50 records
  • [1] Knez, Timotej; Zitnik, Slavko. Multimodal learning for temporal relation extraction in clinical texts. JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION, 2024, 31(06): 1380-1387.
  • [2] Zhang, Hongjun; Feng, Yuntian; Hao, Wenning; Chen, Gang; Jin, Dawei. Relation Extraction with Deep Reinforcement Learning. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2017, E100D(08): 1893-1902.
  • [3] Wan, Zhen; Cheng, Fei; Mao, Zhuoyuan; Liu, Qianying; Song, Haiyue; Li, Jiwei; Kurohashi, Sadao. GPT-RE: In-context Learning for Relation Extraction using Large Language Models. 2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023: 3534-3547.
  • [4] Zeng, Xiangrong; He, Shizhu; Liu, Kang; Zhao, Jun. Large Scaled Relation Extraction with Reinforcement Learning. THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018: 5658-5665.
  • [5] Asada, Masaki; Fukuda, Ken. Enhancing Relation Extraction from Biomedical Texts by Large Language Models. ARTIFICIAL INTELLIGENCE IN HCI, PT III, AI-HCI 2024, 2024, 14736: 3-14.
  • [6] Yu, Erxin; Jia, Yantao; Wang, Shang; Li, Fengfu; Chang, Yi. Context and Type Enhanced Representation Learning for Relation Extraction. 11TH IEEE INTERNATIONAL CONFERENCE ON KNOWLEDGE GRAPH (ICKG 2020), 2020: 329-335.
  • [7] Cao, Yixin; Kuang, Jun; Gao, Ming; Zhou, Aoying; Wen, Yonggang; Chua, Tat-Seng. Learning Relation Prototype From Unlabeled Texts for Long-Tail Relation Extraction. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35(02): 1761-1774.
  • [8] Wu, Kehan; Zhang, Xueying; Dang, Yulong; Ye, Peng. Deep learning models for spatial relation extraction in text. GEO-SPATIAL INFORMATION SCIENCE, 2023, 26(01): 58-70.
  • [9] Noriega-Atala, Enrique; Hein, Paul D.; Thumsi, Shraddha S.; Wong, Zechy; Wang, Xia; Morrison, Clayton T. Inter-sentence Relation Extraction for Associating Biological Context with Events in Biomedical Texts. 2018 18TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS (ICDMW), 2018: 722-731.
  • [10] Yin, Libo; Meng, Xiang; Li, Jianxun; Sun, Jianguo. Relation Extraction for Massive News Texts. CMC-COMPUTERS MATERIALS & CONTINUA, 2019, 60(01): 275-285.