A Novel Joint Extraction Model for Entity Relations Using Interactive Encoding and Visual Attention

Related References

Note: Only a partial list of references is shown here; download the original article for the full reference information.
Article Computer Science, Artificial Intelligence

A dynamic graph expansion network for multi-hop knowledge base question answering

Wenqing Wu et al.

Summary: This paper proposes a method that dynamically expands subgraphs for multi-hop knowledge base question answering. By connecting different subgraphs at each step and generating strong intermediate signals, the method is able to reach the correct answer.

NEUROCOMPUTING (2023)

Article Engineering, Electrical & Electronic

LightingNet: An Integrated Learning Method for Low-Light Image Enhancement

Shaoliang Yang et al.

Summary: This paper proposes an integrated learning approach (LightingNet) for enhancing low-light images. LightingNet consists of two core components: a complementary learning sub-network and a vision transformer (ViT) low-light enhancement sub-network. The complementary learning sub-network provides globally fine-tuned features through transfer learning, while the ViT sub-network provides local high-level features through a full-scale architecture. Extensive experiments confirm the effectiveness of LightingNet.

IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING (2023)

Article Computer Science, Artificial Intelligence

A Joint Entity and Relation Extraction Model based on Efficient Sampling and Explicit Interaction

Qibin Li et al.

Summary: Joint entity and relation extraction is a framework that unifies entity recognition and relationship extraction, leveraging dependencies between the tasks to improve performance. However, existing methods suffer from blurred entity boundaries and insufficient implicit interactions between modules. To address these issues, this study proposes a joint entity and relation extraction model based on efficient sampling and explicit interaction, improving entity boundary extraction through controlled negative sample division and enhancing interaction between modules with a heterogeneous graph neural network. The proposed method significantly improves the model's discriminative power and F1 scores on multiple datasets.

ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY (2023)

Article Physics, Multidisciplinary

A Joint Extraction Model for Entity Relationships Based on Span and Cascaded Dual Decoding

Tao Liao et al.

Summary: This paper proposes a new joint entity-relation extraction model based on span representation and cascaded dual decoding. The model effectively identifies overlapping relations by dividing the input text into span-based word vectors, decoding the relation type over the span sequence, and then extracting the head and tail entities with a Bi-LSTM network. Experiments demonstrate that the model achieves higher F1 scores than other baseline models on the NYT and WebNLG datasets.

ENTROPY (2023)

Article Computer Science, Artificial Intelligence

A relation aware embedding mechanism for relation extraction

Xiang Li et al.

Summary: Extracting possible relational triples from natural language text is a fundamental task of information extraction. This study proposes a Relation-Aware Embedding Mechanism (RA) to improve relation extraction performance, and conducts extensive experiments on widely used datasets, with encouraging results.

APPLIED INTELLIGENCE (2022)

Article Engineering, Electrical & Electronic

Rethinking Low-Light Enhancement via Transformer-GAN

Shaoliang Yang et al.

Summary: We propose a Vision Transformer-based Generative Adversarial Network (Transformer-GAN) for enhancing low-light images. Our method consists of feature extraction and image reconstruction sub-networks, and introduces a multi-head multi-covariance self-attention mechanism and a light feature-forward module. Experiments show that our method outperforms existing methods on low-light datasets.

IEEE SIGNAL PROCESSING LETTERS (2022)

Proceedings Paper Computer Science, Artificial Intelligence

A Trigger-Sense Memory Flow Framework for Joint Entity and Relation Extraction

Yongliang Shen et al.

Summary: Efforts on joint entity and relation extraction focus on enhancing the interaction between entity recognition and relation extraction, yet issues such as weak interaction and overlooked relation triggers remain. The proposed TriMF framework addresses these challenges with a memory module, a multi-level memory-flow attention mechanism, and a trigger sensor module that together improve relation extraction performance. Experimental results demonstrate state-of-the-art improvements across multiple datasets.

PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2021 (WWW 2021) (2021)

Proceedings Paper Acoustics

SA-Net: Shuffle Attention for Deep Convolutional Neural Networks

Qing-Long Zhang et al.

Summary: Attention mechanisms are crucial for enhancing the performance of deep neural networks. The proposed Shuffle Attention (SA) module effectively combines two types of attention mechanisms, achieving better performance while reducing computational complexity.

2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021) (2021)

Article Multidisciplinary Sciences

Joint Entity-Relation Extraction via Improved Graph Attention Networks

Qinghan Lai et al.

SYMMETRY-BASEL (2020)