Article

Explainability in transformer models for functional genomics

Journal

BRIEFINGS IN BIOINFORMATICS
Volume 22, Issue 5, Pages -

Publisher

OXFORD UNIV PRESS
DOI: 10.1093/bib/bbab060

Keywords

interpretable neural networks; transformers; functional genomics; DNA-binding sites

Funding

  1. Ghent University [BOF24j2016001002]
  2. Flemish Government under the 'Onderzoeksprogramma Artificiele Intelligentie (AI) Vlaanderen' programme


The paper introduces a new approach to gather insights on the transcription process in Escherichia coli, utilizing a transformer-based neural network framework to identify transcription factors and characterize their binding sites and consensus sequences.
The effectiveness of deep learning methods can be largely attributed to the automated extraction of relevant features from raw data. In the field of functional genomics, this generally concerns the automatic selection of relevant nucleotide motifs from DNA sequences. To benefit from automated learning methods, new strategies are required that unveil the decision-making process of trained models. In this paper, we present a new approach that has been successful in gathering insights on the transcription process in Escherichia coli. This work builds upon a transformer-based neural network framework designed for prokaryotic genome annotation purposes. We find that the majority of subunits (attention heads) of the model are specialized towards identifying transcription factors and are able to successfully characterize both their binding sites and consensus sequences, uncovering both well-known and potentially novel elements involved in the initiation of the transcription process. With the specialization of the attention heads occurring automatically, we believe transformer models to be of high interest towards the creation of explainable neural networks in this field.
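The abstract describes inspecting individual attention heads to locate the DNA positions a transformer focuses on, and summarizing those positions into binding sites and consensus sequences. The following minimal sketch (not the authors' code) illustrates the general idea: compute per-head self-attention maps over a one-hot-encoded DNA sequence and pick the window that receives the most attention as a crude candidate site. The model sizes, random weights, toy sequence and window width are illustrative assumptions; a trained genome-annotation model would supply the real weights.

```python
# Minimal sketch, assuming a single multi-head self-attention layer over DNA;
# all sizes and the toy sequence below are illustrative, not from the paper.
import torch
import torch.nn.functional as F

NUC = "ACGT"

def one_hot(seq: str) -> torch.Tensor:
    """Encode a DNA string as a (length, 4) one-hot tensor."""
    idx = torch.tensor([NUC.index(c) for c in seq])
    return F.one_hot(idx, num_classes=4).float()

def per_head_attention(x: torch.Tensor, n_heads: int = 4, d_model: int = 16,
                       seed: int = 0) -> torch.Tensor:
    """Return (n_heads, L, L) attention maps from one randomly initialised
    multi-head self-attention layer (a trained model would supply real weights)."""
    torch.manual_seed(seed)
    L, _ = x.shape
    d_head = d_model // n_heads
    w_in = torch.randn(4, d_model) / 2.0                        # input embedding
    w_q = torch.randn(d_model, d_model) / d_model ** 0.5        # query projection
    w_k = torch.randn(d_model, d_model) / d_model ** 0.5        # key projection
    h = x @ w_in                                                # (L, d_model)
    q = (h @ w_q).reshape(L, n_heads, d_head).transpose(0, 1)   # (H, L, d_head)
    k = (h @ w_k).reshape(L, n_heads, d_head).transpose(0, 1)
    scores = q @ k.transpose(1, 2) / d_head ** 0.5              # (H, L, L)
    return scores.softmax(dim=-1)

def candidate_site(attn: torch.Tensor, seq: str, width: int = 6) -> str:
    """Pick the window whose positions receive the most total attention,
    summed over heads and query positions, as a crude binding-site candidate."""
    received = attn.sum(dim=(0, 1))                  # attention received per position
    window_scores = received.unfold(0, width, 1).sum(dim=1)
    start = int(window_scores.argmax())
    return seq[start:start + width]

if __name__ == "__main__":
    seq = "TTGACAATTAATCATCGGCTCGTATAATGTGTGGA"     # toy promoter-like sequence
    attn = per_head_attention(one_hot(seq))
    print("candidate high-attention window:", candidate_site(attn, seq))
```

In practice, windows extracted this way from many sequences could be stacked into a position frequency matrix to derive a consensus motif for each head, which is the kind of per-head specialization the abstract refers to.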

