Article

DAN: A Segmentation-Free Document Attention Network for Handwritten Document Recognition

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2023.3235826

Keywords

Layout; Text recognition; Task analysis; Image segmentation; Handwriting recognition; Transformers; Annotations; Handwritten text recognition; layout analysis; segmentation-free; Seq2Seq model; transformer

Abstract

This paper proposes an end-to-end, segmentation-free network called the Document Attention Network (DAN) for handwritten document recognition. The model labels text parts and sequentially outputs characters and logical layout tokens, achieving competitive results on the READ 2016 dataset and performing well on the RIMES 2009 dataset.
Unconstrained handwritten text recognition is a challenging computer vision task. It is traditionally handled by a two-step approach combining line segmentation followed by text line recognition. For the first time, we propose an end-to-end segmentation-free architecture for the task of handwritten document recognition: the Document Attention Network. In addition to text recognition, the model is trained to label text parts using begin and end tags in an XML-like fashion. The model is made up of an FCN encoder for feature extraction and a stack of transformer decoder layers for a recurrent token-by-token prediction process. It takes whole text documents as input and sequentially outputs characters as well as logical layout tokens. Unlike existing segmentation-based approaches, the model is trained without any segmentation labels. We achieve competitive results on the READ 2016 dataset at page level and at double-page level, with CERs of 3.43% and 3.70%, respectively. We also provide results for the RIMES 2009 dataset at page level, reaching a CER of 4.54%. We provide all source code and pre-trained model weights at https://github.com/FactoDeepLearning/DAN.
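The abstract outlines the architecture at a high level: an FCN encoder extracts features from the whole page image, and a stack of transformer decoder layers autoregressively predicts a single sequence that mixes characters with XML-like layout tags. The PyTorch sketch below illustrates that structure; the module sizes, layer counts, and the tag names mentioned in comments are illustrative assumptions, not the authors' implementation (the real code and weights are in the repository linked above).

```python
# Minimal sketch of a DAN-style model, assuming an FCN encoder over the full
# page image and a transformer decoder emitting characters plus layout tags.
# All hyperparameters here are placeholders.
import torch
import torch.nn as nn


class DANSketch(nn.Module):
    def __init__(self, vocab_size, d_model=256, num_decoder_layers=8, nhead=4):
        super().__init__()
        # FCN encoder: strided convolutions turn the page image into a 2D
        # feature map, later flattened into a sequence of visual tokens.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        # The token vocabulary covers characters plus logical layout tags,
        # e.g. begin/end markers for paragraphs or margin annotations.
        self.embedding = nn.Embedding(vocab_size, d_model)
        decoder_layer = nn.TransformerDecoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True
        )
        self.decoder = nn.TransformerDecoder(decoder_layer, num_decoder_layers)
        self.output_proj = nn.Linear(d_model, vocab_size)

    def forward(self, images, target_tokens):
        # images: (B, 3, H, W) whole document pages
        # target_tokens: (B, T) previous tokens (teacher forcing at training)
        features = self.encoder(images)                  # (B, C, H', W')
        memory = features.flatten(2).transpose(1, 2)     # (B, H'*W', C)
        tgt = self.embedding(target_tokens)              # (B, T, C)
        causal_mask = nn.Transformer.generate_square_subsequent_mask(
            target_tokens.size(1)
        )
        decoded = self.decoder(tgt, memory, tgt_mask=causal_mask)
        return self.output_proj(decoded)                 # (B, T, vocab_size)
```

Under such a scheme, a training target for a page could look like a tagged character sequence (for example, a paragraph's text wrapped in begin/end paragraph tokens), so a single output head covers both text and logical layout, and no segmentation labels are required.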

Authors

Denis Coquenet, Clément Chatelain, Thierry Paquet
