Proceedings Paper

Scene Text Telescope: Text-Focused Scene Image Super-Resolution

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR46437.2021.01185


Funding

  1. STCSM Projects [20511100400, 20511102702]
  2. Shanghai Municipal Science and Technology Major Projects [2017SHZDZX01, 2018SHZDZX01]
  3. Shanghai Research and Innovation Functional Program [17DZ2260900]
  4. Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning
  5. ZJLab


The study introduces a text-focused super-resolution framework that uses a Transformer-based network with a self-attention module to extract sequential information, together with position-aware and content-aware modules to emphasize the position and content of each character. A weighted cross-entropy loss addresses characters that are hard to distinguish under low-resolution conditions.
Image super-resolution, which is often regarded as a preprocessing procedure for scene text recognition, aims to recover realistic features from a low-resolution text image. It has always been challenging due to large variations in text shapes, fonts, backgrounds, etc. However, most existing methods employ generic super-resolution frameworks to handle scene text images while ignoring text-specific properties such as text-level layouts and character-level details. In this paper, we establish a text-focused super-resolution framework, called Scene Text Telescope (STT). In terms of text-level layouts, we propose a Transformer-Based Super-Resolution Network (TBSRN) containing a Self-Attention Module to extract sequential information, which is robust in handling text in arbitrary orientations. In terms of character-level details, we propose a Position-Aware Module and a Content-Aware Module to highlight the position and the content of each character. Observing that some characters look indistinguishable in low-resolution conditions, we use a weighted cross-entropy loss to tackle this problem. We conduct extensive experiments, including text recognition with pre-trained recognizers and image quality evaluation, on TextZoom and several scene text recognition benchmarks to assess the super-resolution images. The experimental results show that our STT can indeed generate text-focused super-resolution images and outperforms existing methods in terms of recognition accuracy.
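
The abstract does not specify how the weighted cross-entropy loss is implemented; as an illustration only, the following is a minimal PyTorch sketch of a weighted cross-entropy over per-character predictions, where the per-class weights (here `char_weights`) are assumed inputs that up-weight characters prone to confusion at low resolution. Function and variable names are hypothetical and not taken from the paper.

    import torch
    import torch.nn.functional as F

    def weighted_ce_loss(logits, targets, char_weights):
        """Weighted cross-entropy over character predictions.

        logits:       (B, T, C) raw scores for each of T character slots
        targets:      (B, T)    ground-truth character indices
        char_weights: (C,)      per-class weights; larger values penalize
                                mistakes on easily-confused characters more
        """
        B, T, C = logits.shape
        return F.cross_entropy(
            logits.reshape(B * T, C),
            targets.reshape(B * T),
            weight=char_weights,
        )

    # Toy usage: 3-character alphabet, class 2 up-weighted as "confusable"
    char_weights = torch.tensor([1.0, 1.0, 2.0])
    logits = torch.randn(4, 8, 3)            # batch of 4, 8 character slots
    targets = torch.randint(0, 3, (4, 8))
    loss = weighted_ce_loss(logits, targets, char_weights)
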
