A Transformer-Based Framework for Scene Text Recognition

dc.contributor.author: Selvam, Prabu
dc.contributor.author: Sundar Koilraj, Joseph Abraham
dc.contributor.author: Tavera Romero, Carlos Andres
dc.contributor.author: Alharbi, Meshal
dc.contributor.author: Mehbodniya, Abolfazl
dc.contributor.author: Webber, Julian L.
dc.contributor.author: Sengan, Sudhakar
dc.date.accessioned: 2025-07-03T20:09:06Z
dc.date.available: 2025-07-03T20:09:06Z
dc.date.issued: 2022
dc.description.abstract: Scene Text Recognition (STR) has become a popular and long-standing research problem in the computer vision community. Almost all existing approaches adopt the connectionist temporal classification (CTC) technique; however, these approaches are not very effective for irregular STR. In this article, we introduce a new encoder-decoder framework, built on the transformer architecture, that recognizes both regular and irregular natural scene text. The proposed framework comprises four main modules: Image Transformation, Visual Feature Extraction (VFE), Encoder, and Decoder. First, the image transformation module applies a Thin Plate Spline (TPS) transformation to normalize the original input image and reduce the burden on subsequent feature extraction. Second, the VFE module uses ResNet as the Convolutional Neural Network (CNN) backbone to extract feature maps from the rectified word image. Because the VFE module generates one-dimensional feature maps that are unsuitable for locating multi-oriented text on two-dimensional word images, we propose 2D Positional Encoding (2DPE) to preserve the sequential information. Third, the encoder module carries out feature aggregation and feature transformation simultaneously; we replace the scaled dot-product attention of the standard transformer with an Optimal Adaptive Threshold-based Self-Attention (OATSA) model that filters noisy information effectively and focuses on the most contributive text regions. Finally, the decoder module introduces a new architecture-level bi-directional decoding approach to generate a more accurate character sequence. We evaluate the effectiveness and robustness of the proposed framework on both horizontal and arbitrarily oriented text through extensive experiments on seven public benchmarks: IIIT5K-Words, SVT, ICDAR 2003, ICDAR 2013, ICDAR 2015, SVT-P, and CUTE80. We also demonstrate that the proposed framework outperforms most existing approaches by a substantial margin.
dc.identifier.citation: Selvam, P., Koilraj, J. A. S., Romero, C. A. T., Alharbi, M., Mehbodniya, A., Webber, J. L., & Sengan, S. (2022). A Transformer-Based Framework for Scene Text Recognition. IEEE Access, 10. https://doi.org/10.1109/ACCESS.2022.3207469
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://repositorio.usc.edu.co/handle/20.500.12421/7129
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.title: A Transformer-Based Framework for Scene Text Recognition
dc.type: Article
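
The abstract above names two components that lend themselves to a worked illustration: the 2D Positional Encoding (2DPE) added to the CNN feature maps, and the threshold-based self-attention (OATSA) that prunes weak attention weights. Neither is specified in detail in this record, so the PyTorch sketch below is only a rough illustration under our own assumptions: a 2D sinusoidal encoding that splits channels between row and column positions, and a scaled dot-product attention whose weights below an adaptive threshold (here, hypothetically, the per-query mean weight) are zeroed and re-normalized. All function names and the threshold choice are ours, not the authors' method.

    import math
    import torch
    import torch.nn.functional as F

    def sinusoidal_2d_positional_encoding(height, width, d_model):
        """Build a 2D sinusoidal positional encoding of shape (height*width, d_model).

        Assumption: half the channels encode the row index and half the column
        index, each with the sine/cosine scheme from "Attention Is All You Need".
        """
        assert d_model % 4 == 0, "d_model must be divisible by 4"
        d_half = d_model // 2
        # Frequencies for the sine/cosine pairs within each half.
        div_term = torch.exp(
            torch.arange(0, d_half, 2, dtype=torch.float32)
            * (-math.log(10000.0) / d_half)
        )
        pe = torch.zeros(height, width, d_model)
        ys = torch.arange(height, dtype=torch.float32).unsqueeze(1)  # (H, 1)
        xs = torch.arange(width, dtype=torch.float32).unsqueeze(1)   # (W, 1)
        # Row (vertical) positions fill the first half of the channels.
        pe[:, :, 0:d_half:2] = torch.sin(ys * div_term).unsqueeze(1).expand(height, width, -1)
        pe[:, :, 1:d_half:2] = torch.cos(ys * div_term).unsqueeze(1).expand(height, width, -1)
        # Column (horizontal) positions fill the second half.
        pe[:, :, d_half::2] = torch.sin(xs * div_term).unsqueeze(0).expand(height, width, -1)
        pe[:, :, d_half + 1::2] = torch.cos(xs * div_term).unsqueeze(0).expand(height, width, -1)
        return pe.reshape(height * width, d_model)

    def thresholded_attention(q, k, v, threshold=None):
        """Scaled dot-product attention with an adaptive sparsity threshold.

        Weights below the threshold (by default the mean weight per query, a
        stand-in for the paper's learned optimal threshold) are zeroed and the
        rest re-normalized, so each query attends only to its strongest regions.
        q, k, v: tensors of shape (batch, seq_len, d_k).
        """
        d_k = q.size(-1)
        scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d_k)
        weights = F.softmax(scores, dim=-1)                      # (B, Lq, Lk)
        if threshold is None:
            # Adaptive per-query threshold: the mean attention weight.
            threshold = weights.mean(dim=-1, keepdim=True)
        weights = torch.where(weights >= threshold, weights, torch.zeros_like(weights))
        weights = weights / weights.sum(dim=-1, keepdim=True).clamp(min=1e-8)
        return torch.matmul(weights, v)

    # Example: encode an 8x32 feature map with 256 channels, then self-attend.
    feat = torch.randn(2, 8 * 32, 256) + sinusoidal_2d_positional_encoding(8, 32, 256)
    out = thresholded_attention(feat, feat, feat)   # (2, 256, 256)

Zeroing sub-threshold weights and re-normalizing is one simple way to realize the abstract's goal of suppressing noisy background regions while concentrating attention mass on contributive text regions; the paper's actual OATSA may differ.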

Files

Original bundle
Name: A Transformer-Based Framework for Scene Text Recognition.pdf
Size: 2.03 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed to upon submission