Tabular information recognition using convolutional neural networks

Abstract

The relevance of detecting tabular information and recognizing its contents when processing scanned documents is shown. The formation of a data set for training, validating and testing the YOLOv5s deep neural network (DNN) for detecting simple tables is described, and the effectiveness of this DNN when working with scanned documents is demonstrated. Using the Keras Functional API, a convolutional neural network (CNN) was built to recognize the main elements of tabular information: digits, basic punctuation marks and Cyrillic letters. The results of a study of this CNN's performance are presented. The implementation of table detection and content recognition for scanned documents in the developed information system (IS) for updating databases of the Unified State Register of Real Estate is described.
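
A minimal sketch of how a trained YOLOv5s table detector could be applied to a scanned page is given below, assuming the standard ultralytics/yolov5 torch.hub interface; the weights file tables_yolov5s.pt and the image scanned_page.png are hypothetical placeholders, not artifacts from the paper.

    import torch

    # Load custom-trained YOLOv5s weights (hypothetical file) through torch.hub.
    model = torch.hub.load("ultralytics/yolov5", "custom", path="tables_yolov5s.pt")

    # Detect tables on one scanned page and read the boxes back as a DataFrame.
    results = model("scanned_page.png")
    tables = results.pandas().xyxy[0]  # columns: xmin, ymin, xmax, ymax, confidence, class, name
    print(tables[["xmin", "ymin", "xmax", "ymax", "confidence"]])

The character recognizer is described only as a CNN assembled with the Keras Functional API for digits, basic punctuation marks and Cyrillic letters. The sketch below shows one possible Functional API architecture under assumptions of 32x32 grayscale character crops and 49 classes (10 digits, 6 punctuation marks, 33 Cyrillic letters); the layer sizes are illustrative and not the configuration reported in the article.

    from tensorflow import keras
    from tensorflow.keras import layers

    NUM_CLASSES = 49  # assumed: 10 digits + 6 punctuation marks + 33 Cyrillic letters

    # Functional API: build the layer graph explicitly, then wrap it in a Model.
    inputs = keras.Input(shape=(32, 32, 1))                         # grayscale character crop
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

    model = keras.Model(inputs, outputs, name="table_char_cnn")
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()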

About the authors

Igor Victorovich Vinokurov

Financial University under the Government of the Russian Federation

Author for correspondence.
Email: igvvinokurov@fa.ru
ORCID iD: 0000-0001-8697-1032
Candidate of Technical Sciences (PhD), Associate Professor at the Financial University under the Government of the Russian Federation. Research interests: information systems, information technologies, data processing technologies.

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
