Using Transfer Learning to contextually Optimize Optical Character Recognition (OCR) output and perform new Feature Extraction on a digitized cultural and historical dataset
Paper in proceedings, 2021

Understanding handwritten and printed text is easy for humans, but computers do not reach the same level of accuracy. While there are many Optical Character Recognition (OCR) tools, such as PyTesseract [1] and Abbyy FineReader [2], that extract text as digital characters from images of handwritten or printed text, none of them is free of unrecognizable characters or misspelled words. Spelling correction is one of the well-known tasks in Natural Language Processing. Correcting an individual word can be done with existing tools; however, correcting a word based on the context of the sentence is a challenging task that requires a human-level understanding of the language. In this paper, we introduce a novel experiment that applies Natural Language Processing, using a machine learning concept called Transfer Learning [3], to the text extracted by OCR tools, thereby optimizing the output text by reducing misspelled words. The experiment is conducted on the OCR output of a sample of newspaper images published between the late 18th and 19th centuries. These images were obtained from the Maryland State Archives [4] digital archives project named the Legacy of Slavery [5]. This Natural Language Processing approach uses pre-trained transformer language models such as BERT [6] and RoBERTa [7] as word-prediction models for spelling correction based on the context of the surrounding words in the OCR output. We compare the performance of BERT and RoBERTa on the outputs of two OCR tools, PyTesseract and Abbyy FineReader. A comparative evaluation shows that both models work fairly well at correcting misspelled words, considering the irregularities in the text data from the OCR output. Additionally, a special process is applied to the Transfer Learning output text to create a new feature that did not exist in the original dataset, using spaCy's Entity Recognizer (ER) [8]. These newly extracted values are added to the dataset as a new feature. In addition, an existing feature's values are compared against spaCy's ER output and the original hand-transcribed data.
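The abstract describes two computational steps: context-based spelling correction of OCR output with a pre-trained masked language model, and entity recognition on the corrected text to derive new feature values. The following is a minimal sketch of that idea, assuming the Hugging Face transformers fill-mask pipeline and spaCy's off-the-shelf en_core_web_sm entity recognizer; the model identifiers, the edit-distance candidate filter, and the example sentence are illustrative assumptions rather than the authors' exact pipeline:

# Minimal sketch (not the authors' exact pipeline): mask a suspect OCR token,
# let a pre-trained masked language model propose context-aware replacements,
# keep the candidate closest in spelling, then run spaCy entity recognition
# on the corrected text to extract new feature values.
import difflib

import spacy
from transformers import pipeline

# BERT-based fill-mask pipeline; RoBERTa can be swapped in with
# model="roberta-base" (its mask token is "<mask>" rather than "[MASK]").
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
MASK = fill_mask.tokenizer.mask_token

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")


def correct_token(tokens, index, top_k=20):
    """Return the model's most plausible replacement for tokens[index]
    that also resembles the original (possibly garbled) OCR token."""
    original = tokens[index]
    masked = " ".join(MASK if i == index else t for i, t in enumerate(tokens))
    for candidate in fill_mask(masked, top_k=top_k):
        word = candidate["token_str"].strip()
        # Keep only predictions that are close in spelling to the OCR token.
        if difflib.SequenceMatcher(None, word.lower(), original.lower()).ratio() > 0.6:
            return word
    return original  # keep the OCR token if no plausible candidate is found


# Hypothetical OCR output with one garbled word ("Baltimcre").
tokens = "Ran away from the subscriber living near Baltimcre on Monday".split()
tokens[7] = correct_token(tokens, 7)
corrected = " ".join(tokens)

# Entity recognition on the corrected text yields values (places, dates,
# person names) that can be added to the dataset as new features.
for ent in nlp(corrected).ents:
    print(ent.text, ent.label_)

The edit-distance filter is one simple way to keep the language model's contextual suggestions anchored to the original OCR token; the paper's actual selection criteria may differ.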

RoBERTa

spelling correction

entity recognition

BERT

transfer learning

natural language processing

Authors

Aravind Inbasekaran

Student at Chalmers

Rajesh Kumar Gnanasekaran

University of Maryland

Richard Marciano

University of Maryland

Proceedings - 2021 IEEE International Conference on Big Data, Big Data 2021

2224-2230
9781665439022 (ISBN)

2021 IEEE International Conference on Big Data, Big Data 2021
Virtual, Online, USA

Subject categories

Other Computer and Information Science

Language Technology (Computational Linguistics)

Computer Vision and Robotics (Autonomous Systems)

DOI

10.1109/BigData52589.2021.9671586

More information

Last updated

2024-01-03