The profusion of online resources available for Sanskrit, Hindi, and other Devanagari-based
scripts indicates widespread use of the Devanagari script in the digital world.
Devanagari Optical Character Recognition (OCR) is a well-researched problem, with many
groups actively working on it. However, very little research has specifically addressed the
lack of uniform data sets for Devanagari script OCR. Most current research focuses on
font representation or creating lexical libraries, while a comprehensive, central image data
resource for Devanagari script OCR has yet to be developed.
Our group believes that successful Devanagari OCR research requires an accurate and comprehensive
benchmark against which to test results. Because a test bench for the various Devanagari OCR
algorithms was clearly lacking, testing of algorithms and OCR methodologies has so far been
non-comprehensive. For such cutting-edge research, establishing an accurate testing benchmark
is essential.
As part of the Handwritten Address Interpretation (HWAI) project, CEDAR developed a
data resource of handwritten English text, which includes high-resolution images of handwritten
words and ZIP codes. A CD-ROM database of Japanese character images has also been developed.
Given this prior research, CEDAR has a well-equipped truthing laboratory with excellent
hardware resources that ensure high computational power and stable data backup, as well as
well-designed truthing tools and student truthers who are constantly available.
The goals of this project involve a wide range of issues dealing with OCR and include:
- Developing Devanagari script recognition tools that include:
  - Techniques for word and line separation.
  - Techniques for character segmentation.
  - Lexicon-driven and lexicon-free techniques for word recognition.
  - Linguistic tools for post-processing.
- Creating data resources for Devanagari script recognition and distributing them
free of charge over the Internet, as follows:
  - Creating a standard training and testing set of machine-printed and handwritten
Devanagari characters to evaluate character recognition systems.
  - Creating and/or consolidating machine-readable dictionaries for Sanskrit and
Hindi words that can be used in various NLP applications.
  - Developing software for grammatical tagging of corpora used in various NLP
applications.
There are three main technical aspects of this project:
- Text Image Processing
- Linguistic Tools
- Development of Data Resources
Text Image Processing
- Preprocessing
- Skew Removal -
Methods used for skew correction of English mail documents will be modified for Devanagari
scripts.
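As a concrete starting point, the sketch below shows one standard projection-profile formulation of skew estimation. It assumes a binarized page image (text pixels = 1) held in a NumPy array; the function names are illustrative, not taken from the existing HWAI code.

```python
import numpy as np
from scipy.ndimage import rotate

def estimate_skew(page, angle_range=5.0, step=0.25):
    """Try candidate rotations and keep the angle whose horizontal
    projection profile has maximal variance, i.e. the sharpest
    alternation of text lines and gaps."""
    best_angle, best_score = 0.0, -1.0
    for angle in np.arange(-angle_range, angle_range + step, step):
        rotated = rotate(page, angle, reshape=False, order=0)
        score = rotated.sum(axis=1).var()     # variance of row-wise ink counts
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle

def deskew(page):
    return rotate(page, estimate_skew(page), reshape=False, order=0)
```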
- Line Separation -
Local minima between touching characters of successive lines are calculated, and lines are
identified. The header line will serve as the reference line, rather than the baseline as in
English OCR.
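A simplified sketch of this step, assuming a deskewed binary page: lines are split at empty rows, and the densest row within each band is taken to be the shirorekha and returned as the reference line. The local-minima analysis for touching lines would refine these cuts.

```python
import numpy as np

def separate_lines(page):
    """Split a deskewed binary page (text pixels = 1) into line bands at
    empty rows; within each band, the row with the most ink is taken to
    be the shirorekha (header line) and used as the reference line."""
    profile = page.sum(axis=1)            # ink count per row
    lines, start = [], None
    for y, count in enumerate(profile):
        if count and start is None:
            start = y                     # a line band begins
        elif not count and start is not None:
            band = page[start:y]
            header = start + int(np.argmax(band.sum(axis=1)))
            lines.append((start, y, header))
            start = None
    if start is not None:                 # band touching the page bottom
        band = page[start:]
        lines.append((start, page.shape[0],
                      start + int(np.argmax(band.sum(axis=1)))))
    return lines                          # [(top, bottom, header_row), ...]
```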
- Word Separation -
The shirorekha in Devanagari script can be used as a word separator: it runs unbroken within
a word and breaks at word boundaries.
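A minimal sketch of this idea, assuming each line image and its header-line row are already known from the previous step; the gap threshold is a tunable assumption, not a measured value.

```python
def separate_words(line_image, header_row, min_gap=4):
    """Project a thin band around the shirorekha onto the x-axis and
    break words where the header line has no ink for >= min_gap columns."""
    band = line_image[max(0, header_row - 1):header_row + 2]
    header = band.sum(axis=0)
    words, start, gap = [], None, 0
    for x, ink in enumerate(header):
        if ink:
            if start is None:
                start = x                 # a word's shirorekha begins
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:            # break in the header line: word ends
                words.append((start, x - gap + 1))
                start, gap = None, 0
    if start is not None:
        words.append((start, len(header)))
    return words                          # [(left, right), ...] column spans
```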
- Character Segmentation -
Character segmentation in Devanagari script is more difficult than segmenting Roman
characters because of Devanagari's compound and fused characters. Several papers listed in
the bibliography review the issue of touching and fused characters. We propose to improve
on these techniques.
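Before the finer cited techniques are applied, a baseline segmentation can erase the shirorekha band and cut at empty columns, as in the sketch below (same assumptions as above); conjunct and fused characters survive this pass as single segments and require the cited methods.

```python
def segment_characters(word_image, header_row):
    """Erase the shirorekha band, then cut the word at empty columns.
    Conjunct and fused characters remain single segments and need the
    finer splitting techniques reviewed in the bibliography."""
    work = word_image.copy()
    work[max(0, header_row - 1):header_row + 2, :] = 0   # drop the header line
    profile = work.sum(axis=0)                           # ink count per column
    segments, start = [], None
    for x, count in enumerate(profile):
        if count and start is None:
            start = x
        elif not count and start is not None:
            segments.append(word_image[:, start:x])      # keep original pixels
            start = None
    if start is not None:
        segments.append(word_image[:, start:])
    return segments
```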
- Character Recognition
Devanagari script has a large basic character set, comprising 16 vowels and 36 basic
consonants. Each consonant combines with vowel modifiers to create new character shapes,
and conjunct consonants can be formed by combining two or more such consonants.
Devanagari text can thus be partitioned into basic and compound characters. Feature
extraction techniques that divide the character into 9 zones and search for primitives are
discussed in some of the papers listed in the bibliography.
Gradient and concavity features of characters are also used as feature vectors.
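As an illustration of the zoning idea, the sketch below computes a 9-dimensional ink-density vector over a 3x3 grid; the grid layout is our assumption of one common zoning, and gradient and concavity features would be appended to the vector in the same fashion.

```python
import numpy as np

def zone_features(char_image, grid=3):
    """Divide a character bitmap into a grid x grid arrangement of zones
    (9 zones for grid=3) and use each zone's ink density as one feature."""
    h, w = char_image.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = char_image[i * h // grid:(i + 1) * h // grid,
                               j * w // grid:(j + 1) * w // grid]
            feats.append(block.mean())    # fraction of ink pixels in the zone
    return np.array(feats)                # 9-dimensional feature vector
```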
- Word Recognition
This step involves identifying and removing errors, as well as optimizing results through
syntactic and lexical features. In Devanagari script recognition, this involves addressing
two research issues:
- Developing tools for string matching and for calculating edit distances.
- Creating a method to organize the lexicon in a tree, enabling efficient lexicon reduction.
We will address these issues by developing composition rules that can recombine
characters to obtain a syntactically valid word. Association rules of modifiers will be used to
prune the lexicon set, and coarse features such as word length and the presence or absence of
certain modifiers and symbols will also serve as lexicon-pruning features, as sketched below.
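A minimal sketch of the lexicon organization and coarse pruning described above; the data structures and the length tolerance are illustrative assumptions.

```python
class TrieNode:
    def __init__(self):
        self.children = {}     # character -> TrieNode
        self.is_word = False

def build_lexicon_trie(lexicon):
    """Organize the lexicon as a character trie so that recognition can
    walk prefixes and discard whole subtrees of inconsistent words."""
    root = TrieNode()
    for word in lexicon:
        node = root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True
    return root

def coarse_prune(lexicon, observed_length, required_modifiers, tol=1):
    """Coarse pruning: keep words whose length is within tol of the
    segmented word's length and that contain every modifier the image
    analysis says must be present."""
    return [w for w in lexicon
            if abs(len(w) - observed_length) <= tol
            and all(m in w for m in required_modifiers)]
```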
- Post-Processing
Post-processing is vital to applications such as phrase and sentence recognition. Current OCR
systems for English use collocations and syntactic analysis (part-of-speech tagging) to obtain
word-level information. We will extend these techniques to Devanagari script recognition.
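As one illustration of collocation-based post-processing, the sketch below rescores OCR word alternatives with bigram counts gathered from a tokenized corpus; the function names and interfaces are our assumptions.

```python
from collections import defaultdict

def build_bigrams(corpus_sentences):
    """Count word bigrams over a tokenized corpus."""
    counts = defaultdict(int)
    for sent in corpus_sentences:
        for a, b in zip(sent, sent[1:]):
            counts[(a, b)] += 1
    return counts

def rescore(candidates, prev_word, bigrams):
    """Choose the OCR word alternative that best collocates with the
    previously recognized word."""
    return max(candidates, key=lambda w: bigrams.get((prev_word, w), 0))
```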
Linguistic Tools
OCR research depends heavily upon linguistic tools. Hence, CEDAR will work toward
developing invaluable linguistic resources as outlined below.
- Electronic Corpora
This will include both Sanskrit and Hindi resources, with the aim of creating a corpus similar
to the Wall Street Journal corpus.
- Machine-Readable Dictionaries
Machine-readable dictionaries such as COBUILD and LDOCE are widely used in English OCR.
However, similar tools for Hindi do not exist. CEDAR, through prior work in this field, will
work to fill this gap.
- Spell Check and Grammar Check
These techniques are widely used in OCR to eliminate errors, and post-processing with them
goes a long way toward improving recognition results. CEDAR will work toward developing
such resources for Hindi.
- Edit Distance Measures
Distances between words and characters, in the form of a confusion matrix, are used to
disambiguate OCR results. CEDAR will examine such techniques for use with Devanagari script
recognition, as sketched below.
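A sketch of a confusion-matrix-weighted edit distance, assuming the matrix is given as a mapping from character pairs to substitution costs; the cost values themselves would come from OCR error statistics.

```python
def weighted_edit_distance(ocr_word, lexicon_word, confusion, default=1.0):
    """Dynamic-programming edit distance in which substitution costs are
    read from an OCR confusion matrix: easily confused character pairs
    are cheap to substitute, unrelated pairs cost the default."""
    m, n = len(ocr_word), len(lexicon_word)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = float(i)
    for j in range(1, n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            a, b = ocr_word[i - 1], lexicon_word[j - 1]
            sub = 0.0 if a == b else confusion.get((a, b), default)
            d[i][j] = min(d[i - 1][j] + 1.0,      # deletion
                          d[i][j - 1] + 1.0,      # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[m][n]
```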
Development of Data Resources
The following stages have been outlined for Devanagari data collection:
- Creating a representative collection of text documents.
- Creating character and word databases.
- Creating ground truths for data sets.
- Creating machine-readable dictionaries and corpora.
- Defining a standard transliteration scheme to represent Devanagari text in English
(see the sketch after this list).
- Developing software for grammatical tagging of corpora.
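To make the transliteration item concrete, the fragment below sketches an ITRANS-like romanization. The character tables are deliberately tiny, illustrative assumptions; the actual scheme will be fixed by the project.

```python
# Tiny illustrative tables only; the real scheme must cover the full script.
CONSONANTS = {"\u0915": "k", "\u0916": "kh", "\u0917": "g"}   # क ख ग
VOWELS     = {"\u0905": "a", "\u0906": "aa"}                  # अ आ
MATRAS     = {"\u093E": "aa", "\u093F": "i"}                  # ा ि
VIRAMA     = "\u094D"                                         # ्

def transliterate(text):
    """Romanize Devanagari text, emitting the inherent vowel 'a' after a
    consonant unless a virama or a dependent vowel sign follows."""
    out = []
    for i, ch in enumerate(text):
        if ch in CONSONANTS:
            out.append(CONSONANTS[ch])
            nxt = text[i + 1] if i + 1 < len(text) else ""
            if nxt != VIRAMA and nxt not in MATRAS:
                out.append("a")           # inherent vowel
        elif ch in VOWELS:
            out.append(VOWELS[ch])
        elif ch in MATRAS:
            out.append(MATRAS[ch])
        elif ch != VIRAMA:
            out.append(ch)                # pass through unknown characters
    return "".join(out)

# e.g. transliterate("\u0915\u093E") -> "kaa" (का)
```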