Digitization Projects Using OCR / ML / NLP

From iDigBio

Use Cases for OCR / ML / NLP in Current Digitization Efforts


Royal Botanic Garden Edinburgh
(description of the scope of the digitization and where / how OCR / ML / NLP is being used goes here).
Royal Botanic Garden Kew
(description of the scope of the digitization and where / how OCR / ML / NLP is being used goes here).
BRIT
(description of the scope of the digitization and where / how OCR / ML / NLP is being used goes here).
Specify6 users
(description of the scope of the digitization and where / how OCR / ML / NLP is being used goes here).
LBCC TCN using Symbiota
(description of the scope of the digitization and where / how OCR / ML / NLP is being used goes here).
MaCC TCN using ABBYY
(description of the scope of the digitization and where / how OCR / ML / NLP is being used goes here).
Arizona State University
Barber, A.C., Lafferty, D., & Landrum, L.R. 2013. The SALIX Method: A semi-automated workflow for herbarium specimen digitization. Taxon 62(3).
Abstract. Supported by a United States American Recovery and Reinvestment Act grant, we have developed a workflow, “the SALIX Method,” to image, database, and provide web access to ca. 60,000 Latin American plant specimens housed at the Arizona State University Herbarium. The SALIX Method incorporates optical character recognition using ABBYY FineReader and uses other proprietary software for word processing (Microsoft Word) and image management (Adobe Lightroom). We developed the other applications ourselves: SALIX for text parsing, and BarcodeRenamer (BCR) for renaming image files to match their barcodes. We use our Symbiota data portal (SEINet) to provide web access to collections data and images. Data entry using the SALIX Method was found to range from about as fast as to considerably faster than keystroke entry directly into SEINet. Speed is dependent on label quality and length as well as user proficiency.
Video tutorials available here: ASU Herbarium YouTube Channel
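The barcode-renaming step of the SALIX workflow (handled by the BarcodeRenamer/BCR application) can be sketched in a few lines of Python. BCR itself is an in-house ASU tool whose internals are not described in the abstract, so this is only a minimal illustration of the general idea, assuming a hypothetical CSV mapping of image filenames to specimen barcodes:

```python
import csv
import os

def rename_to_barcodes(image_dir, mapping_csv):
    """Rename image files so each filename matches its specimen barcode.

    mapping_csv is assumed to have 'filename' and 'barcode' columns
    (a hypothetical layout; the actual BCR input format is not
    documented in the source). Returns the list of new filenames.
    """
    renamed = []
    with open(mapping_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            src = os.path.join(image_dir, row["filename"])
            # Keep the original extension, replace the stem with the barcode.
            ext = os.path.splitext(row["filename"])[1]
            dst = os.path.join(image_dir, row["barcode"] + ext)
            if os.path.exists(src):
                os.rename(src, dst)
                renamed.append(os.path.basename(dst))
    return renamed
```

In practice a tool like this would also need to guard against duplicate barcodes and missing files; the sketch above simply skips rows whose image is not present.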

Back to the aOCR Wiki