2013 AOCR Hackathon Wiki

From iDigBio
Revision as of 01:56, 11 January 2013 by Dpaul (Talk | contribs)


Welcome to the 2013 iDigBio AOCR Hackathon Wiki

  • Short URL to this hackathon wiki: http://tinyurl.com/aocrhackathonwiki
  • Those participating in the first iDigBio AOCR Hackathon need an iDigBio account.
  • Note: This wiki page undergoes frequent updates. Some participants have wiki edit permissions and will add to, update, and edit these pages before, during, and after the hackathon.

Links to Logistics, Communication, and Participant Information

Overview of the Challenge

  • "core" fields

Three Data Sets

There are three data sets, that is, three different sets of images of museum specimen labels. Participants, working alone or in groups, may work on one or more data sets as they choose. The sets have been ranked easy, medium, and hard as an estimate of how difficult it might be to obtain good parsed data from the OCR output for each data set.

Set 1 (easy) 
10,000 lichen and bryophyte packet labels from the Lichens, Bryophytes and Climate Change TCN. These are considered easy because the JPG images show the label only, and the data on the label is mostly typed or printed, with little or no handwriting present.
Set 2 (medium) 
5,000 Botanical Research Institute of Texas (BRIT) Herbarium and 5,000 New York Botanical Garden Herbarium specimen sheets. These are full sheets and, again, most have been pre-selected to focus on labels containing mostly printed or typed text and little handwriting. Note that there are exceptions, included to make the data set more realistic (and more difficult).
Set 3 (hard)
Several thousand images from the Essig Museum and the CalBug project. The gold set has not yet been created for these (in progress). Silver set creation needs to be discussed.
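For every set, the task is the same: run OCR on the label image, then parse the raw OCR text into structured fields. As a minimal sketch of the parsing step only, the following uses a few hypothetical regular-expression patterns on example label text; the field names, patterns, and sample label here are illustrative and are not the hackathon's actual "core" field list or gold-set data.

```python
import re

# Hypothetical patterns for a few common label fields; real specimen
# labels vary widely, and the hackathon's "core" fields will differ.
FIELD_PATTERNS = {
    "collector": re.compile(r"(?:Coll\.|Collector:?)\s*(.+)", re.IGNORECASE),
    "date": re.compile(r"\b(\d{1,2}\s+[A-Za-z]{3,9}\s+\d{4})\b"),
    "locality": re.compile(r"(?:Locality:?|Loc\.)\s*(.+)", re.IGNORECASE),
}

def parse_label(ocr_text):
    """Extract whichever fields match in the raw OCR text."""
    record = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = pattern.search(ocr_text)
        if match:
            record[field] = match.group(1).strip()
    return record

# Illustrative OCR output for a made-up herbarium label.
sample = """NEW YORK BOTANICAL GARDEN
Locality: Tarrant County, Texas
Coll. J. Smith
12 June 1987"""

print(parse_label(sample))
```

In practice the hard part is that OCR output from handwritten or damaged labels will not match clean patterns like these, which is exactly the challenge the medium and hard sets are meant to expose.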

Frequently Asked Questions

link to page summarizing the rules we followed to transcribe the gold set (and others)

Text Transcription Issues

Known OCR, ML, NLP Issues and challenges

Human-in-the-loop: User Interface Wish List

  • Thank you NESCent, Hilmar Lapp, and the HIP working group for this model.