2013 AOCR Hackathon Wiki
Revision as of 13:17, 11 January 2013
Welcome to the 2013 iDigBio AOCR Hackathon Wiki
- Short URL to this hackathon wiki: http://tinyurl.com/aocrhackathonwiki
- Those participating in the first iDigBio AOCR Hackathon need an iDigBio account.
- Note: This wiki page is undergoing frequent updates. Some participants have wiki edit permissions and will add to, update, and edit these pages before, during, and after the hackathon.
Links to Logistics, Communication, and Participant Information
- Participant List
- Call for Participation
- Application Form
- Travel, Food, Lodging, Connectivity Logistics
- 2013 Hackathon Listserv, a mailing list for Hackathon Participants at aocr-hackathon-l@lists.ufl.edu
Overview of the Challenge
- 2013 iDigBio AOCR Hackathon Challenge
- overall description of The Challenge
- The Specific Task: parse OCR output to find values for these core data elements
- Metrics and Evaluation to be used
- Three Data Sets
- There are three data sets, that is, three different sets of images of museum specimen labels. Participants, working alone or in groups, may work on one or more data sets as they choose. The sets have been ranked easy, medium, or hard as an estimate of how difficult it might be to extract good parsed data from each data set's OCR output.
- Accessing the Data
"core" fields
Frequently Asked Questions
Link to a page summarizing the rules we followed to transcribe the gold set (and others)
Choosing Images and Parsing Decisions
Image Selection and Processing Protocols
Text Transcription Issues
Known OCR, ML, and NLP Issues and Challenges
Human-in-the-loop: User Interface Wish List
*Thank you to NESCent, Hilmar Lapp, and the HIP working group for this model.