Difference between revisions of "2013 AOCR Hackathon Wiki"

From iDigBio
Revision as of 22:31, 10 January 2013

Welcome to the 2013 iDigBio AOCR Hackathon Wiki

  • Short URL to this hackathon wiki http://tinyurl.com/aocrhackathonwiki
  • Those participating in the first iDigBio AOCR Hackathon need an iDigBio account.
  • Note: This wiki is undergoing frequent updates. Some participants have wiki edit permissions and will add to, update, and edit these pages before, during, and after the hackathon.

Links to Logistics, Communication, and Participant Information

Overview of the Challenge

  • overall description of the problem
  • the specific challenge: parse OCR output to find values for the core data elements ("core" fields)
  • read about the metrics to be used
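The parsing challenge above can be sketched in a few lines. The example below is a hypothetical illustration only: the field names and regular expressions are assumptions for demonstration, not the hackathon's actual core data elements or rules.

```python
import re

# Hypothetical patterns for a few label fields; the real "core" data
# elements are defined on the hackathon's data-elements page.
PATTERNS = {
    "date": re.compile(r"\b(\d{1,2}\s+[A-Z][a-z]+\.?\s+\d{4})\b"),
    "latitude": re.compile(r"\b(\d{1,2}[°o]\s?\d{1,2}[']?\s?[NS])\b"),
    "collector": re.compile(r"(?:Coll\.|Collector:?)\s*([A-Z][A-Za-z.\s]+?)(?=,|\s*\d|$)"),
}

def parse_label(ocr_text):
    """Extract candidate values for a few label fields from raw OCR text."""
    record = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(ocr_text)
        if match:
            record[field] = match.group(1).strip()
    return record
```

For example, `parse_label("Coll. J. Smith, 12 May 1987, 34°15' N")` would pull out the collector, date, and latitude as separate fields. Real label OCR is far noisier than this, which is exactly why the challenge exists.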

link to explanations and examples of the 3 data sets

  set 1: LBCC label images
  set 2: NYBG and BRIT label images
  set 3: CalBug ENT label images

link to page summarizing the rules we followed to transcribe the gold set (and others)

Text Transcription Issues

link to a page listing known "specific" issues / challenges

  how to get OCR to ignore a map (reduce OCR confusion)
  ... and ___ present a challenge and confuse OCR and parsing.
  figure out an algorithm that would separate images into sets with
     no handwriting, little handwriting (mostly text typed or printed), lots of handwriting
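One rough way to approach the handwriting-separation idea above, assuming the OCR engine reports a per-word confidence score (as Tesseract does), is to use confidence as a proxy: typed or printed text tends to OCR with high confidence, handwriting with low. A minimal sketch; the thresholds are arbitrary assumptions, not tuned values:

```python
def classify_handwriting(word_confidences, low_conf=0.60,
                         little_cutoff=0.15, lots_cutoff=0.50):
    """Bucket a label image by the fraction of low-confidence OCR words.

    word_confidences: per-word confidence scores in [0, 1] reported by
    the OCR engine. All thresholds are illustrative guesses.
    """
    if not word_confidences:
        return "no_handwriting"  # nothing recognized, nothing to flag
    low_frac = sum(c < low_conf for c in word_confidences) / len(word_confidences)
    if low_frac < little_cutoff:
        return "no_handwriting"
    if low_frac < lots_cutoff:
        return "little_handwriting"
    return "lots_of_handwriting"
```

A real separator would want image features too (stroke slant, connected-component shape), since low OCR confidence can also come from faded type or stamps, but a confidence-based pass is a cheap first cut.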

link to user interface wish list

  link labelx to Apiary and Symbiota
  what else?
*Thank you NESCent, Hilmar Lapp, and the HIP working group for this model.