Managing Natural History Collections Data for Global Discoverability

From iDigBio
Revision as of 12:29, 30 June 2015 by Egbot (Talk | contribs)

Quick Links for Managing Natural History Collections Data for Global Discoverability
  • Managing Natural History Collections Data for Global Discoverability Agenda
  • Managing Natural History Collections Data for Global Discoverability Biblio Entries
  • Managing Natural History Collections Data for Global Discoverability Report

This wiki supports the Managing Natural History Collections (NHC) Data for Global Discoverability Workshop and is in development. The workshop is sponsored by iDigBio and hosted by the Arizona State University (ASU) School of Life Sciences Natural History Collections, Informatics & Outreach Group in their new Alameda space on September 15-17, 2015. It is the fourth in a series of biodiversity informatics workshops held in fiscal year 2014-2015. The first three were 1) Data Carpentry, 2) Data Sharing, Data Standards, and Demystifying the IPT, and 3) Field to Database (March 9-12, 2015).

General Information

Description and Overview of Workshop

Are you:

  • actively digitizing NHC data and looking to do it more efficiently?
  • getting ready to start digitizing NHC data and looking to learn some new skills to enhance your workflow?
  • digitizing someone else’s specimens (e.g., as part of a research project)?
  • finding yourself in the role of the museum database manager (even though it may not be your title or original job)?
  • someone with a private research collection who wishes to donate specimens and data to a public collection?

The "Collections Data for Global Discoverability" workshop is ideally suited to natural history collections specialists aiming to increase the "research readiness" of their biodiversity data at a global scale. Have you found yourself needing to manage larger quantities of collection records, or encountering challenges in carrying out updates or quality checks? Do you mainly use spreadsheets (such as Excel) to clean and manage specimen-level datasets before uploading them into your collections database? The workshop is most appropriate for those who are relatively new to collections data management and are motivated to provide the global research community with accessible biodiversity data that complies with standards and best practices.

During the workshop, essential information science and biodiversity data concepts will be introduced (e.g., data tables, data sharing, data quality and cleaning, Darwin Core, APIs). Hands-on data cleaning exercises will be performed using spreadsheet programs and free, readily available software. The workshop is platform independent: rather than focusing on the specifics of any one locally preferred biodiversity database platform, it addresses fundamental themes and solutions that apply to a variety of database applications.
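As a small preview of the Darwin Core concept introduced above, the sketch below renames the columns of a local spreadsheet export to standard Darwin Core terms. The local column names and the mapping are invented for illustration; a real collection's headers will differ.

```python
import csv
import io

# Hypothetical local spreadsheet headers mapped to standard Darwin Core terms.
# Only the Darwin Core terms on the right are real; the left side is invented.
LOCAL_TO_DWC = {
    "cat num": "catalogNumber",
    "species name": "scientificName",
    "collector": "recordedBy",
    "date collected": "eventDate",
    "locality notes": "locality",
}

def map_to_darwin_core(csv_text):
    """Read a local CSV export and rename its columns to Darwin Core terms.

    Columns without a known mapping are passed through unchanged."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {LOCAL_TO_DWC.get(col, col): value for col, value in row.items()}
        for row in reader
    ]

sample = (
    "cat num,species name,collector,date collected,locality notes\n"
    "ASU0001,Carnegiea gigantea,J. Smith,1998-04-12,Tempe Butte\n"
)
for record in map_to_darwin_core(sample):
    print(record["scientificName"], "|", record["eventDate"])
```

Renaming columns this way is the first step toward producing a Darwin Core Archive (DwC-A) that aggregators can ingest.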

To Do For You: Pre-reading materials [Darwin Core Data Standard, Best Practices for Data Management,...]

Updates will be posted to this website as they become available.

Planning Team

Collaboratively brought to you by: Katja Seltmann (AMNH - TTD-TCN), Amber Budden (DataONE), Edward Gilbert (ASU - Symbiota), Nico Franz (ASU), Mark Schildhauer (NCEAS), Greg Riccardi (FSU - iDigBio), Reed Beaman (NSF), Cathy Bester (iDigBio), Shari Ellis (iDigBio), Kevin Love (iDigBio), Deborah Paul (FSU - iDigBio)


Instructors (iDigBio): Katja Seltmann, Amber Budden, Edward Gilbert, Nico Franz, Mark Schildhauer, Greg Riccardi, Deborah Paul

Skill Level: This workshop is focused on beginners.

Where and When: Tempe, AZ at the Arizona State University (ASU) School of Life Sciences Natural History Collections, Informatics & Outreach Group in their new Alameda space, September 15-17, 2015

Requirements: Participants must bring a laptop.

Contact (iDigBio Participants): Please email Deb Paul for questions and information not covered here.


Tuition for the course is free, but there is an application process and spots are limited. [Apply here]

Software Installation Details

A laptop and a web browser are required for participants.
We use Adobe Connect extensively in this workshop. Please perform the systems test using the link below. You will also need to install the Adobe Connect Add-In to participate in the workshop.

  • Adobe Connect Systems Test
    • Note: when you follow the link and perform the test, some software installs without any visible indication that anything has happened. To confirm the installation succeeded, simply re-run the test.


  • Managing NHC Data Adobe Connect Room (to be linked - stay tuned)
  • Monday evening, September 14th: pre-workshop informal get-together at [to be decided], from [time to be decided].

Schedule - subject to change.

Course Overview - Day 1 - Tuesday September 15th
8:15-8:45 Check-in, name tags, log in, connect to wireless and Adobe Connect All
8:45-9:00 Welcome, Introductions, Logistics, Intro to the Workshop Deb Paul, iDigBio
9:00-9:15 Why this workshop? Amber Budden (Katja Seltmann) and Deb Paul
9:15-9:35 General Concepts and Best Practices
brief introduction to data modeling, the data lifecycle, and relational databases
(to be decided), Ed Gilbert and Amber Budden
9:35-9:55 Overview of Data standards
Darwin Core, EML, Audubon Core, GGBN, DwC-A, Identifiers (GUIDs vs local)
Ed Gilbert, Deb Paul
10:00-10:30 Hands-on Exercise with Specimen Data Set
with known mapping / standardization issues.
10:30-10:50 Break all
10:50-11:30 Data Management Planning
choosing a database, data flow, data backup, field-to-database, metadata
Amber Budden (and others TBD)
11:30-12:00 Exercise DataONE Lesson 4: best practices for data entry and data manipulation Amber Budden, et al
12:00-1:00 Lunch
1:00-1:30 Images and media issues: a brief intro

choosing a camera, issues across different database platforms, image submissions, linking images to occurrence records, batch processing

Ed Gilbert? (to be decided)
1:30-1:50 Digitization workflows and process
getting started, prioritization, specimen collecting, new database, integrating old data
Deb Paul, Ed Gilbert (TBD), Katja Seltmann
1:50-2:10 Common Workflows
image to data, specimen to data, skeletal records, crowd-sourcing, OCR/NLP, georeferencing, metadata
Ed and Deb
2:10-2:25 Optimization: Reviewing your own workflow
common bottlenecks, documentation
(to be decided)
2:25-3:00 Hands-on exercise (to be decided) tbd
3:00-3:20 Break
3:20-3:50 Georeferencing Data (Georeferencing Workflow)
visualization tools, when to georeference, best practices
tbd (Ed, et al)
3:50-4:10 GEOLocate Exercise (May be DEMO)
CoGe, GPS Visualizer, re-integration, qc
tbd (Ed, Deb, Nico, et al)
4:40-5:30 Conversation, overview of day, preview for tomorrow... All
(Optional Evening Activity?)
Course Overview - Day 2 - Wednesday September 16th
8:30-8:55 Welcome Back and Intro to Data Quality
inside the data-life-cycle, cost of data quality, quality vs completeness
Amber Budden, et al
9:00-9:15 Data Cleaning
where, when and how does it happen?, what kind of feedback to expect
9:15 - 9:35 Data Cleaning - Quick exercise: Spot the snafus
9:35-10:00 Data Cleaning - the details
types of common errors and omissions, best practices strategies, feedback and annotation, error tracking, automation, policies and protocols
(10:00-10:25) 25 extra minutes here on purpose - for discussion / break outs / unconference topics or demos
10:25-10:45 Break
10:45-11:20 Data Cleaning Exercise I
better spreadsheet skills
Deb, Ed, ...tbd
11:20-11:45 Data Cleaning Exercise II
OpenRefine, part I (facets, clustering)
12:00-1:00 Lunch
1:00-1:35 Data Cleaning Exercise II (continued)
OpenRefine, part II (using APIs, taxonomic name resolution services)
Deb, et al (tbd)
1:35-1:50 (move this time to earlier slots above to make more time in data cleaning sections) Deb, Ed, Katja ...(tbd)
1:50-2:15 Data Cleaning, Data Manipulation, and Visualization Tools (and Lessons) Review
Kurator, GPS Visualizer, GEOLocate, Google Fusion Tables, Notepad++, OpenRefine
2:15 - 2:45 Data Cleaning Exercise III (Your own data)
3:00-3:20 Break
3:20-4:40 Break out groups
TNRS, ECAT, QGIS, GEOLocate, CoGe, Data Cleaning: what is scripting? what is regex? examples in OpenRefine, possibly in Symbiota; your own data issues / requests
4:40-5:00 Conversation, overview of day, preview for tomorrow...
Course Overview - Day 3 - Thursday September 17th
8:30-12:45 Desert Botanical Garden (DBG) Field Trip and Lunch
1:00-1:25 Data Publishing: in the context of the data life cycle
benefits, concerns, aggregators, citation, attribution
1:30-2:15 iDigBio Portal Exercise: using the iDigBio portal to do something with data that can't be done within a local system (e.g., PhyloJive) tbd
2:15-2:45 Copyright / Intellectual Property tbd
3:00-3:20 Break
3:20-4:20 Second round of break-out groups
DwC-A Publishing Exercise (or DEMO): using an IPT instance OR a Symbiota DwC-A mapping and publishing exercise
4:20-4:40 Closing topics
a greater network, the global landscape, next steps
4:40-5:10 Participant 3-minute presentations (1 slide)
5:10-5:30 Review of the data life cycle we've walked through
discussion, survey, next steps, and conclusions
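The Day 2 OpenRefine exercises use clustering to find near-duplicate values (e.g., the same collector name entered several ways). The idea behind OpenRefine's default "fingerprint" keying method can be sketched in a few lines; this is a simplified version for illustration, not OpenRefine's exact implementation.

```python
import re
from collections import defaultdict

def fingerprint(value):
    """Simplified OpenRefine-style fingerprint key: lowercase, strip
    punctuation, split into tokens, de-duplicate, sort, and rejoin."""
    cleaned = re.sub(r"[^\w\s]", "", value.strip().lower())
    tokens = re.split(r"\s+", cleaned)
    return " ".join(sorted(set(t for t in tokens if t)))

def cluster(values):
    """Group values that share a fingerprint; return groups of size > 1,
    i.e., likely variants of the same underlying value."""
    groups = defaultdict(list)
    for v in values:
        groups[fingerprint(v)].append(v)
    return [group for group in groups.values() if len(group) > 1]

# Example: three spellings of one collector name cluster together.
names = ["Smith, J.", "J. Smith", "smith j", "Gilbert, E."]
print(cluster(names))  # [['Smith, J.', 'J. Smith', 'smith j']]
```

Once variants are clustered, a curator picks the preferred form and applies it to the whole group, which is exactly the workflow OpenRefine's clustering dialog automates.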


Adobe Connect Access

Adobe Connect will be used to provide access for all participants and to allow remote attendees to listen to the lectures.

Workshop Documents, Presentations, and Links

  • Google Collaborative Notes
  • Links to presentations (e.g., PowerPoint slides) will be posted here
  • Darwin Core Terms
  • Participant Presentations

Pre-Workshop Reading List

Links beneficial for review

Workshop Recordings

Day 1

  • 8:30am-10:15am
  • 10:45am-11:00am
  • 11:15am-12:00pm
  • 1:00pm-2:30pm
  • 3:00pm-5:00pm

Day 2

  • 8:30am-10:15am
  • 10:45am-11:00am
  • 11:15am-12:00pm
  • 1:00pm-2:30pm
  • 3:00pm-5:00pm

Day 3

  • 8:30am-10:15am
  • 10:45am-11:00am
  • 11:15am-12:00pm
  • 1:00pm-3:30pm
  • 3:30pm-5:00pm

Resources and Links

Digitization Training Workshops Wiki Home