Managing Natural History Collections Data for Global Discoverability

Quick Links for Managing Natural History Collections Data for Global Discoverability
  • Managing Natural History Collections Data for Global Discoverability Agenda
  • Managing Natural History Collections Data for Global Discoverability Biblio Entries
  • Managing Natural History Collections Data for Global Discoverability Report

This wiki supports the Managing Natural History Collections (NHC) Data for Global Discoverability Workshop and is in development. The workshop is sponsored by iDigBio and hosted by the Arizona State University (ASU) School of Life Sciences Natural History Collections, Informatics & Outreach Group in its new Alameda space on September 15-17, 2015. It is the fourth in a series of biodiversity informatics workshops held in fiscal year 2014-2015. The first three were 1) Data Carpentry, 2) Data Sharing, Data Standards, and Demystifying the IPT, and 3) Field to Database (March 9-12, 2015).

General Information

Description and Overview of the Workshop. Are you:

  • actively digitizing NHC data and looking to do it more efficiently?
  • getting ready to start digitizing NHC data and looking to learn some new skills to enhance your workflow?
  • digitizing someone else’s specimens (e.g., as part of a research project)?
  • finding yourself in the role of the museum database manager (even though it may not be your title or original job)?
  • someone with a private research collection who wishes to donate specimens and data to a public collection?

The "Collections Data for Global Discoverability" workshop is ideally suited for natural history collections specialists aiming to increase the "research readiness" of their biodiversity data at a global scale. Have you found yourself needing to manage larger quantities of collection records, or encountering challenges when carrying out updates or quality checks? Do you mainly use spreadsheets (such as Excel) to clean and manage specimen-level datasets before uploading them into your collections database? The workshop is most appropriate for those who are relatively new to collections data management and are motivated to provide the global research community with accessible, standards- and best-practices-compliant biodiversity data.

During the workshop, essential information science and biodiversity data concepts will be introduced (e.g., data tables, data sharing, data quality and cleaning, Darwin Core, APIs). Hands-on data cleaning exercises will be carried out using spreadsheet programs and freely available, readily usable software. The workshop is platform independent: rather than focusing on the specifics of any one locally preferred biodiversity database platform, it addresses fundamental themes and solutions that apply to a variety of database applications.
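As a preview of the kind of hands-on cleaning the workshop covers, the short Python sketch below shows one way a spreadsheet export might be mapped to Darwin Core terms and flagged for simple quality problems. The file names, local column headings, and checks are hypothetical illustrations chosen for this sketch; they are not workshop materials, and your own columns and checks will differ.

  # Illustrative sketch: map local spreadsheet columns to Darwin Core terms
  # and flag simple quality problems. File and column names are hypothetical.
  import csv

  # Hypothetical local column -> Darwin Core term mapping
  DWC_MAP = {
      "Catalog No.": "catalogNumber",
      "Scientific Name": "scientificName",
      "Collector": "recordedBy",
      "Date Collected": "eventDate",
      "Lat": "decimalLatitude",
      "Long": "decimalLongitude",
  }

  def check_row(row):
      """Return a list of simple quality flags for one occurrence record."""
      flags = []
      if not (row.get("catalogNumber") or "").strip():
          flags.append("missing catalogNumber")
      try:
          lat = float(row.get("decimalLatitude") or "")
          lon = float(row.get("decimalLongitude") or "")
          if not (-90 <= lat <= 90 and -180 <= lon <= 180):
              flags.append("coordinates out of range")
      except ValueError:
          flags.append("missing or non-numeric coordinates")
      return flags

  with open("specimens_local.csv", newline="", encoding="utf-8") as src, \
       open("specimens_dwc.csv", "w", newline="", encoding="utf-8") as dst:
      reader = csv.DictReader(src)
      writer = csv.DictWriter(dst, fieldnames=list(DWC_MAP.values()) + ["qualityFlags"])
      writer.writeheader()
      for local_row in reader:
          # Rename columns according to the Darwin Core mapping
          dwc_row = {DWC_MAP[k]: v for k, v in local_row.items() if k in DWC_MAP}
          dwc_row["qualityFlags"] = "; ".join(check_row(dwc_row))
          writer.writerow(dwc_row)

The same rename-and-check pattern can be applied in a spreadsheet program or in other scripting tools; the point is that a documented column mapping and a small set of repeatable checks travel with the dataset rather than living in someone's head.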


To Do For You: Pre-reading materials [Darwin Core Data Standard, Best Practices for Data Management,...]

Updates will be posted to this website as they become available.

Planning Team

Collaboratively brought to you by: Katja Seltmann (AMNH - TTD-TCN), Amber Budden (DataONE), Edward Gilbert (ASU - Symbiota), Nico Franz (ASU), Mark Schildhauer (NCEAS), Greg Riccardi (FSU - iDigBio), Reed Beaman (NSF), Cathy Bester (iDigBio), Shari Ellis (iDigBio), Kevin Love (iDigBio), Deborah Paul (FSU - iDigBio)

About

Instructors (iDigBio): Katja Seltmann, Amber Budden, Edward Gilbert, Nico Franz, Mark Schildhauer, Greg Riccardi, Deborah Paul

Skill Level: This workshop is aimed at beginners.

Where and When: Tempe, AZ, at the Arizona State University (ASU) School of Life Sciences Natural History Collections, Informatics & Outreach Group in its new Alameda space, September 15-17, 2015

Requirements: Participants must bring a laptop.

Contact (iDigBio Participants): Please email Deb Paul (dpaul@fsu.edu) with questions or for information not covered here.

Twitter:

Tuition for the course is free, but there is an application process and spots are limited. [Apply here]

Software Installation Details

A laptop and a web browser are required for participants.
We use Adobe Connect extensively in this workshop. Please perform the systems test using the link below. You will also need to install the Adobe Connect Add-In to participate in the workshop.

  • Adobe Connect Systems Test
    • Note: when you follow the link and run the test, some software will install, although it may not look as though anything has happened. To confirm the installation, simply re-run the test.

Agenda

Schedule - subject to change.

Course Overview - Day 1 - Tuesday September 15th
8:15-8:45 Check-in, name tags, log in, connect to wireless and Adobe Connect All
8:45-9:00 Welcome, Introductions, Logistics, Intro to the Workshop Deb Paul, iDigBio
9:00-9:15 Why this workshop? Amber Budden (Katja Seltmann) and Deb Paul
9:15-9:35 General Concepts and Best Practices
brief introduction to data modeling, the data lifecycle, and relational databases
(to be decided), Ed Gilbert and Amber Budden
9:35-9:55 Overview of Data standards
Darwin Core, Audubon Core, GGBN, DwC-A, Identifiers (GUIDs vs local)
(to be decided)
10:00-10:30 Hands-on Exercise with Specimen Data Set
with known mapping / standardization issues.
All
10:30-10:50 Break
10:50-11:30 Data Management Planning
choosing a database, data flow, data backup, field-to-database, metadata
Amber Budden (and others TBD)
11:30-12:00 Exercise - DataONE Lesson 7: best practices for data entry and data manipulation Amber Budden et al.
12:00-1:00 Lunch
1:00-1:30 Images and media issues: a brief intro
choosing a camera, issues across different database platforms, image submissions, linking images to occurrence records, batch processing
Ed Gilbert? (to be decided)
1:30-1:50 Digitization workflows and process
getting started, prioritization, specimen collecting, new database, integrating old data
Deb Paul, Ed Gilbert (TBD)
1:50-2:10 Common Workflows
image to data, specimen to data, skeletal records, crowd-sourcing, OCR/NLP, georeferencing, metadata
Ed and Deb
2:10-2:25 Optimization: Reviewing your own workflow
common bottlenecks, documentation
(to be decided)
2:25-3:00 Hands-on exercise (to be decided) tbd
3:00-3:20 Break
3:20-3:50 Georeferencing Data (Georeferencing Workflow)
visualization tools, when to georeference, best practices
tbd
3:50-4:10 GEOLocate Exercise (May be DEMO)
CoGe, GPS Visualizer, re-integration, qc
tbd (Ed, Deb, Nico, et al)
4:40-5:30 Conversation, overview of day, preview for tomorrow... All
(Optional Evening Activity?)
Course Overview - Day 2 - Wednesday September 16th
8:30-9:00
9:00-10:15
10:15-10:45 Break
10:45-12:00
12:00-1:00 Lunch
1:00-2:30
2:30-3:00 Break
3:00-4:00
4:00-4:30
Course Overview - Day 3 - Thursday September 17th
8:30-9:00
9:00-10:15
10:15-10:45 Break
10:45-12:00
12:00-1:00 Lunch
1:00-2:30
2:30-3:00 Break
3:00-4:00
4:00-4:30

Logistics

Adobe Connect Access

Adobe Connect will be used to provide access for all participants and to allow remote attendees to listen to the lectures.

Workshop Documents, Presentations, and Links

  • Google Collaborative Notes
  • Links to presentations (e.g., PowerPoint slides) will be posted here
  • Darwin Core Terms
  • Participant Presentations

Pre-Workshop Reading List

Links that may be beneficial to review before the workshop

Workshop Recordings

Day 1

  • 8:30am-10:15am
  • 10:45am-11:00am
  • 11:15am-12:00pm
  • 1:00pm-2:30pm
  • 3:00pm-5:00pm

Day 2

  • 8:30am-10:15am
  • 10:45am-11:00am
  • 11:15am-12:00pm
  • 1:00pm-2:30pm
  • 3:00pm-5:00pm

Day 3

  • 8:30am-10:15am
  • 10:45am-11:00am
  • 11:15am-12:00pm
  • 1:00pm-3:30pm
  • 3:30pm-5:00pm

Resources and Links

Digitization Training Workshops Wiki Home