Managing Natural History Collections Data for Global Discoverability


Quick Links for Managing NHC Data for Global Discoverability wiki
Managing NHC Data Announcement
Managing NHC Data for Global Discoverability - Agenda
Managing NHC Data for Global Discoverability Biblio Entries
Managing NHC Data for Global Discoverability Report

This wiki supports the Managing Natural History Collections (NHC) Data for Global Discoverability Workshop and is in development. The workshop is sponsored by iDigBio and hosted by the Arizona State University (ASU) School of Life Sciences Natural History Collections, Informatics & Outreach Group in their new Alameda space on September 15-17, 2015. It is the fourth in a series of biodiversity informatics workshops held in fiscal year 2014-2015; the first three were 1) Data Carpentry, 2) Data Sharing, Data Standards, and Demystifying the IPT, and 3) Field to Database (March 9-12, 2015).

General Information


Description and Overview of the Workshop. Are you:

  • actively digitizing NHC data and looking to do it more efficiently?
  • getting ready to start digitizing NHC data and looking to learn some new skills to enhance your workflow?
  • digitizing someone else’s specimens (e.g., as part of a research project)?
  • finding yourself in the role of the museum database manager (even though it may not be your title or original job)?
  • someone with a private research collection who wishes to donate specimens and data to a public collection?

The theme of the "Collections Data for Global Discoverability" workshop is ideally suited for natural history collections specialists aiming to increase the "research readiness" of their biodiversity data at a global scale. Have you found yourself needing to manage larger quantities of collection records, or encountering challenges when carrying out updates or quality checks? Do you mainly use spreadsheets (such as Excel) to clean and manage specimen-level datasets before uploading them into your collections database? The workshop is most appropriate for those who are relatively new to collections data management and are motivated to provide the global research community with accessible biodiversity data that complies with standards and best practices.

During the workshop, essential information science and biodiversity data concepts will be introduced (e.g., data tables, data sharing, data quality and cleaning, Darwin Core, APIs). Hands-on data cleaning exercises will be carried out using spreadsheet programs and freely available software. The workshop is platform independent: rather than focusing on the specifics of any one locally preferred biodiversity database platform, it addresses fundamental themes and solutions that apply to a wide variety of database applications.
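As a taste of the API concept mentioned above, here is a minimal sketch, assuming Python 3 and only its standard library, of a query against iDigBio's public record search API. The genus value and the printed fields are illustrative only; see the iDigBio API documentation for the full query syntax.

import json
import urllib.parse
import urllib.request

# iDigBio's public record search endpoint (API v2).
BASE = "https://search.idigbio.org/v2/search/records/"

# Ask for up to 5 occurrence records where the genus is "quercus".
rq = {"genus": "quercus"}
url = BASE + "?" + urllib.parse.urlencode({"rq": json.dumps(rq), "limit": 5})

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)

print("Matching records:", result["itemCount"])
for item in result["items"]:
    # "indexTerms" holds iDigBio's indexed (cleaned) view of each record.
    terms = item["indexTerms"]
    print(terms.get("scientificname"), "|", terms.get("country"))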


To Do For You: Pre-reading materials

Updates will be posted to this website as they become available.

Planning Team

Collaboratively brought to you by: Katja Seltmann (AMNH - TTD-TCN), Amber Budden (DataONE), Edward Gilbert (ASU - Symbiota), Nico Franz (ASU), Mark Schildhauer (NCEAS), Greg Riccardi (FSU - iDigBio), Reed Beaman (NSF), Cathy Bester (iDigBio), Shari Ellis (iDigBio), Kevin Love (iDigBio), Deborah Paul (FSU - iDigBio)

About

Instructors (iDigBio): Katja Seltmann, Amber Budden, Edward Gilbert, Nico Franz, Greg Riccardi, Deborah Paul, Joanna McCaffrey, Kevin Love, Anne Thessen, David Bloom

Skill Level: We are focusing our efforts in this workshop on beginners.

Where and When: Tempe, AZ at the Arizona State University (ASU) School of Life Sciences Natural History Collections, Informatics & Outreach Group in their new Alameda space, September 15 - 17, 2015

Requirements: Participants must bring a laptop.

Contact (iDigBio Participants): Please email Deb Paul dpaul@fsu.edu for questions and information not covered here.

Twitter:

Tuition for the course is free, but there is an application process and spots are limited (and the class is now full).

Software Installation Details

A laptop and a web browser are required for participants.
We use Adobe Connect extensively in this workshop. Please perform the systems test using the link below; you will also need to install the Adobe Connect Add-In to participate in the workshop.

  • Adobe Connect Systems Test
    • Note: when you follow the link and run the test, the add-in installs silently (it may not look like anything happened). To verify that it installed, simply re-run the test.

Agenda

Schedule - subject to change.

Course Overview - Day 1 - Tuesday September 15th
8:15-8:30 Check-in, name tags, log in, connect to wireless and Adobe Connect All
8:30-9:15 Welcome, Logistics, Intro to the Workshop, Why Share Data? Why this Workshop?
Why this Workshop?, part 2
quick exercise - what are your data challenges? what software do you use?
key point - why share data?
Deb Paul, Amber Budden
9:15-9:35 General Concepts and Best Practices
the data life-cycle, brief introduction to data modeling, and relational databases
Ed Gilbert and Amber Budden
9:35-9:55 Overview of Data Standards
Darwin Core, EML, Audubon Core, GGBN, DwC-A, Identifiers (GUIDs vs local)
Ed Gilbert, Deb Paul
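A side note on the "GUIDs vs local" identifier topic above: a local catalog number is only unique within one collection, while a GUID (for example a UUID) is effectively unique everywhere. A minimal sketch in Python, with a hypothetical catalog number:

import uuid

local_catalog_number = "HERB-004521"  # hypothetical local identifier

# Mint a random (version 4) UUID to serve as a globally unique
# occurrenceID for the same specimen record.
occurrence_id = str(uuid.uuid4())

print("Local ID:", local_catalog_number)
print("GUID:    ", occurrence_id)

Once minted and published, such an identifier should never be reissued or changed; that stability is what makes it useful for linking records across aggregators.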
10:00-10:30 Introduction to Mapping Data
hands-on exercise with occurrence specimen data set
data set with known mapping / standardization issues.
Darwin Core Terms
Sample Data
Known Issues in Sample Data
All
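For a flavor of what the mapping exercise involves, here is a minimal sketch, assuming Python with pandas and a hypothetical input file with hypothetical local column names; the target names are real Darwin Core terms:

import pandas as pd

# Hypothetical export from a local database or spreadsheet.
df = pd.read_csv("occurrences.csv")

# Map local column headers onto standard Darwin Core terms.
dwc_map = {
    "cat_no":    "catalogNumber",
    "sci_name":  "scientificName",
    "collector": "recordedBy",
    "coll_date": "eventDate",
    "lat":       "decimalLatitude",
    "lon":       "decimalLongitude",
}
df = df.rename(columns=dwc_map)

df.to_csv("occurrences_dwc.csv", index=False)

The hard part in practice is not the renaming but deciding which term fits which column; the Darwin Core Terms link above is the reference for those decisions.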
10:30-10:50 Break
10:50-11:30 Data Management Planning
choosing a collection management system, data flow, data backup, field-to-database, metadata
Amber Budden and Joanna McCaffrey
11:30-12:00 DataONE Lesson 4
best practices for data entry and data manipulation
Amber Budden
12:00-1:00 Lunch (Provided by Panera)
1:00-1:30 Images and media issues: a brief intro
choosing a camera, issues across different database platforms, image submissions, linking images to occurrence records, batch processing, DAMs (digital asset management systems)
Ed Gilbert and Joanna McCaffrey
1:30-2:00 Digitization workflows and process: Common Workflows and Optimization
getting started, prioritization, specimen collecting, new database, and integrating old data.
Image to data, specimen to data, to-the-web and skeletal records.
Reviewing your own workflow, common bottlenecks, policy, documentation
Katja Seltmann, Deb Paul & Ed Gilbert
2:00 - 3:00 Collections Tours and Symbiota Demo. (groups of 10)
Digitization in Action: Insects, Botany, Symbiota
All
3:00-3:20 Break
3:20-3:50 Georeferencing Data (Georeferencing Workflow)
visualization tools, when to georeference, best practices (the importance of standards): error and uncertainty, georeferenceRemarks
Ed Gilbert
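Two of the cheapest quality checks in any georeferencing workflow are range checks and hemisphere (sign) checks. A minimal sketch, assuming Python; the country rule is a simplified illustration, not a complete test:

def flag_coordinates(lat, lon, country=None):
    """Return a list of quality flags for one occurrence record."""
    flags = []
    if not -90 <= lat <= 90:
        flags.append("decimalLatitude out of range")
    if not -180 <= lon <= 180:
        flags.append("decimalLongitude out of range")
    if lat == 0 and lon == 0:
        flags.append("coordinates are (0, 0) - likely a placeholder")
    # Simplified hemisphere check: contiguous U.S. longitudes are negative.
    if country == "United States" and lon > 0:
        flags.append("positive longitude for a U.S. record - sign may be flipped")
    return flags

print(flag_coordinates(33.42, 111.93, country="United States"))
# -> ['positive longitude for a U.S. record - sign may be flipped']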
3:50-4:10 GEOLocate Exercise (May be DEMO)
CoGe, GPS Visualizer, re-integration, qc
Folks can preregister for GEOLocate Collaborative Georeferencing using the link below. Doing so will automatically register them for the Phoenix community project that Ed created. If you already have a login, you can use the link to simply register your existing account to the Phoenix project.
http://www.museum.tulane.edu/coge/WebComEasySignUp.aspx?ajc=915E2056
Ed Gilbert
4:40-5:30 Conversation, overview of day, preview for tomorrow, backpack logistics for tomorrow, ... All
Course Overview - Day 2 - Wednesday September 16th
8:30-12:00 Desert Botanical Garden (DBG) Field Trip and Lunch
meet at 7:55 in Hotel Lobby, depart at 8:00 and 8:30 for DBG; garden from 9-11:30, lunch 11:30 - 12:30, aim to depart 12:00 and 12:30 to ASU. Bring a hat!
11:30-12:30 Lunch at Gertrude's (in the Garden) YUM!
2:00-2:35 Welcome Back and Intro to Data Quality
inside the data-life-cycle, cost of data quality, quality vs completeness
Amber Budden, Greg Riccardi, (Ed Gilbert)
2:35-2:45 Review Tools for Data Cleaning, Data Manipulation, and Visualization (and Lessons)
Spreadsheets, Kurator, GPS Visualizer, GEOLocate, CoGe, Google Maps, CartoDB, Google Fusion Tables, Notepad++, OpenRefine, BioVeL, Access, (others), iDigBio recordset data cleaning, iPlant TNRS, RegEx
Where do they fit in your workflow?
Deb Paul
2:45-2:50 Data Cleaning
where, when, and how does it happen? what kind of feedback to expect
types of common errors and omissions, best practices strategies, feedback and annotation, error tracking, automation, policies and protocols
Deb Paul & Katja Seltmann
2:50-3:40 Data Cleaning Exercise I
better spreadsheet skills (Data Carpentry)
http://idigbio.github.io/spreadsheet-skills/00-intro.html
Katja Seltmann & Deb Paul
3:40-4:00 Break
4:00 - 5:00 Data Cleaning Exercise I
better spreadsheet skills (Data Carpentry), continued...
Katja Seltmann & Deb Paul
5:00-5:15 Data Cleaning Exercise II
OpenRefine, part I (facets, clustering)
https://idigbio.github.io/open-refine/00-getting-started.html
https://wiki.biovel.eu/display/doc/Installing+and+running+DR+Workflow+on+Taverna+Workbench#InstallingandrunningDRWorkflowonTavernaWorkbench-InstallingGoogleRefine
http://multimedia.journalism.berkeley.edu/tutorials/google-refine-export-json/
Deb Paul
5:15-5:30 Conversation, overview of day for context and questions, homework and preview for tomorrow... Deb Paul & Katja Seltmann
Evening Activity (opt) Insect Collecting Opportunity
Sign Up and Details - Wednesday night insect collecting trip to Mesquite Wash
Pictures Please!
Host - Nico Franz
Course Overview - Day 3 - Thursday September 17th
8:30-9:00 Discussion of Material Covered so far, Overview of Day 3, Set up breakout groups Katja Seltmann
9:00-10:00 Potential break out groups
Taxonomic names issues - TNRS, ECAT
GEOLocate, CoGe, Georeferencing Workflows, Workshops
Data Cleaning: what is scripting? what is regex? examples in OpenRefine, possibly in Symbiota (see the sketch after this list)
your own data issues / requests
OpenRefine, part II (using APIs, Taxonomic Name Resolution Services)
More about choosing software / and the "build-your-own" scenario
More about identifiers
More on imaging issues (what camera to purchase, etc.)
On OCR, NLP, duplicate harvesting
DataONE Data Management Planning Tool
What is Data Carpentry?
Text Editors
rAPI
All
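For the "what is regex?" breakout topic above, here is a minimal sketch, assuming Python and a hypothetical verbatim collector string, of pulling structured Darwin Core fields out of free text:

import re

verbatim = "K. Seltmann 1234"  # hypothetical verbatim collector string

# One initial, a period, a surname, then a collector number.
match = re.match(r"^(?P<name>[A-Z]\. ?[A-Za-z]+)\s+(?P<number>\d+)$", verbatim)
if match:
    print("recordedBy:  ", match.group("name"))    # K. Seltmann
    print("recordNumber:", match.group("number"))  # 1234

Real collector strings are messier than this pattern allows; in practice you accumulate a small library of patterns and flag whatever none of them match.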
10:00-10:35 Break
10:35 - 10:55 Sharing Data: Preparing and Moving Data to the Internet
making data useful and understandable to the outside world; properties, values, and being systematic
Greg Riccardi
10:55-11:20 Data Publishing: in the context of the data life cycle
benefits, concerns, aggregators, citation, attribution
VertNet Norms For Data Use and Publication
Anne Thessen, http://datadetektiv.com/
11:20-11:40 Getting Your Data Published: Sending Data to iDigBio
from you to us, the details, the options
Joanna McCaffrey
12:00-1:00 Lunch (Provided by Panera)
1:00-1:45 Feedback from iDigBio as part of the Data Life Cycle and an iDigBio Portal Exercise
iDigBio Data Management and Recordset Data Quality
Upcoming webinar - Improving Data Quality: iDigBio Recordset data cleaning method, tools, and data flags, October 23rd, 2015
Using the iDigBio Portal and integrated research tools (PhyloJive, LifeMapper)
https://goo.gl/gyRwx7
http://idigbio.github.io/spreadsheet-skills/09-iDigBio-portal.html
Kevin Love, Katja Seltmann and Deb Paul
1:45-2:05 Copyright / Intellectual Property
VertNet Guide to Copyright and Licenses for Dataset Publication
VertNet Norms
iDigBio Terms of Use and Citation
David Bloom, Jonathan Rees, Greg Riccardi
3:00-3:20 Break
3:20-5:00 Second round of break-out groups
DwC-A publishing exercise (or DEMO): using an IPT instance
Sample Dataset
your email and "password"
http://iptworkshop.idigbio.org/ (your email prefix)
Symbiota DwC-A mapping and publishing exercise,
others
Edward Gilbert
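Structurally, the Darwin Core Archive (DwC-A) that the IPT publishes is just a zip file: a delimited data table plus a meta.xml that binds each column to a Darwin Core term URI, along with EML metadata. A minimal, hand-rolled sketch in Python for a hypothetical two-column occurrence table; real archives are generated by the IPT, and this omits the EML file:

import zipfile

occurrence_txt = (
    "occurrenceID\tscientificName\n"
    "urn:catalog:ASU:Herb:0001\tQuercus alba\n"
)

meta_xml = """<archive xmlns="http://rs.tdwg.org/dwc/text/">
  <core encoding="UTF-8" fieldsTerminatedBy="\\t" linesTerminatedBy="\\n"
        ignoreHeaderLines="1" rowType="http://rs.tdwg.org/dwc/terms/Occurrence">
    <files><location>occurrence.txt</location></files>
    <id index="0"/>
    <field index="0" term="http://rs.tdwg.org/dwc/terms/occurrenceID"/>
    <field index="1" term="http://rs.tdwg.org/dwc/terms/scientificName"/>
  </core>
</archive>
"""

with zipfile.ZipFile("dwca.zip", "w") as z:
    z.writestr("occurrence.txt", occurrence_txt)
    z.writestr("meta.xml", meta_xml)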
5:00 -5:30 Closing topics
What are your next steps for moving forward?
guided discussion, survey, and thanks!, ...
Katja Seltmann & all

Logistics

Adobe Connect Access

Adobe Connect will be used to give everyone access to the presentations and to let remote participants listen to the lectures.

Workshop Documents, Presentations, and Links

Pre-Workshop Reading List

Links beneficial for review

Workshop Recordings

Day 1

Day 2

Day 3

Resources and Links

Digitization Training Workshops Wiki Home