Data Ingestion Guidance

From iDigBio


Contact information

If you need assistance related to data ingestion, contact data@idigbio.org.

Data Ingestion Workflow

Working copy 1.1 (February 2015)

Audience: iDigBio data ingestion staff and data providers

This is the process description for

  • iDigBio staff to follow to ensure that data are moved successfully and efficiently from the data provider to the portal, where they are available for searching.
  • Data providers to follow to ensure that data are provided efficiently and accurately to iDigBio staff.

First step to becoming a data provider

Sending your data to iDigBio is as simple as emailing data@idigbio.org to say where we can pick it up for ingestion. If you need help compiling your data into an acceptable format, get in touch with us to express your interest, and we'll help with what you currently have.

Register your data

iDigBio accepts specimen data and related media from any institution. If you are ready to discuss providing data to iDigBio, contact data@idigbio.org to register your interest and begin the process of preparing your data for ingestion. If you have a Darwin Core Archive (DwC-A), getting your data ingested by iDigBio could be as easy as telling us the RSS feed address on your network. Information about setting up an RSS feed can be found here: Setting up an RSS feed

Data requirements for data providers

Below is what we ask of the data so that it is easily searchable in the cyberinfrastructure we provide.

There are 3 kinds of data files to submit for ingestion:

  1. specimen data with dataset metadata
  2. media data related to and attached by reference to specimen records, with metadata (use of dwc:associatedMedia in the occurrence/specimen data file is not viewed as sending media)
  3. media files - e.g., non-archival .jpgs (see acceptable format here: https://www.idigbio.org/content/idigbio-image-file-format-requirements-and-recommendations-1)

Packaging for specimen data

In order of preference:

  1. DwC-A (Darwin Core Archive) produced by IPT on an RSS feed. IPT is available at: https://code.google.com/p/gbif-providertoolkit/. Providers are encouraged to use the most current version of IPT (v. 2.1 or later) that supports the Audubon Core extension, especially if they want to include media with their specimen records.
  2. Custom DwC-A on an RSS feed produced by Symbiota
  3. Custom CSV or TXT (save the data in UTF-8 format to preserve diacritics in people and place names). This option is for sending only specimen data or only media data; DwC-A packaging is required when sending both specimen and media data.
  4. Custom RSS feed following the guidance at: iDigBio RSS specification
  • A custom CSV allows providers to send data beyond standards such as Dublin Core and Darwin Core. For example, providers can send tribe taxonomic information in the field "idigbio:tribe". When creating additional fields, use field names that follow the DwC format (camelCase), and consult the MISC field names (local iDigBio extensions to DwC); the host association terms are an example of an extension found in the MISC. Use XML-style field names that include the domain of the schema, e.g., dwc:termName, ac:termName. Non-standard field names are indexed and available through the search API. A minimal CSV-writing example is sketched below.
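
For providers scripting their own export, here is a minimal sketch (Python) of writing a custom CSV in UTF-8 with namespaced field names plus one local extension field. The file name, columns, and record values are illustrative assumptions, not iDigBio requirements.

import csv

# Illustrative column set; reuse the identifier example from this page.
fieldnames = [
    "dwc:occurrenceID", "dwc:institutionCode", "dwc:collectionCode",
    "dwc:catalogNumber", "dwc:scientificName", "dwc:eventDate",
    "idigbio:tribe",  # example of a non-standard, locally extended field
]

records = [{
    "dwc:occurrenceID": "urn:catalog:TNHC:Herpetology:122",
    "dwc:institutionCode": "TNHC",
    "dwc:collectionCode": "Herpetology",
    "dwc:catalogNumber": "122",
    "dwc:scientificName": "Aeus sp.",
    "dwc:eventDate": "2014-06-22",
    "idigbio:tribe": "",
}]

# newline="" lets the csv module control line endings; encoding="utf-8"
# preserves diacritics in people and place names.
with open("occurrence.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames, quoting=csv.QUOTE_ALL)
    writer.writeheader()
    writer.writerows(records)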

Special note to data aggregators

Note to aggregated data providers (e.g., California Consortium of Herbaria (CCH), Calbug, Tri-Trophic TCN (TTD), Consortium of Pacific Northwest Herbaria (CPNW)):

When providing us access to your data, we highly encourage you to provide your aggregated data one provider at a time, each in their own Darwin Core archive, with their own list of contacts in a separate EML file. iDigBio is moving towards providing data quality feedback, data correction, annotations, and other value-added information back to the providers and thus we want individual contact information for each source provider where possible. The hope is that the information could be re-integrated at the source so that higher quality data would be in place for the provider as well as be available to downstream data consumers such as iDigBio and GBIF.

However, if that is not possible or desirable, we still welcome your aggregated data as one monolith.

In the interest of people/researchers using your data in the aggregate, e.g., EOL, we encourage you to homogenize the rights information you provide. We recommend CC0 (http://creativecommons.org/publicdomain/zero/1.0/). Further info about Creative Commons licenses is below, under the 'providing media' section.

Sending data to iDigBio

  • An RSS feed pointing to a DwC-A, for ready access and updates, is our preference (a generic feed sketch follows this list)
  • Email the files to us
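
The sketch below (Python) builds a generic RSS 2.0 feed with one item that links to a DwC-A zip file. All titles and URLs are placeholders; consult the iDigBio RSS specification linked above for the exact elements iDigBio expects.

import xml.etree.ElementTree as ET

# Generic RSS 2.0 skeleton; channel and item values are placeholders.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Museum specimen data"
ET.SubElement(channel, "link").text = "http://data.example.edu/ipt"

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Example Museum herpetology collection"
# Direct link to the Darwin Core Archive (hypothetical URL)
ET.SubElement(item, "link").text = "http://data.example.edu/dwca/herpetology.zip"
ET.SubElement(item, "pubDate").text = "Mon, 02 Feb 2015 00:00:00 GMT"

ET.ElementTree(rss).write("rss.xml", encoding="utf-8", xml_declaration=True)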

Specimen metadata

  • Each specimen record should have a unique (within the dataset) identifier in the dwc:occurrenceID field. When the ingestion software detects duplicate identifiers, the duplicated records are flagged as an error and are not ingested; this is the number one reason for records to be rejected (a duplicate-check sketch follows the examples below). Identifiers that are not GUIDs (specifically UUIDs) are typically what is called the DwC (Darwin Core) triplet:
<dwc:institutionCode>:<dwc:collectionCode>:<dwc:catalogNumber>

example with a prefix (lowercase is preferred in the prefix):

urn:catalog:TNHC:Herpetology:122

Further examples include:

  • a simple / bare UUID:
f47ac10b-58cc-4372-a567-0e02b2c3d479
  • a UUID using URI syntax (lowercase is preferred in the prefix; this is the preferred format for GUIDs in iDigBio):
urn:uuid:f47ac10b-58cc-4372-a567-0e02b2c3d479
  • an Archival Resource Key (ARK):
ark:/87286/f47ac10b-58cc-4372-a567-0e02b2c3d479
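
Because duplicate identifiers are the most common rejection cause, a quick provider-side check is worthwhile. The sketch below (Python) composes a DwC triplet and scans an export for duplicate occurrenceID values; the file and column names follow the CSV sketch above and are only assumptions.

import csv
from collections import Counter

def dwc_triplet(institution_code, collection_code, catalog_number):
    """Compose a DwC triplet identifier with the preferred lowercase prefix."""
    return "urn:catalog:{0}:{1}:{2}".format(
        institution_code, collection_code, catalog_number)

# Assumed export file and column name from the earlier sketch.
with open("occurrence.csv", newline="", encoding="utf-8") as f:
    ids = [row["dwc:occurrenceID"] for row in csv.DictReader(f)]

duplicates = [value for value, count in Counter(ids).items() if count > 1]
if duplicates:
    print("Duplicate occurrenceIDs (fix before submitting):", duplicates)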

Complete attribution and licensing

In order for each provider's data to be correctly attributed when found on the iDigBio portal, the following are important to complete:

  • Fill in your official institution code (dwc:institutionCode) and collection code (dwc:collectionCode)
  • Go to http://GRBio.org to get the Cool URI value for your institution; enter it in the alternateIdentifier field in the EML dialog (e.g., http://biocol.org/urn:lsid:biocol.org:col:15587) and store it in the dwc:institutionID and dwc:collectionID fields
  • Fill in the global-to-the-dataset DwC record-level fields for intellectual property and licensing, e.g., dcterms:rights, dcterms:rightsHolder, and dcterms:accessRights, or use the global EML-based dwc:intellectualRights field.
  • Use the field dcterms:bibliographicCitation for the correct attribution string for each record, e.g., Ctenomys sociabilis (MVZ 165861).
dcterms:rights
any actual rights statements (IP or otherwise) and any licenses associated with the data sets, chosen from the Creative Commons options (CC0 is recommended). Any rights statement or license will appear in the iDigBio portal with each record it covers.
dcterms:rightsHolder
should be filled in if you filled in dcterms:rights. It states precisely who owns the data rights and ensures proper and correct attribution.
dcterms:accessRights
is where the precise terms of use should be placed, such as '...you have to attribute us or provide us with a final copy of a given product'. It will be blank unless the provider has entered content at the source on their own.
dwc:intellectualRights example
institution-name data records may be used by individual researchers or research groups, but they may not be repackaged, resold, or redistributed in any form without the express written consent of a curatorial staff member of the institution-name. If any of these records are used in an analysis or report, the provenance of the original data must be acknowledged and the institution-name notified. The institution-name and its staff are not responsible for damages, injury or loss due to the use of these data.

Example:

dcterms:rights = http://creativecommons.org/publicdomain/zero/1.0/
dcterms:rightsHolder = University of Florida, Florida Museum of Natural History

Several examples of the use of public domain, recommended for specimen data:

dcterms:rights = http://creativecommons.org/publicdomain/mark/1.0/
xmpRights:webStatement = http://creativecommons.org/publicdomain/mark/1.0/
dc:rights = Public Domain
xmpRights:owner = Public Domain
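
If you script your export, the licensing values shown above can be applied as defaults to every record. A minimal sketch (Python) of that merge follows; the rightsHolder value is a placeholder, and setting the same defaults once in the IPT works just as well.

# Record-level licensing defaults; rightsHolder is a placeholder value.
LICENSING_DEFAULTS = {
    "dcterms:rights": "http://creativecommons.org/publicdomain/zero/1.0/",
    "dcterms:rightsHolder": "Example Museum of Natural History",
    "dcterms:accessRights": "",  # precise terms of use, if any
}

def with_licensing(record):
    """Return a copy of a record dict with any missing rights fields filled in."""
    merged = dict(LICENSING_DEFAULTS)
    merged.update(record)
    return merged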

Some further guidance on this subject: when you are completing the metadata in the IPT, under Additional Metadata, it is important to consider the licensing and rights that you may wish to publish the data under. There are a couple of interesting articles describing the reasoning behind the Creative Commons licenses, http://creativecommons.org/licenses/, at the following URLs:

It may also be useful to read the Creative Commons wiki on using Creative Commons licenses for data: http://wiki.creativecommons.org/Data (ref D. Bloom)

As a last word on the subject of 'Attribution': in the Project Information -> funding section of IPT, you should put information about the grants you received to fund digitization. The IPT dialog will guide you to the pertinent information.

Permission to ingest

  • The provider needs to have permission to submit their data.

Data recommendations for optimal searchability and applicability in the aggregate

  • institutionCode and ownerInstitutionCode: we recommend that if you use ownerInstitutionCode in your data you also fill in institutionCode. The former is typically used to indicate that the specimen is at location 'x' while the record is being provided by institution 'y'. While we do not require the use of institutionCode, it is likely to be the most agreed-upon searchable information given the disparities in precise institution names.
  • eventDate: put dates in ISO 8601 format, i.e., YYYY-MM-DD, e.g., 2014-06-22. The critical element in this date is a four-character year. For details see: http://rs.tdwg.org/dwc/terms/#eventDate.
  • Meters: put elevation in meters in the elevation fields without the units (the fields dwc:minimumElevationInMeters and dwc:maximumElevationInMeters already assume the numeric values are in meters, so there is no need to include the units with the data).
  • Escapes: do not use unescaped newline characters in text fields.
  • Data uncertainty: use the remarks fields to express doubt or missing values in data. Something like '?' is not a helpful value and cannot be searched for.
  • No '0': do not export '0' in fields to represent no value, e.g., lat or lon. This caution applies to '?', 'NA', '00/00/0000' and any other placeholder value.
  • decimalLatitude & decimalLongitude: make sure latitude and longitude coordinates are in decimal degrees, not degrees/minutes with N, S, E, W. For details see: http://rs.tdwg.org/dwc/terms/#decimalLatitude.
  • genus, specificEpithet, infraspecificEpithet & taxonRank: parse taxon ranks. Note: if the identification is something like Aeus sp., the taxonRank=genus.
  • scientificName: combine taxon ranks into the identification value.
  • vernacularName: include common names for broader audience findability. For details see: http://rs.tdwg.org/dwc/terms/#vernacularName
  • higherClassification: include parsed higher taxonomy classification, at least kingdom and family, and the intervening ranks if possible. For details see: http://rs.tdwg.org/dwc/terms/#higherClassification.
  • countryCode: include a 3-character countryCode from here: http://en.wikipedia.org/wiki/ISO_3166-1_alpha-3. For details see: http://rs.tdwg.org/dwc/terms/#countryCode. Using a code for country helps in situations where the correct spelling or historical name of the collection location is not known, e.g., Thailand vs. Siam.
  • continent: For details see: http://rs.tdwg.org/dwc/terms/#continent
  • dynamicProperties: when including data in the dynamicProperties field, please use JSON format (see the sketch after this list). For details see: http://rs.tdwg.org/dwc/terms/#dynamicProperties.
  • recordNumber or fieldNumber: in our experience botanists use recordNumber and all others who have collection events use fieldNumber.
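
The sketch below (Python) illustrates several of the recommendations above on one hypothetical record: ISO 8601 dates, unit-free elevation, decimal coordinates, JSON-formatted dynamicProperties, remarks instead of placeholder values, and stripping unescaped newlines. All values and field choices are examples only.

import json

# One hypothetical record; every value here is illustrative.
record = {
    # eventDate in ISO 8601 with a four-character year
    "dwc:eventDate": "2014-06-22",
    # elevation in meters, numeric value only, no units string
    "dwc:minimumElevationInMeters": "210",
    # coordinates as signed decimal degrees, not degrees/minutes with N/S/E/W
    "dwc:decimalLatitude": "29.6520",
    "dwc:decimalLongitude": "-82.3250",
    # dynamicProperties as JSON rather than free text
    "dwc:dynamicProperties": json.dumps({"hostPlant": "Quercus virginiana"}),
    # express doubt in a remarks field instead of '?', '0', or 'NA'
    "dwc:occurrenceRemarks": "collection date uncertain",
}

# Strip unescaped newline characters from text fields before export.
record = {key: value.replace("\n", " ").replace("\r", " ")
          for key, value in record.items()}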

Other fields for completeness that can be configured as defaults in IPT for all records:

Anyone considering contributing data should read these anecdotes. They come from users of iDigBio's aggregated data.

Packaging for images / media objects

  • First, adding an associatedMedia field in the occurrence file is not the way to include media with a specimen record. Media that comes to us via this method, or embedded in a web page, will not get the usual handling.
  • Each media record should have a unique (within the dataset) identifier in the dcterms:identifier field.
  • If submitting media records with specimen data records, here are the critical fields to fill in (a CSV version is sketched after this list):
    • sample
      • id = (this is the coreid field in the Audubon Core extension file), it matches one identifier among the related specimen records
        urn:catalog:institutionCode:collectionCode:catalogNumber
      • identifier (dcterms:identifier) = id of the media record - needs to be unique within Audubon Core file, is the equivalent of the occurrenceID in the occurrence file
        urn:catalog:institutionCode:collectionCode:Image:catalogNumber
      • accessURI (ac:accessURI) = direct http link to the media file. See other info about this not being a link to an enclosing web page containing the media file
        http://bgbasesrvr.univ.edu/DATABASEIMAGES/LONN00000001.JPG
      • providerManagedID (ac:providerManagedID) = if you have a UUID GUID for your media records, then assign it to the optional ac:providerManagedID field.
        urn:uuid:32e5da5d-c747-435c-a368-07d989259bf4   (optional)
  • Note to aggregators: In the case where the data are coming from an aggregator, an additional recordId field is required (idigbio:recordId). This is the media identifier, distinct from the one given by the provider in the dcterms:identifier field. It is assumed that aggregators are building their own archives, as this is not a Darwin Core term, and is not supported in the IPT.
  • Terms: Use Audubon Core terms, http://terms.tdwg.org/wiki/Audubon_Core_Term_List, with one record for each media record. The more you can flesh out the details of the image, the more retrievable it will be. The best practice is to use the taxonomic and geographic fields to capture as much information as possible when only media are given to iDigBio.
  • License: Just like catalog records, the media records need to be provided freely and with permission, and each record should have a Creative Commons license. Content providers are required to adopt a Creative Commons license for information they serve through iDigBio. Except for public-domain or CC0 content, the default license is CC BY (Attribution), which allows users to copy, transmit, reuse, remix, and/or adapt data and media, as long as attribution regarding the source of these data or media is maintained. See http://creativecommons.org/licenses/by/4.0/ for a more detailed explanation of the CC BY license. Any combination of BY, NC, and SA in the CC media license you wish to apply is fine with us; however, ND is not acceptable. Using ND (no derivatives) will cause the media to be rejected.
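
As a sketch of the critical fields listed above, the following Python snippet writes one row of an Audubon Core (multimedia) extension file that links a media record to its specimen record by coreid. The identifiers and URL reuse the example patterns shown above; the file name is an assumption.

import csv

media_fieldnames = ["coreid", "dcterms:identifier", "ac:accessURI",
                    "ac:providerManagedID"]

media_rows = [{
    # coreid matches the identifier of the related specimen record
    "coreid": "urn:catalog:institutionCode:collectionCode:catalogNumber",
    # unique (within the Audubon Core file) identifier for the media record
    "dcterms:identifier": "urn:catalog:institutionCode:collectionCode:Image:catalogNumber",
    # direct link to the media file itself, not to an enclosing web page
    "ac:accessURI": "http://bgbasesrvr.univ.edu/DATABASEIMAGES/LONN00000001.JPG",
    # optional UUID GUID for the media record, if you manage one
    "ac:providerManagedID": "urn:uuid:32e5da5d-c747-435c-a368-07d989259bf4",
}]

with open("multimedia.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=media_fieldnames, quoting=csv.QUOTE_ALL)
    writer.writeheader()
    writer.writerows(media_rows)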

Possible licenses:

If you are not using IPT, generate a meta.xml file by hand and package up the files in a DwC-A-like format. (No eml.xml required.)
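
A hand-built package might look like the following sketch (Python): a minimal meta.xml describing an occurrence core and an Audubon Core multimedia extension, zipped together with the two CSVs from the earlier sketches. The file names and field indexes are assumptions; indexes must match your actual column order, and each mapped column needs its own <field> entry.

import zipfile

# Minimal meta.xml; file names and field indexes are assumptions.
META_XML = r"""<?xml version="1.0" encoding="UTF-8"?>
<archive xmlns="http://rs.tdwg.org/dwc/text/">
  <core encoding="UTF-8" fieldsTerminatedBy="," linesTerminatedBy="\n"
        fieldsEnclosedBy="&quot;" ignoreHeaderLines="1"
        rowType="http://rs.tdwg.org/dwc/terms/Occurrence">
    <files><location>occurrence.csv</location></files>
    <id index="0"/>
    <field index="0" term="http://rs.tdwg.org/dwc/terms/occurrenceID"/>
    <field index="1" term="http://rs.tdwg.org/dwc/terms/institutionCode"/>
    <field index="2" term="http://rs.tdwg.org/dwc/terms/collectionCode"/>
    <field index="3" term="http://rs.tdwg.org/dwc/terms/catalogNumber"/>
    <field index="4" term="http://rs.tdwg.org/dwc/terms/scientificName"/>
    <field index="5" term="http://rs.tdwg.org/dwc/terms/eventDate"/>
  </core>
  <extension encoding="UTF-8" fieldsTerminatedBy="," linesTerminatedBy="\n"
             fieldsEnclosedBy="&quot;" ignoreHeaderLines="1"
             rowType="http://rs.tdwg.org/ac/terms/Multimedia">
    <files><location>multimedia.csv</location></files>
    <coreid index="0"/>
    <field index="1" term="http://purl.org/dc/terms/identifier"/>
    <field index="2" term="http://rs.tdwg.org/ac/terms/accessURI"/>
    <field index="3" term="http://rs.tdwg.org/ac/terms/providerManagedID"/>
  </extension>
</archive>
"""

with open("meta.xml", "w", encoding="utf-8") as f:
    f.write(META_XML)

with zipfile.ZipFile("dwca.zip", "w", zipfile.ZIP_DEFLATED) as archive:
    for name in ("meta.xml", "occurrence.csv", "multimedia.csv"):
        archive.write(name)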

Best practice for getting Audubon Core images linked to specimen records - special cases

Relationship                                  Supported by           Core Type        Extensions
One specimen record to many media files       IPT 2.1/Custom DwC-A   Specimen (DwC)   Audubon Core
Many specimen records to one media file       IPT 2.2/Custom DwC-A   Audubon Core     Specimen (DwC)
Many specimen records to many media files     IPT 2.1/Custom DwC-A   Specimen (DwC)   Audubon Core + Relationship

Keep in mind that:

  • DwC-A is a set of files: a core type + a number of extensions
  • All files/tables (core or extension) need to have a unique identifier

Sending updates to iDigBio

All updates for iDigBio should be sent to us using the method by which you originally published your data. For most data systems, this will mean generating a whole new export of your data periodically. iDigBio will examine the new data file and convert it into an update-only dataset on our end. For publishers using RSS feeds, we automatically harvest these updates daily and process them in about a week, unless there are interruptions in our data ingestion workflow, such as system maintenance or your update getting stuck behind a very large ingestion run. If you remove any records from your data export, iDigBio will flag those records as deleted in our system and remove them from our indexes, but they will still be available via our data API to those who know the identifiers of the records.
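
Before publishing a new full export, it can be useful to check which records will be treated as deletions. The sketch below (Python) compares the occurrenceID sets of the previous and current exports; the file and column names are assumptions.

import csv

def occurrence_ids(path):
    # Collect the set of occurrenceID values from an export (assumed column name).
    with open(path, newline="", encoding="utf-8") as f:
        return {row["dwc:occurrenceID"] for row in csv.DictReader(f)}

previous = occurrence_ids("occurrence_previous.csv")
current = occurrence_ids("occurrence.csv")

print("records that will be flagged as deleted:", sorted(previous - current))
print("records that are new in this export:", sorted(current - previous))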

Instructions on Changing Identifiers

If you have already had your data ingested by iDigBio, and you decide to reformat or replace your specimen identifiers (occurrenceIDs), and are not giving us a record identifier (recordID) with your records, you will need to add the following to your Darwin Core Archive:

Non-Darwin Core Archive publishers, or providers who wish to change record identifiers, will need to contact iDigBio to facilitate the change.

Notes on getting data from EMu into a Darwin Core Archive

The cookbook recipe is provided by Larry Gall, Yale Peabody Museum
It is straightforward to set up a feed between KE EMu and an IPT instance from which iDigBio can harvest. Perhaps the simplest approach is to use the scheduled operations facility in EMu to write a template that generates an output file (e.g., csv, txt) containing Darwin Core metadata to be ingested by the IPT. This output file can be produced automatically via operations at whatever frequency is desirable. Some mechanism can then be used to move the output file into a location where it is read by the IPT, either manually through the IPT UI or through a batch process. At Yale, we automate the entire workflow using cron such that 10 IPT resources get reinstantiated from EMu every day. The IPT uses MySQL as its metadata source and lives on a server separate from EMu. The output files from EMu are text files, which are copied via scp from the EMu server to the IPT server and used as input for daily MySQL table refreshes (truncate table xxx ; load data local infile 'yyy' into table xxx ;). In turn, the IPT is set to publish its 10 resources automatically on a daily basis.

Error handling

When data are received from the provider during the mobilizing process step, they are evaluated for fitness. Once the evaluation is successful, the ingestion process moves from mobilizing to ingesting, and the data are submitted to the ingestion scripts by the cyberinfrastructure staff. If an error condition occurs, the staff evaluate whether it is a script error or a data error. If it is the latter, the staff send an email to the mobilizing staff, who may contact the provider for changes. When the errors have been addressed, the mobilizing staff re-submit the data to the ingesting staff.

Sample scenarios of data transformations to prepare data for ingestion

Advertising your data on iDigBio on your website

We encourage you to post a link on your institution's website informing users that they will also find your data on iDigBio's portal.

Please look here for logo material: https://www.idigbio.org/wiki/index.php/IDigBio_Logo

and consider making the link point to your publisher page, something like:

https://www.idigbio.org/portal/recordset/c50755ff-ca6d-4903-8e39-8b0e236c324f

where the UUID on the end of this link belongs to your recordset. The link to your recordset can be found here: iDigBio publishers


Additional references

If you want to learn about acceptable Creative Commons licenses in iDigBio:

Data ingestion report, progress so far

Provider assistance

Process terminology for iDigBio mobilization and ingestion staff

Processing steps: each step has a start and an end, the end signifying that the process has moved to the next step.

[Figure: IngestionProcess.gif - diagram of the ingestion process steps]
  • negotiating - the process of determining a provider's interest in data ingestion
    • begins with an email invitation to providers (institutions, aggregators) inviting them to send their data to the iDigBio specimen data portal
    • open a Redmine ticket in project=Data Mobilizing
    • ends with data exported by the provider, ready for inspection and ingestion.
  • mobilizing - the process of evaluating whether data are fit for ingestion
    • begins with provider-exported data and a cursory inspection
    • fill in this table with provider info: eml.xml, unless there is a good eml.xml file already available (e.g., from a DwC Archive)
    • ends with data passing inspection and moving to the ingesting state; the Redmine ticket changes to assignee=cyberinfrastructure team
  • ingesting - the process of ingesting provider's data
    • begins with Redmine ticket change to assignee=cyberinfrastructure team
    • ends with
      • data successfully ingested, ready for consumption in the portal
      • report sent back to data mobilizing staff
      • report sent to provider. Reference: Publishers Report
      • Redmine ticket set to Status= Closed
  • evaluating - the process of evaluating an ingestion failure
    • begins with an ingestion failure
      • if it is a data error, send the data back to the mobilizing state for corrections, or
      • if it is an ingestion (script) error, make corrections
    • ends with re-submission of the data to the ingesting state