Data Ingestion Guidance
- 1 Contact information
- 2 Data Ingestion Workflow
- 2.1 First step to becoming a data provider
- 2.2 Register your data
- 2.3 Data requirements for data providers
- 2.4 Packaging for specimen data
- 2.5 Sending data to iDigBio
- 2.6 Specimen metadata
- 2.7 Complete attribution and licensing
- 2.8 Permission to ingest
- 2.9 Data recommendations for optimal searchability and applicability in the aggregate
- 2.10 Packaging for images / media objects
- 2.11 Best practice for getting Audubon Core images linked to specimen records - special cases
- 2.12 Sending updates to iDigBio
- 2.13 Instructions on Changing Identifiers
- 2.14 Notes on getting data from EMu into a Darwin Core Archive
- 2.15 Error handling
- 2.16 Sample scenarios of data transformations to prepare data for ingestion
- 2.17 Advertising your data on iDigBio on your website
- 2.18 Additional references
- 3 Process terminology for iDigBio mobilization and ingestion staff
If you need assistance related to data ingestion, contact email@example.com.
Data Ingestion Workflow
Working copy 1.1 (February 2015)
Audience: iDigBio data ingestion staff and data providers
This is the process description for
- iDigBio staff to follow to assure that data are successfully and efficiently moved from data provider to the portal, available for searching.
- Data providers to follow to assure that data are efficiently and accurately provided to the iDigBio staff.
First step to becoming a data provider
Sending your data to iDigBio is as simple as sending an email to firstname.lastname@example.org to say where to pick it up for ingestion. If you need help compiling it into the acceptable formats, then get in touch with us to express your interest, and we'll help with what you currently have.
Register your data
iDigBio accepts specimen data and related media from any institution. If you are ready to discuss providing data to iDigBio, contact email@example.com to register your interest and begin the process of preparing your data for ingestion. If you have a Darwin Core Archive (DwC-A), getting your data ingested by iDigBio could be as easy as telling us the RSS feed address on your network. Information about setting up an RSS feed can be found here: Setting up an RSS feed
Data requirements for data providers
Below is what we ask of your data to make it easily searchable in the cyberinfrastructure we provide.
There are 3 kinds of data files to submit for ingestion:
- specimen data with dataset metadata
- media data related to and attached by reference to specimen records, with metadata (use of dwc:associatedMedia in the occurrence/specimen data file is not considered sending media)
- media files - e.g., non-archival .jpgs (see acceptable format here: https://www.idigbio.org/content/idigbio-image-file-format-requirements-and-recommendations-1)
Packaging for specimen data
In order of preference:
- DwC-A (Darwin Core Archive) produced by IPT on an RSS feed. IPT is available at: https://code.google.com/p/gbif-providertoolkit/ Providers are encouraged to use the most current version of IPT (v. 2.1 or later), which supports the Audubon Core extension, especially if they want to include media with their specimen records.
- Custom DwC-A on an RSS feed produced by Symbiota
- Custom CSV or TXT (save the data in UTF-8 format to preserve diacritics in people and place names). This option is for sending only specimen data or only media data; DwC-A packaging is required when sending both specimen and media data.
- Custom RSS feed following the guidance at: iDigBio RSS specification
- Standard DwC-A uses field names from:
- A custom CSV allows providers to send data beyond standards such as Dublin Core and Darwin Core. For example, providers can send tribe taxonomic information in the field "idigbio:tribe". When creating additional fields, use field names that follow the DwC format (camelCase), and consult the MISC field names (local iDigBio extensions to DwC); the host association terms are an example of an extension found in the MISC. Use XML-style field names that include the domain of the schema, e.g., dwc:termName, ac:termName. Non-standard field names are indexed and available through the search API.
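The RSS option above can be sketched as follows. This is a minimal RSS 2.0 example with hypothetical titles and URLs, not an authoritative template; consult the iDigBio RSS specification for the exact element list expected.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Museum collections</title>
    <link>http://ipt.example.org/</link>
    <description>Darwin Core Archives published by Example Museum</description>
    <item>
      <title>Example Museum herbarium</title>
      <!-- the item link points directly at the DwC-A zip, not at a web page -->
      <link>http://ipt.example.org/archive.do?r=herbarium</link>
      <guid isPermaLink="false">example-museum-herbarium</guid>
      <pubDate>Sun, 01 Feb 2015 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>
```

An IPT instance generates a feed of this shape automatically; a hand-rolled feed only needs one item per dataset, updated whenever a new archive is exported.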
Special note to data aggregators
Note to aggregated data providers (e.g., California Consortium of Herbaria (CCH), Calbug, Tri-Trophic TCN (TTD), Consortium of Pacific Northwest Herbaria (CPNW)):
When providing us access to your data, we highly encourage you to provide your aggregated data one provider at a time, each in their own Darwin Core archive, with their own list of contacts in a separate EML file. iDigBio is moving towards providing data quality feedback, data correction, annotations, and other value-added information back to the providers and thus we want individual contact information for each source provider where possible. The hope is that the information could be re-integrated at the source so that higher quality data would be in place for the provider as well as be available to downstream data consumers such as iDigBio and GBIF.
However, if that is not possible or desirable, we still welcome your aggregated data as one monolith.
In the interest of people/researchers using your data in the aggregate, e.g., EOL, we encourage you to homogenize the rights information you provide. We recommend CC0 (http://creativecommons.org/publicdomain/zero/1.0/). Further info about Creative Commons licenses is below, under the 'providing media' section.
Sending data to iDigBio
- An RSS feed to a DwC-A for ready access and update is our preference
- Email the files to us
- Each specimen record should have a unique (within the dataset) identifier in the dwc:occurrenceID field. When the ingestion software detects duplicate identifiers, the duplicated records are flagged as an error and are not ingested; this is the number one reason for records to be rejected. Identifiers, if not GUIDs (or specifically UUIDs), typically take the form of the DwC (Darwin Core) triplet:
<dwc:institutionCode>:<dwc:collectionCode>:<dwc:catalogNumber>
example with a prefix (lowercase is preferred in the prefix):
Further examples include:
- a simple / bare UUID:
- a UUID using URI syntax (lowercase is preferred in the prefix; this is the preferred format for GUIDs in iDigBio)
- an Archival Resource Key (ARK):
- any of the various GUID types defined in TDWG Standard 150 (http://www.tdwg.org/standards/150/).
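As a sketch of the identifier styles above, here is how a provider might construct occurrenceID values and check a dataset for the duplicate-identifier error described earlier. The institution and collection codes are hypothetical examples, not required values.

```python
import uuid

def triplet_id(institution_code, collection_code, catalog_number):
    """Build a Darwin Core triplet identifier: inst:coll:catalog."""
    return f"{institution_code}:{collection_code}:{catalog_number}"

def uuid_uri():
    """A UUID in URI syntax, lowercase (iDigBio's preferred GUID format)."""
    return f"urn:uuid:{uuid.uuid4()}"

def find_duplicates(occurrence_ids):
    """Return the set of occurrenceID values that appear more than once.
    Records with duplicated identifiers are flagged and not ingested."""
    seen, dupes = set(), set()
    for oid in occurrence_ids:
        if oid in seen:
            dupes.add(oid)
        seen.add(oid)
    return dupes

print(triplet_id("flmnh", "herps", "12345"))          # flmnh:herps:12345
print(find_duplicates(["a:b:1", "a:b:2", "a:b:1"]))   # {'a:b:1'}
```

Running a check like `find_duplicates` over the occurrenceID column before export catches the most common rejection cause before the data ever reach iDigBio.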
Complete attribution and licensing
In order for each provider's data to be correctly attributed when found on the iDigBio portal, the following are important to complete:
- Fill in your official institution code (dwc:institutionCode) and collection code (dwc:collectionCode)
- check your entry in grbio.org and make sure it is current and complete: http://grbio.org/
- make sure you have used the same institutionCode and collectionCode in GRBio and in your EML/IPT dialog
- Go to http://GRBio.org to get the Cool URI value for your institution for the alternateIdentifier field in the EML dialog (e.g., http://biocol.org/urn:lsid:biocol.org:col:15587). Store it in the dwc:institutionID and dwc:collectionID fields
- Fill in the global-to-the-dataset DwC record-level fields for intellectual property and licensing, e.g., dcterms:rights, dcterms:rightsHolder, and dcterms:accessRights, or use the global EML-based dwc:intellectualRights field.
- Use the field dcterms:bibliographicCitation, e.g., Ctenomys sociabilis (MVZ 165861) for the correct attribution string for each record.
- Check our intellectual property policy for complete details.
- dcterms:rights: any actual rights statements (IP or otherwise), and any licenses associated with the data sets (e.g., CC0 is recommended), chosen from the Creative Commons options. Any right or license will appear in the iDigBio portal with each record it covers.
- dcterms:rightsHolder: you should fill in this field if you filled in dcterms:rights. It records who precisely owns the data rights and will assure proper and correct attribution.
- dcterms:accessRights example: institution-name data records may be used by individual researchers or research groups, but they may not be repackaged, resold, or redistributed in any form without the express written consent of a curatorial staff member of the institution-name. If any of these records are used in an analysis or report, the provenance of the original data must be acknowledged and the institution-name notified. The institution-name and its staff are not responsible for damages, injury or loss due to the use of these data.
Several examples of the use of public domain, recommended for specimen data:
dcterms:rights = http://creativecommons.org/publicdomain/zero/1.0/
dcterms:rightsHolder = University of Florida, Florida Museum of Natural History
dcterms:rights = http://creativecommons.org/publicdomain/mark/1.0/
xmpRights:webStatement = http://creativecommons.org/publicdomain/mark/1.0/
dc:rights = Public Domain
xmpRights:owner = Public Domain
Some further guidance on this subject: when you are completing the metadata in the IPT, under Additional Metadata, it is important to consider the licensing and rights that you may wish to publish the data under. There are a couple of interesting articles describing the reasoning behind the Creative Commons licenses, http://creativecommons.org/licenses/, at the following URLs:
It may also be useful to read the Creative Commons Wiki on using Creative Commons licenses on data: http://wiki.creativecommons.org/Data (ref D. Bloom)
A last word on the subject of 'Attribution': in the Project Information -> Funding section of IPT, you should put information about the grants you received to fund digitization. The IPT dialog will guide you through the pertinent information.
Permission to ingest
- the provider needs to have permission to submit their data
Data recommendations for optimal searchability and applicability in the aggregate
- institutionCode and ownerInstitutionCode: we recommend that if you use ownerInstitutionCode in your data, you also fill in institutionCode. The former is typically used to indicate that the specimen is at location 'x' while the record is being provided by institution 'y'. While we do not require the use of institutionCode, it is likely to be the most agreed-upon searchable information, given the disparities in spelling out a precise institution name.
- eventDate: put dates in ISO 8601 format, i.e., YYYY-MM-DD, e.g., 2014-06-22. The critical element in this date is a four-character year. For details see: http://rs.tdwg.org/dwc/terms/#eventDate.
- Meters: put elevation in meters in the elevation fields without the units (e.g., the fields dwc:minimumElevationInMeters and dwc:maximumElevationInMeters already assume the numeric values are in meters, so there is no need to include the units with the data).
- Escapes: do not use unescaped newline characters in text fields.
- Data uncertainty: use the remarks fields to express doubt or missing values in data. Something like '?' is not a helpful value, and cannot be searched for.
- No '0': do not export '0' in fields to represent no value, e.g., lat or lon. This caution applies to '?', 'NA', '00/00/0000' and any other placeholder value.
- decimalLatitude & decimalLongitude: make sure lat and lon coordinates are in decimal degrees, and not in degrees-minutes-seconds with N, S, E, W. For details see: http://rs.tdwg.org/dwc/terms/#decimalLatitude.
- genus, specificEpithet, infraspecificEpithet & taxonRank: parse taxon ranks. Note: if the identification is something like Aeus sp., the taxonRank=genus.
- scientificName: combine taxon ranks into the identification value.
- vernacularName: include common names for broader audience findability. For details see: http://rs.tdwg.org/dwc/terms/#vernacularName
- higherClassification: include parsed higher taxonomy classification, at least kingdom and family, and the intervening ranks if possible. For details see: http://rs.tdwg.org/dwc/terms/#higherClassification.
- countryCode: include a 3 character countryCode from here: http://en.wikipedia.org/wiki/ISO_3166-1_alpha-3. For details see: http://rs.tdwg.org/dwc/terms/#countryCode. Using a code for country aids in situations where the correct spelling and timeframe of collection location is not known, e.g., Thailand, Siam.
- continent: For details see: http://rs.tdwg.org/dwc/terms/#continent
- dynamicProperties: when including data in the dynamicProperties field, please use JSON format. For details see: http://rs.tdwg.org/dwc/terms/#dynamicProperties.
- recordNumber or fieldNumber: in our experience botanists use recordNumber and all others who have collection events use fieldNumber.
Other fields for completeness that can be configured as defaults in IPT for all records:
- basisOfRecord="PreservedSpecimen" or "FossilSpecimen". For details see: http://rs.tdwg.org/dwc/terms/#basisOfRecord
- type="PhysicalObject" For details see: http://rs.tdwg.org/dwc/terms/#type
- language= "en" For details see: http://rs.tdwg.org/dwc/terms/#language
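Several of the recommendations above can be sketched in code. This is a minimal, hedged example; the date, coordinates, and dynamicProperties field names are hypothetical illustrations, not values from any real dataset.

```python
import datetime
import json

# eventDate: ISO 8601 with a four-digit year (example date).
event_date = datetime.date(2014, 6, 22).isoformat()
print(event_date)  # 2014-06-22

# decimalLatitude / decimalLongitude: signed decimal degrees, no N/S/E/W.
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert degrees-minutes-seconds plus hemisphere to decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if hemisphere in ("S", "W") else value

print(round(dms_to_decimal(29, 38, 44, "N"), 6))  # 29.645556
print(round(dms_to_decimal(82, 22, 13, "W"), 6))  # -82.370278

# dynamicProperties: structured extra data serialized as JSON.
dynamic_properties = json.dumps(
    {"relativeHumidityPercent": 28, "airTemperatureInC": 22}, sort_keys=True)
print(dynamic_properties)
```

Applying transformations like these during export, rather than by hand, keeps fields consistent across the whole dataset.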
Anyone considering contributing data should read these anecdotes. They come from users of iDigBio's aggregated data.
Packaging for images / media objects
- Firstly, adding a field in the occurrence file for associatedMedia is not the way to include media with a specimen record. Media that come to us via this method, or embedded in a web page, will not get the usual handling.
- Each media record should have a unique (within the dataset) identifier in the dcterms:identifier field.
- If submitting media records with specimen data records, here are the critical fields to fill in:
- id = (this is the coreid field in the Audubon Core extension file), it matches one identifier among the related specimen records
- identifier (dcterms:identifier) = id of the media record - needs to be unique within Audubon Core file, is the equivalent of the occurrenceID in the occurrence file
- accessURI (ac:accessURI) = direct http link to the media file. See other info about this not being a link to an enclosing web page containing the media file
- providerManagedID (ac:providerManagedID) = if you have a UUID GUID for your media records, then assign it to the optional ac:providerManagedID field.
- Note to aggregators: In the case where the data are coming from an aggregator, an additional recordId field is required (idigbio:recordId). This is the media identifier, distinct from the one given by the provider in the dcterms:identifier field. It is assumed that aggregators are building their own archives, as this is not a Darwin Core term, and is not supported in the IPT.
- Terms: Use Audubon Core terms, http://terms.tdwg.org/wiki/Audubon_Core_Term_List, with one record for each media record. The more you can flesh out the details of the image, the more retrievable it will be. The best practice is to use the taxonomic and geographic fields to capture as much information as possible when only media are given to iDigBio.
- License: Just as with catalog records, media records need to be provided freely and with permission, and each record should have a Creative Commons license. Content providers are required to adopt a Creative Commons license for information they serve through iDigBio. Except for public-domain or CC0 content, the default license is CC BY (Attribution), which allows users to copy, transmit, reuse, remix, and/or adapt data and media, as long as attribution regarding the source of these data or media is maintained. See http://creativecommons.org/licenses/by/4.0/ for a more detailed explanation of the CC BY license. Any combination of BY, NC, and SA in a CC media license you wish to apply is fine with us; however, ND is not acceptable. Using ND (no derivatives) will cause the media to be rejected.
- CC0: http://creativecommons.org/publicdomain/zero/1.0/
- CC BY: http://creativecommons.org/licenses/by/4.0/
- CC BY-SA: http://creativecommons.org/licenses/by-sa/4.0/
- CC BY-NC-SA: https://creativecommons.org/licenses/by-nc-sa/4.0/
- a sample of an Audubon Core file
- The media records represent a one-to-one relationship between the media object (the fit-for-display best quality JPG, in the case of images, for example) and the specimen record. There is no need to include links to any other forms of the media, like an enclosing webpage, or thumbnails. Below is some guidance on handling special cases. If none of these media attachment rules make sense to you, please get in touch with us for further assistance.
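As a sketch of the critical media fields listed above, an Audubon Core extension file for one specimen with two images might look like the following. The identifiers and URLs are hypothetical placeholders.

```
coreid,dcterms:identifier,ac:accessURI
flmnh:herps:12345,flmnh:herps:12345:img1,http://media.example.org/img1.jpg
flmnh:herps:12345,flmnh:herps:12345:img2,http://media.example.org/img2.jpg
```

The coreid column matches the occurrenceID of the related specimen record, while dcterms:identifier is unique within the Audubon Core file, and ac:accessURI links directly to each media file.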
If you are not using IPT, generate a meta.xml file by hand and package up the files in a DwC-A-like format. (No eml.xml required.)
| Relationship | Supported by | Core Type | Extensions |
| --- | --- | --- | --- |
| One-specimen-record-to-many-media-files | IPT 2.1/Custom DwC-A | Specimen (DwC) | Audubon Core |
| Many-specimen-records-to-one-media-file | IPT 2.2/Custom DwC-A | Audubon Core | Specimen (DwC) |
| Many-specimen-records-to-many-media-files | IPT 2.1/Custom DwC-A | Specimen (DwC) | Audubon Core + Relationship |
Keep in mind that:
- DwC-A is a set of files: a core type + a number of extensions
- All files/tables (core or extension) need to have a unique identifier
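For the hand-built meta.xml mentioned above, a minimal sketch might look like the following. The file names and column positions are illustrative assumptions; the rowType URIs are the standard Darwin Core and Audubon Core ones.

```xml
<archive xmlns="http://rs.tdwg.org/dwc/text/">
  <core encoding="UTF-8" fieldsTerminatedBy="," linesTerminatedBy="\n"
        ignoreHeaderLines="1"
        rowType="http://rs.tdwg.org/dwc/terms/Occurrence">
    <files><location>occurrence.csv</location></files>
    <id index="0"/>
    <field index="0" term="http://rs.tdwg.org/dwc/terms/occurrenceID"/>
    <field index="1" term="http://rs.tdwg.org/dwc/terms/catalogNumber"/>
    <field index="2" term="http://rs.tdwg.org/dwc/terms/scientificName"/>
  </core>
  <extension encoding="UTF-8" fieldsTerminatedBy="," linesTerminatedBy="\n"
             ignoreHeaderLines="1"
             rowType="http://rs.tdwg.org/ac/terms/Multimedia">
    <files><location>multimedia.csv</location></files>
    <coreid index="0"/>
    <field index="1" term="http://purl.org/dc/terms/identifier"/>
    <field index="2" term="http://rs.tdwg.org/ac/terms/accessURI"/>
  </extension>
</archive>
```

The core's id column and the extension's coreid column carry the shared identifier that links each media row to its specimen row.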
Sending updates to iDigBio
All updates for iDigBio should be sent to us using the method by which you originally published your data. For most data systems, this will mean generating a whole new export of your data periodically. iDigBio will examine the new data file, and convert it into an update-only dataset on our end. For publishers using RSS feeds, we automatically harvest these updates daily, and process them in about a week unless there are interruptions in our data ingestion workflow, such as system maintenance or your update getting stuck behind a very large ingestion run. If you remove any records from your data export, iDigBio will flag those records as deleted in our system, and remove them from our indexes, but they will still be available via our data API to those who know the identifiers of the records.
Instructions on Changing Identifiers
If you have already had your data ingested by iDigBio, and you decide to revamp your identifiers, thereby replacing them all, here is what you should add to your Darwin Core Archive:
- include the resource relationship extension in your archive and document the relationship using the OWL 'sameAs' relationship (http://www.w3.org/TR/owl-ref/#sameAs-def)
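A minimal sketch of such a relationship row, assuming the ResourceRelationship extension's dwc:relationshipOfResource and dwc:relatedResourceID terms and placeholder identifier values; confirm the exact convention with iDigBio staff.

```
coreid,dwc:relationshipOfResource,dwc:relatedResourceID
new-occurrence-id-1,sameAs,old-occurrence-id-1
```

Each row declares that the record now published under the new identifier is the same resource previously published under the old one.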
Notes on getting data from EMu into a Darwin Core Archive
The cookbook recipe is provided by Larry Gall, Yale Peabody Museum
It is straightforward to set up a feed between KE EMu and an IPT instance from which iDigBio can harvest. Perhaps the simplest approach is to use the scheduled operations facility in EMu to write a template that generates an output file (e.g., csv, txt) containing Darwin Core metadata to be ingested by the IPT. This output file can be produced automatically via operations at whatever frequency is desirable. Some mechanism can then be used to move the output file into a location where it is read by the IPT, either manually through the IPT UI or through a batch process. At Yale, we automate the entire workflow using cron such that 10 IPT resources get reinstantiated from EMu every day. The IPT uses MySQL as its metadata source and lives on a server separate from EMu. The output files from EMu are text files, which are copied via scp from the EMu server to the IPT server, and used as input for daily MySQL table refreshes (truncate table xxx ; load data local infile 'yyy' into table xxx ;). In turn, the IPT is set to publish its 10 resources automatically on a daily basis.
Error handling
When data are received from the provider during the mobilizing process step, they are evaluated for fitness. Once the evaluation is successful, the ingestion process moves from mobilizing to ingesting, and the data are submitted to the ingestion scripts by the cyberinfrastructure staff. If an error condition occurs, the staff evaluate whether it is a script error or a data error. If it is the latter, the staff send an email to the mobilizing staff, who may contact the provider for changes. When the errors have been addressed, the mobilizing staff re-submit the data to the ingesting staff.
Sample scenarios of data transformations to prepare data for ingestion
- Example preparing specimen data from Illinois Natural History survey (INHS) fish collection from FileMakerPro
- Example of transformations on InvertNet image metadata dataset
Advertising your data on iDigBio on your website
We encourage you to post a link on your institution's website informing users that they will also find your data on iDigBio's portal.
Please look here for logo material: https://www.idigbio.org/wiki/index.php/IDigBio_Logo
and consider making the link point to your publishers page, something like:
https://www.idigbio.org/portal/recordset/c50755ff-ca6d-4903-8e39-8b0e236c324f
where the UUID on the end of this link belongs to your recordset. The link to your recordset can be found here: iDigBio publishers
Additional references
If you want to learn about acceptable Creative Commons licenses in iDigBio:
Data ingestion report, progress so far
- How to use the image ingestion appliance and link to specimen records : image ingestion cheatsheet
- How to generate a UUID GUID in an Excel spreadsheet
- How to use IPT to include images in your Darwin Core Archive (using Audubon Core)
- Setting up an RSS feed
- If you are interested in the terms used by providers in their data so far, including non-DwC ones, this is a snapshot as of 05/20/2014:
Process terminology for iDigBio mobilization and ingestion staff
Processing steps: each step has a start and an end; the end of one step signifies that the process has moved to the next step.
- negotiating - the process of determining provider's interest in data ingestion
- begins with an email invitation to providers (institutions, aggregators) inviting them to send their data to the iDigBio specimen data portal
- open a Redmine ticket in project=Data Mobilizing
- ends with data exported by provider, ready for inspection and ingestion.
- mobilizing - the process of evaluating whether data are fit for ingestion
- begins with provider exported data and cursory inspection
- fill in the provider-info table (eml.xml), unless a good eml.xml file is already available (e.g., from a DwC Archive)
- ends with data passing inspection and passing to ingesting state, Redmine ticket changes to assignee=cyberinfrastructure team
- ingesting - the process of ingesting provider's data
- begins with Redmine ticket change to assignee=cyberinfrastructure team
- ends with
- data successfully ingested, ready for consumption in the portal
- report sent back to data mobilizing staff
- report sent to provider. Reference: Publishers Report
- Redmine ticket set to Status= Closed
- evaluating - the process of evaluating a failure to be ingested
- begins with ingestion failure
- evaluate ingestion failure, if data error - send it back to mobilizing state for corrections or
- evaluate ingestion failure, if ingestion error - make corrections
- ends with data re-submission to ingesting state