@Karvovskaya and I wanted some feedback on the exercises for the Findable episode before we start fleshing it out. Suggestions for improvement, including how to do them totally differently, are welcome!
I've organized the exercises according to the (sub-)principles for some structure.
F1: (Meta)data are assigned globally unique and persistent identifiers / DOIs
Challenge 1:
Compare these two papers from arXiv, a preprint repository for physics, math, computer science, and related disciplines that allows researchers to share and access their work before it is formally published:

https://arxiv.org/abs/2008.09350
https://arxiv.org/abs/2008.00287

Which one of them has a persistent identifier?
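If we want a quick way to check the answer (or to build a solution block), a minimal sketch along these lines could work - the `arxiv_doi` helper is our own illustration, not part of any lesson material, and it simply asks the arXiv Atom API whether a DOI is attached to each record:

```python
import urllib.request
import xml.etree.ElementTree as ET

def arxiv_doi(arxiv_id):
    """Return the DOI registered for an arXiv record, or None."""
    url = f"http://export.arxiv.org/api/query?id_list={arxiv_id}"
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())
    # DOIs appear in the arXiv Atom extension namespace.
    node = feed.find(".//{http://arxiv.org/schemas/atom}doi")
    return node.text if node is not None else None

for paper in ("2008.09350", "2008.00287"):
    print(paper, "->", arxiv_doi(paper) or "no DOI found")
```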
Challenge 2:
Look at this paper [link to be included]. Click on the ‘pdf’ link to download it. Do a full-text search (Ctrl+F or Cmd+F) for ‘http’. Did the author use DOIs for their data and software?
Challenge 3:
What is the problem with referring to your code and software only with a URL [example to be included] without providing a DOI?
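For the solution text, we could make the problem concrete: plain URLs rot when a project moves, while a DOI keeps resolving because doi.org redirects to the current location. A small sketch - both links below are placeholders, not real resources:

```python
import urllib.request

def is_reachable(url, timeout=10):
    """HEAD-request a link and report whether it still resolves."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

# A personal or project page can move or disappear; the URL silently breaks.
print(is_reachable("https://example.org/old-lab-page/code.zip"))
# A DOI adds a level of indirection: doi.org forwards to wherever the
# record lives now (placeholder DOI, so this one also returns False).
print(is_reachable("https://doi.org/10.1234/placeholder-doi"))
```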
F2: Data are described with rich metadata
F3: Metadata clearly and explicitly include the identifier of the data they describe
We could provide a dataset or piece of software and have learners extract and fill out the metadata fields for it. If possible, it would be nice to accept only ‘correctly’ typed answers - no typos, etc. - because those little errors break the links between content.
Example exercise for inspiration: https://sites.uwm.edu/dltre/metadata/exercises/
The depth of this exercise can range from something simple like the three images in the previous link to sample exercises that follow specific metadata schemas/standards such as DDI, DataCite, or discipline-specific standards.
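For the DataCite variant, the ‘answer key’ could be as small as a record with the mandatory properties. A sketch, loosely following the DataCite kernel - the field values here are placeholders of our own, not a registered record:

```python
# DataCite-style record with (roughly) the mandatory properties;
# the DOI is a placeholder, not a registered identifier.
record = {
    "identifier": {"identifier": "10.1234/example-dataset",
                   "identifierType": "DOI"},
    "creators": [{"name": "Doe, Jane"}],
    "titles": [{"title": "Example survey dataset"}],
    "publisher": "Example University",
    "publicationYear": "2021",
    "resourceType": {"resourceTypeGeneral": "Dataset"},
}

# F3 in one line: the metadata record itself carries the identifier
# of the data it describes.
assert record["identifier"]["identifierType"] == "DOI"
```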
Also, this is what is currently on the lesson website:
- Automatic ORCID profile update when a DOI is minted
- RelatedIdentifiers linking papers, data, and software in Zenodo (see the fragment sketched below)
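On the Zenodo side that linking is just a metadata field; a hypothetical fragment with placeholder DOIs, using relation terms from the DataCite vocabulary that Zenodo exposes:

```python
# Hypothetical fragment of Zenodo deposit metadata (placeholder DOIs).
related = {
    "related_identifiers": [
        {"identifier": "10.1234/paper-doi", "relation": "isSupplementTo"},
        {"identifier": "10.1234/software-doi", "relation": "isSupplementedBy"},
    ]
}
```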
F4: (Meta)data are registered or indexed in a searchable resource
Perhaps we could use Zenodo’s Sandbox for learners to ‘upload’ the data + metadata?
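If we go that route, the deposit flow can even be scripted for instructors. A minimal sketch of Zenodo's documented REST flow against the Sandbox, assuming learners have generated a sandbox.zenodo.org access token - the token, file name, and metadata values are placeholders:

```python
import requests

TOKEN = "..."  # personal access token from sandbox.zenodo.org (placeholder)
BASE = "https://sandbox.zenodo.org/api"

# 1. Create an empty deposition.
r = requests.post(f"{BASE}/deposit/depositions",
                  params={"access_token": TOKEN}, json={})
r.raise_for_status()
dep = r.json()

# 2. Upload a file into the deposition's bucket.
with open("survey_data.csv", "rb") as fh:
    requests.put(f"{dep['links']['bucket']}/survey_data.csv",
                 params={"access_token": TOKEN}, data=fh).raise_for_status()

# 3. Attach minimal descriptive metadata.
meta = {"metadata": {
    "upload_type": "dataset",
    "title": "Example survey dataset",
    "creators": [{"name": "Doe, Jane"}],
    "description": "Sandbox test upload for the Findable episode.",
}}
requests.put(f"{BASE}/deposit/depositions/{dep['id']}",
             params={"access_token": TOKEN}, json=meta).raise_for_status()
```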
We could also provide some example datasets/software and have learners select the most appropriate (discipline-specific) repository, either from a list we give them or by searching for a repository themselves.
Dataset X (a very basic or incomplete metadata example) is available [link to be included], and Dataset Y (a rich metadata example) is available [link to be included]. Which dataset, X or Y, provides enough information in its metadata for you to be able to use/reuse the dataset?