We have gone to great lengths to eliminate human error in the annotation of studies, including an intensive training period for all team members, re-annotation of a randomly selected 10% of the papers by a second domain expert, and re-annotation of 100% of the means and standard deviations (which we use as the primary inputs for calculating effect sizes) by a second domain expert.
The codebook reports the extent of inter-rater agreement for each variable. Overall, agreement between raters was substantial, which speaks to the quality of the training, annotation, and data. Still, there were instances of disagreement, and CoDa does contain human annotation errors.
You might still find errors in the data or the application, and we want to hear from you so we can fix them and improve CoDa.
If you do, you can notify the CoDa team of the specific error. In your report, we ask that you indicate:
- the paper’s title
- the study number
- the incorrect value
- the correct value
We will then go back to the paper, re-check the annotation, and correct the mistake.
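For example, a report about an incorrect standard deviation might look like the sketch below; the title, study number, and values are entirely hypothetical and only illustrate the level of detail that helps us locate the error quickly.

```
Paper title:     "Cooperation in repeated social dilemmas" (hypothetical title)
Study number:    2
Incorrect value: SD = 1.42 (control condition, cooperation measure)
Correct value:   SD = 2.41 (as reported in Table 3 of the paper)
```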
Your eyes on the data can help us eliminate errors and continue our efforts to improve this public good for science and society.