Hitachi, Ltd. Research & Development Group
University of Bologna
Isabela Maria Rinderu
Chris Aberson, Humboldt State University
Athena Aktipis, Arizona State University
Nancy Buchan, University of South Carolina
Carsten de Dreu, Leiden University
Andrew Delton, Stony Brook University
Susann Fiedler, Max Planck Institute for Research on Collective Goods
Simon Gächter, University of Nottingham
Nir Halevy, Stanford University
Paul van Lange, Vrije Universiteit Amsterdam
Caspar van Lissa, Utrecht University
Wolfgang Viechtbauer, Maastricht University
Future of CoDa
Vision for the future of CoDa
Entry of newly published studies
We aim to keep CoDa up to date with the published literature, and we have two strategies to achieve this. First, our team can train PhD students to annotate studies; this is a valuable learning exercise for students, and their work directly benefits the field. Contact the CoDa team if you would like to collaborate. Second, we will offer a tool that lets authors of newly published cooperation studies annotate their own work: authors provide a machine-readable version of their published paper and dataset, which can then be added directly to CoDa.
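To make the idea of a "machine-readable" study annotation concrete, here is a minimal sketch of what such a record might look like as JSON. The field names below are purely illustrative assumptions and do not reflect CoDa's actual annotation schema (see the Codebook for the real variables):

```python
import json

# Hypothetical annotation record for one published study.
# Field names are illustrative only, not CoDa's actual schema.
record = {
    "doi": "10.1234/example",          # identifier of the published paper
    "paradigm": "public goods game",   # economic game used in the study
    "sample_size": 120,                # number of participants
    "country": "NL",                   # where data were collected
    "effect_size_r": 0.21,            # reported effect, as a correlation
}

# A machine-readable record can be serialized and exchanged as JSON.
print(json.dumps(record, indent=2))
```

Because the record is structured rather than locked in a PDF, it can be validated, searched, and merged into the databank without manual re-extraction.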
Entry of file drawer studies
Researchers are less likely to publish null findings. We plan to offer a time- and cost-efficient way for researchers to annotate and share their null results with the field. Researchers complete a brief form about their study methods and results; this information is reviewed by an editorial board and then added to CoDa. These machine-readable reports are citable, so authors can be recognized for their contribution.
Include other paradigms and languages
We intend to expand the databank to include studies using other paradigms for studying cooperation (e.g., the trust game, dictator game, ultimatum bargaining game, stag hunt, and other coordination games). We also aim to annotate papers published in languages beyond English, Japanese, and Chinese.
During the development of CoDa, we did our best to reduce human error in our annotation of the literature, and we found substantial inter-rater agreement on most variables. Visit our Codebook to check the agreement scores. That said, CoDa does contain errors, and we can crowdsource identifying and correcting them. Please notify us when you spot an error in the annotation of a study.
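Inter-rater agreement between two annotators is commonly summarized with a chance-corrected statistic such as Cohen's kappa; whether CoDa reports this exact metric is documented in its Codebook. A minimal sketch with hypothetical annotation data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters coded identically.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement expected from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    pe = sum(freq_a[lab] * freq_b[lab] for lab in labels) / (n * n)
    return (po - pe) / (1 - pe)

# Two hypothetical annotators coding the same ten studies for whether
# punishment was available in the game (made-up data for illustration).
a = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
b = ["yes", "yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes"]
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

A kappa of 1 means perfect agreement and 0 means agreement no better than chance, which is why databanks report it per variable rather than raw percent agreement.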
We are open to collaborations to maintain and further develop CoDa. Contact the CoDa team if you have an interest in collaborating on some aspect of CoDa.