Highlights from the Sprint Session on Open Science Indicators
Organized by Vincent Traag, Tim Willemse, and Zeynep Anli from CWTS
Written by Zeynep Anli
The Open Science Indicator Handbook is one of the key outputs of PathOS, and we intend to ensure it remains accessible and self-sustaining after the project concludes. We are proud to share this user-friendly interface as a collaborative outcome of our project. The OS Indicator Handbook addresses multiple dimensions of measuring Open Science, such as causality, open science, academic impact, societal impact, economic impact, and reproducibility.
The OS Indicator Handbook is open for community contributions, and we will continue to enhance it through future collaborative activities. One such recent activity was a Sprint Session on the development of indicators for open/FAIR data practices, the use of data, and the use of code. For this Sprint Session, we invited experts in these research areas to join the discussion and help refine and enhance the indicators. Our experts for this session were:
- Laetitia Bracco, coordinator of the French Open Science Monitor on research data and software
- Stephan Druskat, affiliated with the German Aerospace Center and the Software Sustainability Institute, conducting PhD research on measuring the impact of dependencies in research software
- Iain Hrynaszkiewicz, director of Open Research Solutions at the Public Library of Science (PLOS)
- Iratxe Puebla, director of the Make Data Count initiative
- Tim Vines, founder and director of DataSeer
We asked them to review the OS Indicator Handbook, focusing on three indicators: prevalence of open/FAIR data practices, use of data in research, and use of code in research.
The sprint session
In our 1.5-hour online meeting, we focused on gathering feedback through discussion among our experts, directing our attention not only to the specific indicators but also to recommendations for the future development and long-term sustainability of the OS Indicator Handbook.
A recurring theme regarding FAIR data practices was a sense of discouragement stemming from the perception that achieving this complex, high-standard goal is nearly impossible. There was also emphasis on the gap between the ideals of FAIR and researchers' actual practices. Viewing FAIR as an aspirational target may be more practical, but this approach carries the risk of failing to inspire and implement "FAIR enough" practices that meet basic expectations. On a broader level, we questioned the underlying reasons for pursuing FAIR data, for example to promote reuse or to serve as a proxy for quality. We still need to establish which indicators would signal quality in this context. For example, closed-source software can be FAIR but not open, which complicates what FAIRness tells us about quality.
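To make the idea of a pragmatic "FAIR enough" baseline concrete, here is a minimal sketch of what such an indicator could look like. This is entirely our own illustration, not an indicator from the handbook or the session: the four checks, their names, and the threshold are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    """Hypothetical metadata record for a deposited dataset (illustrative fields)."""
    has_persistent_id: bool   # e.g. a DOI
    has_open_license: bool    # e.g. CC BY
    has_basic_metadata: bool  # title, creator, description
    uses_open_format: bool    # e.g. CSV rather than a proprietary format

def fair_enough(record: DatasetRecord, threshold: int = 3) -> bool:
    """A "FAIR enough" baseline: satisfied when at least `threshold` of the
    four basic expectations hold. Checks and threshold are illustrative."""
    score = sum([
        record.has_persistent_id,
        record.has_open_license,
        record.has_basic_metadata,
        record.uses_open_format,
    ])
    return score >= threshold

# Example: a dataset with a DOI, an open license, and basic metadata, but a
# proprietary file format, still meets the baseline.
print(fair_enough(DatasetRecord(True, True, True, False)))  # True
```

The point of such a sketch is that a simple, transparent baseline may be easier to act on than the full FAIR specification, even if it deliberately falls short of the ideal.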
Some disciplines, like the Social Sciences, demand special consideration. Differences in publication and citation practices make it challenging for algorithm-based tools to effectively standardize across fields. Social Sciences need better, tailor-made indicators.
When it comes to research software, we discussed, for example, whether tracking downloads is more meaningful than tracking citations. We also touched upon an indicator for software being used by other software. We discussed the use of LLMs in the context of data reuse, comparisons between DataStet and DataSeerML, and the differences between GitHub and GitLab and why one is preferred over the other. In the end, the most common issue seems to be the difficulty of normalizing metrics across various databases and of motivating changes in researcher behavior.
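As an illustration of the download-tracking idea, the sketch below fetches recent download counts for a Python package from the public pypistats.org API. This is our own example, not a tool discussed in the session; the package name is a placeholder, and the approach only covers software distributed via PyPI.

```python
import json
import urllib.request

# Placeholder: substitute the research software package you want to track.
PACKAGE = "example-research-package"

def recent_downloads(package: str) -> dict:
    """Return recent PyPI download counts for a package, as reported by the
    public pypistats.org API (keys: last_day, last_week, last_month)."""
    url = f"https://pypistats.org/api/packages/{package}/recent"
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    return payload["data"]

if __name__ == "__main__":
    counts = recent_downloads(PACKAGE)
    print(f"{PACKAGE}: {counts['last_month']} downloads in the last month")
```

Of course, raw download counts are a noisy proxy (CI pipelines and mirrors inflate them), which echoes the normalization difficulties noted above.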
Looking ahead
We intend the PathOS Open Science Indicator Handbook to remain a useful resource beyond the project's conclusion, and we asked our experts for their views on how to achieve this. We have made a note to attend the meeting of the Open Science Monitoring Initiative (OSMI) to look for synergies. Other suggestions included community submissions and crowdsourcing. Ultimately, the future success of the handbook may depend on how we present it at the project's conclusion, such as through a guide that clearly defines its scope and objectives while outlining our vision for its future development and use. We learned a lot during this fruitful discussion and are taking steps to improve our handbook. We would like to thank our experts again for making time for our sprint session.