In this webinar, part of the Artificial Intelligence (AI) methods in evidence synthesis series, the presenters covered:
- Why do we all need to embrace responsible AI?
- What are the recommendations for responsible AI?
- What changes to Cochrane processes and governance will help authors and others use AI responsibly?
The session was aimed at evidence synthesists, methodologists, AI developers, and those from organizations, funders, or publishers involved in evidence synthesis. It was delivered in June 2025; below you will find the videos from the webinar, together with the accompanying slides to download [PDF].
Part 1: Why we need to embrace AI and the importance of infrastructure
Part 2: A global challenge and introducing RAISE (Responsible AI use in evidence SynthEsis)
Part 3: Applying the RAISE recommendations in practice; what does it mean for authors?
Part 4: Questions and answers
Understanding expectations for evidence synthesis when using AI compared to current best practice
As mentioned in the webinar, the evidence synthesis field needs community-based best practices to facilitate the use of automation and AI in evidence synthesis. One area of uncertainty is how 'correct' evidence synthesis should be. We do not currently have consensus on how correct it should be under current best practice (i.e., humans only), how this changes if we add AI, what an acceptable impact of errors might be, or whether this differs across types of evidence synthesis. The Wellcome-funded DESTinY consortium, of which Cochrane is a partner, has launched a survey to better understand community expectations, which will inform future work and how the next generation of AI-driven evidence synthesis tools are built and evaluated. We welcome feedback from anyone interested in this topic via this survey, which is open until 2 July 2025 and takes approximately 35 minutes to complete.
New & updated Responsible AI use in Evidence Synthesis (RAISE) recommendations & guidance
As discussed in the webinar, RAISE is now a three-paper collection available on OSF, including:
- RAISE 1 with recommendations for practice for the main roles in the evidence synthesis ecosystem, to enhance collaboration & communication for the transparent & reliable use of AI in evidence synthesis.
- RAISE 2 with guidance on building & evaluating AI evidence synthesis tools, which focuses on determining whether an AI tool does what it claims to do to an acceptable standard. It covers how to build and validate AI tools, how to conduct evaluations that build a cumulative evidence base (including which performance metrics to consider), & how to report evaluations.
- RAISE 3 with guidance on selecting & using AI evidence synthesis tools, which focuses on understanding whether an AI tool can be used for a specific evidence synthesis. It covers how to assess, select & use an AI tool; ethical, legal, and regulatory considerations; & the current state of AI tools for evidence synthesis.
RAISE is a joint initiative involving individuals from more than 30 organizations, including Cochrane.