RAISE guidance – maintaining SLR standards in an AI-assisted future

Oct 21, 2025

Written by Gwennie Ogilby (Associate Systematic Review Analyst) and Tom Metcalf (Senior Systematic Review Analyst)

 

Systematic literature reviews (SLRs) are considered one of the strongest forms of evidence to guide healthcare policy and inform future research. High standards are maintained through adherence to rigorous methodologies and strict principles of research integrity, which can make SLRs time-consuming and resource-intensive. Automation via artificial intelligence (AI) has the potential to increase speed and efficiency, but could it simultaneously compromise accuracy, reproducibility, and transparency? At present, there is a relative lack of published guidance on this topic.

In the first coordinated international attempt to establish a set of standards and frameworks for safe AI use in evidence synthesis, a team led by the International Collaboration for the Automation of Systematic Reviews (ICASR), Cochrane, and Campbell has produced the “Responsible AI use in evidence synthesis” (RAISE) guidelines. These are divided across three documents with the following focus areas:

    • RAISE 1: Recommendations for eight specific individuals/groups, including AI tool developers, SLR analysts, and organisations such as Source
    • RAISE 2: Guidance for responsibly developing and evaluating AI tools for use in SLRs
    • RAISE 3: Selecting and using appropriate tools at different stages of the SLR process

Below, we summarise the key themes within the RAISE guidelines and explore their immediate implications for performing SLRs.

 

Accountability and oversight

How can reviewers remain ultimately responsible for every step of the SLR process?

    • Ensure accountability for each AI tool’s output by maintaining human oversight throughout the process (known as ‘human in the loop’)
    • Thoroughly audit AI tool outputs to safeguard against data hallucinations and potential bias

Disclosure and transparency

How can processes remain transparent about the influence of an AI tool?

    • Openly report when, why, and how a tool is being used, via clearly written protocols and publications
    • Utilise AI tools that are developed with transparency around their limitations and biases

Methodological rigour and validation

How can we ensure performance of the AI tool without introducing bias?

    • Consider which stages of an SLR are suited to the current capabilities of AI (e.g. screening) and which may require further development (e.g. critical appraisal)
    • Use tools that are rigorously evaluated for the specific task at hand, via appropriate performance metrics
      • Utilise a validated framework for assessing whether a tool is appropriate, based on transparent reporting of its strengths, limitations, and compliance with regulatory standards
      • AI tools should be trained on large, high-quality datasets to optimise accuracy
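As an illustration of what "appropriate performance metrics" can mean in practice for the screening stage, the short Python sketch below compares a hypothetical AI tool's include/exclude decisions against human gold-standard labels. This is not from the RAISE guidelines themselves; the function name, labels, and metric choices are our own illustrative assumptions. In SLR screening, recall (sensitivity) is typically the critical metric, because a relevant study wrongly excluded at screening cannot be recovered later in the review.

```python
def screening_metrics(human_labels, ai_labels):
    """Compare AI include/exclude decisions against human gold-standard labels.

    Labels are booleans: True = include, False = exclude.
    Returns recall, precision, and the proportion of screening
    workload the tool would save if trusted at this threshold.
    """
    tp = sum(h and a for h, a in zip(human_labels, ai_labels))          # correctly included
    fp = sum((not h) and a for h, a in zip(human_labels, ai_labels))    # wrongly included
    fn = sum(h and (not a) for h, a in zip(human_labels, ai_labels))    # wrongly excluded
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    # Records the tool excluded, i.e. records a human would not need to screen
    workload_saved = 1 - sum(ai_labels) / len(ai_labels)
    return {"recall": recall, "precision": precision, "workload_saved": workload_saved}

# Illustrative example: 10 abstracts, human decisions vs AI decisions
human = [True, True, False, False, True, False, False, False, True, False]
ai    = [True, True, False, True,  True, False, False, False, True, False]
print(screening_metrics(human, ai))  # → {'recall': 1.0, 'precision': 0.8, 'workload_saved': 0.5}
```

Here the tool achieves perfect recall (no relevant abstracts missed) at the cost of one false inclusion, which a human reviewer would simply exclude at full-text stage; a tool with high precision but imperfect recall would be far riskier for this task.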

Ethical, legal, and regulatory compliance

How can we respect data privacy concerns and conform to regulatory requirements?

    • Incorporate guidance on AI use into internal plagiarism and copyright policies
    • Use tools that transparently describe their sources of training data and incorporate privacy features relating to data collection

Continuous learning and collaboration

How can we keep up with developments and optimise our AI use in such a rapidly evolving field?

    • Engage in ongoing learning as the field evolves to stay informed and to comply with new regulatory policies
    • Supply real-world data to AI tool developers to assist them in creating high-quality tools that produce accurate outputs

Conclusions

The guidelines summarised here are an important first step towards standardised, responsible AI use in evidence synthesis. This is essential to navigate the complex legal and ethical challenges that AI use can pose, together with the potential for bias and inaccurate outputs. As new processes such as the European Joint Clinical Assessment (JCA) pile further time pressure on SLR analysts, increased efficiency via the responsible integration of AI technology is likely to transform the industry. At Source, the RAISE guidelines lay the foundations for updating our internal processes as we prepare for this exciting transition.

 

If you would like to learn more about our SLRs, please contact Source Health Economics, a HEOR consultancy specialising in evidence generation, health economics, and communication.
