Lived experience in the room where decisions are made
A case study with the Medical Research Foundation
Vocal Collective worked with the Medical Research Foundation to design and test a model for integrating lived experience expertise in grant review and funding decisions. Across two live funding calls, we recruited and supported lived experience experts and public engagement professionals, thoughtfully building a process the Foundation can deliver for future funding rounds. The project tested something new: pairing lived experience experts with public engagement professionals as distinct but complementary reviewers, and putting lived experience representatives in the room when funding decisions were made. Feedback was striking. Our lived experience experts described it as 'more than stellar' and 'a gold standard approach'.
"It genuinely felt like lived experience was trusted, supported, and treated as expertise, which isn't always the case in funding spaces." (LEE reviewer)
The challenge
Research funders are increasingly recognising lived experience as valuable knowledge. The Medical Research Foundation wanted to integrate lived experience expertise into grant review and decision-making rather than just consulting people. They asked Vocal Collective to help them design and test a model that would work in practice, and that the Foundation could own and build on after we stepped back.
What we did
Between October 2025 and March 2026, we worked with the Foundation across two live funding calls: Suicide and Self-Harm Fellowships and Impact of Cancer and Cancer Care on Human Biology and Health Research Grants.
We designed, recruited for and delivered a 10-stage process covering everything from setting the call guidance through to interviews and close. This included recruiting and supporting ten lived experience experts (LEEs) and 18 public engagement professionals (PEPs), and working closely with the Foundation's team throughout. For the Suicide and Self-Harm scheme, LEEs reviewed fellowship applications and took part in facilitated sense-making sessions, and two LEE representatives sat on the shortlisting and interview panels. Each voice and vote was equal to those of the scientific panel members.
For the Cancer and Health scheme, PEP reviews fed into shortlisting. This gave us a direct comparison: we could see what happened when lived experience was in the room, and what happened when it wasn't.
Step by step to involving Public Partners
The 10-stage process developed and tested with the Medical Research Foundation
Setting the call guidance
Agreeing exactly what the purpose of the LEE/PEP input will be, and how it will influence decision-making, including safeguarding and wellbeing expectations.
Recruiting lived experience experts
Understanding the demographics relevant to the funding call, writing clear role descriptions, and using a fair selection process including 1:1 conversations to check fit, support needs and boundaries.
Identifying and recruiting PEPs
Building a pool of public engagement professionals matched to the Foundation's funding portfolio, with clear role definitions and backup reviewers planned in case of non-completion.
Wellbeing and safeguarding
A relational approach to safeguarding, including independent clinical psychology support, reflective group sessions, safety planning, and clear routes to raise concerns — designed around agency and choice, not paperwork.
Preparing and onboarding LEEs
1:1 introductory conversations, a plain-English welcome pack, a live briefing session, and clear guidance on what reviewers are and aren't expected to assess.
Briefing PEPs and monitoring reviews
Scheme-specific review guidance with a shared scoring scale and descriptors, benchmarking across 4–5 applications per reviewer to support consistency, and progress tracking throughout.
Supporting LEEs in the review period
Applications allocated to match each reviewer's lived experience, with optional early draft feedback, regular check-ins, and contingency plans for illness or delays.
Sense-making and briefing representatives
Facilitated sessions bringing reviewers and panel representatives together to discuss themes, benchmark scoring, and produce short panel-ready summaries. Includes a briefing with the Chair.
Shortlisting preparation and meeting
LEE representatives named as discussers with protected time and a clear platform. The Chair briefed to treat lived experience as equal expertise. LEE, PEP and scientific scores all visible to the panel.
Interviews, final decisions and close
LEE representatives in the room for interviews, with dedicated questions on lived experience involvement. Decisions recorded showing where LEE input shaped outcomes. Debrief, evaluation and feedback to all involved.
Two things that made this work
Combining lived experience experts with public engagement professionals
This pilot tested something new: bringing LEE and PEP perspectives together as distinct but complementary forms of expertise.
LEEs assessed applications through the lens of relevance, real-world impact, ethics, language and what participation might actually feel like for people involved. PEPs brought a focus on feasibility, resourcing, practical delivery and whether involvement plans were realistic and well-structured.
Together, these perspectives gave the Foundation a much fuller picture than either could provide alone. LEE reviewers told us they found the PEP reviews useful for understanding practical considerations. And having LEE representatives in the room during shortlisting meant that both sets of expertise were actively held in decision-making, not sitting in a document that might or might not get read.
Where PEP reviews were the only input (in the Cancer and Health scheme), their influence was inconsistent. Reviews were discussed more when panel members had a personal connection to the topic, and were often overlooked for basic science applications. The contrast was clear: without someone in the room to represent and explain the value of public engagement and lived experience, even strong written reviews can be sidelined.
Building the Foundation's capability to do this themselves
This was never designed as a project where Vocal Collective would become a permanent fixture. From the start, the aim was to build the Foundation's confidence, skills and infrastructure so they could run this kind of process themselves.
We did this in several ways. We worked alongside the Foundation's team at every stage, describing our approach and why it mattered. Weekly project meetings were as much about reflection and learning as they were about logistics. We documented every stage, every template, every lesson learned in a detailed report with practical tools the Foundation can adapt and reuse.
What the evidence showed
Evaluation data from LEEs was striking. People described feeling valued, heard, and treated as genuine experts, not consulted symbolically.
Two LEE representatives attended the shortlisting and interview panels. Both said it was the most co-produced process they had been part of. One described it as a "gold standard approach". They reported that being in the room together made a significant difference to their confidence, to the quality of discussion, and to the weight given to lived experience in decisions.
The Chair played a critical role. By being explicit that lived experience expertise mattered and had parity with scientific expertise, she created conditions where LEEs could contribute fully. LEE representatives were named as discussers for applications, with protected time and a clear platform to speak.
PEPs reported that the process was well structured, the guidance was clear, and they felt their professional expertise was taken seriously. Almost all said they would take part again.
What we delivered
A detailed final report with recommendations structured around 10 stages, each with a recommended process, key learnings, pitfalls to avoid, and practical templates and tools. This report serves as both a record of the pilot and a handbook for future rounds.
Strategic and policy recommendations for the Foundation, covering language and terminology, safeguarding policy, conflict of interest processes, accessibility, payment, diversity monitoring, and specific guidance for embedding PPIE in basic research.
A tested model that combines LEE and PEP expertise in grant review and decision-making, one that other funders can learn from and adapt.
Closing the loop
This summary was written for the lived experience experts and public engagement professionals who were part of the pilot, and for anyone considering similar work in the future. It covers what the project involved, what it felt like to take part, what we learned together, and what makes this kind of involvement work well.
Click the cover to download the summary.
About this work
This project was delivered by Vocal Collective CIC, commissioned by the Medical Research Foundation. The project team was Vanessa Bennett, Farrah Nazir and Leah Holmes, with clinical psychology support, specialist lived experience advice and communications expertise from our associates.
Vocal Collective is a social enterprise that brings together diverse groups to understand and act on the health and social issues that matter to them. We work with funders, researchers and communities to develop processes where lived experience is treated as the expertise it is, and where organisations are equipped to sustain that work themselves.
If you're interested in how this approach could work in your organisation, get in touch at hello@vocalcollective.org.uk.