Introduction to reviewing & assessing medical, surgical, data & social care research.
The background to this guide
The answer to how to review research is to understand what funders of research are looking for.
It's possible to design a perfect application around a research question, but if it is a question funders won't pay to answer, then the process is flawed from the beginning. So getting funds for research is very much a selling game. The applicant has to sell the research question when recruiting other members of the team, and the team have to sell the application to the funder to persuade the funder to pay. It benefits the public reviewer, therefore, to keep in mind the question, "what would stop anyone buying this idea?". Understand the end goal.
The end goal, of course, isn't actually to get funds, although that is a huge ambition to fulfil. The end goal is to benefit the patient long term, which means imagining what the patient experience of an intervention will be: not just during the period of a research study, but if the project were so successful that the intervention was adopted across the whole health and/or social care environment.
The role of the Lay reviewer is to represent and protect the interests of the patient, participants in research and their carers, and members of the public who aren’t themselves clinicians or academics. It boils down to possessing the imagination to think ‘how would this feel, and how does this make sense, to “ordinary” people?’.
This guide is intended to help members of the public who wish to read medical research documents with a view to critiquing and assessing them. It was compiled by me, Jeremy Dearling, based on many years of experience supporting the research community in reviewing research at all stages of the research cycle. This is not necessarily a complete list of tips and hints, but by going through what follows and trying to understand the degree of scrutiny required to comprehensively assess a research proposal, or a written paper submitted for publication, the reader will be better informed for doing so. This site has frequent additions and improvements.
Research relies on many opinions before, during, and after conducting research and gathering data. Before research happens, the team conducting it go through a lengthy process refining the application so it stands the best possible chance of being funded. During research it relies on the opinions of team members, at least one of whom should be a member of the public, to guide its management. After research is completed, the team seek at least one academic journal to publish its results.
Members of the public are called upon to review research documents to provide an opinion as to the quality of those documents. This is where this website comes in. By working through the following sections, members of the public should be able to critique research applications with thoroughness and confidence.
An obstacle to developing an application is acquiring public opinion, via panel meetings, focus groups or other means, but until now that has been poorly supported. As of July 2024, the Research Support Service will offer grants of up to £600 for researchers to engage in PPI (patient and public involvement) to develop their applications.
The chief funder of research in the UK is the National Institute for Health and Care Research, the NIHR. Research topics are refined as a result of a variety of drivers: for example, current crises, such as Covid-19, and public consultations, such as those informed by the James Lind Alliance. Funded research is judged principally against two questions: 1) Does the research offer tangible patient benefits? 2) Will implementing the research outcomes save money?
Research takes place in a variety of places: in a laboratory using scans, cells and samples; in the community using patients and research teams; in an institution, such as a university; in a healthcare setting such as a hospital, known as secondary care; or in general practice (doctors' surgeries), known as primary care.
It is important that the public reviewer keeps in mind that they are there as a critical friend to research and nothing else. Being on a crusade to bang a particular drum, especially if that drum is banging on about not being listened to, is frowned upon by researchers and the PPIE (patient and public involvement and engagement) community. We are there to ask awkward and challenging questions, but we are not there to have arguments with those who hold ideas with which we vehemently disagree. Politeness is critical to ease the wheels of the research cycle and to uphold the integrity of PPIE.
With all that as a background, the following sections cover the ground of reviewing documents.
When I review, this is what I'm looking for...
When I review an application for funding or a patient facing document, the following questions are considered with each and every line:
1. Does this sentence make sense?
2. Does this sentence conflict with previous sentences, or with information I know factually to be true?
3. Is this sentence supported by evidence, or is it at least plausible that it could be?
4. Does this sentence presume knowledge the reader may not have?
5. Is this sentence consistent with the rest of the application or the research question, or is it without context?
6. Is this REALLY research or something else?
When reviewing, the following should be considered...
1. Research documents come in various forms. The ones perhaps of most interest to the public are described as "Lay summaries", or "Plain English summaries". In this essay, both terms will be used. When reviewing these, it is important to ask, 'Is this really plain English?' If, when you read it, you don't understand it as easily as you would understand a book, magazine, or newspaper article, then it isn't plain English, and you need to identify what has to change to make it so.
Be mindful also of the assumption being made by the author that the reader will understand technical terms. For example, "protein" is a technical term, but sadly in the field of healthcare clinicians of all disciplines use the term without really explaining to people what they mean. For most people, protein is what comes with gravy, or with chips. In research terms, a protein is a molecule that builds, maintains and replaces tissues in the body. Muscles, organs and the immune system are made up of proteins. A molecule is defined as two or more atoms that are bound together.
Both these explanations, protein and molecule, are very simplistic, but they serve to explain that terms often used and often thought of as being plain English, are actually technical, and in a plain English summary, technical language has no place.
2. When reviewing a research document, at the back of one's mind should be a number of basic questions, the first of which is: 'What is the question for which this research is the answer?' The research question should be very simple and straightforward, though it can often be lengthy for all that. Is it obvious from the research question what problem is being described? If it is obscure, and you have to struggle to perceive the problem, then that is in itself a problem and needs pointing out.
3. It should be clear from the lay summary, the abstract (an abstract is a brief summary of the whole study), and the full application what benefits the research will bring. It is no longer the case that an obscure academic question will be funded on the grounds that it may, or may not, be of benefit later on. You ought to be able to understand what the benefit of doing the research will be. If that isn't clear, there is a problem and you ought to point it out.
4. By the same token, although not expressly stated, it should also be clear in your mind from reading the documents what the consequences of not doing the research will be. In weighing up the benefits of doing the research against the consequences of not doing it, you ought to be able to begin making a mental assessment of the quality of the proposal.
5. The outcomes of the project ought to be clear. What will the funders be getting for their money, and ultimately are those benefits value for money? Will the outcomes be tangible? Will patients make a swifter recovery; will diagnosis be easier, more accurate, more affordable, more accessible? Will a cure come with fewer adverse consequences, be swifter, involve fewer procedures, involve fewer clinical staff, have fewer side effects, be more easily managed? Will patients be able to return to a life more recognisable to them as "normal" sooner; will their rehabilitation be more accessible? Will the dying experience less distress? Will communication be facilitated better? Will an implant work better, last longer, be more affordable?
6. How will the outcomes change practice? Once the hypothesis (a hypothesis is an assumption that is being tested) is demonstrated, how easy will it be for other centres to copy? If, for example, the hypothesis is that an expensive piece of machinery can be tweaked to help a patient, how likely is it that other places can afford that machinery and have it tweaked to produce the same benefit?
If the hypothesis is that taking a pill after breakfast instead of at night is a benefit, how difficult would that be to implement? Are there any practical, political or social obstacles likely to get in the way of implementing a change in practice?
7. Research is usually done within a timeframe of months, but occasionally it takes place over a number of years. When a long time-scale is involved, it is described as longitudinal research. In either case, it is important to consider how long the project will last. Of equal importance is to consider if the recruitment and involvement of research participants will be impaired by environmental factors. For example, a study that examines physical and outdoor activities taking place in the months between November and April will struggle more than a study examining the same question where participants are measured between May and September.
8. Who will be conducting the research? This is an important question, and one to which funders pay particular attention, because if funders are not persuaded that the research team have the key skills and numbers to conduct the research, they will not release money.
Does the team, in order to demonstrate value for money, include someone whose specialist skill is to weigh up that question? (Such a person is called a Health Economist).
What percentage of time will key people be devoting to the project instead of doing their day job? If an investigator of a work package expecting to complete a significant part of the study is only giving 5% of their time to the project (roughly two hours of a typical working week), how likely is it that that will be sufficient to complete the task? (Research projects are often divided into sections called work packages, and collectively they form the study).
9. What sum of money is being requested? Awarding the funds also depends on an assessment of the sum being asked for. If the sum is too little for a reasonable expectation that the hypothesis can be demonstrated, the project will not be funded. If the project team are asking for too much money, the project will not be funded. In looking at the budget document, it is reasonable to question whether it covers everything necessary for the job to be done. It's like asking a builder for a quote for knocking a hole in a wall for a door. Has the builder included an RSJ (a supporting steel joist), mortar, door, door furniture, glass, putty, time to do the job, etc.?
10. Will the project need ethical approval, and if so, what arrangements are needed? Not all research studies require ethics approval. Many research projects are done by looking at lots of historical data that is already available. Such projects may be referred to as 'retrospective analysis', 'meta-analysis', or 'literature review'. Some projects will involve people looking down microscopes at donated samples; samples already donated will already have consent given by the donors, so ethics may be simple to obtain.
Ethics can get complicated when a project involves people who do not have the mental capacity to consent. These may be patients who are unconscious, or who lack judgement for a variety of reasons. In these cases, consent may be sought from their advocates, who may be their clinicians, their closest relatives, or some other person who will decide what is best for them, acting in the patient's best interests.
11. Have members of the public, and/or patients whose circumstances are relevant to the condition being studied, been involved in designing the study? Has their involvement been meaningful or tokenistic? Has their involvement helped the study evolve and take shape? Will their involvement continue during the progress of the study? Will their involvement be burdensome to them or shared across different patient representatives? Will they be included in examining the data to help draw conclusions? Will they be involved in sharing the results of the study with others? Will they help describe the research and its conclusions in language accessible to members of the public?
12. Will there be any reimbursement or payments due to members of the public who are part of the team? Will there be any reimbursement for participants to cover their travel, parking, and care costs?
It is unreasonable to expect members of the public to fund research out of their own pockets, and this is widely recognised by the research community, but reimbursement is usually referred to only as travel costs. In addition to travel, however, there is the need to pay for parking, and for those who have caring responsibilities for others, such as dependent family members, and who may need to employ a sitter to look after those in their care, that cost should also be reimbursed.
13. Has the research question been answered already? It is sometimes a useful exercise to Google the research question proposed - because the question may have been answered comprehensively many times already. Sometimes the question has been answered but so long ago it needs asking again. Will answering the research question add substantially to existing knowledge?
14. How will participants be recruited? Achieving the necessary number of participants is often the weakest part of research proposals. Will they be recruited by invitation? If so, how will that invitation be issued? Will they be recruited by self-referral? If so, how will they know when and how to offer themselves? Will the recruitment period be dependent on an unreasonable set of criteria? Will they be recruited during a difficult period for recruitment, such as winter, over a public holiday, or when everyone is sunning themselves on a beach in Barcelona?
Are recruitment targets too high? Are recruitment targets too low? Can participants be recruited within the time allowed?
15. Who is eligible to take part in the research? What will be the inclusion and exclusion criteria? Are the criteria so narrow, or so broad, as to make recruitment difficult?
16. What are the components of the project? Measurements, biopsies, interviews, patient diaries, blood tests, scans? Are all the components achievable within the set time, budget and skill set of the project? Will the burden of measurements on participants be acceptable or beyond reason? What will have to happen to make burdens that could hamper recruitment and retention more acceptable?
17. Researchers often live in academic and clinical bubbles with little exposure to the worlds that patients and participants occupy. As such, they may not understand local and regional factors that may influence recruitment and retention. Sometimes research involves interviews in homes, and in densely populated areas conducting 10 interviews is achievable, whereas in rural areas no more than 4 may be reasonable. A reviewer needs to keep such possible complications in mind.
18. What are the risks to participants in taking part? Have those risks been recognised, identified and covered in the proposal? Is there a risk to their dignity or their identity, that they could experience claustrophobia, or end up with a wound, a bruise, or some other loss? Has the researcher allowed for adverse events? Has the researcher built various stop points into the proposal?
19. Does the consent document cover all the points mentioned in patient facing documents? Is the author expecting too much from participants before they sign? Consent that isn’t informed isn’t consent. Is the author asking for consent for things not fully explained in patient facing materials?
20. Are patient facing materials appropriate for the participants? Seeking consent from a child or teenager needs to be couched in appropriate language, and the same language used for younger people is inappropriate for someone much older. Are graphics, diagrams, flow charts and illustrations appropriate and clear, or do they obscure the point intended? Are they the right size for people with a visual impairment? Is the size of text and choice of font clear enough? Do patient facing materials have contact names and details?
21. Would you, in the place of a participant, agree to take part in this research or would you not touch it with a bargepole? Would you allow or recommend someone you loved to be a participant?
22. How will data from participants and about them be collected and stored? Are you confident that the storage site, conditions of storage, and sharing permissions will be secure, and that data will not be leaked or sold on?
Has there been adequate and clear reference to data protection regulations (GDPR)?
When the applicant says that details will be stored for a period of time, do they go on to explain what happens after that? Will the details be destroyed or rendered inaccessible?
23. What are the arrangements to keep participants informed about their contribution and the progress of the study? What are the arrangements at the end of the study to share the conclusions with those who contributed to it as participants, if they wish to be informed?
Attrition (participants leaving before the study is finished) is a big problem for researchers. A lot of time and money is invested in recruiting people to participate, and if they don't stay involved that investment is wasted. Will there be regular phone calls, emails, newsletters, or will there be a website or blogs?
24. What are the arrangements to monitor and mitigate adverse events? If something could go wrong, something that may be avoidable or at least foreseeable, has the team factored in that possibility and given consideration to it?
25. Are the outcomes predicted credible? If a study suggests that the conclusions will have the world beating at their door for the answer to life, the universe and everything in exchange for a weekly helping of vanilla custard over sprouts, it ought not be funded. If the claim is that a sample of spit will solve hair loss in the over 80s, it ought not be funded. If the predicted outcome is that yoga will solve flatulence, it ought not be funded.
26. Are the outcomes unambitious? If a study suggests that conclusions will mean that a new assessment form being completed on admission will make a difference to discharge by half a day, perhaps it ought not be funded - the skilled reviewer might flag this up but the context will be important.
Patients will be keen to get out of hospital as soon as possible. Hospitals need the beds for those waiting in A&E or planned inpatient surgery which is often postponed due to beds not becoming available soon enough. Some might say this should be funded to make better use of resource and facility funding.
The role of a reviewer is to ask questions and not take responsibility for the answers. The burden of responsibility lies on the shoulders of the principal investigator, the most senior person, and while asking difficult and challenging questions is important, it is not our role to get agitated if our advice isn't accepted.
If a study suggests that raising the bed half an inch lower when getting a blood sample will make the job of the phlebotomist five minutes swifter, it ought not be funded. If a study suggests that using yellow plaster casts not blue will make eating breakfast easier, it ought not be funded.
27. Consider what are the variables. Variables are all the possible differences there could be that may affect the answer to the research question. If there are too many, or if the team haven’t taken into consideration all the variables, the outcomes will be unsafe, obscure and inconclusive. Ideally, the perfect research would be to take 50 clones and divide them into two or more groups and test different things on each cohort, at the same time each day, in the same room, having given them identical quantities of sleep and food, to see which produced the best result.
The problem is that people are individuals, and it is the distinguishing things between them that could make for a variation in outcome. The job of a good research study is to take account of the variations and make allowances, and the job of a good reviewer is to identify variations that could be problematic.
How will the study compare and contrast the effects of a drug when given to everyone from 16-96, living in cities and the countryside, with different education levels, different ethnicities, different religions, and different pre-existing diseases? A drug can more successfully be tested in a group of patients who are broadly similar to each other.
28. Is the proposal badly written? Does it contain so much technical language and jargon that it is difficult to understand? A good proposal needs to be accessible if it is to be funded. Researchers who attempt to impress funding committees with unnecessary technical language just make the job of assessing its worth more difficult. Funders have dozens of projects to review at a time; the one that is easier to read will get the best attention. Take the opportunity to suggest edits if you think something could be better expressed.
29. Funders are not looking for reasons to fund a research study; they look for reasons not to fund one. It is easy to find reasons to fund a study that promises it will cure cancer; it is better to fund a study that promises to cure cancer if there are no reasons NOT to fund it. Funders look at proposals to identify flaws and weaknesses, and it is the job of the team and the member of the public supporting the team to help spot weaknesses before the proposal gets to the funding committee.
30. Are the proposed benefits too vague? Are the proposed outcomes too vague to be useful, too vague to translate into wider practice, too indiscernible to offer a tangible patient benefit?
31. As well as having the correct skills on a research team, are there too many team members, making it impractical to get the job done? If the proposal is for a 6-month study, plus three months of recruitment and two months of writing up conclusions, looking for only 12 participants, but the research team consists of 25 academics and clinicians as co-applicants, this might not make for the smooth running of a team. Too many academics and clinicians clutter up decision making. Too few opens up the possibility of mistakes and oversights.
32. Research applications are supported by references to previous work that argue for the gap in knowledge that the study will fill, or argue the case supporting the intended intervention. It is sometimes wise to check the dates of the references to see if they are contemporary or very old. A contemporary reference is one that is within ten years; older than that, it may be treated with caution. Why is the applicant using a reference that dates back to the 1980s or earlier? Is that reference valid and appropriate, or is it stretching the point being made? Upon scrutiny, does a reference solidly underpin the point where it is used, or is the connection tenuous? Are there any statements made that are missing references?
33. Of the samples proposed, either donated samples or participants, is there a risk of bias? Conducting research on healthy diets, for example, where the sample population are all female research graduates between 20 and 35, living in a university city, who graduated in sports and nutrition, would result in a bias that makes for unreliable conclusions. Conducting research into breathlessness among a population of males aged 16-20 who are members of a football club would be biased. Conducting research into healthy diets with a wide mix of ages, men and women, young and old, active and sedentary, with existing diseases and disease free, would reduce bias. (But it would make for a problem with variables!)
34. Is the intended sample population too narrow either to draw safe conclusions or to recruit from? If a study intended to have as a sample only men or women who are 5' 7" tall, weigh exactly 12 stone, have green eyes, strawberry blonde hair, walk with a limp and have a burning passion for Fauré's Requiem, that would make for too narrow a sample to be representative of the population.
35. For whom will the research be important, or will it be important to no one? Potential candidates are: other clinicians in the same field, clinicians in different fields, academics, students, patients, people who care about or for patients, other researchers, policy makers, educators, managers, professions allied to medicine such as nurses, occupational therapists, speech therapists, physiotherapists etc., the wider community, journalists, and people in other countries and cultures.
36. Are the methods described to conduct the research appropriate, so that the conclusions drawn will answer the research question? Research is either qualitative or quantitative: it either measures the quality of something or the quantity of something. Occasionally a researcher will try to include both elements in a study (this is called a "mixed methods study"). If the research question is something like, "how many milligrammes of a pill will cure X but not kill the patient?", but the methods are qualitative and seek to find out how many pills the patient is happy to take, then the methods are inappropriate, and the study ought not be funded.
37. Will any conclusions drawn and published make it easier for others to make better decisions? The purpose of research is to extend knowledge so that safer, better, more cost effective treatments may be offered to patients, or so that the new knowledge it provides can support new research questions that will ultimately do that. Or will any conclusions drawn make it more difficult for decision makers to decide? Will the proposed outcomes clarify the options, or cloud the options?
38. In reading a proposal, does the author mix up terms? For example, do they use “cost effective”, when what they are describing is “cost-saving”? Are there any contradictions, omissions, statements that lack logic, assumptions, spilleng mitstakes, statements that try to make connections that simply don’t exist?
39. Is the research relevant to current problems? A study into Covid-19 is relevant. A study into polio may not be. A study into the King’s Evil definitely isn’t. A study looking at the role of combination analgesia, (pain relief) to combat post radiotherapy pain is relevant. A study looking at the role of blowing up balloons to solve alopecia (hair loss) isn’t.
40. Paying close scrutiny to the budget pays dividends. Is the quoted PPIE (Patient and Public Involvement and Engagement) budget realistic? Does it take account of the inflation rate? Does it look like a figure conjured up on the back of an envelope, or has it been thought through? Does the budget match outcomes in the Gantt chart (project timeline schedule)? (If the budget describes a figure for dissemination in year one, then ask what they could possibly be disseminating at the end of year one.)
41. Has there been attention paid to equality, diversity and inclusion (EDI)? EDI is a knotty problem because all too often the research community think it just means the colour of people's skin, the god people worship, who people sleep with and what cultural practices they follow, but EDI is far bigger than this. EDI in research should mean including everyone who is rarely heard in the research conversation, and as such should pay as much attention to single parents, shift workers, those in remote industries and areas of the country, those who are dysphasic or aphasic, those who are young, etc. There are so many diverse voices that it is impossible to include all of them. It doesn't matter whether a participant in a study about glaucoma is homosexual or heterosexual, provided they have glaucoma or can understand what it is like to have glaucoma. EDI is important so that research outcomes can be translated across whole populations equally. This is evidenced by the discovery during the Covid-19 pandemic that pulse oximeters (peg-like electronic devices that measure the number of times the heart beats and how much oxygen is in the blood) read most accurately on lighter skin and less accurately on darker skin. Research teams are often driven to pursue EDI because they are terrified in case they offend people, but taking offence is a deliberate choice people make.
42. PPIE must be meaningful and not either a tick box exercise or a conduit for "woke" virtue signalling. Public reviewers should be vigilant, in patient and public facing documents as well as full applications, for malign influences that risk the integrity of research merely to prosecute a politically correct agenda. Funding committees scrutinise applications with forensic attention, and will weed out poor ones where PPIE fails to be meaningful. Very often researchers into a condition will strive to include in their PPIE the voices of people who experience that condition, but this can be restrictive. You don't have to have had surgery on a joint to be able to anticipate what pain feels like.
43. Will the research have a tangible patient benefit or just a theoretical one? A study of fasting plasma glucose monitoring in post surgical wound care has a potential tangible patient benefit. A study of Maslow's Hierarchy of Needs in post surgical wound care monitoring, less so. (By the way, there is a theory that testing inulin, a chemical usually tested for kidney function, is a better guide for testing diabetes than testing insulin).
44. What is the dissemination plan? On average, research takes 15 years to move from a published document to anything a patient recognises as a meaningful benefit. The public reviewer is justified in pressing for the dissemination plan to be ambitious. As well as conferences, articles and posters (the usual fare), the public reviewer could press for podcasts.
Here are examples:
This is a link to a study led by Kristy Sanderson looking into fatigue in the ambulance service: https://www.uea.ac.uk/groups-and-centres/projects/catnaps/disseminationpublicengagement
This is a study led by Angus Ramsey looking at video triage between ambulance crews and stroke physicians: https://www.ucl.ac.uk/epidemiology-health-care/research/applied-health-research/research/health-care-organisation-and-management-group/photonic-0
45. A good public reviewer should always keep in mind the welfare of the patient. Does a particular drug, for example, extend life and also the quality of life, or does it extend the process of dying and make no change to the diminishing quality of life of the poor patient?
46. Are the benefits and impacts of the proposed application overstated? Does the applicant claim that the study will answer the question "what is the meaning of life, the universe and everything?", when in fact it will only answer "what is the best lasagne recipe?"
47. Is this really primary research or is this more a service evaluation? Primary research is the pursuit of new knowledge using new data. Secondary research is the pursuit of new knowledge using existing data (e.g. meta-analyses). Service evaluation is the pursuit of validating knowledge about the provision of an existing service. A crude example could come in the form of, "This study is to understand if the rehabilitation clinic at X is delivering expected benefits to patients within X NHS Trust".
48. Be vigilant to the appearance of false syllogisms. Sometimes applicants will try to get away with making a weak case stronger by sentences along the lines of, "...this demonstrates that...". For example, an applicant could cite that because there are 100,000 strokes annually, this statistic supports the case that more research is needed to understand the link between marmalade and left sided weakness.
Help for research applicants
Funders reject applications for a number of reasons:
1. There is no tangible patient benefit
2. The cost would be unjustifiable
3. The experience and/or skill mix is imperfect
4. The study can't be completed satisfactorily within the stated time frame.
5. The FTE (Full Time Equivalent expressed as a percentage of time) involvement of key players is inadequate.
6. The outcomes are unachievable
7. The outcomes are unbelievable
8. The variables are too great to arrive at a solid conclusion
9. Recruitment targets are too low
10. Recruitment targets are too high
11. The priorities on which the hypothesis is based have been:
a) replaced
b) superseded
c) covered by subsequent studies
12. The perceived patient benefit is too minimal
13. The value for money isn't there
14. It's overambitious
15. It's unambitious
16. There isn't enough PPI
17. The proposal is badly written
18. The benefits are too vague to be of value
19. Outcomes are unlikely to be translated into practice
20. Outcomes are too local and unlikely to be rolled out to a wider practice.
21. Inclusion/exclusion criteria are too narrow
22. The patient burden is too great
23. There is a risk of sample bias (a study based only in Cambridge or Oxford and not also in Lowestoft or Bolton could mean there is an inherent bias).
24. The supervision is insufficient.
Questions that need answers in funding applications for funders to release money
1. Who is asking for the money?
2. What is the problem for which their research will be the answer?
3. What is the answer?
4. How long will it take to get the answer?
5. How will the question be answered?
6. How will bias be eliminated in getting the answer?
7. Who else will be part of the team getting the answer, are they the right people, and will their contribution be meaningful?
8. What is the sort of participant necessary to answer the question?
9. How will these participants be acquired?
10. How many participants will be acquired?
11. Over what time scale will the participants be recruited?
12. Over what time scale will their cooperation be necessary?
13. What will happen to participants?
14. Will their expenses be reimbursed?
15. Have members of the public been involved in deciding the question and designing the study, and will they be involved to the end? How, and how many of them have been involved? Also, what value will co-applicants who are members of the public bring to the study?
16. How will data gathered help answer the question?
17. How will data gathered be analysed, and by whom?
18. Will there be a patient benefit?
19. Will it save money, or will it cost more in the end?
20. How will findings from the study change practice?
21. How will people know about the study?
22. What are the risks, or possible risks?
23. (Sometimes they also want to know, what are the stop points along the way - Stop Points are the points in a research plan where the study can be stopped early).
24. What will be the burden to participants?
25. How does this application fit in with current policies, priorities, public conversations, calls for change?
26. What impact will this study have on existing resources, organisations and staff?
Disappearing down rabbit holes
Research conversations contain technical terms that don't, or shouldn't, concern the public reviewer. Nevertheless, when they crop up they can be puzzling and make it a challenge to follow what is going on. What is below will rarely be necessary to understand, but just for the halibut I explain them. When I came across these in the work I do, I needed someone to explain them, and I have to confess it seemed like a conversation theologians would have about how many angels could dance on the head of a pin.
What is Realist Methodology?
Realist methodology is a way of trying to understand not just whether something works, but how and why it works. Data may be gathered to answer a research question that concludes X is better than Y, but applying realist methodology leads to an understanding of why X is better than Y. The further insights gained turbocharge the results.
What is a superiority study?
A superiority study happens when a randomised clinical trial (RCT) compares one intervention (a drug, for example, or a surgical procedure) against one or more others, the purpose being to demonstrate that one is better than another. This is often the case when, in medical trials, a placebo is involved.
What is an inferiority study?
You will sometimes see "inferiority study" used loosely; what is usually meant is a non-inferiority study (see below), in which a randomised clinical trial (RCT) compares one intervention (a drug, for example, or a surgical procedure) against one or more others, the purpose being to demonstrate that one treatment is not worse than another. This is often the case when a placebo cannot be involved.
What is a non-inferiority study?
A non-inferiority study is one that tests whether a new treatment is not worse than an active treatment it is being compared to. The benefit of designing a trial to test non-inferiority is that a study can demonstrate that, while an intervention isn't necessarily better than other existing interventions, it is less burdensome. For example, if drug X treats itching associated with liver disease but has 12 side effects, a non-inferiority trial could be used to demonstrate that drug Z does exactly the same but has only 6 side effects.
What is an equivalence study?
An equivalence study is pretty much what you would expect it to be. Equivalence, or bioequivalence, trials seek to demonstrate that there is no, or statistically minimal, difference between two or more interventions for the same purpose.
What is a sham clinical trial?
A sham clinical trial usually happens in clinical trials involving surgery where otherwise a placebo would be used. A placebo is a substance with no therapeutic benefit given to a patient when testing how effective a drug believed to have a therapeutic benefit is. It is important that the patient doesn't know they have been given a 'dummy drug', so they are not influenced by that knowledge when later they describe their signs and symptoms, or the absence of them.
In clinical trials involving surgery, such a placebo happens where the patient is prepared for surgery as usual: they are anaesthetised, the surgeon cuts them leaving exactly the same scar as they would for a patient undergoing the real surgery, the wound is closed and dressed, and the patient is taken to theatre recovery and then the ward. The patient would be under the impression they had had surgery and would later describe their signs and symptoms, or the absence of them.
The ethics of sham clinical trials are conversations worth having. On the one hand, such studies, which pose risks to participants without direct compensating benefits, are generally considered ethically acceptable provided that the risks have been minimised, are not excessive, and are justified by the value of the knowledge to be gained from the research. On the other hand, the risks of sham surgery can be argued to be greater than the risks associated with placebo drugs. The wound created, for example, could become infected. The patient could, in the belief they are cured, ascribe any subsequent worsening symptoms to something to be expected and not complain, leading to a deterioration that may become irreversible even with genuine surgery.
What is a mechanistic study?
A mechanistic study sets out to understand how biological, organisational or behavioural processes work. A mechanistic study could seek to understand the way a disease develops (this is known as the pathophysiological mechanism). Alternatively, it could seek to unpick the systems and structures in an organisation to understand why it isn't working better. Or, where a drug is known to work, a mechanistic study could seek to understand how and why it works.
What is a pragmatic clinical trial?
Unlike clinical trials that study hypothetical questions under strictly controlled conditions, ironing out differences between two or more groups so the data produced is more fairly comparable, a pragmatic clinical trial seeks to answer a research question under real world conditions. Clinical trials can happen in artificial environments, like bringing people to clinics dedicated to research, or, as in pragmatic clinical trials, in clinics that are part of routine care.
For example: do exercises to restore arm function after a broken shoulder work in a rehabilitation facility, using patients who would normally turn up for therapy rather than a selection chosen because they have a particular set of characteristics or excluded because they don't?
What is a Delphi study?
A Delphi study employs a method of reaching a set of conclusions that either stand alone as a result or act as a step in a pathway towards a result, along with other methods. Very crudely explained, imagine a large family trying to decide where to go on holiday: every member is given a long list of destinations, with options to read about those destinations, and everyone is invited to rank them in order of preference. The family head then takes all those scores and rankings, makes a shortlist, and repeats the exercise until it is possible to reach a conclusion. When it is used in research, the family members consist of academics, clinicians and members of the public/patients. It is a technique used by the James Lind Alliance to refine options posed by a research question and arrive at the most important research questions yet to be answered. Setting up a Delphi is laborious but it produces excellent results, and anyone wishing to use this technique should study it thoroughly before setting their hand to the plough. The software needed to create a Delphi is available from a number of organisations, typical of which is a company called Jisc (onlinesurveys.ac.uk).
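If it helps to see the mechanics laid bare, here is a tiny, purely illustrative sketch (written in Python) of the ranking-and-shortlisting step described above. The family members, destinations and the simple rank-sum scoring rule are all invented for the example and are not taken from any real Delphi software; real software does something similar over several rounds, feeding the shortlist back to participants each time.

```python
# Illustrative only: a crude rank-sum version of one Delphi round.
# Each person ranks every destination (1 = favourite).
rankings = {
    "Alice":  {"Cornwall": 1, "Lake District": 2, "Paris": 3, "Norfolk": 4},
    "Bashir": {"Paris": 1, "Cornwall": 2, "Norfolk": 3, "Lake District": 4},
    "Carmen": {"Lake District": 1, "Cornwall": 2, "Paris": 3, "Norfolk": 4},
}

def shortlist(rankings, keep=2):
    """Add up each destination's rank positions; the lowest totals are the most preferred."""
    totals = {}
    for preferences in rankings.values():
        for destination, position in preferences.items():
            totals[destination] = totals.get(destination, 0) + position
    return sorted(totals, key=totals.get)[:keep]

print(shortlist(rankings))  # ['Cornwall', 'Lake District'] - these two go forward to the next round
```

The only point of the sketch is the idea of pooling everyone's rankings and whittling a long list down, round by round, until a consensus emerges.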
What is a crossover trial?
A crossover trial is where a study is testing two elements of a research question. For example, these could be two new drugs being tested against each other, a new drug being tested against a proven drug, or a placebo against a new drug. Where two things are being tested, at a predetermined point during the study those patients on drug A are given the drug that the people in group B have been taking, and vice versa. Fewer patients are needed for a crossover trial in comparison to other designs, which is a big plus.
What is a factorial study?
A factorial clinical study tests the effect of two or more treatments using combinations of those treatments. This design allows researchers to compare the benefits and problems of each intervention being tested against the other interventions. That may sound garbled, and it is hard to get your head around. If there are several drugs being tested on 20 patients, it is possible to test combinations of those drugs on the various arms. So, in the case of drugs to combat constipation when morphine is being taken, we could have drugs A, B, C, D, E, F and G being tested. If A is morphine and drugs B, C, D, E, F and G are different drugs to resolve constipation, participants could be tested over time with the combinations A & B, A & C, A & D, A & E, A & F and A & G, and the best drug that works out of those combinations wins the prize. This is a very simplistic and crude explanation, but I hope it explains it well enough for the purposes of following a conversation.
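To make the combinations concrete, here is a small, purely illustrative sketch (in Python, using the same placeholder drug letters as above, not taken from any real trial) that simply writes out the arms and shows how quickly the number of possible pairings grows:

```python
from itertools import combinations

# Placeholder drug letters, as in the example above: A is morphine,
# B-G are candidate drugs to resolve constipation.
drugs = ["A", "B", "C", "D", "E", "F", "G"]

# The arms described in the text: morphine paired with each candidate in turn.
arms = [("A", candidate) for candidate in drugs[1:]]
print(arms)  # [('A', 'B'), ('A', 'C'), ('A', 'D'), ('A', 'E'), ('A', 'F'), ('A', 'G')]

# If any pair of the seven drugs could be combined, the number of arms grows quickly,
# which is one reason factorial designs are hard to get your head around.
print(len(list(combinations(drugs, 2))))  # 21 possible pairings
```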
What is an observational clinical study?
An observational clinical study is one where research staff physically observe what is happening in an environment where the people being studied are present. For example, a study may want to understand the prevalence of urinary tract infections on orthopaedic wards (wards where patients are having bone surgery). Part of the data gathering could be a researcher monitoring the length of time it takes nurses to answer call bells when a patient needs to empty their bladder. The data gathered would be combined with other known measurable data and conclusions reached.
What is a case control study?
A case control study compares a group of people with one disease with an otherwise similar group of people without that disease. This design happens more often in the study of rare diseases, but such studies have a problem because there is a risk of bias and the conclusions may lack credibility. A crude example could be where caucasian male employees of an industrial incineration plant between the ages of 20 and 45 are studied to see how many become infertile after 5 years of employment. The differences between the two groups might narrow down who is more and who is less at risk from exposure to incineration ash.
What are field studies?
Field studies are ones that gather data from the normal home, leisure or working environment of those being studied. They could be studies that take place in the middle of 20 acres of sugar beet, but this is unlikely. An example of a field study could be where occupational therapists are trying to understand why patients doing exercises at home are doing better or worse than those in a therapy gym.
What is Grounded Theory?
The grounded theory research method is a type of qualitative research method, but it works by constructing a theory from data after the data has been collected and analysed, rather than starting with a theory and then going out to collect data for analysis. The data grounded theory uses is collected through usual methods such as interviews or from documents.
What is the Hawthorne Effect?
The Hawthorne Effect is the phenomenon where people change their behaviour when they know they are being observed. This is a caution for the public reviewer to raise when discussing observational studies.
What is a Platform Trial?
A platform trial is a type of randomised clinical trial that adapts when new information indicates a change of focus is needed. Platform trials compare multiple simultaneous interventions against a single control group. They are different in that they are open ended, meaning new interventions can be added, assessed, and removed as time goes on, without having to specify at the start what they might be. Platform trials are very flexible tools, meaning that if a new intervention comes along to address a problem being tackled by an existing solution being tested, this new intervention can be added to the mix and tested alongside the others.
What is Action Research?
Action Research is a qualitative method of answering a research question that involves patients being closely included in identifying a problem in healthcare delivery, designing equally with academics and clinicians how data is to be gathered, being involved in gathering it, analysing the data, and interpreting and disseminating the results. This is also what is meant by "co-production": with patients and the public, identifying a problem, designing a solution, implementing the solution, assessing its success, identifying the problems with the solution, designing a solution to the solution, implementing that, and so on and so forth. There is a fine line here between activity and achievement, a distinction the Department of Health has confused for decades, but done well there can be tangible patient benefit.
What is Narrative Research?
Narrative research aims to draw conclusions from exploring and analysing experiences described in interviews. Narrative research is used by researchers from a wide variety of disciplines, including anthropology, communication studies, cultural studies, economics, education, history, linguistics, medicine, nursing, psychology, social work, and sociology. It encompasses a range of research approaches including ethnography, phenomenology, grounded theory, narratology and action research.
What is Programme Theory?
Programme theory seeks to identify the components of interventions and understand the mechanisms through which they work. It has the potential to shorten the time frame of developing interventions, improve the design of interventions and identify what will make interventions successful. It should specify the components of an intervention and describe the rationale and possible links between an intervention, processes, conditions and outcomes.
What is Process Evaluation?
Process Evaluation aims to answer the question of how an implementation happens and to understand how the results of a clinical trial can be translated into day to day practice. It's all very well to design a Decision Support Tool for elective surgery and demonstrate in a cohort that it has patient benefits, but a process evaluation study takes that outcome and looks to understand how the tool works in real world use. The outcomes of a process evaluation guide the planning, design and conduct of implementing an intervention. Process evaluations can also take place within clinical trials, to monitor and assess a tested intervention, so that when the project is complete the team can say they have examined robustly how their intervention is applied.
What is the Modified Rankin Scale?
You may come across this term and it might be helpful to understand what it is. The Modified Rankin Scale (mRS) is used to measure the degree of disability in patients who have had a stroke, as follows:
0: No symptoms at all
1: No significant disability despite symptoms; able to carry out all usual duties and activities
2: Slight disability; unable to carry out all previous activities, but able to look after own affairs without assistance
3: Moderate disability; requiring some help, but able to walk without assistance
4: Moderately severe disability; unable to walk without assistance and unable to attend to own bodily needs without assistance
5: Severe disability; bedridden, incontinent and requiring constant nursing care and attention
6: Dead
What is a Complex Innovative Design?
A Complex Innovative Design trial is a broad term used to describe new ways of designing a trial, usually a drug trial, that improve the efficiency of studies and consequently shorten drug development timelines. CIDs can differ from standard randomised controlled trials (RCTs) in a number of ways, including:
Trial design: The trial methodology may be more complex or innovative.
Set-up: The trial may be set up in a novel way.
Recruitment: The trial may use new recruitment methods.
Delivery: The trial may be delivered in an innovative way.
Statistical analysis: The trial may use new statistical methods or mathematics.
CIDs can be used to evaluate new cancer drugs, and can help to shorten the time it takes to develop them. For example, a platform trial is a type of CID that evaluates multiple treatments at once; this can be faster than setting up multiple RCTs. CIDs can be challenging to design, conduct, and interpret. They often involve more complex statistical methods, which can make it difficult for non-experts to understand the results. However, with proper training, education, and regulatory guidance, the challenges associated with CIDs can be overcome.
What is a Research Passport?
A research passport is a formal mechanism, a licence, for people who do not work for the NHS to access NHS sites for the purposes of conducting research. A typical example of the need for a research passport is where part of a study involves observing clinical practice as it happens and the people who are doing the observation are academics or members of the public who are listed members of the research team. A research passport is NOT needed when someone is employed by an NHS organisation; when someone is an independent contractor (e.g. GP) or employed by an independent contractor; when someone has an honorary clinical contract with an NHS Trust e.g. clinical academics; when someone is a student on a healthcare placement. Further guidance is here: https://www.hra.nhs.uk/planning-and-improving-research/best-practice/research-passport/
What good PPIE looks like
Funders are struggling to get the message across to applicants about what good PPIE looks like, so this section is given over to helping reviewers identify good PPIE and flag up to funders where quality is lacking. Good PPIE is….
1. Where the applicant has involved and recruited public and patient voices from the very beginning of their study, ideally when the team is being recruited.
2. Where the applicants not only say "I have held a focus group" or "I have met with people affected", but state how often and how many voices they have heard.
3. Where they have explained how their application has been influenced by patients and public voices, where it has changed and in what way it is improved.
4. Where they have recognised that EDI goes much further than just colour of skin, ethnicity and the gods people worship, and extends to cohorts rarely heard, such as dysphasic people, single parents and shift workers.
5. Where it is credible that plans to include patient and public voices are built into the team throughout the life of the study in both the management and oversight committees.
6. Where patient and public voices are involved in qualitative data analysis where it is possible.
7. Where the research question has an element that it has been influenced by patient and public priorities.
8. Where the budget for PPIE has been detailed and explained and bears no relation to figures concocted on the back of an envelope.
9. Where there is representation of both patient and carer/family voices.
10. Where there is acknowledgement that patient/participant facing materials could need translation services and there are mechanisms for this available.
11. Where the application recognises that patients, carers and members of the public who live in cities are representative only of city dwellers and not others, and PPIE comes from all demographics.
12. Where the PPIE co-applicant has directly relevant reasons to be a co-applicant and it is clear what insights they will bring to the study.
13. Where the activities, expectations and tasks of the PPIE are clearly described
14. Where PPIE is an agenda item at every meeting of the management and oversight committee
15. Where training for PPIE contributors and team members in how to involve PPI (if needed) is included as part of the design and included in the budget.
16. Where patient facing materials are influenced or designed by PPIE
The 4 stages of a drug clinical trial
Phase I
In this phase, a small group of people (usually 20 to 80) participate to assess the safety of the new treatment. Researchers determine the best dose, identify side effects, and study how the treatment interacts with the human body. People recruited to these trials often have advanced cancer and have already tried all the available standard treatments.
Phase II
This phase involves a larger group of people (around 100 to 300) to see if the treatment is effective and to gather more information about its safety and side effects. It helps determine which types of diseases or conditions the treatment works best for. During this phase more is understood about the side effects.
Phase III
If the study treatment is successful at Phase II then it can proceed to Phase III. In this phase, hundreds to thousands of people participate to confirm the treatment's effectiveness, monitor side effects, and compare it with existing treatments. This phase is crucial for gathering enough data to seek FDA approval. (The FDA is the American Food and Drug Administration; it approves drugs if they pass certain criteria.) If a Phase III trial is amazingly effective it might become the standard treatment. Many treatment trials reaching this point also have a component measuring the quality of life of those taking the drug. (What is the incentive to live longer on a drug if the quality of living is worse than having cancer?)
Phase IV
After the treatment has been approved by the FDA, Phase IV trials continue to monitor its long-term safety and effectiveness in a large, diverse population. This phase helps identify any rare side effects and assess the treatment's benefits over an extended period.