Editor’s Perspective

Assessing Research Evidence

December 1, 2023

By Dwight Akerman, OD, MBA, FAAO, FBCLA, FIACLE

Evidence-Based Medicine (EBM) uses the scientific method to organize and apply current data to improve health care decisions. The best available science is combined with an eye care professional’s clinical experience and the patient’s values to arrive at the best medical decision for the patient. What sort of evidence are we looking for? The current best evidence: not perfect evidence, simply the best there is, and not old or out-of-date evidence, but the most up-to-date research available.

When practicing evidence-based medicine, the levels of evidence or data should be graded according to their relative strength. More robust evidence should be given more weight when making clinical decisions. The evidence is commonly stratified into six different levels. All clinical studies or scientific evidence can be classified into one of the following categories:

  • Level IA: Evidence obtained from a meta-analysis of multiple, well-conducted, and well-designed randomized trials. Randomized trials provide some of the strongest clinical evidence, and when several are repeated and their results combined in a meta-analysis, the overall conclusions are considered even stronger.
  • Level IB: Evidence obtained from a single well-conducted and well-designed randomized controlled trial. The randomized controlled study, when well-designed and well-conducted, is a gold standard for clinical medicine.
  • Level IIA: Evidence from at least one well-designed and executed non-randomized controlled study. When randomization does not occur, there may be more bias introduced into the study. 
  • Level IIB: Evidence from at least one well-designed case-control or cohort study. Not all clinical questions can be effectively or ethically studied with a randomized controlled study. 
  • Level III: Evidence from at least one non-experimental study. Typically, Level III evidence includes case series as well as poorly designed case-control or cohort studies.
  • Level IV: Expert opinions from respected authorities on the subject based on their clinical experience.

To manage juvenile-onset myopia at the highest level, eye care professionals must understand what to look for in published myopia management studies. Recently, the Director of Research for Aston University’s optometry and vision science research group, Professor Nicola Logan, shared her tips on assessing the validity of research into myopia management interventions.

Building upon Professor Logan’s tips, eye care professionals should consider the following seven points when assessing the validity of published myopia management research:

  1. Who was the study sponsor? 
    • Was the study funded by an independent source, such as the NEI, or by the industry whose product was being researched in the published study?
    • Have the research findings been published in a peer-reviewed journal? Have other independent researchers reviewed the research findings?
  2. What type of research study supports the claims? 
    • Are the data from a randomized controlled trial (RCT)? Other study designs, such as case reports, offer insight, but the evidence they provide is less robust.
  3. What age are the study participants, and does this fit with the age range of children you see in your clinical practice?
  4. What was the length of the study? 
    • Ideally, three-year longitudinal data are preferable.
  5. What are the pre-defined study outcomes? 
    • Since almost all juvenile-onset myopia progression is driven by axial elongation, slowing axial elongation should be a primary outcome, along with slowing the progression of spherical equivalent refraction.
  6. Are the research results statistically significant but not clinically significant?
    • The NEI-funded COMET trial of progressive addition lenses (PALs) to slow myopia progression in children is an example of statistically significant but not clinically significant findings.
  7. Does the paper report on the intention-to-treat study population before reporting sub-group or per protocol results?
    • An intention-to-treat analysis is a method for analyzing results in a prospective RCT in which all randomized participants are included in the statistical analysis and analyzed according to the group to which they were originally assigned, regardless of what treatment (if any) they received. However, subjects in clinical trials do not always adhere to the protocol. Excluding subjects who violated the research protocol (did not receive their intended treatment) can bias the results and their interpretation.
    • Applying intention-to-treat principles yields an unbiased estimate of the efficacy of the intervention on the primary study outcome at the level of adherence observed in the trial. So, when the treatment under study is effective but there is substantial non-adherence, the intention-to-treat analysis will underestimate the magnitude of the treatment effect that will occur in adherent patients. Although it may underestimate an effective therapy, the estimate is unbiased.
    • Per-protocol analysis is a marketing favorite because only adherent children are included in the reported results, often yielding far stronger efficacy claims; a simple simulation illustrating the difference follows this list.
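
To make the intention-to-treat versus per-protocol distinction concrete, the short Python sketch below simulates a hypothetical two-arm trial with 30% non-adherence in the treatment arm. All numbers (sample size, effect size, adherence rate) are illustrative assumptions, not data from any published study.

import random
import statistics

random.seed(1)

# Hypothetical simulation (illustrative only, not real trial data):
# 200 children randomized 1:1 to a myopia-control intervention or control.
# Assumed true effect: 0.30 mm less axial elongation in children who actually
# use the treatment; 30% of the treatment arm does not adhere and gets no benefit.
N_PER_ARM = 100
TRUE_EFFECT_MM = 0.30        # benefit in adherent, treated children
MEAN_ELONGATION_MM = 0.60    # untreated axial elongation over the study
SD_MM = 0.15
NONADHERENCE_RATE = 0.30

control = [random.gauss(MEAN_ELONGATION_MM, SD_MM) for _ in range(N_PER_ARM)]

treated, adherent = [], []
for _ in range(N_PER_ARM):
    adheres = random.random() > NONADHERENCE_RATE
    effect = TRUE_EFFECT_MM if adheres else 0.0
    treated.append(random.gauss(MEAN_ELONGATION_MM - effect, SD_MM))
    adherent.append(adheres)

# Intention-to-treat: every randomized child is analyzed in the assigned arm.
itt_effect = statistics.mean(control) - statistics.mean(treated)

# Per-protocol: only adherent children in the treatment arm are analyzed
# (the control arm has no treatment to adhere to in this simple sketch).
pp_treated = [y for y, a in zip(treated, adherent) if a]
pp_effect = statistics.mean(control) - statistics.mean(pp_treated)

print(f"Intention-to-treat estimate: {itt_effect:.2f} mm less elongation")
print(f"Per-protocol estimate:       {pp_effect:.2f} mm less elongation")
# With 30% non-adherence, the ITT estimate is diluted toward roughly 0.21 mm,
# while the per-protocol estimate approaches the full assumed 0.30 mm effect.

In this toy example, both analyses are "correct" arithmetic, but only the intention-to-treat estimate reflects what randomization actually delivered across all enrolled children, which is why it should be reported first.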

I encourage all eye care practitioners to consider these points when reading the literature and assessing a new intervention for myopia management. If they do, they can weigh whether the intervention is worth adding to their clinical myopia management armamentarium. As eye care professionals, we must do everything possible to give children and adolescents the highest quality care, including practicing evidence-based eye care for myopia management.


Best professional regards,

Dwight H. Akerman, OD, MBA, FAAO, FBCLA, FIACLE
Chief Medical Editor
dwight.akerman@gmail.com
