
Evidence-Based Practice: Research Guide | Appraise: Building an Evidence Table

Tips and resources for evidence-based practice research.

Online Research & Instruction Librarian

Jeanie Winkelmann
she/her/hers
Virtual Office Hours:

Mon, Tues, Wed: 1p - 4p

Thurs & Sun: 6p - 9p

Please use the "Meet With Me" button to book your appointment.

Appraise: Building an Evidence Table

The Basics:

A standard evidence table has 10 columns. To properly review the available evidence, you need to fill in every column for each study you find. Evidence tables are often included in systematic reviews and are a great tool for taking evidence-based practice from the page into the clinical setting, especially when you are making an administrative change in the treatment of patients.

The Ten Columns:

1. Condition
2. Study Design
3. Author, Year
4. N
5. Statistically Significant?
6. Quality of Study
7. Magnitude of Benefit
8. Absolute Risk Reduction
9. Number Needed to Treat
10. Comments
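To see how the ten columns fit together, the sketch below models one row of an evidence table as a Python dataclass. This is only an illustrative structure, and the example values (study, numbers, comments) are invented, not taken from a real trial.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EvidenceRow:
    """One row of a ten-column evidence table (field names follow
    the column headings above; values here are hypothetical)."""
    condition: str
    study_design: str
    author_year: str
    n: int
    statistically_significant: str            # "Yes", "No", or "P" (pending)
    quality_of_study: int                     # Jadad score, 0-5
    magnitude_of_benefit: str                 # "small", "medium", "large", "none", or "NA"
    absolute_risk_reduction: Optional[float]  # None when "NA"
    number_needed_to_treat: Optional[float]   # None when "NA"
    comments: str = ""

# Hypothetical example row:
row = EvidenceRow(
    condition="Migraine",
    study_design="Randomized controlled trial (RCT)",
    author_year="Smith, 2020",
    n=240,
    statistically_significant="Yes",
    quality_of_study=4,
    magnitude_of_benefit="medium",
    absolute_risk_reduction=0.15,
    number_needed_to_treat=round(1 / 0.15, 1),  # NNT = 1/ARR (see Column 9)
    comments="Crossover design; 12-week follow-up.",
)
```

A structure like this makes it easy to check that no column was left blank before the row goes into the table.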


Column 1: Condition

This column refers to the medical condition or disease that is targeted by the therapy.


Column 2: Study Design

This column is where you report the type of study you found.

List one of the following:

  • Randomized controlled trial (RCT)
  • Equivalence trial: An RCT which compares two active agents. Equivalence trials often compare new treatments to usual (standard) care, and may not include a placebo arm.
  • Before and after comparison: A study that reports only the change in outcome in each group of a study, and does not report between-group comparisons. This is a common error in studies that claim to be RCTs.
  • Case series
  • Case Report
  • Case-control study
  • Cohort study
  • Meta-analysis
  • Review
  • Systematic review
  • P: Pending verification


Column 3: Author, Year

Identifies the author and publication year of the study described in that row of the table.


Column 4: N

The total number of subjects included in a study (treatment group plus placebo group). Some studies recruit a larger number of subjects initially, but do not use them all because they do not meet the study's entry criteria. In this case, it is the second, smaller number that qualifies as N.

Trials with a large number of drop-outs that are not included in the analysis are considered to be weaker evidence for efficacy. (For systematic reviews the number of studies included is reported. For meta-analyses, the number of total subjects included in the analysis or the number of studies may be reported.)

P= pending verification.


Column 5: Statistically Significant?

Results are noted as being statistically significant if a study's authors report statistical significance, or if quantitative evidence of significance is present.

P= pending verification.


Column 6: Quality of Study

A numerical score between 0-5 is assigned as a rough measure of study design/reporting quality (0 being weakest and 5 being strongest). This number is based on a well-established, validated scale developed by Jadad et al. (Jadad AR, Moore RA, Carroll D, et al. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Controlled Clinical Trials 1996;17[1]:1-12).

Jadad Score Calculation

Ask yourself:

  • Was the study described as randomized (this includes words such as randomly, random, and randomization)? If yes: +1 point
  • Was the method used to generate the sequence of randomization described and appropriate (table of random numbers, computer-generated, etc.)? If yes: +1 point
  • Was the study described as double blind? If yes: +1 point
  • Was the method of double blinding described and appropriate (identical placebo, active placebo, dummy, etc.)? If yes: +1 point
  • Was there a description of withdrawals and dropouts? If yes: +1 point
  • Deduct one point if the method used to generate the sequence of randomization was described and it was inappropriate (patients were allocated alternately, or according to date of birth, hospital number, etc.): -1 point
  • Deduct one point if the study was described as double blind but the method of blinding was inappropriate (e.g., comparison of tablet vs. injection with no double dummy): -1 point

P= pending verification.
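The scoring rules above are mechanical enough to sketch in code. The function below is a minimal illustration of the Jadad tally, assuming a yes/no answer to each question; the function name and parameter names are our own, not part of the published scale.

```python
def jadad_score(
    randomized: bool,
    randomization_method_appropriate: bool,
    randomization_method_inappropriate: bool,
    double_blind: bool,
    blinding_method_appropriate: bool,
    blinding_method_inappropriate: bool,
    withdrawals_described: bool,
) -> int:
    """Tally a Jadad score (0-5) from yes/no answers to the
    questions in the table above."""
    score = 0
    if randomized:
        score += 1
    if randomization_method_appropriate:
        score += 1
    if double_blind:
        score += 1
    if blinding_method_appropriate:
        score += 1
    if withdrawals_described:
        score += 1
    # Deductions for described-but-inappropriate methods:
    if randomization_method_inappropriate:
        score -= 1
    if blinding_method_inappropriate:
        score -= 1
    # Keep the result on the 0-5 scale.
    return max(0, min(5, score))
```

For example, a trial with appropriate randomization, appropriate double blinding, and a description of withdrawals would score the maximum of 5.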


Column 7: Magnitude of Benefit

This summarizes how strong a benefit is: small, medium, large, or none. If results are not statistically significant, "NA" for "not applicable" is entered. Be consistent in defining small, medium, and large benefits across different studies and monographs.

P= pending verification.

NA= not applicable.


Column 8: Absolute Risk Reduction

This describes the difference between the percent of people in the control/placebo group experiencing a specific outcome (control event rate) and the percent of people in the experimental/therapy group experiencing that same outcome (experimental event rate). Mathematically, absolute risk reduction (ARR) equals the control event rate minus the experimental event rate.

Many studies do not include adequate data to calculate the ARR; in such cases, "NA" is entered in this column.

P= pending verification.

NA= not applicable.
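As a worked example of the definition above, the sketch below computes ARR from two event rates. The function name and the 20%/12% figures are invented for illustration.

```python
def absolute_risk_reduction(control_event_rate: float,
                            experimental_event_rate: float) -> float:
    """ARR = control event rate minus experimental event rate."""
    return control_event_rate - experimental_event_rate

# Hypothetical trial: 20% of the placebo group and 12% of the
# therapy group experienced the outcome.
arr = absolute_risk_reduction(0.20, 0.12)  # 0.08, i.e. 8 percentage points
```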


Column 9: Number Needed to Treat

This is the number of patients who would need to use the therapy under investigation, for the period of time described in the study, in order for one person to experience the specified benefit. It is calculated by dividing 1 by the absolute risk reduction (NNT = 1/ARR).

P= pending verification.
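The 1/ARR calculation can be sketched as follows. The function name, the example ARR of 0.08, and the convention of rounding up to a whole patient are our assumptions for illustration.

```python
import math

def number_needed_to_treat(arr: float) -> int:
    """NNT = 1 / ARR, rounded up to a whole patient
    (a common convention, assumed here)."""
    if arr <= 0:
        raise ValueError("NNT is undefined when ARR is zero or negative")
    return math.ceil(1 / arr)

# Continuing the hypothetical ARR of 0.08 from Column 8:
nnt = number_needed_to_treat(0.08)  # 1 / 0.08 = 12.5, rounded up to 13
```

In words: with an ARR of 8 percentage points, about 13 patients would need the therapy for one additional patient to benefit.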


Column 10: Comments

When appropriate, this brief section may comment on design flaws (inadequately described subjects, lack of blinding, brief follow-up, not intention-to-treat, etc.), notable study design elements (crossover, etc.), dosing, and/or specifics of the study group/sub-groups (age, gender, etc.).