# (PART) Evidence Quality {-}

# Evidence Quality {#EvidenceQuality}

*Chapter lead: Jon Duke*

## Understanding Evidence Quality
How do we know if the results of a study are reliable? Can they be trusted for use in clinical settings? What about in regulatory decision-making? Can they serve as a foundation for future research? Each time a new study is published or disseminated, readers must consider these questions, regardless of whether the work was a randomized controlled trial, an observational study, or another type of analysis. \index{evidence quality} \index{regulatory decision-making}

One of the concerns often raised around observational studies and the use of "real world data" is the topic of data quality [@botsis2010secondary; @hersh2013caveats; @sherman2016real]. It is commonly noted that data used in observational research were not originally gathered for research purposes and thus may suffer from incomplete or inaccurate data capture as well as inherent biases. These concerns have given rise to a growing body of research on how to measure, characterize, and ideally improve data quality [@kahn2012pragmatic; @liaw2013towards; @weiskopf2013methods]. The OHDSI community is a strong advocate of such research, and community members have led and participated in many studies examining data quality in the OMOP CDM and the OHDSI network [@huser_multisite_2016; @kahn_transparent_2015; @callahan2017comparison; @yoon_2016]. \index{data quality} \index{community}

Given the findings of the past decade in this area, it has become apparent that data quality is not perfect and never will be. This notion is nicely reflected in this quote from Dr. Clem McDonald, a pioneer in the field of medical informatics:

> Loss of fidelity begins with the movement of data from the doctor’s brain to the medical record. \index{Clem McDonald}

Thus, as a community we must ask the question: *given imperfect data, how can we achieve the most reliable evidence?* The OHDSI community is seeking to address this question through a holistic focus on "evidence quality". Evidence quality considers not only the quality of observational data but also the validity of the methods, software, and clinical definitions used in our observational analyses. \index{community} \index{reliable evidence}

In the following chapters, we will explore four components of evidence quality:

| Component of Evidence Quality | What it Measures |
|-------------------------------|----------------------------------------------------------------------------------------------------------------------------|
| [Data Quality](DataQuality.html) | Are the data completely captured with plausible values in a manner that is conformant to agreed structure and conventions? |
| [Clinical Validity](ClinicalValidity.html) | To what extent does the analysis conducted match the clinical intention? |
| [Software Validity](SoftwareValidity.html) | Can we trust that the process transforming and analyzing the data does what it is supposed to do? |
| [Method Validity](MethodValidity.html) | Is the methodology appropriate for the question, given the strengths and weaknesses of the data? |

## Communicating Evidence Quality

An important aspect of evidence quality is the ability to express the uncertainty that arises from imperfect data. Thus, our efforts around evidence quality include not only concepts but also specific tools and community processes. The overarching goal of OHDSI's work on evidence quality is to give health care decision-makers confidence that the evidence generated by OHDSI, while undoubtedly imperfect in many ways, has been consistently assessed for its strengths and weaknesses, and that this information has been communicated in a rigorous and open manner.