
J Biomed Inform. 2019 Nov;99:103293. doi: 10.1016/j.jbi.2019.103293. Epub 2019 Sep 19.

Making work visible for electronic phenotype implementation: Lessons learned from the eMERGE network.

Author information

Department of Biomedical Informatics, Columbia University, New York, NY, United States.
Northwestern University Feinberg School of Medicine, Chicago, IL, United States.
Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, United States.
Research Information Science and Computing, Partners Healthcare, Boston, MA, United States.
Department of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States.
Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN, United States.
Center for Applied Genomics, Children's Hospital of Philadelphia, Philadelphia, PA, United States.
Kaiser Permanente Washington Health Research Institute, Seattle, WA, United States.
National Human Genome Research Institute, NIH, Bethesda, MD, United States.
Department of Biomedical Informatics, Columbia University, New York, NY, United States; Medical Informatics Services, NewYork-Presbyterian Hospital, New York, NY, United States.
Department of Biomedical Informatics, Columbia University, New York, NY, United States.



OBJECTIVE: Implementation of phenotype algorithms requires phenotype engineers to interpret human-readable algorithms and translate their descriptions (text and flowcharts) into computable phenotypes, a process that can be labor intensive and error prone. To address the critical need to reduce this implementation effort, it is important to develop portable algorithms.


MATERIALS AND METHODS: We conducted a retrospective analysis of phenotype algorithms developed in the Electronic Medical Records and Genomics (eMERGE) network and identified common customization tasks required for implementation. A novel scoring system was developed to quantify portability along three aspects: Knowledge conversion, clause Interpretation, and Programming (KIP). Tasks were grouped into twenty representative categories. Experienced phenotype engineers were asked to estimate the average time spent on each category and to evaluate the time savings enabled by a common data model (CDM), specifically the Observational Medical Outcomes Partnership (OMOP) model, for each category.


RESULTS: A total of 485 distinct clauses (phenotype criteria) were identified from 55 phenotype algorithms, corresponding to 1153 customization tasks. In addition to 25 non-phenotype-specific tasks, 46 tasks were related to interpretation, 613 to knowledge conversion, and 469 to programming. A score between 0 and 2 (0 for easy, 1 for moderate, and 2 for difficult portability) was assigned for each aspect, yielding a total KIP score ranging from 0 to 6. The average clause-wise KIP score, reflecting portability, was 1.37 ± 1.38. Specifically, the average knowledge (K) score was 0.64 ± 0.66, the interpretation (I) score 0.33 ± 0.55, and the programming (P) score 0.40 ± 0.64. Only 5% of the categories could be completed within one hour (median), while 70% of the categories took days to months to complete. The OMOP model can assist with vocabulary mapping tasks.
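The KIP metric described above is a simple sum of three per-aspect difficulty scores. As a minimal sketch of that arithmetic, the following Python fragment models one clause's score; the class and field names, and the example clause, are illustrative assumptions rather than artifacts from the eMERGE study:

```python
from dataclasses import dataclass

# Each aspect is scored 0 (easy), 1 (moderate), or 2 (difficult portability).
EASY, MODERATE, DIFFICULT = 0, 1, 2

@dataclass
class ClauseScore:
    """Hypothetical per-clause KIP record (names are illustrative)."""
    knowledge: int       # K: knowledge conversion (e.g., vocabulary mapping)
    interpretation: int  # I: clause interpretation (ambiguity in the text)
    programming: int     # P: programming (site-specific query logic)

    def kip(self) -> int:
        """Total KIP score: sum of the three aspect scores, range 0-6."""
        for s in (self.knowledge, self.interpretation, self.programming):
            assert s in (EASY, MODERATE, DIFFICULT)
        return self.knowledge + self.interpretation + self.programming

# Example: moderate knowledge conversion, easy interpretation,
# moderate programming effort -> total KIP score of 2.
clause = ClauseScore(knowledge=MODERATE, interpretation=EASY, programming=MODERATE)
print(clause.kip())  # → 2
```

A clause-wise average such as the 1.37 reported above would then be the mean of `kip()` over all 485 clauses.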


CONCLUSION: This study presents firsthand knowledge of the substantial implementation effort in phenotyping and introduces a novel metric (KIP) to measure the portability of phenotype algorithms, quantifying such effort across the eMERGE Network. Phenotype developers are encouraged to analyze and optimize portability with regard to knowledge, interpretation, and programming. CDMs can be used to improve portability for some 'knowledge-oriented' tasks.


Keywords: Electronic health records; Phenotyping; Portability

