From Meaningful Use to Meaningful Analytics
I’ve written about the three data waves we are facing in healthcare: data capture, data sharing, and data analytics. Primarily due to the “meaningful use” program, the United States has made huge strides in electronic health record (EHR) adoption and is beginning to make progress in health information exchange (HIE). These are the principal drivers of the growing capability and use of clinical analytics, since the patient data captured, shared, and aggregated by these applications is the primary source of the data that healthcare organizations analyze with clinical analytics tools. Those tools allow us to turn data into actionable information that can actually improve quality and lower the cost of care. It is the meaningful use of EHR technology that will ultimately enable meaningful analytics.
Two key factors drive the use of clinical analytics to translate data into information: achieving high-quality care and improving patient safety, and increasing awareness of the costs associated with providing care. One way organizations are framing these quality-of-care issues is within the context of meaningful use. Because of the incentives when meaningful use criteria are met, and the impending penalties when they are not, many healthcare organizations and providers are evaluating how they capture and share data. Since organizations are required to report on multiple measures to achieve meaningful use, they often try to capture and report successfully on all measures rather than focusing on only a handful. Clinical data analytics not only leverage meaningful use rules, but can also help satisfy compliance with them.
One benefit of HIE, besides improved care coordination, is the ability to run queries and apply analytical tools to data that were not previously available. The five health outcomes policy priorities included in meaningful use are:
- Improve quality, safety, efficiency, and reduce health disparities
- Engage patients and families
- Improve care coordination
- Improve population and public health
- Ensure adequate privacy and security protections for personal health information
Meaningful Use Analytics
The reporting requirements for meaningful use can obviously make good use of clinical analytics tools, but much of this reporting capability is also useful when participating in new payment models such as accountable care organizations (ACOs). Although not directly called out in meaningful use, lowering costs is a high priority and part of the over-arching Triple Aim. I cannot imagine succeeding in a truly transformed healthcare system without the clinical and business intelligence tools that allow for targeted interventions: not only a retrospective look via claims data, but the real-time capabilities of an Enterprise Data Warehouse (EDW) with robust analytic and reporting functionality.
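As a rough illustration of the kind of measure reporting an EDW can support, the sketch below computes a simple clinical quality measure rate with pandas. The patient data and column names are made up for illustration and do not reflect any particular EHR or warehouse schema.

```python
# A minimal sketch (hypothetical schema): computing a meaningful use-style
# clinical quality measure rate from an EDW extract with pandas.
import pandas as pd

# Hypothetical extract: one row per eligible diabetic patient in the period.
patients = pd.DataFrame({
    "patient_id": [101, 102, 103, 104, 105],
    "has_hba1c_result": [True, True, False, True, False],
})

# Numerator: patients with an HbA1c result documented during the period.
numerator = patients["has_hba1c_result"].sum()
# Denominator: all eligible patients in the measure population.
denominator = len(patients)

rate = numerator / denominator
print(f"HbA1c documentation rate: {rate:.0%} ({numerator}/{denominator})")
```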
In addition to the focus on meaningful use measures and ACOs, the industry’s shift to ICD-10 (International Statistical Classification of Diseases and Related Health Problems, 10th revision), mandated for the coding of all inpatient and outpatient claims beginning in October 2014, will also affect the use of clinical analytics. Conversion to ICD-10 coding will dramatically increase the specificity and granularity—and therefore the value—of diagnostic datasets. For example, the change will increase the number of codes available for identifying diagnoses and procedures from 17,000 to 155,000. It will improve the classification of patient interactions by expanding the information relevant to ambulatory and managed care encounters, offering expanded injury codes, and enabling combined diagnosis-and-symptom codes that reduce the number of codes needed to fully describe a condition. This increased granularity, combined with the continued increase in digital capture of clinical data, will yield new data sets that healthcare organizations will have the opportunity to translate into meaningful information and use to improve the delivery of healthcare.
Health Catalyst offers solutions to measure readiness.
The Outcomes Improvement Readiness Assessment (OIRA) Tool
Significant trends in U.S. healthcare demand a focused effort on sustained clinical, operational, and financial outcomes improvements. Healthcare organizations have been asking us to help them understand the competencies necessary to drive and sustain outcomes improvements—and to help them assess their readiness for implementing the necessary changes. No tool existed that assessed the full spectrum of competencies essential for driving sustainable outcomes improvements. Nothing existed to guide organizations—until now.
OIRA Tool Development and Future Plans
The OIRA Tool was developed using an integrated literature review of healthcare organizational improvement research across three databases: CINAHL, MEDLINE®, and Web of Science™. The articles were assessed and used to derive an initial set of competencies. A three-round, modified Delphi nominal group method was used with a panel of 11 subject matter experts to evaluate the relevancy and clarity of the competencies using an item-level content validity index (I-CVI) and a scale content validity index (S-CVI)1.
The evaluation panel included healthcare executives and directors responsible for organization-wide outcomes improvements, multidisciplinary improvement team members (clinicians, data architects, and data analysts), and healthcare improvement consultants and analysts. Following round three of the evaluation, the 22 OIRA Tool competencies were grouped into five main keys for success. An S-CVI of 0.92 was achieved, with I-CVI values ranging from 0.82 to 1.0, indicating that the OIRA Tool competencies are clear, understandable, and relevant for assessing how ready organizations are to drive and sustain outcomes improvements.
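For readers unfamiliar with these indices, the sketch below shows how I-CVI and S-CVI/Ave are typically computed, assuming the common 4-point relevance scale in which ratings of 3 or 4 count as relevant. The panel ratings in it are hypothetical, not the OIRA panel’s actual data.

```python
# A minimal sketch of the content validity indices referenced above, assuming
# a 4-point relevance scale. The ratings below are invented for illustration.

def i_cvi(ratings):
    """Item-level CVI: share of experts rating the item relevant (3 or 4)."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

# Hypothetical ratings from an 11-expert panel for three competencies.
panel_ratings = {
    "leadership_engagement":  [4, 4, 3, 4, 4, 3, 4, 4, 4, 3, 4],
    "analytics_capability":   [4, 3, 4, 4, 2, 4, 3, 4, 4, 4, 3],
    "best_practice_adoption": [3, 4, 4, 4, 4, 4, 4, 3, 4, 4, 4],
}

item_cvis = {name: i_cvi(r) for name, r in panel_ratings.items()}
# Scale-level CVI (averaging method): the mean of the item-level CVIs.
s_cvi_ave = sum(item_cvis.values()) / len(item_cvis)

for name, value in item_cvis.items():
    print(f"I-CVI {name}: {value:.2f}")
print(f"S-CVI/Ave: {s_cvi_ave:.2f}")
```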
How to Deal with Unstructured Data
The healthcare industry has recently seen a sharp increase in interest in natural language processing (NLP). The unstructured clinical record contains a wealth of insight into patients that isn’t available in the structured record. Additionally, advances in data science and AI have introduced new techniques for analyzing text, broadening and deepening understanding of the patient. Any organization seeking to leverage its data to improve outcomes, reduce cost, and further medical research needs to consider the wealth of insight stored in text and how it will create value from that data using NLP.
The first step in using NLP can be the most difficult, and many organizations never meet the initial challenge of making the data available for analysis. NLP requires that data engineers transform unstructured text into a usable format (see need-to-know aspect #2 below) and move it to a location where the NLP technology can make use of it. This prerequisite can be a complex process, involving larger data sets and different technologies than many data engineers are familiar with.
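As a rough illustration of that prerequisite step, the sketch below pulls clinical note text out of a source database, lightly normalizes it, and lands it in a tabular format that NLP tooling can consume. The connection, table, and column names are hypothetical; real EHR schemas and pipelines vary widely.

```python
# A minimal sketch of making unstructured notes available for NLP.
# Source database, table, and column names are hypothetical.
import sqlite3
import pandas as pd

conn = sqlite3.connect("ehr_extract.db")  # stand-in for the source system

# Keep the identifiers needed to link NLP results back to the structured record.
notes = pd.read_sql_query(
    """
    SELECT note_id, patient_id, note_type, note_datetime, note_text
    FROM clinical_notes
    WHERE note_type = 'Nursing Note'
    """,
    conn,
)

# Light normalization before handing off to NLP: strip markup remnants and
# collapse whitespace so downstream tokenizers see clean text.
notes["note_text"] = (
    notes["note_text"]
    .str.replace(r"<[^>]+>", " ", regex=True)
    .str.replace(r"\s+", " ", regex=True)
    .str.strip()
)

# Land the cleaned notes where the NLP tooling can read them (e.g., Parquet).
notes.to_parquet("nursing_notes_for_nlp.parquet", index=False)
```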
This article outlines four need-to-know ways to meet and overcome the challenges of making unstructured text available for advanced NLP analysis. It’s focused on the challenges and skillsets required to build a solid foundation for text analytics.
Understanding Free Text Is the Foundation for Healthcare NLP
In my role leading NLP efforts for a healthcare analytics vendor, I recently worked on a patient safety surveillance tool that helps health systems monitor for potential adverse events: for example, the administration of Narcan to reverse the effects of a painkiller in a patient who doesn’t respond well to it, or a hospital-acquired pressure ulcer. While the administration of Narcan is commonly documented in structured data, pressure ulcers are often found in unstructured nursing notes.
To get the data necessary to improve patient safety, we needed to leverage the free text of nursing notes. We found that five of the 33 adverse events were primarily documented in unstructured text. To access and leverage the text data in the patient safety tool, we needed NLP. To use the rich information unstructured text holds, however, we needed more than the NLP tools themselves.
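To make the idea concrete, here is a minimal sketch of flagging possible pressure ulcer documentation in nursing note text with simple pattern matching. It is a stand-in for illustration only, not the surveillance tool’s actual NLP logic, and the notes and terms are invented.

```python
# A minimal, illustrative stand-in for NLP-based adverse event detection:
# flag nursing notes whose text may document a pressure ulcer.
import re

PRESSURE_ULCER_PATTERNS = [
    r"pressure ulcer",
    r"pressure injury",
    r"decubitus",
    r"stage\s+(i{1,3}|iv|[1-4])\s+ulcer",
]
pattern = re.compile("|".join(PRESSURE_ULCER_PATTERNS), re.IGNORECASE)

# Hypothetical nursing note snippets.
nursing_notes = [
    (1, "Skin intact, no redness noted over bony prominences."),
    (2, "Sacral area shows a stage 2 ulcer; wound care consulted."),
    (3, "Patient repositioned q2h; decubitus precautions in place."),
]

# Flag matching notes for review by the patient safety team.
flagged = [(note_id, text) for note_id, text in nursing_notes if pattern.search(text)]
for note_id, text in flagged:
    print(f"Possible adverse event in note {note_id}: {text}")
```

A production approach would use richer NLP (negation handling, context, and section awareness) rather than raw keyword matching, but the data plumbing requirements are the same.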
Four Need-to-Know Aspects of Working with Unstructured Text
To effectively build a data pipeline for text, and navigate unfamiliar challenges, data engineers must understand four key points:
1. Text Is Bigger and More Complex
An average structured EMR record—such as a medication, allergy, or diagnosis—runs between 50 and 150 bytes, or 50 to 150 MB per million records. The average clinical note record, on the other hand, is approximately 150 times as large. With large health systems storing hundreds of millions of note records, this scale introduces data transfer and storage complexities that many data engineers won’t have previously confronted.
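A quick back-of-envelope calculation, using the figures above and a hypothetical note volume, shows why this matters:

```python
# Back-of-envelope arithmetic for the scale difference described above, using
# the figures in the text and a hypothetical note volume; actual sizes vary.
structured_record_bytes = 150                       # upper end of the 50-150 byte range
note_record_bytes = structured_record_bytes * 150   # ~150x larger per the text

note_count = 300_000_000                            # hypothetical large health system
total_bytes = note_count * note_record_bytes

print(f"Per-note size: ~{note_record_bytes / 1024:.1f} KB")
print(f"Total note storage: ~{total_bytes / 1024**4:.1f} TB")
```

At that hypothetical volume, note text alone runs into the terabytes, well beyond what the same number of structured records would require.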
2. Text Comes from Different Data Sources