The digital health space refers to the integration of technology and health care services to improve the overall quality of health care delivery. It encompasses a wide range of innovative and emerging technologies such as wearables, telehealth, artificial intelligence, mobile health, and electronic health records (EHRs). The digital health space offers numerous benefits, including improved patient outcomes, increased access to health care, reduced costs, and better communication and collaboration between patients and health care providers.

For example, patients can now monitor vital signs such as blood pressure and glucose levels from home using wearable devices and share the data with their doctors in real time. Telehealth lets patients consult with their health care providers remotely, without traveling to the hospital, making care more accessible, particularly in remote or rural areas. Artificial intelligence can analyze vast amounts of patient data to identify patterns, predict outcomes, and provide personalized treatment recommendations. Overall, the digital health space is rapidly evolving, and the integration of technology into health care is reshaping how care is delivered.

Friday, September 13, 2024

OpenAI (ChatGPT), DougallMD, Doximity, and most medical applications will use AI.

Breaking the rules of medical technology: A conversation with Dr. Eric Topol | LinkedIn

History has proven that society moves forward when innovators question and defy the accepted financial, cultural, or technological norms. This is especially true in medicine.

To encourage more technological innovation in medicine, the 10th season of the Fixing Healthcare podcast focuses on (a) which technologies will disrupt the status quo in medical practice and (b) who will lead these much-needed changes.

1. Personalized Preventive Care

The traditional approach to preventive care uses broad, universal categories like age and sex to determine what medical screenings and preventive measures people should receive. While this approach works on a large—and mostly probabilistic—scale, it overlooks individual risk factors, such as genetic profiles and specific biomarkers. Humans can’t possibly retain and recall all the data that affects each individual patient, but they will be able to apply the totality of information using GenAI. These tools can almost instantaneously analyze vast amounts of data to identify those individuals at highest risk for serious conditions like cancer and heart disease and recommend targeted approaches to improve patient outcomes.

2. Reducing Diagnostic Errors

Traditionally, doctors rely on memory and experience to diagnose new symptoms, drawing from their medical training and past cases. However, this approach risks overlooking rare or complex conditions, and it is vulnerable to cognitive errors like confirmation and proximity bias. GenAI offers a powerful solution by integrating vast amounts of medical data, combining patient history, symptoms and real-time imaging, and correlating it with comprehensive medical literature, including obscure case reports. In the future, combining a dedicated clinician with a generative AI application will produce more accurate diagnoses than either alone.
3. Enhancing Doctor-Patient Interactions

In today's healthcare, doctors often spend more time inputting data into electronic health records (EHRs) than engaging with patients, leading to impersonal and transactional experiences. GenAI is changing this dynamic by automatically transcribing and organizing doctor-patient conversations into accurate, high-quality EHR entries. This technology not only can free up as much as two hours a day for clinicians, but also improves the quality of care and helps reduce burnout.

A Fourth Opportunity: Accelerating Medical Research
In addition to Dr. Topol’s three points, I’d add a fourth: the ability of GenAI to accelerate research. In medical science today, it can take years to gather enough data to drive meaningful advances. GenAI can dramatically shorten this timeline by analyzing vast amounts of patient data quickly, leading to faster breakthroughs and more timely application of new treatments.
Clinical research conventionally starts with a question, followed by lengthy data collection and analysis. This approach is time-consuming and limited by the volume of data that researchers can analyze and manage. GenAI alters the calculus by enabling doctors to sift through enormous datasets. Today, U.S. hospitals produce up to 50 petabytes of data each year, 97% of which currently goes unused. By mining this data, GenAI will be able to uncover patterns and insights that would take years to find with traditional methods. One of the first practical applications will be identifying hospitalized patients who are likely to deteriorate over the next 24 hours, allowing clinicians to intervene earlier and potentially save lives.
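As a rough illustration of how such a deterioration alert might work, consider a rule-based early-warning score over recent vital signs. The thresholds and weights below are invented for this sketch and are not a validated clinical score; real systems such as trained GenAI models would weigh far more signals.

```python
# Toy early-warning score for 24-hour deterioration risk.
# Thresholds and weights are illustrative only -- not a validated clinical score.

def early_warning_score(heart_rate, resp_rate, systolic_bp, spo2, temp_c):
    """Return a simple risk score from one set of vital signs."""
    score = 0
    if heart_rate > 110 or heart_rate < 50:
        score += 2
    if resp_rate > 24 or resp_rate < 10:
        score += 2
    if systolic_bp < 90:
        score += 3
    if spo2 < 92:
        score += 3
    if temp_c > 38.5 or temp_c < 35.0:
        score += 1
    return score

def flag_for_review(vitals, threshold=4):
    """Flag a patient when the score meets or exceeds the threshold."""
    return early_warning_score(**vitals) >= threshold

stable = {"heart_rate": 78, "resp_rate": 16, "systolic_bp": 122, "spo2": 98, "temp_c": 36.8}
deteriorating = {"heart_rate": 118, "resp_rate": 26, "systolic_bp": 86, "spo2": 90, "temp_c": 38.9}

print(flag_for_review(stable))         # False
print(flag_for_review(deteriorating))  # True
```

The point of the sketch is the workflow, not the thresholds: a model scores every inpatient continuously, and only those above a cutoff are surfaced for clinician review.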

Challenges and Risks
Of course, breaking the rules of medicine comes with challenges. Security is a major concern, especially when clinicians use generative AI for EHR data entry. But this risk already exists in current EHRs, which can be hacked yet, on their own, offer none of the advantages that GenAI solutions will provide.
The evolution of medical technology always includes trade-offs. The advent of CT scans, MRIs, and laparoscopic tools, for example, led doctors to lose their skill in physical exams, but the lives saved by these innovations are undeniable. No clinician would go back to the past.
Within the next five years, Dr. Topol predicts that GenAI will become a standard tool for creating electronic health records (EHRs). Other applications will follow soon after. I’m confident that the old rule “the doctor knows best” will be replaced by a new reality—one in which the best outcomes come from a collaboration between a dedicated clinician, an empowered patient, and GenAI. Together, they will achieve more than any of the three could accomplish alone.


You will be there.
 



Tuesday, September 3, 2024

OpenAI, Anthropic Agree to Work With US Institute on Safety Testing



Artificial intelligence startups OpenAI and Anthropic have agreed to help the U.S. government with AI safety testing by allowing early access to new models. The goal is to enable the AI Safety Institute, part of the Commerce Department's National Institute of Standards and Technology, to assess risks and head off potential issues before the models are released, working closely with counterparts in the U.K. The institute's director called it an "important milestone," as governments try to put more guardrails in place while still allowing companies room to innovate in the fast-moving sector.

  • At the state level, California legislators overwhelmingly approved an AI safety bill, which now heads to Gov. Gavin Newsom for final consideration.
  • If enacted, the "fiercely debated" bill would require tech companies to safety-test AI programs before release and empower the attorney general to sue for any major harm caused by the technologies.
California is the leading U.S. state in the effort to regulate AI, though the EU is far ahead in the process. The European Union has 27 member states. They are:

Austria
Belgium
Bulgaria
Croatia
Cyprus
Czech Republic
Denmark
Estonia
Finland
France
Germany
Greece
Hungary
Ireland
Italy
Latvia
Lithuania
Luxembourg
Malta
Netherlands
Poland
Portugal
Romania
Slovakia
Slovenia
Spain
Sweden

The United Kingdom no longer belongs to the EU.

As of 2023, the estimated population of the European Union is around 447 million people, a large population of potential AI users.

As of 2023, the status of AI regulation in the USA is evolving and characterized by several key points:

No Comprehensive Federal Law: Unlike the EU's proposed AI Act, the U.S. does not yet have a comprehensive federal law specifically governing AI, though various sector-specific regulations apply.
Executive Orders and Guidance: The Biden administration has issued executive orders and guidelines aimed at promoting safe and responsible AI development. This includes principles for ethical AI use and initiatives to foster innovation while addressing risks.
Agency Actions: Various federal agencies, such as the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST), are developing frameworks and guidelines for AI use, focusing on issues like transparency, accountability, and bias.
State Regulations: Some states have begun to implement their own regulations concerning AI, particularly around issues like facial recognition and data privacy.
Industry Collaboration: There are ongoing discussions between government entities, industry leaders, and civil society to establish best practices and standards for AI governance.
Public Input: The U.S. government has sought public input on AI regulation, indicating a desire for a collaborative approach to developing policies.

As of 2023, several U.S. states have started to implement regulations or propose legislation concerning AI. Here are some notable examples:

California:
California has laws addressing data privacy (e.g., CCPA and CPRA) that impact AI systems, especially regarding data collection and usage.

Illinois:
The Illinois Biometric Information Privacy Act (BIPA) regulates the use of biometric data, which affects AI technologies that utilize facial recognition.

New York:
New York City has enacted regulations concerning the use of AI in hiring practices, requiring transparency and fairness in automated decision-making.

Virginia:
Virginia has proposed legislation focusing on the responsible use of AI in government and public services.

Washington:
Washington state has considered various bills aimed at regulating facial recognition technology and ensuring accountability in AI systems.

Massachusetts:
Massachusetts has explored legislation focused on the ethical use of AI and the implications of AI technologies in different sectors.
These regulations often focus on specific applications of AI, such as facial recognition, data privacy, and transparency in automated decision-making. The landscape is rapidly changing, and more states may introduce AI-related legislation in the future.

U.S. AI Safety Institute Signs Agreements Regarding AI Safety Research, Testing, and Evaluation With Anthropic and OpenAI


These first-of-their-kind agreements between the U.S. government and industry will help advance safe and trustworthy AI innovation for all.


This reflects a shared public- and private-sector concern about the hazards of AI misuse. The U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced agreements that enable formal collaboration on AI safety research, testing, and evaluation with both Anthropic and OpenAI.

Each company’s Memorandum of Understanding establishes the framework for the U.S. AI Safety Institute to receive access to major new models from each company before and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks. 

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the U.S. AI Safety Institute. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

Additionally, the U.S. AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the U.K. AI Safety Institute. 

The U.S. AI Safety Institute builds on NIST’s more than 120-year legacy of advancing measurement science, technology, standards, and related tools. Evaluations under these agreements will further NIST’s work on AI by facilitating deep collaboration and exploratory research on advanced AI systems across a range of risk areas.

Evaluations conducted under these agreements will help advance the safe, secure, and trustworthy development and use of AI by building on the Biden-Harris administration’s Executive Order on AI and the voluntary commitments made to the administration by leading AI model developers.

About the U.S. AI Safety Institute

The U.S. AI Safety Institute, located within the Department of Commerce at the National Institute of Standards and Technology (NIST), was established following the Biden-Harris administration’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to advance the science of AI safety and address the risks posed by advanced AI systems. It is tasked with developing the testing, evaluations, and guidelines that will help accelerate safe AI innovation here in the United States and around the world. 

This effort reveals the international concern about the use of AI.

AI extends to all areas of global commerce and politics.

Monday, September 2, 2024


At the heart of medicine lies the physician-patient dialogue, where skillful history-taking paves the way for accurate diagnosis, effective management, and enduring trust. Artificial Intelligence (AI) systems capable of diagnostic dialogue could increase accessibility, consistency, and quality of care. However, approximating clinicians' expertise is an outstanding grand challenge. Here, we introduce AMIE (Articulate Medical Intelligence Explorer), a Large Language Model (LLM) based AI system optimized for diagnostic dialogue.

AMIE uses a novel self-play-based simulated environment with automated feedback mechanisms for scaling learning across diverse disease conditions, specialties, and contexts. We designed a framework for evaluating clinically meaningful axes of performance including history-taking, diagnostic accuracy, management reasoning, communication skills, and empathy. We compared AMIE's performance to that of primary care physicians (PCPs) in a randomized, double-blind crossover study of text-based consultations with validated patient actors in the style of an Objective Structured Clinical Examination (OSCE). The study included 149 case scenarios from clinical providers in Canada, the UK, and India, 20 PCPs for comparison with AMIE, and evaluations by specialist physicians and patient actors. AMIE demonstrated greater diagnostic accuracy and superior performance on 28 of 32 axes according to specialist physicians and 24 of 26 axes according to patient actors. Our research has several limitations and should be interpreted with appropriate caution. Clinicians were limited to unfamiliar synchronous text-chat which permits large-scale LLM-patient interactions but is not representative of usual clinical practice. While further research is required before AMIE can be translated to real-world settings, the results represent a milestone toward conversational diagnostic AI.

AMIE: A research AI system for diagnostic medical reasoning and conversations


This article from Google Research reveals the investment Google is applying to health care.

Inspired by this challenge, we developed Articulate Medical Intelligence Explorer (AMIE), a research AI system based on an LLM and optimized for diagnostic reasoning and conversations. We trained and evaluated AMIE along many dimensions that reflect quality in real-world clinical consultations from the perspective of both clinicians and patients. To scale AMIE across a multitude of disease conditions, specialties, and scenarios, we developed a novel self-play-based simulated diagnostic dialogue environment with automated feedback mechanisms to enrich and accelerate its learning process. We also introduced an inference time chain-of-reasoning strategy to improve AMIE’s diagnostic accuracy and conversation quality. Finally, we tested AMIE prospectively in real examples of multi-turn dialogue by simulating consultations with trained actors.

AMIE as an aid to clinicians

In a recently released preprint, we evaluated the ability of an earlier iteration of the AMIE system to generate a DDx alone or as an aid to clinicians. Twenty (20) generalist clinicians evaluated 303 challenging, real-world medical cases sourced from the New England Journal of Medicine (NEJM) ClinicoPathologic Conferences (CPCs). Each case report was read by two clinicians randomized to one of two assistive conditions: either assistance from search engines and standard medical resources, or AMIE assistance in addition to these tools. All clinicians provided a baseline, unassisted DDx before using the respective assistive tools.

AMIE exhibited standalone performance that exceeded that of unassisted clinicians (top-10 accuracy 59.1% vs. 33.6%, p= 0.04). Comparing the two assisted study arms, the top-10 accuracy was higher for clinicians assisted by AMIE, compared to clinicians without AMIE assistance (24.6%, p<0.01) and clinicians with search (5.45%, p=0.02). Further, clinicians assisted by AMIE arrived at more comprehensive differential lists than those without AMIE assistance.
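For readers unfamiliar with the metric, "top-10 accuracy" simply counts a case as correct when the true diagnosis appears anywhere in the top ten entries of the ranked differential list. A minimal sketch, using invented case data rather than anything from the study:

```python
def top_k_accuracy(cases, k=10):
    """Fraction of cases where the true diagnosis appears in the top-k ranked list.

    `cases` is a list of (true_diagnosis, ranked_differential) pairs.
    """
    hits = sum(1 for truth, ranked in cases if truth in ranked[:k])
    return hits / len(cases)

# Invented example cases: (true diagnosis, ranked differential).
cases = [
    ("sarcoidosis", ["tuberculosis", "sarcoidosis", "lymphoma"]),
    ("lupus", ["rheumatoid arthritis", "fibromyalgia"]),
]

print(top_k_accuracy(cases, k=10))  # 0.5 -- first case is a hit, second is a miss
```

Because only membership in the top k matters, top-10 accuracy is always at least as high as top-1 accuracy, which is why studies report both the cutoff and the rate together.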



Thursday, August 29, 2024

(2) Healthcare-IT/ EHR/ HIS | Groups | LinkedIn

In the beginning there was darkness, and there was a void. Aeons of time went by, then the Electronic Health Record was invented.  With this great achievement there were also endless responsibilities, passwords, typing, learning how to look at the screen and patient at the same time without developing vertigo, seeing fewer patients, and often taking work home to catch up on data input.  

Feeding the EHR was a daunting task. It consumed time as well as electrons. 

Eventually there was HIPAA, and interoperability, as well as blue screens, power failures, malware, viruses and ransom notes.

Software vendors would come and go. Software would become obsolete requiring updates and sometimes replacement Electronic Health Records.

New systems became available, but EHR migration is a daunting task. Muscle memory from the old system has to be retrained for the new one. Systems were designed to reduce clicks and mouse movements, yet an epidemic of carpal tunnel syndrome arose among providers, creating a wave of disability and workers' compensation claims.

Each year, health systems invest millions of dollars in their Electronic Health Records (EHR) and applications to leverage their full potential. A recent Becker’s Health IT article states that over the next 10 years, EHRs will become more streamlined as companies like Epic, Oracle Cerner, and MEDITECH continue to refine their technology to align with provider and patient demands.

As your team prepares your systems and processes for the next generation of digital transformation in healthcare, there’s a good chance that you’re considering (or may even have decided on) an EHR transition. Transitioning from one EHR to another is a complex change for health organizations and is often described by our clients as extremely expensive, labor-intensive, and time-consuming.

Transitioning to a new EHR system often meets resistance due to fear of the unknown or concerns about disruption to established workflows. Comprehensive training addresses these concerns by familiarizing users with the new system, its benefits, and how it integrates into existing practices.  There are organizations that specialize in transitioning to or changing an EHR. 

                                    Best Practices for EHR Transition Training

Early Engagement and Communication

Start training initiatives well in advance of the EHR Go-Live date. Communicate the benefits of the new system and the training schedule clearly to build anticipation and encourage buy-in from staff.

Tailored Training Plans

It is important to recognize that different roles within the healthcare organization have varying needs regarding EHR functionality. Developing customized training plans that cater to the specific responsibilities and workflows of clinical staff, administrative personnel, and support teams, is an essential step toward EHR optimization and adoption.

Engage Super Users AND End Users

While it is important to identify and train a group of super users within the organization who possess advanced knowledge of the EHR system, it is just as crucial to engage your end users. Super users can serve as internal champions and mentors, providing support to their peers during and after the transition period. End users are just as instrumental, as they are often the ones who utilize the EHR the most. By engaging and training both types of users, you can ensure a more streamlined approach, resulting in a unified experience for your patients.

Hands-on Learning and Multimodal Approach

Incorporating hands-on training sessions where users can interact directly with the EHR system enables them to apply theoretical knowledge to practical exercises and simulations. Our Epic trainers are classroom certified, provide At-the-Elbow (ATE) support and offer personalization labs to ensure our clients’ IT teams, clinicians, and staff are confident in their ability to utilize the new system.

It is also important to recognize that individuals have different learning preferences. When it comes to selecting a Go-Live training partner, select a team that offers a mix of training modalities such as in-person workshops, webinars, e-learning modules, and printed reference materials. This ensures accessibility and accommodates diverse learning styles.

Feedback Mechanisms

Establish channels for continuous feedback from trainees regarding their learning experience and the usability of the EHR system. Use this feedback to refine training materials, adjust content based on user needs, and address any challenges promptly.

Continuous Learning Opportunities

EHR systems evolve with updates and new features. Be sure to ask your Go-Live partner about ongoing training opportunities to keep users informed about system enhancements, workflow optimizations, and best practices. This can include refresher courses, advanced training modules, or lunch-and-learn sessions.

As your healthcare organization prepares for the future of EHR systems, the investment in comprehensive Go-Live training emerges as a critical strategy for success. Training plays a pivotal role in empowering healthcare providers and staff to maximize the potential of new EHR technologies, enhancing clinical workflows, and ultimately improving patient outcomes. As your team navigates the evolving EHR landscape, consider HCTec as your next Go-Live project partner.

Artificial intelligence has great potential to reduce these workloads: voice recognition and automated scribes can insert the real-time provider-patient conversation into the EHR history; automated coding can derive procedural codes from the length and complexity of the examination and summarize the encounter; and ICD codes can be assigned in accordance with the diagnosis. It may also provide predictive modeling.
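To make the coding step concrete, here is a deliberately tiny sketch that maps transcript keywords to ICD-10 codes. The three codes shown are real ICD-10 codes, but the keyword table and matching logic are invented for illustration; production systems use trained NLP models over the full code set, not string lookup.

```python
# Toy ICD-10 suggestion from an encounter transcript.
# Keyword table is an illustrative subset, not a real coding engine.

ICD10_KEYWORDS = {
    "hypertension": "I10",       # Essential (primary) hypertension
    "type 2 diabetes": "E11.9",  # Type 2 diabetes without complications
    "upper respiratory infection": "J06.9",
}

def suggest_codes(transcript):
    """Return sorted ICD-10 codes whose keywords appear in the transcript."""
    text = transcript.lower()
    return sorted({code for kw, code in ICD10_KEYWORDS.items() if kw in text})

transcript = (
    "Patient reports good adherence. Blood pressure remains elevated; "
    "we reviewed her hypertension medication and her type 2 diabetes diet."
)
print(suggest_codes(transcript))  # ['E11.9', 'I10']
```

Even in a real system, the suggested codes would go to the clinician for confirmation rather than straight into the billing record, which is where the trust and regulatory questions below come in.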

The veracity and trustworthiness of AI will evolve over time through vetting and approval by regulatory agencies such as the FDA, which has the authority to regulate certain AI-enabled software and EHR functions as medical devices.

Studies on patient-provider trust in the context of AI focus on several key themes:

Perceptions of AI: Research indicates that patients often have mixed feelings about AI in healthcare. Trust can be influenced by how well patients understand AI technology and its benefits.
Transparency: Studies show that clear communication about how AI systems work and the rationale behind their recommendations can enhance trust. Patients are more likely to trust AI when they feel informed.
Provider Role: The relationship between healthcare providers and patients remains crucial. Patients tend to trust AI more when their providers endorse and explain the technology.
Outcomes and Accuracy: Trust is also linked to the perceived accuracy and effectiveness of AI tools. Positive outcomes from AI-assisted care can bolster trust.
Ethical Considerations: Ethical concerns regarding data privacy and bias in AI algorithms can hinder trust. Patients want assurances that their data is secure and that AI systems are fair.
Demographic Factors: Trust levels can vary among different demographic groups, influenced by factors like age, education, and prior experiences with technology in healthcare.
Longitudinal Studies: Ongoing research is needed to understand how trust evolves over time as AI technologies become more integrated into healthcare practices.
Overall, fostering trust in AI requires a multifaceted approach, addressing both technological and interpersonal aspects of care.