The digital health space refers to the integration of technology and health care services to improve the overall quality of health care delivery. It encompasses a wide range of innovative and emerging technologies such as wearables, telehealth, artificial intelligence, mobile health, and electronic health records (EHRs).

The digital health space offers numerous benefits, including improved patient outcomes, increased access to health care, reduced costs, and better communication and collaboration between patients and health care providers. For example, patients can now monitor vital signs such as blood pressure and glucose levels from home using wearable devices and share the data with their doctors in real time. Telehealth allows patients to consult with their health care providers remotely, without having to travel to the hospital, making health care more accessible, particularly in remote or rural areas. Artificial intelligence can be used to analyze vast amounts of patient data to identify patterns, predict outcomes, and provide personalized treatment recommendations. Overall, the digital health space is rapidly evolving, and the integration of technology into health care is reshaping how care is delivered.

Wednesday, December 18, 2024

 https://garymarklevin.substack.com/p/amazon-sued-for-one-medical-malpractice


Healthcare is scaling and failing as corporations’ greed adds to rising healthcare costs.

The lawsuit alleges Amazon’s health clinic was “reckless and negligent” in its care of a 45-year-old California man who died after seeking help via telemedicine.

By Caroline O'Donovan


One week before Christmas 2023, Philip Tong logged onto a video consultation with healthcare clinic Amazon One Medical and said that he was short of breath, coughing up blood, and that his feet were turning blue. The provider told him to buy an inhaler, according to an October lawsuit.

 Hours later, Tong collapsed in an emergency room in Oakland, California, according to a complaint filed against the hospital and One Medical. He died the same day.

Whether this case will rise to the level of a formal court adjudication remains to be seen. Malpractice liability is usually judged against the usual and customary standard of care in the community.

MinuteClinics at CVS and other pharmacies have risen and fallen without building a substantial following; most health care providers regard them as ‘bottom dwellers,’ apart from some nurse practitioners looking for employment ‘elsewhere.’

This is an important case for the healthcare marketplace; should the plaintiff prevail, it could set a precedent and discourage further investment in this type of healthcare delivery service.



Friday, December 13, 2024

UHG ignores ethical considerations in using AI to determine prior authorization denials

 


The nation’s largest health insurance company pressured its medical staff to cut off payments for seriously ill patients in lockstep with a computer algorithm’s calculations, denying rehabilitation care for older and disabled Americans as profits soared, a STAT investigation has found.

UnitedHealth Group has repeatedly said its algorithm, which predicts how long patients will need to stay in rehab, is merely a guidepost for their recoveries. But inside the company, managers delivered a much different message: that the algorithm was to be followed precisely so payments could be cut off by the date it predicted.


Denied by AI: How Medicare Advantage plans use algorithms to cut off care for seniors in need

Internal documents show that a UnitedHealth subsidiary called NaviHealth set a target for 2023 to keep the rehab stays of patients in Medicare Advantage plans within 1% of the days projected by the algorithm. Former employees said missing the target for patients under their watch meant exposing themselves to discipline, including possible termination, regardless of whether the additional days were justified under Medicare coverage rules.
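
To make the reported target concrete, here is a minimal sketch of how an aggregate "actual versus algorithm-projected days" variance might be measured against a 1% threshold. This is purely illustrative; the function, field names, and data are invented and do not reflect NaviHealth's actual system.

```python
# Hypothetical illustration of a "within 1% of projected days" target.
# The data and names are invented; this is not NaviHealth's system.

def variance_from_projection(cases):
    """cases: list of (actual_days, projected_days) tuples."""
    total_actual = sum(actual for actual, _ in cases)
    total_projected = sum(projected for _, projected in cases)
    return (total_actual - total_projected) / total_projected

cases = [(18, 17), (21, 20), (14, 15), (30, 28)]  # invented example data
variance = variance_from_projection(cases)
within_target = abs(variance) <= 0.01  # the reported 1% target

print(f"Variance vs. projection: {variance:+.1%}; within 1% target: {within_target}")
```

The concern raised by the investigation is that tying employee performance to such a metric treats the algorithm's projection as a ceiling on care rather than the "guidepost" the company describes.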


UnitedHealthcare to Limit Treatment for Autism

Ethical and moral guardrails lag behind the use of artificial intelligence in health care. The United States trails other jurisdictions, notably the European Union, in regulating this segment of information technology, specifically the requirement for human oversight.

The European Union (EU) has taken significant steps to regulate artificial intelligence (AI) with a focus on ethical considerations. Here are some key points regarding the EU regulations on AI ethics:

1. **AI Act:** The EU's proposed AI Act aims to create a legal framework for AI technologies, ensuring they are safe and respect fundamental rights. Key aspects include:

- **Risk-based classification:** AI systems are categorized by risk levels: unacceptable, high, limited, and minimal. Unacceptable risk systems (like social scoring by governments) are banned.

- **High-risk AI systems:** These are subject to strict requirements, including risk assessments, transparency, and accountability measures.

2. **Ethical Guidelines:** The EU has published guidelines emphasizing the following principles:

- **Human agency and oversight:** AI should augment human abilities and not undermine them.

- **Technical robustness and safety:** Systems should be reliable and secure.

- **Privacy and data governance:** AI should protect personal data and privacy.

- **Transparency:** AI operations should be understandable and traceable.

- **Fairness:** Avoid discrimination and ensure inclusivity.

- **Environmental and societal well-being:** Promote sustainability and societal benefits.

The proposed regulations recommend setting up national supervisory bodies to monitor AI compliance and encourage best practices.

The EU's approach to AI ethics emphasizes a balanced framework that promotes innovation while safeguarding fundamental rights and societal values. These regulations are still evolving, and ongoing discussions will shape their final form.
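
As a rough illustration of the risk-based classification described above, the sketch below models the four tiers as a simple lookup. The tier assignments for the example systems are my own illustrative guesses, not legal determinations under the Act.

```python
# Simplified, illustrative model of the AI Act's four risk tiers.
# Descriptions are paraphrased; example tier assignments are guesses, not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g., government social scoring)"
    HIGH = "permitted with strict requirements: risk assessment, transparency, accountability"
    LIMITED = "permitted with transparency obligations"
    MINIMAL = "permitted with no additional obligations"

# Hypothetical examples mapped to tiers, for illustration only.
examples = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "AI support for prior-authorization decisions": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in examples.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```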



Monday, December 9, 2024

New AI solves math and science problems faster than supercomputers

The new AI framework, DIMON, is short for Diffeomorphic Mapping Operator Learning.


Engineers design safer cars, more resilient spacecraft, and stronger bridges by solving complex mathematical problems that describe the underlying physical processes. Similarly, doctors use mathematical models to predict heart problems with greater accuracy.

These problems, called partial differential equations, are the backbone of engineering and science. But solving them can take days, even weeks, especially for complex shapes.
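
For readers unfamiliar with the term, a canonical example of a partial differential equation (illustrative only, not the specific equations used in this work) is the diffusion equation, which relates how a quantity u changes in time to how it varies across space:

```latex
% The diffusion (heat) equation for a quantity u(x, t) on a domain \Omega:
\frac{\partial u}{\partial t} = \nabla \cdot \big( D \, \nabla u \big)
\quad \text{in } \Omega
```

Cardiac electrophysiology models of the kind used for digital heart twins take a related reaction-diffusion form, in which an added nonlinear term describes the local electrical activity of heart tissue; solving such equations over a patient-specific heart geometry is what traditionally demands supercomputer time.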

Now, Johns Hopkins University researchers have created a new AI model called DIMON. It can solve these complex equations thousands of times faster, right on your personal computer.

“While the motivation to develop it came from our own work, this is a solution that we think will have generally a massive impact on various fields of engineering because it’s very generic and scalable,” said Natalia Trayanova, professor of biomedical engineering and medicine at Johns Hopkins University.

Tested on heart digital twins
Partial differential equations are common mathematical problems. These equations help convert real-world scenarios into mathematical models to predict future changes in objects or environments.

However, solving these big math problems has typically been a job for supercomputers. That is changing with the arrival of artificial intelligence.


The team tested DIMON on more than 1,000 digital heart models of real patients.

Interestingly, the model accurately predicted electrical signal pathways in diverse heart structures.

In this demonstration, the researchers used partial differential equations to investigate cardiac arrhythmia, which occurs when the heart beats irregularly because of disordered electrical signals.

Using heart digital twin models, researchers can predict the risk of this life-threatening condition and suggest appropriate treatments.

“We’re bringing novel technology into the clinic, but a lot of our solutions are so slow it takes us about a week from when we scan a patient’s heart and solve the partial differential equations to predict if the patient is at high risk for sudden cardiac death and what is the best treatment plan,” explained Trayanova, who directs the Johns Hopkins Alliance for Cardiovascular Diagnostic and Treatment Innovation. 

The new AI significantly speeds up heart disease predictions, reducing calculation time from hours to about 30 seconds, and it runs on an ordinary desktop computer, making it more practical for everyday clinical use.


However, don't expect your cardiologist to use this now. It is in early theoretical development.  

DIMON is not for the weak of heart or mind. “For each problem, DIMON first solves the partial differential equations on a single shape and then maps the solution to multiple new shapes. This shape-shifting ability highlights its tremendous versatility,” said Minglang Yin, a Johns Hopkins Biomedical Engineering Postdoctoral Fellow who developed the platform.
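
In rough mathematical terms, and only as a conceptual sketch of the quote above rather than the authors' exact formulation, the idea is to learn the solution operator once on a fixed reference shape and transport it to new shapes through a smooth, invertible map:

```latex
% Conceptual sketch (not the paper's exact formulation).
% Each patient-specific domain \Omega_i is related to a reference domain \Omega_0
% by a diffeomorphism (a smooth, invertible map) \varphi_i : \Omega_0 \to \Omega_i.
% A neural operator G_\theta, trained on the reference domain, maps the shape
% information \varphi_i and the problem data f_i to a solution there, which is
% then pushed forward to the new shape:
u_i(x) \;\approx\; G_\theta\big(\varphi_i,\, f_i\big)\big(\varphi_i^{-1}(x)\big),
\qquad x \in \Omega_i
```

Because the expensive training happens once on the reference shape, a prediction for a new patient geometry reduces to evaluating the learned operator, which is what makes the reported speedup on an ordinary computer plausible.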

Gary M. Levin M.D.




Sunday, December 8, 2024

Physician gaslighting looks like this


As physicians, we are often collectively gaslit—made to believe that somehow, we are responsible for system failures. Many physicians are brainwashed to “drink the Kool-Aid” and accept phrases like, “This is the way things have always been done.” We are made to feel inhuman when we advocate for ourselves as if we are somehow to blame. The system uses terms like “unprofessional,” “unable to cope with the rigors of being a physician,” “not taking personal responsibility for themselves,” and the list goes on.

Sometimes, the people who gaslight us are fellow physicians, attendings, and system administrators. The term “eating their young” doesn’t just apply to other professions—it plagues medicine. The very people who are supposed to support and mentor us can be the ones who create the most angst. During my residency, one attending stands out in my memory—etched there forever. This attending had a habit of publicly shaming me in front of patients, doctors, and administrators; this person was selectively “nice” to some residents and frankly cruel to others, and that behavior left a lasting impact. To this day, I haven’t spoken a word to this attending, and when I see them, I act like I don’t know them.

So, I’m speaking my truth—I’m sick and tired of hearing about fellow physicians who have died tragically by suicide. These deaths were preventable, and it breaks my heart. Physicians are dying by suicide at alarming rates, with studies showing that approximately 300 to 400 U.S. physicians die by suicide each year—a rate twice as high as the general population. Let’s stop gaslighting each other. Let’s not accept the toxic practices or culture in medicine driving physicians away from clinical practice. About 49 percent of physicians report experiencing burnout, and 40 percent consider leaving medicine within the next two years.

What exactly does gaslighting mean?

Here are some signs that you might be experiencing gaslighting as a physician:

Physical challenges are downplayed. We are not given time to attend our health care appointments and are made to feel guilty for taking time away from clinical duties to care for ourselves.

Excessive criticism. Receiving disproportionate criticism for mistakes while positive contributions are overlooked can lead to self-doubt. This is incredibly challenging during residency, where residents are still learning, and it should be a supportive environment.

Ignoring feedback. As physicians on the front lines, we see firsthand how policies can negatively impact patient care. When physicians provide feedback about systemic issues and their concerns are dismissed, it perpetuates a negative cycle.

Tokenism in leadership. Women and minority physicians may feel tokenized in leadership roles, undermining their input and authority. For instance, women comprise over one-third of the physician workforce, but only about 18 percent hold senior leadership positions.

Lack of support for mental health. Many physicians would say that one of the last places they would turn for support is their leaders or administrators. Frankly, it’s hard to trust people who are often a big part of your stress—they act as judge, jury, and executioner. Any perceived “weakness” could be used against you indefinitely in your career.

Let us do better

Which side are we on? The side of good and humanity? Or, frankly, the side of being “not a nice person?” (I was going to compare it to a body part, but I decided not to—you get my drift.)

What does it take to be a decent, kind human being? We should encourage and support our young—our future health care professionals. Our world depends on the “healer” (the physician) being healthy.

These not-so-pleasant individuals and system policies made me consider quitting at various points in my career, but I’m thankful I could pull through. Everyone’s situation is different; for some, it may seem hopeless—and that is a tragedy.

Let us do better! Let us lead by example and be kind to each other. If that doesn’t appeal to you, perhaps medicine isn’t for you.

Tomi Mitchell is a board-certified family physician and certified health and wellness coach with extensive experience in clinical practice and holistic well-being. She is also an acclaimed international keynote speaker and a passionate advocate for mental health and physician well-being. She leverages over a decade of private practice experience to drive meaningful change.

Dr. Mitchell is the founder of Holistic Wellness Strategies, where she empowers individuals through comprehensive, evidence-based approaches to well-being. Her career is dedicated to transforming lives by addressing personal challenges and enhancing relationships with practical, holistic strategies.