The digital health space refers to the integration of technology and health care services to improve the overall quality of health care delivery. It encompasses a wide range of innovative and emerging technologies such as wearables, telehealth, artificial intelligence, mobile health, and electronic health records (EHRs). The digital health space offers numerous benefits, such as improved patient outcomes, increased access to health care, reduced costs, and improved communication and collaboration between patients and health care providers.

For example, patients can now monitor vital signs such as blood pressure and glucose levels from home using wearable devices and share the data with their doctors in real time. Telehealth technology allows patients to consult with their health care providers remotely without having to travel to the hospital, making health care more accessible, particularly in remote or rural areas. Artificial intelligence can be used to analyze vast amounts of patient data to identify patterns, predict outcomes, and provide personalized treatment recommendations. Overall, the digital health space is rapidly evolving, and the integration of technology in health care will continue to reshape how care is delivered.
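To make the remote-monitoring example concrete, here is a minimal Python sketch of a wearable blood-pressure reading being pushed to an EHR that exposes a FHIR API. The endpoint URL, patient ID, and token are hypothetical placeholders; a real integration would depend on the specific EHR vendor's FHIR implementation.

    import requests

    # Hypothetical FHIR endpoint and credentials -- placeholders, not a real service.
    FHIR_BASE = "https://ehr.example.com/fhir"
    TOKEN = "replace-with-oauth-token"

    def post_blood_pressure(patient_id: str, systolic: int, diastolic: int) -> dict:
        """Send one home blood-pressure reading to the EHR as a FHIR Observation."""
        observation = {
            "resourceType": "Observation",
            "status": "final",
            # LOINC 85354-9 is the standard code for a blood pressure panel.
            "code": {"coding": [{"system": "http://loinc.org", "code": "85354-9"}]},
            "subject": {"reference": f"Patient/{patient_id}"},
            "component": [
                {"code": {"coding": [{"system": "http://loinc.org", "code": "8480-6"}]},  # systolic
                 "valueQuantity": {"value": systolic, "unit": "mmHg"}},
                {"code": {"coding": [{"system": "http://loinc.org", "code": "8462-4"}]},  # diastolic
                 "valueQuantity": {"value": diastolic, "unit": "mmHg"}},
            ],
        }
        resp = requests.post(f"{FHIR_BASE}/Observation", json=observation,
                             headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
        resp.raise_for_status()
        return resp.json()  # server echoes the created resource, including its id

    # Example: a home cuff reading of 128/82 mmHg for a hypothetical patient record.
    # post_blood_pressure("example-patient-1", 128, 82)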

Monday, June 17, 2024

Are you ready to use LLMs or ChatGPT?

4 Ways Generative AI Will Impact CISOs and Their Teams

Overview

Impacts

A proliferation of overoptimistic generative AI (GenAI) announcements in the security and risk management markets could still drive promising improvements in productivity and accuracy for security teams, but it could also lead to waste and disappointment.
Consumption of GenAI applications, such as large language models (LLMs), from business experiments and unmanaged, ad hoc employee adoption creates new attack surfaces and risks to individual privacy, sensitive data, and organizational intellectual property (IP).
Many businesses are rushing to capitalize on their IP and develop their own GenAI applications, creating new requirements for AI application security.
Attackers will use GenAI. They have started by creating more seemingly authentic content and phishing lures, and by impersonating humans at scale. Uncertainty about how successfully they can leverage GenAI for more sophisticated attacks will require more flexible cybersecurity roadmaps.

Caveat Emptor (Let the Buyer Beware)

Recommendations

To address the various impacts of generative AI on their organizations’ security programs, chief information security officers (CISOs) and their teams must:
Initiate experiments with “generative cybersecurity AI,” starting with chat assistants for security operations centers (SOCs) and application security (a minimal sketch of such an assistant follows this list).
Work with organizational counterparts who have active interests in GenAI, such as those in legal and compliance, and lines of business to formulate user policies, training, and guidance. This will help minimize unsanctioned uses of GenAI and reduce privacy and copyright infringement risks.
Apply the AI trust, risk, and security management (AI TRiSM) framework when developing new first-party, or consuming new third-party, applications leveraging LLMs and GenAI.
Reinforce methods for assessing exposure to unpredictable threats, and measure changes in the efficacy of their controls, as they cannot predict whether or how malicious actors might use GenAI.
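For teams starting such an experiment, the sketch below shows one way a SOC chat assistant could be wired up: a single alert is summarized and triaged by an LLM behind a narrowly scoped system prompt. The query_llm function is a stand-in for whatever model API your organization has sanctioned; none of these names come from the Gartner research.

    import json

    SYSTEM_PROMPT = (
        "You are a SOC triage assistant. Given one alert as JSON, return a "
        "two-sentence summary, a severity guess (low/medium/high), and the "
        "single most useful next investigation step. Do not invent log data."
    )

    def query_llm(system: str, user: str) -> str:
        """Stand-in for a call to the organization's approved LLM endpoint."""
        raise NotImplementedError("wire this to your sanctioned model API")

    def triage_alert(alert: dict) -> str:
        # Limit the model's context to the single alert under review.
        return query_llm(SYSTEM_PROMPT, json.dumps(alert, indent=2))

    alert = {
        "rule": "impossible_travel",
        "user": "jdoe",
        "src_ips": ["203.0.113.7", "198.51.100.42"],  # documentation-range IPs
        "window_minutes": 20,
    }
    # triage_alert(alert)  # run once query_llm is wired up

Benchmarking such an assistant against the SOC's existing triage process (time to summarize, accuracy of severity guesses) is what turns this from a demo into an experiment.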

Attackers Will Use Generative AI

CISOs and their teams must approach this threat without a strong fact base or direct proof of the full impact of the adversarial use of GenAI. In How to Respond to the 2023 Cyberthreat Landscape, Gartner categorized “attackers using AI” as an uncertain threat. Uncertain threats can often be real but lack a direct and obvious immediate response from targeted enterprises.



Assess third-party security products for non-AI-related controls and AI-specific security functions.
Test emerging products for potential risks like misinformation, biases, and illegitimate information.
Implement temporary manual review processes if needed.
Deploy automated actions gradually with accurate tracking metrics.
Ensure automated actions can be easily reverted (one generic gating-and-rollback pattern is sketched after this list).
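One way to make the last three points concrete is to gate every automated action behind a small rollout percentage, record each decision in an audit log, and keep a one-call reversal alongside every action. This is a generic pattern sketch, not a mechanism from the Gartner research; the quarantine action is a placeholder.

    import random
    import time

    ROLLOUT_PCT = 10  # start small; raise only as tracked accuracy holds up
    audit_log = []    # in production this would be durable, append-only storage

    def quarantine_host(host: str) -> None:
        print(f"quarantining {host}")   # placeholder for the real control

    def release_host(host: str) -> None:
        print(f"releasing {host}")      # the paired, easy reversal

    def maybe_automate(host: str) -> None:
        """Apply the automated action to a small, tracked slice of cases."""
        automated = random.randrange(100) < ROLLOUT_PCT
        audit_log.append({"ts": time.time(), "host": host, "automated": automated})
        if automated:
            quarantine_host(host)
        else:
            print(f"flagged {host} for manual review")  # temporary manual process

    def revert(host: str) -> None:
        """Every automated action has a recorded, one-call reversal."""
        release_host(host)
        audit_log.append({"ts": time.time(), "host": host, "automated": "reverted"})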
AI systems and LLMs remain susceptible to attack, so organizations with strict privacy or security requirements should be cautious about early adoption.
It is highly advisable to engage a cybersecurity expert before implementing any AI application that handles sensitive or proprietary data.

Include LLM model transparency in third-party application evaluations.
Consider the advantages of private hosting for LLMs while assessing operational challenges.
Address security challenges specific to AI applications, including explainability, ModelOps, AI application security, and privacy.
Establish corporate policies and user guidelines to minimize risks associated with generative AI applications.
Prioritize security resource involvement in critical areas to mitigate threats.
Evaluate and adapt security infrastructure to address emerging threats posed by GenAI.
Run experiments with new security features and benchmark generative cybersecurity AI against other approaches.
Determine corporate feedback mechanisms to enhance the efficacy of AI applications.
Ensure transparency in data processing and supply chain dependencies.
Choose fine-tuned models aligned with specific security use cases.
Explore prompt engineering and API integrations for enhanced security controls (an illustrative input-guardrail sketch follows this list).
Prefer private hosting options for added security and privacy measures.
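As an illustration of the prompt-engineering point above, the sketch below layers simple input checks in front of an LLM call so that user text cannot silently override the system instructions. The pattern list and wrapper are illustrative assumptions, not a vetted control, and in practice would be paired with output filtering and human review.

    import re

    SYSTEM_PROMPT = "Answer only questions about this organization's security policy."

    # Crude screens for common prompt-injection phrasings.
    INJECTION_PATTERNS = [
        r"ignore (all |previous |prior )*instructions",
        r"you are now",
        r"system prompt",
    ]

    def is_suspicious(user_input: str) -> bool:
        return any(re.search(p, user_input, re.IGNORECASE) for p in INJECTION_PATTERNS)

    def call_model(system: str, user: str) -> str:
        raise NotImplementedError("wire to an approved LLM endpoint")

    def guarded_query(user_input: str) -> str:
        if is_suspicious(user_input):
            return "Request blocked for manual review."
        return call_model(system=SYSTEM_PROMPT, user=user_input)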
These points summarize the essential considerations and recommendations provided in the Gartner Reprint for the effective adoption and management of generative AI technologies in cybersecurity.
How can organizations effectively assess third-party security products for both general controls and AI-specific security functions in the context of generative AI technologies?
What are some key security challenges specific to AI applications, such as explainability, ModelOps, AI application security, and privacy, and how can these challenges be effectively addressed in the cybersecurity domain?
The United States Computer Emergency Readiness Team (CERT) defines a malicious insider as one of an organization’s current or former employees, contractors, or trusted business partners who misuse their authorized access to critical assets in a manner that negatively affects the organization. Malicious insiders are harder to detect than outside attackers, as they have legitimate access to an organization’s data and spend most of their time performing regular work duties. Thus, detecting malicious insider attacks takes a long time. The 2020 Cost of Insider Threat [PDF] Report by the Ponemon Institute states that it takes an average of 77 days to detect and contain an insider-related security incident.

Origin: https://www.ekransystem.com/en/blog/portrait-malicious-insiders
© Ekran System
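Because malicious insiders operate with legitimate access, detection generally relies on behavioral baselines rather than signatures. The toy Python sketch below flags logins outside an employee's established working hours; real user and entity behavior analytics (UEBA) tools model far more signals than this.

    from collections import defaultdict
    from datetime import datetime

    def build_baseline(events: list[tuple[str, datetime]]) -> dict[str, set[int]]:
        """Learn each user's typical login hours from historical events."""
        hours = defaultdict(set)
        for user, ts in events:
            hours[user].add(ts.hour)
        return hours

    def is_anomalous(user: str, ts: datetime, baseline: dict[str, set[int]]) -> bool:
        """Flag a login whose hour falls outside the user's baseline."""
        return ts.hour not in baseline.get(user, set())

    history = [
        ("jdoe", datetime(2024, 5, 1, 9)),
        ("jdoe", datetime(2024, 5, 2, 10)),
        ("jdoe", datetime(2024, 5, 3, 14)),
    ]
    baseline = build_baseline(history)
    print(is_anomalous("jdoe", datetime(2024, 6, 1, 3), baseline))  # True: a 3 a.m. login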


Monday, June 10, 2024

Telepractice information | Arizona Board of Behavioral Health Examiners


With the efforts to reduce community spread of COVID-19, many practitioners are seeking guidance on telepractice. Continuity of care is vital to mental health clients, and in this new climate, we encourage our licensees to become competent in telehealth delivery to continue to serve those in need. There are many resources to assist behavioral health professionals in providing technology-assisted therapy. The Board does not restrict which license types (temporary, associate-level, or independent-level licensees) can provide telepractice; however, there may be limitations if providers are working through third-party reimbursement.

Saturday, June 8, 2024

Did you know there is a Global Digital Health Monitor?


It underwent a major review in 2022, in which the team surveyed the governments of more than 50 countries about their digital health efforts.

The map now includes year-on-year performance monitoring, country visualizations, regional visualizations, and country-to-country comparisons.

The team analyzed questions like the ones below in detail:

- Does the country have a separate department/agency/national working group for digital health?

- Is there a national plan specific to emerging technologies (e.g., AI, Wearables, Blockchain, IoT) to support public health goals?

- Is there a law to protect individual privacy, governing ownership, consent, access, and sharing of individually identifiable digital health data?

- Is digital health part of the curriculum for health and health-related support professionals in training, in general?

- Are current country digital health initiatives contributing to public health reporting and decision-making?

Check it out! https://lnkd.in/eNRcCEFj