The digital health space refers to the integration of technology and health care services to improve the overall quality of health care delivery. It encompasses a wide range of innovative and emerging technologies such as wearables, telehealth, artificial intelligence, mobile health, and electronic health records (EHRs). The digital health space offers numerous benefits such as improved patient outcomes, increased access to health care, reduced costs, and improved communication and collaboration between patients and health care providers. For example, patients can now monitor their vital signs such as blood pressure and glucose levels from home using wearable devices and share the data with their doctors in real-time. Telehealth technology allows patients to consult with their health care providers remotely without having to travel to the hospital, making health care more accessible, particularly in remote or rural areas. Artificial intelligence can be used to analyze vast amounts of patient data to identify patterns, predict outcomes, and provide personalized treatment recommendations. Overall, the digital health space is rapidly evolving, and the integration of technology in health

Tuesday, September 3, 2024

OpenAI, Anthropic Agree to Work With US Institute on Safety Testing

OpenAI, Anthropic Agree to Work With US Institute on Safety Testing


Artificial intelligence startups OpenAI and Anthropic
have agreed to help the U.S. government with AI safety testing by allowing early access to new models. The goal is to enable the AI Safety Institute, part of the Commerce Department's National Institute of Standards and Technology, to assess risks and head off potential issues before the models are released, working closely with counterparts in the U.K. The institute's director called it an "important milestone," as governments try to put more guardrails in place while still allowing companies room to innovate in the fast-moving sector.

  • At the state level, California legislators overwhelmingly approved an AI safety bill, which now heads to Gov. Gavin Newsom for final consideration.
  • If enacted, the "fiercely debated" bill would require tech companies to safety-test AI programs before release and empower the attorney general to sue for any major harm caused by the technologies.
California is the leading state in the effort to regulate AI.  The EU is far ahead in the process. There are 27 Member states of the European Union.  They are. 

Austria
Belgium
Bulgaria
Croatia
Cyprus
Czech Republic
Denmark
Estonia
Finland
France
Germany
Greece
Hungary
Ireland
Italy
Latvia
Lithuania
Luxembourg
Malta
Netherlands
Poland
Portugal
Romania
Slovakia
Slovenia
Spain
Sweden

The United Kingdom no longer belongs to the E.U. 
 
As of 2023, the estimated population of the European Union (EU) is around 447 million people.

This represents many people in the EU who may be using  AI.

As of 2023, the status of AI regulation in the USA is evolving and characterized by several key points:

No Comprehensive Federal Law: Unlike the EU's proposed AI Act, the U.S. still needs a comprehensive federal law specifically governing AI. However, various sector-specific regulations apply.
Executive Orders and Guidance: The Biden administration has issued executive orders and guidelines aimed at promoting safe and responsible AI development. This includes principles for ethical AI use and initiatives to foster innovation while addressing risks.
Agency Actions: Various federal agencies, such as the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST), are developing frameworks and guidelines for AI use, focusing on issues like transparency, accountability, and bias.
State Regulations: Some states have begun to implement their own regulations concerning AI, particularly around issues like facial recognition and data privacy.
Industry Collaboration: There are ongoing discussions between government entities, industry leaders, and civil society to establish best practices and standards for AI governance.
Public Input: The U.S. government has sought public input on AI regulation, indicating a desire for a collaborative approach to developing policies.

As of 2023, several U.S. states have started to implement regulations or propose legislation concerning AI. Here are some notable examples:

California:
California has laws addressing data privacy (e.g., CCPA and CPRA) that impact AI systems, especially regarding data collection and usage.

Illinois:
The Illinois Biometric Information Privacy Act (BIPA) regulates the use of biometric data, which affects AI technologies that utilize facial recognition.

New York:
New York City has enacted regulations concerning the use of AI in hiring practices, requiring transparency and fairness in automated decision-making.

Virginia:
Virginia has proposed legislation focusing on the responsible use of AI in government and public services.

Washington:
Washington state has considered various bills aimed at regulating facial recognition technology and ensuring accountability in AI systems.

Massachusetts:
Massachusetts has explored legislation focused on the ethical use of AI and the implications of AI technologies in different sectors.
These regulations often focus on specific applications of AI, such as facial recognition, data privacy, and transparency in automated decision-making. The landscape is rapidly changing, and more states may introduce AI-related legislation in the future.

U.S. AI Safety Institute Signs Agreements Regarding AI Safety Research, Testing, and Evaluation With Anthropic and OpenAI


These first-of-their-kind agreements between the U.S. government and industry will help advance safe and trustworthy AI innovation for all.


This represents a public-private concern about the hazards of the misuse of AI. The U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced agreements that enable formal collaboration on AI safety research, testing, and evaluation with both Anthropic and OpenAI.

Each company’s Memorandum of Understanding establishes the framework for the U.S. AI Safety Institute to receive access to major new models from each company before and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks. 

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the U.S. AI Safety Institute. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

Additionally, the U.S. AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the U.K. AI Safety Institute. 

The U.S. AI Safety Institute builds on NIST’s more than 120-year legacy of advancing measurement science, technology, standards, and related tools. Evaluations under these agreements will further NIST’s work on AI by facilitating deep collaboration and exploratory research on advanced AI systems across a range of risk areas.

Evaluations conducted under these agreements will help advance the safe, secure, and trustworthy development and use of AI by building on the Biden-Harris administration’s Executive Order on AI and the voluntary commitments made to the administration by leading AI model developers.

About the U.S. AI Safety Institute

The U.S. AI Safety Institute, located within the Department of Commerce at the National Institute of Standards and Technology (NIST), was established following the Biden-Harris administration’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to advance the science of AI safety and address the risks posed by advanced AI systems. It is tasked with developing the testing, evaluations, and guidelines that will help accelerate safe AI innovation here in the United States and around the world. 

This effort reveals the international concern about the use of AI.

AI extends to all areas of global commerce and politics.

No comments:

Post a Comment