OpenAI, Anthropic Agree to Work With US Institute on Safety Testing
- At the state level, California legislators overwhelmingly approved an AI safety bill, which now heads to Gov. Gavin Newsom for final consideration.
- If enacted, the "fiercely debated" bill would require tech companies to safety-test AI programs before release and empower the attorney general to sue for any major harm caused by the technologies.
The 27 member states of the European Union are:
Austria
Belgium
Bulgaria
Croatia
Cyprus
Czech Republic
Denmark
Estonia
Finland
France
Germany
Greece
Hungary
Ireland
Italy
Latvia
Lithuania
Luxembourg
Malta
Netherlands
Poland
Portugal
Romania
Slovakia
Slovenia
Spain
Sweden
The United Kingdom is no longer an EU member, having formally left the bloc in January 2020.
As of 2023, the estimated population of the European Union (EU) is around 447 million people, many of whom may be using or affected by AI.
As of 2023, the status of AI regulation in the USA is evolving and characterized by several key points:
No Comprehensive Federal Law: Unlike the EU's proposed AI Act, the U.S. has no comprehensive federal law specifically governing AI. However, various sector-specific regulations apply.
Executive Orders and Guidance: The Biden administration has issued executive orders and guidelines aimed at promoting safe and responsible AI development. This includes principles for ethical AI use and initiatives to foster innovation while addressing risks.
Agency Actions: Various federal agencies, such as the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST), are developing frameworks and guidelines for AI use, focusing on issues like transparency, accountability, and bias.
State Regulations: Some states have begun to implement their own regulations concerning AI, particularly around issues like facial recognition and data privacy.
Industry Collaboration: There are ongoing discussions between government entities, industry leaders, and civil society to establish best practices and standards for AI governance.
Public Input: The U.S. government has sought public input on AI regulation, indicating a desire for a collaborative approach to developing policies.
As of 2023, several U.S. states have started to implement regulations or propose legislation concerning AI. Here are some notable examples:
California:
California has laws addressing data privacy (e.g., CCPA and CPRA) that impact AI systems, especially regarding data collection and usage.
Illinois:
The Illinois Biometric Information Privacy Act (BIPA) regulates the use of biometric data, which affects AI technologies that utilize facial recognition.
New York:
New York City has enacted regulations concerning the use of AI in hiring practices, requiring transparency and fairness in automated decision-making.
Virginia:
Virginia has proposed legislation focusing on the responsible use of AI in government and public services.
Washington:
Washington state has considered various bills aimed at regulating facial recognition technology and ensuring accountability in AI systems.
Massachusetts:
Massachusetts has explored legislation focused on the ethical use of AI and the implications of AI technologies in different sectors.
These regulations often focus on specific applications of AI, such as facial recognition, data privacy, and transparency in automated decision-making. The landscape is rapidly changing, and more states may introduce AI-related legislation in the future.
U.S. AI Safety Institute Signs Agreements Regarding AI Safety Research, Testing, and Evaluation With Anthropic and OpenAI
These first-of-their-kind agreements between the U.S. government and industry will help advance safe and trustworthy AI innovation for all.
These agreements reflect a shared public- and private-sector concern about the hazards of AI misuse. The U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced agreements that enable formal collaboration on AI safety research, testing, and evaluation with both Anthropic and OpenAI.
Each company’s Memorandum of Understanding establishes the framework for the U.S. AI Safety Institute to receive access to major new models from each company before and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks.
“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the U.S. AI Safety Institute. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”
Additionally, the U.S. AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the U.K. AI Safety Institute.
The U.S. AI Safety Institute builds on NIST’s more than 120-year legacy of advancing measurement science, technology, standards, and related tools. Evaluations under these agreements will further NIST’s work on AI by facilitating deep collaboration and exploratory research on advanced AI systems across a range of risk areas.
Evaluations conducted under these agreements will help advance the safe, secure, and trustworthy development and use of AI by building on the Biden-Harris administration’s Executive Order on AI and the voluntary commitments made to the administration by leading AI model developers.
About the U.S. AI Safety Institute
The U.S. AI Safety Institute, located within the Department of Commerce at the National Institute of Standards and Technology (NIST), was established following the Biden-Harris administration’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to advance the science of AI safety and address the risks posed by advanced AI systems. It is tasked with developing the testing, evaluations, and guidelines that will help accelerate safe AI innovation here in the United States and around the world.
This effort reflects international concern about the use of AI, which now extends to all areas of global commerce and politics.