Scaling AI With Safety: Anthropic, Google, and Microsoft Commit to New National Institute of Standards and Technology Framework

NIST

Summary

This report details the formal commitment by frontier AI labs to the NIST AI Risk Management Framework, focusing on mitigating catastrophic risks associated with model scaling. The agreement emphasizes voluntary pre-deployment testing and red-teaming of dual-use capabilities that could be exploited for biological or cyber attacks. It marks a significant step in global AI governance by establishing standardized safety benchmarks for large language models. Such frameworks are critical for harmonizing international safety standards, including those relevant to the Australian Government's ongoing consultations on high-risk AI applications.