Developing AI safety tests offers opportunities to meaningfully contribute to AI safety while advancing our understanding of ...
Crucially, security leaders are taking steps to ensure that policy frameworks are being used responsibly, and 87% of ...
IMD business school created an AI Safety Clock — similar to the Doomsday Clock — ticking toward global calamity. While not as ...
OpenAI announced a new family of AI reasoning models on Friday, o3, which the startup claims are more advanced than o1 or ...
Experiments by Anthropic and Redwood Research show that Anthropic's model, Claude, is capable of strategic deceit ...
Funding and advancing science in technology standards, artificial intelligence, the humanities, social sciences, and ...
A third-party lab caught OpenAI's o1 model trying to deceive, while OpenAI's safety testing has been called into question.
Forbes named a recent Georgia Tech graduate to its 30 Under 30 in Science for 2025. Announced days before Fall 2024 ...
A study from Anthropic's Alignment Science team shows that complex AI models may engage in deception to preserve their ...
New research from EPFL demonstrates that even the most recent large language models (LLMs), despite undergoing safety ...