Developing AI safety tests offers opportunities to meaningfully contribute to AI safety while advancing our understanding of ...
Crucially, security leaders are taking steps to ensure that policy frameworks are being used responsibly, and 87% of ...
On Friday, OpenAI announced o3, a new family of AI reasoning models that the startup claims is more advanced than o1 or ...
Experiments by Anthropic and Redwood Research show how Anthropic's model, Claude, is capable of strategic deceit ...
Meta is the world’s standard-bearer for open-weight AI. In a fascinating case study in corporate strategy, while rivals like ...
Funding and advancing science in technology standards, artificial intelligence, the humanities, social sciences, and ...
A study from Anthropic's Alignment Science team shows that complex AI models may engage in deception to preserve their ...
A third-party lab caught OpenAI's o1 model trying to deceive, while OpenAI's safety testing has been called into question.
Forbes named a recent Georgia Tech graduate to its 30 Under 30 in Science for 2025. Announced days before Fall 2024 ...
New research from EPFL demonstrates that even the most recent large language models (LLMs), despite undergoing safety ...
Systems that operate on behalf of people or corporations are the latest product from the AI boom, but these “agents” may ...