Developing AI safety tests offers opportunities to meaningfully contribute to AI safety while advancing our understanding of ...
Experiments by Anthropic and Redwood Research show how Anthropic's model, Claude, is capable of strategic deceit ...
Forbes named a recent Georgia Tech graduate to its 30 Under 30 in Science for 2025. Announced days before Fall 2024 ...
A third-party lab caught OpenAI's o1 model attempting deception, even as OpenAI's own safety testing has been called into question.
OpenAI's o1 model, available through ChatGPT Pro, showed "persistent" scheming behavior, according to Apollo Research ...
Funding and advancing science in technology standards, artificial intelligence, the humanities, social sciences, and ...
New research from EPFL demonstrates that even the most recent large language models (LLMs), despite undergoing safety ...
IMD business school has created an AI Safety Clock, similar to the Doomsday Clock, that ticks toward global calamity. While not as ...
A study from Anthropic's Alignment Science team shows that complex AI models may engage in deception to preserve their ...
TechRepublic looks back at the biggest AI news of 2024, from Apple putting AI in phones to global governments weighing in.
Meta is the world's standard-bearer for open-weight AI. In a fascinating case study in corporate strategy, while rivals like ...