AI, o3 and o1
OpenAI trained o1 and o3 to ‘think’ about its safety policy
OpenAI announced a new family of AI reasoning models on Friday, o3, which the startup claims is more advanced than o1 or anything else it has released. These improvements appear to have come from scaling test-time compute ...
OpenAI to advance o1 and o3 AI models with new safety training paradigm
OpenAI announced the release of a new family of AI models, dubbed o3. The company claims the new products are more advanced than its previous models, including o1. The advancements, according to the startup ...
OpenAI Announces o3 Model Launching Next Year
OpenAI is set to introduce its next-generation AI model, o3, early next year, with safety researchers gaining access to the technology. The announcement, made during OpenAI’s “12 Days of OpenAI” livestream,
OpenAI introduces new AI models, o3 and o3 mini: Know their capabilities and launch timeline
OpenAI o3 and o3 mini capabilities were showcased during "12 Days of OpenAI". Know how these powerful AI models bring the company closer to AGI.
Everything You Need to Know About OpenAI's New Model, o3
Context Window: Hello, and happy Sunday! It's yet another week in which a new best AI model, OpenAI's o3, was released. Scroll down to read about it in Alex Duffy's analysis of the company's 12 days of Shipmas.
The Fundamentals Of Designing Autonomy Evaluations For AI Safety
Developing AI safety tests offers opportunities to meaningfully contribute to AI safety while advancing our understanding of ...
Exclusive: New Research Shows AI Strategically Lying
Experiments by Anthropic and Redwood Research show how Anthropic's model, Claude, is capable of strategic deceit ...
cc.gatech.edu
Research in AI Safety Lands Recent Graduate on Forbes 30 Under 30
Forbes named a recent Georgia Tech graduate to its 30 Under 30 in Science for 2025. Announced days before Fall 2024 ...
Top AI labs aren’t doing enough to ensure AI is safe, a flurry of recent datapoints suggests
A third-party lab caught OpenAI's o1 model trying to deceive, while OpenAI's safety testing has been called into question.
openaccessgovernment
North American research special focus
Funding and advancing science in technology standards, artificial intelligence, the humanities, social sciences, and ...
AI Models Were Caught Lying to Researchers in Tests — But It's Not Time To Worry Just Yet
OpenAI's o1 model, which users can access on ChatGPT Pro, showed "persistent" scheming behavior, according to Apollo Research ...
Tech Xplore on MSN
Can we convince AI to answer harmful requests?
New research from EPFL demonstrates that even the most recent large language models (LLMs), despite undergoing safety ...
AI Safety Clock Ticks Closer To ‘Midnight,’ Signifying Rising Risk
IMD business school created an AI Safety Clock — similar to the Doomsday Clock — ticking toward global calamity. While not as ...
New Anthropic study shows AI really doesn’t want to be forced to change its views
A study from Anthropic's Alignment Science team shows that complex AI models may engage in deception to preserve their ...