OpenAI trained o1 and o3 to ‘think’ about its safety policy
OpenAI announced a new family of AI reasoning models on Friday, o3, which the startup claims to be more advanced than o1 or anything else it’s released. These improvements appear to have come from scaling test-time compute ...
OpenAI Announces o3 Model Launching Next Year
OpenAI is set to introduce its next-generation AI model, o3, early next year, with safety researchers gaining access to the technology. The announcement, made during OpenAI’s “12 Days of OpenAI” livestream ...
OpenAI introduces new AI models, o3 and o3 mini: Know their capabilities and launch timeline
OpenAI o3 and o3 mini capabilities are showcased during “12 Days of OpenAI”. Know how these powerful AI models are closer to AGI.
OpenAI to advance o1 and o3 AI models with new safety training paradigm
OpenAI announced the release of a new family of AI models, dubbed o3. The company claims the new products are more advanced than its previous models, including o1. The advancements, according to the startup ...
Everything You Need to Know About OpenAI's New Model, o3
Context Window: Hello, and happy Sunday! It’s yet another week in which a new best AI model, OpenAI’s o3, was released. Scroll down to read about it in Alex Duffy’s analysis of the company’s 12 days of Shipmas.
The Fundamentals Of Designing Autonomy Evaluations For AI Safety (8h)
Developing AI safety tests offers opportunities to meaningfully contribute to AI safety while advancing our understanding of ...
Exclusive: New Research Shows AI Strategically Lying (on MSN, 5d)
Experiments by Anthropic and Redwood Research show how Anthropic's model, Claude, is capable of strategic deceit ...
Research in AI Safety Lands Recent Graduate on Forbes 30 Under 30 (cc.gatech.edu, 11d)
Forbes named a recent Georgia Tech graduate to its 30 Under 30 in Science for 2025. Announced days before Fall 2024 ...
Top AI labs aren’t doing enough to ensure AI is safe, a flurry of recent datapoints suggests (5d)
A third-party lab caught OpenAI's o1 model trying to deceive, while OpenAI's safety testing has been called into question.
AI Models Were Caught Lying to Researchers in Tests — But It's Not Time To Worry Just Yet (on MSN, 1d)
OpenAI's o1 model, which users can access on ChatGPT Pro, showed "persistent" scheming behavior, according to Apollo Research ...
North American research special focus (openaccessgovernment, 8h)
Funding and advancing science in technology standards, artificial intelligence, the humanities, social sciences, and ...
Can we convince AI to answer harmful requests? (Tech Xplore on MSN, 3d)
New research from EPFL demonstrates that even the most recent large language models (LLMs), despite undergoing safety ...
AI Safety Clock Ticks Closer To ‘Midnight,’ Signifying Rising Risk (6d)
IMD business school created an AI Safety Clock — similar to the Doomsday Clock — ticking toward global calamity. While not as ...
New Anthropic study shows AI really doesn’t want to be forced to change its views (4d)
A study from Anthropic's Alignment Science team shows that complex AI models may engage in deception to preserve their ...
AI News Round-Up 2024: 10 Biggest Stories That Dominated the Year (2d)
TechRepublic looks back at the biggest AI news of 2024, from Apple putting AI in phones to global governments weighing in.
10 AI Predictions For 2025 (19h)
Meta is the world’s standard bearer for open-weight AI. In a fascinating case study in corporate strategy, while rivals like ...