OpenAI, Google, and Meta Researchers Warn We May Lose the Ability to Track AI Misbehavior

0
8كيلو بايت

Over 40 scientists from the world’s leading AI institutions, including OpenAI, Google DeepMind, Anthropic, and Meta, have come together to call for more research in a particular type of safety monitoring that allows humans to analyze how AI models “think.” 

The scientists published a research paper on Tuesday that highlighted what is known as chain of thought (CoT) monitoring as a new yet fragile opportunity to boost AI safety. The paper was endorsed by prominent AI figures like OpenAI co-founders John Schulman and Ilya Sutskever as well as Nobel Prize laureate known as the “Godfather of AI,” Geoffrey Hinton. 

In the paper, the scientists explained how modern reasoning models like ChatGPT are trained to “perform extended reasoning in CoT before taking actions or producing final outputs.” In other words, they “think out loud” through problems step by step, providing them a form of working memory for solving complex tasks.

“AI systems that ‘think’ in human language offer a unique opportunity for AI safety: we can monitor their chains of thought (CoT) for the intent to misbehave,” the paper’s authors wrote. 

The researchers argue that CoT monitoring can help researchers detect when models begin to exploit flaws in their training, manipulate data, or fall victim to malicious user manipulation. Any issues that are found can then either be “blocked, or replaced with safer actions, or reviewed in more depth.” 

OpenAI researchers have already used this technique in testing to find cases when AI models have had the phrase “Let’s Hack” in their CoT. 

Current AI models perform this thinking in human language, but the researchers warn that this may not always be the case. 

As developers rely more on reinforcement learning, which prioritizes correct outputs rather than how they arrived at them, future models may evolve away from using reasoning that humans can’t easily understand. Additionally, advanced models might eventually learn to suppress or obscure their reasoning if they detect that it’s being monitored.

In response, the researchers are urging AI developers to track and evaluate the CoT monitorability of their models and to treat this as a critical component of overall model safety. They even recommend that it become a key consideration when training and deploying new models.

Like
Love
Haha
3
البحث
الأقسام
إقرأ المزيد
Wellness
Nữ sinh t:ử n:ạn trên đường đi ôn thi tốt nghiệp THPT, hoàn cảnh gia đình khiến nhiều người xót xa
Nữ sinh 18 tuổi trên đường đi ôn thi tốt nghiệp THPT thì không may xảy ra va chạm với ô tô khiến...
بواسطة JudoExpert Mayer 2025-06-25 03:44:05 0 9كيلو بايت
غير مصنف
Theo quy định, bằng lái xe ô tô quá hạn bao lâu thì sẽ phải thi sát hạch lại?
Bằng lái xe ô tô quá hạn bao lâu thì sẽ phải thi sát...
بواسطة satansgrandpa Ty 2025-08-14 15:22:05 0 8كيلو بايت
Wellness
Kế toán trường cấp 2 thu BHYT của 523 học sinh tổng cộn gần 500 triệu đồng, nhưng chỉ nộp 200 em, còn lại giữ tiêu xài
Sau khi thu tiền bảo hiểm y tế của 523 học sinh Trường THCS Ngô Xá (Cẩm Khê, Phú Thọ), Trần...
بواسطة FruitOnyx Ty 2025-07-01 09:55:06 0 10كيلو بايت
Wellness
Đạt G hé lộ mối quan hệ với 2 con riêng của Cindy Lư, để lộ chi tiết khiến Hoài Lâm phải xem lại mình
Tạp chí Doanh nghiệp Việt Nam có bài viết "Đạt G hé lộ mối quan hệ với 2 con riêng của Cindy Lư,...
بواسطة Mathew Nguyen 2025-06-12 04:43:09 0 9كيلو بايت
غير مصنف
Điểm chuẩn Trường Đại học Công nghệ Thông tin năm 2025 lên tới 29,5
Trường Đại học Công nghệ Thông tin Tiến sĩ Nguyễn Tấn...
بواسطة AuraKindaNub Đái 2025-08-21 08:00:07 0 8كيلو بايت