OpenAI, Google, and Meta Researchers Warn We May Lose the Ability to Track AI Misbehavior

0
8KB

Over 40 scientists from the world’s leading AI institutions, including OpenAI, Google DeepMind, Anthropic, and Meta, have come together to call for more research in a particular type of safety monitoring that allows humans to analyze how AI models “think.” 

The scientists published a research paper on Tuesday that highlighted what is known as chain of thought (CoT) monitoring as a new yet fragile opportunity to boost AI safety. The paper was endorsed by prominent AI figures like OpenAI co-founders John Schulman and Ilya Sutskever as well as Nobel Prize laureate known as the “Godfather of AI,” Geoffrey Hinton. 

In the paper, the scientists explained how modern reasoning models like ChatGPT are trained to “perform extended reasoning in CoT before taking actions or producing final outputs.” In other words, they “think out loud” through problems step by step, providing them a form of working memory for solving complex tasks.

“AI systems that ‘think’ in human language offer a unique opportunity for AI safety: we can monitor their chains of thought (CoT) for the intent to misbehave,” the paper’s authors wrote. 

The researchers argue that CoT monitoring can help researchers detect when models begin to exploit flaws in their training, manipulate data, or fall victim to malicious user manipulation. Any issues that are found can then either be “blocked, or replaced with safer actions, or reviewed in more depth.” 

OpenAI researchers have already used this technique in testing to find cases when AI models have had the phrase “Let’s Hack” in their CoT. 

Current AI models perform this thinking in human language, but the researchers warn that this may not always be the case. 

As developers rely more on reinforcement learning, which prioritizes correct outputs rather than how they arrived at them, future models may evolve away from using reasoning that humans can’t easily understand. Additionally, advanced models might eventually learn to suppress or obscure their reasoning if they detect that it’s being monitored.

In response, the researchers are urging AI developers to track and evaluate the CoT monitorability of their models and to treat this as a critical component of overall model safety. They even recommend that it become a key consideration when training and deploying new models.

Like
Love
Haha
3
Pesquisar
Categorias
Leia mais
Sem categoria
126 phường, xã mới ở Hà Nội sẽ có tên gọi như thế nào sau sáp nhập?
Ủy ban Thường vụ Quốc hội quyết định sắp xếp để thành...
Por CommercialBuy975 Brakus 2025-06-18 09:54:12 0 10KB
Food
Vanilla Bean Crème Brûlée Cheesecake Cupcakes
Vanilla Bean Crème Brûlée Cheesecake CupcakesIngredients:For the Cheesecake...
Por Cehrui Cehrui 2024-10-07 15:28:50 0 30KB
Sports
Khám phá 6 thực phẩm chay 'đáng ngờ' có thể khiến bạn tăng cân nhanh chóng.
Ăn chay không đồng nghĩa với giảm cân Vào mùa hè, ăn...
Por IllMinute994 Hồ 2025-07-01 07:48:05 0 18KB
Sem categoria
Đừng phạm phải 6 điều cấm kỵ này trong ngày hạ chí 21.6!
Về vấn đề này, chuyên gia phong thuỷ đặc biệt nhắc nhở...
Por trapd00r Ledner 2025-06-20 23:10:05 0 10KB
Film
Phụ nữ trên 40 tuổi nên tránh mặc 3 kiểu áo này vào mùa hè! Hãy chuyển sang áo len đan và áo sơ mi voan để trông thời trang và trẻ trung hơn
Bạn có biết không? Thường thì không phải chúng ta...
Por donnaversatile Ngô 2025-07-03 08:01:18 0 6KB