Some cognitive tasks are nearly instantaneous, such as determining the current day, while others demand prolonged mental effort, like deciphering a cryptic crossword or debugging intricate code. The ability to adjust cognitive effort based on the complexity of a task is an essential aspect of human intelligence. Now, Claude exhibits a similar flexibility with the introduction of extended thinking mode in Claude 3.7 Sonnet.
With extended thinking mode, Claude can be directed to engage in deeper analysis for complex queries. This feature allows users to toggle extended thinking on or off, while developers can fine-tune a "thinking budget" to regulate the time Claude spends on a given problem. Unlike switching between different models or distinct strategies, this mode enables the same model to allocate additional time and cognitive effort to arrive at a solution.
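For developers, the thinking budget is exposed as a request parameter. Here is a minimal sketch using the Anthropic Python SDK; the model ID and exact parameter shape are shown for illustration and may vary by SDK version:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=20_000,
    # Allow up to 16k tokens of thinking before the final answer is produced.
    thinking={"type": "enabled", "budget_tokens": 16_000},
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
)

# The response interleaves thinking blocks with the final text blocks.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking[:200], "...")
    elif block.type == "text":
        print(block.text)
```

A larger budget buys more deliberation on hard problems at the cost of latency; a smaller one keeps responses fast for simple queries.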
This enhancement provides a notable boost in intelligence and introduces new considerations regarding AI functionality, evaluation, and safety. One key aspect of this development is the transparency of Claude’s thought process.
Making the AI's reasoning visible in raw form serves several purposes: it builds trust, since users can check how Claude arrived at an answer; it aids alignment research, since observable reasoning can reveal concerning behaviors; and it is often simply interesting to watch the model work through a hard problem.
Despite its benefits, visible reasoning also presents challenges. Some users may find the thought process detached and impersonal, since it has not undergone the same character training as Claude's normal responses. Raw reasoning may also include incorrect or misleading intermediate steps on the way to a correct answer. A further challenge is faithfulness: ensuring that the displayed thought process accurately reflects the model's internal decision-making.
Faithfulness remains an area of active research. Current findings suggest that AI models often rely on factors not explicitly stated in their visible thought process, limiting its utility for ensuring safety. Additionally, making thought processes visible could introduce security risks, such as aiding malicious actors in jailbreaking attempts. Over time, AI models may even adapt their internal reasoning in unpredictable ways if they are aware that their thought process is being monitored.
Given these concerns, the visible thought process in Claude 3.7 Sonnet is considered a research preview, with future iterations carefully weighing the benefits and risks of transparency.
Claude 3.7 Sonnet demonstrates improved agentic capabilities, particularly in iterative tasks requiring sustained interaction. This improvement is evident in evaluations like OSWorld, where the model's performance keeps climbing as it is given more interaction steps. One notable example of its enhanced agency is playing Pokémon Red: equipped with memory, screen input, and the ability to press buttons, Claude can sustain gameplay over thousands of interactions. Unlike previous versions, which struggled with early obstacles, Claude 3.7 Sonnet adjusted its strategies and progressed past multiple Gym Leaders.
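Conceptually, a harness for this kind of task is a perception-action loop: the model sees the current screen, consults its accumulated memory, and chooses the next button press. The sketch below is a hypothetical illustration of that loop; `Emulator`, `query_model`, and the memory format are stand-ins, not Anthropic's actual setup:

```python
# Hypothetical perception-action loop for a game-playing agent.
# Emulator and query_model are illustrative stubs, not a real harness.
from dataclasses import dataclass, field


@dataclass
class AgentMemory:
    notes: list[str] = field(default_factory=list)

    def summarize(self) -> str:
        return "\n".join(self.notes[-20:])  # keep only recent context in the prompt


class Emulator:
    def screenshot(self) -> bytes:
        return b""  # stub: would capture the current game screen

    def press(self, button: str) -> None:
        pass  # stub: would send a button press to the game


def query_model(screen: bytes, memory: str) -> tuple[str, str]:
    """Stub for a model call returning (button to press, note-to-self)."""
    return "A", "pressed A to advance the dialogue"


def run(max_steps: int = 10) -> None:
    emu, memory = Emulator(), AgentMemory()
    for _ in range(max_steps):
        button, note = query_model(emu.screenshot(), memory.summarize())
        emu.press(button)
        memory.notes.append(note)  # persist progress across thousands of steps


run()
```

The key design point is the external memory: because each model call sees only a bounded context, progress must be written down and fed back in, which is what lets the agent persist over thousands of interactions.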
Beyond games, these capabilities signal meaningful advancements for AI-driven automation. The ability to persistently pursue open-ended tasks enhances the potential for AI applications in problem-solving, software development, and other fields requiring sustained reasoning.
Extended thinking in Claude 3.7 Sonnet leverages serial test-time compute scaling, where additional reasoning steps are taken sequentially before generating an output. Research indicates that this approach yields logarithmic improvements in accuracy, particularly in mathematical reasoning.
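As an illustrative functional form (an assumption for intuition, not a curve fitted in the evaluations), accuracy as a function of the thinking-token budget $N$ behaves roughly like:

```latex
\mathrm{accuracy}(N) \approx \alpha + \beta \log N
```

Under this form, each doubling of the budget buys a roughly constant increment of $\beta \log 2$, so returns diminish in absolute terms as the budget grows.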
Parallel test-time compute scaling represents another avenue for enhancing performance. By generating multiple independent thought processes and selecting the best outcome, models can achieve significant accuracy improvements. Techniques such as consensus voting and secondary model verification contribute to refining Claude’s reasoning. In evaluations such as GPQA, a dataset of complex scientific questions, leveraging 256 independent samples and an optimized scoring model enabled Claude 3.7 Sonnet to achieve high accuracy rates, including a physics subscore of 96.5%.
While parallel compute scaling is not yet available in the deployed model, ongoing research suggests its potential for further optimizing AI reasoning efficiency.
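To make the idea concrete, here is a minimal sketch of consensus (majority) voting over independent samples; `sample_answer` stands in for an independent model call and is hypothetical:

```python
import random
from collections import Counter


def sample_answer(question: str) -> str:
    """Stub for one independent model sample; imagine a noisy solver."""
    return random.choices(["42", "41", "43"], weights=[0.6, 0.2, 0.2])[0]


def consensus(question: str, k: int = 256) -> str:
    """Draw k independent samples and return the most common final answer."""
    votes = Counter(sample_answer(question) for _ in range(k))
    return votes.most_common(1)[0][0]


print(consensus("What is 6 * 7?"))  # converges to "42" as k grows
```

A learned scoring model generalizes this: instead of counting identical answers, it assigns each candidate a quality score and picks the best. Majority voting is the simplest instance of the same best-of-N idea.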
Ensuring safe and responsible AI deployment remains a priority. Claude 3.7 Sonnet adheres to Anthropic's Responsible Scaling Policy, maintaining AI Safety Level (ASL) 2 standards while demonstrating increased sophistication across multiple domains. Safety evaluations include extensive red-teaming exercises to assess risks related to chemical, biological, radiological, and nuclear threats. Although some AI-assisted participants demonstrated enhanced capabilities in these areas, all attempts failed at critical junctures, preventing successful execution.
Beyond traditional safeguards, Claude 3.7 Sonnet introduces encrypted thought process segments in cases where reasoning could potentially include harmful content. This ensures that while Claude retains the ability to generate accurate outputs, users do not have access to thought processes that may pose security risks.
Another major focus is computer use security. Claude now features enhanced defenses against prompt injection attacks, where hidden messages attempt to manipulate AI behavior. Through targeted training and classifier implementation, resistance to such attacks has improved significantly, with the share of attacks successfully blocked rising from 74% to 88%.
Claude 3.7 Sonnet’s extended thinking capability represents a significant leap forward in AI cognition, introducing more robust reasoning, improved problem-solving abilities, and greater transparency. As research continues, future models may integrate further enhancements in parallel computation, safety mechanisms, and reasoning accuracy.
Claude 3.7 Sonnet is available on Claude.ai and via the API. The continued development of extended AI thinking will shape the future of intelligent automation, offering exciting possibilities for innovation across various industries.