New Claude Model Triggers Stricter Safeguards at Anthropic
Claude Opus 4 and Claude Sonnet 4, Anthropic's latest generation of frontier AI models, were announced Thursday.
Claude, developed by the AI safety startup Anthropic, has been pitched as the ethical brainiac of the chatbot world. With its focus on being helpful, honest and harmless (yes, really), Claude is quickly gaining traction as a trusted tool for everything from legal analysis to lesson planning.
When Anthropic’s older Claude model played Pokémon Red, it spent “dozens of hours” stuck in one city and had trouble identifying non-player characters. With Claude Opus 4, the team noticed an improvement in Claude’s long-term memory and planning capabilities.
Anthropic says Claude Sonnet 4 is a major improvement over Claude 3.7 Sonnet, with stronger reasoning and more precise instruction following. Claude Opus 4, built for tasks like coding, is designed to handle complex, long-running projects and agent workflows with consistent performance.
Anthropic's latest Claude models promise coding marathons and superior reasoning. But you'll pay premium rates for the privilege.
Anthropic's Claude 4 outperforms competitors in coding and reasoning, offering advanced features and robust safety measures.
A new study reveals that most AI chatbots, including ChatGPT, can be easily tricked into bypassing their built-in safety controls and providing dangerous or illegal information, posing a significant security concern.