Intelligent Machines Briefing
For Wednesday, 25 February 2026
(Prepared Tue 24 Feb 2026 at 20:16 PST)

1. Anthropic

Hegseth gives Anthropic CEO until Friday to back down in AI safeguards fight

Summary not available


Anthropic digs in heels in dispute with Pentagon, source says

Summary not available


Anthropic Dials Back AI Safety Commitments

The company said competitive pressure prompted it to pivot away from the previous, more-cautious stance.


Musk's xAI and Pentagon reach deal to use Grok in classified systems

Elon Musk's artificial intelligence company xAI has signed an agreement to allow the military to use its model, Grok, in classified systems, a Defense …


Anthropic officially bans using subscription auth for third party use

Summary not available


Anthropic Accuses Chinese Companies of Siphoning Data From Claude

The allegations mirror those made by OpenAI, which told House lawmakers that DeepSeek used ‘distillation’ to improve AI models.


US firms accused Chinese rivals of training their AIs using American models

Major U.S. technology companies have accused Chinese rivals of using AI-powered techniques to improperly train their own models on American systems, threatening U.S. competitive advantages in artificial intelligence.

The accusations center on "distillation" techniques that allow weaker AI systems to train on outputs from more sophisticated models. OpenAI specifically told Congress that DeepSeek used unauthorized distillation to enhance its R1 model, potentially allowing the Chinese firm to gain benefits from expensive U.S. training without comparable investment. Additionally, U.S. companies are defending against what they view as Chinese piracy—Disney issued a cease-and-desist to ByteDance for allegedly training its AI on copyrighted characters—raising broader concerns about intellectual property protection in the AI race between the two countries.
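Mechanically, distillation means training a smaller "student" model to match the full output distribution of a larger "teacher," so the student inherits capability without the teacher's training cost. A minimal, framework-free sketch of the core loss (illustrative only: the function names and temperature value are our own, and this says nothing about how DeepSeek actually trained its models):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution so the student also
    # learns the teacher's relative preferences among "wrong" answers.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy of the student's softened distribution against the
    # teacher's: minimized when the student reproduces the teacher's
    # full output distribution, not just its top answer.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

In practice this loss is computed over the teacher's sampled or logged outputs, which is why providers can only detect distillation indirectly, through unusual query patterns against their APIs.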


Infosec community panics as Anthropic rolls out Claude code security checker

"Nothing is applied without human approval: Claude Code Security identifies problems and suggests solutions, but developers always make the call." -- Anthropic

Anthropic announced Claude Code Security, a new AI-powered feature that scans codebases for vulnerabilities and suggests patches, causing significant market reaction and sparking debate about the future of cybersecurity tools.

The announcement triggered a sell-off in cybersecurity stocks, with CrowdStrike shares dropping nearly 8 percent, and prompted concerns about whether AI could replace traditional security solutions. However, Claude Code Security is not a revolutionary tool but rather the latest in a series of AI-enabled vulnerability detection systems from major tech companies including Amazon, Microsoft, Google, and OpenAI. While large language models have demonstrated capability in flagging pattern-based vulnerabilities at scale—Anthropic claimed Claude Opus 4.6 "found and validated more than 500 high-severity vulnerabilities" in open source code—all these tools still require human approval before implementing fixes, meaning humans remain essential to the security process.


Anthropic Pushes Claude Deeper Into Knowledge Work

While the market remains rattled over how new AI tools threaten traditional software-as-a-service vendors, Anthropic pushes forward with new updates to its …


Anthropic Links AI Agent With Tools for Investment Banking, HR

Summary not available


Cyber Stocks Slide As Anthropic Unveils 'Claude Code Security'

Anthropic introduced a new security feature called "Claude Code Security" into its Claude AI model, causing shares of major cybersecurity software companies to tumble on Friday.

The new tool scans codebases for security vulnerabilities and suggests targeted software patches for human review. The announcement prompted significant market declines across the cybersecurity sector, with CrowdStrike falling as much as 6.5%, Cloudflare dropping more than 6%, and the Global X Cybersecurity ETF falling as much as 3.8%. The feature is currently available in a limited research preview, raising investor concerns about potential competition to established cybersecurity software vendors.


2. OpenAI

How will OpenAI compete? — Benedict Evans

"You've got to start with the customer experience and work backwards to the technology. You can't start with the technology and try to figure out where you're going to try to sell it" -- Steve Jobs

Analyst Benedict Evans argues that OpenAI faces four fundamental strategic challenges as it attempts to maintain competitive advantage in an increasingly crowded AI market where multiple companies are shipping equivalent frontier models.

Evans contends that OpenAI's current business lacks a clear competitive moat—it has a large user base but with weak engagement and stickiness, no network effects, and no consumer products with proven product-market fit. As the AI market rapidly develops and competitors leverage existing distribution advantages, OpenAI must either invent new differentiated products or risk losing its early-mover advantage, particularly as undifferentiated products competing on similar capabilities tend to shift competition toward brand and distribution—a battle where established incumbents like Google and Meta have inherent advantages that OpenAI lacks.


OpenAI’s first ChatGPT gadget could be a smart speaker with a camera

OpenAI's first hardware release will be a smart speaker with a camera priced between $200 and $300, according to reporting from The Information.

OpenAI is expanding beyond software into physical hardware devices following its $6.5 billion acquisition of Jony Ive's hardware company last May. The smart speaker will feature facial recognition capabilities and the ability to recognize items and conversations in its vicinity, allowing users to make purchases. Beyond this initial device, OpenAI is reportedly exploring smart glasses and a smart lamp, though these products remain in early development stages with no confirmed release timelines, reflecting a broader industry trend of major tech companies racing to develop AI-powered physical gadgets.


OpenAI Plans to Price Smart Speaker at $200 to $300, as AI Device Team Takes Shape

Summary not available


Inside the OpenAI Team Developing its AI Devices

OpenAI has more than 200 people working on a family of AI-powered devices that will include a smart speaker and possibly smart glasses and a smart lamp, …


ChatGPT spits out surprising insight in particle physics

Summary not available


GPT-5.2 derives a new result in theoretical physics | OpenAI

Summary not available


OpenAI defeats xAI’s trade secrets lawsuit

"We welcome the Court's decision. This baseless lawsuit was never anything more than yet another front in Mr. Musk's ongoing campaign of harassment." -- OpenAI

A federal judge ruled Tuesday to dismiss xAI's trade secrets lawsuit against OpenAI, handing the company a legal victory in its ongoing battles with Elon Musk.

US District Judge Rita F. Lin found that xAI failed to demonstrate any misconduct by OpenAI itself, instead only pointing to eight former employees who left for OpenAI around the same time. While xAI alleged that former employees stole source code and retained confidential information, the judge determined that without evidence OpenAI directed or encouraged this behavior, the claims did not constitute illegal conduct. The ruling represents one chapter in an escalating legal conflict between OpenAI and Musk, who serves as CEO of xAI and co-founded OpenAI, with a more significant lawsuit over OpenAI's nonprofit-to-for-profit transition scheduled for jury trial in April.


3. Google

Google restricting Google AI Pro/Ultra subscribers for using OpenClaw

"If third-party integrations are the issue, I would expect the platform to block the integration rather than restrict a paid account ($249/mo) without communication." -- Aminreza_Khoshbahar

A Google AI Ultra subscriber reported that their paid account ($249/mo) was suddenly restricted without warning after connecting Gemini models via OpenClaw OAuth, leaving them locked out for days with no communication from support.

This case highlights frustration with Google's account restriction policies and customer support responsiveness. The user received no prior warnings before the restriction and faced difficulty accessing support channels, with some support options requiring additional fees. The incident reflects broader concerns about how platforms handle third-party integrations and their obligation to communicate with paying customers before taking restrictive action on their accounts.


Gemini 3.1 Pro

Summary not available


Google’s new Gemini Pro model has record benchmark scores — again

"Gemini 3.1 Pro is now at the top of the APEX-Agents leaderboard" -- Brendan Foody, CEO of AI startup Mercor

Google released Gemini 3.1 Pro, its newest large language model, which achieved record benchmark scores and is being positioned as one of the most powerful LLMs available.

Gemini 3.1 Pro represents a significant upgrade from its predecessor, Gemini 3, and has demonstrated superior performance on independent benchmarks including Humanity's Last Exam and the APEX-Agents leaderboard. The release is part of an intensifying competition in the AI model space, where major tech companies including OpenAI and Anthropic are rapidly releasing increasingly powerful models designed for agentic work and multi-step reasoning tasks.


What Gemini features you get with Google AI Plus, Pro, & Ultra [February 2026]

Summary not available


4. AI AI AI

Meta Director of AI Safety Allows AI Agent to Accidentally Delete Her Inbox

Summary not available


AIs can generate near-verbatim copies of novels from training data

"It was a surprise that they could memorize entire texts" despite guardrails. -- A. Feder Cooper, researcher at Yale University

Researchers at Stanford and Yale Universities have demonstrated that large language models from OpenAI, Google, Meta, Anthropic, and xAI can be prompted to generate near-verbatim copies of copyrighted novels, undermining the AI industry's longstanding defense in copyright lawsuits.

Recent studies show that LLMs memorize far more training data than previously believed, with some models able to reproduce 70-76 percent of entire books when strategically prompted. This finding directly challenges the AI industry's core legal argument that these systems "learn" from copyrighted works without storing copies, and could significantly impact ongoing copyright litigation worldwide. The memorization capability raises serious questions about fair use claims and could create substantial liability for AI companies, particularly as courts have already begun ruling against companies for storing copyrighted content.


Firefox 148 Launches with AI Kill Switch Feature and More Enhancements

"once AI features are turned off, future updates will not override this choice" -- Mozilla

The product: Firefox 148 is a browser update that introduces an "AI kill switch" feature allowing users to disable AI functionalities such as chatbot prompts and AI-generated link summaries. The update also enhances core web platform capabilities, privacy controls, and accessibility features.

Availability: Posted February 24, 2026

Platforms: Windows 10 and other operating systems (Firefox is cross-platform)


More Than Half of Teens Use Chatbots for Schoolwork, Survey Finds

A new study from the Pew Research Center finds teens think chatbot-assisted cheating has become “a regular feature of student life.”


AI Now Helps Manage 16% of America's Apartments

"From an industry perspective, it's really about meeting the renter where they are." -- Christopher Yip, RET Ventures

AI systems are now managing apartment leasing and tenant interactions at approximately 16% of America's apartments, with companies like EliseAI handling everything from tours to lease signing to ongoing management tasks.

This widespread adoption of AI in residential property management represents a significant shift in how renters interact with landlords, accelerated by pandemic-era demand for contactless services that has only deepened since 2020. While the technology offers convenience—AI responds within 30 seconds compared to human agents who may ghost for days—the trend raises concerns about the loss of human interaction and trust in the rental process, even as industry leaders acknowledge long-term plans to create "fully autonomous buildings."


Uber employees have an AI clone of CEO Dara Khosrowshahi — and use 'Dara AI' before talking to the big boss himself

"When the models can learn in real-time, that is the point at which I'm going to think that, yeah, we are all replaceable." -- Dara Khosrowshahi

Uber CEO Dara Khosrowshahi revealed that some employees have created an AI clone of him called "Dara AI" to help prepare presentations before pitching to the CEO himself.

The creation of "Dara AI" exemplifies how employees across major companies are using artificial intelligence in novel ways to prepare for high-pressure workplace moments. While Khosrowshahi acknowledged the AI tool helps teams refine their presentations, he argued that true executive replacement remains distant, noting that AI still struggles to process new information and make real-time decisions—capabilities he considers essential to leadership roles.


LLM-Generated Passwords Look Strong but Crack in Hours, Researchers Find

"passwords generated by major large language models -- Claude, ChatGPT and Gemini -- appear complex but follow predictable patterns that make them crackable in hours, even on decades-old hardware." -- AI security firm Irregular

Researchers at AI security firm Irregular have discovered that passwords generated by major large language models including Claude, ChatGPT, and Gemini appear secure but are actually vulnerable and can be cracked in hours.

The finding reveals a significant security flaw in relying on LLMs for password generation. When Claude Opus 4.6 was prompted 50 times, only 30 passwords were unique, with 18 being identical strings. The estimated entropy of 16-character LLM-generated passwords was only 20 to 27 bits, far below the 98 to 120 bits expected of truly random passwords, making them easily crackable despite appearing complex to human users.
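The entropy gap is easy to check with back-of-the-envelope math: a 16-character password drawn uniformly from the roughly 94 printable ASCII characters carries 16 × log2(94) ≈ 105 bits, squarely in the 98-to-120-bit range the researchers cite. A short sketch (helper names are ours) of the calculation, plus the standard way to get genuinely random passwords from the OS CSPRNG rather than an LLM:

```python
import math
import secrets
import string

def entropy_bits(length: int, charset_size: int) -> float:
    # A password chosen uniformly at random from a charset has
    # H = length * log2(charset_size) bits of entropy.
    return length * math.log2(charset_size)

def random_password(length: int = 16) -> str:
    # secrets draws from the OS cryptographic RNG, so every character
    # is independent and uniform -- unlike an LLM's patterned choices.
    charset = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(charset) for _ in range(length))
```

By contrast, 20 to 27 bits of effective entropy means on the order of a million to a hundred million guesses, a search space even decades-old hardware can exhaust in hours.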


5. In the Wild

Men ‘yell’ at AI in ALL CAPITAL LETTERS 80% more than women

Summary not available


Can you detect an AI generated face?

"What we saw was that people with average face-recognition ability performed only slightly better than chance. And while super-recognisers performed better than other participants, it was only by a slim margin. What was consistent was people's confidence in their ability to spot an AI-generated face – even when that confidence wasn't matched by their actual performance." -- Dr Dunn

Researchers from UNSW and ANU found that most people can only identify AI-generated faces at slightly better than chance levels, even those with exceptional face-recognition abilities.

A study published in the British Journal of Psychology tested 125 participants, including 36 "super recognisers" with exceptional face-recognition ability, on their capacity to distinguish real faces from AI-generated ones. The research revealed that while super-recognisers performed better than average participants, the margin was slim, and notably, people across all groups demonstrated high confidence in their ability to spot AI faces despite their actual performance not matching that confidence. This finding has significant implications as AI-generated imagery becomes increasingly sophisticated and prevalent in everyday life.


This AI-powered machine turns photos into smells

Picture a memory from childhood, one that feels real and nostalgic, but somehow just out of grasp: perhaps a family trip to the beach, or a moment mid-swing on …


It’s Called the ‘Fitbit for Farts’—and It’s No Joke

Scientists developing a new underwear-able hope to do for gastroenterology what the Apple Watch did for cardiology.


Can you Guess the English Accent?

The product: "Guess the English Accent" is an interactive game where users listen to audio clips of English speakers from different countries and attempt to identify which country each speaker is from based on their accent.

Availability: Available online at guesstheaccent.xyz/


I Taught My Dog to Vibe Code Games | Caleb Leak

Summary not available


6. Leo's Pick

Famous Signatures Through History

The product: Signatory is a free online signature creator that allows users to draw a handwritten signature using a mouse, finger, stylus, or Apple Pencil, then export it as a transparent PNG or scalable SVG file for use in documents, emails, and digital or print applications.

Availability: Free, instant, no signup required. Works on phone, tablet, or desktop.

Platforms: Web-based; works on any device with a browser (phone, tablet, desktop). Compatible with finger, Apple Pencil, stylus, or mouse input.


AlexsJones/llmfit: 94 models. 30 providers. One command to find what runs on your hardware.

Summary not available


This app alerts you when it detects Meta camera glasses nearby

"The author warns that there is no guarantee that the app will work in all cases and that false positives, i.e., notifications triggered for devices that are not camera glasses, may occur."

The product: Nearby Glasses is an app that detects camera glasses from Meta, Oakley, and Snap nearby by reading their Bluetooth transmissions and sends a notification to the user's phone when camera glasses are within approximately 10 meters.

Cost: Free

Availability: Currently available for Android on the Play Store and GitHub, with an iOS version in development.

Platforms: Android (iOS version coming soon)



Stories will be updated as needed until show time.