Daily AI Briefing
Wednesday, May 6, 2026

AI Intelligence Report

Cam’s custom AI briefing covering news, Substack newsletters, RSS feeds, Reddit, workflows, and everything else that matters today.

Top Stories

OpenAI

GPT-5.5 Instant goes live as ChatGPT’s new default model

OpenAI rolled out GPT-5.5 Instant on May 5, replacing the prior default in ChatGPT with a faster, more accurate model that the company says reduces hallucinations and gives users more control over personalisation. It is the latest move in an aggressive release cadence: GPT-5.5 itself shipped just six weeks after GPT-5.4 and now sits second on the Artificial Analysis Intelligence Index, narrowly ahead of Google’s Gemini 3.1 Pro and Anthropic’s Claude Opus 4.7. OpenAI also released the matching GPT-5.5 Instant System Card documenting safety testing.

Axios · Bloomberg · TechCrunch

The big AI labs just became consulting firms

Within hours of each other, OpenAI and Anthropic both launched enterprise services joint ventures aimed at the territory long controlled by McKinsey, BCG and Accenture. OpenAI finalised The Deployment Company, a $10 billion vehicle anchored by TPG with Brookfield, Bain Capital, Advent and 15 other investors, which will embed forward-deployed engineers inside portfolio companies in healthcare, logistics, manufacturing and financial services. Anthropic countered with a $1.5 billion firm co-founded with Blackstone, Hellman & Friedman and Goldman Sachs, with $300 million each from the three anchors and another $150 million from Goldman. For comms leaders, the implication is significant: the implementation layer for enterprise AI is shifting from traditional consultancies to the model labs themselves.

U.S. News · The Information

Anthropic commits $200 billion to Google Cloud over five years

Anthropic has agreed to spend $200 billion with Google Cloud over the next five years, a commitment so large that, by The Information’s calculation, Anthropic now accounts for more than 40 percent of Google’s disclosed cloud revenue backlog. The deal builds on an April agreement with Google and Broadcom for multiple gigawatts of TPU capacity, expected to come online from 2027. The scale of the spend is a reminder that even as Anthropic raises a $50 billion round at a $900 billion valuation, the cost of training frontier models continues to outpace fundraising.

OpenAI · Axios

OpenAI opens self-serve ChatGPT Ads Manager to all U.S. businesses

Also on May 5, OpenAI launched a beta self-serve ChatGPT Ads Manager for U.S. advertisers, with cost-per-click bidding, expanded measurement tools and integrations with Adobe, Criteo, Kargo, Pacvue and StackAdapt. Agency partners include Dentsu, Omnicom, Publicis and WPP. OpenAI says ad placements will be kept separate from conversation content and that personal details will not be shared with advertisers, but the move marks a clear pivot toward the consumer ad business model that funds Google and Meta — and a new venue communications teams will need to monitor.

Mayo Clinic

Mayo Clinic AI flags pancreatic cancer up to three years before diagnosis

A Mayo Clinic AI model called REDMOD identified 73 percent of pre-diagnostic pancreatic cancers on routine abdominal CT scans at a median of 16 months — and in some cases up to three years — before clinical diagnosis. Pancreatic cancer is one of the deadliest cancers because it is typically caught too late for curative treatment; the study, published in the journal Gut, is a landmark for early detection. Mayo is now moving the technology into a prospective clinical trial called AI-PACED.

Tencent · VO3 AI Blog

Tencent quietly open-sources Hunyuan Video 1.5 — and changes the AI video landscape

On May 4, Tencent released Hunyuan Video 1.5 on GitHub with full model weights and Windows/Linux code. The 8.3-billion-parameter open-source text-to-video model produces 720p, 6-second clips in roughly 75 seconds on a single consumer-grade RTX 4090 GPU — commercial-quality output that previously required cloud-only services. Industry analysts describe it as the first major open-source release at this scale and a meaningful competitive moment for Tencent’s AI strategy. (See the dedicated Tencent Mentions section below.)

AI News Roundup

Models & Releases
OpenAI GPT-5.5 Instant — new ChatGPT default; smarter, fewer hallucinations, more personalisation controls.
Tencent Hunyuan Video 1.5 — first major open-source text-to-video model that runs on a single consumer GPU.
Three-way frontier race — GPT-5.5, Claude Opus 4.7 and Gemini 3.1 Pro each lead a different benchmark (terminal agents, complex coding, scientific reasoning).
Deals & Funding
OpenAI’s “The Deployment Company” — $10B JV with TPG, Brookfield, Advent, Bain and 15 others to embed engineers in PE-owned firms.
Anthropic + Blackstone, Hellman & Friedman, Goldman Sachs — $1.5B enterprise AI services firm to compete with the major strategy consultancies.
Anthropic / Google Cloud — $200B five-year cloud and TPU commitment, ~40 percent of Google’s disclosed backlog.
Anthropic $50B round — targets $900B valuation, the largest AI funding round to date.
JV M&A activity — both new ventures are already in talks to acquire smaller AI services firms; OpenAI’s vehicle is reportedly in advanced stages on three deals.
Policy & Governance
Google & Microsoft join U.S. CAISI pre-release vetting — both will give the U.S. Commerce Department’s Center for AI Standards and Innovation early access to AI models, alongside OpenAI and Anthropic.
Pentagon clears 8 firms for classified AI — including OpenAI, Google, Microsoft, Meta and others; Anthropic notably absent from the initial list.
Trump pushes mandatory pre-release vetting — concerns over Anthropic’s “Mythos” model are accelerating a federal review framework for frontier models.
EU AI Act — full applicability still scheduled for August 2, 2026; new “Digital AI Omnibus” proposal would defer some high-risk obligations.
Robotics & Embodied AI
Eka Robotics — MIT/DeepMind-founded startup unveils Vision-Force-Action model; demos include screwing in a lightbulb and sorting chicken nuggets, suggesting a “GPT-1 moment” for dexterous manipulation.
Healthcare AI
Mayo Clinic REDMOD — AI flags pancreatic cancer up to three years before clinical diagnosis on routine CT scans.
Security & Risk
“We scanned 1 million exposed AI services” — The Hacker News investigation finds widespread misconfiguration in self-hosted LLM stacks; software tied to the “ClawdBot” incident is averaging 2.6 CVEs per day.
Grok crypto wallet hacked via prompt injection — an NFT containing hidden instructions tricked an unofficial Grok-powered wallet into emptying funds; a useful cautionary tale on agentic commerce.

Substack Highlights

Note: direct Substack inbox access wasn’t available this morning, so the highlights below are drawn from the AI newsletter editions captured in your Inoreader feeds in the past 24 hours.

Inoreader AI Folder

13 articles published in the past 24 hours. Items already covered above are cross-referenced rather than repeated.

GPT-5.5 Instant: smarter, clearer, and more personalized

OpenAI News · May 5, 2026

OpenAI’s official launch post for GPT-5.5 Instant, the new ChatGPT default. Headlines: better accuracy, fewer hallucinations, finer-grained personalisation controls. Paired with a system card disclosing red-teaming and safety evaluation results.

See also: Top Stories above.

New ways to buy ChatGPT ads

OpenAI News · May 5, 2026

OpenAI announces beta access to a self-serve Ads Manager for U.S. advertisers, plus CPC bidding and stronger measurement tools. The post emphasises that ads will not draw on private chat content.

See also: Top Stories above.

GPT-5.5 Instant System Card

OpenAI News · May 5, 2026

Companion safety document to GPT-5.5 Instant, detailing capability evaluations, refusal behaviour and known limitations. Worth flagging if Tencent comms ever needs to compare safety disclosure practices across labs.

Grok AI unofficial crypto wallet hacked with an NFT and a prompt injection

Pivot To AI (David Gerard) · May 5, 2026

Caustic write-up of how an unofficial Grok-powered crypto wallet was emptied after attackers embedded a prompt-injection payload inside an NFT. Gerard uses it to puncture the “agentic commerce” narrative: until LLMs reliably resist injected instructions, autonomous agents holding real funds remain a brittle proposition. Useful as a live example for any internal explainer on agent safety.

Scaling GDELT for a new era: moving to daemon proxies for BigTable & GCS using Agentic Gemini

GDELT Official Blog · May 5, 2026

Technical case study: GDELT, the global news monitoring project, used Agentic Gemini to redesign and rewrite its data infrastructure, replacing direct cloud-API calls with daemon proxies in front of BigTable and Google Cloud Storage. Notable as a real-world example of an AI agent doing meaningful infrastructure engineering autonomously.

Codex is gaining steam

Ben’s Bites · May 5, 2026

Ben Tossell’s read on OpenAI’s Codex push, including the new ability to import settings and projects from Claude Cowork. The same edition is syndicated under “Newsletters on AI” in your inbox.

See also: AI Workflows & Tool Watch below.

Mayo’s AI spotted cancer 3 years before doctors did

The Neuron / Newsletters on AI · May 5, 2026

Coverage of the Mayo Clinic REDMOD pancreatic-cancer study, plus the proposed White House review of all frontier AI releases.

See also: Top Stories above.

The secret ChatGPT setting that stops AI training automatically

The Automated · May 5, 2026

Tutorial on ChatGPT’s Data Controls toggle and how to use AI to support data-backed writing.


Eka’s robotic claw feels like we’re approaching a ChatGPT moment

Newsletters on AI · May 5, 2026

Spotlight on Eka Robotics’ Vision-Force-Action model and a side note on a study showing OpenAI’s o1 outperforming triage doctors at correctly diagnosing ER patients (67 percent vs. 50–55 percent).

See also: AI News Roundup — Robotics above.

Last Week in AI #340 — OpenAI vs Musk + Microsoft, DeepSeek v4, Vision Banana

Newsletters on AI · May 5, 2026

Weekly digest from Skynet Today covering OpenAI vs Musk + Microsoft, DeepSeek v4 and Vision Banana.


AI Workflows & Tool Watch

Claude Code: plugin marketplace gets stronger; .zip plugins now supported

The latest Claude Code release lets you install plugins directly from .zip archives via --plugin-dir, and /mcp now shows tool counts and flags servers that connect with zero tools — small but useful changes that make MCP setups noticeably easier to debug. The release also fixes a long-standing bug where MCP stdio servers received corrupted arguments when shell prefixes contained spaces.

Directly relevant: Claude Code, Claude Cowork mode, MCP servers

Codex import-from-Claude lets you migrate workflows in one command

OpenAI’s Codex now ships with persisted /goal workflows, configurable TUI keymaps, plan-mode nudges and a plugin marketplace that mirrors Claude Code’s. The headline addition is a built-in importer: Codex will pull settings, plugins, agents and project configuration directly from Claude Cowork, lowering the cost of trying both side-by-side. Useful for any team weighing assistants for a comms automation pilot.

Directly relevant: Claude Code / Cowork, ChatGPT, OpenAI Codex

Perplexity Comet: Opus 4.6 is now the default model

Max subscribers using Perplexity’s Comet browser agent can now choose between Claude Opus 4.6 (the new default) and Claude Sonnet 4.5 to power autonomous browsing tasks. Comet also gained an upgraded voice mode and a Samsung Galaxy S26 system-level integration, with Bixby now using Perplexity for live web search. For media-monitoring workflows, this makes Comet substantially more capable as a “watch this site / summarise the changes” agent.

Directly relevant: Perplexity, Claude, browser-based research

n8n adds LLM-as-a-Judge evaluations to its automation platform

n8n published a new guide on May 5 covering its built-in evaluation framework: you can now score AI workflow outputs with LLM-as-a-Judge metrics, set up drift monitoring and run pre-deployment tests on real data. Enterprise features like Isolated Projects, Git integration and Workflow Diffs round out a clear push to make n8n the operating layer for production-grade AI automations.

Directly relevant: n8n, Make, Zapier — comms-team automation
Source: n8n Blog
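For teams new to the concept, the core of LLM-as-a-Judge is simple enough to sketch in a few lines: a second model scores each workflow output against a rubric, and anything below a threshold gets flagged for review. The sketch below is illustrative only — `toy_judge`, the rubric and the threshold are assumptions for demonstration, not n8n’s actual API or configuration.

```python
# Minimal sketch of the LLM-as-a-Judge pattern behind evaluation
# frameworks like n8n's. In production, judge_fn would call a real
# model with a scoring rubric; here a toy judge stands in for it.

def evaluate_outputs(outputs, judge_fn, threshold=0.7):
    """Score each output 0-1 with the judge; flag anything below threshold."""
    results = []
    for text in outputs:
        score = judge_fn(text)
        results.append({"output": text, "score": score, "pass": score >= threshold})
    return results

# Toy judge (assumption, not a real model): rewards outputs that cite a
# source, the way a grounding rubric might.
def toy_judge(text):
    return 0.9 if "Source:" in text else 0.4

drafts = ["Summary A. Source: Reuters", "Summary B, no citation"]
report = evaluate_outputs(drafts, toy_judge)
# The first draft passes; the second is flagged for human review.
```

The same loop works for drift monitoring: run it on a fixed test set before and after a workflow change and compare the score distributions.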

Obsidian + Claude: Claudian plugin turns your vault into an AI workspace

The Claudian plugin (and a similar plugin called CAO — Claude AI for Obsidian) lets Claude Code, Codex and other agents operate directly inside an Obsidian vault: file read/write, search, bash and multi-step workflows all work without leaving your notes. For knowledge-management-heavy workflows — running media-monitoring summaries against an issues knowledge base, drafting Q&A docs from clipped articles — it removes most of the copy/paste friction.

Directly relevant: Obsidian, DEVONthink, Claude Code, knowledge management

Reddit r/ClaudeAI: practitioner tips on long-context refactors and Artifacts as a comms workflow

Active threads this week: developers using Claude Code’s 200K-token context to drop entire codebases into a single session (one user refactored a 3,000-line C# file in one sitting) and non-developers using Artifacts to spin up shareable interactive briefings. The Artifacts pattern translates well to PR work — build a one-page interactive Q&A or messaging matrix that a stakeholder can explore rather than scroll through.

Directly relevant: Claude, Claude Code, Artifacts, comms deliverables
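To make the deliverable concrete: the “interactive Q&A” an Artifact produces is, at bottom, a single self-contained HTML file. The sketch below (no Claude API involved; the function name and sample questions are illustrative) shows the shape of that output — collapsible question blocks a stakeholder can click through instead of scrolling a memo.

```python
# Illustrative sketch of the "Artifact as a comms deliverable" pattern:
# render a messaging matrix as one self-contained HTML page with
# collapsible <details> blocks. Content below is sample data only.

import html

def build_qa_page(title, qa_pairs):
    """Render (question, answer) pairs as collapsible blocks in one HTML string."""
    blocks = "\n".join(
        f"<details><summary>{html.escape(q)}</summary><p>{html.escape(a)}</p></details>"
        for q, a in qa_pairs
    )
    return (
        f"<!doctype html><html><body><h1>{html.escape(title)}</h1>\n"
        f"{blocks}\n</body></html>"
    )

page = build_qa_page(
    "Hunyuan Video 1.5: media Q&A",
    [("Is it open source?", "Yes: weights and code are on GitHub."),
     ("What hardware does it need?", "A single consumer GPU such as an RTX 4090.")],
)
# `page` can be written to briefing.html and shared as-is.
```

Saving the string to a file and sending the file is the whole distribution step — no hosting required.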

Watch-out: prompt injection of agentic systems is now a real-world threat

The Grok crypto-wallet incident is the cleanest example yet of an agent being subverted by malicious data dressed as legitimate input (an NFT containing hidden instructions). For comms teams piloting agentic tools that touch external content — news monitoring, social listening, automated drafting — the practical takeaway is simple: do not let an agent take consequential actions (publish, send, transact) without a human approval step.

Directly relevant: any agentic tool used in PR / issues management
Source: Pivot To AI
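The approval-step takeaway is easy to encode. The sketch below is a hypothetical gate — the class, action names and return strings are all illustrative, not from any real agent framework: the agent runs low-stakes actions freely, but anything consequential is queued until a human signs off.

```python
# Hypothetical sketch of the human-approval gate described above.
# Safe actions (summarise, monitor) run immediately; consequential
# actions (publish, send, transact) are held for a person to approve.
# All names here are illustrative assumptions.

CONSEQUENTIAL = {"publish", "send", "transact"}

class ApprovalGate:
    def __init__(self):
        self.pending = []  # actions awaiting human sign-off

    def request(self, action: str, payload: str) -> str:
        """Route an agent-proposed action: auto-run safe ones, queue the rest."""
        if action in CONSEQUENTIAL:
            self.pending.append((action, payload))
            return "queued for human approval"
        return f"executed: {action}"

    def approve_all(self) -> list:
        """Called by a human reviewer; executes everything they signed off on."""
        executed = [f"executed: {a}" for a, _ in self.pending]
        self.pending.clear()
        return executed

gate = ApprovalGate()
gate.request("summarise", "today's coverage")  # safe: runs immediately
gate.request("publish", "draft statement")     # consequential: held for review
```

The key property is that a prompt-injected instruction can, at worst, add an item to the approval queue — it cannot publish, send or transact on its own.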

Tencent Mentions

Hunyuan Video 1.5 quietly released as open source on May 4

Tencent dropped full weights and Windows/Linux code for Hunyuan Video 1.5 on GitHub: an 8.3-billion-parameter text-to-video model that produces 720p, 6-second clips in roughly 75 seconds on a single RTX 4090 GPU. Industry coverage frames this as the first major open-source video model at this scale and a meaningful counter-move to closed offerings from Runway, Pika and OpenAI’s Sora. Pixomondo is reportedly already using Hunyuan technology to build dragon VFX for a recent mini-series — a useful proof point for anyone fielding “is Hunyuan production-ready?” questions.

Hy3 model preview — flagship Hunyuan upgrade in early May

Earlier this month, Tencent’s Hunyuan team launched the Hy3 preview, billed internally as the most intelligent Hy-series model to date, with substantial gains on complex reasoning, instruction following, in-context learning, coding and agentic tasks. Useful talking point if asked how Hunyuan compares with the latest GPT-5.5 / Claude Opus 4.7 / Gemini 3.1 Pro releases dominating today’s news.

SCMP profile of Hunyuan team leadership

South China Morning Post’s continuing coverage of Tencent’s flagship AI push, including the role of the former OpenAI researcher leading the latest Hunyuan generation. No new developments today, but the article keeps surfacing in Western media commentary on China’s frontier AI players, alongside Alibaba (Qwen3-Max-Thinking) and Baidu (ERNIE 5.0).