AI Intelligence Report
Top Stories
Tencent unveils Hy3, its first flagship AI model under new leadership, already running inside Yuanbao, WeChat tools and Tencent Docs
Tencent has published the public preview of Hy3, its first flagship Hunyuan model since the company restructured its AI organisation and folded the AI Lab into the Hunyuan team. It is described as a “fast-and-slow-thinking fused” mixture-of-experts model with 295 billion total parameters but only 21 billion active per query (which keeps it cheap to run), supporting a 256,000-token context window. Crucially for the global comms story: Hy3 is already powering Yuanbao, CodeBuddy, WorkBuddy, ima, Tencent Docs and the in-game assistant in Peacekeeper Elite, and Yuanbao has switched its primary engine away from DeepSeek to this in-house technology. Pricing on Tencent Cloud’s TokenHub is RMB 1.2 per million input tokens / RMB 4 per million output tokens, with two weeks of free access via OpenRouter. Bloomberg frames the launch as a high-stakes test for the ex-OpenAI researchers Tencent has hired to lead the rebuild.
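At those TokenHub rates, per-call cost is easy to bound. A minimal sketch of the arithmetic (the request sizes in the example are illustrative, not from the launch coverage):

```python
# Hy3 list pricing on Tencent Cloud TokenHub (per the launch coverage):
# RMB 1.2 per million input tokens, RMB 4 per million output tokens.
RMB_PER_M_INPUT = 1.2
RMB_PER_M_OUTPUT = 4.0

def hy3_cost_rmb(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in RMB of a single Hy3 call at list price."""
    return (input_tokens / 1_000_000) * RMB_PER_M_INPUT \
         + (output_tokens / 1_000_000) * RMB_PER_M_OUTPUT

# Illustrative: a near-full 256K-context request with a 2K-token reply.
print(round(hy3_cost_rmb(250_000, 2_000), 4))  # → 0.308
```

Even a near-window-filling request stays well under half an RMB, which is the substance of the “cheap to run” claim.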
DeepSeek drops the long-awaited V4 preview: the first major Chinese model tuned end-to-end for Huawei’s Ascend chips
DeepSeek released preview versions of V4-Pro (a larger model for demanding tasks) and V4-Flash (a smaller, cheaper, faster sibling) on Friday, April 24. Two things make this release matter beyond benchmarks. First, the model was co-launched with Huawei, which says its “Supernode” technology (large clusters of Ascend 950 chips) provides the compute. DeepSeek spent months reworking its software stack to optimise for Ascend, signalling a deeper shift away from Nvidia for at least some Chinese frontier work. Second, V4 dramatically improves long-context handling and agent-style reasoning. Bloomberg, citing CCTV-linked accounts, frames the delay as evidence of the broader chip-localisation push. MIT Technology Review’s “Three reasons V4 matters” is the cleanest single read.
Google commits up to $40B to Anthropic, at a $380B valuation, in the biggest AI hedging bet of the year
Google will put $10 billion in immediately, with another $30 billion tied to performance milestones. The structure mirrors Amazon’s earlier deal with Anthropic ($5 billion now, up to $20 billion later). Anthropic also separately announced 5 gigawatts of compute capacity coming online over the next year via Google and Broadcom. Two narratives to watch from a comms angle: (1) Google is now strategically funding both its in-house Gemini effort and its largest competitor, a pattern that will frame regulator questions; (2) Anthropic is approaching $19B annualised revenue while OpenAI has crossed $25B and is preparing for a possible late-2026 IPO. The big-tech-AI capital cycle is no longer slowing.
OpenAI ships GPT-5.5 (“Spud”) only weeks after GPT-5.4, pitched as a “new class of intelligence” at double the API price
Released Thursday April 23 to Plus, Pro, Business and Enterprise tiers in ChatGPT and Codex, with API access from April 24. OpenAI is positioning GPT-5.5 as agentic by default: strong at coding/debugging, deep research, document and spreadsheet creation, and operating other software end-to-end. It runs on Nvidia GB200 NVL72 rack-scale systems and ships with what OpenAI calls its strongest safeguards to date (see the system card). The cadence (5.4 to 5.5 in under two months) and the API price doubling are both notable. TechCrunch frames it as a step toward an OpenAI “super app.”
Sam Altman publishes “Our Principles” โ a five-point manifesto on how OpenAI says it will guide AGI work
Posted yesterday (April 26) on the OpenAI site. Altman lays out the five principles meant to anchor OpenAI’s mission of “ensuring AGI benefits all of humanity.” Coming days after the GPT-5.5 launch and in a week dominated by the Meta layoffs / “personal superintelligence” framing from Mark Zuckerberg, this reads as a deliberate values reset. For Cam: this is the document that journalists writing about OpenAI for the next quarter will quote from.
Meta to cut ~8,000 staff (10% of headcount) to fund a $115–135B AI capex year; Wang’s reorg moves people into “AI pods”
Announced last week and dominating the weekend’s analysis. Meta is cutting roughly 10% of its workforce, with the layoffs effective May 20, while expecting capital expenditure of $115–135 billion in 2026 to fund Meta Superintelligence Labs and core infrastructure. Teams are being reorganised into AI pods under 28-year-old Chief AI Officer Alexandr Wang. Zuckerberg says he’s “looking forward to advancing personal superintelligence for people around the world in 2026.” CNBC’s combined Meta + Microsoft 20,000-cut piece argues the AI-driven labor crisis is no longer hypothetical.
AI News Roundup
Substack Highlights
Note: A direct Substack inbox connector is not available in this run. The items below are pulled from Substack newsletters that landed in your Inoreader AI feeds during the past 24 hours; one-bullet coverage of other Substacks is included where they touched the news cycle. To restore full inbox coverage, reconnect the Substack tools.
Inoreader AI Folder
12 articles published in the past 24 hours; consolidated below where multiple newsletters covered the same story.
“Our Principles”
Sam Altman publishes five principles meant to guide OpenAI’s AGI work, explicitly in response to the “what does OpenAI actually stand for” question that has trailed the GPT-5.5 launch and the Meta “personal superintelligence” framing.
“Exponential View #571: DeepSeek shows the future, again”
Azeem Azhar’s flagship issue on DeepSeek V4, drone learning curves, solar deployment and LLM-pixel research.
“GPT-5.5, DeepSeek V4, and World ID 4.0”
Triple-launch round-up. World ID, Sam Altman’s biometric “proof of human” project, hits its 4.0 release with refreshed iris-orb hardware and a new wallet integration. Notable as another OpenAI-adjacent story landing the same week as GPT-5.5.
“OpenAI releases GPT-5.5 as a ‘new class of intelligence’ at double the API price”
Newsletter take focusing on the price doubling and what it means for app developers; the early read is that 5.5’s agentic abilities justify the cost only for workflows that previously needed a human-in-the-loop step.
“Anthropic’s security nightmare begins”
Walks through the cumulative impact of the Claude Code source leak, the deny-rule bypass bug, and the Mythos zero-day disclosures. Argues that security is now the second story of every Anthropic product launch, relevant for anyone watching Anthropic’s communications playbook.
“One analyst replaced 100 economists”
Profile of an analyst who reportedly spent ~$6,000/day on the Claude API and matched the research throughput of a 100-person economics team. Combined with a separate Meta-cuts angle, it’s the issue that will be most shared in HR circles this week; useful as a talking point for “augmentation vs replacement” framing.
“I ranked 21 industries by build vs buy”
A subjective but useful 21-industry ranking of where companies should build their own AI tooling vs licence existing platforms. Communications/PR is flagged as a “buy” category because horizontal tools (Claude/ChatGPT + a connector layer) outperform niche PR-AI startups today.
“Your meetings are about to change. Stop attending. Start leading.”
Argument and tactical playbook for using AI meeting agents (and recording-summarisers) to skip listen-only meetings. Names Granola, Otter, Read.ai, and the new Zoom AI Companion as the practical stack today.
“Using Gemini 3 & Gemini Deep Research To Triage The White House Correspondents’ Dinner Shooting After 24 Hours”
How Gemini 3 + Deep Research can build a fast situational-awareness brief in a breaking-news scenario, directly relevant for crisis-comms triage. Leetaru also published two related Gemini posts the same day (backgrounder reports on India counterterrorism & crypto-financing and deep trend analysis on Estonian intelligence reviews).
“Ghost In The Machine has launched!”
An anti-AI documentary, Ghost In The Machine, has gone live. Gerard’s framing is ideological (the field is “race science all the way down”) but the documentary itself is now circulating in policy and media circles and is likely to come up in interviews โ worth being aware of even if you disagree with the thesis.
“Project Lobster: Microsoft brings Copilot AI to OpenClaw”
Reports, citing The Information, that Satya Nadella has been personally testing OpenClaw, the open-source vibe-coded personal assistant, and Microsoft is bringing Copilot capabilities to it under “Project Lobster.” Notable as a Microsoft-vs-Anthropic story, given OpenClaw’s Anthropic origins.
“Claude for Authors, Part 2: Getting Set Up (And Why I Changed My Mind About the Desktop App)”
Step-by-step Claude desktop setup for writers. Covers MCP, local files, Drafts integration.
AI Workflows & Tool Watch
Cowork Mode now supports recurring scheduled tasks
Anthropic shipped recurring scheduling for Claude Cowork (this briefing is itself an example). You configure a task once (daily briefing, weekly comms report, automated file processing) and Claude runs it on schedule with full access to files, MCP servers, plugins and connectors. Hourly / daily / weekly / weekdays-only options. The /schedule skill walks you through setup. Worth seeing how many of your existing Hazel and Keyboard Maestro chores collapse into this.
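For chores that need to live outside Cowork’s scheduler, the same shape can be approximated with plain cron plus Claude Code’s non-interactive print mode. A hypothetical crontab fragment (the prompt, the log path, and the assumption that a `claude` CLI supporting `-p` is on cron’s PATH are all illustrative):

```
# Weekdays at 07:00: run a one-shot prompt and append the output to a log.
# Assumes the `claude` CLI is installed and supports non-interactive -p mode.
0 7 * * 1-5 claude -p "Summarise overnight AI news into a short briefing" >> "$HOME/briefings.log" 2>&1
```

The native /schedule route is the better default, since it keeps the task’s file, MCP and connector access; cron only makes sense for machines where Cowork isn’t running.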
Local MCP servers in Claude Cowork โ the missing setup guide
A new step-by-step on getting local MCP servers running in Claude Cowork on macOS: covers the certificate-trust gotcha, the path differences between Cowork and Claude Code, and how to expose tools to multiple Claude clients without conflicting credentials. Genuinely useful if you’ve been blocked trying to connect Cowork to a tool that doesn’t have an official connector.
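For orientation, most local MCP clients share the same `mcpServers` config shape. A sketch (the server name, command and paths are illustrative; where the file lives for Cowork vs Claude Code is exactly the difference the guide covers):

```json
{
  "mcpServers": {
    "local-notes": {
      "command": "node",
      "args": ["/Users/you/mcp-servers/notes/index.js"],
      "env": { "NOTES_DIR": "/Users/you/Notes" }
    }
  }
}
```

Each entry names a server, the command that launches it, and any environment it needs; pointing two clients at the same entry with separate credentials is the conflict the guide warns about.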
Obsidian + Claude Code via MCP: turning your vault into a live workspace
The cleanest write-up to date on connecting Claude Code (Anthropic’s CLI agent) to an Obsidian vault via MCP. Claude can read, search, and modify notes, meaning your vault becomes an autonomous knowledge engine, not just a static archive. Practical pattern: a “claude-obsidian” plugin that creates, organises, maintains and evolves notes on its own, beyond what Smart Connections / Copilot do.
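The underlying pattern is simpler than it sounds: an Obsidian vault is just Markdown files on disk, so any tool that can walk a directory can already do the read/search half. A minimal sketch (the function name is ours, for illustration):

```python
from pathlib import Path

def search_vault(vault: Path, query: str) -> list[str]:
    """Vault-relative paths of Markdown notes containing `query`, case-insensitively."""
    needle = query.lower()
    return sorted(
        str(note.relative_to(vault))
        for note in vault.rglob("*.md")  # Obsidian notes are plain .md files
        if needle in note.read_text(encoding="utf-8").lower()
    )
```

The write/organise half (creating, linking and refactoring notes) is what the MCP layer adds on top of this.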
n8n adds telemetry for dynamic credentials and broad AI Agent fixes
n8n’s April release notes ship UI polish, AI Agent node fixes, custom scopes for Excel and Teams credentials, and (most important for you) telemetry for dynamic credentials, useful when an n8n flow is calling a Claude or OpenAI tool and you need to debug why a particular run failed. n8n’s MCP layer also remains the cleanest way to expose your custom n8n workflows to Claude / Lovable / etc. as tools.
Claude Code shipped 5 releases between April 17 and 22
v2.1.113–v2.1.117. Faster MCP startup, model selection that persists between sessions, /resume performance, inline thinking progress, a native CLI binary, and Windows fixes plus sandbox hardening. The native binary is the headline: startup time drops noticeably and memory footprint is much smaller, making it more viable to leave Claude Code running on a Mac mini for scheduled work.
Microsoft 365 SharePoint AI โ built on Claude
The “AI in SharePoint” Public Preview lets non-technical users plan and build sites, libraries, pages and lists from plain English prompts. The thing worth flagging internally: it is powered by Anthropic’s Claude, not by an OpenAI model. For comms teams who manage SharePoint pages this is an immediately useful tool; for anyone tracking the Microsoft / Anthropic / OpenAI dance, it is more evidence that Microsoft’s “model-agnostic” stance is real.
Perplexity Comet for Enterprise + silent MDM deploy
Comet, Perplexity’s AI-native browser, is now generally available and deployable across an enterprise via Mobile Device Management with no user click-through. Comet Assistant can run autonomous multi-step tasks (book a flight, manage email, fill forms) inside the page. Worth a controlled pilot for the comms team’s media-monitoring workflow, especially given that Bixby / Samsung Galaxy S26 now ship Perplexity APIs at the platform level.
Rowboat: a quietly impressive new “AI work app”
Launched on Product Hunt April 21. Builds a living knowledge graph from your meetings, emails and notes so you can prep for the day, draft emails, build dashboards, automate browser tasks and manage projects with context you don’t have to re-explain each time. Closer to a Things 3 + Drafts hybrid with a context-aware chat layer than to ChatGPT; worth a look for anyone whose stack is already Things 3, Drafts and Obsidian.
The “100-economist analyst” pattern is now a real comms playbook
The Neuron’s profile of the analyst spending $6K/day on Claude to replicate a 100-person research team is overstated, but the underlying pattern is real and directly relevant for crisis-comms teams: spin up a parallel “research desk” of Claude/Gemini agents on a single fast-moving story (e.g., the GDELT pattern of triaging the WHCD shooting in 24 hours). The expense is real but bounded and far smaller than retainer-firm fees.
Tencent Mentions
Bloomberg: Tencent Unveils AI Model in High-Stakes Test for OpenAI Hire
The most-cited piece of the past 24h. Frames Hy3 explicitly as a referendum on the leadership change, particularly on the ex-OpenAI researchers brought in to run the rebuilt Hunyuan team. Sets the narrative most Western journalists will follow.
Caixin: Tencent Unveils First Major AI Model Update Under New Leadership
The most authoritative Chinese-language framing: describes Hy3 as the first major release since the AI Lab was folded into Hunyuan, and confirms Yuanbao has switched its primary model from DeepSeek to Hy3.
Techi: Tencent Hy3 Preview โ First Model From the Hunyuan AI Rebuild in 2026
Most useful technical-but-readable summary: 295B parameters / 21B active / 256K context / fast-and-slow-thinking MoE architecture. Already deployed in Yuanbao, CodeBuddy, WorkBuddy, ima, Tencent Docs, Peacekeeper Elite. Pricing on Tencent Cloud TokenHub.
InfoWorld: Former OpenAI research scientist launches new AI model for Tencent
Personnel-led story, written for the developer / enterprise-IT audience. Names the ex-OpenAI lead and emphasises the rapid 3-month iteration cycle from infrastructure rebuild to public preview.
Yahoo Finance: Tencent Hy3 AI And TVU Cloud Deal Put Valuation In Focus
The investor-facing read. Pairs Hy3 with the Tencent Cloud / TVU Networks live-media partnership (cloud-based live production for global streaming) and argues the valuation case is now AI + cloud, not games.
Simply Wall St: Tencent valuation check as Hy3 marks a major technology upgrade
Retail-investor framing. Useful as a barometer of how the consumer / retail-investor narrative is forming around Hy3 and Tencent’s AI capex.
Bloomberg: DeepSeek V4 delay shows shift to China chips (CCTV-linked account)
Not Tencent-specific, but the broader “Chinese frontier AI tunes itself to Huawei Ascend chips” story is one Tencent comms will get asked about, particularly given that Hy3 is already deployed across the Tencent stack regardless of underlying chips.
Cryptonews: Tencent’s New Hy3 AI Model Is the Most Efficient Chinese LLM No One’s Talking About
Useful counter-narrative: argues Hy3’s low active-parameter count (21B of 295B) makes it materially cheaper to run than DeepSeek V4 for many inference workloads, but it’s getting less coverage because the launch was overshadowed by GPT-5.5 the same week.