Tech / AI / IT Intelligence Briefing
Reporting Period: March 21–22, 2026 | Generated from 151 tweets
Executive Summary
The most significant open-source AI development of the period is the confirmed upcoming release of MiniMax-M2.7 open weights, expected within approximately two weeks, with the company's Head of Engineering confirming the timeline and noting active iteration with a "noticeably better" updated version. Meanwhile, Nous Research's Hermes Agent is gaining significant traction in the local AI community, with users actively migrating from OpenClaw and reporting strong reliability over multi-day runs. In the agentic coding space, competing tools — Cursor, Claude Code, OpenAI Codex, and the emerging OpenCode — are rapidly leapfrogging each other, reflecting an accelerating race for developer mindshare. GPU supply chain stress is emerging as a notable constraint, with reports of shortages across all components needed for deploying GPUs and widespread "nervousness and hoarding." A fringe but noteworthy story involves Terrafab, a reported Tesla/xAI/SpaceX joint venture targeting custom AI chip fabrication, though it remains highly speculative.
Key Events
- MiniMax-M2.7 open weights confirmed ~2 weeks away: MiniMax's Head of Engineering (SkylerMiao7) confirmed the release timeline, noting continuous iteration and a new version already deployed that performs noticeably better on benchmarks. Community enthusiasm is high, with observers calling it "the best model that can fit at home." → link
- Nous Research Hermes Agent reaches 10,000 GitHub stars: Teknium announced the milestone as Hermes Agent sees a surge in community adoption, with users running it for 8+ days without restarts and actively migrating production workloads from OpenClaw. → link
- GPU infrastructure supply chain under stress: @thdxr reports shortages in every component needed to deploy GPUs — including labor — with significant nervousness and hoarding behavior, raising uncertainty about the next 6 months of AI infrastructure capacity. → link
- Agentic coding tool leapfrog race accelerates: @thdxr documents the rapid sequence: Cursor → Claude Code → Codex → Composer 2, illustrating that no single tool's "data flywheel" moat has held. → link
- Terrafab (Tesla/xAI/SpaceX chip fab JV) reported: @TrungTPhan details a reported joint venture producing two custom chip lines — AI5/AI6 for Tesla edge inference (FSD/Optimus) and D3 for SpaceX space data centers — alongside a moon-based electromagnetic mass driver concept. Highly speculative but widely discussed. → link
- OpenCode integrates with AWS Console via Cloud Shell: @thdxr demonstrated that `npx opencode-ai`, launched in AWS Cloud Shell, auto-authenticates with AWS and picks up Bedrock models, enabling natural language AWS management directly in the console. → link
- Ghostling (libghostty demo) built 100% by AI agents: @jezell RT'd @mitchellh confirming that not a single line of the Ghostling demo was written by a human — agents wrote everything, a notable milestone in agentic software development. → link
- AI-driven team topology reshaping engineering orgs: @RealGeneKim documents NRC Health's Dustin Warner reorganizing global engineering squads by spoken language, using AI for cross-language communication instead of human manager intermediaries, with teams now syncing hourly instead of daily. → link
- llama.cpp recommended over Ollama/LM Studio for agent workloads: @sudoingX advises compiling llama.cpp from source for Hermes Agent, arguing that the Ollama and LM Studio abstraction layers introduce debugging complexity and lag behind upstream performance improvements. → link
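For readers unfamiliar with the from-source path @sudoingX describes, a minimal sketch looks like the following. The repository URL and build steps are the standard llama.cpp ones; the CUDA flag, model path, and port are illustrative assumptions (the tweet only specifies server mode on localhost:8080):

```shell
# Build llama.cpp from source and serve an OpenAI-compatible endpoint.
# -DGGML_CUDA=ON assumes an NVIDIA GPU; drop it for a CPU-only build.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j

# Server mode: the same engine Ollama/LM Studio wrap, without the middleman.
# Point Hermes Agent at http://localhost:8080 (model path is a placeholder).
./build/bin/llama-server -m ./models/your-model.gguf --port 8080
```

The point of the advice is that wrappers pin older engine versions; building from source picks up upstream performance work immediately.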
- Cargill AI vision system "CarVe" recovers 55M lbs of meat/year: @TrungTPhan reports on CarVe, a machine vision system trained to identify residual meat on carcasses, recovering 0.5% more meat per animal — worth ~$200M annually — amid 75-year-low cattle herd levels. → link
- Tenstorrent office visit yields bullishness on open AI models: @TheAhmadOsman, after visiting the Tenstorrent office and attending events in San Jose, declared increased conviction that open-source AI "cannot lose." → link
- Local LLM bottleneck education: @TheAhmadOsman clarifies that VRAM size alone is not the bottleneck for local LLM inference — memory bandwidth, PCIe vs. NVLink interconnects, and inference engine choice (vLLM, TensorRT-LLM, SGLang) matter more, and Unified Memory is substantially slower than VRAM. → link
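The bandwidth point can be made with back-of-envelope arithmetic: during decode, each generated token streams roughly the full weight set through memory, so throughput is capped near bandwidth divided by model size. The figures below are illustrative assumptions (a 40 GB quantized model, ~1000 GB/s GDDR vs. ~400 GB/s unified memory), not measurements:

```shell
# Rough decode ceiling (tokens/sec) ~ memory bandwidth (GB/s) / model size (GB).
gddr_bw=1000; unified_bw=400; model_gb=40
echo "GDDR-class ceiling:     $(( gddr_bw / model_gb )) tok/s"
echo "Unified-memory ceiling: $(( unified_bw / model_gb )) tok/s"
```

Same model, same VRAM headroom, but the unified-memory machine tops out at well under half the decode speed — which is the distinction being drawn.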
- pi terminal emulator extension API breaking change incoming: @badlogicgames (Mario Zechner, creator of pi) announced a hard break in pi's extension API to separate business logic from the UI layer, warning existing extension developers to prepare for migration work. → link
- 3D generation research: semantic-first approach: @victormustar RT'd @DanielCohenOr1 highlighting new research that builds 3D shapes from semantic meaning rather than coarse geometry — described as a "refreshing take" on 3D generation. → link
- AI startup accountability critique — the "Delve" case: @juliarturc published a sharp critique of the Silicon Valley "fake it till you make it" playbook, calling out Delve's founders for hiding human labor behind AI feature claims, and indicting the investor ecosystem that funds such ventures without due diligence. → link
Analysis
Patterns:
- Open-source momentum is accelerating. The MiniMax-M2.7 release, Hermes Agent's growth, and community sentiment from @TheAhmadOsman after the San Jose events all point to open-weight models narrowing the gap with proprietary offerings faster than many expected. The community is increasingly operationalizing local models for production agent workloads, not just experimentation.
- OpenClaw is losing ground fast. Multiple independent voices — @sudoingX, @Teknium, and community members — are explicitly recommending migration away from OpenClaw to Hermes Agent. This appears to be a genuine shift in the local AI toolchain, not just marketing noise.
- The agentic coding tool market is winner-take-most, but the winner keeps changing. The Cursor → Claude Code → Codex → Composer 2 leapfrog pattern suggests no current tool has a defensible moat. Developers are switching quickly, and data flywheels alone are not sufficient.
- GPU supply chain stress is an emerging systemic risk. The report of shortages across "every component" — not just silicon — including labor, combined with hoarding behavior, could create a significant capacity constraint on AI deployment timelines over the next two quarters.
- AI is reshaping management layers, not just developer productivity. The NRC Health case is emblematic of a broader pattern: as coding velocity increases, the bottlenecks shift upstream to feedback loops, team topology, and communication. Middle-management translation layers are becoming obsolete.
What to Watch Next:
- MiniMax-M2.7 weights drop: Expected ~2 weeks. Will likely be the most capable open-weight model runnable on consumer hardware at release. Watch benchmark comparisons vs. Qwen, Llama, and Mistral families.
- Hermes Agent vs. OpenClaw migration scale: If the GitHub star trajectory continues and migration guides proliferate, OpenClaw's community position could collapse quickly.
- GPU supply chain data: Any corroborating reports from hyperscalers or hardware vendors on the shortage signals @thdxr is hearing would significantly elevate this concern.
- Terrafab chip fab JV verification: No primary source confirmed. Watch for official announcement from Tesla/xAI/SpaceX.
- OpenCode adoption: The AWS Cloud Shell integration is a low-friction enterprise entry point. Watch for usage reports.
Tweet Feed
🤖 Open-Source Models — MiniMax-M2.7
@SkylerMiao7 · 2026-03-22T13:43
M2.7 open weights coming in ~2 weeks. still actively iterating. just updated a new version on yesterday — noticeably better on OpenClaw.
@TheAhmadOsman · 2026-03-22T16:29
Here we go. Confirmation from MiniMax's Head of Engineering on what I have been saying: MiniMax-M2.7 weights will be opensourced within the next couple of weeks
@TheAhmadOsman · 2026-03-22T01:05
MiniMax-M2.7 weights will be opensourced within the next couple of weeks
@SkylerMiao7 · 2026-03-22T14:36
RT @Mayhem4Markets: MiniMax-M2.7 open weights are coming in just about two weeks! 🥳 It's set to be the most capable local model for its size...
@SkylerMiao7 · 2026-03-22T13:57
RT @0xSero: MiniMax! This is the best model that can fit at home
@victormustar · 2026-03-22T16:52
RT @SkylerMiao7: M2.7 open weights coming in ~2 weeks. still actively iterating. just updated a new version on yesterday — noticeably better...
@Ex0byt · 2026-03-22T13:47
Progress thrives in the open. You had us all worried for a bit — thank you MiniMax_AI!
🧠 Nous Research — Hermes Agent
@Teknium · 2026-03-22T15:07
10,000 Stars on Github - a huge milestone!
@Teknium · 2026-03-22T12:06
If you haven't yet - check out this video by Igor on how Hermes Agent achieves impressive continual learning capabilities through experiential knowledge - and a lot more! https://t.co/eGb4r3bOA9
@Teknium · 2026-03-22T17:39
RT @imranye: been running hermes agent for eight days and I have not had to restart it a single time
@Teknium · 2026-03-22T16:21
RT @izzatraihan: Just set up Hermes Agent to replace my OpenClaw last week, and has been very reliable. Very easy to migrate and set up.
@Teknium · 2026-03-22T12:07
RT @wbic16: I'm live switching my cluster over to Hermes from Openclaw, entirely from my phone lol.
@Teknium · 2026-03-22T22:08
RT @ChikoosJourney: used Hermes to build the db from my json data set and query it - all from telegram
@Teknium · 2026-03-22T16:08
RT @sin_management: today I plugged in My Hermes Agent to native ZO Computer chromium browser via built in MCP.
@Teknium · 2026-03-21T19:43
RT @sudoingX: here someone wrote a complete migration guide from openclaw to hermes agent. 3 production agents migrated in 8 steps.
@Teknium · 2026-03-21T23:45
Hermes Agent about to be a BioHacking guru
@Teknium · 2026-03-21T23:46
This + Hermes Agent would go quite hard for local compute preppers
@sudoingX · 2026-03-22T09:46
if you're trying to run hermes agent with ollama or lm studio, this is probably why it's not working well. compile llama.cpp from source. run it in server mode. point hermes at localhost:8080... compile once. you're running the same engine they're wrapping, just without the middleman.
@sudoingX · 2026-03-22T03:34
if you're using openclaw for your coding sessions, you probably won't make it.
@sudoingX · 2026-03-22T01:34
RT @based_bitcoiner: At the recommendation of @sudoingX I switched from Ollama to llama.cpp today. Significant upgrade with the same old hardware.
💻 Agentic Coding Tools — Codex, OpenCode, Claude Code, Cursor
@thdxr · 2026-