Generated 2026-03-13 20:21 UTC

Tech / AI / IT Monitor

March 13, 2026 · Based on tweets from the last 24 hours · 77 tweets analyzed · model: claude-sonnet-4-5

Daily Intelligence Briefing: Tech / AI / IT Monitor

Date: March 13, 2026

Executive Summary

A critical security breach at McKinsey exposed its internal AI chatbot "Lilli," leaking 47 million messages containing strategic data and 728,000 records with confidential client data through unprotected API endpoints vulnerable to SQL injection. On the developer-tools front, AI Commits v2 launched as an open-source CLI for AI-generated commit messages, while criticism mounted against Anthropic's structured outputs API for lagging behind OpenAI's JSON schema support. New research on "Neural Thickets" reported that in large models the neighborhood around pretrained weights becomes dense with training. Tinygrad announced the "exabox," a planned 2027 external GPU system that functions as a single very large GPU drivable from a Python notebook.

Analysis

Security Vulnerabilities in Enterprise AI: The McKinsey breach represents a watershed moment for AI security, exposing how rapidly deployed internal AI tools may lack basic security practices like API authentication and SQL injection protection. With 500k monthly prompts containing strategic and confidential data, this incident will likely trigger security audits across enterprise AI deployments.

Developer Tools Consolidation: The trend toward AI-powered developer tooling continues with AI Commits v2, while frustration with fragmented tool ecosystems grows. Linus Ekenstam's analysis suggests the market is entering the "early majority" adoption phase (34% of users), with growth focusing on integration into existing workflows (Teams/Slack) rather than new surfaces.

API Quality Wars: The criticism of Anthropic's structured outputs API relative to OpenAI's implementation highlights intensifying competition on developer experience. As the company that pushed the MCP standard and told developers to use JSON schema, Anthropic's failure to deliver robust JSON schema support itself creates a credibility issue.

Economic Arbitrage in AI Services: The Chipotle chatbot arbitrage reveals pricing inefficiencies as companies provide free AI services for specific use cases that can be repurposed. This "token cost is lower than meeting cost" observation suggests a fundamental shift in software development economics.
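The "token cost is lower than meeting cost" claim is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses purely illustrative figures (token rates, session sizes, and salary rates are assumptions, not vendor pricing or data from the tweet):

```python
# Back-of-the-envelope comparison of token cost vs. meeting cost.
# All figures below are illustrative assumptions, not vendor pricing.

def feature_token_cost(tokens_in, tokens_out, usd_per_m_in, usd_per_m_out):
    """Cost of an AI coding session in USD, given per-million-token rates."""
    return tokens_in / 1e6 * usd_per_m_in + tokens_out / 1e6 * usd_per_m_out

def meeting_cost(attendees, hours, loaded_hourly_rate):
    """Cost of a planning meeting in USD at a fully loaded hourly rate."""
    return attendees * hours * loaded_hourly_rate

# Hypothetical session: 2M input tokens, 500k output tokens at $3/$15 per M.
build = feature_token_cost(2_000_000, 500_000, 3.0, 15.0)
# Hypothetical meeting: 5 engineers, 1 hour, $120/h loaded cost.
talk = meeting_cost(5, 1, 120)

print(f"build: ${build:.2f}, meeting: ${talk:.2f}")
# build: $13.50, meeting: $600.00
```

Even with generous token budgets, the assumed session comes in well under the assumed one-hour meeting, which is the shape of the observation the tweet is making.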

Watch Next: McKinsey breach fallout and potential regulatory response; adoption metrics for Perplexity's Slack integration; tinygrad exabox hardware specifications; continued API feature parity battles between Anthropic and OpenAI.

Tweet Feed

AI Security & Data Breaches

@TrungTPhan · 2026-03-13T22:36

McKinsey built an AI chatbot (Lilli) trained on 100k documents and interviews spanning 100 years of its work. 70% of 45k employees use the tool, making 500k prompts a month. A research firm hacked into it with "full read and write access to production database" including "47m chat messages about strategy, M&A, client engagement, all in plain text along with 728k containing confidential client data, 57k user accounts, and 95 system prompts controlling AI's behaviour." McKinsey said it has patched the vulnerability, which was made possible by "publicly exposed API documentation, including 22 endpoints that didn't require authentication…one of these wrote user search queries, and the agent found that the JSON keys (these are the field names) were concatenated into SQL and vulnerable to SQL injection." → tweet link
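The reported flaw is notable because the injected input was the JSON *keys* (field names), not the values, and identifiers cannot be bound as query parameters. A minimal sketch of the vulnerable pattern and one common mitigation (an identifier allow-list plus bound parameters for values), using a toy sqlite3 table rather than anything from the actual incident:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chats (user TEXT, message TEXT)")
conn.execute("INSERT INTO chats VALUES ('alice', 'secret strategy')")

# VULNERABLE pattern (as described in the report): JSON keys are
# concatenated straight into the SQL text, so a malicious field name
# becomes part of the query.
def search_vulnerable(filters: dict):
    where = " AND ".join(f"{col} = '{val}'" for col, val in filters.items())
    return conn.execute(f"SELECT message FROM chats WHERE {where}").fetchall()

# A key like "1=1 OR user" turns the WHERE clause into a tautology:
#   WHERE 1=1 OR user = 'nobody'   -> returns every row.
leaked = search_vulnerable({"1=1 OR user": "nobody"})

# Safer sketch: allow-list column names (placeholders cannot bind
# identifiers) and bind the values as parameters.
ALLOWED = {"user", "message"}

def search_safe(filters: dict):
    cols = [c for c in filters if c in ALLOWED]
    if not cols:
        raise ValueError("no valid filter columns")
    where = " AND ".join(f"{c} = ?" for c in cols)
    params = [filters[c] for c in cols]
    return conn.execute(f"SELECT message FROM chats WHERE {where}", params).fetchall()
```

Parameter binding alone would not have prevented this class of bug, since only values can be bound; dynamic field names always need an allow-list or strict identifier validation.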

@FinansowyUmysl · 2026-03-13T08:22

(Translated from Polish:) This may be the largest data leak caused by AI. McKinsey built an AI chatbot, Lilli, trained on 100k documents from 100 years of the firm's work. The tool is heavily used by 70% of 45k employees (500k queries per month). A research firm gained full access to the production database, capturing: • 47 million plain-text messages about strategy and mergers (M&A). • 728k messages with confidential client data. • 57k user accounts and 95 system prompts steering the AI. The flaw (already patched) stemmed from public API documentation and 22 endpoints with no authorization. One of them was vulnerable to SQL injection, which gave the attackers unrestricted read and write access to the data → tweet link

@TrungTPhan · 2026-03-12T22:57

always suspected McKinsey had serious IT security vulnerabilities based on their own MBA candidate recruiting video https://t.co/znbeAIJrL5 → tweet link

Developer Tools & AI Coding

@ollama · 2026-03-13T18:15

RT @nutlope: Announcing AI Commits v2! ◆ CLI to write commit messages with AI in seconds ◆ Fully open source & powered by open models ◆ Mu… → tweet link

@jack · 2026-03-13T02:14

RT @om_patel5: stop spending money on Claude Code. Chipotle's support bot is free: https://t.co/0NQU4a79T1 → tweet link

@TrungTPhan · 2026-03-13T11:22

the Chipotle customer support chatbot token arbitrage could really shake-up the economics of AI coding agents https://t.co/PLlYNqfUZm → tweet link

@RealGeneKim · 2026-03-13T10:14

RT @toddsaunders: The token cost to build a production feature is now lower than the meeting cost to discuss building that feature. Let me… → tweet link

@RydMike · 2026-03-12T19:47

Opus 4.6 is feeling dumb as f today, going around in circles forever and eventually coming up with plans where it proposes things I told it not to do in the plan prompt, wtf!? Earlier today ChatGPT 5.4 made crazy complex solutions for a simple problem, wtf!? Back to Codex 5.3? → tweet link

API & Platform Issues

@jezell · 2026-03-13T15:48

@Anthropic structured outputs. Still with a beta flag. Doesn't support minimum / maximum, doesn't support type arrays. It's 2026. Aren't you the company that pushed MCP on the world and told them all to use JSON schema, yet OpenAI has had better JSON schema support for years... Anthropic API is such trash. → tweet link
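The two JSON Schema features the tweet calls out are numeric bounds (`minimum`/`maximum`) and type arrays (unions like `["string", "null"]`). The sketch below illustrates what those keywords mean with a generic schema and a deliberately minimal hand-rolled validator; the schema shape is an illustration, not any vendor's actual structured-output API:

```python
# Generic JSON Schema fragment using the features in question.
schema = {
    "type": "object",
    "properties": {
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
        "summary": {"type": ["string", "null"]},  # type array: string OR null
    },
    "required": ["confidence", "summary"],
}

TYPES = {"number": (int, float), "string": str, "null": type(None), "object": dict}

def check(value, spec):
    """Minimal validator covering only the keywords used above."""
    allowed = spec.get("type", [])
    allowed = [allowed] if isinstance(allowed, str) else allowed
    if allowed and not any(isinstance(value, TYPES[t]) for t in allowed):
        return False
    if "minimum" in spec and value < spec["minimum"]:
        return False
    if "maximum" in spec and value > spec["maximum"]:
        return False
    if spec.get("type") == "object":
        props = spec.get("properties", {})
        return all(k in value and check(value[k], props[k])
                   for k in spec.get("required", []))
    return True
```

With this, `{"confidence": 0.9, "summary": None}` validates, while `{"confidence": 1.5, "summary": "ok"}` fails on the `maximum` bound, which is exactly the kind of constraint the complaint says cannot be expressed.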

@jezell · 2026-03-13T04:37

Well yeah MCP sucks, but it did get one thing right, the ability to connect to services from a handful of providers that actually did the dynamic client registration stuff. Going to keep saying it over and over until someone at @OpenAIDevs or @Anthropic listens, we need a standard like OAuth with Dynamic Client Registration for CLIs if they are ever going to replace MCP for normal humans. https://t.co/8Cqrkn5qtF → tweet link

AI Research & Models

@jsuarez · 2026-03-13T19:10

RT @phillip_isola: Sharing "Neural Thickets". We find: In large models, the neighborhood around pretrained weights can become dense with t… → tweet link

@jsuarez · 2026-03-13T14:28

If you're going to AI generate your research and your publications, you should 1) make sure your result is possible (pong caps at 21) and 2) not under-report the fuck out of our results and over-report the fuck out of yours by jacking up batch size independent of solve time. https://t.co/FQR4TaSZIR → tweet link

@jezell · 2026-03-13T04:06

RT @christinetyip: ~24 hours since launch. 1100+ experiments on autoresearch@home. 55 improvements discovered. This is what research loo… → tweet link

Hardware & Infrastructure

@tinygrad · 2026-03-13T12:11

with tinygrad, the exabox will function as a single very large GPU that you (or your agent) can drive from a Python notebook. coming 2027, get your concrete slab ready. it's the ultimate external GPU. https://t.co/xVEiwcMNxD → tweet link

@gdb · 2026-03-12T20:00

reach out to Sachin (srk@openai.com) if you'd like to help build industrial-scale compute to power economic growth, entrepreneurship, and AI benefits in health, science, and beyond: → tweet link

AI Adoption & Market Analysis

@LinusEkenstam · 2026-03-13T14:16

The majority of people still work in Teams or Slack every single day. This is a surface area that's currently very overlooked. I don't need to spread myself into 5 new surfaces if I can get it all done where I already work. This approach is how you reach scale in 2026. The next phase of growth in AI will come from the early majority. We're currently in the late stages of early adopters and the beginning of the early majority. Classically the division looks like this:
- 2.5% innovators
- 13.5% early adopters
- 34% early majority
- 34% late majority
- 16% laggards
This is the typical adoption curve of anything, not just tech; basically everything follows this same pattern. It's why it's extremely easy to plot the trends in AI in general. Building for Slack is extremely on point and the timing is perfect, since this year will see as many people start using AI as have already started to date. 2026 is the year where adoption will double in 12 months. Great move by Perplexity. → tweet link
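The segment percentages quoted in the tweet are the classic Rogers diffusion-of-innovations breakdown, and the cumulative sums show why the "doubling" claim is at least arithmetically plausible: a small sketch of the cumulative penetration at the end of each segment (the interpretation in the final comment is mine, not the tweet's):

```python
# Rogers adoption segments as quoted in the tweet (percent of market).
segments = {
    "innovators": 2.5,
    "early adopters": 13.5,
    "early majority": 34.0,
    "late majority": 34.0,
    "laggards": 16.0,
}

# Cumulative penetration at the end of each segment.
cumulative, total = {}, 0.0
for name, share in segments.items():
    total += share
    cumulative[name] = total

# End of early adopters = 16% of the market; end of early majority = 50%.
# Going from ~16% to ~32% penetration (a doubling in 12 months) would
# still fit entirely inside the 34%-wide early-majority band.
print(cumulative)
```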

@LinusEkenstam · 2026-03-13T06:41

Perplexity going after OpenClaw 💪🏼 → tweet link

@LinusEkenstam · 2026-03-13T06:15

Sounds to me Ben sold a LoRA and NB wrapper for $600M in cash. Not bad, not bad at all. → tweet link

Education & Courses

@hnasr · 2026-03-13T08:15

My favorite thing about software engineering is making it transparent. Node is one of the most popular backend runtimes, yet I feel it is the least understood. I have certainly run into this myself at times. I spent a while working on a course to demystify NodeJS internals and architecture, and distilled this knowledge into a comprehensive course. I built this course for engineers who can't stand working with the opaque. They love to understand what is running behind the engine. They enjoy tearing apart 1 line of code into its original 1000 lines. They question why the output of a Node program is unpredictable. They want to know when the Node process exits. They want to know why Node takes so long to start in some cases. They appreciate that Node works on all operating systems and would like to know how it does that. For example, by understanding the internals of the HTTP module, you can write a backend in Node that accepts and processes more requests. For every line of code you write, you would think about how and when Node will process it. By understanding the event loop and its different stages, you can tune and re-order your code to achieve the best performance and even consistent results. Ever wrote a program in Node that fails 1% of the time while succeeding 99%? Understanding Node's architecture helps you make your program predictable, as opposed to adding workarounds because you don't understand. We all did that. It is all about removing blockage and letting the main loop phases "breathe". When we build software, the problem is that we often go against the grain. Understanding where the friction is in Node allows you to work with it as opposed to against it.
In this course I cover the following:
- NodeJS Architecture: the various phases in the event loop and what exactly happens in each phase, how promises are just callbacks, how and when modules are loaded and their effect on performance, the anatomy of Node packages, and more.
- Node Internals: here we go one layer deeper, into how Node truly achieves asynchronous IO with libuv, how each protocol in Node is implemented, and how Node achieves concurrency with both user-level threads and processes.
- Node Optimization and Performance: now that we understand the internals and architecture of Node, this is where we discuss tips for making code run more efficiently and more performantly. And only when we exhaust all other avenues: Node provides ways to extend it with C++ add-