Yesterday I wrote about concerns from economists that the rapid adoption of AI might mean a significant disruption of the job market. There seemed to be a wary realization that AI was probably going to do away with some jobs permanently, but whether that change would necessarily create a crisis in the marketplace depended on how fast it happened. If it took ten years, the economy would adjust. If it took half that, we might have a problem.
Today, Axios has a story highlighting concerns coming not from economists but from people inside the AI industry, several of whom have recently expressed serious worry about how fast things are moving. For instance, this researcher at Anthropic just quit:
Today is my last day at Anthropic. I resigned.
— mrinank (@MrinankSharma) February 9, 2026
Here is the letter I shared with my colleagues, explaining my decision. pic.twitter.com/Qe4QyAFmxL
His letter reads in part, "The world is in peril. And not just from AI or bioweapons but from a whole series of interconnected crises unfolding in this very moment." In a footnote he mentions that some people are calling it the "poly-crisis."
Another researcher at OpenAI also expressed some concern about where things were heading as AI became more competent.
Today, I finally feel the existential threat that AI is posing. When AI becomes overly good and disrupts everything, what will be left for humans to do? And it's when, not if.
— Hieu Pham (@hyhieu226) February 11, 2026
Another OpenAI researcher had an opinion piece in the NY Times explaining why she recently quit.
I don’t believe ads are immoral or unethical. A.I. is expensive to run, and ads can be a critical source of revenue. But I have deep reservations about OpenAI’s strategy.
For several years, ChatGPT users have generated an archive of human candor that has no precedent, in part because people believed they were talking to something that had no ulterior agenda. Users are interacting with an adaptive, conversational voice to which they have revealed their most private thoughts. People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.
You can see the temptation here. What if the ads weren't just targeted at you, or even at something you mentioned within range of a phone, but at what the AI can actually find out about you, or simply at what you've told it?
That opinion piece led to this observation from a tech investor.
I’ve never seen so many technologists state their concerns so strongly, frequently and with such concern as I have with AI
— @jason (@Jason) February 11, 2026
It’s happening FASTER and WITH GREATER IMPACT than anyone anticipated
IT’S RECURSIVE
IT’S ACCELERATING https://t.co/f6Qaek7xS4
But the post that really seems to have people worked up is this one by an entrepreneur named Matt Schumer. It's titled "Something Big is Happening" and it has been liked nearly 100k times in two days. The whole thing is very long but here's a sample:
Think back to February 2020.
If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most of us weren't paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone told you they were stockpiling toilet paper you would have thought they'd been spending too much time on a weird corner of the internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn't have believed if you'd described it to yourself a month earlier.
I think we're in the "this seems overblown" phase of something much, much bigger than Covid.
I've spent six years building an AI startup and investing in the space. I live in this world. And I'm writing this for the people in my life who don't... my family, my friends, the people I care about who keep asking me "so what's the deal with AI?" and getting an answer that doesn't do justice to what's actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I've lost my mind. And for a while, I told myself that was a good enough reason to keep what's truly happening to myself. But the gap between what I've been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.
So what is happening? Schumer says AI has crossed a threshold, and he's seen it firsthand.
For years, AI had been improving steadily. Big jumps here and there, but each big jump was spaced out enough that you could absorb them as they came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last... it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.
Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch... more like the moment you realize the water has been rising around you and is now at your chest.
I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.
Schumer says he hears all the time from people who tried using AI in 2023 or 2024 and thought it was a neat toy but not particularly useful. He says many of them have no idea how much things have changed.
The models available today are unrecognizable from what existed even six months ago...
Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone. The people paying for the best tools, and actually using them daily for real work, know what's coming.
And what is coming, according to him, are AI models that require less and less human help.
There's an organization called METR that actually measures this with data. They track the length of real-world tasks (measured by how long they take a human expert) that a model can complete successfully end-to-end without human help. About a year ago, the answer was roughly ten minutes. Then it was an hour. Then several hours. The most recent measurement (Claude Opus 4.5, from November) showed the AI completing tasks that take a human expert nearly five hours. And that number is doubling approximately every seven months, with recent data suggesting it may be accelerating to as fast as every four months.
But even that measurement hasn't been updated to include the models that just came out this week. In my experience using them, the jump is extremely significant. I expect the next update to METR's graph to show another major leap.
If you extend the trend (and it's held for years with no sign of flattening) we're looking at AI that can work independently for days within the next year. Weeks within two. Month-long projects within three.
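The projection Schumer describes is just compound doubling. As a minimal sketch (using the post's cited figures of a roughly five-hour task horizon and a doubling time of about seven months; the function and its numbers are illustrative assumptions, not METR's actual methodology):

```python
# Illustrative extrapolation of an AI "task horizon" that doubles at a
# fixed rate. Starting point (~5 hours) and doubling time (~7 months)
# are the figures cited in the post, not measured data.

def projected_horizon_hours(start_hours, doubling_months, months_ahead):
    """Task horizon after `months_ahead` months of steady doubling."""
    return start_hours * 2 ** (months_ahead / doubling_months)

START_HOURS = 5.0      # ~5 hours (Claude Opus 4.5, per the post)
DOUBLING_MONTHS = 7.0  # the slower of the two doubling rates cited

for months in (12, 24, 36):
    hours = projected_horizon_hours(START_HOURS, DOUBLING_MONTHS, months)
    workdays = hours / 8  # convert to 8-hour human workdays
    print(f"{months} months out: ~{hours:.0f} hours (~{workdays:.0f} workdays)")
```

Run with these inputs, the trend lands at roughly two workdays of expert labor in a year, about a workweek and a half in two years, and around a month of work in three, which is where the "days, then weeks, then month-long projects" framing comes from. The faster four-month doubling rate would get there sooner.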
One sign that AI progress is going to keep accelerating is the fact that AI is now being used to code the next generation of AI. He says researchers call this process an "intelligence explosion," and some believe it has already started.
So what's the bottom line? Predictions that AI is going to replace a lot of jobs may be underselling just how many and how quickly this could happen.
Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he's being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It'll take some time to ripple through the economy, but the underlying ability is arriving now.
Finally, he moves on from what this means for your job and talks about what it means for entire nations and the world.
Imagine it's 2027. A new country appears overnight. 50 million citizens, every one smarter than any Nobel Prize winner who has ever lived. They think 10 to 100 times faster than any human. They never sleep. They can use the internet, control robots, direct experiments, and operate anything with a digital interface. What would a national security advisor say?
[Anthropic CEO Dario] Amodei says the answer is obvious: "the single most serious national security threat we've faced in a century, possibly ever."
He thinks we're building that country. He wrote a 20,000-word essay about it last month, framing this moment as a test of whether humanity is mature enough to handle what it's creating.
That's certainly an eye-opener, but it seems to me the real worry isn't that we build it but that we don't. Because China is racing ahead as fast as it can, stealing every bit of our technology along the way, and probably imagining the perfect autocratic state. If we're not ahead in this race, we're all in danger from whoever is.
I guess we'll see very soon how accurate this prediction is. It makes me want to try out the latest version of AI tools to see just how good they are.