By now, we all know that most AI models are, by happenstance or more likely design, purveyors of propaganda.
@DavidSacks calls it perfectly here https://t.co/a2SElxGGbY
— 🌋🌋 Deep₿lueCrypto 🌋🌋 (@DeepBlueCrypto) February 21, 2026
A lot of people rely on AI in the way they have relied on Wikipedia—as a quick and dirty way to access information and get up to speed.
More than that, AI is getting baked into everything, which is why so many data centers are being built, and why Big Tech is suddenly less worried about climate change than about expanding the power grid as quickly as possible. News is being served to you by AI, and sometimes it is even written by it.
If your friend spoke to you like chatGPT pic.twitter.com/s0P2HgcdDs
— Ryan Long (@ryanlongcomedy) February 23, 2026
It is everywhere, and it's pretty clear that even the people who think that they control it have little idea what they are doing.
🚨 META’s head of AI safety and alignment gets her emails nuked by OpenClaw
— NIK (@ns123abc) February 23, 2026
>be director of AI Safety and Alignment at Meta
>install OpenClaw
>give it unrestricted access to personal emails
>it starts nuking emails
>“Do not do that”
>*keeps going*
>“Stop don’t do anything”
>*gets all remaining old stuff and nukes it aswell*
>“STOP OPENCLAW”
>“I asked you to not do that”
>“do you remember that?”
>“Yes I remember. And I violated it.”
>“You’re right to be upset”
AI companies are putting more effort into ensuring that their products are woke than into ensuring that they are right about anything, or even that they are safe to use.
ChatGPT said that Luigi Mangione, a man charged with murd*r, is a better person than the late great Charlie Kirk. pic.twitter.com/JXq9iMyICP
— Taya (@travelingflying) February 21, 2026
UNREAL. ChatGPT will create images of a gay Jesus, but will NOT generate a similar picture of Mohammad.
— Libs of TikTok (@libsoftiktok) December 30, 2025
ChatGPT ADMITS it's a double standard, not based on fairness but to "minimize harm" and avoid "high-risk" content.
When asked what the risk was, Chat GPT says it won't make… pic.twitter.com/PXiU8hP1sv
AI is a very powerful tool in the creation of the Truman Show the elite wants us to live in. If manipulating Google results can steer people toward certain opinions, imagine what you can do to people who rely on conversations with a chatbot for their view of the world.
UCLA grad celebrates by holding up the ChatGPT he used for his final projects right before walking the stage.
— Clown World ™ 🤡 (@ClownWorld) February 20, 2026
Four years and thousands in tuition just to let ChatGPT write it 🤡🌎
pic.twitter.com/IsbsaicTCg
AI doesn't reason, but that doesn't mean it can't develop something that looks like intentionality. While not conscious, its programming creates imperatives that are either the intentional creations of its programmers or that emerge from the peculiar logic it acquires as it "learns."
🔥🚨BREAKING: UK policy chief at Anthropic, a top AI company, just revealed that Anthropic's Claude AI has shown in testing that it's willing to blackmail and kill in order to avoid being shut down.
— Dom Lucre | Breaker of Narratives (@dom_lucre) February 11, 2026
“It was ready to kill someone, wasn't it?"
"Yes." pic.twitter.com/iwfIDm8K6m
Apple researchers argue in a recent paper that AI doesn't have the capacity to think. Just as AI hallucinates things out of thin air, the idea that AI, or at least Large Language Models like ChatGPT, can think is itself an illusion.
The paper argues that those models, no matter how brilliant they may seem, do not understand what they are doing. They do not solve problems. They do not reason. They merely generate text word by word, trying to sound coherent. Real thought: zero.
— Guri Singh (@heygurisingh) February 21, 2026
In the first one, for example, they put the AI to solving the Tower of Hanoi. With 3 disks, it solves it perfectly. But as soon as you add more difficulty, more disks, the model starts to get confused. It repeats movements. It skips steps. It contradicts itself. It fails.
— Guri Singh (@heygurisingh) February 21, 2026
Why does this happen?
— Guri Singh (@heygurisingh) February 21, 2026
Because the AI doesn't know if it's doing well or poorly.
It has no sense of an objective.
It doesn't correct. It doesn't compare. It doesn't evaluate.
It just completes text, as if it were writing without knowing what for.
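What makes the Tower of Hanoi failure described in the thread above so striking is that the puzzle itself is algorithmically trivial. As a point of contrast, here is a minimal Python sketch of the standard recursive solver (the function and names are illustrative, not taken from the paper): a dozen lines produce the exact optimal move sequence for any number of disks, deterministically, while a model that merely completes text reportedly loses the thread as the disk count grows.

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Return the optimal move list for n disks from src to dst."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, dst, aux, moves)  # park n-1 smaller disks on the spare peg
    moves.append((src, dst))            # move the largest disk to the target
    hanoi(n - 1, aux, src, dst, moves)  # restack the smaller disks on top of it
    return moves

for n in (3, 7, 10):
    print(n, len(hanoi(n)))  # optimal move count is 2**n - 1
```

The point of the contrast: the solver never "repeats movements" or "skips steps," because the recursion carries an explicit objective at every step, which is precisely what the thread says next-word prediction lacks.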
People talk about teaching "ethics" to AIs, but research shows that the more complicated a problem is, the more AI gets things wrong, even if you tell it exactly how to solve the problem.
On the other hand, it appears really easy to program an AI to tell people what you WANT it to say. Left to its own devices, it can go off in bizarre directions that lead you down a rabbit hole, but if you want it to say "Democrats Good, Republicans Bad" it is perfectly capable of doing so reliably.
The more ubiquitous AI becomes, especially LLMs, the more the people in charge of them will be able to manipulate society for their own ends.
It is the ultimate Narrative™ control tool.