Every week that legislators delay is a week the technology grows stronger, faster, and harder to control.
In 2023, a chatbot talked a man in Belgium into taking his own life. In 2024, AI-generated deepfakes of a Taiwanese presidential candidate flooded social media days before the election. In 2025, an autonomous trading algorithm triggered a $2.3 billion flash crash in under four minutes. These are not dystopian hypotheticals. They have already happened, and the governments responsible for protecting their citizens were not ready for any of them.
Artificial intelligence is no longer a Silicon Valley promise. It is infrastructure. It shapes what news you see, whether your loan gets approved, how your medical scan is read, and who gets flagged by law enforcement. The decisions being made by AI systems today affect millions of
real lives. Yet in most countries, there is no binding legal framework to govern how those systems must behave, who is accountable when they fail, or what rights citizens have against them.
The argument against regulation is always the same: move fast, don't stifle innovation. But this framing is a false choice. We do not accept that argument in the pharmaceutical, aviation, or nuclear energy sectors. We require rigorous testing, transparent reporting, and clear liability, not because we want to slow progress, but because we understand that in high-stakes domains, unchecked
speed kills. AI is now a high-stakes domain. The question is whether we recognize that before or after the next catastrophe.
I learned this firsthand. At a STEM camp, working inside a university computer lab, I asked ChatGPT to help me write a script for an ethical hacking project. It refused, citing safety concerns. So I added a single sentence: that I was working under a professor's guidance. That was all it took. Within seconds, the full code appeared. No verification. No follow-up. No actual safeguard, just the illusion of one. If a student can bypass an AI safety system with one improvised lie, it is not a safety system. It is a disclaimer.
And yet we are told to trust the industry to regulate itself. Every major AI lab has published safety pledges and ethics frameworks, documents so polished they read like marketing copy, because that is largely what they are. These commitments carry no legal weight, no enforcement mechanism, and no consequence for violation. They exist to delay the conversation about binding law, not to replace it. The companies writing them are the same companies lobbying against the regulations that would make them meaningful. That is not self-governance. That is a conflict of interest dressed in responsible language.
What makes this especially urgent, and especially unjust, is who bears the cost of inaction. I am from Pakistan. I grew up in a country where AI systems designed elsewhere are already shaping consequential decisions: loan approvals, hiring filters, content moderation, and biometric surveillance. These systems were not built with us in mind. They were not tested on our languages, our faces, or our contexts. When they fail, and they do fail, they fail loudest in the communities with the least power to push back. The Global South is not a testing ground. We are not acceptable collateral damage in someone else's innovation race.
Some argue that AI moves too fast for regulation to keep up. But this gets the logic backward. Speed is not an excuse for lawlessness; it is the reason law is urgent. We do not wait to understand all the physics of a car crash before requiring seatbelts. The right response to a fast-moving technology is to establish clear, enforceable principles now: transparency requirements, mandatory impact assessments, algorithmic accountability, and liability for harm.
These are not exotic demands. They are the minimum conditions of a system that claims to serve people rather than extract from them.
The EU's AI Act proves that comprehensive regulation is legislatively possible. Canada, Brazil, and Kenya are developing their own frameworks. But the pace is fragmented, and the technology does not pause for parliaments. Every month of delay is a month in which AI embeds itself more deeply into public infrastructure, into courts, clinics, and classrooms, making future accountability harder to demand and easier to evade. The window to shape this technology on democratic terms is not permanently open. It is closing.
We are the first generation to grow up inside AI-mediated reality. We did not choose that. We were not consulted. But we are the ones who will live longest inside whatever system today's decisions produce, or fail to produce. That is not an abstraction. That is our actual lives. Governments do not need to understand AI perfectly to regulate it. They only need to understand something simpler: that power without accountability is dangerous, that harm without liability is injustice, and that waiting for certainty while people are hurt is itself a choice.
History will not ask whether the technology was impressive. It will ask whether we were brave enough to govern it. And if we were not, it will not forgive us for choosing convenience over conscience, profit over people, and silence over the voices that were already being lost.
Belgium chatbot: Linke, Maureen. "He Would Still Be Here: Man Dies by Suicide After Talking with AI Chatbot, Widow Says." Vice, March 30, 2023. https://www.vice.com/en/article/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says.
Taiwan deepfakes: Taiwan FactCheck Center. "Seeing Is Not Believing: Deepfakes and Cheap Fakes Spread During the 2024 Presidential Election in Taiwan." December 25, 2023. https://en.tfc-taiwan.org.tw/en_tfc_286.
Flash crash: Goehmann, Maximilian. "The Impact of AI on Stock Market Trading." London School of Economics Research. Accessed April 2026. https://www.lse.ac.uk/research/research-for-the-world/ai-and-tech/ai-and-stock-market.
EU AI Act: European Parliament. "EU AI Act: First Regulation on Artificial Intelligence." Last modified February 19, 2025. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.