The AI Apocalypse Might Not Come From the Tech, But From Congress

America is facing a crisis in AI regulation. AI didn’t quietly slip into our lives; it burst through the door. When OpenAI released ChatGPT to the public, it opened the floodgates. Suddenly, dozens of powerful AI tools were available to anyone with an internet connection. You can use them to write, code, generate art, or spin up fake videos that are nearly indistinguishable from reality.

But with this power has come chaos. Deepfakes mislead the public, while well-intentioned uses of AI can backfire with real-world consequences. In 2023, students at Texas A&M University-Commerce were threatened with failing grades after an instructor used ChatGPT to “detect” cheating, only to be proven wrong when students produced Google Docs timestamps as evidence.

Alon Yamin, the CEO of Copyleaks, understands this all too well. “When AI detectors are used in education without proper explanation or context, it can lead to false accusations and undue stress on students,” he says.

As AI makes its way into classrooms, workplaces, and every corner of daily life, regulation is more important than ever. However, America is now on the brink of a ten-year ban on state-level AI rules. Some experts warn that this move could leave millions unprotected while the technology races ahead.

America’s AI regulation moratorium

Regulating new tech isn’t a new challenge. Take drones, for example. Before consumer-grade drones took off, there were hardly any rules about where or how they could be flown. But as incidents mounted, from people flying into restricted airspace to spying on neighbors, the FAA stepped in. The agency now requires drone registration and sets clear boundaries for where and how drones can fly. It’s a textbook example of how governments respond to fast-moving technology.

With AI, the pace is faster than ever, but regulation has struggled to keep up. Even as AI systems make their way into our lives, the government’s response has lagged behind. Now, instead of racing to catch up, lawmakers may be putting on the brakes.

Buried in President Donald Trump’s sweeping “big, beautiful bill” is a provision that would bar states and local governments from enacting or enforcing any AI regulations for a full decade. If passed, it would freeze state-led efforts to address AI’s risks and hand all regulatory power to Washington. State rules on everything from deepfakes in elections to AI in hiring, housing, and education would become unenforceable.

Supporters of the moratorium include some of Silicon Valley’s biggest names. They argue that a “patchwork” of state laws would create compliance headaches for tech companies, slow innovation, and threaten America’s lead over global rivals like China.

Sen. Bernie Moreno, a Republican from Ohio, told Congress, “AI doesn’t understand state borders, so it is extraordinarily important for the federal government to be the one that sets interstate commerce. You can’t have a patchwork of 50 states.” Microsoft President Brad Smith also echoed the need to “give the country time” to let federal lawmakers set the rules.

Why the moratorium could be a problem

Critics of the ban warn that this approach is dangerous. More than 260 state lawmakers from all 50 states have signed a letter opposing the ban, arguing it would tie their hands. They point out that states have often acted faster and more nimbly than Congress, and that many existing laws, from deepfake labeling before elections to data privacy requirements, could be wiped out.

South Carolina Attorney General Alan Wilson doesn’t dispute the power and potential of AI. However, he isn’t a fan of banning states from making their own AI regulations. “AI brings real promise, but also real danger, and South Carolina has been doing the hard work to protect our citizens. Now, instead of stepping up with real solutions, Congress wants to tie our hands and push a one-size-fits-all mandate from Washington without a clear direction. That’s not leadership, that’s federal overreach.”

Dario Amodei, CEO of Anthropic, writes in a New York Times opinion piece: “A.I. is advancing too head-spinningly fast. I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off. Without a clear plan for a federal response, a moratorium would give us the worst of both worlds—no ability for states to act, and no national policy as a backstop.”

The risks of waiting for federal AI regulation

Debates over AI regulation in America can feel abstract, but for many people, waiting isn’t an option. The Texas A&M case isn’t the only example where well-intentioned uses of AI have backfired with real-world consequences.

Consider Amazon’s experiment with an AI-powered recruiting tool, revealed in 2018. The company hoped to speed up hiring, but the system “learned” from a decade of mostly male resumes and taught itself to penalize applications that mentioned women’s colleges or achievements like “women’s chess club captain.” Instead of fixing bias, the AI quietly amplified it. Amazon scrapped the project, but only after real harm was done.

The risks go beyond hiring. When Detroit deployed AI-powered facial recognition, the technology led to multiple false arrests, including cases where innocent people were jailed based solely on a machine’s match.

As Yamin puts it, “My biggest concern is the normalization of opaque, unchecked AI systems making decisions that impact people’s lives, especially in education, employment, and access to services. If we don’t prioritize transparency, fairness, and oversight now, we risk embedding systemic biases and misinformation into tools that scale globally.”

What’s at stake if states can’t act on their own?

States haven’t just sat on their hands while AI raced ahead. Across the country, local lawmakers have stepped in to fill the regulatory void, passing some of the first laws in the world to address AI’s new risks.

In South Dakota, lawmakers recently passed a bill requiring labels on political deepfakes in the run-up to elections, protecting voters from being misled by convincing fake videos and audio clips.

California’s landmark privacy law, the CCPA, has become a de facto national standard for how companies collect and use personal data, including data used to train AI systems. In New York, new rules require transparency from companies that use AI to screen job candidates. The goal is to root out bias and give rejected applicants a fair shot at answers.

These efforts may not be perfect, but they show how states can move faster and more flexibly than Congress. As South Dakota state Sen. Liz Larson put it, “I could understand a moratorium, potentially, if there was a better alternative that was being offered at the federal level. But there’s not. It’s irritating. And if they’re not going to do it, then we have to.”

Yamin thinks a middle-ground approach might be best. “A hybrid approach, where the federal government sets a strong baseline and states have room to adapt or lead in specific areas, would allow for both innovation and accountability.”

Conclusion

America stands at a crossroads in the age of artificial intelligence. The debate over who should write the rules—Washington or the states—isn’t just a fight over legal technicalities. It’s about how quickly and thoughtfully we can respond to technology that is already shaping lives and jobs.

If the ten-year moratorium on state-level AI regulation becomes law, it won’t just be a delay—it will be a gamble that federal action will arrive in time and be impactful enough to make a difference. As we’ve seen with deepfakes, biased hiring algorithms, and wrongful arrests, waiting for a one-size-fits-all solution can leave real people exposed to real harm.

We shouldn’t have to rely on voluntary promises or wait for Congress to catch up while the pace of AI only accelerates. America needs a smarter, more agile approach to AI regulation: one that lets states continue to innovate and protect their residents while working toward strong, clear national standards.
