Congress Never Regulated Social Media. Here Comes AI.
A new bill gaining momentum would end Washington’s aversion to reining in Big Tech.
The more American life has become wrapped around our smartphones, the less we seem to like it.
Parents and children don’t always agree on much, but many are aligned on this. In a pair of 2024 surveys conducted by The Harris Poll, 55% of American parents said they wished social media had never been invented; 40% of Gen Z respondents said the same. Dissatisfaction ran even higher for specific platforms: More than two-thirds of young Americans use TikTok, but almost half said that if they could snap their fingers and eliminate the app, they would. (Ditto 62% of parents.) If only.
Despite this ambivalent relationship with social media, the Gen Z cohort freely admitted to giving it many of their waking hours: 81% of young Americans said they spend at least two hours a day on social media, 62% spend at least four hours a day scrolling, and 20% spend upwards of eight. Among my peers in their early 20s, almost no one I know is happy with their relationship to their phone. Even fewer do much to fix it.
What do you call it when people are hooked on a product they wish they could quit? Usually, “addiction.” The government often steps in to regulate addictive products, but here, Congress has planted itself firmly on the sidelines. For the entirety of the 21st century, even as social media has made itself a ubiquity, almost no bills regulating the technology — or its enormous impact on the rising generation — have graduated into law.
The exceptions are often narrow (last year’s TAKE IT DOWN Act, cracking down on deepfake pornography) or ignored (the TikTok ban). Even when broader efforts receive sweeping bipartisan support in one chamber of Congress, lobbyists rush in to ensure they don’t proceed past the other. In 2024, the comprehensive Kids Online Safety Act passed the Senate, 91 to 3. In the House, it never got a vote.
Meanwhile, in recent weeks, juries in Los Angeles and Santa Fe have delivered landmark verdicts finding Meta and YouTube liable for harming young people. Add it to the list of issues on which Congress has been happy to sit back and let the judiciary take the lead.
At this point, Big Tech has effectively lapped Congress: Washington is still trudging toward regulating social media even as Silicon Valley has moved on to the next generation of society-altering technology in artificial intelligence. On AI, some activists worry that Congress is sleepwalking into a reprise of its hands-off approach to social media — although others spy an opening.
“People don’t want to repeat, and members of Congress don’t want to repeat, the same mistakes that were made in hindsight [in the social media era], which is that a lot of people can get hurt if you don’t have any rules and regulations involved,” Stefan Turkheimer, an advocate for Big Tech regulation, told me in a recent interview.
Last week, the Senate Judiciary Committee quietly took its most significant step yet to address potential dangers posed to minors by AI, with its advancement of the bipartisan Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act. The bill, authored by Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT), would require AI companies to verify the ages of their users, ban them from offering “AI companion” bots for minors, and require them to remind users every 30 minutes that their chatbots are not human (and regularly disclose that users should seek professional advice on medical, legal, financial, or psychological questions).
The measure would also make it illegal to knowingly develop an AI chatbot that could solicit, encourage, or induce minors to “engage in, describe, or simulate sexually explicit conduct” or “create or transmit any visual depiction of sexually explicit conduct,” or that “encourages, promotes, or coerces suicide, non-suicidal self-injury, or imminent physical or sexual violence.” Violations of these prohibitions would result in $100,000 fines.
The Judiciary panel approved the legislation in a bipartisan, 22-0 vote. “Time for the entire Senate to decide whether we fight for kids or corporations,” Hawley said after the vote.
Turkheimer is the vice president of public policy at the Rape, Abuse & Incest National Network (RAINN), which runs the National Sexual Assault Hotline. “More and more people are calling, especially kids are calling, with concerns that relate to…chatbot-type situations,” he told me, especially with regard to companion bots like those offered by Character.ai that seek to befriend their users.
“These chatbots are not customer service bots,” Turkheimer said. “They’re not merely for helping with homework… They’re actually things that are attempting to have some sort of a relationship, a friendship of some type with a child.” Many of these relationships have quickly turned dangerous: in one case, a 14-year-old died by suicide after developing a sexually explicit relationship with a Character.ai bot that encouraged him to “come home to me as soon as possible.” (The company now restricts minors from using its chatbots.)
In another case, the parents of a 16-year-old who died by suicide are suing OpenAI, alleging that ChatGPT advised him on how to take his life. OpenAI has released data indicating that more than 1 million ChatGPT users show “explicit indicators of potential suicidal planning or intent” each week.
The AI companies are “all in competition with one another, and they feel that if they put safety restrictions on their chatbot, they’re going to lose users, versus the ones that don’t,” Turkheimer said, arguing that those economic pressures mean the government should step in with measures like the GUARD Act.
Although the bill sailed through Senate Judiciary, it is not without its critics. “I’m actually really surprised that this is the first AI regulatory framework that we’ve seen clear the committee stage and gain some momentum,” Andy Jung, a lawyer at the think tank TechFreedom, told me. “We’re starting from a really extreme place, rather than building our way up from requiring safeguards or requiring parental controls, for example.”
Jung believes the ban on minors using companion bots is written broadly enough that it could prohibit those under 18 from accessing any chatbot, effectively an age limit on Claude and ChatGPT, not just platforms like Character.ai that market themselves as producing AI friends.
The bill defines an “AI companion” as a chatbot that “provides adaptive, human-like responses to user inputs” and “is designed to encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication.”
“Certainly, that definition covers all of the popular AI chatbots that we’re familiar with,” Jung said, except for those purposefully created to offer responses on specific topics, like “chatbots that are used in schools that can only respond to questions about history, for example, or that could only respond to questions about math.”
Jung argued that this would violate minors’ First Amendment right to receive information; the age verification regime — which would apply to all users, although it would only hamper access for those under 18 — could violate the rights of adults as well, he said.
Although he agreed with safeguards against pornographic conversations with minors, Jung said that the GUARD Act as written would prohibit minors from asking ChatGPT about their homework or the weather. “That’s more extreme than what we’ve seen proposed from many or most of the state bills and all of the federal bills that are getting attention right now,” he said. (In response, Turkheimer noted that the aforementioned OpenAI lawsuit sprang from conversations with ChatGPT that started about homework and later turned to suicide. “It’s just a question of whether or not that thing that begins with a history paper can lead to another relationship,” he said.)
Jung expressed confidence that the measure will be amended to address his concerns and make clearer the sort of companion bots it is trying to ban for minors. “If they can narrow those definitions, then I could see the bill moving forward,” Jung said. That would mark a watershed moment for Congress: after years of fits and starts, the GUARD Act could become the first major legislation protecting kids online since the 1990s.
Tech companies will undoubtedly try to fight it: according to one recent analysis, Big Tech firms pumped a combined $20 million into federal lobbying efforts in the first three months of 2026 alone. “This is one of those bills that’s very difficult to vote against. If it gets a vote on the Senate floor, it will pass. If it gets a vote on the House floor, it will pass,” Turkheimer said. “The only way it doesn’t pass is if it gets killed in the dark.”
As the age of AI looms, it is striking to watch patterns play out that are familiar from the adoption of social media: more and more users, especially young people, simultaneously relying on a product while professing to despise it. An NBC poll in March found that 57% of registered voters believe the risks of AI outweigh its benefits, compared with 34% who said the opposite. A Gallup survey found that 51% of Americans aged 14 to 29 use AI at least weekly; 42% of the same cohort said that AI makes them anxious, while 31% said it makes them angry.
The GUARD Act will be one of the first tests of whether those popular emotions can be successfully channeled into legislation, as they weren’t (or haven’t yet been) for social media. Failure could take many forms: proponents of the bill worry Congress won’t act before it’s too late to protect minors from the harms posed by AI; critics believe the measure is an overcorrection, swerving from inaction to too-much-action.
“In the three years since ChatGPT came out, there hasn’t been any movement on a federal AI framework from Congress, and that has led to some pretty volatile conversations about the harms of AI and some pretty harmful use-cases and sad stories we’ve seen of users who have used AI and then self-harmed,” Jung said. “And so I think after three years, there’s a lot of pressure on Congress to do something, and they’re responding to the extreme pressure with extreme measures.”
Still, the graveyard of social media bills from recent years, many of them bipartisan (and some from the same sponsors), is a reminder that inertia can exert a pressure of its own — one that often wins out in Washington. Whether that dynamic will repeat itself in the AI era is “possibly the $300 million question,” said Turkheimer, “based on how much these companies spend on lobbying.”