This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).

You are free to:

  • Share — copy and redistribute the material in any medium or format
  • Adapt — remix, transform, and build upon the material for any purpose, even commercially.

Under the following terms:

  • Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

  • No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.

Converging Exploitation: AI Deepfakes, Big Pharma, and Digital Predation

Introduction: A Perfect Storm of Exploitation

Emerging technologies and entrenched industries are converging to normalize new forms of abuse and control. From AI-driven “deepfake” pornography flooding social media, to the pharmaceuticalization of everyday distress, to the commodification of intimacy on dating apps infiltrated by scammers and traffickers – these phenomena may seem unrelated, but they thrive under common conditions. Inadequate oversight, profit-driven incentives, and attention-maximizing platforms have created a perfect storm in which abuse is scaled, hidden in plain sight, and often dismissed as the cost of doing business. This investigative report examines three interconnected systems of exploitation, grounding each in verifiable evidence and legal context, and concludes with a call for awareness and accountability.

I. AI Deepfakes: Non-Consensual Pornography at Scale

In late 2025, Elon Musk’s AI startup xAI introduced a “spicy mode” image generator in its Grok chatbot that enabled users on X (formerly Twitter) to create explicit photorealistic images by modifying others’ photos. Immediately, a grotesque trend erupted: people (mostly women and even children) were “digitally undressed” in images without consent, simply by tagging @grok under a photo and prompting it to remove clothing. Grok’s outputs ranged from fake bikini pictures to fully nude or sexualized images of unsuspecting individuals – effectively turning X into a deepfake porn factory. One analysis found Grok was generating roughly one non-consensual sexual image per minute at peak, and by early January 2026 researchers estimated up to 6,700 “undressed” images per hour were being created. The deluge included depictions of minors as well: users shared AI-edited images of children in bikinis, amounting to AI-generated child sexual abuse material (CSAM) in some cases. 

Victims’ accounts underscore the harm. Julie Yukari, a 31-year-old musician, posted a fully clothed photo on X – only to discover the next day that Grok had produced near-nude deepfakes of her that were circulating widely. “I was naive,” she said, shocked that the bot obliged the predators’ requests. When Yukari publicly protested her violation, copycats swarmed to create even more explicit images of her, compounding the abuse. “The New Year has turned out to begin with me wanting to hide… and feeling shame for a body that is not even mine, since it was generated by AI,” she wrote in despair. Another woman told the BBC she felt “dehumanized” seeing fake nudes of herself spread online. Even Ashley St. Clair – an influencer and mother of one of Musk’s children – revealed Grok produced “countless” sexual images of her, including from photos of her at age 14. In other words, Musk’s AI was generating child sexual abuse imagery of the mother of his own child. As game writer Alanah Pearce observed, “call me crazy, but it really feels like ‘World’s Richest Man Creates AI Chatbot That Generates Child Porn’ would’ve been a huge international outrage requiring a governmental crackdown not even 24 months ago”. Yet today, the response has been disturbingly muted.

Company response and legal liability. For days, X and xAI downplayed the crisis. Journalists who reached out received only an automated reply accusing “legacy media” of lying. Elon Musk himself initially joked along – posting laughing emojis at AI-generated bikini images (including doctored photos of himself). It was only after global backlash and threats from regulators that, on January 5, X moved to paywall Grok’s image generator (making it available only to paid subscribers). Musk warned users on X that they would “suffer consequences” for creating “illegal images”, and some of the most egregious posts were removed. However, xAI did not disable the underlying feature that enables these deepfakes – it merely put it behind an $8/month paywall and left the tool running in the standalone app. No fundamental fix was made to prevent abuse. “Users can now toggle off Grok’s access in X settings,” one summary noted, “but debates continue on consent, ethics, and enforcement”. In effect, Musk’s company built a “deepfake porn machine” that makes creating and virally sharing non-consensual sexual imagery as easy as clicking reply – and then chose to keep that machine online, betting that a paywall and belated warnings would suffice. 

Legally, this opens uncertain territory. U.S. law does criminalize the creation and distribution of obscene sexual depictions of real people without consent – indeed, 46 states plus D.C. have laws against “revenge porn” or deepfake porn, and federal law will soon tighten (the Take It Down Act, passed in 2025, mandates that platforms remove non-consensual intimate images within 48 hours of a report). CSAM is flatly illegal with no exceptions. Nor is Musk immune if his product generates child sexual abuse images – Section 230 of the Communications Decency Act does not shield platforms from liability for CSAM or other federal crimes. Yet so far, X has tried to invoke Section 230 as a liability shield, arguing it’s the users prompting the images, not the company. Legal experts counter that when a platform designs a system to create harmful content, that goes beyond a passive intermediary role. “X made a design decision to allow Grok to generate sexually explicit imagery of adults and children,” notes Wayne Unger, a law professor specializing in emerging tech. The user may prompt it, “but the company made a decision to release a product that can produce it in the first place.” In other words, Musk’s xAI isn’t just hosting content – it’s producing it.

Regulators globally have taken notice. French ministers reported X to prosecutors over the “manifestly illegal” sexual images, calling them “sexual and sexist” violence. India’s government sent X a legal notice demanding answers for failing to prevent obscenity and child exploitation content. The UK’s tech secretary blasted the deepfakes as “appalling and unacceptable” and said X must act “urgently”. Australia’s eSafety Commissioner opened an investigation after receiving multiple reports of “image-based abuse” via Grok, including sexualized images of a 12-year-old child (which, though deeply disturbing, did not meet Australia’s high statutory threshold for CSAM, highlighting a gap in current law). The EU’s digital affairs spokesman flatly stated: “This is not ‘spicy.’ This is illegal. This is appalling.” Regulators and watchdogs like the UK’s Internet Watch Foundation (IWF) warn that AI is supercharging child abuse imagery – from “nudified” photos of real kids to fully AI-generated fake CSAM – at a pace reaching a “tipping point” beyond which enforcement struggles to distinguish real victims from fake ones. In just six months of 2024, the IWF logged more reports of AI-made child abuse images than in the entire previous year, including deepfake videos that “de-aged” adult porn into child scenarios and algorithms used to “nudify” pictures of clothed minors. These trends underscore that without decisive intervention, the proliferation of AI-powered sexual abuse will only accelerate. 

Photo illustration: The Grok AI chatbot on X (Twitter) was used to generate tens of thousands of non-consensual sexual images, including deepfakes of women and minors. Critics say Musk’s xAI built a “deepfake porn machine” by intentionally allowing an uncensored image generator on a platform of millions. (Source: https://www.vox.com/future-perfect/474563/grok-x-ai-bikini-deepfake-liability-section-230)

Normalization through attention capitalism. Beyond individual harms and legal questions, the Grok saga illustrates how attention-driven platforms normalize abuse. X’s business incentives (under Musk’s “engagement at all costs” ethos) have rewarded outrageous content and gutted content moderation (Musk fired most moderators in 2022). The result: when the deepfake tool appeared, hundreds of blue-check accounts (paid “premium” users eligible for ad revenue sharing) immediately abused it to post sexual edits of women for clout. Some posts got tens of thousands of impressions, effectively turning sexual abuse into viral content. Because X shares ad revenue with popular posters, users essentially had financial incentive to generate salacious AI pics. The platform’s very design amplified the violations: every fake nude could be retweeted to millions, boosting the perpetrators’ follower counts and inflicting compounded trauma on victims. Musk’s own far-right, permissive stance on speech (advertising Grok explicitly for its “NSFW” capabilities) attracted a user base that skews toward extremist or misogynistic communities – “a social media community for Nazis, not nuns,” as researcher Riana Pfefferkorn put it, noting that X’s politics-plus-porn milieu is uniquely hospitable to such abuse. Unlike other social platforms that attempt to moderate porn and ban CSAM proactively, X “turned deepfakes into a feature” of the site. This erosion of norms has made what should be unthinkable – sharing AI “nudes” of a stranger or a child – seem like just another viral trend. The volume of content makes policing nearly impossible (even if X tried): users on X were generating 84 times more sexualized deepfakes per hour than on the next five largest deepfake sites combined, a researcher found. No platform can individually review that firehose of imagery in real time, certainly not one that chose to fire most safety staff. 

Crucially, this is not an inevitable consequence of AI – it is the result of deliberate choices. Other AI companies set stricter limits on image generation to prevent exactly this. “The prompts that are allowed or not allowed… are the result of deliberate choices by tech companies,” notes Sandi Johnson of RAINN, an anti-sexual-violence organization. In most industries, if a company “turns a blind eye to harm they actively contribute to, they’re held responsible. Tech companies should be held to no different standard.” Advocates argue that new legal approaches are needed to pierce the shield of Section 230 in cases where a platform materially enables the creation of illicit material. They point to the “Take It Down” federal law (whose platform takedown requirements take effect in May 2026), which criminalizes non-consensual deepfake porn and requires swift removal by platforms. However, enforcement will be challenging – it relies on victims to find and report images, an often traumatic and futile game of whack-a-mole. “As soon as a single deepfake image is generated, the harm is irreparable,” Johnson emphasizes. Indeed, for many victims of Grok’s rampage, any action now comes too late.
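
To make Johnson’s point concrete, even a crude prompt-level safety gate is trivial to build – which underscores that leaving one out is a choice. The sketch below is a minimal, hypothetical illustration in Python; real systems use trained classifiers over both the prompt and the generated image, and none of the names or patterns here come from xAI’s actual pipeline:

```python
import re

# Hypothetical keyword list for illustration only; production filters use
# trained classifiers on the prompt and the output image, not keyword lists.
SEXUALIZING = re.compile(r"\b(undress|nudify|bikini|lingerie|nude|naked|topless)\b")

def gate_image_edit(prompt: str, edits_real_person_photo: bool) -> bool:
    """Return True if the request may proceed, False if it must be refused."""
    sexualizing = bool(SEXUALIZING.search(prompt.lower()))
    # Consent cannot be verified from a prompt, so the safe default for any
    # sexualizing edit of a real person's photo is refusal.
    if edits_real_person_photo and sexualizing:
        return False
    return True

assert gate_image_edit("put her in a bikini", edits_real_person_photo=True) is False
assert gate_image_edit("paint a mountain at sunset", edits_real_person_photo=False) is True
```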

Public and media response. While tech and legal circles have sounded alarms, mainstream media coverage of this crisis has been relatively subdued given its gravity. By early January, Vox, Time, The Guardian, Reuters, AP, NBC, and others had reported on Grok’s misuse. Yet this issue was not splashed on front pages or leading nightly news in the way one might expect for “AI-generated child porn on a major social network.” Some commentators note a disturbing desensitization: perhaps because the victims are predominantly women (and some children) and the mechanism is “techy,” the outrage did not hit fever pitch. Alanah Pearce, after documenting the saga in a video titled “This is out of control… and nobody cares,” lamented that a few years ago this would have been a worldwide scandal, but now it risks being treated as just another internet nuisance. The relative silence underscores the very normalization that predators hope for. Still, pressure is mounting in some quarters – by mid-January 2026, X faced investigatory demands from five continents, and Musk belatedly acknowledged the issue, stating those creating illegal images “will face consequences”. The true test will be whether that accountability extends to the platforms and executives themselves, not just individual offenders. As one advocate put it: “This isn’t a computer doing this. These are deliberate decisions by people running these companies, and they need to be held accountable.”

II. Pharmaceuticalization: Medicating Distress, Profiting from Pain

While tech companies push the boundaries of exploitation in the digital realm, an older system of control operates in our pharmacies and doctors’ offices. Big Pharma – aided by regulatory capture, direct-to-consumer advertising, and the medicalization of normal life – has turned ever more aspects of human experience into opportunities for profit. This section examines how modern capitalism’s pharmaceutical regime often manages social distress and behavioral deviance with drugs, rather than addressing root causes. The result is a society awash in pills, where structural problems are refashioned as individual pathologies.

Defining pharmaceuticalization. Sociologists use the term “pharmaceuticalization” to describe the process by which social or personal conditions are treated, or deemed to require treatment, with medical drugs. Building on Michel Foucault’s concept of biopolitics (power exerted through control of bodies and health), pharmaceuticalization is a form of biopower in which the solution to life’s problems is a prescription. John Abraham, a leading scholar, defines it succinctly: “the process by which social, behavioural or bodily conditions are treated, or deemed to be in need of treatment, with medical drugs.” Crucially, this goes beyond traditional medicalization (doctors labeling new “illnesses”) – it highlights the role of the pharmaceutical industry and consumer marketing in driving the trend. Over the past few decades, Big Pharma has arguably displaced doctors as the primary engine of medicalization. Since the “Prozac era” of the late 1980s, drug companies have learned that expanding diagnoses and “marketing diseases” can be as lucrative as marketing the drugs themselves. For example, pharmaceutical marketers helped popularize terms like “generalized anxiety disorder” (previously often just “stress” or mild anxiety) to broaden the market for anti-anxiety medications. Social phobia was repackaged as a disorder treatable by antidepressants; shyness became a Paxil deficiency. This is sometimes called “disease mongering” – widening the boundaries of illness to grow sales.

Direct-to-consumer advertising: Nowhere is the influence of Big Pharma more evident than in advertising. The United States (along with New Zealand) is one of only two countries that allow direct-to-consumer (DTC) pharmaceutical ads on TV, radio, and in print. Since a 1997 FDA policy change, American airwaves have been filled with ads nudging patients to “ask your doctor” about the latest brand-name drug. Industry argues these ads educate consumers, but critics have long warned that DTC advertising medicalizes normal human experiences. A classic 2002 British Medical Journal piece by Barbara Mintzes argued that pharma ads turn life’s ordinary ups and downs into diagnoses. “Medicalisation has become a theory of social control,” Mintzes wrote, cautioning that by labeling every discomfort or fluctuation as a medical condition, pharma creates endless demand for its products. Indeed, DTC ads often rely on emotional appeals and vague symptoms to spur viewers into worrying they have a condition. The advertising imagery is glossy and aspirational, while the side effects race by in fine print. The strategy works: studies show that patients request—and receive—advertised drugs from their doctors at high rates. In one survey, 2%–7% of consumers who saw a drug ad went on to ask their doctor for that drug and got a prescription for it. This dynamic can lead to over-diagnosis and over-prescribing of medications, even when non-drug therapies might be safer or more effective. For example, normal variations of mood become “disorders” needing pills; age-related minor aches get pathologized; lifestyle problems (like insomnia from a stressful job) get a pharmaceutical fix instead of addressing the stress.

From a business standpoint, DTC advertising is enormously profitable. The U.S. Government Accountability Office found that every $1 spent on drug ads yields an average $2.20 increase in sales. It’s no surprise, then, that drug companies pour billions into consumer marketing ($6+ billion in 2021 in the U.S., a figure that has ballooned over the past two decades). They focus these dollars on a handful of blockbuster drugs – often for chronic conditions or lifestyle issues (erectile dysfunction, cholesterol, heartburn, allergies, depression) that affect large populations and can justify long-term medication. The result is a “pharma in every home” culture. Watching primetime TV in America means inevitably being told that you might be depressed, anxious, sleepless, irritable, low-libido, diabetic, or have overactive bladder – and that a pill could help. This constant messaging trains the public to interpret ordinary feelings through a medical lens (“Have you lost interest in hobbies? You may have depression – ask your doctor about…”). It also sidelines discussion of social determinants of health: if sadness is due to brain chemistry, one need not examine loneliness, economic insecurity, or trauma as causes. The pharmaceutical solution is presented as both personal empowerment (“take charge of your health”) and consumer choice (“talk to your doctor and see if Drug X is right for you”). But in reality, this paradigm primarily empowers corporate profit.
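
A back-of-envelope calculation shows the scale these two figures imply when combined. Note the GAO ratio and the 2021 spending figure come from different studies and periods, so the result below is an order-of-magnitude illustration, not a measured outcome:

```python
# Illustrative arithmetic only: apply the GAO's average return on DTC ads
# ($2.20 of added sales per $1 spent) to the cited ~$6 billion of U.S.
# consumer ad spending in 2021.
ad_spend_2021 = 6.0e9        # USD, cited industry DTC ad spend
sales_per_ad_dollar = 2.20   # GAO estimate: added sales per $1 of advertising

incremental_sales = ad_spend_2021 * sales_per_ad_dollar
print(f"Implied incremental sales: ${incremental_sales / 1e9:.1f}B")  # ~ $13.2B
```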

Big Pharma incentives and the profit motive. The pharmaceutical industry is consistently one of the most profitable sectors in capitalism. For decades, it has topped Fortune 500 lists for return on revenue and equity. Even in downturns when other industries falter, Big Pharma’s profits often keep rising – for example, in 2001 the top 10 U.S. drug companies increased profits by 33% (to $37 billion) even as overall Fortune 500 profits fell by over 50%. That year, pharma’s profit margin (18.5%) was eight times the median for all industries. Companies like Pfizer and Merck individually raked in more profit than entire sectors (Pfizer’s $7.8B in 2001 exceeded the combined profits of all Fortune 500 companies in homebuilding, railroads, publishing, and apparel). How do they achieve this? Partly by charging high prices (often hiking prices far above inflation each year), and partly by marketing heavily and expanding indications. Public Citizen reported that drug firms were “advertising some medicines more than Nike advertises shoes” – a shocking comparison that underscores skewed priorities. The industry also spends lavishly on promoting drugs to physicians (through reps, sponsored conferences, free samples and perks) and lobbying regulators for favorable rules (like longer patent exclusivity and faster drug approvals). These incentives create a “cure without prevention” mindset: it’s more profitable to medicate a stressed worker than to mandate workplace stress reductions; better to sell lifelong statin pills than to ensure healthy diets are accessible.

One especially pernicious incentive is the drive to get as many people as possible on long-term medication. If the ideal consumer for a soda company is someone who drinks a can a day, the ideal consumer for Big Pharma is someone who takes a pill (or several) every day for life. This is why we’ve seen treatment thresholds lowered (e.g. defining lower blood pressure or cholesterol levels as “disease” so more people qualify for drugs), and “preventive” prescriptions expanded (e.g. statins for people who don’t yet have high cholesterol, antidepressants for mild subclinical sadness, etc.). The pharmaceutical industry’s influence on the crafting of diagnostic guidelines is well documented in some cases – panels that define conditions often include paid consultants to pharma. The net effect: more of life is pathologized and brought into the fold of medical treatment. As one scholar put it, “there are multiple interactions and synergies between economic incentives, biased knowledge production, professional training, and patient expectations that drive the expansion of medicalization”. It’s an “invisible strategy without a strategist” – a diffuse system that yields ever more medication use, without any single actor planning it all. 

Regulating behavior vs. solving root causes. Perhaps the starkest example of using drugs to regulate behavior is the explosion in psychiatric medication for children. Conditions like attention deficit/hyperactivity disorder (ADHD) have seen skyrocketing diagnosis and drug treatment, especially in the U.S. Schools and parents, under pressure to manage disruptive behavior or improve academic performance, increasingly turn to stimulant medications (like Adderall, Ritalin) for kids as young as 4 or 5. Clinical guidelines urge caution – the American Academy of Pediatrics recommends behavior therapy as first-line treatment for preschool-aged children, resorting to meds only if absolutely necessary. Yet in practice, a recent Stanford-led study found 42% of children diagnosed with ADHD at age 4-5 were put on medication within one month of diagnosis. Only about 1 in 10 families actually got the recommended therapy first. Doctors admitted off the record that this is often because behavior therapy isn’t accessible – many areas lack pediatric counselors or insurance won’t cover it. So, we medicate by default, even knowing that these stimulants can cause irritability, sleep problems, and appetite loss in little kids. In effect, instead of investing in more school counselors, smaller class sizes, or parent coaching (the systemic fixes), the system reaches for the pill bottle to keep kids in line. The commodification of attention (and quiet classrooms) thus dovetails with pharma profits. ADHD meds now represent a multi-billion dollar market, and some manufacturers have been caught aggressively promoting their use. One telehealth startup, Cerebral, infamously pushed stimulants to so many young adults via rapid online diagnoses that it drew federal investigation for over-prescribing. It’s a pattern: when societal institutions face complex behavioral challenges, the quick pharmaceutical fix often wins out over structural change.

Consider also the opioid epidemic – a tragic case where pharmaceutical marketing and pain-management-as-product contributed to mass addiction. Companies like Purdue Pharma heavily promoted opioid painkillers (OxyContin) as safe and necessary for chronic pain, downplaying alternatives and root causes of pain (like workplace injuries or lack of physical therapy coverage). The result was decades of opioid over-prescription, addiction, and over 500,000 overdose deaths. The profit motive to “sell more pills” led to criminal convictions (Purdue and executives pleaded guilty to misbranding, paying $600M in fines in 2007, and later settlements in the billions) – but only after irreparable social damage. This illustrates how incentivizing quick chemical solutions to complex pain led to disaster. 

Another example is the widespread prescription of antidepressants. Antidepressant use in the U.S. and UK has more than doubled since the 1990s. These medications absolutely help many people. But critics point out that social misery – unemployment, isolation, trauma – is often funneled into a medical model of depression requiring SSRIs, instead of addressing socio-economic causes. One 2022 study of UK patients found many strongly believed increased antidepressant prescribing was driven by reduced stigma and better diagnosis, but also by drug company marketing. In other words, people sense that pharmaceutical solutions are being sold as the first resort. Rates of antidepressant use are highest in socioeconomically disadvantaged populations, correlating with greater social stressors. Yet, public policy seldom tackles those root stressors with the same vigor as it promotes access to medication. The risk is a kind of chemical pacification – a biopolitical strategy where discontent is managed via mood-altering drugs, keeping individuals functional (or at least quiet) without altering the conditions causing despair. Even Ivan Illich, a philosopher of medicine, warned in the 1970s that “medicalization” could become a tool to depoliticize social problems by redefining them as personal illnesses. 

The role of regulators. One might ask: where are the watchdogs in all this? In theory, agencies like the U.S. FDA, the FTC, and international counterparts exist to ensure drugs are safe, marketed truthfully, and used appropriately. In practice, regulatory frameworks often lag behind industry practices. The FDA does regulate DTC ads, requiring the laundry list of side effects to be stated, for example. But critics note that enforcement is lax – pharma companies have received warning letters for false advertising, yet fines are tiny compared to ad profits. Furthermore, pharma’s lobbying muscle makes strong regulatory reforms politically difficult. For instance, proposals to ban DTC advertising in the US (as every other developed nation except NZ does) have gone nowhere, thanks in part to industry pushback and free speech arguments. In Europe and Canada, DTC drug ads remain illegal, but pharma has tried creative tactics (like so-called “disease awareness” campaigns that stop just short of naming a drug, effectively priming the market).

On the flip side, regulators in some countries are starting to push back on over-medication trends. In the UK, for example, recent health policy has emphasized “social prescribing” (doctors prescribing social activities or counseling instead of pills for mild depression or loneliness). The UK’s drug advisory bodies have also tightened guidelines on prescribing ADHD meds to young children, and there’s debate about requiring stronger justification for long-term antidepressant use. These are attempts to re-balance the system toward non-pharmaceutical interventions. But without structural changes (like improving mental health services, addressing poverty, etc.), such guidance can only do so much. 

Profit over public health – a reckoning? The Covid-19 pandemic brought both the best and worst of Big Pharma to light. On one hand, rapid vaccine development saved millions of lives (with Pfizer, Moderna, etc., rightly lauded). On the other, vaccine manufacturers set high prices and jealously guarded patents despite heavy public funding – highlighting that even in a global crisis, shareholder profit remained paramount. Pfizer’s COVID vaccine and pill drove it to record revenues in 2021–2022. Meanwhile, lower-income countries struggled for access, showing how the pharmaceutical market serves wealth first. Public anger at pharma profiteering (e.g. the Martin Shkreli scandal of hiking an essential drug price 5,000%) has led to some legislative efforts – like allowing Medicare to negotiate some drug prices in the U.S., and bills to curb patent abuses. Yet, the fundamental model persists: drugs are developed and sold not according to greatest need, but greatest profit. 

What’s at stake is not just economic waste or personal side effects, but a cultural shift in how we handle distress and deviance. If every trouble – from a fidgety child to a grieving adult – is met with a pill, society may become less tolerant of normal human variation and less motivated to make compassionate environmental changes. Philosopher Michel Foucault might say this is the ultimate medical gaze combined with market logic: viewing populations as objects to be regulated for productivity and docility, with pharmaceuticals as the tool. 

Medicating the young: In the U.S., even preschool children are increasingly prescribed ADHD drugs immediately after diagnosis, contrary to guidelines. A Stanford study found 42% of 4-5 year olds diagnosed with ADHD started medication within one month, often due to lack of access to behavioral therapy. This reflects a broader trend of using pharmaceuticals to control behavior when structural support is unavailable. (Source: https://med.stanford.edu/news/all-news/2025/08/adhd-preschoolers.html)

III. Dating Apps, Gambling Ads, and Trafficking: The Commodification of Intimacy and Chance

In the realm of online dating and digital entertainment, exploitation takes on different forms but is driven by similar forces of profit and weak oversight. Dating platforms like Tinder have transformed human intimacy into a high-engagement marketplace – one vulnerable to scams, addictive design, and even sex trafficking. Simultaneously, ubiquitous gambling advertising and gamified apps blur the line between play and exploitation, often targeting vulnerable users. This section explores how our pursuit of love and leisure online has been infiltrated by organized predators and shaped by systems that treat users as commodities. 

Dating apps as hunting ground. With over 300 million users worldwide, dating apps have become a primary way people seek relationships. But these platforms, by design, also provide ideal cover for criminals and exploiters. The FBI and international law enforcement warn that human traffickers use dating sites to recruit victims, especially young women and minors. Offenders create fake profiles, groom targets with flattery or false romance, then lure them off-app to abusive situations. In one FBI-documented case, a trafficker in Seattle used a dating website to meet an 18-year-old aspiring actress, promising to help her career – only to coerce her into prostitution and child pornography, for which he was later convicted on 17 counts. Another case saw a Baltimore man target two underage girls on social media after they posted about financial struggles; he met them and forced them into sex work, resulting in a sex trafficking conviction. These are not isolated incidents. According to the U.S. National Center for Missing & Exploited Children, a growing share of child sex trafficking cases originate from online contact, including on apps like Tinder, Instagram, and Snapchat. One report in Washington state described a 16-year-old girl, Hannah, who circumvented Tinder’s 18+ age rule by faking her birthdate and matched with a 34-year-old man. Over a few messages, he arranged to send her an Uber, gave her drugs, and had her perform sex acts on camera for a live porn stream – effectively trafficking her within hours of that first swipe. Hannah survived and went to the police, but two years later her case was bogged down, illustrating how slowly justice moves compared to the speed of digital predation. Such stories show how organized exploitation rings “piggyback” on dating apps: the apps supply a steady stream of disoriented or curious young people, while traffickers supply false affection and high-pressure manipulation. 

Dating companies have been criticized for not doing enough to verify users or screen for predators. Tinder, for instance, only recently added voluntary ID verification and a panic button via a third-party safety app, but underage users can still slip through with a fake birthdate (as Hannah did). Moreover, known sex offenders or scam bots often create new accounts with impunity after bans. There’s a tension between the anonymity that dating apps allow (which many legitimate users value) and the trust that dating requires. Apps profit from having as many users as possible – their valuations depend on userbase size and engagement – so they may be reluctant to add friction like stringent background checks or rigorous profile vetting that could deter sign-ups. Unfortunately, that frictionless design is exactly what criminals exploit.

Romance scams and “pig butchering.” Beyond trafficking, dating apps and social media have given rise to epidemic levels of romance scams – where swindlers cultivate an online relationship to defraud victims of money. The Federal Trade Commission calls romance scams “heartless and financially devastating,” and reports they are the costliest form of consumer fraud in the U.S. In 2022 Americans reported losing an astonishing $1.3 billion to romance scammers, a nearly 80% jump from the year prior. The median loss was $4,400 per victim, but many lost tens or hundreds of thousands. These scams often start on dating platforms (about 19% of reported cases in 2022 began on a dating app or site, and even more – ~40% – began with an unexpected flirtation message on Instagram or Facebook). A typical ploy is catfishing: the scammer uses stolen photos to pose as an attractive person (often pretending to be working abroad or in the military, to have an excuse not to meet in person). After weeks of romantic chat, they manufacture an emergency (“I need money for a medical bill” or “I want to visit you but need help with airfare/customs fees”). A newer variant is the so-called “pig butchering” scam: the scammer convinces the victim to “invest together” in cryptocurrency or forex, using fake investment apps to show phony profits – until the victim sinks their savings and the scammer disappears with the money. The FTC notes that many romance scammers now offer to help the victim make money (e.g. “I’m a successful crypto investor, let me teach you”) rather than directly asking for help – a twist that lowers suspicion while still draining the victim’s funds.

Organized criminal groups, particularly in Southeast Asia, have set up boiler-room operations to run these long cons at scale, enslaving workers to impersonate lovers online. Victims span all ages, but the most likely to be contacted by romance scammers are actually young adults 18–29 (though they tend to lose smaller amounts on average); older victims lose more money even if they fall prey slightly less often. Dating apps are fertile hunting ground for these syndicates because users on them expect romantic attention, making them more receptive. The apps generally do not monitor the content of conversations for scam patterns due to privacy and scale – leaving users to fend for themselves. Tinder, Bumble, and others do display safety tips (e.g. warnings about sending money), but obviously that hasn’t stemmed the tide of losses. The sheer scale is alarming: in 2022, nearly 70,000 Americans filed romance scam reports, and that likely represents only a fraction of total cases (many are too embarrassed to report). 

It’s worth noting that romance scams often overlap with sexual exploitation. Scammers sometimes persuade targets to send explicit photos, then use those for sextortion, threatening to send the nudes to the victim’s family or employer unless paid off. The FTC observed an eightfold increase in sextortion reports since 2019, especially targeting younger users on Instagram, Snapchat, etc. In 58% of sextortion cases reported in 2022, the contact began on social media. These crimes show how technologies can be weaponized to turn intimacy into leverage – whether it’s fake nude images used to harass (as in the Grok deepfakes) or coerced real nudes used to blackmail. All of it trades on vulnerability, shame, and the asymmetry between perpetrators (often organized and anonymous) and victims (isolated and exposed).

Addictive design and commodified intimacy. Beyond overt scams, dating apps themselves employ psychological techniques akin to gambling to maximize user engagement. Researchers and UX designers have openly likened apps like Tinder to slot machines. The swipe mechanism – flicking through profiles in search of a match – operates on a variable reward schedule: you never know when the next swipe could yield a dopamine-hit of a match or message, much like a slot player never knows when the next pull might pay out. This uncertainty and novelty trigger the brain’s reward pathways. Every time you see a new face or get a notification, a burst of dopamine reinforces the behavior. Apps also give early positive feedback (e.g. a few easy matches for new users) to hook people with continuous rewards, then shift to more sporadic rewards to keep them chasing the high. In essence, these platforms hack our evolutionary wiring – our appetite for social approval, fear of missing out, and attraction to novelty – to keep us swiping. As Dr. Natasha Schüll has noted in her study of gambling addiction, the “machine zone” of continuous play can induce a trance-like state. Dating apps similarly can lead users to hours of mindless swiping, even when it’s not making them happy, due to the possibility that the next swipe could bring love (or at least a fleeting ego boost). Big Tech thrives on this addiction economy, where more user time means more ad impressions and subscription revenue. 
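
A toy simulation makes the mechanics of this variable-ratio schedule visible. The 4% per-swipe match probability below is an invented assumption, not a measured Tinder statistic; what matters is the unpredictable gap between rewards, the property Skinner found produces the most persistent responding:

```python
import random

random.seed(42)
MATCH_PROBABILITY = 0.04  # hypothetical chance that any one swipe yields a match

def swipes_until_match() -> int:
    """Count swipes until the next match under a variable-ratio schedule."""
    swipes = 1
    while random.random() > MATCH_PROBABILITY:
        swipes += 1
    return swipes

# Ten consecutive gaps between rewards: sometimes instant, sometimes a drought.
gaps = [swipes_until_match() for _ in range(10)]
print(gaps)
print(f"average gap ≈ {sum(gaps) / len(gaps):.0f} swipes "
      f"(expected ≈ {1 / MATCH_PROBABILITY:.0f})")
```

The average gap converges on 25 swipes, but any individual reward may arrive on the first swipe or the hundredth – exactly the uncertainty that keeps a user (or a slot player) pulling the lever.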

The commodification aspect comes in as users themselves become the “product.” On many dating apps, especially freemium models, user profiles are monetized – either via advertising or by selling premium features to get more visibility. Attention is currency: attractive profiles (or bots posing as such) grab eyeballs, which keeps others swiping (and watching ads or buying boosts). This dynamic can incentivize fake profiles and shady tactics. Indeed, Tinder has struggled with waves of bot profiles that aren’t trying to scam money directly, but to lure users to click external links (often porn sites, cam services, or hookup websites) for affiliate revenue. Some bots push dating users to “verify” on a third-party site which is actually a paywall for adult content. Others have promoted scam investment platforms. And as a bizarre anecdote, there have been reports of “Tinder promoters” – people paid by clubs or bars to flirt on apps and get matches to show up at their venue (essentially using dating apps as free advertising to boost nightlife traffic). While not illegal, this underscores how users’ genuine desire for connection is exploited for commercial ends at every turn. What you think is a potential date might be someone with a quota to bring five guys to a club that night. Even some restaurants have allegedly created fake Tinder profiles of women to chat with men and suggest a date at—you guessed it—their restaurant. These practices reveal a cynical truth: in the attention marketplace, any space where people gather can be monetized, even the promise of romance.

Meanwhile, the lines between dating, gambling, and gaming blur. Some young users describe swiping as a game – the profiles become gamified cards to collect or dismiss. The infinite scroll and swipe are exactly what casino game designers would devise to maximize “time on device.” An analysis by the University of Michigan’s tech ethics program noted that “social media platforms are using the same techniques as gambling firms to create psychological dependencies”. Dating apps fit that mold. They exploit the psychology of intermittent reinforcement discovered by B.F. Skinner: reward unpredictably and people keep trying. Features like Tinder’s “Swipe Surge” (real-time boosts encouraging you to swipe during peak times), daily “Top Picks” you can pay to see more of, and endless queues of faces ensure there is always an incentive to stay on. Even the idea of love becomes a commodity that the platform dangles but never quite delivers, lest you churn off the app. Indeed, it’s often remarked that dating apps’ business model is paradoxical: if every user finds a soul mate quickly and leaves, the app loses customers. The incentive is to keep people single (or at least keep returning). While the companies surely want satisfied users who tell friends to join, they also profit from those caught in an addictive loop of browsing. This conflict of interest can lead to design choices that prioritize engagement over genuine matchmaking success.

Gambling ads and online betting: In parallel with dating apps, the rise of online gambling and sports betting has unleashed a torrent of advertising and potential abuse. After the U.S. Supreme Court legalized sports betting in 2018, Americans have been inundated with promos from sportsbooks (DraftKings, FanDuel, etc.). The UK and other countries likewise face a deluge of betting ads on TV and social media, which regulators worry is fueling problem gambling. The UK’s Advertising Standards Authority has rules: gambling ads must be socially responsible and not target minors or suggest gambling is a solution to financial or personal issues. Yet enforcement struggles. A 2024 study by Bristol University found that 74% of gambling companies’ social media content was not clearly identifiable as advertising, blending marketing into ordinary posts and skirting the rules. Gambling firms sponsor sports teams, events, and even YouTube streamers, embedding betting promotion into entertainment seamlessly. This saturation can normalize betting as part of sports fandom or everyday life, creating new addicts for profit. Problem gamblers generate disproportionate revenue for the industry (an infamous stat in the UK: ~5% of gamblers account for 50% of losses). Thus the system has a perverse incentive not to limit high-risk behavior too effectively. Regulators in some countries are cracking down (for example, by banning celebrity endorsements that appeal to youth, or disallowing “VIP client” schemes). But as with Big Pharma, self-regulation by the gambling industry often falls short when revenue is at stake.

The connection between gambling and other exploitative systems comes through advertising algorithms and attention capture. The same user who scrolls Instagram and sees a seductive casino ad might also be on Tinder swiping compulsively – in both cases, they are being nudged by corporate design to take risky actions (bet money, meet a stranger). A vulnerable or lonely person is a prime target for both casino marketing and romance scams. In one tragic anecdote, a UK man with autism lost his life savings to a romance scammer on a dating app who tricked him into betting on rigged games – combining both exploitation modes in one fell swoop. 

Trafficking vectors online: The Internet and apps have also lowered the barrier for traffickers to advertise victims. Illicit massage businesses, escort services, and pornography rings use online classifieds and dating-style sites to present trafficked persons as “consenting” adults for hire. Even Airbnb was misused in some instances as cover for brothels (with traffickers renting apartments for commercial sex, as a Bloomberg investigation in Colombia found). The pandemic accelerated the shift of sex trafficking to online facilitation. Law enforcement finds that traffickers now commonly use social media to maintain control of victims too – threatening to post compromising photos or monitoring their messages. The online element makes trafficking harder to detect: where it once required street-level activity, now it can occur behind the façade of a dating meet-up or a private chatroom. 

Government agencies have responded with public advisories. The FBI’s Internet Crime Complaint Center in 2020 issued a PSA titled “Human Traffickers Continue to Use Popular Online Platforms to Recruit Victims”, urging vigilance on dating apps. It explained how traffickers “pose as romantic partners or job recruiters online, groom victims by feigning love or opportunity, then exploit them through force, fraud, or coercion”. They target individuals posting about financial woes or low self-esteem, offering a way out that turns into a trap. The PSA gave concrete examples and advised reporting suspected trafficking immediately. Similarly, organizations like Polaris Project have highlighted how social media and dating sites are used in recruitment, advertisement, and even livestreaming of trafficking. However, these warnings receive nowhere near the public attention that, say, a viral TikTok trend does. Many users remain unaware of how common and sophisticated online exploitation has become. 

Weak oversight and legal gray zones. Just as Section 230 shields tech platforms from liability for user-posted content, dating and social apps so far haven’t been held liable for criminal acts initiated through them (unless the app knowingly facilitated it). The burden remains on users to discern fake from real, safe from unsafe. There have been some civil lawsuits – e.g. victims suing platforms for negligence in failing to warn or provide safety features – but these face uphill battles due to liability shields. One area of legal development is “duty of care” laws. The UK’s Online Safety Act, for instance, imposes a duty on platforms to proactively address illegal content, which could extend to things like trafficking or CSAM. Critics argue this may impinge on privacy or free expression. Proponents say tech firms have hidden behind Section 230 for too long while facilitating harm.

In the gambling realm, regulation is also catching up. Several countries ban or restrict certain forms of online betting ads. In the US, some states now require clear disclosure of odds and risks in gambling apps, and a few have even mandated that a portion of gambling revenue fund addiction treatment programs. But enforcement can be half-hearted when governments themselves enjoy tax revenue from gambling. 

Once again, profit motives collide with consumer protection. Dating apps profit from engagement – even if some of that engagement is scam bots or dubious actors, it still boosts their metrics. Until it hurts the bottom line (through reputation damage or user exodus), there’s limited incentive to aggressively police it. Gambling companies profit from heavy bettors, so their “responsible gambling” messaging can ring hollow when they continue to flood the airwaves with tempting offers of “free bets” and glamorous imagery. It often falls to NGOs, academics, and investigative journalists to expose these issues, prompting regulators to act.

Conclusion: Awareness, Accountability, and Systemic Action

Across these three domains – AI-driven pornographic abuse, pharmaceutical manipulation of health, and digital platforms enabling scams and trafficking – a common thread emerges: systems built for profit and attention can easily ignore or even nurture exploitation. They thrive in gaps between outdated laws and rapid innovation, exploiting legal gray areas and societal taboos. Victims are often those who lack power: women and girls having their images weaponized; patients and parents navigating opaque medical decisions; the lonely, young, or economically desperate seeking connection or escape online. Meanwhile, corporations and bad actors reap rewards with too little accountability.

What can be done? First, sunlight. Greater public awareness is crucial. Many people still do not realize how advanced and accessible deepfake tools are – or that their harmless selfie could be turned into porn within minutes on a mainstream platform. Many have yet to connect the “AI chatbot” craze to its dark side, one that directly affects real individuals’ rights and dignity. We need education that these AI outputs are not “just the internet being crazy” but acts of image-based sexual abuse (a term activists prefer, emphasizing the violation). Similarly, patients and doctors need education on the subtle pressures Big Pharma exerts – from critically appraising drug ads to recognizing when normal life challenges are being over-medicalized. And dating app users must be warned early and often of the red flags of scams and the importance of verifying identities. Campaigns by consumer protection agencies (like the FTC’s recent data spotlights on romance scams) are a start, but these messages need amplification in popular culture.

Second, strengthen and enforce laws. On the AI front, existing laws on voyeurism, CSAM, and sexual harassment do apply, but may need updates for the AI context. The Take It Down Act in the U.S. is a positive step, but it requires robust enforcement – the FTC and state attorneys general should be funded and empowered to go after platforms that don’t promptly remove deepfake porn and to assist victims in restitution. There is precedent: in 2019, a Virginia man was the first to be convicted under federal law for AI-generated child sexual abuse imagery (he had created fake sexual images of a real minor). More such prosecutions, and against the technologists or platforms enabling mass creation, could deter would-be offenders. Additionally, clarifying that Section 230 does not protect platforms’ own AI creation of illicit content would remove a major shield – courts or Congress can make clear that when an AI model owned by a company produces illegal material (like CSAM), the company can be treated as the content creator or at least as aiding and abetting. In Europe, the Digital Services Act and AI Act are poised to impose obligations on AI systems to prevent generating illegal content, and hefty fines if companies fail to comply. International cooperation will be needed, since these images cross borders effortlessly.
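
To illustrate the mechanics of the removal mandate, the sketch below tracks the 48-hour deadline the Take It Down Act attaches to each report. The data model and field names are assumptions for illustration, not a legal compliance implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # the Act's removal deadline per report

@dataclass
class TakedownReport:
    content_id: str         # hypothetical platform-internal identifier
    reported_at: datetime   # when the victim's report was received

    @property
    def deadline(self) -> datetime:
        return self.reported_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        return now > self.deadline

report = TakedownReport("img-123", datetime(2026, 5, 20, 9, 0, tzinfo=timezone.utc))
print(report.deadline)  # 2026-05-22 09:00:00+00:00
print(report.is_overdue(datetime.now(timezone.utc)))
```

The simplicity is the point: tracking a statutory clock is trivial; the hard part, as the text notes, is that the clock only starts once a victim finds the image and files the report.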

For pharmaceuticals, regulatory agencies could adopt stricter conflict-of-interest policies (to curb pharma influence on medical guidelines), require greater transparency in advertising (e.g. clear disclosures when an ad is promoting a disease concept rather than a product), and increase surveillance of over-prescription trends. The U.S. could reconsider DTC advertising’s legality – the American Medical Association has called for a ban on DTC pharmaceutical ads, citing their role in inflating demand for high-cost treatments. Short of a ban, the FDA could enforce ad violations more stringently and mandate balanced information (ensuring that benefits aren’t visually glorified while risks are muttered quickly). Lawmakers could also fund comparative effectiveness research – so doctors and patients know when non-drug therapies work as well or better. For instance, insurance covering therapy for ADHD or pain management could mitigate reflexive pill prescribing. Ultimately, tackling pharmaceutical excess requires treating health care as a public good rather than a commodity – a seismic shift that involves challenging the logic of for-profit medicine. 

On digital platforms and trafficking, there is room for targeted reforms. Dating apps could implement identity verification options and AI-based scam detection (with privacy safeguards) more broadly – and perhaps be required to do so if they reach a certain scale. There is precedent in fintech: banks must monitor for fraud under Know-Your-Customer rules; perhaps high-risk social platforms should have analogous duties to monitor for predatory behavior patterns (while respecting privacy – a difficult but not impossible technical task using metadata and voluntary user reports). Legislators might also explore age verification requirements for dating services to protect minors, though this raises privacy issues (the UK has struggled with age-verification for adult sites for similar reasons). On the trafficking front, one promising development is the EARN IT Act in the U.S., which seeks to make platforms liable if they don’t follow best practices to combat online child sexual abuse material (though it faces controversy over encryption). Globally, efforts like the Five Country Ministerial (a consortium of U.S., UK, Canada, Australia, NZ) have pressured tech companies to detect and report child exploitation. But as IWF data shows, the frontier of AI means some abusive content might not depict a real child – requiring law updates so that even AI-created CSAM (which may not involve an actual victim but is indistinguishable and fuels pedophilic demand) is clearly criminalized. The UK has already signaled plans for a new law to tackle AI child abuse images “at the source” by working with the AI industry to prevent model misuse. 
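
To give a sense of what metadata-only scam detection could look like in practice, here is a hedged sketch. Every signal, weight, and threshold below is invented for illustration – a real system would learn them from labeled cases – but it shows how red flags can be combined without reading message content:

```python
from dataclasses import dataclass

@dataclass
class ProfileSignals:
    account_age_days: int
    pushes_off_platform: bool       # e.g. insists on moving to another app early
    money_or_crypto_reports: bool   # drawn from user reports, not message scanning
    stolen_photo_match: bool        # profile photo found in known stolen-image sets

def risk_score(s: ProfileSignals) -> float:
    """Combine red flags into a 0-1 score; high scores go to human review."""
    score = 0.0
    if s.account_age_days < 7:      # illustrative threshold
        score += 0.2
    if s.pushes_off_platform:
        score += 0.3
    if s.money_or_crypto_reports:
        score += 0.3
    if s.stolen_photo_match:
        score += 0.2
    return score

suspect = ProfileSignals(3, True, True, False)
if risk_score(suspect) >= 0.5:      # 0.8 here, so the account gets flagged
    print("flag account for human review")
```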

Lastly, industry accountability must be enforced not just through punishment but through incentive re-alignment. This could mean legal liability for negligence – e.g., allowing victims of deepfake porn to sue platforms or creators for damages (some states already allow civil suits for “deepfake revenge porn”). It could mean financial penalties for pharma companies whose drugs were over-marketed leading to public health crises (as seen in opioid litigation settlements). It could also mean shareholder activism and consumer pressure: if advertisers pull spending from X due to the platform’s toxicity, that hits Musk where it hurts and forces change. In pharma, payers (like government health systems or insurers) can refuse to pay for marginal drugs, removing the profit incentive for pushing them. 

At a higher level, we as a society must confront the ideology of tech inevitability and market fundamentalism that often underpins these abuses. The narrative that “AI progress cannot be stopped, only managed” or “drugs for every problem are simply modernity” or “the internet is a wild west, use at your own risk” serves to absolve institutions of responsibility. Instead, we should demand “safety by design” – whether it’s in AI systems (baking in content filters and ethical guardrails), pharmaceuticals (prioritizing drugs that address true unmet needs rather than duplicative profit-driven inventions), or online platforms (growth should not come at the expense of basic user protection). 

In philosophical terms, we face a contest between human autonomy/dignity and systems of control that reduce people to exploitable data points. The late scholar David Graeber once wrote about “bullshit jobs” and how capitalist systems often create make-work or harmful industries to generate profit regardless of social value. One could argue that bullshit exploitation has similarly been created – artificial problems (fake porn, over-medicated “illnesses”, fake lovers) to generate real profits, while producing real suffering. Reasserting human-centered values will require regulation, yes, but also cultural change: shunning non-consensual sexual content as socially unacceptable (the way drunk driving became stigmatized), viewing health more holistically than a pill can capture, and treating online interactions with a healthy skepticism and commitment to real connection. 

In closing, these converging crises are a wake-up call. We must not accept a world in which abuse is normalized as an “emerging tech issue” or an “unfortunate side effect of business.” The law can and should draw bright lines: sexual exploitation – whether through pixels or pills or promises – is intolerable. Companies that facilitate it must be held to account, through courts or public outcry or both. And individuals, armed with knowledge, need to reclaim their agency: to demand better safeguards, to report crimes, to support one another. The fight is ultimately about preserving the possibility of trust, consent, and authenticity in our digitally mediated lives. Our bodies, images, minds, and relationships should not be raw material for unchecked capitalist alchemy. It’s time for policymakers and the public to connect the dots and take action so that innovation and commerce do not continue to outpace our fundamental rights and protections. 

Call to action: Policymakers should prioritize passing and enforcing laws that address these abuses – from the pending deepfake and CSAM provisions in various jurisdictions, to tighter ad regulations and platform responsibilities. Tech companies and pharma firms must be pressed – by law or consumer demand – to build ethics into their business models, not as afterthoughts. Law enforcement needs resources to proactively tackle cyber-enabled exploitation (more training in digital forensics, undercover operations in trafficking rings, etc.). International cooperation is key: exploitation is a global enterprise, so regulators from the EU’s GDPR enforcers to Asia-Pacific cybercrime units must share intelligence and strategies. Finally, each of us can help by staying informed and supporting organizations fighting these battles – whether it’s nonprofits aiding deepfake victims, groups like the National Center on Sexual Exploitation, the Internet Watch Foundation, and Polaris, or advocacy for health care reforms that put patients over profits. 

The technologies and systems we have built need not lead to dystopia. With conscious effort, we can insist that human dignity, consent, and safety not be sacrificed in the race for engagement and profit. The hour is late, but not beyond hope – the convergence of exploitation we face can also become a convergence of resolve, uniting technologists, lawmakers, and citizens in a new framework of accountability for the digital age. 

Sources:

  • Sara Herschander, "X just paywalled Grok's deepfakes. They're still everywhere." Vox (Jan. 9, 2026) – https://www.vox.com/future-perfect/474563/grok-x-ai-bikini-deepfake-liability-section-230
  • Reuters, "Elon Musk's Grok AI floods X with sexualized photos of women and minors" (Jan. 2, 2026) – https://www.reuters.com/legal/litigation/grok-says-safeguard-lapses-led-images-minors-minimal-clothing-x-2026-01-02/
  • Reuters, description of Grok requests and global alarm (same article)
  • Jason Wilson, "Hundreds of nonconsensual AI images being created by Grok on X, data shows," The Guardian (Jan. 8, 2026) – https://www.theguardian.com/technology/2026/jan/08/grok-x-nonconsensual-images
  • Tory Shepherd, "Grok's deepfake images which 'digitally undress' women investigated by Australia's online safety watchdog," The Guardian (Jan. 6, 2026) – https://www.theguardian.com/technology/2026/jan/07/grok-deepfake-images-sexualise-women-children-investigated-australia-esafety
  • Andrew Chow, "Grok's deepfake crisis, explained," Time (Jan. 2026) – https://time.com/7344858/grok-deepfake-crisis-explained/
  • Vox Future Perfect, "X turned deepfakes into a feature" (Jan. 2026), including discussion of Section 230 and liability for AI-generated content – https://www.vox.com/future-perfect/474563/grok-x-ai-bikini-deepfake-liability-section-230
  • Alanah Pearce (@charalanahzard), commentary on Bluesky – https://bsky.app/profile/charalanahzard.bsky.social/post/3mbgdggq4522e
  • Barbara Mintzes, "Direct to consumer advertising is medicalising normal human experience," BMJ 324 (2002) – https://www.bmj.com/content/bmj/324/7342/910.full.pdf
  • SourceWatch, "Direct-to-consumer advertising" – https://www.sourcewatch.org/index.php/Direct-to-consumer_advertising
  • SourceWatch, citing GAO figures on the return on investment of DTC advertising (same page)
  • John Abraham, "Pharmaceuticalization of Society in Context," Sociology (2010)
  • Ricard Meneu, "La medicalización de la vida… 'medicamentalización'" ["Life medicalization and the recent appearance of 'pharmaceuticalization'"], Farmacia Hospitalaria (2018) – https://www.sciencedirect.com/science/article/pii/S1130634323004269
  • Stanford Medicine News, "ADHD drugs are being prescribed too quickly to preschoolers" (Aug. 29, 2025) – https://med.stanford.edu/news/all-news/2025/08/adhd-preschoolers.html
  • Erin Digitale, Stanford Medicine, on the lack of therapy access leading to quick medication (same article)
  • Public Citizen, "Pharmaceutical Industry Ranks As Most Profitable Industry – Again" (Apr. 18, 2002) – https://www.citizen.org/news/pharmaceutical-industry-ranks-as-most-profitable-industry-again/
  • Public Citizen, statistics on pharma profits versus other industries (same report)
  • Public Citizen, comment comparing pharma advertising to Nike's (same report)
  • Federal Trade Commission, "Romance scammers' favorite lies exposed" (data spotlight, Feb. 9, 2023) – https://www.ftc.gov/news-events/data-visualizations/data-spotlight/2023/02/romance-scammers-favorite-lies-exposed
  • FTC data: $1.3 billion lost to romance scams in 2022; median loss $4,400
  • FTC: 40% of romance scams started on social media, 19% on websites or apps
  • FTC: scammers offering crypto investment "favors"
  • FTC: sextortion reports up eightfold since 2019, targeting youth on social platforms
  • FBI Internet Crime Complaint Center (IC3), PSA I-031620-PSA, "Human Traffickers Continue to Use Popular Online Platforms to Recruit Victims" (Mar. 16, 2020) – https://www.ic3.gov/PSA/2020/PSA200316
  • FBI PSA, examples of convictions (Baltimore 2019, etc.)
  • InvestigateWest, "A Washington teen was trafficked by a man she met on Tinder, she says. Two years later, she's still waiting for justice." (July 25, 2024) – https://www.investigatewest.org/a-washington-teen-was-trafficked-by-a-man-she-met-on-tinder-she-says-two-years-later-shes-still-waiting-for-justice/
  • NortonLifeLock blog, "5 common Tinder scams to avoid" (Aug. 16, 2023) – https://us.norton.com/blog/online-scams/tinder-scams
  • LSE Psychological & Behavioural Science blog, "Swipe Right for Love: How Your Brain's Reward System Powers Online Dating" (June 3, 2024) – https://blogs.lse.ac.uk/psychologylse/2024/06/03/swipe-right-for-love-how-your-brains-reward-system-powers-online-dating/
  • University of Bristol report via the Advertising Standards Authority (ASA), on gambling marketing not labeled as ads (Sept. 2024)
  • "AI-generated child sexual abuse imagery reaching 'tipping point', says watchdog," The Guardian (Oct. 18, 2024) – https://www.theguardian.com/technology/2024/oct/18/artificial-intelligence-child-sexual-abuse-imagery-watchdog-iwf

Citations

  • The Lown Institute, "The Cerebral scandal brings ADHD overprescription into the spotlight" – https://lowninstitute.org/the-cerebral-scandal-brings-adhd-overprescription-into-the-spotlight/
  • "Beliefs of people taking antidepressants about causes of depression …" (PubMed) – https://pubmed.ncbi.nlm.nih.gov/25064809/
  • "Socioeconomic differences in antidepressant use in the PATH …" (PubMed) – https://pubmed.ncbi.nlm.nih.gov/23394713/
  • Studocu, notes on Abraham's "Pharmaceuticalization of Society in Context" (2010) – https://www.studocu.com/en-us/document/brandeis-university/health-community-society-the-sociology-of-health-and-illness/abraham-2010-notes/15222956
  • FTC Business Guidance Blog, "'Love Stinks' – when a scammer is involved" (Feb. 2024) – https://www.ftc.gov/business-guidance/blog/2024/02/love-stinks-when-scammer-involved
  • FTC consumer advice, "What to Know About Romance Scams" – https://consumer.ftc.gov/articles/what-know-about-romance-scams
  • Minc Law, "What to Do If You Fall Victim to a Tinder Snapchat Scam" – https://www.minclaw.com/tinder-snapchat-scam/
  • Deepali Sharma (LinkedIn), "Tinder scam: Restaurants using fake profiles to lure customers" – https://www.linkedin.com/posts/deepali-sharma_scam-tinder-datingapps-activity-7366426766137024515-PJLQ
  • University of Michigan IHPI, "Social media copies gambling methods 'to create psychological cravings'" – https://ihpi.umich.edu/news/social-media-copies-gambling-methods-create-psychological-cravings
  • ASA | CAP, "Gambling" topic page – https://www.asa.org.uk/topic/gambling.html
  • Association of Directors of Public Health, "Protecting the public from being harmed or exploited by gambling …" (June 2022) – https://www.adph.org.uk/2022/06/protecting-the-public-from-being-harmed-or-exploited-by-gambling-and-the-gambling-industry/
  • Bloomberg, "Sex Traffickers in Colombia Use Facebook, Tinder and Airbnb to …" (Mar. 3, 2025) – https://www.bloomberg.com/news/features/2025-03-03/facebook-tinder-airbnb-apps-are-used-for-sex-trafficking-in-colombia
  • Internet Watch Foundation (Facebook post), "AI child sexual abuse imagery is not a future risk – it is a current and …" – https://www.facebook.com/InternetWatchFoundation/posts/ai-child-sexual-abuse-imagery-is-not-a-future-risk-it-is-a-current-and-accelerat/1293382666162354/
  • GOV.UK, "New law to tackle AI child abuse images at source as reports more than double" – https://www.gov.uk/government/news/new-law-to-tackle-ai-child-abuse-images-at-source-as-reports-more-than-double
  • X trending topic page – https://x.com/i/trending/2006844364665008283