
Millions of Redditors were apparently included in a secret artificial intelligence experiment, conducted without their knowledge or consent. A Reddit thread recently exposed the experiment, triggering outrage across the internet and reigniting a fierce debate about AI ethics, data consent, and the responsibilities of tech researchers.
The study was conducted by researchers from Stanford and the University of Pennsylvania, who used Reddit as a live testing ground to evaluate how AI-generated content would influence real users. The researchers analyzed over 47 million posts and comments, and AI-generated replies were secretly injected into Reddit threads to observe how users interacted with them.
To no one’s surprise, the experiment has drawn plenty of backlash, with critics arguing that it represents a clear breach of ethical standards, particularly around informed consent and user autonomy.
“Improper and highly unethical experiment that is deeply wrong on both a moral and legal level.”
- Ben Lee, Chief Legal Officer at Reddit
This is a complex situation due to the gray area between public data and ethical AI use
For months, we’ve been reporting on how AI companies use everything published anywhere on the internet to train their chatbots, including stealing content from sites like this one. And that’s part of what makes this situation so complex.
You see, there is a distinction between ethically sourced AI data and merely public data, which is how Reddit posts are generally classified. But the fact that posts are public doesn’t mean Redditors have consented to being part of a behavioral experiment, particularly not one involving AI-generated manipulation.
The researchers behind the study have released a statement describing the research:
Over the past few months, we used multiple accounts to post on CMV. Our experiment assessed LLMs’ persuasiveness in an ethical scenario, where people ask for arguments against views they hold. In commenting, we did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible. While we did not write any comments ourselves, we manually reviewed each comment posted to ensure they were not harmful.
We recognize that our experiment broke the community rules against AI-generated comments and apologize. We believe, however, that given the high societal importance of this topic, it was crucial to conduct a study of this kind, even if it meant disobeying the rules.
The group of researchers has requested to remain anonymous. We’ve also learned that the accounts created to post this AI-generated content posed as rape victims, trauma counselors specializing in abuse, and a Black man opposed to Black Lives Matter, among other personas. All of these accounts have been suspended, and many of the comments have been deleted from Reddit.
“This is one of the worst violations of research ethics I’ve ever seen.”
- Casey Fiesler, an information scientist at the University of Colorado
Fiesler went on to state on Bluesky that “Manipulating people in online communities using deception, without consent, is not ‘low risk’ and, as evidenced by the discourse in this Reddit post, resulted in harm.”
As you might expect, the major theme in the Reddit thread about all this is trust.