
People Are More Prone to AI-Generated Disinformation, Study Shows

In an era where disinformation continues to plague our digital landscape, a concerning revelation emerges from a recent study. Research indicates that artificial intelligence (AI) can produce misinformation that is more compelling than that created by humans. With the emergence of sophisticated AI models such as GPT-3, which can generate outputs that appear authentic and persuasive, the credibility gap between AI-generated disinformation and human-written content becomes a matter of concern. This MIT Technology Review article explores how and why individuals are more inclined to trust AI-generated disinformation than human-generated disinformation.

According to the article, recent research shows that misinformation produced by AI is more persuasive to individuals than disinformation authored by humans. The findings suggest that people were 3% less likely to spot false tweets created by AI than those authored by humans. This trust gap raises concerns, as the problem of AI-generated disinformation is projected to expand dramatically. Giovanni Spitale, the lead researcher from the University of Zurich, expresses concern about the efficacy of AI-generated deception as well as its cost-efficiency and speed, the article notes. More advanced AI models, such as OpenAI's GPT-4, are expected to widen this gap in future research.

Participants in the study were more likely to trust AI-written misleading tweets, probably due to the organized and condensed form of AI-generated writing, the article suggests. The emergence of generative AI makes it easier to create compelling but false information, raising the risk of conspiracy theories and misinformation campaigns. Recognizing AI-generated text remains difficult, with AI text-detection techniques still in the early phases of research.

Finally, the article indicates that, while OpenAI recognizes the possibility of its AI tools being weaponized for misinformation, the impact of such campaigns and the demographics most vulnerable to AI-generated inauthentic material warrant further investigation. It is critical to handle the issue with caution, since moderation on internet platforms continues to struggle against the propagation of misinformation.

With the rising power and accessibility of AI models, the potential for large-scale misinformation operations grows, creating challenges for detection and mitigation. The preceding text underscores the troubling reality that AI-generated disinformation can be more compelling than human-generated deception.
