Experts warn of a nightmarish future filled with endless AI-generated propaganda

As generative AI has exploded into the mainstream, both excitement and concern have quickly followed. And unfortunately, according to a new collaborative study from researchers at Stanford, Georgetown, and OpenAI, one fear—that language-generating AI tools like ChatGPT could turn into chaos engines of mass disinformation—is not only possible, but imminent.

“These language models hold the promise of automating the creation of persuasive and misleading texts for use in influence operations, rather than relying on human labor,” the researchers write. “For society, this development raises a new set of concerns: the prospect of highly scalable — and perhaps even highly persuasive — campaigns by those seeking to covertly influence public opinion.”

“We analyzed the potential impact of generative language models on three well-known dimensions of influence operations – the actors who lead the campaigns, the deceptive behavior used as tactics, and the content itself,” they added, “and concluded that language models could significantly influence how influence operations will be conducted in the future.”

In other words, the experts found that language-modeling AI will undoubtedly make it easier and cheaper to generate massive amounts of disinformation, effectively turning the Internet into a post-truth hellscape. And users, companies, and governments alike should prepare for the impact.

Of course, this wouldn’t be the first time a new and widely adopted technology has thrown a messy, misinformation-laden wrench into world politics. The 2016 election cycle was one such reckoning, as Russian bots made a concerted effort to spread divisive, often false or misleading content as a means of disrupting the American political conversation.

But whatever those bot campaigns actually accomplished, their technology has since come to look archaic compared to the likes of ChatGPT. It’s still imperfect — the writing tends to be good but not great, and the information it provides is often badly wrong — but ChatGPT remains remarkably good at generating compelling, confident-sounding content. And it can produce that content at an astonishing scale, eliminating almost all need for more expensive, time-consuming human effort.

So with the incorporation of language-modeling systems, it becomes cheap to keep disinformation flowing – and that disinformation can be deployed faster, more reliably, and with far more damaging effect.

“The potential of language models to compete with human-written content at low cost suggests that these models—like any powerful technology—can provide significant advantages to propagandists who choose to use them,” the study says. “These benefits could expand access to more actors, enable new influence tactics, and make campaign messages much more tailored and potentially effective.”

The researchers note that because AI and disinformation are changing so rapidly, their research is “speculative in nature.” Still, it’s a bleak picture of the Internet’s next chapter.

That said, the report wasn’t all doom and gloom (although there’s certainly plenty of both involved). The experts also suggest several ways to counter the new dawn of AI-driven disinformation. And while these measures are also imperfect, and in some cases may not even be feasible, they are still a start.

For example, AI companies could be held to stricter development policies, with their products ideally withheld from launch until proven safeguards such as watermarking are built into the technology. In the meantime, educators could work to promote media literacy in the classroom, with curricula that will hopefully grow to include the subtle clues that give away AI-generated content.

Elsewhere, distribution platforms could work on developing a “proof of identity” feature that goes a little deeper than the “check the box with a donkey eating ice cream” CAPTCHA. At the same time, these platforms could build out teams that specialize in identifying and removing bad actors who use AI on their sites. And in a move straight out of the Wild West, the researchers even propose the use of “radioactive data,” a complicated measure that would involve training machines on traceable data sets. (As is probably obvious, this “atomic bomb” of a plan, as Casey Newton of Platformer put it, is extremely risky.)

Each of these proposed solutions comes with its own learning curves and risks, and none can fully combat AI misuse on its own. But we have to start somewhere, especially considering that AI programs seem to have quite a serious head start.

READ MORE: How ‘radioactive data’ could help detect malicious AI [Platformer]
