Authenticity’s Premium in the Age of AI
The flood of AI-generated content isn’t coming; it’s already here.
Microsoft and LinkedIn reported that three-quarters of global knowledge workers are using AI on the job.
While there are clear benefits to using AI in our role as communicators, we need to be especially careful when it comes to content generation.
In this year’s communications trend radar, researchers at the Academic Society for Management & Communication call this issue “information inflation.”
They say, “The value of information is diminishing due to the continuous surge in the volume and accessibility of data and content.”
Sure, there are a few tell-tale signs of AI-generated copy.
As one example, Dr. Jeremy Nguyen, a senior AI researcher and lecturer at Swinburne University of Technology, found an exponential increase in the word “delve” in PubMed papers in 2023 and 2024, coinciding with ChatGPT’s widespread adoption.
We’ve also noticed unusually heavy use of words like “tapestry” and “weave,” as well as a disproportionate reliance on “not only/but also” sentence constructions, in AI-generated copy.
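For the curious, here is a rough illustration of what such a word-frequency check looks like in practice. The short Python sketch below counts a handful of marker words per thousand words of text; the word list and the metric are purely illustrative assumptions, not a validated detection method.

```python
# Illustrative sketch only: counting a few words often overrepresented in
# AI-generated copy. The word list and rate metric are assumptions for
# demonstration, not a reliable detector.
import re
from collections import Counter

MARKER_WORDS = {"delve", "delves", "delving", "tapestry", "weave", "weaves"}

def marker_word_rate(text: str) -> float:
    """Return marker-word occurrences per 1,000 words of the given text."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    hits = sum(counts[w] for w in MARKER_WORDS)
    return hits / len(words) * 1000

sample = "Let us delve into the rich tapestry of ideas we weave together."
print(f"{marker_word_rate(sample):.1f} marker words per 1,000 words")
```

Even a crude count like this makes the limitation obvious: plenty of human writers use these words too, which is part of why simple tells are an unreliable signal.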
Despite signs like these, it’s getting harder to tell if content has in fact been AI-generated.
Researchers at Cornell recently showed that GenAI text detectors’ accuracy hovers around 40%, and that this already-low rate is cut in half when machine-generated content is manipulated.
While AI tools can help enhance our output, unique points of view and writing styles with real character will become even more critical for demonstrating authenticity in our communications, building credibility, and reaching the intended audience.