Talking about AI has become an industry unto itself.
The New York Times has published more than 200 stories about AI since this January, an average of almost two stories a day. Google News rounds up hundreds of AI articles that come out of major news sites each day. Since I started working on this post yesterday morning, Medium has published about 300 new articles tagged AI.
The more that generative AI tools grow in power, the more thought leaders have to say about them. And that says something counterintuitive about AI itself.
If you're reading any of these articles on AI (or any of the thousands of posts per day on Reddit and other lower-brow places where even more AI discussion takes place), you might conclude that much of this writing about AI is itself being generated by AI. After all, the promise of generative AI, as Writesonic, one of dozens of new AI writing companies, puts it in its marketing materials, is that it can “write 2,000-word articles in seconds.”
And that's the irony.
AI can mash together different versions of history. But it can't generate new information with accuracy.
And it can't come to thoughtful new conclusions—especially about anything recent. ChatGPT currently doesn't know anything more recent than September 2021.
The original content from this article will, some time from now, become source material for some AI. But right now, ChatGPT and its lookalikes rely on training data that is up to two years old. For anyone interested in building authority on a topic via thought leadership, that alone takes AI writing off the table. And by definition, thought leadership is about getting people to follow your ideas; summarizing others' ideas, à la AI, runs counter to that definition.
For instance, this week's biggest AI article in the New York Times was about how Geoffrey Hinton, the “godfather of AI,” quit Google to warn that AI will cause serious harm to humanity. This man, who's spent four decades developing some of the underlying technologies that have led to today's burst of AI tools, told the Times that part of him regrets his life's work.
“I console myself with the normal excuse: If I hadn't done it, somebody else would have,” Hinton told the New York Times. “It is hard to see how you can prevent the bad actors from using it for bad things.”
The world needs to know this information. It's the kind of human judgment and decision-making that we expect “thought leadership” to deliver.
This 1,750-word article, which some AI companies say they could write in seconds, could not possibly have been written by AI. ChatGPT could not have tracked down and interviewed Hinton and gotten him to say what he said to the New York Times. (At least not yet.)
In fact, when I asked ChatGPT what Geoffrey Hinton thinks about AI, it gave me the exact opposite answer that Hinton gave the New York Times.
People can, of course, change their minds. And when experts like Hinton change their minds, it's significant. AI, however, currently has to play catch-up to that human expertise.
Hinton is a thought leader. He's breaking ground and using his authority to teach people about something important.
AI can repackage his ideas later (once they make it into its training data), but it cannot generate them.
And this illustrates not so much a paradox of AI as a misunderstanding of what AI is good for. Namely, AI can execute a vision (if fed accurate information), but it cannot generate a vision—or at least not one humans can rely on. (It's no accident that AI's frequent mistakes are referred to as “hallucinations.”)
One of my companies builds filmmaking software for virtual production. Recently, we had questions about how filmmakers were reacting to changes in this field. We wanted to know how many film companies had built high-tech studios, and whether the hype about how The Mandalorian was made was translating to the rest of Hollywood.
Googling for answers to these questions yielded little good information. So it's no surprise that when we asked ChatGPT about the “state of the virtual production industry,” it could only generate a vague description of what virtual production is. We asked it how many filmmakers were doing virtual production, and what directors, producers, and cinematographers thought about the industry and the current state of its tools.
ChatGPT basically said, “This is clearly important and growing, but I have no idea.” The examples it gave were from four and seven years ago.
The information we wanted did not exist on the internet for AI to pull from.
So we went out and surveyed real people.
We sent surveys to filmmakers in Hollywood, to studio owners around the world, asking them these questions. We had a reporter hunt down how many of these high-tech studios had been built in the last year. And we created a report about it all.
We made charts based on this original research, and wrote an ebook and a press release. Then we took our data and asked some AIs to make charts, and write up an ebook and a press release.
The AI's press release was decent, although inaccurate. (The AI made some stuff up.) Publishing it as-is would have, at best, put false information about our company out into the world (for more AI to gobble up one day). At worst, it would have hurt our brand. But a human could have gone in and fixed the draft that the AI had quickly generated, saving time.
However, the draft that we initially wrote ourselves was better. So we just went with that.
The AI chart tools we tried were more promising, though a little overkill. It took longer to get graphmaker.ai to understand the instructions for interpreting our survey data set than to just have Google Sheets make us some pie charts. Also, the AI botched the calculations as soon as we tried to get it to create labels.
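For comparison, here's roughly what the DIY route looks like. This is a minimal sketch in Python with made-up response tallies, not our actual survey data; it's the kind of chart a spreadsheet produces just as easily:

```python
# A minimal sketch: one pie chart from survey tallies.
# The numbers below are illustrative placeholders, not our real results.
import matplotlib.pyplot as plt

responses = {
    "Already doing virtual production": 42,
    "Evaluating it": 31,
    "No plans yet": 27,
}

plt.pie(
    list(responses.values()),
    labels=list(responses.keys()),
    autopct="%1.0f%%",  # print each slice's share as a whole-number percent
)
plt.title("Virtual production adoption (illustrative data)")
plt.tight_layout()
plt.show()
```

The point isn't that this is hard. It's that it's so easy that routing it through an AI middleman added steps instead of removing them.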
Those kinds of AI tools will get better over time. After all, AI's job is to gather up information and present it how we want it.
After we packaged up our original research into this virtual production report, we wrote some blog posts and released it all on our website. Thousands of people have now downloaded this report, and reporters routinely reach out to us when they have questions about the industry.
At the time of this writing, a search for “state of virtual production” on Google lists our report as the #1 result.
This is an incredibly successful example of thought leadership. Original research, packaged up and introduced to the world, resulted in more informed humans and more authority for our company on a topic we care about.
An AI writing tool is not going to do that for you, no matter how many words per second it can generate.
I wrote this article by hand. To put it together, I did a fair amount of research—all web-based in this case—and then connected the dots from my own knowledge and experience. Even though the amount of original research here is minimal, the amount of original thought is not, and this article itself is an example of… well, thought leadership.
For the opening paragraphs, I had to go count the number of articles on AI that Medium and The New York Times had recently published. (ChatGPT didn't know, so I trained a quick web scraper to do it for me.) I had to copy/paste the Hinton article into HemingwayApp to get a word count, since ChatGPT could only tell me that New York Times articles “generally range from 500 to 2,500 words.”
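For the curious, the scrape-and-count job is only a few lines of code. Here's a minimal sketch, not the scraper I actually used: the URL, the CSS selector, and the filename are hypothetical stand-ins, since every site needs its own selector.

```python
# A minimal sketch of the counting chores above. Placeholders throughout:
# the listing URL and the "article a[href]" selector are hypothetical,
# and "hinton_article.txt" stands in for a locally saved copy of the story.
import requests
from bs4 import BeautifulSoup

LISTING_URL = "https://example.com/tag/artificial-intelligence"

html = requests.get(LISTING_URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# Count the article links on the listing page.
article_links = soup.select("article a[href]")
print(f"Articles found: {len(article_links)}")

# Word count for a saved article, instead of pasting it into an app.
with open("hinton_article.txt", encoding="utf-8") as f:
    text = f.read()
print(f"Word count: {len(text.split())}")
```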
I assume that one day AI will be able to retrieve information like this closer and closer to real time. But for now, it's limited enough that I have to use other tools to do it.
The other source material for this article came from my company, my personal experience, and the story of how my company's report came about. This kind of personal and/or brand storytelling is one of the key things AI will never be able to generate unless we tell it the story.
In other words: if you have a story to tell, that is one of the most valuable sources of thought leadership that you can own.
But just in case, I asked ChatGPT to try to write this article for me anyway.
A wee bit defensive, are we, ChatGPT? I tried again.
The output of the first prompt is straight-up inaccurate, but the result of the second prompt is decent. The problem, of course, is that the resulting writing is completely generic.
Then, for fun, I asked ChatGPT why it thought that article sounded like me.
I have to admit I'm flattered by its description. Though the article in my “style” doesn't do a whole lot of that stuff. (Also, I'm now planning to dig more into what AI thinks of my writing in general. After all, where are the personal stories and curveball analogies in that Shane-AI article? Where are the four-word sentences? Where's the overuse of em dashes? Where are the random digressions, like this one? Don't worry, I'm on the case!)
I like to think of AI as the world's greatest student. The world's greatest student is a more reliable source of information than the average forgetful adult. But the world's greatest student needs teachers to learn from.
And that's where thought leaders come in. More than ever, we need humans—and organizations—that are able to introduce new information, research, and ideas to the world.