What If Content AI Was Actually Smart?

A few years ago, a client asked me to train a content AI to do my job. I was in charge of content for a newsletter sent to more than 20,000 C-suite leaders. Each week, I curated 20 well-written, subject-matter-relevant articles from dozens of third-party publications.

But the client insisted that he wanted the content AI to pick the articles instead, with the ultimate goal of fully automating the newsletter.

I was legitimately curious if we could do it and how long it would take. For the next year, I worked with a business partner and a data scientist to deconstruct what makes articles “good” and “interesting.” Our end result was… mediocre.

The AI could surface articles that were similar to ones the audience had engaged with in the past, cutting down the time I needed to curate content by about 20 percent. Turns out, there was plenty we could teach an AI about “good” writing (active sentences, varied verbs), but we couldn’t make it smart — which is another way of saying we couldn’t teach it to recognize the ineffable nature of a fresh idea or a dynamic way of talking about it.

Ultimately my client pulled the plug on the AI project and eventually on the newsletter itself. But I’ve been thinking about that experience over the past few months as large language models (LLMs) like GPT-3 by OpenAI have gained broader mainstream attention.

I wonder: Would we be more successful today using the GPT-3 API?

GPT-3 is the foundation of more familiar products like ChatGPT and Jasper, which have an impressive ability to understand language prompts and craft cogent text at lightning speed on almost any topic.

Jasper even claims it allows teams to “create content 10X faster.” But the problematic grammar of getting 10X faster at something (I think they mean it takes one-tenth of the time?) highlights the negative flip side of content AI.

I’ve written about the superficial substance of AI-generated content and how these tools often make stuff up. Impressive as they are in terms of speed and fluency, the large language models today don’t think or understand the way humans do.

But what if they did? What if the current limitations of content AI — limitations that keep the pen firmly in the hands of human writers and thinkers, just like I held onto in that newsletter job — were resolved? Or put simply: What if content AI was actually smart?

Let’s walk through a few ways in which content AI has already gotten smarter, and how content professionals can use these content AI advances to their advantage.

5 Ways Content AI Is Getting Smarter

To understand why content AI isn’t truly smart yet, it helps to recap how large language models work. GPT-3 and other transformer models (like Google’s PaLM or Amazon’s AlexaTM 20B) are deep learning neural networks that evaluate all of the data (i.e., words) in a sequence (i.e., a sentence) simultaneously, along with the relationships between them.

To train GPT-3, the developers at OpenAI used web content, which provided far more training data, and supported far more parameters, than earlier models, enabling more fluent outputs for a broader set of applications. Transformers don’t understand those words, however, or what they refer to in the world. The models simply learn how words are typically ordered in sentences and the syntactic relationships between them.

As a consequence, today’s content AI works by predicting the next words in a sequence based on millions of similar sentences it has seen before. This is one reason why “hallucinations” — or made-up information — as well as misinformation are so common with large language models. These tools are simply creating sentences that look like other sentences they have seen in their training data. Inaccuracies, irrelevant information, debunked facts, false equivalencies — all of it — will show up in generated language if it exists in the training content.
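To see how mechanical this is, here’s a minimal sketch of next-word prediction using the small, open-source GPT-2 model and Hugging Face’s transformers library. The model and prompt are my own choices for illustration; GPT-3-class models do the same thing at far greater scale.

    # Ask a small language model which words it predicts come next.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    inputs = tokenizer("Content marketing helps brands build", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # a score for every token in the vocabulary

    # The "writing" is just a probability distribution over possible next tokens.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k=5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode([int(token_id)]):>12}  {float(prob):.3f}")

Notice that there is no fact-checking step anywhere in that loop, which is why fluent nonsense comes out so easily.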

And yet, these are not necessarily unsolvable problems. In fact, data scientists already have a few ways to address these issues.

Solution #1: Content AI Prompting

Anyone who has tried Jasper, Copy.ai, or another content AI app is familiar with prompting. Basically, you tell the tool what you want to write and sometimes how you want to write it. There are simple prompts — as in, “List the advantages of using AI to write blog posts.”

Prompts can also be more sophisticated. For example, you can input a sample paragraph or page of text written according to your firm’s rules and voice, and prompt the content AI to generate subject lines, social copy, or a new paragraph in the same voice and using the same style.

Prompts are a first-line method for setting rules that narrow the output from content AI. Keeping your prompts focused, direct, and specific can help limit the chances that the AI will generate off-brand or misinformed copy. For more guidance, check out AI researcher Lance Eliot’s nine rules for composing prompts to limit hallucinations.
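To make this concrete, here’s a sketch of a style-guided prompt sent through OpenAI’s GPT-3-era Completion API in Python. The model name, style sample, and placeholder API key are illustrative assumptions, not settings from Jasper or Copy.ai.

    # Prompt a GPT-3-era model with a voice sample plus a specific ask.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    style_sample = (
        "Our voice is plainspoken and optimistic. We avoid jargon and "
        "write short, active sentences."
    )

    prompt = (
        f"Here is a sample of our brand voice:\n{style_sample}\n\n"
        "In the same voice, write three subject lines for a newsletter "
        "about AI writing tools."
    )

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
    )
    print(response.choices[0].text.strip())

The more constraints the prompt carries (voice, audience, format), the narrower and safer the output tends to be.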

Solution #2: “Chain of Thought” Prompting

Consider how you would solve a math problem or give someone directions in an unfamiliar city with no street signs. You would probably break the problem into multiple steps and solve each one, using deductive reasoning to find your way to the answer.

Chain of thought prompting leverages a similar process of breaking down a reasoning problem into multiple steps. The goal is to prime the LLM to produce text that reflects something resembling a reasoning or common-sense thinking process.

Scientists have used chain of thought techniques to improve LLM performance on math problems as well as on more complex tasks, such as inference — which humans automatically do based on their contextual understanding of language. Experiments show that with chain of thought prompts, users can produce more accurate results from LLMs.
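Here’s a rough sketch of a chain of thought prompt against the same GPT-3-era API. The answered example teaches the model the step-by-step pattern to imitate; the arithmetic scenario is invented for illustration.

    # One worked, step-by-step example primes the model to "reason" the same way.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    cot_prompt = """Q: A newsletter links 20 articles a week. If curation takes
    15 minutes per article, how many hours does curation take each week?
    A: Let's think step by step. 20 articles x 15 minutes = 300 minutes.
    300 minutes / 60 = 5 hours. The answer is 5 hours.

    Q: A blog publishes 12 posts a month. If each post takes 3 hours to write
    and 1 hour to edit, how many hours does the blog require each month?
    A: Let's think step by step."""

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=cot_prompt,
        max_tokens=200,
        temperature=0,  # deterministic output suits reasoning tasks
    )
    print(response.choices[0].text.strip())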

Some researchers are even working to create add-ons for LLMs with pre-written chain of thought prompts, so that the average user doesn’t need to learn how to write them.

Solution #3: Fine-tuning Content AI

Fine-tuning involves taking a pre-trained large language model and training it to fulfill a specific task in a specific field by exposing it to relevant data and eliminating irrelevant data.

A fine-tuned language model ideally has all the language recognition and generative fluency of the original but focuses on a more specific context for better results. Codex, the OpenAI derivative of GPT-3 for writing computer code, is a fine-tuned model.

There are hundreds of other examples of fine-tuning for tasks like legal writing, financial reports, tax information, and so on. By fine-tuning a model using copy on legal cases or tax returns and correcting inaccuracies in generated results, an organization can develop a new tool that can reliably draft content with fewer hallucinations.
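As a sketch, here’s roughly what kicking off a fine-tune looked like with OpenAI’s GPT-3-era tooling. The prompt/completion pairs, file name, and base model here are placeholder assumptions; a real project would use thousands of vetted domain examples.

    # Fine-tuning starts with prompt/completion pairs from your domain.
    import json
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    examples = [
        {"prompt": "Summarize the holding in Smith v. Jones ->",
         "completion": " The court held that ..."},
        {"prompt": "Explain the filing deadline for Form 1040 ->",
         "completion": " Individual returns are due ..."},
    ]
    with open("training_data.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")

    # Upload the data and start a fine-tune job on a base model.
    upload = openai.File.create(file=open("training_data.jsonl", "rb"),
                                purpose="fine-tune")
    job = openai.FineTune.create(training_file=upload.id, model="davinci")
    print(job.id)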

If it seems implausible that government-run or regulated fields would use such untested technology, consider the case of a Colombian judge who reportedly used ChatGPT to draft his ruling (without any fine-tuning).

Solution #4: Specialized Model Development

Many view fine-tuning a pre-trained model as a fast and relatively inexpensive way to build new models. It’s not the only way, though. With enough budget, researchers and technology providers can leverage the techniques of transformer models to develop specialized language models for specific domains or tasks.
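For a flavor of the from-scratch route, here’s a compressed sketch that pretrains a small GPT-style model on a single domain corpus with Hugging Face’s libraries. The corpus file, model size, and training settings are placeholder assumptions; efforts like the one described next run at vastly larger scale.

    # Pretrain a small, domain-specific transformer from scratch.
    from datasets import load_dataset
    from transformers import (DataCollatorForLanguageModeling, GPT2Config,
                              GPT2LMHeadModel, GPT2TokenizerFast, Trainer,
                              TrainingArguments)

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token

    # A small architecture: a specialized model doesn't need GPT-3's scale.
    config = GPT2Config(n_layer=6, n_head=8, n_embd=512)
    model = GPT2LMHeadModel(config)

    dataset = load_dataset("text", data_files={"train": "clinical_notes.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="domain-lm", num_train_epochs=1,
                               per_device_train_batch_size=8),
        train_dataset=tokenized["train"],
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()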

For example, a group of researchers at the University of Florida, working in partnership with Nvidia, an AI technology provider, developed a health-focused large language model to evaluate and analyze the language data in the electronic health records used by hospitals and clinical practices.

The result was reportedly the largest-known LLM designed to evaluate the content in clinical records. The team has already developed a related model based on synthetic data, which alleviates privacy worries from using a content AI based on personal medical records.

Solution #5: Add-on Functionality

Generating content is often part of a larger workflow within a business. So some developers are layering additional functionality on top of content generation to deliver more value.

For example, as referenced in the section about chain of thought prompts, researchers are trying to develop prompting add-ons for GPT-3 so that everyday users don’t have to learn how to prompt well.

That’s just one example. Another comes from Jasper, which recently announced a set of Jasper for Business enhancements in a clear bid for enterprise-level contracts. These include a user interface that lets teams define their organization’s “brand voice” and apply it to all the copy they create. Jasper has also developed bots that let users tap Jasper inside enterprise applications that require text.

Another solution provider called ABtesting.ai layers web A/B testing capabilities on top of language generation to test different variants of web copy and CTAs to identify the highest performers.
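Conceptually, that layering looks something like this simulation: a language model proposes CTA variants, and simple traffic-splitting measures which one wins. Everything here (the copy, the traffic, the click rates) is invented for illustration; it is not ABtesting.ai’s actual implementation.

    # Simulate an A/B test over two LLM-generated CTA variants.
    import random

    variants = {
        "A": "Start your free trial today",   # e.g., LLM-generated CTA #1
        "B": "See the platform in action",    # e.g., LLM-generated CTA #2
    }
    results = {v: {"shows": 0, "clicks": 0} for v in variants}

    for visitor in range(10_000):
        variant = random.choice(list(variants))       # split traffic 50/50
        results[variant]["shows"] += 1
        true_rate = 0.04 if variant == "A" else 0.05  # simulated behavior
        if random.random() < true_rate:
            results[variant]["clicks"] += 1

    for v, r in results.items():
        print(f"Variant {v}: {r['clicks'] / r['shows']:.2%} click-through")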

Next Steps for Leveraging Content AI

The techniques I’ve described so far are enhancements or workarounds of today’s foundational models. As the world of AI continues to evolve and innovate, however, researchers will build AI with abilities closer to real thinking and reasoning.

The quest for the Holy Grail of “artificial general intelligence” (AGI) — a kind of meta-AI that can fulfill a variety of different computational tasks — is still alive and well. Other researchers are exploring ways to enable AI to engage in abstraction and analogy.

The message for humans whose lives and passions are wrapped up in content creation is: AI is going to keep getting smarter. But we can “get smarter,” too.

I don’t mean that human creators should try to beat an AI at the kinds of tasks that require massive computing power. With the advent of LLMs, no human will out-produce a content AI on nurture emails and social posts.

But for the time being, the AI needs prompts and inputs. Think of those as the core ideas about what to write. And even when a content AI surfaces something new and original, it still needs humans who recognize its value and elevate it as a priority. In other words, innovation and imagination remain firmly in human hands. The more time we spend using those skills, the wider our lead.

Learn more about content strategy every week. Subscribe to The Content Strategist newsletter for more articles like this sent directly to your inbox.
