Reinvent Your Content Discovery Strategy in the Age of AI

ChatGPT, Bing, and Google’s Bard grab the headlines. The evolving use of generative AI content tools prompts fear, excitement, and chaos among marketers.

It’s clear why. You need help. Forty-six percent of marketers say one person (or group) is in charge of their organization’s content calendar. Who wouldn’t want to taste the AI apple?

More importantly, you need to get your brand’s content where your audience is. Even with its limitations, machine learning has changed how many people search. Google has long used AI to deliver the right answers so searchers don’t have to click for more information. AI content generators, like ChatGPT, also attract a share of searchers who prefer those tools for more detailed answers (that don’t require a click).

How do you adapt to get your content discovered by your targeted audience in this AI-entrenched world? Take a pause to reflect and strategize.

 

AI brings limitations

Start the update for your content discovery strategy by understanding the downside of AI tools. Consider these three factors:

 

1. Machines can’t understand intent

Social media and search algorithms are improving at offering readers the content they want. But understanding user intent remains a work in progress and always will.

Let’s say someone searches for “jaguars.” Do they want information about the animal, the Jacksonville, Fla., football team, or the British car manufacturer? Google wouldn’t know, and it wouldn’t ask them to clarify. It would take its best guess, and the searcher would likely need to refine their search at least once.

A machine can’t know definitively what a reader wants, so people must refine their searches. That data feeds into the AI tool to improve the algorithm, but it will never be perfect.

 

2. Nuance is lost on AI

A machine struggles to understand nuance. It communicates complex topics in a black-and-white way. As Noam Chomsky, the “father of modern linguistics,” explains:

[Machine-learning programs’] deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case.

Machine-learning programs fail to communicate with nuance. They “learn” from online information, accurate or not, and fail to view complex situations through multiple lenses.

 

3. AI spreads misinformation and bias

Chatbots like ChatGPT and Bard can’t decipher what’s accurate information and what’s fake news. Bard made headlines when it got a fact wrong about the James Webb Space Telescope in its first demo. The AI tool made a mistake because it scraped and spat out misinterpreted news, extending the life cycle of false information.

At the same time, machine-learning programs are “trained” by people, so human bias is a real concern. For example, someone trained a computer model to identify melanomas using clinical images. Unfortunately, 95% of the images in the training data set depicted white skin, which raises the question, “Would the computer model miss or over-diagnose skin cancers in patients of color?”

AI-created content can be wrong, biased, and misused. Therefore, it needs to be fact-checked. You can better address the challenges in your content discovery strategy by understanding them.
