Forget Trusting Content—Trust People: Rebuilding Credibility in the Age of AI

In the last year, we've seen an explosion of content, mainly driven by generative AI. Tools like GPT, Claude, Gemini, and others now create emails, white papers, research articles, and, I would say, most of the marketing content we see today.

The result is that we are drowning in content, which can now be produced in a fraction of the time (and at a fraction of the cost) it took just three or four years ago.

Yet, this productivity gain has triggered a major crisis of credibility.

When we read an article (like this one), we now question its origin and its author. Was it really written by a human, by ChatGPT, or by a mix of both? For most people, this uncertainty is a major barrier to trust.

We thought that technology would save us. However, those hopes are collapsing: watermarking and AI detection tools are showing serious issues.

Anyway, the question "Human or machine?" sounds so 2023. The question for today is:

"Does it matter who created the content"? 

Hybrid Content Is Here to Stay

AI is now a partner for writing. That's true for most people, and it started a long time ago: from spelling and grammar checks, to autocomplete, to... Google search. For non-native writers like me, the transition is natural.

People can use AI to support writing at different stages of the creation process:

  • validate ideas (does what I am writing make sense?)

  • expand on initial ideas (are there any aspects I may have missed in looking at this question?)

  • check the structure of the article (is there a better way to organize or rewrite it?)

  • check the actionability (what questions would a reader have after reading this article? This one comes from Stuart McFaul)

  • check that it sounds like "my style" (with a custom GPT)

  • and the usual spellcheck, grammar check...

At the end of the day, the goal is to produce an article that is credible and effective (I am talking business content, not poetry), and most B2B articles and white papers are now co-authored.

So what sets good content apart?

  1. Original Perspective – Does the content offer a viewpoint grounded in experience or strategy?

  2. Domain Knowledge – Does it reflect depth, not just surface fluency?

  3. Credibility – Does a real person with a reputation stand behind it?

This isn't just theory. In a recent interview published by Columbia Journalism Review (https://www.cjr.org/feature-2/how-were-using-ai-tech-gina-chua-nicholas-thompson-emilia-david-zach-seward-millie-tran.php), Emilia David, an AI reporter at VentureBeat, articulated this boundary clearly:

“Writing is hard, and it is my least favorite task, but I do not want AI to write for me… I want my readers to know that I am not just rattling off facts but helping them make informed decisions.”

Trust Isn’t in the Content—It’s in the Network

Marc Meyer captures this shift well in this very interesting article (https://www.linkedin.com/pulse/impending-inflection-point-ai-future-social-media-marc-meyer-v0cge/): “Synthetic content will become the norm… The future of social media won’t be defined by content creation, but by the ability to discern what’s real and who to trust.”

If we agree that content alone can't carry trust, then what can?

Having worked in social media for so many years, let me offer a solution:

People, Relationships, Social Signals.

What really matters is not whether a sentence or an article was generated by AI, by a human, or by a hybrid. It is whether it was endorsed by someone with expertise, credibility, and... influence.

This is the logic behind peer review in science, citations in academia, and editorial standards in journalism. It’s not just what is said—it’s who’s standing behind it.

Peer Validation at Scale

At eCairn, we have listened to tribes, communities, and micro-influencers for decades, and we asked: what if we could scale peer validation through networks of experts, key opinion leaders, and influencers (of all types)?

As an example, with our platform's audience intelligence engine, we examined the behaviors of over 20,000 AI professionals across research, media, and enterprise. We weren't looking for the most liked or most followed; we were looking at whom experts engage with, and what kinds of content they share or debate.

Applying it to this article (I agree it's a bit of a disturbing recursion), we surfaced quotes like this one:

  • "Trust is no longer a soft value. It’s a monetizable relationship, a strategic asset… Because when anything can be rendered, only trust can be earned.” Lukas N.P. Egger

These weren’t editorial picks. They emerged organically through network validation—the same way trust circulates in any professional community.

Mapping Credibility: The AI Influencer Graph

Our approach is not just about quotes—it’s about structures.

We map social networks among AI thought leaders: who follows whom, who collaborates with whom, and who cites whose work.

[Figure: AI social graph]

The social graph forms the architecture of trust.
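
To make the idea concrete, here is a minimal sketch of what such a trust graph could look like in code. It is an illustration only: the people, the edges, and the choice of networkx with PageRank are my assumptions, not a description of eCairn's actual engine.

```python
# Minimal sketch: a directed graph of AI thought leaders, where an edge
# means "source endorses target" via a follow, citation, or collaboration.
# All names and edges are hypothetical placeholders.
import networkx as nx

G = nx.DiGraph()
edges = [
    ("researcher_a", "researcher_b", "cites"),
    ("reporter_c",   "researcher_b", "follows"),
    ("founder_d",    "researcher_b", "collaborates"),
    ("researcher_b", "founder_d",    "follows"),
    ("reporter_c",   "founder_d",    "cites"),
]
for source, target, kind in edges:
    G.add_edge(source, target, kind=kind)

# PageRank is one standard way to turn "who points at whom" into a
# credibility-style score: endorsements from well-endorsed people weigh more.
scores = nx.pagerank(G)
for person, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.3f}")
```

The point of the structure is that credibility is computed from relationships between people, not from the content itself.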

It's not 100% bulletproof.

Keep in mind, people run campaigns designed for influence and counter-influence, and many KOLs actually work for companies or can be paid to post and like. However, if 5-10 people from this group, all key opinion leaders in AI, have liked, shared, or commented on an article, it has a good chance of being valuable and mostly human-engineered, not just a remix of old content.
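
As a rough illustration of that heuristic, the sketch below counts how many distinct people from a vetted KOL list have engaged with an article and flags it once the count clears a threshold. The KOL set, the engagement log, and the cutoff of 5 are hypothetical assumptions chosen to echo the 5-10 range above.

```python
# Rough sketch of the "5-10 KOLs engaged" heuristic described above.
# The KOL set and the engagement records are hypothetical placeholders.
KOLS = {"researcher_a", "researcher_b", "reporter_c", "founder_d", "analyst_e"}

# Each record: (person, article_url, action)
engagements = [
    ("researcher_a", "https://example.com/post", "like"),
    ("reporter_c",   "https://example.com/post", "share"),
    ("founder_d",    "https://example.com/post", "comment"),
    ("analyst_e",    "https://example.com/post", "like"),
    ("researcher_b", "https://example.com/post", "comment"),
]

def kol_endorsement_count(url: str) -> int:
    """Count distinct KOLs who liked, shared, or commented on the article."""
    return len({person for person, article, _ in engagements
                if article == url and person in KOLS})

THRESHOLD = 5  # assumed cutoff, echoing the 5-10 range in the text
count = kol_endorsement_count("https://example.com/post")
print(f"{count} KOL endorsements ->",
      "likely valuable" if count >= THRESHOLD else "not enough signal")
```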

So this brings me to the following questions, depending on which side of the content you stand on:

How are you filtering signal from noise in today's AI-saturated content landscape?

How can you ensure your content earns the trust and amplification of key opinion leaders in your field? What makes content shareworthy to the experts who matter?
