 
Are you lying to me?
 
Introduction
There’s lots of excitement about AI, but have you ever noticed that it’s mostly coming from sales and marketing teams… and not from buyers? I don’t see many people raving about how AI made their buying experience better. We shouldn’t ignore that.
That doesn’t mean the future isn’t exciting. AI has the potential to make everything we do truly buyer-centric, from initial marketing all the way to long-term client success. But if AI gives us the ability to build better human connections, we need to acknowledge the fundamental ingredient for any connection: trust.
Right now, we’re at a fork in the road between doing what’s easy and doing what’s authentic, credible, and trustworthy.
The Battleground
To understand where trust is being tested, we need to look at where AI most directly intersects with buyer interactions that we expect to be inherently human and personal.
Most marketing assets are in the clear here, as long as they add real value. The reason is simple: when you’re reading a whitepaper, you’re not asking, ‘Did Sally really think that?’ The issue arises when you’re hearing directly from a human, which happens most often with emails and with posts and comments from individuals on platforms like LinkedIn.
When you’re engaging directly with a human being, you expect a personal connection. When AI starts to replace that human effort, the risk of deception rises.
This is happening right now. A large share of emails, especially outbound, are now written by AI but sent by an SDR. You’ve probably also seen a sharp rise in AI-generated comments on LinkedIn and other channels.
And this is just the beginning. Even areas that have remained inherently human in recent years are now being replicated by AI, video in particular.
This is where the problem lies. If AI is driving the interaction, and the buyer believes it’s a human, the line between effective communication and outright deception gets blurry. Is that relationship built on authenticity, or on a lie?
Intent and Delivery
The motivations behind using AI to create content aren’t always nefarious. Often, it’s about filling a gap where someone lacks the skills or resources to write their own content. For example, I’m dyslexic and frequently use AI to improve what I’ve written.
That said, when businesses and individuals get excited about AI doing things ‘at scale’, it shifts from playing a supporting role in the creative process to being the primary driver of content, and we enter a gray area.
It’s one thing for AI to help a person express their ideas; it’s another for AI to replace that person entirely while they pass the output off as their own authentic view. That’s straight-up deception.
If the buyer is led to believe they’re engaging with a human’s thoughts, ideas, or effort, when in reality they’re interacting with an AI, it’s a lie.
The Core of the Issue
A salesperson doesn’t have to be liked, but they do need to be credible. That starts with trust.
When a salesperson sends an email that says, “I saw your post on LinkedIn, and it made me think of [insert observation],” they’re creating a social contract. That one small word—“I”—is making a promise. It says the salesperson personally saw the post, thought about it, and crafted a message. The buyer is left with the impression that the outreach is genuine and personalized.
But more and more often, that “I” is a lie. The salesperson didn’t read the post; AI did. AI picked up on the keywords, generated the insight, and crafted the email. Yet the message pretends it came from a person. And somehow, this has become normal.
The very first words a buyer reads from a seller can be a lie, and we’re just accepting it.
Trust is the currency of any relationship, and once it’s gone, it’s nearly impossible to regain.
Time and Quality Won’t Solve This
Most AI-generated content is… not good enough, and far too easy to spot (game-changer, anybody?). But that won’t always be the case. There is no logical reason why AI won’t eventually generate better content across the board than the majority of people.
But it goes so far beyond quality now.
When I open an email, a LinkedIn post, or even a social media comment, the first thing I think is, “Who is this coming from?” The second is, “Are they lying to me?” Only after that do I consider the quality of the content.
It’s not enough for AI to create good content. Buyers care about who is reaching out to them and whether they can trust that person. If the answer is no, it doesn’t matter how perfectly written the message is.
Transparency Wins
This shouldn’t be rocket science. The solution is exactly as obvious as it seems.
Don’t lie to people.
There’s nothing wrong with AI itself. If AI is used to write content, that should be declared. If it’s writing emails, those emails should come from an AI, not from a human faking personal effort.
The moment you try to pass off AI-generated content as a personal human effort, you’re breaking that trust.
Conclusion
As I mentioned earlier, we are at a fork in the road. We can continue to let AI take over at the cost of authenticity. Let’s call this a seller-first mentality.
Or we can remember that our job as salespeople and marketers is to help our buyers, to serve them, to support them. If we acknowledge that, we can stop breaking social contracts and build relationships from a place of trust.
Last week, Will Aitkin said that businesses that lead with radical transparency will have a competitive edge. He is not wrong.
Stop lying to your buyers.