This blog was originally featured in the July 16th, 2025 newsletter found here: INBOX INSIGHTS, July 16, 2025: AI Beefs, Which AI Models to Use
AI Beefs: What’s Really Grinding Our Gears
So there I was last week, sitting at my desk working on slides for our AI Kit course, and I thought, “You know what would be smart? Let me use AI to help create these slides about AI.”
Oh. The irony.
Three hours later, after crafting increasingly specific prompts, tweaking outputs, and essentially describing exactly what I wanted in excruciating detail, I realized something: I could have just made the damn slides myself in half the time. The amount of work that goes into prompting a system to make what I want is sometimes equal to just doing the work myself.
And that got me thinking about all the other ways AI frustrates us. Last week we talked about AI stigmas, and the response was incredible – so many people felt seen and heard. So this week, I asked our community on Slack and LinkedIn about their biggest “beef” with AI. This isn’t about solutions (though we’ll get there in future posts). This is about letting people air their grievances and realize they’re not alone in this era of “AI everything.”
The “It’s Making Us Dumber” Beef
Hannah S. hit on something that I think a lot of us feel but don’t always voice: “I feel responsible to myself to not be lazy and outsource all my thinking to AI. Might just be me being paranoid, but I don’t want to get dementia when I’m 50. Gotta workout my brain.”
Hannah, you’re not alone. There’s this weird guilt that comes with using AI – like we’re somehow cheating or taking the easy way out. Shane C. echoed this sentiment: “I don’t want AI thinking for me. I have a small business and I feel like I need to be the differentiator, which means my ideas are that differentiator.”
Here’s what I find interesting: both Hannah and Shane aren’t anti-AI. They’re pro-human brain. There’s a difference.
The “It’s Not Actually Smart” Beef
Todd B. called out one of my biggest pet peeves: “My biggest one is people thinking it’s actually AI (specifically LLMs) and thinking it actually thinks.”
Joy S. had the perfect analogy: “My husband says it’s the new high definition. Everybody used to say things were HD that weren’t actually HD. It’s the same now with AI.”
This drives me up the wall too. When people treat LLMs like they’re sentient beings making conscious decisions, it creates unrealistic expectations. Which brings us to…
The “Why Are You Like This?” Beef
Kelsey R. perfectly captured the frustration of AI’s confidence in its own wrongness: “when it gives me a very stupid wrong answer, or didn’t actually research what I needed it to. So I say ‘that’s not fully true’ and it goes ‘YOU’RE RIGHT!’ Like thanks, Chevin (ChatGPT and Kevin). I know that.”
Chevin. I’m dead.
This ties into what Tris mentioned about “inconsistent behavior and outputs from a Gem GPT with the same inputs.” Lisa K. expanded on this: “even if you use the same AI every time, every time you tweak that prompt, it gives you an answer that essentially takes a totally different train of thought than the first time.”
The “Human Problems, Not AI Problems” Beef
Christopher P. summed it up in the way only he can: “I have no beefs with AI, just beefs with the stupid humans who make it.”
Leslie dove deeper into this: “My AI beefs center around bias trained into the model(s) and the absolutely careless (and ubiquitous) deployment of AI everywhere with no way to delete it or opt out. Both of which are human problems, like Chris said, not AI problems.”
Joy S. mentioned environmental impacts and “people who use it to just play around without thinking about those impacts.” Again, human problems.
The “Corporate Replacement Fantasy” Beef
Sunny H. went full ranty pants mode (and I love her for it): “Companies who think that AI can replace critical thinking, creativity, or people (especially in situations where a human touch is make or break – like customer success or support).”
Jess H. felt this one hard: “AI support chatbots are driving me NUTS lately!! YES I looked through your help articles, no I don’t need you to send them to me.”
We’ve all been there. Trapped in bot hell when we just need to talk to a human who can actually solve our problem.
The “Mirror, Mirror” Beef
Pancho C. had perhaps the most honest response: “my complaint is not so much about AI but wish it would read my empty mind… I hate how AI can become a mirror showing and reflecting myself.”
Ouch. That one hit different.
He’s talking about how AI exposes our own lazy thinking – when we expect it to magically know what we want without us doing the work to clearly communicate it. It’s like holding up a mirror to our own unclear expectations.
The “Everything Is Falling Apart” Beef
Michael B. brought up the sustainability question: “AI is shaping up to be the next internet bubble but the hype is so loud no one’s hearing the voices of reason.”
Koreen P. added: “Companies think AI will solve their data problems, but it will only make them worse.”
These aren’t just technical concerns – they’re pointing to a fundamental misunderstanding of what AI can and can’t do.
What This All Means
Reading through these responses, I’m struck by how many of these “AI beefs” are really human beefs. We’re frustrated by:
- Our own lazy thinking
- Companies making bad decisions about AI implementation
- The gap between AI hype and reality
- Our tendency to anthropomorphize technology
- The lack of clear communication about what AI actually is
The most honest among us (like Pancho) admit that AI often serves as an uncomfortable mirror, reflecting back our own unclear thinking and unrealistic expectations.
The Bottom Line
Your AI frustrations are valid. You’re not alone in feeling like the emperor has no clothes sometimes. The technology itself isn’t the enemy – it’s how we’re thinking about it, implementing it, and using it that’s creating these problems.
And you know what? That’s actually good news. Because human problems have human solutions. More on that in the coming weeks!
What’s your biggest AI beef? Reply to this email or join our free Slack group, Analytics for Marketers.
– Katie Robbert, CEO