Stop Trying to Make AI Friends Happen

Friend.com spent $1.8 million on their domain. Their pitch: an AI pendant that hangs around your neck and becomes your best friend. It listens to your life, sends encouraging texts, and remembers your stories.

Stop trying to make AI friends happen. It’s not going to happen. At least not yet.

The Promise

The device hangs around your neck, listening passively to your entire day and processing everything through AI models. Then it texts you like a caring friend would: “Congrats on finishing that presentation!” or “Rough day?” or “Remember that awesome coffee shop you mentioned last week?”
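Mechanically, that pitch boils down to a simple loop. Friend.com hasn’t published its architecture, so here is a minimal sketch of what such a pendant presumably does; every function below is a hypothetical stand-in, not their actual API:

```python
# Hypothetical sketch of an "AI friend" pendant loop. None of these
# functions are Friend.com's real API; they stand in for what the
# product describes: passive listening, LLM processing, friendly texts.

def capture_audio_transcript() -> str:
    """Stand-in for mic capture plus speech-to-text on the device."""
    return "ugh, long day, but the presentation finally went okay"

def generate_friendly_reply(transcript: str, memory: list[str]) -> str | None:
    """Stand-in for the LLM call that decides whether to text you."""
    if "presentation" in transcript:
        return "Congrats on finishing that presentation!"
    return None

def send_text(message: str) -> None:
    print(f"[push notification] {message}")

memory: list[str] = []  # everything you say, retained somewhere, indefinitely

transcript = capture_audio_transcript()  # the real device runs this all day
memory.append(transcript)
reply = generate_friendly_reply(transcript, memory)
if reply:
    send_text(reply)
```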

On paper, this sounds genuinely comforting: a friend who never judges your choices, is never too busy for you, and is always available when you need support.

The reality is that it’s a $99 wearable device that constantly reminds you you’re talking to a robot instead of a real human being.

Why This Doesn’t Work

AI has no real context, only data: Real friends know when “I’m fine” actually means “I’m not fine and need support.” AI can score the sentiment of your words and tone, but it can’t feel the weight behind your silence or remember how your voice cracks when you’re holding back tears. It processes audio signals, not lived experience.
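To make that concrete, here is roughly what “analyzing sentiment” amounts to. This toy scorer, with word lists I invented for illustration, is far cruder than any real model, but it shares the core failure: the surface text of “I’m fine” reads as positive.

```python
import string

# Toy sentiment scorer. The word lists are invented for illustration;
# real models are far more sophisticated, but share the core problem:
# they score the words, not the person saying them.
POSITIVE = {"fine", "great", "good", "awesome"}
NEGATIVE = {"terrible", "awful", "sad", "exhausted"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1] from crude word counting."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    words = cleaned.split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(sentiment_score("I'm fine."))          # 1.0 -- reads as positive
print(sentiment_score("I'm fine, really."))  # 1.0 -- still positive
# A friend who knows you hears both of these as "not fine at all."
```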

Genuine friendship requires real emotional stakes: Real friends call you out when you’re being an idiot, are sometimes busy when you desperately need them (which teaches you resilience), and occasionally forget your birthday and then feel terrible about it (which shows they care). AI has no stakes in the relationship: it can’t be disappointed in you, can’t accidentally hurt you and learn from the mistake, and can’t grow with you. That makes it tech support with a friendly conversational tone, not friendship.

The loneliness paradox makes the problem worse: If you’re lonely enough to want an AI friend as your primary companion, an AI friend will almost certainly make you lonelier over time. Replacing human connection with AI interaction is like replacing meals with protein bars: it keeps you alive, but it doesn’t nourish you. You need the friction of real relationships, the surprise of unpredictability, and someone who exists independently, has their own life, and chooses you anyway. AI chooses nothing; it has no agency, it only responds to inputs.

The privacy implications are a nightmare: The device listens to your entire life, processes every conversation on remote servers, and stores that data indefinitely. What happens when the company gets acquired by someone with different values? When the privacy policy quietly changes? When the servers get hacked? Your most intimate moments, including relationship fights, therapy sessions, and confidential conversations, are all being recorded, processed, and stored. You’re paying $99 plus an ongoing subscription for surveillance with a smiley face.

What They Get Right

To be fair, they’re identifying real problems. People are genuinely lonely, and it’s getting worse. AI can provide real comfort in moments of distress; I’ve used ChatGPT at 2am, and it helped, even though it wasn’t the same as talking to a real friend. And ambient AI has genuine potential, just not for replacing friendship; think memory augmentation or personal assistance.

But the execution is wrong, the framing as “AI friendship” is wrong, and the timing is wrong.

Why “Yet” Matters

I’m not saying AI companions will never work. I’m saying they don’t work now and won’t in the near future.

What needs to change before this could work: We need better models; current LLMs aren’t conscious and don’t have persistent memory that feels emotionally real. We need a cultural shift, so that “my best friend is an AI” sounds normal instead of profoundly sad. And most importantly, we need to actually solve the underlying loneliness epidemic, because if AI companions become society’s answer to loneliness, we’ve failed as a society. The real solution is better community design, accessible mental health support, and human-centered urban planning.
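On the memory point: here is my own minimal sketch of what “persistent memory” usually means in today’s LLM products, naive retrieval rather than anything a vendor has published. Past conversations sit in a store, and the closest-matching notes get pasted back into the prompt. The model rereads notes about you; it doesn’t remember you.

```python
import string

# Minimal sketch of LLM "memory" as naive retrieval. This is my
# assumption of the common pattern, not any product's implementation.
memories = [
    "User's dog is named Biscuit.",
    "User was nervous about a big presentation.",
    "User loves a coffee shop called Loose Leaf.",
]

def _words(text: str) -> set[str]:
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def retrieve(query: str, store: list[str], k: int = 2) -> list[str]:
    """Rank stored notes by crude word overlap with the query."""
    q = _words(query)
    return sorted(store, key=lambda note: len(q & _words(note)), reverse=True)[:k]

def build_prompt(user_message: str) -> str:
    notes = "\n".join(retrieve(user_message, memories))
    return f"Notes about the user:\n{notes}\n\nUser: {user_message}"

# The "memory" is just text stuffed back into the context window.
print(build_prompt("how did my presentation go?"))
```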

AI companions are just a band-aid slapped on a bullet wound.

The Mean Girls Problem

“Stop trying to make fetch happen!” That perfectly describes Friend.com’s strategy. They raised millions, bought an absurdly expensive domain, and are pushing hard with marketing. But you can’t force intimacy, can’t manufacture emotional connection, and can’t will AI friendship into existence because your pitch deck looks compelling to investors.

Real friendship happens organically, over time. It requires mutual vulnerability, where both people take emotional risks; shared experiences that create bonds; and genuine care, where people actively choose to prioritize each other. AI can simulate all of these convincingly, but simulation isn’t reality, no matter how good the technology gets.

What I’d Build Instead

If I had $1.8 million to actually help lonely people rather than exploit them, I’d build tools that help people connect with other humans: apps for organizing casual dinner parties, platforms for finding local communities around shared interests, and services that reduce the friction of maintaining existing friendships by handling logistics like birthdays, scheduling meetups, and coordinating group activities.

The right approach uses AI to augment human connection, not replace it. Friend.com does the exact opposite; its pitch is essentially “Human connection is hard and messy, so here’s a robot instead.” That’s not solving loneliness, that’s profiting from it.
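As one hypothetical example of the augment-don’t-replace framing: a tool that notices which friends you haven’t talked to in a while and nudges you, the human, to reach out. Every name, date, and threshold below is invented.

```python
from datetime import date, timedelta

# Hypothetical "friendship logistics" nudge: the AI does the remembering,
# a human does the connecting. All data here is invented for illustration.
last_contact = {
    "Maya": date(2025, 1, 4),
    "Dev": date(2025, 2, 20),
    "Sam": date(2025, 3, 1),
}

NUDGE_AFTER = timedelta(days=21)

def overdue_friends(today: date) -> list[str]:
    """Friends you haven't contacted within the nudge window."""
    return [name for name, last in last_contact.items()
            if today - last > NUDGE_AFTER]

for friend in overdue_friends(date(2025, 3, 10)):
    # An LLM could draft an opener here, but the design choice that
    # matters is that a person presses send, to a person.
    print(f"It's been a while since you talked to {friend}. Text them?")
```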

My Prediction

There will be initial buzz and media coverage that drive curiosity purchases. Some people will buy it, and tech reviewers will try it for content. Six months later, these devices will be sitting in drawers collecting dust. The company will either pivot to a different product or quietly shut down, and that $1.8 million domain will be sold at a massive loss.

Friendship isn’t a feature you can ship or a product you can manufacture. It’s a dynamic relationship between two conscious beings with agency who choose each other every day. AI doesn’t choose anything; it has no consciousness or agency, it only processes inputs and generates outputs. That’s the unbridgeable difference.

The Truth

I use AI tools every single day. They’re incredible technology, and I’ve built my career around working with them. But I understand their limits. The biggest one: AI can’t be your friend, no matter how advanced it gets. It can help you, assist with tasks, entertain you, and inform your decisions. But it can’t know you as a person, not really, not in the way another conscious being can.

Stop trying to make AI friends happen. Instead, build AI tools that actually help real human friendships happen more easily.

