Using Artificial Intelligence to Bridge Communication Gaps in Family Relationships
During a tense autumn conversation about land and legacy with my mother, I found myself trapped in a familiar pattern of misunderstanding. We were discussing what gets passed down through generations, where ideals and inheritance inevitably collide. She spoke in terms of fairness and duty, while I spoke the language of belonging, intimacy, and feeling truly seen. We weren't exactly fighting, but we were missing each other by inches that somehow felt like miles.
The Moment of Technological Intervention
After ending the call, I stepped out into the Seattle drizzle, frustrated and curious. I opened ChatGPT on my phone and began venting about our conversation as I walked. "I just don't understand her sometimes!" I muttered into my device. "It's like we're speaking completely different languages!" Then I decided to try something unconventional: I asked the AI to help translate my mother's perspective into concepts that would make sense to me.
My prompt described my mother as a boomer hippie lesbian living in the woods who does spiritual work for a living, and me as an urban Gen X entrepreneur working in tech and media. I asked the AI to help me understand what she was trying to communicate, based on everything it knew about both of us. The request initially felt ridiculous, like running family therapy through a toaster, but the response arrived within seconds and reframed everything.
The AI's Perspective-Shifting Response
ChatGPT responded with remarkable insight: "Based on what you've told me, your mother's mindset may be shaped by a strong sense of purpose and a belief in making a tangible difference in the world. It's not necessarily about devaluing immediate family, but may be about prioritizing what she sees as her role in a larger narrative. It could also be a way to express her love and care on a grander scale, even if it doesn't always translate into traditional family dynamics."
Suddenly, I could see that my mother's decisions were about responsibility more than rejection. The AI wasn't translating her words literally; it was interpreting the worldview underneath them. What I had been hearing as emotional distance might actually have been coming from a place of personal integrity. ChatGPT politely acknowledged my processing: "It sounds like you're really digging deep to understand her perspective."
Testing the AI's Theory in Real Conversation
I know that AI can hallucinate with complete confidence, so I didn't accept its response as absolute truth. However, the theory was solid enough to make me pause and reconsider. Once the AI had explained my mother's perspective in language I could understand, I recognized that she might be living her values in ways I hadn't been able to appreciate before.
The next time we spoke, I tested the AI's interpretation directly: "Mom, this is what I heard you say, and what I think you meant... Does that sound right?" She confirmed that yes, I was finally hearing her correctly. I laughed at myself—a grown woman needing a chatbot to explain her own mother—but had to admit that AI had helped me listen differently and understand what I hadn't been able to hear before.
Expanding AI's Role in Professional Communication
This experience fundamentally shifted how I thought about these systems. We already know artificial intelligence can make us work faster; my question became whether it could also help us communicate better. What if part of AI's potential isn't efficiency but empathy? Could it help us relate to each other with more patience and kindness?
A few weeks later, I tried using AI for similar support during a challenging client interaction. After I suggested some optimizations to a nonprofit organization, the client responded with resistance that bordered on combativeness. I was seething in my home office, convinced the client had been unfair or even hostile. With my Slack window open and my fingertips poised to type a defensive reply, I decided to pause.
AI's Uncomfortably Accurate Self-Reflection
I opened ChatGPT in a browser tab and angrily described the situation, asking: "Based on what I've told you about this client and project, help me understand their perspective better. Show me where my thinking might be distorted. And based on what you know about me and my patterns, what am I not seeing here?"
The reply arrived immediately and proved uncomfortably accurate. When asked to name my blind spots, it listed my patterns one by one: catastrophizing, selective evidence gathering, and emotional reasoning. It also noted that for some nonprofit workers, the workload itself provides gratification. Essentially, by trying to make this person's workload lighter, I may have been threatening their mission-driven identity.
When I asked it to show me what I wasn't seeing, the often sycophantic AI didn't offer reassurance. Instead, it clearly described the architecture of my overreaction, naming the familiar thinking ruts I fall into but can't always recognize. The only way it coddled me was by congratulating me for trying to understand the client's perspective better.
The Ethical Framework for AI-Assisted Empathy
Sitting in front of my laptop, seconds from firing off a defensive message, I exhaled and reassessed my interpretation of the client situation. It turned out my own blind spots were the primary problem, not the client's behavior. I did indeed have work to do, mostly concerning my catastrophizing tendencies and lack of insight into the nonprofit worker mindset.
There are plenty of legitimate reasons to be wary of this technology, especially when we begin turning to machines for reflection or comfort. The companies behind artificial intelligence are far from ethical—they are profit machines, not moral entities. Using AI to support empathy and compassion requires substantial internal grounding. I've spent years in therapy, meditation, and the unglamorous work of self-examination, which provides a crucial foundation.
Establishing Responsible Usage Guidelines
For people new to self-inquiry, this approach can feel disorienting and may be best learned with human support from therapists, coaches, or consultants. We must also remember that AI has already been used in ways that ended very badly: chatbots spinning up delusions, users descending into psychosis, even interactions that fueled suicides. Those extreme cases aside, I believe the risks of AI lie more in how we use the tool than in anything inherent to the tool itself.
When we ask artificial intelligence to take our side, it usually will comply. But when we request it to widen the frame, it often performs that function surprisingly well. I now maintain a single rule when discussing relationships with AI: I ask it to help me broaden my perspective and connect more thoughtfully with other people. Sometimes this means requesting: "Help me write this message so it's clear and kind but still maintains boundaries." Other times it involves asking: "Translate this person's words into my framework so I can better understand them."
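For readers who, like me, live partly in code, that rule is easy to turn into a small standing habit. Below is a minimal sketch of what it could look like, assuming the OpenAI Python SDK; the model name, the system prompt, and the perspective_check helper are my own illustrative choices, not anything the product prescribes.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# My one rule, encoded as a standing instruction: widen the frame,
# don't take my side.
PERSPECTIVE_RULE = (
    "Help me broaden my perspective and connect more thoughtfully "
    "with the other person. Do not simply take my side."
)

def perspective_check(situation: str) -> str:
    """Ask the model to widen the frame on a tense interaction."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model works
        messages=[
            {"role": "system", "content": PERSPECTIVE_RULE},
            {
                "role": "user",
                "content": (
                    situation
                    + "\n\nWhat might I not be seeing here? "
                    + "Where are my cognitive biases showing?"
                ),
            },
        ],
    )
    return response.choices[0].message.content

Nothing about this is sophisticated. The point is that the widening questions are baked in before I have a chance to ask the model to validate my grievance.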
The Ongoing Journey of Human Connection
I always take a moment to pose two key questions: "What might I not be seeing here? Where are my cognitive biases showing?" The answers aren't flawless, and I consider them conversation starters rather than definitive solutions. Anyone who has used these tools understands how confidently wrong they can be—they resemble over-eager interns: useful for brainstorming but never for final decisions.
I'm not pretending AI is benevolent. It's powerful, flawed, and somewhat peculiar. If you're skeptical, that's good—it means you're paying attention. There's absolutely a real paradox in using machines to become more human. But we live in a moment when empathy feels endangered and public life appears brutally polarized. Perhaps a machine can help us practice listening by slowing us down enough to question our own certainties.
My mother and I still disagree about plenty: land, legacy, and the generational mathematics of what matters most. But when those conversations resurface, I notice a distinct difference in tone. I'm able to approach discussions with less emotional heat and more genuine curiosity. Sure, it often feels faintly absurd, confiding in a digital toaster... but perhaps absurdity is just one more doorway to empathy.