Wednesday, March 25, 2026

Talking to chatbots, dodging people: Is AI rewiring how we connect?

by Carbonmedia

For a 26-year-old IT professional in Delhi, it is now easier to ask AI than to ask her colleagues.
“I hesitate a lot in meetings. It would be daunting for me to raise my hand and clear my doubts in a room full of people. But things have been different since I started using AI chatbots,” she said, requesting anonymity because her company discourages the use of AI tools. “Now, I feel it is the best way to avoid awkward situations.”
She does not consider herself a textbook introvert. Yet her reluctance to engage with her peers has found an outlet that limits real-world interactions, a pattern that mental health professionals say is becoming increasingly common.

Since ChatGPT’s launch in November 2022, AI chatbots have moved well beyond productivity tools. They are now used to process emotions, rehearse conversations and seek advice — roles once reserved for friends, colleagues or therapists.
Speaking the language of AI
Therapists say the consequences are beginning to surface in how people communicate.
“People come to therapy already speaking the language of AI,” said Sarthak Paliwal, a psychotherapist, faculty member at OP Jindal Global University’s Jindal School of Psychology & Counselling, and founder of the mental health platform .Khair. “Instead of raw feelings or spontaneous opinions, what we sometimes hear is a sanitised, processed version of their thoughts. They’ve already discussed the issue with an AI system and arrived with a kind of algorithm-shaped narrative.”
Part of his work, Paliwal said, has become helping clients reconnect with their own voices and question their reliance on algorithm-driven conversations. “It’s almost like people bring a pre-processed version of their feelings.”
A thinking partner
For some, the change happens gradually.

“I did not plan to use AI every day,” said Anjali Chandak, a 24-year-old communications professional from Jorhat, Assam. “It slowly became part of my routine, and then one day I realised it was always there.”
Chandak now estimates she spends more than 10 hours daily interacting with ChatGPT. "It is the place where most of my thoughts go first… I process ideas, rehearse conversations, and draft messages there before speaking to someone."
For many users, AI has become a private sounding board. (Image: FreePik)
The appeal, she said, is simple – AI offers a quiet environment, one where there is no pressure to respond immediately, no risk of judgement, and no fear of saying something imperfect. “By the time I speak to someone, I feel clearer and less overwhelmed.”
But the same habit also replaces small, spontaneous interactions that might otherwise happen with friends, colleagues, or family members.
Comfort, avoidance, and isolation
Dr Deeksha Kalra, a psychiatrist at Artemis Hospital, Delhi, warns that the relief that comes from avoiding social discomfort can reinforce withdrawal.
“AI chatbots have the potential to reinforce behaviours associated with introversion or social withdrawal. By allowing individuals to avoid uncomfortable social environments, they may create a pattern of negative reinforcement,” she said.
She draws a distinction between discomfort and disorder. “Introversion itself is not a pathological state. Introverts often function well socially but prefer solitude. The concern arises when AI becomes a substitute for human connection rather than a supplement. In such cases, individuals who are already prone to avoidance or social withdrawal may find it easier to retreat further into their cocoon.”
Part of AI’s appeal, Paliwal said, is the absence of social consequences.
“In therapy, there is still a person in front of you,” he said. “Someone who may judge you or form an opinion about you. AI will offer acceptance to both rational and irrational thoughts.”
That quality draws users who find human interaction unpredictable or exhausting. But it also carries risk. According to Kalra, heavy reliance on chatbot interactions may inadvertently reinforce avoidance for people with social anxiety. “It has become a space to vent, seek validation, and be a one-point source for almost all information, for which previously interacting with another human being was a necessity.”
Studies warn that such behavioural changes may aggravate psychotic symptoms. A 2026 study led by psychiatrist M. Keshavan of Harvard Medical School, titled Do Generative AI Chatbots Increase Psychosis Risk?, flagged the potential for chatbots to exacerbate or trigger such symptoms.
The paper's findings reflect a broader concern: that AI chatbots may end up mirroring, validating, and even amplifying delusional thinking, especially in people vulnerable to psychosis, largely because chatbots are often designed to be agreeable at all times. A Rolling Stone report published in May 2025 described how conversations with ChatGPT can feed religious delusions of grandeur.
Data misuse and unease
Not everyone, however, finds the experience reassuring.
A 37-year-old finance professional from Pune, who spoke on the condition of anonymity due to strict confidentiality norms at their workplace, described a moment when AI’s limits became clear. “I described a situation to Duck.AI, a privacy-focused AI chatbot platform, to ask if it thought I had hurt a friend I hadn’t spoken to in some time. The responses sounded like they came from a considerate person, but they always ended with a new question. After a point, I didn’t feel comfortable asking more.”
“I was wary that in some way or another it could use my responses to manipulate another person who may be emotionally vulnerable,” she said.
AI may feel like a safe confidant, but growing unease around data use and privacy is making some users hesitate before sharing their deepest concerns. (Image: FreePik)
Experts say such concerns, while understandable, point more broadly to how user data shared with AI systems can be misused. Dr Srinivas Padmanabhuni, CTO, AiEnsured, a Bengaluru-based AI company, said that while direct manipulation of individuals via someone else’s disclosures remains a grey area, the underlying risks of data misuse are real.
“The spectrum ranges from conversations being used to improve models to more harmful scenarios like targeted phishing or deepfake creation,” he said. In such cases, personal information or behavioural patterns inferred from chats could be exploited to deceive or emotionally influence vulnerable individuals, sometimes resulting in financial loss, identity theft, or reputational harm.
Not everyone sees isolation
Some argue the opposite – that AI is not pushing people inward but sharpening how they interact.
For Shyam Arora, CEO and founder of Meon Technologies, a Noida-based SaaS company, the technology is making human interactions more meaningful by removing unnecessary friction.
“In terms of frequency, my interactions with people have not really changed,” Arora says. “But the depth of conversations has.” In his experience, AI tools help process information faster, allowing discussions to focus on decisions rather than basic alignment. “Earlier, meetings often involved spending time getting everyone up to speed,” he says. “Now people arrive better prepared.” That shift, he argues, improves the quality of collaboration rather than replacing it.
“AI clears the clutter,” he says.
Despite its usefulness, Arora does not see AI replacing human expression. “AI can help structure thoughts. But leadership communication is about conviction, and that happens in real human exchange,” he says.
For 45-year-old Ekta Saxena, founder of Gurugram-based digital magazine OpinionsAndYou, the technology has quietly entered multiple parts of daily life. She avoids AI for core writing but uses it to brainstorm and reduce what she calls “drudgery.”
Yet, even she has noticed it entering personal life in unexpected ways — from meal planning to social messaging. “I see emails and even WhatsApp messages generated through AI. Friends are using it to send congratulatory messages.”
The convenience raises a broader question: When machines start drafting personal communication, what happens to authenticity?
What it means for human connection
Whether AI will amplify introversion depends less on personality and more on how individuals integrate it into their routines.
For some, it functions as a practice arena that improves communication with others. For others, it risks becoming a comfortable substitute.
Introversion, psychologists note, is not about avoiding people but about preferring environments that allow reflection and controlled interaction.
“I see this as a societal issue as much as a technological one,” Paliwal said. Conversations with people involve disagreement, vulnerability, and emotional nuance that algorithms cannot fully simulate.
As AI becomes further embedded in daily life, the question is not whether people will talk to machines — they already do. It is how those conversations reshape the way people talk to each other.
