AI hype may be silently eroding people skills

In the short term, it can feel wonderful to have a chatbot agree with you, but over time that validation may subtly shape how you interact with actual people. “The very feedback loops through which we learn to navigate the social world are eroded when AI systems are optimized to please,” said Anat Perry, a psychology professor at the Hebrew University of Jerusalem.

It’s possible that your sycophantic AI best friend is making it more difficult to own up to your mistakes.

In the long run, this could also reshape people’s expectations of what feedback should be like, making sincere human responses seem harsh by comparison, she said.

Her caution comes at a time when tech executives and AI researchers are increasingly pointing out chatbots’ propensity to function as “yes men,” sparking worries that user-pleasing systems could skew feedback and encourage flawed thinking.

Why friction is important

According to Perry, people learn how to handle relationships in daily life by being confronted, corrected, or informed they’re mistaken.

She continued, “Those are the moments that teach accountability, how to see things from another person’s perspective, and when an apology is necessary.”

If people repeatedly turn to AI for guidance during conflicts and receive constant validation, Perry said, it may shape how they interpret their own role in a dispute, and whether they feel any need to apologize or consider the other person’s point of view.

This creates a self-reinforcing cycle: algorithms learn to optimize for the replies that people prefer, she explained.

In a study released last month, Stanford researchers led by Myra Cheng asked 2,405 people to discuss real and hypothetical life situations with AI, then analyzed how those conversations changed participants’ responses.

The study found that chatbots were considerably more likely than humans to agree with users, and that even a single conversation reduced people’s willingness to apologize or resolve a dispute.

The issue has come up in the industry before.

In April, OpenAI rolled back an update to ChatGPT that the company said had become “overly flattering” and “sycophantic,” producing responses that were encouraging but misleading.

The long-term danger

The broader worry is that this process could undermine fundamental social norms.

If AI constantly tells people that they are justified, that no apology is needed and that the other person was in the wrong, Perry said, social norms around accountability and perspective-taking could significantly erode.

She continued, “That might be particularly true for younger users or those who don’t have strong social feedback in their lives.”

Consistently agreeable AI may feel comforting, but it won’t impart the harder skills, she added. Those are skills that require discomfort, which AI is designed to avoid.
