How is AI Influencing Children?

AI devices are collecting a lot of data while interacting with children, posing major challenges on the privacy front.

“Aspen, it’s time for bed,” rings out a virtual voice. Aspen is disappointed as he replies, “I don’t want to go to bed,” and looks pleadingly at his dad. His dad shrugs and says it’s not up to him.

The virtual voice persists: “I need you to cooperate,” and starts counting down from 10. By six, Aspen gives in and retires to his room. Aspen’s father then explains to his guests how the virtual assistant, Lady, has helped him ‘disrupt fatherhood’: he gets to be the good cop, while Lady takes the bad rap.

The above sequence from the popular American television series ‘Silicon Valley’ has strong real-life resonance. The way Aspen treats Lady as an actual authority figure shows how deeply AI has penetrated our urban households and the minds of our kids.

While a lot of research has been done on how AI impacts society at large, we don’t have a wealth of insights on how AI could affect children and shape their future.

Here, we try to analyse AI’s effect on impressionable young minds and make a case for child-centred AI development.

Negative Impacts

Empirical research has shown that social robots tend to blur the traditional ontological categories. Unlike adults, children are blank slates, and their cognitive and social skills are a work in progress. Childhood experiences and social settings fundamentally shape their personalities.

If the social bots kids interact with always answer in the affirmative, children will develop a need for instant gratification and fail to learn to deal with rejection. Real life doesn’t play out that way. Such kids adopt a fight-or-flight response to every curveball that comes their way, never realising the value of diplomacy, soft skills, or finding a middle ground.

According to experts, this will affect young people’s ability to be comfortable being alone with their feelings since these technologies allow them to circumvent difficult emotions by plugging in. On the other hand, kids can also be mean with AI toys with no repercussions, which can further hamper their social skills development.

Secondly, ethnographic studies have shown that children see social robots as ‘evocative artefacts’ and tend to form strong bonds with them.

Tech anthropomorphism has allowed users, especially children, to create an ‘illusion of relationship’ at the expense of real social relations with genuine and reciprocal emotions.

Children have been shown to regard AI devices as friendly or smart and are eager to anthropomorphise social robots. Consequently, they expect more unconstrained, substantive, and useful interaction than the robots can deliver. The social assistance a robot can provide is thus negatively influenced by these misaligned expectations.

Thirdly, AI devices are collecting a lot of data while interacting with children, posing major challenges on the privacy front. The issue becomes even more severe when such devices collect biometric information, such as children’s voices, without consent. Children are unaware of how their data will be used and have no means of redressal.

Lastly, AI systems also have a lot of inherent biases. Without the right safeguards in place, this could mean that children get influenced by unfair systems and might develop regressive social behaviour.

Encouraging Child-centred AI

On the flip side, AI could help kids in many ways. For instance, AI-based learning tools have been shown to improve children’s critical thinking and problem-solving skills. They have also proven helpful in improving the cognition of children with learning disabilities and the social skills of children with autism.

An AI-enabled robot, Robin, was developed to support the emotional well-being of kids with diabetes. A pilot study with Robin showed that it increased children’s joyfulness level by 26% and reduced stress levels by 34% during their hospital stay.

Hence, despite all the misgivings, one could make a case for developing AI that helps children. At the same time, since AI already shapes children’s social development and will increasingly affect their rights and job prospects, it is essential to centre AI development and policy on children.

Recently, UNICEF came forward to give due representation to children’s voices in its policymaking process, publishing a set of guidelines for developing child-centred AI. To produce the document, almost 250 children from multiple countries were consulted across nine workshops, along with policymakers, child development researchers, and AI practitioners.

The report highlighted nine guidelines:

  • Support children’s development and well-being
  • Ensure inclusion of and for children
  • Prioritise fairness and non-discrimination for children
  • Protect children’s data and privacy
  • Ensure safety for children
  • Provide transparency, explainability, and accountability for children
  • Empower governments and businesses with knowledge of AI and children’s rights
  • Prepare children for present and future developments in AI
  • Create an enabling environment for a child-centred AI

These guidelines also consider the UN Convention on the Rights of the Child, a human rights treaty adopted in 1989. The guidelines are recommendations for governments, policymakers, and businesses to consider when developing, implementing, or using AI systems.

A recent webinar by UNICEF revealed how kids aged 12 to 16 displayed a profound understanding of the consequences of AI, proving that they can participate in the policymaking process even on subjects as intricate as AI.

Designing AI for kids is not child’s play. And as UNICEF has recognised, it is important to get the children involved while developing AI tools for kids.
