“Aspen, it’s time for bed,” rings out a virtual voice. Aspen is disappointed as he replies, “I don’t want to go to bed,” and looks pleadingly at his dad. His dad shrugs and says it’s not up to him.
The virtual voice persists: “I need you to cooperate,” and starts counting down from 10. By six, Aspen gives in and retires to his room. Aspen’s father then explains to his guests how the virtual assistant, Lady, has helped him ‘disrupt fatherhood’: he gets to be the good cop, and Lady takes all the bad rap.
The above sequence from the popular American television series ‘Silicon Valley’ has strong real-life resonance. The way Aspen treated Lady as an actual authority figure shows how deeply AI has penetrated our urban households and the minds of our kids.
While a lot of research has been done on how AI impacts society at large, we know far less about how AI could affect children and shape their future.
Here, we analyse AI’s effect on impressionable young minds and make a case for child-centred AI development.
Negative Impacts
Empirical research has shown that social robots tend to blur traditional ontological categories: children struggle to classify them neatly as either living beings or inanimate machines. Unlike adults, children are blank slates, and their cognitive and social skills are a work in progress. Childhood experiences and social settings fundamentally shape their personalities.
If the social bots children interact with always answer in the affirmative, kids may develop a need for instant gratification and never learn to deal with rejection. Real life doesn’t play out that way. Such children can end up taking a fight-or-flight response to every curveball that comes their way, never realising the value of diplomacy, soft skills, or middle ground.
According to experts, this could affect young people’s ability to be comfortable alone with their feelings, since these technologies allow them to circumvent difficult emotions by plugging in. On the other hand, kids can also be mean to AI toys with no repercussions, which can further hamper the development of their social skills.
Secondly, ethnographic studies have shown that children see social robots as ‘evocative artefacts’ and tend to form strong bonds with them.
Tech anthropomorphism allows users, especially children, to create an ‘illusion of relationship’ at the expense of real social relations built on genuine and reciprocal emotions.
Children have been shown to regard AI devices as friendly or smart and are eager to anthropomorphise social robots. Consequently, they expect unconstrained, substantive and useful interactions that go well beyond what the robots can actually deliver. The social assistance a robot can offer is thus undermined by these misaligned expectations.
Thirdly, AI devices collect a lot of data while interacting with children, posing major challenges on the privacy front. The issue becomes even more severe when such devices collect biometric information, such as voice recordings, without consent. Children are unaware of how their data will be used and have no means of redressal.
Lastly, AI systems carry a lot of inherent biases. Without the right safeguards in place, children could be influenced by unfair systems and develop regressive social behaviour.
Encouraging Child-centred AI
On the flip side, AI could help kids in many ways. For instance, AI-based learning tools have been shown to improve children’s critical thinking and problem-solving skills. They have also proven helpful in improving the cognition of children with learning disabilities and the social skills of children with autism.