Finding AI’s voice

Today’s voice assistants are still a far cry from the hyper-intelligent thinking machines we’ve been musing about for decades. That’s because the technology is actually a combination of three distinct skills: speech recognition, natural language processing, and voice generation.

Each of these skills already presents huge challenges. To master just the natural language processing part, you pretty much have to recreate human-level intelligence. Deep learning, the technology driving the current AI boom, can train machines to become masters at all sorts of tasks—but only one at a time. And because most AI models learn from thousands or millions of existing examples, they end up replicating patterns in historical data, including the many bad decisions people have made, such as marginalizing people of color and women.

Still, systems like the board-game champion AlphaZero and the increasingly convincing fake-text generator GPT-3 have stoked the flames of debate regarding when humans will create an artificial general intelligence—machines that can multitask, think, and reason for themselves. In this episode, we explore how machines learn to communicate—and what it means for the humans on the other end of the conversation.
