
An Eye for AI May Leave the Whole World Blind

The artificial intelligence of modern machines, meaning the ability of computer programs to mimic the conceptual reasoning of the human brain, whether as software or as a physical machine, is improving year on year. In September of this year, the Allen Institute for Artificial Intelligence showed that an AI could score over 90% on an 8th-grade science test. As recently as 2016, the Institute found that no machine could manage more than 60%. There is every reason to think this rate of progress will continue, and with it, the kinds of questions long relegated to science fiction will become the pressing issues of the day.

Nations are walking blindly into the adoption of AI. In the United States, the government has no major department investigating or regulating the technology, and the White House’s AI task force is focused narrowly on the technology’s economic, military and security benefits. In China, the government has supercharged investment in the technology and has begun a pilot programme in schools to prepare Chinese children to work with AI. What’s missing from that curriculum? Any discussion of ethics or risks.

Debates about what it is to be human have filled the cloistered halls of universities for centuries; now governments must recognise the cost of ignoring the ethics of artificial intelligence, and take steps to ensure that the technology develops in a safe and responsible way.

As it stands, AI’s current capacities pose little threat to humanity. We are largely safe because the rules of behaviour governing artificial intelligence, that is, the processes by which AIs make decisions, are written by human programmers. There is little that artificial intelligence can do that hasn’t been set for it in some way by a programmer. A machine may be programmed to harm humans, but it cannot yet make that decision on its own. A technology called the neuromorphic chip threatens to change this.

The fundamental nature of machines will change when they no longer derive their decision-making processes from a programmer but instead learn how to behave, adapting to their environment much as human beings grow and have their reasoning shaped by experience. A minimal sketch of the difference follows.
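To make that distinction concrete, here is a minimal illustrative sketch in Python (all names and numbers are invented for illustration, not drawn from any real system). The first agent follows a rule its programmer wrote; the second, a tiny epsilon-greedy learner, arrives at its behaviour purely from feedback.

```python
import random

# A programmed agent: its decision rule is fixed by a human in advance.
def programmed_agent(sensor_reading: float) -> str:
    # The threshold 0.5 was chosen by the programmer, not by the machine.
    return "act" if sensor_reading > 0.5 else "wait"

# An adaptive agent: a tiny epsilon-greedy learner whose behaviour is
# shaped entirely by feedback from its environment.
class AdaptiveAgent:
    def __init__(self, actions, epsilon=0.1):
        self.values = {a: 0.0 for a in actions}  # estimated reward per action
        self.counts = {a: 0 for a in actions}
        self.epsilon = epsilon

    def choose(self) -> str:
        if random.random() < self.epsilon:            # occasionally explore
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)  # otherwise exploit

    def learn(self, action: str, reward: float) -> None:
        # Incremental averaging: the agent's preferences are rewritten
        # by experience, not by a programmer.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

agent = AdaptiveAgent(actions=["act", "wait"])
for _ in range(1000):
    a = agent.choose()
    reward = 1.0 if a == "act" else 0.0  # a stand-in environment
    agent.learn(a, reward)
print(agent.values)  # the preference for "act" emerged from feedback alone
```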

Neuromorphic chips would do exactly this, replicating the brain’s neural architecture and the way neurons communicate across intricate webs of connections. They would allow AIs to process information in a nonlinear, event-driven way, in marked contrast to traditional binary programming. These chips would, in effect, make machines ‘think’ somewhat like us.
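For a sense of what this means in practice, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the kind of spiking unit neuromorphic chips implement in silicon. The function and its parameters are illustrative assumptions, not the programming model of TrueNorth, SpiNNaker or any other real chip.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron; return spike times."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input...
        v += dt * (-(v - v_rest) + i_in) / tau
        # ...and the neuron communicates only when it crosses threshold:
        if v >= v_thresh:
            spikes.append(t * dt)  # emit a spike: an event, not a clocked bit
            v = v_reset
    return spikes

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 2.0, size=200)  # noisy input drive
print(simulate_lif(current))  # sparse, asynchronous spike times
```

Unlike a conventional program stepping through binary instructions, output here is a sparse stream of events whose timing carries the information, which is what lets neuromorphic hardware run vast numbers of such units in parallel at low power.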

Today many organisations are working on neuromorphic technology, from IBM’s TrueNorth chip to SpiNNaker, a machine built by the Advanced Processor Technologies (APT) group at the University of Manchester that simulates spiking neural networks in real time. The SpiNNaker project began in 2006 with chips carrying just 18 processor cores each; last year the machine was fitted with its millionth core. The team’s aim is to model up to a billion biological neurons in real time, roughly 1% of the human brain’s 86 billion or so. Given the current rate of progress, it looks like they will get there.

Artificial intelligence is quickly becoming integral to whole sectors: the financial industry, where algorithms can do the work of traders; policing, where AI powers facial recognition in video surveillance; and medicine, where AI is helping to identify new drugs faster than human researchers can. We cannot predict the exact evolution of these technologies, but it is abundantly clear that without ethical oversight, governments will be caught unawares.

One country leading the way is France, which has established a national strategy for AI led by the mathematician, Fields Medallist and MP Cédric Villani. In an interview earlier this year, Villani said: “We must valorise our research, define our industrial priorities, work on the ethical and legal framework and on AI training.”

But there is too little international cooperation. In June of this year, members of the OECD, including the United States, signed up to new AI principles calling for ethical commitments in developing the technology. However, the OECD’s principles are not legally binding, and the time for statements of intent has passed. At the United Nations General Assembly (UNGA) in September, where world leaders discussed topics such as climate change and health coverage, artificial intelligence and the ethics and risks surrounding it did not even feature on the agenda.

We now need programmes of ethical oversight to evaluate and regulate AI. One of the World Economic Forum’s ‘platforms’ is titled ‘Shaping the Future of Technology Governance’. We need more such bold initiatives, more frequent discussions and, crucially, for the interest and urgency on display at such high-level private gatherings to be reflected in the sphere of international politics.

As neuromorphic chips become more widespread and AI becomes embedded in our day-to-day lives, governments can no longer treat the ethics and risks of artificial intelligence as unconnected to its applications. Even by next year’s UNGA, it may be too late to mitigate the risks of AI’s growth.
