On Wednesday, four technology companies involved in the development of artificial intelligence (AI) announced a new funding initiative to support AI safety research.
Google, Microsoft, and the AI startups Anthropic and OpenAI are backing the new programme, known as the AI Safety Fund. In a joint statement, the companies said that, together with several philanthropic partners, they have already committed more than $10 million to the initiative.
The announcement follows the establishment of the Frontier Model Forum in July 2023. The four tech companies have characterized the forum as an industry organization dedicated to ensuring the responsible and safe development of frontier AI models. The forum lists collaborating with policymakers and funding AI safety research among its main goals.
In their announcement of the AI Safety Fund, the four companies acknowledged the rapid pace at which AI has developed over the past year. As recently as 2019, one of OpenAI’s early chatbots could not reliably count to ten; today’s models respond quickly to text, image, and audio prompts, as multiple tech experts noted in an unrelated paper on AI risks published earlier this week. Within roughly nine months of its launch in November 2022, OpenAI’s ChatGPT was reported to have an estimated 180 million users worldwide.
Given how quickly AI is developing, industry experts have repeatedly called for more safety research. Some have suggested that AI companies should hold off on developing new “frontier” AI technology until preventive measures are in place.
In announcing the AI Safety Fund, the four tech companies agreed that AI safety research “is required.” According to the announcement, the funding initiative “will support independent researchers worldwide who are affiliated with academic institutions, research institutions, and startups. The four named partners, Eric Schmidt, Jaan Tallinn, the Patrick J. McGovern Foundation, and the David and Lucile Packard Foundation, will provide initial funding pledges for the initiative.”
A spokesperson said via email that the Patrick J. McGovern Foundation, one of the largest global funders of pro-social AI, engages traditional tech companies and civil society in conversation to raise awareness of emerging opportunities and vulnerabilities. The spokesperson added that AI safety is not merely a technical outcome but a multistakeholder process, one that balances engineering safeguards with the needs and interests of communities and consumers.
The AI Safety Fund, the spokesperson wrote, brings technologists of various backgrounds together with civil society advocates to address a pressing question: how can research be expedited to create safe, efficient products that promote human welfare, while setting aside nebulous questions about existential risk?
The funding initiative launches just days before the United Kingdom (U.K.) hosts the world’s first global summit on AI safety next week. According to a spokesperson for the U.K.’s Department for Science, Innovation & Technology, officials recognize how quickly AI is developing and plan to work closely with partners to understand emerging opportunities and risks so they can be handled appropriately.