An artificial intelligence specialist is warning users against disclosing personal information, such as political views or workplace grievances, to ChatGPT.
Oxford University AI professor Mike Wooldridge says that conversing with a chatbot on a personal level or disclosing sensitive information would be “extremely unwise,” because anything shared with the bot is used to train future versions.
He goes on to say that because the technology “tells you what you want to hear,” users should not expect a balanced response to their remarks.
Wooldridge is delving into the topic of artificial intelligence in this year’s Royal Institution Christmas Lectures. He will examine the “big questions facing AI research and dispel the myths about the true operation of this revolutionary technology,” the organization says.
Among the subjects he will cover are how chatbots work and how a machine can be trained to translate between languages. He will also address the nagging question surrounding AI: can it ever truly resemble a human?
Although humans are wired to look for awareness in AI, Wooldridge said this is a futile endeavor. AI, he claimed, “has no empathy” and no sympathy.
He went on to say that the technology does none of that and, more importantly, has never experienced anything. It is essentially built to try to tell you what you want to hear; that is its entire purpose.
His sobering advice is to assume that anything you type into ChatGPT will be fed straight into future versions of the program. Nor can you really retract anything if you later realize you disclosed too much: Wooldridge says that, because of the way AI models work, it is nearly impossible to retrieve your data once it has entered the system.
A spokesman for OpenAI, the organization behind ChatGPT, said it introduced the ability to turn off chat history in April, and that conversations begun while chat history is disabled are not used to train and improve its models.
Major figures in artificial intelligence will join Wooldridge during the lecture series. According to the Royal Institution, he will also present a variety of robot friends, which will demonstrate what today’s robots can and cannot do.
Michael Faraday began the Christmas Lectures at the Royal Institution in London in 1825, with the aim of engaging and educating young people about science. First broadcast in 1936, they are the oldest science television series.