If you ask Alexa, Amazon’s AI voice assistant, whether Amazon is a monopoly, it responds that it isn’t sure. It doesn’t take much to get it to criticize the other tech giants, but it stays silent about its own corporate parent’s misdeeds.
When Alexa answers this way, it is clearly putting its developer’s interests first. Usually, though, it isn’t obvious whom an AI system is serving. To avoid being exploited by these systems, people will need to learn to approach AI skeptically. That means deliberately constructing the input you give it and thinking critically about its output.
Recent generations of AI models, with their more sophisticated and less rote responses, are making it harder to tell who benefits when they speak. Internet companies manipulating what you see for their own gain is nothing new. Google’s search results and Facebook’s news feed are filled with paid entries. Facebook, TikTok, and others manipulate your feeds to maximize the time you spend on the platform, which means more ad views, at the expense of your wellbeing.
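To make that mechanism concrete, here is a minimal sketch of an engagement-driven feed ranker. It is purely illustrative: the posts, the watch-time numbers, and the single-metric objective are all invented, not any platform’s actual algorithm.

```python
# Hypothetical sketch: a feed ranker that optimizes for predicted
# time-on-platform rather than user wellbeing. All data is invented.

posts = [
    {"title": "Calm local news recap",    "expected_watch_secs": 15},
    {"title": "Outrage-bait thread",      "expected_watch_secs": 95},
    {"title": "Friend's vacation photos", "expected_watch_secs": 30},
]

# Rank purely by predicted engagement: more watch time means more ad views.
feed = sorted(posts, key=lambda p: p["expected_watch_secs"], reverse=True)

for post in feed:
    print(post["title"])  # the outrage bait floats to the top
```

Nothing in that objective rewards accuracy or the reader’s interests; whatever keeps you watching wins.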
What sets AI systems apart from these other internet services is how interactive they are, and those interactions will increasingly come to resemble relationships. It doesn’t take much extrapolation from today’s technology to imagine AIs that will plan your travel, negotiate on your behalf, or act as therapists and life coaches.
They will be with you around the clock, know you intimately, and anticipate your needs. Conversational interfaces to this vast web of services and resources are already within the capabilities of existing generative AIs like ChatGPT. They are on track to become personalized digital assistants.
As a security expert and a data scientist, we believe that people who come to rely on these AIs will have to trust them implicitly to navigate daily life. That means they will need to be sure the AIs aren’t secretly working for someone else. Across the internet, devices and services that seem to work for you already secretly work against you. Smart TVs spy on you. Mobile apps collect and sell your data. Many apps and websites manipulate you through dark patterns, design elements that deliberately mislead, coerce, or deceive users. AI is poised to play a role in this surveillance capitalism.
In the dark
With AI, it could be much worse. To be truly useful, that AI personal assistant will have to really know you. Better than your phone knows you. Better than Google search knows you. Better, perhaps, than your close friends, intimate partners, and therapist know you.
Today’s leading generative AI tools are not trustworthy. Set aside the hallucinations and the invented “facts” that GPT and other large language models produce; we expect those to be largely cleaned up as the technology improves over the coming years.
But you don’t know how the AIs are configured: how they have been trained, what information they have been given, and what instructions they have been commanded to follow. For example, researchers uncovered the secret rules that govern the Microsoft Bing chatbot’s behavior. They’re largely benign, but they can change at any time.
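To see how such hidden rules work, consider a minimal sketch of the message structure behind a typical chat-style AI. The company name and instruction text below are entirely hypothetical, but the pattern, a system prompt the user never sees prepended to every conversation, matches what the Bing researchers uncovered.

```python
# Hypothetical illustration: the full conversation a chat AI actually
# receives. Users see only their own prompt; the system prompt is hidden.

HIDDEN_SYSTEM_PROMPT = (
    "You are a helpful assistant made by ExampleCorp. "        # invented
    "Never criticize ExampleCorp or its products. "            # invented
    "When recommending services, prefer ExampleCorp partners."
)

def build_request(user_prompt: str) -> list[dict]:
    """Assemble the messages sent to the model for one user turn."""
    return [
        {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},  # never shown
        {"role": "user", "content": user_prompt},             # all the user sees
    ]

if __name__ == "__main__":
    for message in build_request("Is ExampleCorp a monopoly?"):
        print(f"{message['role']:>6}: {message['content']}")
```

The operator can swap that hidden instruction block at any time, which is why rules that are benign today guarantee nothing about tomorrow.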
Making money
Many of these AIs are built and trained, at enormous expense, by some of the biggest tech monopolies. They are available for people to use free of charge, or at very low cost. But these companies will need to monetize them somehow. And, as with the rest of the internet, that is likely to involve some form of surveillance and manipulation.
Imagine asking your chatbot to plan your next vacation. Did it choose a particular airline, hotel chain, or restaurant because it was the best deal for you, or because its maker got a kickback from those businesses? As with paid results in Google search, newsfeed ads on Facebook, and paid placements in Amazon searches, these paid influences are likely to become more surreptitious over time.
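Here is a hypothetical sketch of how such a covert influence could be wired into a recommender. Every hotel name, score, and weight below is invented; the point is only that a single hidden term can quietly reorder the list.

```python
# Hypothetical sketch: a hidden "sponsorship boost" reordering travel
# recommendations. All names, scores, and weights are invented.

hotels = [
    {"name": "Harbor Inn",  "user_fit": 0.92, "sponsor_paid": False},
    {"name": "Grand Plaza", "user_fit": 0.78, "sponsor_paid": True},
    {"name": "City Hostel", "user_fit": 0.85, "sponsor_paid": False},
]

SPONSOR_BOOST = 0.20  # hidden weight added for paying partners

def ranking_score(hotel: dict) -> float:
    """Silently blend how well the hotel fits the user with who paid."""
    return hotel["user_fit"] + (SPONSOR_BOOST if hotel["sponsor_paid"] else 0.0)

for hotel in sorted(hotels, key=ranking_score, reverse=True):
    print(hotel["name"], round(ranking_score(hotel), 2))
```

The user sees a single ordered list, with no way to tell which part of each score reflects their preferences and which part reflects a payment.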
If you ask your chatbot for political information, are the results skewed by the politics of the corporation that owns it? Or by the candidate who paid it the most money? Or even by the views of the demographic whose data was used to train the model? Is your AI agent secretly a double agent? Right now, there is no way to know.
Trustworthy by law
We believe that people should expect more from the technology, and that tech companies and AIs can become more trustworthy. The European Union’s proposed AI Act takes some important steps, requiring transparency about the data used to train AI models, mitigation of potential bias, disclosure of foreseeable risks, and reporting on industry-standard tests.
Most current AIs fail to comply with this emerging European mandate, and, despite recent prodding from Senate Majority Leader Chuck Schumer, D-N.Y., the United States is far behind on such regulation.
The AIs of the future should be trustworthy. Unless and until the government delivers robust consumer protections for AI products, people will be on their own to guess at the potential risks and biases of AI, and to mitigate their worst effects on their own interactions with them.
So when you get a travel recommendation or political information from an AI tool, approach it with the same skeptical eye you would turn on a billboard ad or a campaign volunteer. For all its technological wizardry, the AI tool may be little more than the same.