The Trust Problem in AI

With tens of billions of dollars being spent on AI last year and major firms like OpenAI searching for trillions more, the tech sector is scrambling to build more generative AI models. The intention is to continuously show improved performance and, in the process, reduce the difference between what AI can achieve and what humans can do.

But while considering these new tools and systems, there is another gap that should be given equal, if not greater, consideration: the AI trust gap. When someone is willing to trust a computer to complete a task that would have otherwise been assigned to trained humans, this gap closes. If we want AI to be broadly used, we must invest in understanding this second, underappreciated gap and in finding solutions.

The aggregate of ongoing hazards—both real and perceived—related to artificial intelligence can be seen as the AI trust gap; the severity of these risks varies according on the application. They address both generative AI and predictive machine learning. Businesses are concerned about a number of short- to long-term difficulties, while consumers are raising worries about artificial intelligence, according to the Federal Trade Commission. Examine the following 12 AI hazards, which are among the most often mentioned in both groups:

  • Disinformation
  • Safety and security
  • The black box problem
  • Ethical concerns
  • Bias
  • Instability
  • Hallucinations in LLMs
  • Unknown unknowns
  • Job loss and social inequalities
  • Environmental impact
  • Industry concentration
  • State overreach

When combined, these hazards increase the general public’s mistrust and business worries regarding the use of AI. Thus, adoption is discouraged. For example, radiologists are hesitant to adopt AI when the technology’s opaque nature makes it difficult to comprehend how the algorithm decides on medical image segmentation, survival analysis, and prediction. For radiologists to feel that they are fulfilling their professional responsibilities in a responsible manner, it is imperative that there is some degree of clarity regarding the algorithmic decision-making process. However, that transparency is still a long way off. You should be concerned about a number of concerns, not simply the black box issue. Even if we improve at lowering the risks, we should expect the AI trust gap to remain constant due to comparable problems that arise in a variety of application scenarios and industries.

Three significant ramifications flow from this. First, adopters of AI—users at home and in enterprises, decision-makers in organizations, and policymakers—must bridge a continuous trust gap, regardless of how far we advance in enhancing AI’s performance. Secondly, businesses must make an investment to comprehend the dangers that are most accountable for the trust gap that is preventing users from using their applications and then take steps to reduce those risks. Third, combining people with AI will be the most important risk-management strategy. This means that humans will always be required to bridge the gap, and they must have the necessary training.

Examine the twelve hazards. There are four inquiries for each of them: How do they erode confidence in artificial intelligence? What are the alternatives for reducing or managing the risk, whether they are industry-initiated or mandated by regulators? Why do the available solutions only provide an incomplete solution, thereby enabling the risk to continue? What are the ramifications and lessons learned? When taken as a whole, these clarify the AI trust gap, explain why it is likely to exist, and suggest solutions.

Disinformation

Disinformation on the internet is not new, but AI technologies have made it more powerful. Elections in Bangladesh (where an opposition leader was portrayed in a bikini) and Moldova (where a phony video of the president endorsing a pro-Russian party circulated prior to the election) have been accompanied by AI-assisted deepfakes, which have given voters cause to mistrust crucial information required for democracies to function. 85% of internet users were concerned as of late 2023 about their incapacity to identify fraudulent content online, which is a significant issue considering the global elections that will take place in 2024.

Due to their drastic reductions in the number of human content moderators—the most effective line of defense against misinformation—social media corporations are mostly failing to confront the danger. For instance, as part of their “year of efficiency” in 2023, the biggest platform business, Meta, canceled contracts with outside content moderators, significantly lowered content moderation teams, and shelved a fact-checking technology that was in development. As a reminder that social media recommendation algorithms are just another type of artificial intelligence that can be exploited, the platform is currently grappling with a deluge of strange advertising-driven AI generated material. Meta’s retreats were echoed at X with even more extreme dismantling, and at YouTube with the axing of its content moderation crew. (Even while Tik Tok’s content moderation teams haven’t seen the same degree of layoffs, the company still needs to guard against new threats: anxieties about user data privacy and security being breached.) It is far from sufficient to have algorithmic or automatic content moderation provided in place of human moderation.

Regulators are taking over to force firms to take action in the event that no mitigation is being initiated by the company. Legislation aimed at combating deepfakes and misinformation linked to elections has been proposed in several US jurisdictions. As mandated by the EU’s recently implemented AI rule, the White House has an executive order requiring the “watermarking,” or clear labeling, of anything created by AI. In other cases, the government of India holds social media companies responsible for content that has been reported as harmful but has not been removed.

Though platforms might only direct their limited moderation resources to the markets under the greatest regulatory pressure, these well-meaning risk-management strategies could have unexpected outcomes instead of encouraging greater moderation. Despite the fact that these regions have many more users, the U.S. or the EU will receive an overallocation at the expense of the rest of the globe, especially emerging nations where commercial and regulatory demands are lower. There’s proof that this was going on even before the most recent cuts: Even though 90% of Facebook users are not from the United States, according to research by The Wall Street Journal, 87% of Facebook’s content moderation time was devoted to posts from the country in 2020.

The lesson here is that we need to face the harsh fact that it will be difficult to completely eradicate misinformation. It may even become more prevalent worldwide, and given the increasing sophistication of AI-assisted deepfakes, more deceiving, depending on your geographical location. Human awareness and instruction in “digital hygiene” will be crucial.

Safety and security

The forecast for AI safety and security threats is sobering. In the largest-ever study of AI and machine learning specialists, between 37.8% and 51.4% of all respondents assigned at least a 10% likelihood to possibilities as terrible as human extinction, with 48% of net optimists putting that probability at 5%. It’s difficult to imagine such negative assessments being accepted for any other technology currently in widespread use. There are, of course, less catastrophic risks: hostile use cases for AI tools in cyberattacks, being “jailbroken” to execute illegal commands, and so on. In the same survey, issues like the possibility of AI becoming jailbroken were assigned a reasonably high chance – the majority of respondents ranked it “likely” or “very likely” — even in 2043.

Regulations are essential for reducing these hazards once more. In order to detect vulnerabilities, generative AI models above a specific risk threshold are required under the EU laws and the White House executive order to publish the findings of simulated “red-team” attacks. However, it’s unclear if imposing such restrictions will actually reduce risk. Even worse, some policies, like those requiring red-teaming, can be promoting “security theater.” Few guidelines exist for infallible red-teaming techniques and standards, and while regulations compel some transparency, it is difficult to verify that such efforts were thorough. Unless their products are part of a wider AI ecosystem, startups are unlikely to have the resources to perform this work internally or to vouch for externally sourced tests. Moreover, the initial financial burden may prevent businesses from doing this work altogether.

The most important lesson is that, contrary to what many experts think, AI safety and security threats cannot be completely eliminated in the near future. For the most crucial and life-and-death applications, such as national security and health care, it will be crucial to keep humans involved in the process to ensure that decisions are never fully automated. For instance, in extremely sensitive negotiations between nuclear-armed nations, agreements would have to ensure that decisions about launching tests or missiles remain in the hands of humans. Awareness and preparation will therefore be crucial.

The black box problem

Building trust requires transparency. When it comes to AI, that can mean giving people alerts when they are engaging with a model, explaining how it generated a specific result, and being aware of the information stakeholders require and providing it in a way that they can comprehend. The EU AI Act and other important regulations will enforce specific transparency standards, but the ongoing problem is that AI companies have incentives to minimize transparency in order to protect their intellectual property, maintain competitive advantage, guard against malicious hacks, and lower their exposure to copyright lawsuits. Because of this, artificial intelligence is frequently a “black box” whose purpose is unclear.

One of the features that makes open-source AI development appealing is its industry-led approach to transparency. However, this also has its limitations. Experts are unable to agree on what exactly qualifies as “open-source” because AI models require much too many inputs, including training data, code that preprocesses it and controls the training process, the model architecture, etc. Businesses create their own definitions using this uncertainty as a pretext to keep the training data, which includes “synthetic” data, hidden from the general public. Even businesses that support open-source models, like Meta, are gradually becoming less “open”; their Llama 2 model is significantly less transparent than Llama 1. Furthermore, the transparency score of the Stanford Center for Research on Foundation Models gives even the industry standard, Llama 2, a mere 54 out of 100. Businesses like IBM have offered their “factsheets” as tracking and transparency tools, but unaudited self-disclosures aren’t the best ways to establish credibility.

Regulations are anticipated to be a factor in reducing the dangers associated with black box systems once more. A legitimate regulatory enforceability, standards, auditing criteria, and reliable auditors would be necessary for regulations to force businesses to submit to external audits of their AI models and publicize the findings. A recent study conducted by Cornell University revealed that a New York statute that mandates businesses who use automated employment decision tools to audit them for gender and racial prejudice is ineffective. Although the AI Risk Management Framework from the National Institute of Standards and Technology exists, it is currently ineffective due to a lack of certification, standards, or an auditing methodology.

The lesson here is that although transparency will advance, AI’s “black box” issue will persist. To facilitate adoption, each application area will need to create activities focused on increasing transparency. For instance, as previously mentioned, “interpretability” of AI — that is, the capacity to comprehend the rationale behind an algorithm’s decision — with respect to radiological applications is an important and developing area of study to promote clinical practice and adoption and helps to foster confidence among radiologists.

Ethical concerns

The majority of users concur that it is imperative to make sure algorithms transcend mathematics and data and are accompanied with guidelines guaranteeing ethical norms, such as respecting human rights and values regardless of what the mathematics says. There have been multiple attempts to encourage AI developers to unite under widely recognized ethical standards. The Asilomar AI principles, for instance, support a number of concepts in the development and application of AI models, including “human values,” “liberty and privacy,” and “common good.” However, these kinds of initiatives face three challenges.

To start, not everyone adheres to moral standards. The two leading AI countries, the United States and China, have different ideas on what “liberty and privacy” means. In the United States, free speech is protected, while in China, free speech is seen as a threat to the “common good.” Pro-life and pro-choice organizations disagree on “human values,” even in the fiercely divisive and culture-war-ridden United States. AI should be decolonized, according to some, while others want it to be anti-“woke.”

Furthermore, the capabilities of apolitical transnational bodies are restricted. Consistent with its mandate, the UN has ethical AI standards, and UNESCO has convened businesses to pledge to develop more ethical AI. The UN has little influence because the private sector develops AI the majority of the time.

Third, the organizational incentives of AI businesses heighten the conflicts between ethics and other factors. For instance, political diversity in ethical monitoring is necessary given the left-leaning nature of the workforce. Practically speaking, this is challenging: Google’s attempts to form an AI ethics advisory group were derailed when staff members objected to the choice of the president of the right-wing Heritage Foundation. The much-discussed boardroom drama at OpenAI involving Sam Altman, the abortive attempt to extract DeepMind from Google’s traditional business structure following its acquisition, and the collapse of Stability AI’s leadership serve as further frequent reminders of the struggle for supremacy among the leading AI firms: time and time again, profit margins triumph over AI for the sake of the “common good.”

The takeaway from this is that ethical conundrums are situation-specific and inevitable in AI systems; they become especially critical when they lead to conclusions that are harmful or exclusive. Human interaction will be crucial, including with groups constituted as oversight or governance boards and groups of outside observers.

Bias concerns

AI is prone to bias due to a variety of factors, including usage context, individual training participant restrictions, and biased or incomplete training data. When artificial intelligence (AI) is employed in crucial applications, such as when lenders are discovered to be even more likely to deny house loans to people of color, it can undermine public faith in the models. There are a number of mitigating measures that can be implemented, including expanding the AI talent pool, increasing the diversity of data sources, training AI developers to identify bias, applying fairness requirements to AI models, and more.

AI might never be consistently bias-free despite these corrective actions for a number of reasons. AI technologies can exhibit unexpected biases as a result of their limited exposure to real-world data, as they are taught in controlled contexts and may come across novel application environments. Moreover, the procedures for examining the existence of prejudice are challenging. It can be challenging to come to an agreement on when a decision is truly impartial because definitions of bias and unfairness can differ greatly among environments as diverse as the West, China, and India. For example, the concept of “fairness” lends itself to 21 alternative definitions. The creation of inaccurate ahistorical images by Google and Meta provides a clear example of the risks involved in even “unlearning” biases, since it may generate new, unforeseen associations that the AI model learns, worsening the situation overall. Additionally, artificial intelligence models incur the risk of running out of fresh, high-quality information to train on and of neutralizing biases resulting from scarce or subpar samples.

The lesson here is that biases and constraints in the form of data or trainers working within human bounds will inevitably be incorporated into AI models during training. It will be crucial to use common sense, alertness, and quick corrective action before they cause harm.

Instability concerns

When input is modified slightly but not meaningfully in specific circumstances, AI judgments may alter significantly, resulting in errors and minor to major variations in results. When an AI-assisted automobile drives past a stop sign due to a little impediment, for example, autonomous vehicles can be trusted for many tasks but occasionally they do not perform as intended. Academic research on the “stability” of AI has discovered that, aside from basic issues, it is mathematically impossible to create AI algorithms that are universally stable. This is in contrast to the fact that AI models are continuously improved upon by the addition of training data, the improvement of testing methods, and ongoing machine learning. Thus, when there is even a small amount of noise in the incoming data, we can never be certain that AI is making wise conclusions.

The takeaway from this is that, outside of training datasets, artificial intelligence systems are subject to minute variations. In these cases, having human workers on hand to make manual corrections or overrides will be essential.

Hallucinations in LLMs

AI hallucinations have led to models acting strangely, such as confessing to be in love with their users or saying they have spied on staff members of the company. Numerous AI manufacturers have created a variety of mitigating methods. IBM suggests, for instance, employing consistent output through the use of data templates, defining explicit limits on the use of the AI model, utilizing high-quality training data, and conducting ongoing testing and improvement. There is always a potential that hallucinations will manifest, regardless of the steps taken, according to research that indicates a statistical lower-bound on hallucination frequencies. As is also rational for probabilistic models, hallucination incidences can be predicted to decrease but are never completely eradicated, irrespective of the quality of the model design or dataset.

The lesson is to never rely on or use any generative AI model output in public, especially when it comes to high-stakes situations like legal documentation, without having it thoroughly examined by qualified experts. This can help prevent scenarios like the one in which ChatGPT, in the process of producing a legal brief, invented six fictitious court cases complete with fabricated statements and citations.

Unknown unknowns

Artificial intelligence is capable of actions that humans cannot predict. In addition to having blind spots and making mistakes that developers are unable to decipher, models may include training data that is out of date for the environment in which they are being used. Though they are capable of accurately identifying objects, image recognition models can make utterly incorrect decisions. Although training the models on new datasets continuously reduces the likelihood, there will always be information beyond the model’s field of view, and the hazards resulting from these missing parts compound and can take unexpected turns.

The lesson is that blindly applying AI, which has blind spots of its own, is a surefire way to go wrong; human judgment must always be included in decision-making processes, and application context must be understood.

Job losses and social inequalities

Wage increases should occur quickly in economies with increasing productivity. Though estimates of AI’s productivity impact differ, McKinsey has calculated an optimistic 3.3% annual increase in productivity by 2040 as a result of generative AI. AI will quadruple everyone’s productivity, according to former Google CEO Eric Schmidt. The head of the US Federal Reserve, Jerome Powell, is less optimistic about the immediate effects of AI on productivity.

Looking to history is a logical strategy to gain a better understanding of the influence. Unfortunately, history doesn’t offer much advice in this situation. It’s actually the case that as early digital technologies were introduced, U.S. worker productivity growth decreased. Even when it doubled after the World Wide Web’s inception in the late 1990s, the increase was fleeting. There were further surges in 2009 during the Great Recession, following the pandemic’s start in 2020, and up to 4.7% in the third quarter of 2023—too early to be linked to AI. This provides insufficient data to make an informed opinion about how AI will affect wages and productivity across economies.

Individual companies, on the other hand, are more optimistic, which could result in job losses as AI replaces human labor. However, that would imply that AI would raise the earnings of those in employment while causing wage losses for those whose jobs are eliminated, hence exacerbating social inequality. In response to these concerns, some researchers believe that generative AI can lessen inequality by providing lower-skilled workers with opportunities for upward mobility. More relevant historical data is that wage inequality tends to rise in nations where businesses have already resorted to automation; Black and Hispanic workers are overrepresented in the 30 occupations most exposed to automation, and underrepresented in the 30 occupations least exposed; women are expected to be disproportionately impacted by automation, with 79% of working women in occupations vulnerable to labor displacement by generative AI, compared to 58% of working men.

The overarching lesson is that the adoption of AI is hampered by the prospect of job losses and rising societal inequities. Even acknowledging AI adoption can be troublesome. UPS’s greatest layoff in history was caused by AI replacing humans, according to the CEO on an earnings call, although a spokeswoman later refuted any link between layoffs and AI. Clearly, the CEO wanted to indicate to investors that the company was implementing AI to gain from cost savings from reduced personnel, but it also had a negative public relations impact; this suggests that the impact on jobs provides friction in fully embracing AI. With various stakeholder concerns to consider, firms will hopefully adopt AI with caution.

Environmental impact

AI’s contribution of global data center power consumption is predicted to rise to 10% by 2025. By 2027, AI’s utilization of data centers might save the equivalent of half of the water utilized in the United Kingdom each year. Artificial intelligence requires increasingly powerful semiconductors, which contribute to one of the fastest-growing waste streams. Neither of these trends is showing signs of slowing. Increased usage of generative AI, particularly for image production, will exacerbate the situation. According to one study, 1,000 photos created using Stable Diffusion XL produce as much carbon dioxide as a gas-powered automobile driving 4.1 miles.

One key concern is that AI-aided applications may replace other environmentally costly activities, hence reducing emissions and resource use. Nonetheless, it is important to be mindful of its implications. Specific efforts, such as the Artificial Intelligence Environmental Impacts Act of 2024, which was sponsored in the United States Senate, are commendable but will be difficult to implement due to the lack of standards for measuring and validating AI-related emissions. Another risk-mitigation strategy is to power new data centers with renewable energy, but demand for them is expanding too quickly to be totally renewable. Even with recycling programs in place, only 13.8% of reported electronic waste is formally collected and recycled, with an estimated 16% falling outside the formal system in high and upper-middle income nations. For the near future, AI’s detrimental environmental impact is unavoidable.

The lesson here is that, just as some companies, such as the fossil fuel industry or gas-guzzling vehicle makers, have lost customer trust due to their environmental impact, AI may face comparable concerns. Human judgment is required to determine if the benefits of, instance, adding AI advances into products with acceptable enough alternatives outweigh the environmental costs.

Industry concentration

Despite the tremendous priority placed on AI by political leaders, its development is driven by industry. The reasons are structural: AI development necessitates a number of crucial inputs, including talent, data, processing capacity, and finance, all of which are more readily available in the private sector. Furthermore, these resources are concentrated in a few companies.

The AI value chain consists of two primary concentration points. A few active inventors creating AI models rely on a few huge corporations for important inputs. Nvidia, Salesforce, Amazon, Google, and Microsoft are the most active investors in the main AI pioneers, while Meta is a major supplier of open-source models.

Aside from money, AI model makers rely on Nvidia for graphics processing units and cloud providers like Amazon and Microsoft to operate the models, while Google, Meta, and Microsoft integrate AI to defend their main products. Even with a more competitive layer of AI apps and services suited to specific use cases, the industry’s basis will undoubtedly remain centralized. Users’ suspicion of Big Tech’s control will be exacerbated as technology becomes increasingly AI-intensive.

The typical effort to alleviate industry concentration dangers, namely regulatory monitoring, has arrived late. The Federal Trade Commission has only lately begun an investigation into the growing concentration issue. Meanwhile, the trends continue: since the probe began, Microsoft, the largest investor in OpenAI, has acquired the leading team from Inflection, and Amazon has spent $2.75 billion in Anthropic. OpenAI, Inflection, and Anthropic are the three most notable AI innovators in the United States right now.

The lesson is that concentration of power in a few firms undermines trust because consumers feel trapped, are concerned about overpaying, and are concerned about their data being seized by large firms who can abuse it in other areas.

State overreach

Trends indicate that governments around the world will increasingly employ AI and related capabilities to impose control over individuals. Furthermore, the proportion of people living in political situations classified as “free” by Freedom House has decreased over the last 15 years. According to Freedom House, global internet freedoms have been declining for 13 years in a row, and AI has helped to accelerate that decline in a variety of ways, including spreading state propaganda, enabling more efficient censorship, creating citizens’ behavioral profiles, and developing predictive analytics and surveillance. Consider that at least 75 of the world’s 176 countries use AI technologies for monitoring, including 51% of advanced democracies. The potential of abuse of power increase when governments gain access to citizen data, particularly with the advent of digital identification systems. Concerned professionals have recommended various potential checks and balances, but they have not been broadly adopted.

The bigger lesson is that concerns about governmental overreach may lead to the rejection of AI use, even when it can be helpful to society if handled with caution. Testing citizens’ willingness to accept tradeoffs will be crucial to ensuring their trust in governments utilizing AI. Consider the use of facial recognition technology by police: towns like San Francisco have outlawed it.

 

While much of the attention has been focused on the tremendous advancements in AI performance, Americans are becoming increasingly apprehensive about its influence. Trust in AI businesses has declined worldwide, with a particularly sharp dip in the United States. Granted, many tech companies and commentators claim that you can rapidly and easily create AI trust, but let’s not deceive ourselves; a stubborn AI trust gap still exists. And it is here to stay.

Even if the trust gap narrows, it’s crucial to note that trust doesn’t always result from a quantitative or logical calculation: even a single door plug blowing out of a plane disturbs our faith in the entire aviation system, which is statistically one of the safest modes of transportation. The trust deficit will have a significant impact on adoption in highly sensitive applications such as health care, finance, transportation, and national security. Leaders should identify which of the 12 hazards are most critical to an application and track progress toward reducing the gap.

The current focus is on training AI models to become more human-like. Let us not forget to train the humans. They must learn to identify what causes the AI trust deficit, accept that it will persist, and choose how to best fill the hole. To put it another way, the industry has spent tens of billions of dollars developing artificial intelligence solutions like Microsoft Copilot. It’s time to invest in the human, too: the pilot.

Source link

Most Popular