
Meta’s AI Marketing Tools not Available for Politicians

Facebook has a long history of using artificial intelligence and machine learning to support its human moderation teams in reducing false content on the platform. At the beginning of October, the company began applying that machine learning expertise to its advertising business, launching an experimental suite of generative AI tools that can generate backgrounds, adjust images, and create captions for an advertiser’s video content. Ahead of what is expected to be a harsh and contentious national election cycle, however, Meta will expressly not make those tools available to political advertisers.

Although much of the social media ecosystem supports Meta’s decision to prohibit the use of generative AI in political advertising, the company has not yet made the policy publicly known through updates to its advertising guidelines. Political advertisements are prohibited outright on TikTok’s and Snap’s networks, while Google uses a keyword blacklist to keep its generative AI advertising tools from delving into political discourse.

Meta’s rule does carry a range of exceptions. The company’s ban on misleading AI-generated video applies to all content, including organic, unpaid posts, but parody and satire are exempt. Those exceptions are currently under review by the company’s independent Oversight Board, following a case in which Meta left up an “altered” video of President Biden because, according to the company, it was not generated by AI.

In July, leading Silicon Valley AI firms, including Facebook, agreed to voluntary commitments brokered by the White House to adopt technical and policy safeguards in the development of their future generative AI systems. These include building a digital watermarking system to authenticate official content and clearly flag material that is AI-generated, expanding adversarial machine learning (also known as red-teaming) efforts to root out bad model behavior, and sharing trust and safety information with the government and industry peers.
