
Meta identifies networks pushing deceptive content likely generated by AI - By Reuters

(Corrects fourth paragraph to reflect that this is the first disclosure of text-based generative AI use, not of generative AI use altogether)

NEW YORK (Reuters) - Meta said on Wednesday it had found "likely AI-generated" content used deceptively on its Facebook and Instagram platforms, including comments praising Israel's handling of the war in Gaza published below posts from global news organizations and U.S. lawmakers.

The social media company said in a quarterly security report that the accounts posed as Jewish students, African Americans and other concerned citizens, targeting audiences in the United States and Canada. It attributed the campaign to Tel Aviv-based political marketing firm STOIC.

STOIC did not immediately respond to a request for comment on the allegations.

WHY IT'S IMPORTANT

While Meta has found basic profile photos generated by artificial intelligence in influence operations since 2019, the report is the first to disclose the use of text-based generative AI technology since it emerged in late 2022.

Researchers have fretted that generative AI, which can quickly and cheaply produce human-like text, imagery and audio, could enable more effective disinformation campaigns and sway elections.

In a press call, Meta security executives said they removed the Israeli campaign early and did not think novel AI technologies had impeded their ability to disrupt influence networks, which are coordinated attempts to push messages.

Executives said they had not seen such networks deploying AI-generated imagery of politicians realistic enough to be confused for authentic photos.

KEY QUOTE

"There are several examples across these networks of how they use likely generative AI tooling to create content. Perhaps it gives them the ability to do that quicker or to do that with more volume. But it hasn't really impacted our ability to detect them," said Meta head of threat investigations Mike Dvilyanski.

BY THE NUMBERS

The report highlighted six covert influence operations that Meta disrupted in the first quarter.

In addition to the STOIC network, Meta shut down an Iran-based network focused on the Israel-Hamas conflict, although it did not identify any use of generative AI in that campaign.

CONTEXT

Meta and other tech giants have grappled with how to address potential misuse of new AI technologies, especially in elections.

Researchers have found examples of image generators from companies including OpenAI and Microsoft producing images containing voting-related disinformation, despite those companies having policies against such content.

The companies have emphasized digital labeling systems to mark AI-generated content at the time of its creation, although the tools do not work on text and researchers have doubts about their effectiveness.

© Reuters. FILE PHOTO: Meta AI logo is seen in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration/File Photo

WHAT'S NEXT

Meta faces key tests of its defenses with elections in the European Union in early June and in the United States in November.
