Meta identifies networks pushing deceptive content likely generated by AI - By Reuters
(Corrects fourth paragraph to reflect that this is the first disclosure of text-based generative AI use, not of generative AI use altogether)
NEW YORK (Reuters) - Meta said on Wednesday it had found "likely AI-generated" content used deceptively on its Facebook and Instagram platforms, including comments praising Israel's handling of the war in Gaza published below posts from global news organizations and U.S. lawmakers.
The social media company, in a quarterly security report, said the accounts posed as Jewish students, African Americans and other concerned citizens, targeting audiences in the United States and Canada. It attributed the campaign to Tel Aviv-based political marketing firm STOIC.
STOIC did not immediately respond to a request for comment on the allegations.
WHY IT’S IMPORTANT
While Meta has found basic profile photos generated by artificial intelligence in influence operations since 2019, the report is the first to disclose the use of text-based generative AI technology since it emerged in late 2022.
Researchers have worried that generative AI, which can quickly and cheaply produce human-like text, imagery and audio, could lead to more effective disinformation campaigns and sway elections.
In a press call, Meta security executives said they removed the Israeli campaign early and did not think novel AI technologies had impeded their ability to disrupt influence networks, which are coordinated attempts to push messages.
Executives said they had not seen such networks deploy AI-generated imagery of politicians realistic enough to be mistaken for authentic photos.
KEY QUOTE
"There are several examples across these networks of how they use likely generative AI tooling to create content. Perhaps it gives them the ability to do that quicker or to do that with more volume. But it hasn't really impacted our ability to detect them," said Meta head of threat investigations Mike Dvilyanski.
BY THE NUMBERS
The report highlighted six covert influence operations that Meta disrupted in the first quarter.
In addition to the STOIC network, Meta shut down an Iran-based network focused on the Israel-Hamas conflict, although it did not identify any use of generative AI in that campaign.
CONTEXT
Meta and other tech giants have grappled with how to address potential misuse of new AI technologies, especially in elections.
Researchers have found examples of image generators from companies including OpenAI and Microsoft producing images with voting-related disinformation, despite those companies having policies against such content.
The companies have emphasized digital labeling systems to mark AI-generated content at the time of its creation, although the tools do not work on text and researchers have doubts about their effectiveness.
WHAT’S NEXT
Meta faces key tests of its defenses with elections in the European Union in early June and in the United States in November.