Meta Identifies Networks Pushing Deceptive AI-Generated Content
NEW YORK, May 28, 2024 – Meta (META.O) revealed in a quarterly security report that it had identified and removed "likely AI-generated" content used deceptively on its Facebook and Instagram platforms. The content included comments praising Israel's handling of the Gaza war, posted beneath posts from global news organizations and U.S. lawmakers. The accounts behind it, posing as Jewish students, African Americans, and other concerned citizens, primarily targeted audiences in the United States and Canada. Meta attributed the campaign to Tel Aviv-based political marketing firm STOIC, which has yet to respond to the allegations.
Significance and Concerns
While AI-generated profile photos have been part of influence operations since 2019, this report marks the first time Meta has disclosed the use of text-based generative AI in such operations since the technology's rise in late 2022. Researchers worry that generative AI, capable of producing human-like text, images, and audio quickly and cheaply, could make disinformation campaigns more effective and potentially influence elections.
Meta's head of threat investigations, Mike Dvilyanski, said that while generative AI may allow operators to produce content faster and in greater volume, it has not significantly hindered Meta's ability to detect and disrupt these networks. Despite the concerns, Meta has yet to see AI-generated images of politicians realistic enough to be mistaken for authentic photos.
Disrupted Operations
Meta highlighted six covert influence operations it disrupted in the first quarter, including the STOIC network and an Iran-based network focusing on the Israel-Hamas conflict, though no generative AI use was identified in the latter.
Context and Future Challenges
Tech giants like Meta continue to grapple with potential AI misuse, especially around elections. Image generators from companies such as OpenAI and Microsoft have been observed producing voting-related disinformation despite policies prohibiting such content. The industry has promoted digital labeling systems that mark AI-generated content at the point of creation, but researchers question their effectiveness.
Meta faces significant tests with upcoming elections in the European Union in June and the United States in November, challenging its defenses against AI-driven disinformation.