
The Guardian - Meta closes nearly 4,800 fake accounts in China that tried to polarize US voters

November 30, 2023   6 min   1077 words

This report reveals that internet users in China created nearly 4,800 fake social media accounts in an attempt to divide the United States by inflaming political positions there. The operation shows how America's foreign adversaries exploit social media platforms to sow chaos and distrust. The fake accounts posed as Americans and reshared political content from platforms such as Twitter; rather than fabricating false information, they amplified both liberal and conservative content to exaggerate partisan divisions and deepen polarization. The exposure highlights the threat that online disinformation poses to elections in many countries next year. Meta shut the fake accounts down, but critics point out that the platform should pay more attention to the misinformation already on its site, such as the "rigged election" ads it left unaddressed around the 2020 US election. Ahead of the coming elections, social media platforms also need to take their responsibilities in the public sphere more seriously rather than focusing only on fake accounts. The episode is a reminder to stay alert to external threats, while regulatory backing is also needed to ensure platforms actively block potential threats to democracy.

Someone in China created thousands of fake Facebook and Instagram accounts designed to impersonate Americans and used them to spread polarizing political content in an apparent effort to divide the US ahead of next year’s elections, Meta said on Thursday.

The network of nearly 4,800 fake accounts was attempting to build an audience when it was identified and eliminated by the tech company, which owns Facebook and Instagram. The accounts sported fake photos, names and locations as a way to appear like everyday American Facebook users weighing in on political issues.

Instead of spreading fake content as other networks have done, the accounts were used to reshare posts from Twitter/X that were created by politicians, news outlets and others. The interconnected accounts pulled content from both liberal and conservative sources, an indication that the network's goal was not to support one side or the other but to exaggerate partisan divisions and further inflame polarization.

The newly identified network shows how US foreign adversaries exploit US-based tech platforms to sow discord and distrust, and it hints at the serious threats posed by online disinformation next year, when national elections will occur in the US, India, Mexico, Ukraine, Pakistan, Taiwan and other nations.

“These networks still struggle to build audiences, but they’re a warning,” said Ben Nimmo, who leads investigations into inauthentic behavior on Meta’s platforms. “Foreign threat actors are attempting to reach people across the internet ahead of next year’s elections, and we need to remain alert.”

Meta Platforms Inc, based in Menlo Park, California, did not publicly link the Chinese network to the Chinese government, but it did determine the network originated in that country. The content spread by the accounts broadly complements other Chinese government propaganda and disinformation that has sought to inflate partisan and ideological divisions within the US.

To appear more like normal Facebook accounts, the network would sometimes post about fashion or pets. Earlier this year, some of the accounts abruptly replaced their American-sounding usernames and profile pictures with new ones suggesting they lived in India. The accounts then began spreading pro-Chinese content about Tibet and India, reflecting how fake networks can be redirected to focus on new targets.

Meta also released a report on Wednesday evaluating the risk that foreign adversaries including Iran, China and Russia would use social media to interfere in elections. The report noted that Russia’s recent disinformation efforts have focused not on the US but on its war against Ukraine, using state media propaganda and misinformation in an effort to undermine support for the invaded nation.

Nimmo, Meta’s chief investigator, said turning opinion against Ukraine will probably be the focus of any disinformation Russia seeks to inject into US political debate ahead of next year’s election.

“This is important ahead of 2024,” Nimmo said. “As the war continues, we should especially expect to see Russian attempts to target election-related debates and candidates that focus on support for Ukraine.”

Meta often points to its efforts to shut down fake social media networks as evidence of its commitment to protecting election integrity and democracy. But critics say the platform’s focus on fake accounts distracts from its failure to address its responsibility for the misinformation already on its site that has contributed to polarization and distrust.

For instance, Meta will accept paid advertisements on its site claiming the 2020 US election was rigged or stolen, amplifying the lies of Donald Trump and other Republicans whose claims about election irregularities have been repeatedly debunked. Federal and state election officials and the former US president's own attorney general have said there is no credible evidence that the presidential election, which Trump lost to Joe Biden, was tainted.

When asked about its ad policy, the company said it was focusing on future elections, not ones from the past, and would reject ads that cast unfounded doubt on upcoming contests.

And while Meta has announced a new artificial intelligence policy that will require political ads to bear a disclaimer if they contain AI-generated content, the company has allowed other altered videos that were created using more conventional programs to remain on its platform, including a digitally edited video of Biden that claims he is a pedophile.

“This is a company that cannot be taken seriously and that cannot be trusted,” said Zamaan Qureshi, a policy adviser at the Real Facebook Oversight Board, an organization of civil rights leaders and tech experts who have been critical of Meta’s approach to disinformation and hate speech. “Watch what Meta does, not what they say.”

Meta executives discussed the network’s activities during a conference call with reporters on Wednesday, the day after the tech giant announced its policies for the upcoming election year – most of which were put in place for prior elections.

But 2024 poses new challenges, according to experts who study the link between social media and disinformation. Not only will many large countries hold national elections, but the emergence of sophisticated AI programs means it’s easier than ever to create lifelike audio and video that could mislead voters.

“Platforms still are not taking their role in the public sphere seriously,” said Jennifer Stromer-Galley, a Syracuse University professor who studies digital media.

Stromer-Galley called Meta’s election plans “modest” but noted it stood in stark contrast to the “wild west” of X. Since buying the X platform, then called Twitter, Elon Musk has eliminated teams focused on content moderation, welcomed back many users previously banned for hate speech and used the site to spread conspiracy theories.

Democrats and Republicans have called for laws addressing algorithmic recommendations, misinformation, deepfakes and hate speech, but there’s little chance of any significant regulations passing ahead of the 2024 election. That means it will fall to the platforms to voluntarily police themselves.

Meta’s efforts to protect the election so far are “a horrible preview of what we can expect in 2024”, according to Kyle Morse, deputy executive director of the Tech Oversight Project, a non-profit that supports new federal regulations for social media. “Congress and the administration need to act now to ensure that Meta, TikTok, Google, X, Rumble and other social media platforms are not actively aiding and abetting foreign and domestic actors who are openly undermining our democracy.”

Many of the fake accounts identified by Meta this week also had nearly identical accounts on X, where some of them regularly retweeted Musk’s posts. Those accounts remain active on X. A message seeking comment from the platform was not returned.