Meta said Friday it had warned US presidential campaigns to be wary after it discovered an Iran-linked hacking attempt using the WhatsApp messaging service.
The announcement is the latest warning from a tech giant about hacking threats ahead of the November election between Vice President Kamala Harris and former President Donald Trump, after Google and Microsoft earlier uncovered similar attempts attributed to Iran.
WhatsApp accounts linked to an Iranian "threat actor" sent messages pretending to be technical support for AOL, Google, Yahoo or Microsoft, according to Meta.
"This malicious activity originated in Iran and attempted to target individuals in Israel, Palestine, Iran, the United States and the UK," Meta said in a post.
"This effort appeared to have focused on political and diplomatic officials, and other public figures, including some associated with administrations of President Joe Biden and former President Donald Trump."
Accounts involved in what appeared to be "social engineering" -- duping people into providing access to networks or vital data -- were blocked, according to Meta.
An investigation linked the attempts to the same hacking group behind similar attacks, reported by Microsoft and Google, that targeted political, military, diplomatic and other officials, Meta said.
"Given the heightened threat environment ahead of the US election, we also shared information about this malicious activity with law enforcement and with the presidential campaigns to encourage them to stay cautious against potential adversarial targeting," Meta said.
On Monday, three US intelligence and security agencies said Iran was behind a recent hack targeting Trump's presidential campaign and accused Tehran of seeking to influence the 2024 election.
The statement confirmed a Trump campaign claim from earlier this month, which came after internal communications and a dossier on running mate J.D. Vance were shared with a US media outlet.
Harris's campaign said on August 13 that it too had been targeted by foreign hackers, but did not give an indication of which country was believed to be behind the attempt.
OpenAI last week said it took down a cluster of ChatGPT accounts being used to create content for a "covert Iranian influence operation."
The operation used ChatGPT to create articles and short commentary, which was then shared on social media platforms or websites, according to OpenAI.
Main topics were the conflict in Gaza, Israel's presence at the Olympic Games, and the US presidential election.
"They interspersed their political content with comments about fashion and beauty, possibly to appear more authentic or in an attempt to build a following," OpenAI said.
Groups linked to the Iranian government have "laid the groundwork for influence campaigns on trending election-related topics and begun to activate these campaigns in an apparent effort to stir up controversy or sway voters -- especially in swing states," said Clint Watts, general manager of the Microsoft Threat Analysis Center.
Earlier this year, Director of National Intelligence Avril Haines warned that Iran was becoming "increasingly aggressive" in its influence efforts, seeking to stoke discord and undermine democratic institutions as it has done in prior election cycles.
T.Gilbert--TFWP