- Explosive Growth of Election Deepfakes: AI-generated election deepfake content on X has grown by 130% per month, while deepfake content overall rose 900% between 2019 and 2020.
- Deepfake Generators Easily Exploited: In 41% of tests by the Center for Countering Digital Hate (CCDH), major AI image generators produced election-related deepfakes despite policies prohibiting such content.
- Social Media’s Role in Deepfake Dissemination: Inconsistent fact-checking and moderation on platforms like X allow deepfakes to spread widely, undermining election integrity and weakening safeguards against misinformation.
Impact
- Increased Scrutiny on AI Platforms: The proliferation of deepfakes may lead to stricter regulations and oversight on AI technologies, particularly those capable of generating realistic images.
- Urgent Need for Improved Safeguards: Tech companies must enhance AI moderation tools and policies to combat the misuse of their platforms for political disinformation.
- Potential for Legislative Action: The spread of deepfakes could accelerate the passage of laws penalizing the creation and dissemination of AI-generated misinformation.
- Investor Caution: Investors might become more cautious about funding AI startups without clear, ethical use cases and robust content moderation capabilities.
- Tech Companies’ Responsibility Increases: As deepfakes become a major concern, tech companies may face greater pressure to invest in trust and safety, affecting their resource allocation and possibly their valuations.