The report provides a comprehensive overview of synthetic content, focusing on AI-generated media that is often indistinguishable from human-created content. The rise of Generative AI (GenAI) tools has made it easier for individuals and organizations to produce synthetic content in various forms, such as text, audio, video, and images. The report identifies several risks associated with synthetic content, including malicious impersonation, political disinformation, misinformation, synthetic child sexual abuse material (CSAM), and non-consensual intimate imagery (NCII). Additionally, financial scams and discrimination are noted as emerging threats due to synthetic media.
The report explores various strategies and technical approaches to mitigate these risks, including watermarking, provenance tracking, metadata recording, synthetic content labeling, content detection, and legal restrictions, particularly regarding deepfakes. It also highlights ongoing U.S. policy efforts and legislative frameworks, such as the work of the U.S. Senate AI Working Group and federal agencies like NIST, FTC, FCC, and FEC, to regulate and manage the impact of synthetic content.
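To make the provenance and metadata approaches above concrete, the sketch below shows one simple way a generator could bind a disclosure label to a piece of content: hash the content bytes, record metadata about how it was produced, and sign the record so later edits or label-stripping can be detected. This is a minimal illustration only, not the method described in the report or any real standard (production systems such as C2PA use public-key signatures and richer manifests); the key, function names, and fields are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the content-generating service.
# Real provenance standards use public-key signatures; HMAC is used
# here only to keep the sketch standard-library-only.
SECRET_KEY = b"demo-key-not-for-production"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Create a provenance record binding metadata to the content's hash."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,   # e.g. which GenAI model produced it
        "synthetic": True,        # the disclosure label
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check the signature, and that the content still matches its hash."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and unsigned["sha256"] == hashlib.sha256(content).hexdigest())

image = b"...synthetic image bytes..."
rec = attach_provenance(image, generator="example-model-v1")
print(verify_provenance(image, rec))             # True: intact content verifies
print(verify_provenance(image + b"x", rec))      # False: edited content fails
```

Note the tradeoff the report raises: any metadata recorded this way (model name, timestamps, device identifiers) travels with the content, which is exactly how transparency techniques can expose personal data.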
The report concludes by discussing the tradeoffs between privacy, security, and transparency. While technical safeguards can help combat harmful synthetic content, they also raise concerns over privacy due to the potential exposure of personal data.
- Pages: 35
- Published in: United States of America
Table of Contents
- ISSUE BRIEF SYNTHETIC CONTENT 2
- I. Introduction 4
- II. Synthetic content, or AI-generated content, can create or exacerbate risks. 5
- Synthetic content 5
- A. Malicious impersonation 6
- B. Disinformation and misinformation 6
- 1. Elections and politics 7
- 2. Health 8
- C. Synthetic NCII 8
- D. Synthetic CSAM 9
- E. Financial synthetic content scams 9
- F. Discrimination 10
- G. Loss of trust in media 10
- III. Policymakers, scholars, and technologists are creating frameworks for technical and organizational approaches to mitigating some of the risks associated with synthetic content. 10
- A. Watermarking 11
- B. Provenance tracking 12
- C. Metadata recording 13
- D. Synthetic content labeling and disclosure 14
- E. Synthetic content detection 14
- F. Hashing and filtering 15
- G. Legal prohibitions on deepfakes and impersonation 15
- IV. Safeguards against synthetic content harms can both support and be in tension with privacy and security. 18
- A. Techniques for addressing harmful synthetic content can support privacy and security 18
- B. Techniques for combating harmful synthetic content can be in tension with privacy and security 19
- 1. Transparency techniques can reveal personal data. 19
- 2. Transparency techniques can conflict with other privacy and data protection principles. 21
- C. Other factors may limit the effectiveness of techniques for combating harmful synthetic content 21
- 1. Transparency techniques are not sufficient in isolation. 21
- 2. Transparency techniques can be easy to circumvent. 23
- 3. Effective transparency techniques require standardization, interoperability, and coordination. 23
- D. Maintaining privacy and security for digital content transparency techniques 25
- V. Conclusion 26
- VI. Appendix: Regulatory Frameworks in the U.S. 27
- A. Legislation: synthetic content transparency, authentication, and prohibitions 27
- B. Legislation: deepfakes and impersonation 30
- C. Regulation: federal agency action on synthetic content 31
- D. Bipartisan U.S. Senate AI Working Group Roadmap for Artificial Intelligence Policy 33
If you have any questions, please contact us at info@fpf.org.