CDT Brief - Election Integrity Recommendations for Generative AI Developers

25 Jul 2024

While identifying solutions to the distribution of this content is absolutely necessary — and CDT has supported several initiatives to create voluntary standards for technology companies that help to prevent these risks — it is also necessary to consider the policies and product interventions that generative AI developers should adopt in order to prevent harmful content from being created or spread. [...]

Some AI distributors, like Meta and Google, have begun to release new generative AI tools in their advertising suites that enable advertisers to adjust the content and appearance of their ads in ways that may be hard for viewers to detect. [...]

In the immediate term, watermarking is unlikely to be a full solution to the use of generative AI to spread disinformation, given the ease of stripping watermarks and the availability of many LLMs that do not add watermarks to their outputs. [...]

Therefore, model developers should invest in funding internal and external researchers to test their systems' responses to these questions in languages widely spoken in the United States, and should publish their findings so they can be held to account for their policy and enforcement decisions. [...]

Reporting should include information about the policies in place and how they are enforced, as well as disclosure of the number of queries that violate those policies, the accuracy and error rates of classifiers and prompt refusals, and the composition of the trust and safety teams responsible for these policies and enforcement systems.
center for democracy & technology; cdt; elections & democracy; generative ai; dev

Authors

Tim Harper; CDT

Pages
15
Published in
United States of America