Tabassi, and the entire NIST team carrying out responsibilities under the AI Executive Order: Thank you for the invitation to submit comments in response to the Request for Information (RFI) Related to NIST's Assignments Under Sections 4.1, 4.5, and 11 of the Executive Order Concerning Artificial Intelligence (Sections 4.1, 4.5, and 11). [...]

(1) Developing a companion resource to the AI RMF for generative AI

One resource that provides guidance and links to materials for identifying impacts of generative AI systems, and mitigations for negative impacts, is our own November 2023 publication, the UC Berkeley AI Risk-Management Standards Profile for General-Purpose AI Systems (GPAIS) and Foundation Models, Version 1.0. [...]

The Berkeley GPAIS and foundation model profile discusses a wide variety of risks and harms of generative AI, highlights different roles for different AI actors (e.g., the role of AI developers vs. [...]

Unfortunately, leading technical solutions such as watermarking, labeling, and authenticating provenance will do little to stop this. It will therefore be particularly critical to support robust content moderation across media platforms to facilitate the rapid removal of exploitative and illegal content and to provide redress for those harmed, in addition to criminalizing the creation and intentional [...]

Solely focusing on end use and on the deployers of AI systems misses the importance, for example, of standards for privacy and security by design, for data quality and curation, and for testing and evaluation of general capabilities and vulnerabilities.