
20.500.12592/m37q1h1

Table of Contents: Introduction ................................................................................................................................. 2

2 Feb 2024

Many of the same AI risk management techniques at the core of NIST's AI RMF, including AI impact assessments, regular AI accuracy testing, and AI red-teaming efforts, will be effective against the risks of generative AI technologies and the synthetic content they produce.

[...] It is precisely because of the breadth of NIST's AI RMF that EPIC encourages NIST to extend existing provisions of the AI RMF to the risks and harms of generative AI technologies.

[...] In fact, NIST's AI RMF already incorporates principles of transparency and accountability into AI risk management, stating, inter alia, that "[m]eaningful transparency provides access to appropriate levels of information based on the stage of the AI lifecycle and tailored to the role or knowledge of AI actors or individuals interacting with or using the AI system," and that "maintaining the proven[...]

[...] Providers of AI are also required to (1) keep documentation regarding the creation of the data set, including the formulation of assumptions and what the data is supposed to measure and represent, and (2) make automatic event logging technically possible when developing high-risk AI systems. And before the high-risk AI systems are deployed or placed on [...]

[...] We appreciate this opportunity to reply to NIST's RFI and are willing to engage with NIST further on any of the issues raised within our comment, including the centrality of data controls to AI risk management, the value and structure of effective AI red-teaming, and the emerging risks of generative AI.

Authors: Grant Fergusson
Pages: 104
Published in: United States of America