
20.500.12592/1h7vynp

UC Berkeley

17 Jun 2024

UC Berkeley Center for Long-Term Cybersecurity, CLTC White Paper Series
Improving the Explainability of Artificial Intelligence: The Promises and Limitations of Counterfactual Explanations
Alexander Asemota

[...] Recommendations for Companies: Compare the recommendations for counterfactuals to observed data to evaluate their accuracy and effectiveness. [...] However, counterfactuals are not often used in practice, partially due to the gap between methodological research and applied practice. [...] Conclusion: The rise of AI/ML has led to a growing need for explainability and transparency from what are often opaque systems. [...] Acknowledgments: First, I'd like to acknowledge the UC Berkeley AI Policy Hub for graciously funding this work through the AI Security Initiative at the Center for Long-Term Cybersecurity (CLTC) and the CITRIS Policy Lab at the Center for Information Technology Research in the Interest of Society. [...]
Pages: 24
Published in: United States of America

Table of Contents