We examine the concept of hallucination and its implications, and we share best practices for the responsible use of GenAI.
Authors
- Pages: 14
- Published in: Japan
Table of Contents
- Not all hallucinations are bad: acknowledging GenAI's constraints and benefits 1
- The AI daze 2
- Hallucinations are completely fabricated outputs from large language models. Even though they represent made-up facts, the LLM output presents them with confidence and authority. 2
- Generative AI foundations and technology 3
- Understanding the mystery of data 4
- Generative anthropomorphism, attributing human-like traits to nonhuman entities, becomes evident as AI systems learn and derive creativity from vast datasets. 4
- Types of GenAI hallucination 5
- Hallucination in action in different sectors 6
- When GenAI is implemented in organizations but is not connected to data such as internal rules and work-related materials when generating content, it can lead to hallucinatory responses. 6
- Transforming hallucination drawbacks into advantages 7
- Mitigating hallucination in GenAI 8
- Research advancements and best practices 9
- 3 best practices for mitigating hallucinations in GenAI 9
- Ethical and societal implications of hallucination in AI 10
- Guidelines for the responsible use of GenAI 11
- How NTT DATA is charting the future with GenAI 12
- Let's get started 13
- See what NTT DATA can do for you. 13
- Visit nttdata.com to learn more 13