Zekun Wu committed on
Commit 3b1fe5b
1 Parent(s): 1c72aee
Files changed (1)
  1. app.py +17 -66
app.py CHANGED
@@ -11,81 +11,32 @@ st.markdown(
  """
  ## Welcome to the AI Explainability Demo

- This application demonstrates the principles of AI explainability in the context of the EU AI Act. It focuses on how Natural Language Explanations (NLE) can be used to provide clear, user-specific, and context-specific explanations of AI systems.

- ### Abstract

- This paper navigates the implications of the emerging EU AI Act for artificial intelligence (AI) explainability, revealing challenges and opportunities. It reframes explainability from mere regulatory compliance with the Act to an organizing principle that can drive user empowerment and compliance with broader EU regulations. The study’s unique contribution lies in attempting to tackle the ‘last mile’ of AI explainability: conveying explanations from AI systems to users. Utilizing explanatory pragmatism as the philosophical framework, it formulates pragmatic design principles for conveying ‘good explanations’ through dialogue systems using natural language explanations. AI-powered robo-advising is used as a case study to assess the design principles, showcasing their potential benefits and limitations. The study acknowledges persisting challenges in the implementation of explainability standards and user trust, urging future researchers to empirically test the proposed principles.

- **Key words**: EU AI Act, Explainability, Explanatory Pragmatism, Natural Language Explanations, Robo-Advising

- ### Table of Contents
- 1. Introduction
- 2. EU AI Act: Meanings of Explainability
- 3. Explanatory Pragmatism
- 4. NLE and Dialogue Systems
- 5. Robo-Advising Case Study
- 6. Limitations
- 7. Conclusion and Future Work

- ### Introduction

- The introduction outlines the structure of the paper, which is divided into six sections:
- 1. The EU AI Act's take on AI explainability.
- 2. Theoretical foundations of explanatory pragmatism.
- 3. The concept and principles of Natural Language Explanations (NLE).
- 4. Application of NLE in a Robo-Advising Dialogue System (RADS).
- 5. Limitations of the proposed approach.
- 6. Future directions for research.

- ### 1. EU AI Act: Meanings of Explainability

- The EU AI Act is part of the EU’s strategy to regulate AI, aiming to balance innovation with risk management. It categorizes AI systems based on risk levels, with high-risk systems subjected to stricter requirements. Explainability, although not explicitly mandated, is implied in several articles, notably Articles 13 and 14, focusing on transparency and human oversight.

- **Articles Overview**:
- - **Article 13**: Emphasizes transparency, requiring high-risk AI systems to be understandable and interpretable by users.
- - **Article 14**: Stresses human oversight to ensure AI systems are used safely and effectively.
-
- The paper argues that transparency and explainability are crucial for user empowerment and regulatory compliance.
-
- ### 2. Explanatory Pragmatism
-
- This section discusses different philosophical approaches to explanation, emphasizing explanatory pragmatism, which views explanations as communicative acts tailored to individual users' needs. The pragmatic framework consists of:
- - **Communicative View**: Explanations as speech acts aimed at facilitating understanding.
- - **Inferentialist View**: Understanding as context-dependent, involving relevant inferences.
-
- **Design Principles for a Good Explanation**:
- 1. Factually Correct: Accurate and relevant information.
- 2. Useful: Provides actionable insights.
- 3. Context Specific: Tailored to the user's context.
- 4. User Specific: Adapted to the user's knowledge level.
- 5. Provides Pluralism: Allows for multiple perspectives.
-
- ### 3. NLE and Dialogue Systems
-
- NLE transforms complex model workings into human-comprehensible language. Dialogue systems, which facilitate interaction between users and AI, are proposed as effective means for delivering NLE. Key design principles for dialogue systems include:
- 1. Natural language prompts.
- 2. Context understanding.
- 3. Continuity in dialogue.
- 4. Admission of system limitations.
- 5. Confidence levels for explanations.
- 6. Near real-time interaction.
-
- ### 4. Robo-Advising Case Study
-
- Robo-advising, although not explicitly high-risk per the EU AI Act, benefits from explainability for user trust and regulatory adherence. The paper illustrates this through hypothetical dialogues between users and a Robo-Advising Dialogue System (RADS), showcasing the principles in action. Different user profiles—retail consumers, data scientists, and regulators—demonstrate varied needs for explanations, highlighting RADS' adaptability and limitations.
-
- ### 5. Limitations
-
- The paper acknowledges technical and ethical challenges in implementing explainability:
- - Complexity of queries.
- - Coherence and relevance of explanations.
- - Context retention and information accuracy.
- - Risk of overreliance on AI.
-
- ### 6. Conclusion and Future Work
-
- The paper concludes that explainability should extend beyond regulatory compliance to foster ethical AI and user empowerment. It calls for empirical testing of the proposed design principles in real-world applications, particularly focusing on the scalability and practicality of implementing NLE in dialogue systems.
+ This application demonstrates principles of AI explainability in the context of the EU AI Act. It showcases how Natural Language Explanations (NLE) can be used to provide clear, user-specific, and context-specific explanations of AI systems.

+ ### Overview of the Paper

+ **Abstract**

+ This paper explores the implications of the EU AI Act for AI explainability, revealing both challenges and opportunities. It reframes explainability from mere regulatory compliance to a principle that can drive user empowerment and adherence to broader EU regulations. The study focuses on conveying explanations from AI systems to users, proposing design principles for 'good explanations' through dialogue systems using natural language. AI-powered robo-advising is used as a case study to illustrate the potential benefits and limitations of these principles.

+ **Key Topics:**
+ - **EU AI Act and Explainability**: Discusses the Act’s requirements for transparency and human oversight in AI systems, emphasizing the need for explainability.
+ - **Explanatory Pragmatism**: Introduces a philosophical framework that views explanations as communicative acts tailored to individual users' needs.
+ - **Natural Language Explanations (NLE)**: Proposes using NLE to make AI model workings comprehensible, enhancing user trust and understanding.
+ - **Dialogue Systems**: Explores the use of dialogue systems to deliver explanations interactively, making them more user-friendly and context-specific.
+ - **Robo-Advising Case Study**: Demonstrates the application of NLE principles in a financial services context, highlighting both the benefits and challenges.

+ ### Goals of the Demo

+ This demo aims to:
+ - Illustrate how NLE can be used to enhance the explainability of AI systems.
+ - Show how different explanation templates can be applied to generate meaningful explanations.
+ - Allow users to evaluate explanations and understand their quality based on defined principles.

+ ### Instructions

+ Use the sidebar to navigate through different functionalities of the demo, including Single Evaluation, Explanation Generation, and Batch Evaluation.

  """
)
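
The section removed above lists five design principles for a good explanation (factually correct, useful, context specific, user specific, provides pluralism), and the new "Goals of the Demo" says users can evaluate explanations against defined principles. A minimal sketch of representing that rubric in Python follows; the 0-1 scale, field names, and unweighted mean are assumptions for illustration, not this repository's actual implementation.

```python
# Illustrative sketch only: a rubric over the paper's five "good explanation"
# principles. The 0-1 scale and the unweighted mean are assumptions,
# not taken from this repository's code.
from dataclasses import dataclass, field

PRINCIPLES = (
    "factually_correct",   # accurate and relevant information
    "useful",              # provides actionable insights
    "context_specific",    # tailored to the user's context
    "user_specific",       # adapted to the user's knowledge level
    "provides_pluralism",  # allows for multiple perspectives
)

@dataclass
class ExplanationScore:
    """Ratings for one explanation, keyed by principle name."""
    ratings: dict[str, float] = field(default_factory=dict)

    def overall(self) -> float:
        """Unweighted mean over the principles that were rated."""
        if not self.ratings:
            return 0.0
        return sum(self.ratings.values()) / len(self.ratings)

if __name__ == "__main__":
    score = ExplanationScore({p: 0.8 for p in PRINCIPLES})
    print(f"Overall quality: {score.overall():.2f}")  # Overall quality: 0.80
```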
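
The "Goals of the Demo" bullet about explanation templates implies a template-filling step. The diff does not show the demo's real templates, so the sketch below invents a robo-advising template and its fields purely for illustration.

```python
# Illustrative sketch only: filling a natural-language explanation template.
# The template text and its fields are hypothetical, inspired by the paper's
# robo-advising case study; the demo's actual templates may differ.
from string import Template

ROBO_ADVICE_TEMPLATE = Template(
    "We recommend allocating $equity_pct% of your portfolio to equities "
    "because your stated risk tolerance is '$risk_level' and your horizon "
    "is $horizon_years years. Adjust either input to see how the "
    "recommendation changes."
)

explanation = ROBO_ADVICE_TEMPLATE.substitute(
    equity_pct=60, risk_level="moderate", horizon_years=10
)
print(explanation)
```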
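
The new "Instructions" line names three sidebar destinations. A minimal Streamlit sketch of that navigation pattern follows; the page names come from the diff, but the page bodies are placeholders, and the real app.py may wire its pages differently.

```python
# Illustrative sketch only: sidebar navigation over the three functionalities
# named in the diff. Page contents are placeholders, not the app's real code.
import streamlit as st

page = st.sidebar.radio(
    "Navigation",
    ("Single Evaluation", "Explanation Generation", "Batch Evaluation"),
)

if page == "Single Evaluation":
    st.header("Single Evaluation")
    st.write("Rate one explanation against the quality principles.")
elif page == "Explanation Generation":
    st.header("Explanation Generation")
    st.write("Generate an explanation from a template.")
else:
    st.header("Batch Evaluation")
    st.write("Evaluate a set of explanations in one pass.")
```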