LennardZuendorf committed
Commit f6d622b
1 Parent(s): bf15c20

feat: finally actual final changes

Files changed (5)
  1. README.md +4 -4
  2. explanation/plotting.py +2 -1
  3. main.py +1 -1
  4. model/mistral.py +1 -1
  5. public/about.md +1 -1
README.md CHANGED
@@ -51,7 +51,7 @@ This Project was part of my studies of Business Computing at the University of A
 
  1. Clone the repository using git or the GitHub CLI.
  2. Start the (virtual) environment.
- 3. Set the environment variable "HOSTING", e.g. like this: `export HOSTING=local USER=admin PW=test`, see the [FastAPI docs](https://fastapi.tiangolo.com/advanced/settings/).
+ 3. Set the environment variable "HOSTING", e.g. like this: `export HOSTING=local`, see the [FastAPI docs](https://fastapi.tiangolo.com/advanced/settings/).
  4. Install the requirements using `pip install -r requirements.txt`.
  5. Run the app using `uvicorn main:app`. You can add `--reload` to enable hot reloading. The app will be available at `localhost:8000`.
 
@@ -60,14 +60,14 @@ This Project was part of my studies of Business Computing at the University of A
 
  1. Clone the repository using git or the GitHub CLI.
  2. Build the docker image using `docker build -t thesis-webapp -f Dockerfile .`, the command commented in the Dockerfile, or the command referenced by your hosting service.
- 3. Run the docker image using `docker run --name thesis-webapp -e HOSTING=local USER=admin PW=test -p 8080:8080 thesis-webapp`, the command commented in the Dockerfile, or the command referenced by your hosting service.
+ 3. Run the docker image using `docker run --name thesis-webapp -e HOSTING=local -p 8080:8080 thesis-webapp`, the command commented in the Dockerfile, or the command referenced by your hosting service.
  4. The app will be available at `localhost:8080`. If you are using a hosting service, the port may be different.
 
  ### 🐳 Docker Image:
  (This assumes you have set up Docker Desktop or are using a hosting service able to handle Docker images.)
 
  1. Pull the docker image from ghcr using `docker pull ghcr.io/LennardZuendorf/thesis-webapp:1.3.1`.
- 2. Run the docker image in a terminal using `docker run --name thesis-webapp -e HOSTING=local USER=admin PW=test -p 8080:8080 lennardzuendorf/thesis-webapp:1.3.1`, the command commented in the Dockerfile, or the command referenced by your hosting service.
+ 2. Run the docker image in a terminal using `docker run --name thesis-webapp -e PW=test -p 8080:8080 lennardzuendorf/thesis-webapp:1.3.1`, the command commented in the Dockerfile, or the command referenced by your hosting service.
  3. The app will be available at `localhost:8080`. If you are using a hosting service, the port may be different.
 
  ## 📝 License and Credits:
@@ -75,7 +75,7 @@ This Project was part of my studies of Business Computing at the University of A
 
  This project is licensed under the MIT License, see [LICENSE](LICENSE.md) for more information. Please cite this project, its author, and my university if you use it in your work.
 
  - Title: Building an Interpretable Natural Language AI Tool based on Transformer Models and approaches of Explainable AI.
- - Date: 2024-01-27
+ - Date: 2024-02-14
  - Author: Lennard Zündorf
  - University: HTW Berlin
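The README change above trims the exported variables down to a single `HOSTING` switch. A minimal sketch of how the app side might read it (the helper name is hypothetical; the webapp itself follows the FastAPI settings pattern linked in the README):

```python
import os

def get_hosting(default: str = "local") -> str:
    """Read the HOSTING environment variable exported in the setup steps.

    Falls back to a default so a bare `uvicorn main:app` still starts.
    """
    return os.environ.get("HOSTING", default)
```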
explanation/plotting.py CHANGED
@@ -42,13 +42,14 @@ def plot_seq(seq_values: list, method: str = ""):
  }, # White background
  )
 
+ # setting plot properties, labels, and title
  plt.axhline(0, color="black", linewidth=1)
  plt.title(f"Input Token Attribution with {method}")
  plt.xlabel("Input Tokens", labelpad=0.5)
  plt.ylabel("Attribution")
  plt.xticks(x_positions, tokens, rotation=45)
 
- # Adjust y-axis limits to ensure there's enough space for labels
+ # adjusting y-axis limits to ensure there's enough space for labels
  y_min, y_max = plt.ylim()
  y_range = y_max - y_min
  plt.ylim(y_min - 0.1 * y_range, y_max + 0.1 * y_range)
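The y-limit adjustment in the hunk above is plain arithmetic: widen the range by 10% on each side so rotated tick labels don't clip. Sketched standalone (the helper name is hypothetical, matplotlib-free):

```python
def pad_ylim(y_min: float, y_max: float, frac: float = 0.1) -> tuple[float, float]:
    """Widen (y_min, y_max) by `frac` of the range on each side,
    mirroring the plt.ylim(...) call in plot_seq."""
    y_range = y_max - y_min
    return y_min - frac * y_range, y_max + frac * y_range
```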
main.py CHANGED
@@ -147,7 +147,7 @@ with gr.Blocks(
  )
 
  # calling info functions on inputs/submits for different settings
- system_prompt.change(system_prompt_info, [system_prompt])
+ system_prompt.input(system_prompt_info, [system_prompt])
  xai_selection.change(xai_info, [xai_selection])
  model_selection.change(model_info, [model_selection])
 
model/mistral.py CHANGED
@@ -34,7 +34,7 @@ TOKENIZER = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
  CONFIG = GenerationConfig.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
  base_config_dict = {
  "temperature": 1,
- "max_new_tokens": 100,
+ "max_new_tokens": 64,
  "top_p": 0.9,
  "repetition_penalty": 1.2,
  "do_sample": True,
public/about.md CHANGED
@@ -12,7 +12,7 @@ This research tackles the rise of LLM based applications such as chatbots and exp
 
  ## Implementation
 
- This project is an implementation of PartitionSHAP into GODEL by Microsoft - [GODEL Model](https://huggingface.co/microsoft/GODEL-v1_1-large-seq2seq), which is a generative seq2seq transformer fine-tuned for goal-directed dialog. It supports context and knowledge base inputs.
+ This project
 
  The UI is built with Gradio, utilizing some custom components and FastAPI.