Spaces: Runtime error

Merge branch 'main' into Ling

Files changed:
- Dockerfile +12 -0
- Documentation.md +4 -1
- README.md +7 -0
- src/api.py +119 -39
- src/dataloader.py +0 -2
- src/fine_tune_T5.py +5 -2
- src/inference_t5.py +5 -3
- templates/index.html.jinja +26 -31
- templates/site_style/css/main.css +2 -0
Dockerfile
CHANGED
@@ -6,6 +6,18 @@ COPY requirements.txt .
 
 RUN pip install --no-cache-dir --upgrade -r requirements.txt
 
+RUN useradd -m -u 1000 user
+
+USER user
+
+ENV HOME=/home/user \
+    PATH=/home/user/.local/bin:$PATH
+
+
+WORKDIR $HOME/app
+
+COPY --chown=user . $HOME/app
+
 COPY . .
 
 CMD ["uvicorn", "src.api:app", "--host", "0.0.0.0", "--port", "7860"]
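Assembled, the changed Dockerfile would read roughly as follows. This is a sketch: the `FROM` line and anything above the hunk are not shown in the diff, so the Python base image here is an assumption.

```dockerfile
# Assumed base image -- the FROM line is outside the hunk shown above.
FROM python:3.10

COPY requirements.txt .

RUN pip install --no-cache-dir --upgrade -r requirements.txt

# Run as a non-root user with UID 1000, as Hugging Face Spaces
# expects for Docker-SDK apps.
RUN useradd -m -u 1000 user

USER user

ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH

WORKDIR $HOME/app

# Copy the app with ownership set to the non-root user.
COPY --chown=user . $HOME/app

COPY . .

CMD ["uvicorn", "src.api:app", "--host", "0.0.0.0", "--port", "7860"]
```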
Documentation.md
CHANGED
@@ -16,11 +16,12 @@ Le corpus est nettoyé avant d'être utilisé pour l'entraînement du LSTM. Seul
 - LSTM built from this <a href="https://loicgrobol.github.io//neural-networks/slides/03-transformers/transformers-slides.py.ipynb">course</a>, this <a href="https://www.kaggle.com/code/columbine/seq2seq-pytorch">example</a>, and many other online references.
 - Fine-tuned transformer model released and pre-trained by Google: <a href="https://huggingface.co/google/mt5-small">google/mt5-small</a>, a variant of <a href="https://huggingface.co/docs/transformers/v4.16.2/en/model_doc/mt5">mT5</a>. The model is trained for our task following the <a href="https://huggingface.co/docs/transformers/tasks/summarization">Summarization</a> documentation provided by Huggingface.
 
+
 # Methodology
 
 ## Work distribution 👥
 We worked with the version-control platform Github, with a continuous-integration pipeline that deploys `pull request`s directly to the Huggingface Space.
-
+Ideally, `pull request`s should be approved by two project members before being merged, to avoid errors in production. We did not enable this restriction because of the difficulty of managing Docker on Huggingface, which required many modifications.
 
 ## Problems encountered and solutions
 
@@ -142,8 +143,10 @@ Pour ce faire nous nous sommes beaucoup inspirée du kaggle https://www.kaggle.c
 ## LSTM results
 
 The LSTM results are unusable, but they at least let us confront the difficulty of building neural networks almost from scratch.
+
 We would have liked more time to go further and understand more: batch training, why the results are so poor, other generation strategies to try, ...
 
 ## Fine-tuning results
 
 The generated summaries are not 100% grammatically correct, but the important information from the text is present in the summary, and the summary length matches our expectations. However, the ROUGE evaluation scores are very poor: despite an improvement from 0.007 to 0.06 for rouge1, we could not obtain better scores.
+
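The rouge1 figures quoted above (0.007 improving to 0.06) are unigram-overlap F1 scores. The idea can be sketched without the official ROUGE implementation; this simplified version tokenizes on whitespace and skips stemming, so its numbers only approximate real ROUGE-1:

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1: F1 over clipped unigram overlap (no stemming)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # unigram matches, clipped per word
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f("the cat sat", "the cat sat on the mat")
print(round(score, 3))  # → 0.667
```

A score of 0.06, as reported for the fine-tuned model, therefore means very little word overlap between generated and reference summaries.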
README.md
CHANGED
@@ -1,3 +1,10 @@
+---
+title: SummaryProject
+sdk: docker
+app_file: src/api.py
+pinned: false
+---
+
 # Project Deep Learning - Text summarisation tool and its application programming interface
 
 As part of the master course "Neural Network", our task for this university project is to create an application, an interface, or a Python library applying NLP (Natural Language Processing) with the help of an artificial neural network system.
src/api.py
CHANGED
@@ -1,82 +1,162 @@
 from fastapi import FastAPI, Form, Request
 from fastapi.staticfiles import StaticFiles
 from fastapi.templating import Jinja2Templates
+import re
+
 
 from src.inference_lstm import inference_lstm
 from src.inference_t5 import inference_t5
 
 
 def summarize(text: str):
+    """
+    Returns the summary of an input text.
+
+    Parameter
+    ---------
+    text : str
+        A text to summarize.
+
+    Returns
+    -------
+    :str
+        The summary of the input text.
+    """
+    if global_choose_model.var == "lstm":
+        text = " ".join(inference_lstm(text))
+        return re.sub("^1|1$|<start>|<end>", "", text)
+    elif global_choose_model.var == "fineTunedT5":
         text = inference_t5(text)
+        return re.sub("<extra_id_0> ", "", text)
+    elif global_choose_model.var == "":
+        return "You have not chosen a model."
+
+
+def global_choose_model(model_choice):
+    """This function connects the model choice to the summary
+    function by defining a function attribute.
+    The aim is to access a variable outside of a function."""
+    if model_choice == "lstm":
+        global_choose_model.var = "lstm"
+    elif model_choice == "fineTunedT5":
+        global_choose_model.var = "fineTunedT5"
+    elif model_choice == " --- ":
+        global_choose_model.var = ""
+
+
+# definition of the main elements used in the script
+model_list = [
+    {"model": " --- ", "name": " --- "},
+    {"model": "lstm", "name": "LSTM"},
+    {"model": "fineTunedT5", "name": "Fine-tuned T5"},
+]
+selected_model = " --- "
+model_choice = ""
+
+
+# -------- API ---------------------------------------------------------------
 app = FastAPI()
 
+# static files to send the css
 templates = Jinja2Templates(directory="templates")
 app.mount("/templates", StaticFiles(directory="templates"), name="templates")
 
 
 @app.get("/")
 async def index(request: Request):
+    """This function is used to create an endpoint for the
+    index page of the app."""
+    return templates.TemplateResponse(
+        "index.html.jinja",
+        {
+            "request": request,
+            "current_route": "/",
+            "model_list": model_list,
+            "selected_model": selected_model,
+        },
+    )
 
 
 @app.get("/model")
+async def get_model(request: Request):
+    """This function is used to create an endpoint for
+    the model page of the app."""
+    return templates.TemplateResponse(
+        "index.html.jinja",
+        {
+            "request": request,
+            "current_route": "/model",
+            "model_list": model_list,
+            "selected_model": selected_model,
+        },
+    )
 
 
 @app.get("/predict")
+async def get_prediction(request: Request):
+    """This function is used to create an endpoint for
+    the predict page of the app."""
+    return templates.TemplateResponse(
+        "index.html.jinja", {"request": request, "current_route": "/predict"}
+    )
 
 
 @app.post("/model")
+async def choose_model(request: Request, model_choice: str = Form(None)):
+    """This function retrieves the model chosen by the user. It either
+    returns an error message if no model is selected, or passes the choice
+    to the global_choose_model function, which connects the user's choice
+    to the use of a model."""
+    selected_model = model_choice
+    # print(selected_model)
+    if not model_choice:
+        model_error = "Please select a model."
         return templates.TemplateResponse(
+            "index.html.jinja",
+            {
+                "request": request,
+                "text": model_error,
+                "model_list": model_list,
+                "selected_model": selected_model,
+            },
         )
     else:
+        global_choose_model(model_choice)
+        return templates.TemplateResponse(
+            "index.html.jinja",
+            {
+                "request": request,
+                "model_list": model_list,
+                "selected_model": selected_model,
+            },
+        )
 
 
 @app.post("/predict")
 async def prediction(request: Request, text: str = Form(None)):
+    """This function retrieves the user's input text. It either returns
+    an error message or sends the text to the summarize function."""
     if not text:
+        text_error = "Please enter your text."
         return templates.TemplateResponse(
+            "index.html.jinja",
+            {
+                "request": request,
+                "text": text_error,
+                "model_list": model_list,
+                "selected_model": selected_model,
+            },
         )
     else:
         summary = summarize(text)
         return templates.TemplateResponse(
+            "index.html.jinja",
+            {
+                "request": request,
+                "text": text,
+                "summary": summary,
+                "model_list": model_list,
+                "selected_model": selected_model,
+            },
        )
 
 
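The routing in `summarize` relies on two small tricks: a function attribute (`global_choose_model.var`) acting as shared state between endpoints, and `re.sub` to strip the sentinel tokens the models emit. A minimal self-contained sketch of both ideas; the decoded string below is a hypothetical stand-in for real LSTM output:

```python
import re

def choose(model_name):
    """Store the latest choice as an attribute on the function itself,
    the same trick src/api.py uses to share state between endpoints."""
    choose.var = model_name

# Initialize the attribute so the first read cannot raise AttributeError
# (in api.py, calling summarize before any POST /model would fail this way).
choose.var = ""
choose("lstm")

# Hypothetical decoded LSTM output, wrapped in the sentinels the model emits.
decoded = "1 <start> the summary <end> 1"
cleaned = re.sub("^1|1$|<start>|<end>", "", decoded).strip()
print(choose.var, "->", cleaned)  # prints: lstm -> the summary
```

Function attributes avoid a `global` statement, but a module-level variable or an app-level state object would be the more conventional FastAPI design.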
src/dataloader.py
CHANGED
@@ -38,8 +38,6 @@ class Data(torch.utils.data.Dataset):
         <end> tokens depending on the text_type
     get_words()
         get the dataset vocabulary
-    make_dataset()
-        create a dataset with cleaned data
     """
 
     def __init__(self, path: str, transform=None) -> None:
src/fine_tune_T5.py
CHANGED
@@ -159,7 +159,9 @@ if __name__ == '__main__':
     # device definition
     device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
     # load the model to train
-
+
+    hf_token = "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"
+    tokenizer = AutoTokenizer.from_pretrained('google/mt5-small', use_auth_token=hf_token)
 
     mt5_config = AutoConfig.from_pretrained(
         "google/mt5-small",
@@ -167,6 +169,7 @@ if __name__ == '__main__':
         length_penalty=0.6,
         no_repeat_ngram_size=2,
         num_beams=15,
+        use_auth_token=hf_token
     )
 
     model = (AutoModelForSeq2SeqLM
@@ -242,7 +245,7 @@ if __name__ == '__main__':
 
     # load the model locally
     model = (AutoModelForSeq2SeqLM
-             .from_pretrained("t5_summary")
+             .from_pretrained("t5_summary", use_auth_token=hf_token)
              .to(device))
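The generation settings above combine beam search (`num_beams=15`), a length penalty, and `no_repeat_ngram_size=2`, which bans any token that would recreate a bigram already present in the output. A simplified sketch of that last constraint, not the transformers implementation:

```python
def banned_next_tokens(generated, n=2):
    """Return the tokens that would complete an n-gram already present
    in `generated`, which is the constraint no_repeat_ngram_size enforces."""
    if len(generated) < n - 1:
        return set()
    prefix = tuple(generated[-(n - 1):])  # the (n-1)-gram we are extending
    banned = set()
    for i in range(len(generated) - n + 1):
        if tuple(generated[i:i + n - 1]) == prefix:
            banned.add(generated[i + n - 1])
    return banned

# After "le chat dort le", emitting "chat" would repeat the bigram "le chat".
print(banned_next_tokens(["le", "chat", "dort", "le"]))  # → {'chat'}
```

During decoding, the library sets the scores of banned tokens to negative infinity before each sampling or beam step, which is why summaries stop repeating short phrases.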
src/inference_t5.py
CHANGED
@@ -3,6 +3,8 @@
 """
 import re
 import string
+import os
+os.environ['TRANSFORMERS_CACHE'] = './.cache'
 
 import contractions
 import torch
@@ -32,11 +34,11 @@ def inference_t5(text: str) -> str:
     # define the input parameters for the model
     text = clean_text(text)
     device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-
-    tokenizer =
+    hf_token = "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"
+    tokenizer = AutoTokenizer.from_pretrained("Linggg/t5_summary", use_auth_token=hf_token)
     # load local model
     model = (AutoModelForSeq2SeqLM
-             .from_pretrained("Linggg/t5_summary",use_auth_token=
+             .from_pretrained("Linggg/t5_summary", use_auth_token=hf_token)
              .to(device))
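The placement of the `os.environ` line matters: transformers reads `TRANSFORMERS_CACHE` when the library is first imported, so the variable must be set before the import, exactly as the hunk above does. A minimal sketch of the pattern, with a plain read-back standing in for the actual transformers import:

```python
import os

# Must happen before `import transformers`; the library resolves its
# cache directory at import time, and a read-only home directory on
# Hugging Face Spaces makes the default location unwritable.
os.environ["TRANSFORMERS_CACHE"] = "./.cache"

# In inference_t5.py the transformers imports would follow here; any
# later import now sees the redirected cache path.
print(os.environ["TRANSFORMERS_CACHE"])  # → ./.cache
```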
templates/index.html.jinja
CHANGED
@@ -1,35 +1,19 @@
 <!DOCTYPE html>
-<html lang="
+<html lang="en">
 <head>
 <title>Text summarization API</title>
 <meta charset="utf-8" />
 <meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no" />
-<
+<style>html, body, div, h1, h2, p, blockquote,a, code, em, img, strong, u, ul, li,label, legend, caption, tr, th, td,header, menu, nav, section, summary{margin: 0;padding: 0;border: 0;font-size: 100%;font: inherit;vertical-align: baseline}header, menu, nav, section{display: block}div{margin-bottom: 20px}body{line-height: 1}ul{list-style: none}body{-webkit-text-size-adjust: none}input::-moz-focus-inner{border: 0;padding: 0}html{box-sizing: border-box}*, *:before, *:after{box-sizing: inherit}body{color: #5b5b5b;font-size: 15pt;line-height: 1.85em;font-family: 'Source Sans Pro', sans-serif;font-weight: 300;background-image: url("templates/site_style/images/background.jpg");background-size: cover;background-position: center center;background-attachment: fixed}h1, h2, h3{font-weight: 400;color: #483949;line-height: 1.25em}h1 a, h2 a, h3 a{color: inherit;text-decoration: none;border-bottom-color: transparent}h1 strong, h2 strong, h3 strong{font-weight: 600}h2{font-size: 2.85em}h3{font-size: 1.25em}strong, b{font-weight: 400;color: #483949}em, i{font-style: italic}a{color: inherit;border-bottom: solid 1px rgba(128, 128, 128, 0.15);text-decoration: none;-moz-transition: background-color 0.35s ease-in-out, color 0.35s ease-in-out, border-bottom-color 0.35s ease-in-out;-webkit-transition: background-color 0.35s ease-in-out, color 0.35s ease-in-out, border-bottom-color 0.35s ease-in-out;-ms-transition: background-color 0.35s ease-in-out, color 0.35s ease-in-out, border-bottom-color 0.35s ease-in-out;transition: background-color 0.35s ease-in-out, color 0.35s ease-in-out, border-bottom-color 0.35s ease-in-out}a:hover{color: #ef8376;border-bottom-color: transparent}p, ul{margin-bottom: 1em}p{text-align: justify}hr{position: relative;display: block;border: 0;top: 4.5em;margin-bottom: 9em;height: 6px;border-top: solid 1px rgba(128, 128, 128, 0.2);border-bottom: solid 1px rgba(128, 128, 128, 0.2)}hr:before, hr:after{content: '';position: absolute;top: -8px;display: block;width: 1px;height: 21px;background: rgba(128, 128, 128, 0.2)}hr:before{left: -1px}hr:after{right: -1px}ul{list-style: disc;padding-left: 1em}ul li{padding-left: 0.5em;font-size: 85%;list-style: none}textarea{border-radius: 10px;resize: none;padding: 10px;line-height: 20px;word-spacing: 1px;font-size: 16px;width: 85%;height: 100%}::-webkit-input-placeholder{font-size: 17px;word-spacing: 1px}table{width: 100%}table.default{width: 100%}table.default tbody tr:first-child{border-top: 0}table.default tbody tr:nth-child(2n+1){background: #fafafa}table.default th{text-align: left;font-weight: 400;padding: 0.5em 1em 0.5em 1em}input[type="button"],input[type="submit"],input[type="reset"],button,.button{position: relative;display: inline-block;background: #df7366;color: #fff;text-align: center;border-radius: 0.5em;text-decoration: none;padding: 0.65em 3em 0.65em 3em;border: 0;cursor: pointer;outline: 0;font-weight: 300;-moz-transition: background-color 0.35s ease-in-out, color 0.35s ease-in-out, border-bottom-color 0.35s ease-in-out;-webkit-transition: background-color 0.35s ease-in-out, color 0.35s ease-in-out, border-bottom-color 0.35s ease-in-out;-ms-transition: background-color 0.35s ease-in-out, color 0.35s ease-in-out, border-bottom-color 0.35s ease-in-out;transition: background-color 0.35s ease-in-out, color 0.35s ease-in-out, border-bottom-color 0.35s ease-in-out}input[type="button"]:hover,input[type="submit"]:hover,input[type="reset"]:hover,button:hover,.button:hover{color: #fff;background: #ef8376}input[type="button"].alt,input[type="submit"].alt,input[type="reset"].alt,button.alt,.button.alt{background: #2B252C}input[type="button"].alt:hover,input[type="submit"].alt:hover,input[type="reset"].alt:hover,button.alt:hover,.button.alt:hover{background: #3B353C}#header{position: relative;background-size: cover;background-position: center center;background-attachment: fixed;color: #fff;text-align: center;padding: 5em 0 2em 0;cursor: default;height: 100%}#header:before{content: '';display: inline-block;vertical-align: middle;height: 100%}#header .inner{position: relative;z-index: 1;margin: 0;display: inline-block;vertical-align: middle}#header header{display: inline-block}#header header > p{font-size: 1.25em;margin: 0}#header h1{color: #fff;font-size: 3em;line-height: 1em}#header h1 a{color: inherit}#header .button{display: inline-block;border-radius: 100%;width: 4.5em;height: 4.5em;line-height: 4.5em;text-align: center;font-size: 1.25em;padding: 0}#header hr{top: 1.5em;margin-bottom: 3em;border-bottom-color: rgba(192, 192, 192, 0.35);box-shadow: inset 0 1px 0 0 rgba(192, 192, 192, 0.35)}#header hr:before, #header hr:after{background: rgba(192, 192, 192, 0.35)}#nav{position: absolute;top: 0;left: 0;width: 100%;text-align: center;padding: 1.5em 0 1.5em 0;z-index: 1;overflow: hidden}#nav > hr{top: 0.5em;margin-bottom: 6em}.copyright{margin-top: 50px}@media screen and (max-width: 1680px){body, input, select{font-size: 14pt;line-height: 1.75em}}@media screen and (max-width: 1280px){body, input, select{font-size: 12pt;line-height: 1.5em}#header{background-attachment: scroll}#header .inner{padding-left: 2em;padding-right: 2em}}@media screen and (max-width: 840px){body, input, select{font-size: 13pt;line-height: 1.65em}}#navPanel, #titleBar{display: none}@media screen and (max-width: 736px){html, body{overflow-x: hidden}body, input, select{font-size: 12.5pt;line-height: 1.5em}h2{font-size: 1.75em}h3{font-size: 1.25em}hr{top: 3em;margin-bottom: 6em}#header{background-attachment: scroll;padding: 2.5em 0 0 0}#header .inner{padding-top: 1.5em;padding-left: 1em;padding-right: 1em}#header header > p{font-size: 1em}#header h1{font-size: 1.75em}#header hr{top: 1em;margin-bottom: 2.5em}#nav{display: none}#main > header{text-align: center}div.copyright{margin-top: 10px}label, textarea{font-size: 0.8rem;letter-spacing: 1px;font-family: Georgia, 'Times New Roman', Times, serif}.buttons{display: flex;flex-direction: row;justify-content: center;margin-top: 20px}}
+</style>
 <script>
 function customReset()
 {
-document.getElementById("
+document.getElementById("text_form").value = "";
 document.getElementById("text").value = "";
 document.getElementById("summary").value = "";
 }
 </script>
-<script>
-function submitBothForms()
-{
-document.getElementById("my_form").submit();
-document.getElementById("choixModel").submit();
-}
-</script>
-<script>
-function getValue() {
-var e = document.getElementById("choixModel");
-var value = e.value;
-var text = e.options[e.selectedIndex].text;
-return text}
-</script>
-<script type="text/javascript">
-document.getElementById('choixModel').value = "<?php echo $_GET['choixModel'];?>";
-</script>
 </head>
 <body>
 <div id="header">
@@ -44,22 +28,29 @@
 <hr/>
 </nav>
 
-<div class="
-<form id="
+<div class="model_choice">
+<form id="model_choice" method="post" action="/model">
 <label for="selectModel">Choose a model :</label>
-<select name="
-
-
+<select name="model_choice" class="selectModel" id="model_choice">
+<!--A for jinja loop to retrieve option buttons from the api
+and to keep them selected when a choice is made. -->
+{% for x in model_list %}
+{%if selected_model == x.model%}
+<option value="{{x.model}}" selected>{{x.name}}</option>
+{%else%}
+<option value="{{x.model}}">{{x.name}}</option>
+{%endif%}
+{%endfor%}
 </select>
 </form>
-<button form ="
+<button form ="model_choice" class='search_bn' type="submit" class="btn btn-primary btn-block btn-large" rows="1" cols="50">Select model</button>
 </div>
 
 <div>
 <table>
 <tr>
 <td>
-<form id = "
+<form id = "text_form" action="/predict" method="post" class="formulaire">
 <textarea id="text" name="text" placeholder="Enter your text here!" rows="15" cols="75">{{text}}</textarea>
 <input type="hidden" name="textarea_value" value="{{ text }}">
 </form>
@@ -71,9 +62,13 @@
 </table>
 </div>
 <div class="buttons">
-<!--
-
-<button
+<!--A if loop to disable Go and Reset button for the index page.-->
+{% if current_route == "/" %}
+<button>Please select a model</button>
+{% else %}
+<button form ="text_form" class='search_bn' type="submit" class="btn btn-primary btn-block btn-large" rows="1" cols="50">Go !</button>
+<button form ="text_form" type="button" value="Reset" onclick="customReset();">Reset</button>
+{% endif %}
 </div>
 
 <div class="copyright">
@@ -81,7 +76,7 @@
 <li>© Untitled. All rights reserved.</li>
 </ul>
 <ul>
-<li>
+<li>University project as part of the NLP (Natural Language Processing) Master's program</li>
 <li>Lingyun GAO -- Estelle SALMON -- Eve SAUVAGE</li>
 </ul>
 </div>
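The `{% for %}` / `{% if %}` block added to the template keeps the user's model choice selected after the form round-trips through `POST /model`. Its logic, transcribed to plain Python for illustration (the Jinja engine does the real rendering; `model_list` and `selected_model` come from the api.py context):

```python
model_list = [
    {"model": " --- ", "name": " --- "},
    {"model": "lstm", "name": "LSTM"},
    {"model": "fineTunedT5", "name": "Fine-tuned T5"},
]

def render_options(model_list, selected_model):
    """Mirror of the template loop: mark only the chosen model as selected."""
    lines = []
    for x in model_list:
        attr = " selected" if x["model"] == selected_model else ""
        lines.append(f'<option value="{x["model"]}"{attr}>{x["name"]}</option>')
    return "\n".join(lines)

print(render_options(model_list, "lstm"))
```

Without the `{%if selected_model == x.model%}` branch, the dropdown would snap back to the first option on every page render, hiding which model is actually active.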
templates/site_style/css/main.css
CHANGED
@@ -246,6 +246,7 @@ textarea {
 background: #3B353C;
 }
 
+
 /* Header */
 
 #header {
@@ -469,4 +470,5 @@ textarea {
 justify-content: center;
 margin-top: 20px;
 }
+
 }