alexpap committed
Commit c7cc7c1
Parent: d05e2cb

Update app.py

Files changed (1)
  1. app.py +3 -3
app.py CHANGED
@@ -135,8 +135,9 @@ elif menu == "Training":
 
     st.markdown('''
     To train a QA-NLU model on the data we created, we use the `run_squad.py` script from [huggingface](https://github.com/huggingface/transformers/blob/master/examples/legacy/question-answering/run_squad.py) and a SQuAD-trained QA model as our base. As an example, we can use the `deepset/roberta-base-squad2` model from [here](https://huggingface.co/deepset/roberta-base-squad2) (assuming 8 GPUs are present):
-
-    ````
+    ''')
+
+    st.code('''
     mkdir models
 
     python -m torch.distributed.launch --nproc_per_node=8 run_squad.py \\
@@ -158,7 +159,6 @@ elif menu == "Training":
     --save_steps 100000 \\
     --gradient_accumulation_steps 8 \\
     --seed $RANDOM
-    ````
     ''')
 
 elif menu == "Evaluation":
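The change itself is a small rendering fix: the shell command, previously embedded inside the `st.markdown` string behind quadruple backticks, now lives in a dedicated `st.code` call. A minimal sketch of the resulting pattern (the prose and the abbreviated command below are illustrative placeholders, not the app's full text; the `language="bash"` hint is an assumption):

```python
import streamlit as st

# Prose is rendered as markdown...
st.markdown(
    "To train a QA-NLU model, run `run_squad.py` on the converted data:"
)

# ...while the command goes through st.code, which draws a monospaced
# block and sidesteps the nested-backtick fences that the old
# st.markdown string needed. The command here is abbreviated, not the
# app's full training invocation.
st.code(
    "mkdir models\n"
    "python -m torch.distributed.launch --nproc_per_node=8 run_squad.py ...",
    language="bash",
)
```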