lewtun committed
Commit
e6fac54
•
1 Parent(s): 1412968

Add AutoTrain backend API details

Files changed (2)
  1. .env.example +4 -0
  2. README.md +17 -3
.env.example ADDED
@@ -0,0 +1,4 @@
+ AUTOTRAIN_USERNAME=autoevaluator # The bot that authors evaluation jobs
+ HF_TOKEN=hf_xxx # An API token of the `autoevaluator` user
+ AUTOTRAIN_BACKEND_API=https://api-staging.autotrain.huggingface.co # The AutoTrain backend to send jobs to. Use https://api.autotrain.huggingface.co for prod
+ DATASETS_PREVIEW_API=https://datasets-server.huggingface.co # The API to grab dataset information from
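
For context (not part of this commit), a minimal sketch of how the Space might read these variables at startup, assuming `python-dotenv` is available; the variable names mirror `.env.example` above:

```
# Sketch only: load the variables defined in .env into the process environment.
# Assumes python-dotenv is installed alongside the app's other requirements.
import os

from dotenv import load_dotenv

load_dotenv()

AUTOTRAIN_USERNAME = os.getenv("AUTOTRAIN_USERNAME")        # bot that authors evaluation jobs
HF_TOKEN = os.getenv("HF_TOKEN")                            # API token of the `autoevaluator` user
AUTOTRAIN_BACKEND_API = os.getenv("AUTOTRAIN_BACKEND_API")  # AutoTrain backend to send jobs to
DATASETS_PREVIEW_API = os.getenv("DATASETS_PREVIEW_API")    # API to grab dataset information from
```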
README.md CHANGED
@@ -22,7 +22,7 @@ The table below shows which tasks are currently supported for evaluation in the
  | `multi_class_classification` | ✅ |
  | `multi_label_classification` | ❌ |
  | `entity_extraction` | ✅ |
- | `extractive_question_answering` | ❌ |
+ | `extractive_question_answering` | ✅ |
  | `translation` | ✅ |
  | `summarization` | ✅ |
  | `image_binary_classification` | ✅ |
@@ -30,14 +30,28 @@ The table below shows which tasks are currently supported for evaluation in the
 
  ## Installation
 
- To run the application, first clone this repository and install the dependencies as follows:
+ To run the application locally, first clone this repository and install the dependencies as follows:
 
  ```
  pip install -r requirements.txt
  ```
 
- Then spin up the application by running:
+ Next, copy the example file of environment variables:
+
+ ```
+ cp .env.example .env
+ ```
+
+ and set the `HF_TOKEN` variable to a valid API token from the `autoevaluator` user. Finally, spin up the application by running:
 
  ```
  streamlit run app.py
  ```
+
+ ## AutoTrain configuration details
+
+ Models are evaluated by AutoTrain, with the payload sent to the endpoint defined by the `AUTOTRAIN_BACKEND_API` environment variable. The current configuration for evaluation jobs running on Spaces is:
+
+ ```
+ AUTOTRAIN_BACKEND_API=https://api.autotrain.huggingface.co
+ ```
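
As a hedged illustration (not part of this commit), submitting an evaluation job to the configured backend could look roughly like the snippet below; the `/projects/create` path and the payload fields are assumptions for illustration, not the documented AutoTrain API:

```
# Sketch only: POST a hypothetical evaluation payload to the AutoTrain backend.
# The endpoint path and payload fields are illustrative placeholders.
import os

import requests

payload = {
    "username": os.getenv("AUTOTRAIN_USERNAME"),  # e.g. autoevaluator
    "task": "extractive_question_answering",      # one of the supported tasks
    "dataset": "squad",                           # hypothetical dataset id
}

response = requests.post(
    f"{os.getenv('AUTOTRAIN_BACKEND_API')}/projects/create",
    json=payload,
    headers={"Authorization": f"Bearer {os.getenv('HF_TOKEN')}"},
)
response.raise_for_status()
print(response.json())
```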