cbensimon (HF staff) committed
Commit
7cfee46
1 parent: cd3d828

Add README config

Files changed (1)
  1. README.md +13 -40
README.md CHANGED
@@ -1,3 +1,13 @@
 # AutoPrompt
 An automated method based on gradient-guided search to create prompts for a diverse set of NLP tasks. AutoPrompt demonstrates that masked language models (MLMs) have an innate ability to perform sentiment analysis, natural language inference, fact retrieval, and relation extraction. Check out our [website](https://ucinlp.github.io/autoprompt/) for the paper and more information.
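For readers skimming the diff, the "gradient-guided search" mentioned in the description works roughly as follows (per the AutoPrompt paper): the gradient of the label likelihood with respect to a trigger token's embedding is used to shortlist candidate replacement tokens, which are then evaluated exactly. A stdlib-only toy sketch of that candidate-scoring step; all sizes, names, and the random data are illustrative, not the repository's actual code:

```python
import random

random.seed(0)
V, d = 1000, 16
# Toy vocabulary embeddings and a toy gradient of the loss w.r.t. one
# trigger slot's embedding (in the real method this comes from backprop
# through the frozen MLM; the sizes here are made up).
embeddings = [[random.gauss(0, 1) for _ in range(d)] for _ in range(V)]
grad = [random.gauss(0, 1) for _ in range(d)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def top_candidates(embeddings, grad, k):
    """HotFlip-style first-order scoring: swapping in token w changes the
    loss by approximately dot(grad, e_w), so the most promising candidate
    replacements are the k tokens with the lowest score."""
    scores = [dot(e, grad) for e in embeddings]
    return sorted(range(len(scores)), key=scores.__getitem__)[:k]

candidates = top_candidates(embeddings, grad, k=100)  # cf. --num-cand 100
# Each candidate is then evaluated exactly on a batch and the best one
# kept, repeated over trigger positions for --iters rounds.
```

The shortlist-then-verify split is what makes the search tractable: the dot products cost one matrix-vector product over the vocabulary, while the exact forward passes are limited to `--num-cand` candidates.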
 
@@ -49,17 +59,7 @@ Depending on the language model (i.e. BERT or RoBERTa) you choose to generate pr
 
 ### Sentiment Analysis
 ```
- python -m autoprompt.create_trigger \
- --train glue_data/SST-2/train.tsv \
- --dev glue_data/SST-2/dev.tsv \
- --template '<s> {sentence} [T] [T] [T] [P] . </s>' \
- --label-map '{"0": ["Ġworse", "Ġincompetence", "ĠWorse", "Ġblamed", "Ġsucked"], "1": ["ĠCris", "Ġmarvelous", "Ġphilanthrop", "Ġvisionary", "Ġwonderful"]}' \
- --num-cand 100 \
- --accumulation-steps 30 \
- --bsz 24 \
- --eval-size 48 \
- --iters 180 \
- --model-name roberta-large
 ```
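The `--template` string in the command above is filled per example: `{sentence}` takes the input text, each `[T]` slot holds one learned trigger token, and `[P]` marks the mask position whose MLM distribution is scored against the `--label-map` tokens. A small stdlib sketch of that instantiation; the trigger words below are made up (the real ones are what the search finds):

```python
template = '<s> {sentence} [T] [T] [T] [P] . </s>'
label_map = {"0": ["Ġworse"], "1": ["Ġmarvelous"]}  # truncated from the flag above

def fill(template, sentence, triggers, mask_token='<mask>'):
    """Substitute the input sentence, one trigger per [T] slot, and the
    model's mask token at [P] (RoBERTa's mask token is '<mask>')."""
    text = template.format(sentence=sentence)
    for t in triggers:
        text = text.replace('[T]', t, 1)
    return text.replace('[P]', mask_token)

prompt = fill(template, 'a gorgeous film', ['atmosphere', 'alot', 'dialogue'])
# -> '<s> a gorgeous film atmosphere alot dialogue <mask> . </s>'
```

The MLM's probability of the `"1"` tokens versus the `"0"` tokens at the mask position then decides the predicted sentiment label.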
 
 ### Natural Language Inference
@@ -69,39 +69,12 @@ python -m autoprompt.create_trigger --train SICK_TRAIN_ALL_S.tsv --dev SICK_DE
 
 ### Fact Retrieval
 ```
- python -m autoprompt.create_trigger \
- --train $path/train.jsonl \
- --dev $path/dev.jsonl \
- --template '<s> {sub_label} [T] [T] [T] [P] . </s>' \
- --num-cand 10 \
- --accumulation-steps 1 \
- --model-name roberta-large \
- --bsz 56 \
- --eval-size 56 \
- --iters 1000 \
- --label-field 'obj_label' \
- --tokenize-labels \
- --filter \
- --print-lama
 ```
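The `--train`/`--dev` files for fact retrieval are JSON-lines. Judging by the flags and the template, each record needs at least a `sub_label` (filled into the template) and an `obj_label` (selected via `--label-field`); the real LAMA-derived files may carry additional fields, so treat this sketch of reading one record as an assumption, not the full schema:

```python
import json

# One illustrative record (values invented for this example).
record = {"sub_label": "Barack Obama", "obj_label": "Hawaii"}

line = json.dumps(record)        # one JSON object per line in train.jsonl
parsed = json.loads(line)

template = '<s> {sub_label} [T] [T] [T] [P] . </s>'
prompt = template.format(sub_label=parsed['sub_label'])
label = parsed['obj_label']      # the gold token that --label-field selects
```

With `--tokenize-labels` and `--filter` in the command above, labels are additionally tokenized and filtered before scoring, and `--print-lama` emits prompts in LAMA's evaluation format.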
 
 ### Relation Extraction
 ```
- python -m autoprompt.create_trigger \
- --train $path/train.jsonl \
- --dev $path/dev.jsonl \
- --template '[CLS] {context} [SEP] {sub_label} [T] [T] [T] [P] . [SEP]' \
- --num-cand 10 \
- --accumulation-steps 1 \
- --model-name bert-base-cased \
- --bsz 32 \
- --eval-size 32 \
- --iters 500 \
- --label-field 'obj_label' \
- --tokenize-labels \
- --filter \
- --print-lama \
- --use-ctx
 ```
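The relation-extraction template differs from the fact-retrieval one in two ways: it uses BERT-style special tokens (`[CLS]`/`[SEP]`, with `[MASK]` at the `[P]` slot), and with `--use-ctx` it prepends an evidence `{context}` passage. A hedged sketch of filling that template; the example record below is invented, with field names taken from the template and flags:

```python
# Hypothetical example record; field names follow the template and flags above.
example = {
    "context": "Obama was born in Honolulu .",
    "sub_label": "Obama",
    "obj_label": "Honolulu",
}

template = '[CLS] {context} [SEP] {sub_label} [T] [T] [T] [P] . [SEP]'
# Extra keys (obj_label) are ignored by str.format; [P] becomes BERT's mask.
prompt = template.format(**example).replace('[P]', '[MASK]')
```

The model then has to recover `obj_label` at the `[MASK]` position, reading the answer out of the provided context rather than out of its parameters.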
 
 ## Label Token Selection
+ ---
+ title: Autoprompt
+ emoji: 🏢
+ colorFrom: green
+ colorTo: indigo
+ sdk: streamlit
+ app_file: app.py
+ pinned: false
+ ---
+
 # AutoPrompt
 An automated method based on gradient-guided search to create prompts for a diverse set of NLP tasks. AutoPrompt demonstrates that masked language models (MLMs) have an innate ability to perform sentiment analysis, natural language inference, fact retrieval, and relation extraction. Check out our [website](https://ucinlp.github.io/autoprompt/) for the paper and more information.
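The `---`-delimited block added at the top of the README is Hugging Face Spaces configuration: the Hub reads this YAML front matter to pick the SDK (`streamlit` here) and the entry point (`app.py`). A minimal stdlib-only sketch of extracting those key/value pairs, assuming only flat `key: value` lines (which is all this block uses), with the README text inlined for the example:

```python
README = """\
---
title: Autoprompt
emoji: 🏢
colorFrom: green
colorTo: indigo
sdk: streamlit
app_file: app.py
pinned: false
---

# AutoPrompt
"""

def read_front_matter(text):
    """Parse a leading --- ... --- block as flat key: value pairs."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != '---':
        return {}
    config = {}
    for line in lines[1:]:
        if line.strip() == '---':
            break                         # end of the front-matter block
        key, _, value = line.partition(':')
        config[key.strip()] = value.strip()
    return config

config = read_front_matter(README)
# config['sdk'] -> 'streamlit', config['app_file'] -> 'app.py'
```

A real Space uses a full YAML parser; this sketch only illustrates what the commit's added block encodes.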
 
 
 ### Sentiment Analysis
 ```
+ python -m autoprompt.create_trigger \\r\n --train glue_data/SST-2/train.tsv \\r\n --dev glue_data/SST-2/dev.tsv \\r\n --template '<s> {sentence} [T] [T] [T] [P] . </s>' \\r\n --label-map '{"0": ["Ġworse", "Ġincompetence", "ĠWorse", "Ġblamed", "Ġsucked"], "1": ["ĠCris", "Ġmarvelous", "Ġphilanthrop", "Ġvisionary", "Ġwonderful"]}' \\r\n --num-cand 100 \\r\n --accumulation-steps 30 \\r\n --bsz 24 \\r\n --eval-size 48 \\r\n --iters 180 \\r\n --model-name roberta-large
 ```
 
 ### Natural Language Inference
 
 ### Fact Retrieval
 ```
+ python -m autoprompt.create_trigger \\r\n --train $path/train.jsonl \\r\n --dev $path/dev.jsonl \\r\n --template '<s> {sub_label} [T] [T] [T] [P] . </s>' \\r\n --num-cand 10 \\r\n --accumulation-steps 1 \\r\n --model-name roberta-large \\r\n --bsz 56 \\r\n --eval-size 56 \\r\n --iters 1000 \\r\n --label-field 'obj_label' \\r\n --tokenize-labels \\r\n --filter \\r\n --print-lama
 ```
 
 ### Relation Extraction
 ```
+ python -m autoprompt.create_trigger \\r\n --train $path/train.jsonl \\r\n --dev $path/dev.jsonl \\r\n --template '[CLS] {context} [SEP] {sub_label} [T] [T] [T] [P] . [SEP]' \\r\n --num-cand 10 \\r\n --accumulation-steps 1 \\r\n --model-name bert-base-cased \\r\n --bsz 32 \\r\n --eval-size 32 \\r\n --iters 500 \\r\n --label-field 'obj_label' \\r\n --tokenize-labels \\r\n --filter \\r\n --print-lama \\r\n --use-ctx
 ```
 
 ## Label Token Selection