lvkaokao committed on
Commit
7b2b2c3
1 Parent(s): e904c00

Update README.md

Files changed (1)
  1. README.md +37 -0
README.md CHANGED
@@ -1,3 +1,40 @@
 ---
 license: other
 ---
+
+ ```bash
+ #!/bin/bash
+ # Apache v2 license
+ # Copyright (C) 2021 Intel Corporation
+ # SPDX-License-Identifier: Apache-2.0
+
+ # Teacher Preparation
+
+ # Notes:
+ # Auto mixed precision can be used by adding --fp16
+ # Distributed training can be used via the torch.distributed.launch module
+
+ TEACHER_PATH=./bert-base-uncased-teacher-preparation-pretrain
+ OUTPUT_DIR=$TEACHER_PATH
+ DATA_CACHE_DIR=/root/kaokao/Model-Compression-Research-Package/examples/transformers/language-modeling/wikipedia_processed_for_pretrain
+
+ python -m torch.distributed.launch \
+     --nproc_per_node=8 \
+     ../../examples/transformers/language-modeling/run_mlm.py \
+     --model_name_or_path bert-base-uncased \
+     --datasets_name_config wikipedia:20200501.en \
+     --data_process_type segment_pair_nsp \
+     --dataset_cache_dir $DATA_CACHE_DIR \
+     --do_train \
+     --learning_rate 5e-5 \
+     --max_steps 100000 \
+     --warmup_ratio 0.01 \
+     --weight_decay 0.01 \
+     --per_device_train_batch_size 8 \
+     --gradient_accumulation_steps 4 \
+     --logging_steps 10 \
+     --save_steps 5000 \
+     --save_total_limit 2 \
+     --output_dir $OUTPUT_DIR \
+     --run_name pofa-teacher-prepare-pretrain
+ ```
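The script's notes mention that auto mixed precision can be enabled by adding `--fp16`. A minimal sketch of that variant (the GPU count and output path here are placeholders, not from the original script; the other `run_mlm.py` arguments are assumed to carry over unchanged):

```shell
#!/bin/bash
# Sketch only: same launch as above, with --fp16 appended to enable
# automatic mixed precision. nproc_per_node and output_dir are
# illustrative values; adjust them for your machine.
python -m torch.distributed.launch \
    --nproc_per_node=2 \
    ../../examples/transformers/language-modeling/run_mlm.py \
    --model_name_or_path bert-base-uncased \
    --do_train \
    --fp16 \
    --output_dir ./bert-base-uncased-teacher-fp16
```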