baichuan-vicuna-7b / zero3_bf16_config.yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
  gradient_accumulation_steps: 8
  gradient_clipping: 1.0
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: true
  zero3_save_16bit_model: false
  zero_stage: 3
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
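
This Accelerate config launches 8 processes on a single machine with DeepSpeed ZeRO Stage 3, bf16 mixed precision, gradient accumulation over 8 steps, and CPU offload of both optimizer state and parameters. Below is a minimal sketch of a training script the config could drive; the script name train.py and the toy model/data are hypothetical stand-ins (not part of this repository), and it would be started with: accelerate launch --config_file zero3_bf16_config.yaml train.py

import torch
from accelerate import Accelerator
from torch.utils.data import DataLoader, TensorDataset


def main():
    # Accelerator() picks up the DEEPSPEED distributed_type, bf16 mixed precision,
    # and the deepspeed_config block from the config file passed to `accelerate launch`.
    accelerator = Accelerator()

    # Toy model and data, standing in for the actual LLaMA-architecture model and dataset.
    model = torch.nn.Linear(128, 2)
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    dataset = TensorDataset(torch.randn(512, 128), torch.randint(0, 2, (512,)))
    dataloader = DataLoader(dataset, batch_size=8)

    # prepare() wraps everything in the DeepSpeed engine: ZeRO-3 sharding and the
    # configured CPU offload of parameters/optimizer state happen here.
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

    model.train()
    for inputs, labels in dataloader:
        outputs = model(inputs)
        loss = torch.nn.functional.cross_entropy(outputs, labels)
        accelerator.backward(loss)  # use this instead of loss.backward() under DeepSpeed
        optimizer.step()
        optimizer.zero_grad()


if __name__ == "__main__":
    main()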