Model description

Sampling-based watermark distilled Pythia 1.4B using the KGW $k=2, \gamma=0.25, \delta=2$ watermarking strategy from the paper On the Learnability of Watermarks for Language Models.
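Because the watermark is distilled into the weights, the checkpoint should behave as a standard causal LM at inference time. The snippet below is a minimal loading sketch, not an official usage example; the sampling settings, float16 dtype, and device_map choice are illustrative assumptions and are not specified by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Distilled model: generates KGW-watermarked text without any
# decode-time watermarking logic.
name = "cygu/pythia-1.4b-sampling-watermark-distill-kgw-k2-gamma0.25-delta2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype=torch.float16,  # assumption: half precision for inference
    device_map="auto",          # assumption: requires `accelerate`
)

prompt = "The history of watermarking"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Sampling decoding matches the "sampling-based" distillation setting;
# the specific max_new_tokens and top_p values are illustrative.
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```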

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
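These values map naturally onto `transformers.TrainingArguments`. The sketch below is an approximation for reference, not the authors' training script: `output_dir` is hypothetical, and whether `train_batch_size: 64` is per device or global in the multi-GPU setup is an assumption.

```python
from transformers import TrainingArguments

# A sketch reproducing the listed hyperparameters as TrainingArguments.
args = TrainingArguments(
    output_dir="pythia-1.4b-kgw-distill",  # hypothetical path
    learning_rate=1e-5,
    per_device_train_batch_size=64,  # assumption: card may report a global batch size
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=500,
    num_train_epochs=1.0,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```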

Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3