Model description

Logit-based watermark-distilled Llama 2 7B using the KGW (k=1, γ=0.25, δ=1) watermarking strategy from the paper On the Learnability of Watermarks for Language Models.
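
In the KGW scheme, k=1 means each step's green list is seeded by hashing the single preceding token, γ=0.25 is the fraction of the vocabulary placed on the green list, and δ=1 is the logit bias applied to green-list tokens by the decoding-based watermark that the student model was distilled to imitate. Because the watermark is baked into the distilled weights, no special decoding logic is needed at inference time. A minimal usage sketch, assuming the standard transformers API (the prompt and sampling settings below are illustrative):

```python
# Minimal usage sketch; the watermark is distilled into the weights,
# so generation is ordinary sampling with no watermarking logits processor.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "cygu/llama-2-7b-logit-watermark-distill-kgw-k1-gamma0.25-delta1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("The meaning of watermarking is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```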

Training hyperparameters

The following hyperparameters were used during training (a hedged TrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • total_train_batch_size: 64
  • total_eval_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
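
As a sketch of how these settings map onto transformers.TrainingArguments; the original training script may have differed, and the output_dir and precision settings are assumptions not stated in this card:

```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-2-7b-logit-watermark-distill",  # hypothetical path
    learning_rate=1e-5,
    per_device_train_batch_size=16,  # train_batch_size; x4 GPUs = 64 total
    per_device_eval_batch_size=8,    # eval_batch_size; x4 GPUs = 32 total
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=500,
    max_steps=5000,                  # training_steps
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```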

Framework versions

  • Transformers 4.29.2
  • PyTorch 2.0.1+cu117
  • Datasets 2.13.1
  • Tokenizers 0.13.3