loganrobbins committed · verified
Commit 377ee71 · 1 Parent(s): 4600161

Update model card: add GCS + WandB links

Files changed (1)
  1. README.md +13 -0
README.md CHANGED
@@ -35,6 +35,19 @@ Autoregressive decoding in Large Language Models (LLMs) is inherently sequential
  2. Download the base trunk model (`openai/gpt-oss-20b`) via Hugging Face (or provide a local path).
  3. Download the adapter checkpoint from this repo and point `configs/gpt_oss_transfer_production.yaml` (or CLI flags) at it.
 
+ ## Artifacts (public GCS)
+
+ The complete training artifacts and dataset archives are mirrored publicly in GCS:
+
+ - **Bucket root:** `https://storage.googleapis.com/parallel-decoder-transformer/`
+ - **Upload manifest (full listing):** `https://storage.googleapis.com/parallel-decoder-transformer/UPLOAD_MANIFEST.md`
+ - **Training checkpoints:** `https://storage.googleapis.com/parallel-decoder-transformer/checkpoints/gpt-oss-8xH100-50000steps/`
+ - **Dataset archives:** `https://storage.googleapis.com/parallel-decoder-transformer/data/archives/`
+
+ ## Training logs (Weights & Biases)
+
+ - **WandB run:** `https://wandb.ai/ljrweb-self/parallel-decoder-transformer/runs/fmuea63a`
+
  ## Citation
 
  ```bibtex
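
The download steps in the diff above map directly onto `huggingface_hub` calls. A minimal sketch, assuming the adapter checkpoint is hosted in this model repo under the id `loganrobbins/parallel-decoder-transformer` (that repo id is an assumption; only `openai/gpt-oss-20b` and the config file name come from the card):

```python
# Sketch of steps 2-3: fetch the base trunk and the adapter checkpoint,
# then point the config at the resulting local paths.
from huggingface_hub import snapshot_download

# Step 2: base trunk model (or substitute a local path you already have).
trunk_path = snapshot_download("openai/gpt-oss-20b")

# Step 3: adapter checkpoint from this repo (hypothetical repo id).
adapter_path = snapshot_download("loganrobbins/parallel-decoder-transformer")

print(f"trunk:   {trunk_path}")
print(f"adapter: {adapter_path}")
# Edit configs/gpt_oss_transfer_production.yaml (or pass the CLI flags it
# mentions) so the trunk and adapter paths point at the directories above.
```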
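The GCS mirror is a public bucket, so every artifact is a plain HTTPS object and no GCS SDK or credentials are needed. A minimal sketch for fetching the upload manifest (URLs taken verbatim from the card; exact artifact file names come from the manifest itself):

```python
# Sketch: pull the public upload manifest over HTTPS, then fetch any
# artifact it lists the same way.
import urllib.request

BUCKET = "https://storage.googleapis.com/parallel-decoder-transformer"

# Full listing of everything mirrored to the bucket.
with urllib.request.urlopen(f"{BUCKET}/UPLOAD_MANIFEST.md") as resp:
    manifest = resp.read().decode("utf-8")
print(manifest[:500])

# Individual files under checkpoints/gpt-oss-8xH100-50000steps/ or
# data/archives/ can be downloaded with urllib.request.urlretrieve once
# their names are read from the manifest.
```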