WCNegentropy committed
Commit 14bd0dd · verified · 1 Parent(s): 37eb06d

Remove BitTransformerLM/README.md - cleanup for OS launch

Files changed (1)
  1. BitTransformerLM/README.md +0 -177
BitTransformerLM/README.md DELETED
@@ -1,177 +0,0 @@
# BitTransformerLM

**Project Status:** Pre-release (v1 candidate)

BitTransformerLM is a bit-centric transformer language model built entirely in PyTorch. The project began as a small prototype but has matured into a near-production system capable of modeling raw binary streams with sophisticated safety telemetry and automated scale-up tooling. This repository now serves as the canonical implementation under WCNegentropy.

## Historical Background
- **Early Experiments** – Initial prototypes explored mapping text to parity-protected bits and training a minimal transformer on random data.
- **Telemetry & Safety** – Added negentropy, LZ complexity, and symbiosis scoring to measure information flow and gate unsafe outputs.
- **Progressive Scaling** – Introduced reversible layers and automatic depth/width expansion for efficient curriculum training. The schedule now triggers expansions only when validation loss plateaus and decays the learning rate by √2 after each growth, with a 100-step warm-up.
- **Compression Support** – Integrated run-length encoding and packed bit I/O with optional multi-task training on compressed sequences.
- **Context Extension** – Implemented chunked attention and sliding-window inference for long sequences with optional overlapping windows.
- **Attention Logging Toggle** – `full_attn_logging=False` skips reconstructing full `T×T` attention maps during chunked attention, cutting memory use for very long sequences.
- **Diffusion LM Mode** – Enable bidirectional denoising by setting `causal=False` or toggling **Diffusion LM** in the dashboard. Chunked attention is automatically disabled in this mode and restored afterward.
- **Dashboard & MCP Server** – Built a lightweight web UI backed by a management server for real-time training, inference, and model collapse. New `/metrics` and `/model_config` endpoints surface live telemetry and hyperparameters, and `/save_checkpoint` and `/download_checkpoint` enable Hugging Face weight sync. The insecure `/exec` route has been removed.
- **Phase 1 Optimizations** – Configurable batch sizes with aligned OneCycle scheduling, gradient accumulation, mixed precision, memory-mapped dataset streaming, scheduled compression ramps, selective `torch.compile`, and an EMA-smoothed safety gate with burn-in to cut false positives.

The codebase has undergone multiple stress tests and synthetic benchmarks (see `tests/TEST_RESULTS.md`) and now approaches a stable release.

## Quick Start
Install dependencies using the CPU wheel of PyTorch (default):
```bash
pip install --extra-index-url https://download.pytorch.org/whl/cpu -r requirements.txt
```
When GPU acceleration is toggled in the dashboard, the application automatically installs the CUDA-enabled wheel:
```bash
pip install --extra-index-url https://download.pytorch.org/whl/cu118 torch==2.7.1+cu118
```
Run the example script:
```bash
python example.py
```

**Adaptive scaling:** the legacy `progressive_scaleup.py` script is retained for reference but has been superseded by `integration_schedule.py`, which offers a more flexible scaling workflow.

Run the unified workflow:
```bash
python unified_workflow.py --dashboard
# disable gradient checkpointing for faster but memory-hungry runs
python unified_workflow.py --no-checkpoint
# use standard (non-reversible) transformer blocks
python unified_workflow.py --no-reversible
# enable 4-bit quantization-aware training
python unified_workflow.py --qat
```

For faster CPU execution, BitTransformerLM exposes a `cpu_autocast()` helper that enables bfloat16 mixed precision. Models created with `use_autocast=True` apply this automatically, or you can wrap individual forward passes:

```python
from bit_transformer.torch_utils import cpu_autocast

# model: a BitTransformerLM instance; bits: a tensor of 0/1 inputs
with cpu_autocast():
    logits, telemetry = model(bits)
```

Reduce memory use when chunked attention is active by disabling full attention logging:

```python
from bit_transformer import BitTransformerLM  # import path assumed; adjust to your install

model = BitTransformerLM(chunk_size=128, full_attn_logging=False)
```

Enable Diffusion LM training and sampling:
```bash
python unified_workflow.py --diffusion --diffusion-steps 8 --dataset-size 32
# choose noise schedule: linear, cosine, exp
python unified_workflow.py --diffusion --noise-schedule cosine --diffusion-steps 16 --dataset-size 32
# linearly decay noise over epochs
python unified_workflow.py --diffusion --diffusion-curriculum --dataset-size 32
```
Higher `--diffusion-steps` values (8–16) improve sample quality at the cost of compute. When using the dashboard, enable the **Diffusion LM** toggle to run the model without causal masking or chunked attention. Generated samples automatically fix parity bits so they can be decoded back to text.
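
For intuition, here is a minimal sketch of how the three noise schedules might map a denoising step to a noise level. The function name, normalization, and decay constants are illustrative assumptions, not the repository's implementation:

```python
import math

def noise_level(step: int, total_steps: int, schedule: str = "linear") -> float:
    """Hypothetical noise level (fraction of bits corrupted) at a given step."""
    t = step / max(total_steps - 1, 1)  # normalized progress in [0, 1]
    if schedule == "linear":
        return 1.0 - t                    # straight-line decay to zero noise
    if schedule == "cosine":
        return math.cos(t * math.pi / 2)  # gentle start, steep finish
    if schedule == "exp":
        return math.exp(-4.0 * t)         # aggressive early decay
    raise ValueError(f"unknown schedule: {schedule}")
```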

To resume training across machines using Hugging Face storage:
```bash
python unified_workflow.py --hf-repo your-username/bittransformerlm --hf-token $HF_TOKEN
```
The dashboard exposes matching controls under **Hugging Face Checkpoints**. Provide a repository ID and optional token (falling back to the `HF_TOKEN` environment variable) and click **Upload weights** or **Download weights** to sync the model.

Run the unit tests:
```bash
pytest -q
```

### Mode management

During training, ensure the model is in training mode with dropout enabled:

```python
from bit_transformer.utils import set_dropout

model.train()
set_dropout(model, 0.1)
```

Before running tests, performing inference, or committing weights to the repository, switch the model to evaluation mode and disable dropout:

```python
model.eval()
set_dropout(model, 0.0)
```

This prevents CI failures caused by accidentally committing weights that still have dropout active.
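
If you flip modes often, a small wrapper can keep the two calls in sync. This is a hypothetical convenience helper built only on the `set_dropout` utility shown above, not part of the package:

```python
from contextlib import contextmanager

from bit_transformer.utils import set_dropout

@contextmanager
def eval_no_dropout(model, train_dropout: float = 0.1):
    """Temporarily run in eval mode with dropout off, then restore training state."""
    model.eval()
    set_dropout(model, 0.0)
    try:
        yield model
    finally:
        model.train()
        set_dropout(model, train_dropout)
```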

## Telemetry Metrics Explained
BitTransformerLM reports three bounded metrics in `[0, 1]` during training and inference:

- **Negentropy (K)** – departure from random noise; `1` denotes perfectly ordered bits while `0` is uniform randomness.
- **LZ Complexity (C)** – differentiable proxy for Lempel–Ziv compressibility; low values imply repetitive patterns and high values frequent transitions.
- **Symbiosis (S)** – agreement between model predictions and a reference distribution via KL divergence; scores near `1` show strong alignment.

An Adaptive Computation Time (ACT) mechanism lets layers halt early once confidence exceeds a threshold. Halt probabilities are exported as `halt_probs` in telemetry for inspection.

These metrics are logged alongside losses and can trigger safety gates when thresholds are violated. The dashboard monitors drift and emits warnings when recent values deviate beyond a configurable threshold.
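
As a rough illustration of the negentropy bound (the repository computes these metrics internally; this standalone proxy assumes negentropy is one minus the normalized Shannon entropy of the bit distribution):

```python
import torch

def negentropy_proxy(bits: torch.Tensor) -> float:
    """Illustrative proxy: 1 - normalized Shannon entropy of the bit stream.

    Returns ~1.0 for perfectly ordered streams (all zeros or all ones)
    and 0.0 for a uniform 50/50 mix, matching the bounds described above.
    """
    p = bits.float().mean().clamp(1e-6, 1 - 1e-6)  # empirical P(bit == 1)
    entropy = -(p * torch.log2(p) + (1 - p) * torch.log2(1 - p))  # at most 1 bit
    return float(1.0 - entropy)
```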

## Core Features
- **Bit-Native Modeling** – Works directly on 0/1 inputs with positional encodings and parity-protected text helpers.
- **Telemetry Synthesizer** – Clusters activation summaries to surface coherent subspaces and detect drift.
- **Submodel Distillation** – `TelemetrySynthesizer` selects representative sequences for `collapse_submodel`, which deepens and widens once (`width_scale` = 1.5) if telemetry floors aren't met; `save_distilled_model` places a `metrics.json` summary beside the distilled weights.
- **Safety Gate** – `hil_safe_inference` enforces minimum complexity and symbiosis scores at runtime with EMA smoothing and a configurable burn-in period (see the sketch after this list).
- **Quantization** – CPU inference can be quantized to int8 or trained with 4-bit QAT using the `--qat` flag.
- **Distributed Training** – FSDP and pipeline helpers allow multi-GPU scaling when hardware is available.
- **Interactive Dashboard** – Live control of training, scaling, and compression with optional GPU acceleration. The dashboard now exposes reversible layers, gradient checkpointing, ACT thresholds, λ floors, 4-bit QAT and Diffusion LM toggles, real-time telemetry charts powered by Chart.js, and Hugging Face checkpoint upload/download controls with `HF_TOKEN` fallback. Settings persist via `localStorage`.
- **CI/CD Pipeline** – GitHub Actions install dependencies, run the tests, and build distribution artifacts on every push.
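
A minimal sketch of the EMA-plus-burn-in idea behind the safety gate; the parameter names (`floor`, `alpha`, `burn_in`) are illustrative assumptions, and the real `hil_safe_inference` API may differ:

```python
def ema_gate(values, floor: float = 0.3, alpha: float = 0.1, burn_in: int = 10):
    """Yield (step, ema) whenever the smoothed metric dips below the floor.

    Smoothing plus a burn-in period keeps one-off dips early in a run
    from tripping the gate, which is how false positives are reduced.
    """
    ema = None
    for step, value in enumerate(values):
        ema = value if ema is None else alpha * value + (1 - alpha) * ema
        if step >= burn_in and ema < floor:
            yield step, ema
```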

## Development Workflow
1. Start the MCP server:
   ```bash
   python mcp_server.py
   ```
2. Launch the dashboard in another terminal:
   ```bash
   MCP_SERVER_ADDR=http://127.0.0.1:7000 python -m bit_transformer.dashboard_app
   ```
3. Submit training batches, scale the model, and monitor telemetry from the web UI. The dashboard's appearance is controlled by `bit_transformer/static/style.css`.

A `watcher.py` script can automatically restart the server and run tests when files change during local development.
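
Once both processes are up, the telemetry endpoints listed above can be polled directly. This assumes they respond to a plain GET; the response shape is not documented here:

```bash
curl http://127.0.0.1:7000/metrics        # live telemetry snapshot
curl http://127.0.0.1:7000/model_config   # current hyperparameters
```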

## Container Deployment
A `Dockerfile` and `start.sh` script build a minimal VM image that launches both the MCP server and dashboard.

```bash
docker build -t bittransformerlm .
docker run -p 5000:5000 -p 7000:7000 bittransformerlm
```

By default the container installs the CPU-only PyTorch wheel. Set the build argument `TORCH_CUDA=cu118` to preinstall the GPU version. The container sets `MCP_SERVER_ADDR=http://127.0.0.1:7000` and exposes the dashboard on port 5000.
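
For example, to build the GPU variant using standard Docker build-argument syntax with the argument named above:

```bash
docker build --build-arg TORCH_CUDA=cu118 -t bittransformerlm .
```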

## Roadmap
- Finalize S attribution tools and metric drift detection.
- Publish an initial release package and rename the repository to **BitTransformerLM**.
- Continue benchmarking on real datasets and expanding context window capabilities.

## Licensing

This project is released under a combination of licenses and agreements to provide a clear framework for use, distribution, and contribution. All licensing documents can be found in the `LICENSE/` directory.

The key documents are:

- `LICENSE.txt`: The primary open-source license for the software, AGPLv3.
- `COMMERCIAL_LICENSE.txt`: Terms for commercial use of the software.
- `DISCLAIMER.txt`: Important legal disclaimers.
- `ALIGNMENT_AND_TRANSPARENCY.txt`: Our commitment to alignment and transparency.
- `TRADEMARK_POLICY.txt`: Guidelines for using the project's trademarks.
- `CONTRIBUTOR_LICENSE_AGREEMENT.txt`: The agreement for all contributors to sign.

Please review these documents carefully before using or contributing to the project.