Update README.md
README.md (changed)
## System-level view (60 seconds)

This is **not** just a GPT-2 model.

It is a small, self-contained **reasoning system** with:

- **Modules**: task-specific lenses (Analysis Note, Strategy Memo, Document Explainer, etc.).
- **Checkers**: second-pass reviewers that audit a module’s output.
- **Contracts**: every module must answer in fixed sections (e.g. CONTEXT / OPTIONS / RISKS / NEXT_STEPS).
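The contract in the last bullet can be checked mechanically. A minimal sketch of such a checker, assuming the section names from the example above (the function name and implementation are illustrative, not the card's actual code):

```python
import re

# Section headers required by the contract (taken from the example above).
REQUIRED_SECTIONS = ["CONTEXT", "OPTIONS", "RISKS", "NEXT_STEPS"]

def check_contract(answer: str, sections=REQUIRED_SECTIONS):
    """Return (ok, missing): ok is True iff every required section header
    appears at the start of some line in the module's answer."""
    missing = [s for s in sections
               if not re.search(rf"^{s}\b", answer, re.MULTILINE)]
    return (len(missing) == 0, missing)
```

A checker module can then pass or reject an output based on `missing`, before any deeper second-pass review of the content itself.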
Under the hood it still uses next-token prediction, but the **system behaviour** is:

> free-form task → choose lens (module) → generate structured output → optional checker review.
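The loop above can be sketched with the engine stubbed out as a plain callable. Module names, prompt templates, and the keyword router here are illustrative assumptions, not the card's actual wiring:

```python
# Prompt templates per module (illustrative; the card's real lenses may differ).
MODULES = {
    "analysis_note": (
        "Write an Analysis Note with sections "
        "CONTEXT / OPTIONS / RISKS / NEXT_STEPS.\nTask: {task}"
    ),
    "strategy_memo": (
        "Write a Strategy Memo with sections "
        "CONTEXT / OPTIONS / RISKS / NEXT_STEPS.\nTask: {task}"
    ),
}

def choose_lens(task: str) -> str:
    """Toy router: pick a module by keyword; a real system could ask the model."""
    return "strategy_memo" if "strategy" in task.lower() else "analysis_note"

def run(task, engine, checker=None):
    """free-form task -> choose lens -> generate -> optional checker review."""
    module = choose_lens(task)
    output = engine(MODULES[module].format(task=task))
    review = checker(output) if checker else None
    return module, output, review
```

In the real system, `engine` would be the GPT-2 generate call and `checker` a second model pass; here both can be any callable, which is what makes the engine swappable.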
You can swap the engine (`gpt2`) for any stronger model. The **architecture stays the same**.
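Since the card uses the `transformers` library, the swap can be a one-line change of the model id, assuming a standard text-generation pipeline (this wrapper is a sketch, not the card's actual loading code):

```python
ENGINE = "gpt2"  # swap this id for any stronger causal LM, e.g. "gpt2-large"

def load_engine(model_id: str = ENGINE):
    """Build a text-generation callable for the chosen engine.

    The import is deferred so the sketch has no hard dependency until used.
    """
    from transformers import pipeline
    return pipeline("text-generation", model=model_id)
```

The returned callable plugs straight into the system loop as the `engine`; modules, checkers, and contracts are untouched by the swap.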
---
library_name: transformers
license: mit