Update README.md
README.md CHANGED
@@ -42,6 +42,7 @@ This model is a fine-tuned variant of the EleutherAI/pythia-1b causal language m

- **Use Cases:** Integration into chat applications, developer portals, knowledge retrieval systems, and automated support bots.

+For a hands-on guide on fine-tuning and using this model with **InferenceVision**, check out the [interactive notebook](https://github.com/doguilmak/InferenceVision/blob/main/usage/InferenveVision_LLM_QA.ipynb).

**Out-of-Scope:**

@@ -104,15 +105,6 @@ After 16 epochs, the training process yielded the following key outcomes:
- **Training Throughput:** 3.78 samples/sec, 0.47 steps/sec
- **Total FLOPs:** 2.72×10¹⁶

-| Metric             | Value      |
-|--------------------|------------|
-| Final Train Loss   | 0.03725    |
-| Eval Loss (best)   | *See logs* |
-| Training Steps     | 1,216      |
-| Training Time      | 2,572.28 s |
-| Samples/sec        | 3.78       |
-| Steps/sec          | 0.47       |
-
## Limitations & Biases

- Although highly accurate on InferenceVision topics, the model may generate plausible but incorrect or outdated information if presented with out-of-distribution queries.
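As a quick sanity check on the numbers in this hunk, the retained throughput bullets and the removed table are consistent with each other up to rounding. The sketch below only rearranges the reported values; the roughly 8 samples per optimizer step is an inference from the rounded throughput figures, not a number stated in the model card.

```python
# Rough cross-check of the reported training metrics (values copied from the diff above).
steps = 1216            # "Training Steps"
train_time_s = 2572.28  # "Training Time"
samples_per_s = 3.78    # "Samples/sec"
steps_per_s = 0.47      # "Steps/sec"
total_flops = 2.72e16   # "Total FLOPs"

# steps/sec * training time should roughly reproduce the reported step count.
print(steps_per_s * train_time_s)   # ~1209, close to the 1,216 reported

# samples/sec divided by steps/sec suggests the effective per-step batch size.
print(samples_per_s / steps_per_s)  # ~8.0 samples per step (inferred, not stated in the card)

# Average compute per step, purely as arithmetic on the reported totals.
print(total_flops / steps)          # ~2.2e13 FLOPs per step
```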
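The use-case bullet and the notebook link added in the first hunk point at integrating the model into chat and support tooling. Below is a minimal, hypothetical sketch of such an integration with the Hugging Face `transformers` library; the repository id and the prompt template are placeholders and assumptions rather than details taken from this model card, and the linked notebook remains the authoritative guide.

```python
# Hypothetical usage sketch. The model id and prompt template are placeholders;
# consult the linked InferenceVision notebook for the actual fine-tuning and
# inference setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "<your-namespace>/pythia-1b-inferencevision-qa"  # placeholder, not the real repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Assumed question-answer prompt format; the real template may differ.
prompt = "Question: What is InferenceVision?\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=False,                      # greedy decoding keeps answers deterministic
    pad_token_id=tokenizer.eos_token_id,
)

# Strip the prompt tokens and print only the generated answer.
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(answer)
```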