Update README.md
README.md CHANGED
@@ -7,11 +7,15 @@ tags:
 - mlx
 - mlx
 base_model: mlx-community/SmolLM-1.7B-Instruct-8bit
+datasets:
+- dattaraj/pc-insurance-cost-estimator
 ---
 
 # dattaraj/smol-lora-insurance-estimates
 
 The Model [dattaraj/smol-lora-insurance-estimates](https://huggingface.co/dattaraj/smol-lora-insurance-estimates) was converted to MLX format from [mlx-community/SmolLM-1.7B-Instruct-8bit](https://huggingface.co/mlx-community/SmolLM-1.7B-Instruct-8bit) using mlx-lm version **0.19.1**.
+This is a test to demonstrate the power of small language models. We take a SmolLM 1.7B model and fine-tune it on the insurance estimation dataset available at https://huggingface.co/datasets/dattaraj/pc-insurance-cost-estimator.
+The fine-tuned language model is now an expert at taking a text description of damage and generating a cost estimate.
 
 ## Use with mlx
 
@@ -33,4 +37,4 @@ if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not
 )
 
 response = generate(model, tokenizer, prompt=prompt, verbose=True)
-```
+```
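The card states the weights were produced with mlx-lm 0.19.1. The diff does not show how the conversion was run; as background only, here is a minimal sketch of an MLX conversion using mlx-lm's Python API. The `convert` import, its keyword arguments, and the source repo `HuggingFaceTB/SmolLM-1.7B-Instruct` are assumptions drawn from general mlx-lm usage, not from this README.

```python
# Hedged sketch (not taken from the README): converting a Hugging Face checkpoint
# to MLX format. The base model on this card, mlx-community/SmolLM-1.7B-Instruct-8bit,
# is itself the product of such a conversion.
from mlx_lm import convert  # assumes mlx-lm exposes `convert` at the package level

convert(
    "HuggingFaceTB/SmolLM-1.7B-Instruct",  # illustrative source repo (assumption)
    mlx_path="SmolLM-1.7B-Instruct-mlx",   # local output directory (assumed kwarg)
    quantize=True,                         # quantize weights during conversion
)
```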
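The diff truncates the body of the "Use with mlx" code block, showing only its closing lines. For readability, here is the complete snippet in the form these MLX model cards typically use; the damage-description prompt is purely illustrative and is not taken from the dataset.

```python
from mlx_lm import load, generate

# Load the fine-tuned model and tokenizer from the Hub.
model, tokenizer = load("dattaraj/smol-lora-insurance-estimates")

# Illustrative prompt: a damage description for which we want a cost estimate.
prompt = "Rear bumper dented and left tail light cracked after a low-speed collision."

# Wrap the prompt in the chat template when the tokenizer provides one.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```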