Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


# lmlab-mistral-1b-untrained - GGUF
- Model creator: https://huggingface.co/lmlab/
- Original model: https://huggingface.co/lmlab/lmlab-mistral-1b-untrained/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [lmlab-mistral-1b-untrained.Q2_K.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q2_K.gguf) | Q2_K | 0.44GB |
| [lmlab-mistral-1b-untrained.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.IQ3_XS.gguf) | IQ3_XS | 0.49GB |
| [lmlab-mistral-1b-untrained.IQ3_S.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.IQ3_S.gguf) | IQ3_S | 0.5GB |
| [lmlab-mistral-1b-untrained.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q3_K_S.gguf) | Q3_K_S | 0.5GB |
| [lmlab-mistral-1b-untrained.IQ3_M.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.IQ3_M.gguf) | IQ3_M | 0.51GB |
| [lmlab-mistral-1b-untrained.Q3_K.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q3_K.gguf) | Q3_K | 0.54GB |
| [lmlab-mistral-1b-untrained.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q3_K_M.gguf) | Q3_K_M | 0.54GB |
| [lmlab-mistral-1b-untrained.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q3_K_L.gguf) | Q3_K_L | 0.58GB |
| [lmlab-mistral-1b-untrained.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.IQ4_XS.gguf) | IQ4_XS | 0.6GB |
| [lmlab-mistral-1b-untrained.Q4_0.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q4_0.gguf) | Q4_0 | 0.63GB |
| [lmlab-mistral-1b-untrained.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.IQ4_NL.gguf) | IQ4_NL | 0.63GB |
| [lmlab-mistral-1b-untrained.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q4_K_S.gguf) | Q4_K_S | 0.63GB |
| [lmlab-mistral-1b-untrained.Q4_K.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q4_K.gguf) | Q4_K | 0.66GB |
| [lmlab-mistral-1b-untrained.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q4_K_M.gguf) | Q4_K_M | 0.66GB |
| [lmlab-mistral-1b-untrained.Q4_1.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q4_1.gguf) | Q4_1 | 0.69GB |
| [lmlab-mistral-1b-untrained.Q5_0.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q5_0.gguf) | Q5_0 | 0.74GB |
| [lmlab-mistral-1b-untrained.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q5_K_S.gguf) | Q5_K_S | 0.74GB |
| [lmlab-mistral-1b-untrained.Q5_K.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q5_K.gguf) | Q5_K | 0.76GB |
| [lmlab-mistral-1b-untrained.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q5_K_M.gguf) | Q5_K_M | 0.76GB |
| [lmlab-mistral-1b-untrained.Q5_1.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q5_1.gguf) | Q5_1 | 0.8GB |
| [lmlab-mistral-1b-untrained.Q6_K.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q6_K.gguf) | Q6_K | 0.87GB |
| [lmlab-mistral-1b-untrained.Q8_0.gguf](https://huggingface.co/RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf/blob/main/lmlab-mistral-1b-untrained.Q8_0.gguf) | Q8_0 | 1.12GB |
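
To try one of these files locally, download it and load it with a GGUF-compatible runtime. The sketch below uses `huggingface_hub` together with `llama-cpp-python` as an assumed runtime; any GGUF-capable tool (e.g. the `llama.cpp` CLI) would work the same way. The repo ID and filename are taken from the table above; the choice of the Q4_K_M quant is just for illustration.

```python
# Minimal sketch: download one quant from this repo and run it with
# llama-cpp-python (assumed installed: pip install huggingface_hub llama-cpp-python).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M file listed in the table above (~0.66GB).
model_path = hf_hub_download(
    repo_id="RichardErkhov/lmlab_-_lmlab-mistral-1b-untrained-gguf",
    filename="lmlab-mistral-1b-untrained.Q4_K_M.gguf",
)

# Load the GGUF file and generate a short completion.
# Note: the underlying weights are untrained, so the output will be gibberish.
llm = Llama(model_path=model_path)
print(llm("Once upon a time", max_tokens=32)["choices"][0]["text"])
```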




Original model description:
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---

Sorry everyone, this got sort of popular, but it doesn't generate understandable text. I think there's a way to make this generate good results with relatively little compute; I'll experiment a bit later.

# LMLab Mistral 1B Untrained

This is an untrained base model modified from [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1). It has 1.13 billion parameters.

## Untrained

This model is untrained. **This means it will not generate comprehensible text.**

## Model Details

### Model Description

- **Developed by:** LMLab
- **License:** Apache 2.0
- **Parameters:** 1.13 billion (1,134,596,096)
- **Modified from model:** [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1)

### Model Architecture

LMLab Mistral 1B is a transformer model with the following architecture choices:

* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer
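
For intuition, here is how a configuration with these properties might be expressed in `transformers`. The sizes below are illustrative assumptions for a roughly 1B-parameter Mistral variant, not this model's published config; load the checkpoint itself (see Usage) for the actual values.

```python
# Sketch of a downsized Mistral config with GQA and sliding-window attention.
# All sizes here are assumptions, NOT this model's actual hyperparameters.
from transformers import MistralConfig, MistralForCausalLM

config = MistralConfig(
    vocab_size=32000,        # byte-fallback BPE tokenizer vocabulary
    hidden_size=2048,        # reduced from Mistral-7B's 4096
    intermediate_size=5632,  # reduced MLP width
    num_hidden_layers=22,    # reduced depth
    num_attention_heads=32,
    num_key_value_heads=8,   # fewer KV heads than query heads -> grouped-query attention
    sliding_window=4096,     # sliding-window attention span
)

model = MistralForCausalLM(config)  # randomly initialized, i.e. "untrained"
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```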

## Usage

Use `MistralForCausalLM`.

```python
from transformers import MistralForCausalLM, AutoTokenizer

# Load the tokenizer and the randomly initialized (untrained) weights.
tokenizer = AutoTokenizer.from_pretrained('lmlab/lmlab-mistral-1b-untrained')
model = MistralForCausalLM.from_pretrained('lmlab/lmlab-mistral-1b-untrained')

# Tokenize a prompt and generate a continuation.
text = "Once upon a time"
encoded_input = tokenizer(text, return_tensors='pt')
output = model.generate(**encoded_input, max_new_tokens=20)
print(tokenizer.decode(output[0]))
```
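
Because the weights are randomly initialized, the decoded output will be incoherent; the snippet only verifies that the checkpoint loads and generates.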

## Notice

This model does not have any moderation systems.