---
license: apache-2.0
language:
- en
metrics:
- accuracy
- code_eval
library_name: transformers
pipeline_tag: text-generation
tags:
- rag
- context obedient
- TroyDoesAI
- Mermaid
- Flow
- Diagram
- Sequence
- Map
- Context
- Accurate
- Summarization
- Story
- Code
- Coder
- Architecture
- Retrieval
- Augmented
- Generation
- AI
- LLM
- Mistral
- LLama
- Large Language Model
- Retrieval Augmented Generation
- Troy Andrew Schultz
- LookingForWork
- OpenForHire
- IdoCoolStuff
- Knowledge Graph
- Knowledge
- Graph
- Accelerator
- Enthusiast
- Chatbot
- Personal Assistant
- Copilot
- lol
- tags
- Pruned
- efficient
- smaller
- small
- local
- open
- source
- open source
- quant
- quantize
- ablated
- Ablation
- uncensored
- unaligned
- bad
- alignment
---

For anyone trying to shoehorn this large model onto their machine, every GB of saved memory counts when offloading to system RAM!
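If you're going that route, `transformers` can handle the GPU/CPU split for you. Here's a minimal sketch of offloaded loading; the repo id below is a placeholder (substitute this model's actual repo id), and the memory caps are examples you'd tune for your hardware:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the actual pruned checkpoint.
REPO_ID = "TroyDoesAI/Codestral-21.5B"

# device_map="auto" places as many layers as fit on the GPU and spills
# the remainder into system RAM; max_memory caps each device.
model = AutoModelForCausalLM.from_pretrained(
    REPO_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    max_memory={0: "20GiB", "cpu": "48GiB"},  # example caps, tune per machine
)
tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
```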

Here the 22.2-billion-parameter model is pruned down by 2 junk layers, yielding a 21.5B model that doesn't appear to lose any quality.

Base model: https://huggingface.co/mistralai/Codestral-22B-v0.1
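For anyone who wants to reproduce this kind of surgery, dropping decoder layers from a Mistral-family model takes only a few lines of plain `transformers`. A minimal sketch; the card doesn't say which 2 layers were removed, so the indices below are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder indices -- the card doesn't state which 2 layers were dropped.
LAYERS_TO_DROP = {24, 25}

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Codestral-22B-v0.1",
    torch_dtype=torch.bfloat16,
)

# Mistral-style models keep their decoder blocks in model.model.layers;
# rebuilding the ModuleList renumbers the surviving layers 0..N-1.
model.model.layers = torch.nn.ModuleList(
    layer for i, layer in enumerate(model.model.layers)
    if i not in LAYERS_TO_DROP
)
model.config.num_hidden_layers = len(model.model.layers)

# If generating with this in-memory model, also renumber each layer's
# self_attn.layer_idx so the KV cache indexing stays consistent.
for i, layer in enumerate(model.model.layers):
    layer.self_attn.layer_idx = i

model.save_pretrained("Codestral-21.5B-pruned")
```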