Shamane committed
Commit: da98325
Parent: 7ad3851

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -8,10 +8,10 @@ datasets:
 ---
 ## Quick Summary
 
-![image/webp](https://cdn-uploads.huggingface.co/production/uploads/654aa1d86167ff03f70e32f9/vIvhuhwz99E-3xWvyTKi2.webp)
-
 This model is an adaptation of the `mistralai/Mistral-7B-Instruct-v0.2`, refined through the application of layer pruning techniques as detailed in the paper "The Unreasonable Ineffectiveness of the Deeper Layers." It incorporates methodologies from the `MergeKit` and `PruneMe` repositories to optimize its structure, focusing on reducing redundancy within the model's deeper layers without compromising its ability to generate coherent text. The model is maintained by Arcee-ai and represents a practical implementation of computational efficiency improvements in Large Language Models (LLMs), aiming to balance performance with resource usage effectively.
 
+![image/webp](https://cdn-uploads.huggingface.co/production/uploads/654aa1d86167ff03f70e32f9/RcZ0aj8JRiom1jF5UOz85.webp)
+
 ### Model Description
 
 This model represents a specialized iteration of the `mistralai/Mistral-7B-Instruct-v0.2`, optimized for efficiency and performance through selective layer pruning. Developed by Arcee-ai, it leverages insights from the "The Unreasonable Ineffectiveness of the Deeper Layers" research. The pruning process was informed by the `MergeKit` and `PruneMe` tools, focusing on eliminating redundant layers to ensure a leaner, more efficient model capable of generating high-quality text outputs.
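
For context on the pruning workflow the README references: the sketch below illustrates the layer-similarity analysis from "The Unreasonable Ineffectiveness of the Deeper Layers", which tools like `PruneMe` implement. Each contiguous block of layers is scored by the distance between the hidden states entering and leaving it; the lowest-distance block is the most redundant candidate for removal. This is a minimal sketch, not Arcee-ai's exact pipeline: the block size `n`, the single sample prompt, and last-token cosine distance (in place of the paper's angular distance over a calibration set) are illustrative assumptions.

```python
# Minimal layer-redundancy probe (illustrative; not the exact PruneMe pipeline).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # device_map requires `accelerate`
)

prompt = "Layer pruning removes contiguous blocks of transformer layers."
inputs = tok(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    # hidden_states: tuple of (num_layers + 1) tensors; index i is the input to layer i
    hidden = model(**inputs, output_hidden_states=True).hidden_states

n = 8  # number of consecutive layers to consider pruning (assumed; a tunable hyperparameter)
num_layers = model.config.num_hidden_layers
scores = []
for start in range(num_layers - n + 1):
    h_in = hidden[start][:, -1, :].float()       # last-token state entering the block
    h_out = hidden[start + n][:, -1, :].float()  # last-token state leaving the block
    dist = 1 - torch.nn.functional.cosine_similarity(h_in, h_out, dim=-1).mean()
    scores.append((dist.item(), start))

dist, start = min(scores)
print(f"Most redundant block: layers {start}..{start + n - 1} (cosine distance {dist:.4f})")
```

Once a block is chosen, a `MergeKit` passthrough configuration that keeps only the layers on either side of it yields the pruned checkpoint, which the paper then suggests "healing" with light fine-tuning.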