NeuralNovel committed on
Commit
d3efc55
1 Parent(s): b62200a

Update README.md

Files changed (1)
  1. README.md +12 -32
README.md CHANGED
@@ -1,41 +1,21 @@
- # Pytorch to Safetensor Converter
-
  ---

- A simple converter that converts PyTorch .bin tensor files (usually named "pytorch_model.bin" or "pytorch_model-xxxx-of-xxxx.bin") to safetensors files. Why?
-
- ~~because it's cool!~~
-
- Because the safetensors format decreases the loading time of large LLMs and is currently supported in [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui). It also supports in-place loading, which effectively decreases the memory required to load an LLM.
-
- Note: Most of the code originated from [Convert to Safetensors - a Hugging Face Space by safetensors](https://huggingface.co/spaces/safetensors/convert), and this code cannot handle files that are not named "pytorch_model.bin" or "pytorch_model-xxxx-of-xxxx.bin".
-
- ### Limitations:
-
- The program requires **a lot** of memory. To be specific, your free memory should be **at least** twice the size of your largest ".bin" file; otherwise, the program will run out of memory and fall back on swap... and that would be **slow!**
-
- This program **will not** re-shard (i.e., break down) the model; you'll need to do that yourself with other tools.
-
- ### Usage:

- After installing Python (3.10.x is suggested), clone the repository, ``cd`` into it, and install the dependencies first:

- ```
- git clone https://github.com/Silver267/pytorch-to-safetensor-converter.git
- cd pytorch-to-safetensor-converter
- pip install -r requirements.txt
- ```

- Copy **all contents** of your model's folder into this repository, then run:

- ```
- python convert_to_safetensor.py
- ```
- Follow the instructions in the program. Remember to use the **full path** to the model directory (something like ``E:\models\xxx-fp16`` that contains all the model files). Wait a while, and you're good to go. The program will automatically copy all other files to your destination folder. Enjoy!

- ### Precision stuff
- If your original model is fp32, don't forget to change ``"torch_dtype": "float32",`` to ``"torch_dtype": "float16",`` in ``config.json``.
- #### Note that this operation might (on rare occasions) cause the LLM to output NaN during inference, since it reduces the precision to fp16.
- If you're worried about that, simply change the line ``loaded = {k: v.contiguous().half() for k, v in loaded.items()}`` in ``convert_to_safetensor.py`` to ``loaded = {k: v.contiguous() for k, v in loaded.items()}`` and you'll get a full-precision model.
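For reference, the conversion described above boils down to roughly the following. This is a minimal sketch assuming the standard ``torch`` and ``safetensors.torch`` APIs; it is not the repository's exact ``convert_to_safetensor.py``, and the file names are illustrative:

```
# Illustrative sketch of a .bin -> .safetensors conversion; not the repository's exact script.
import torch
from safetensors.torch import save_file

# Load the PyTorch checkpoint: a dict mapping tensor names to tensors.
loaded = torch.load("pytorch_model.bin", map_location="cpu")
if "state_dict" in loaded:
    loaded = loaded["state_dict"]

# Make every tensor contiguous and cast to fp16; drop .half() to keep full precision.
loaded = {k: v.contiguous().half() for k, v in loaded.items()}

# Write the converted weights next to the original checkpoint.
save_file(loaded, "model.safetensors", metadata={"format": "pt"})
```

The ``.half()`` call here is exactly the fp16 cast that the precision note above refers to.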
 
+ ---
+ license: other
+ license_name: yi-license
+ license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
+ base_model: jondurbin/bagel-34b-v0.2
  ---
+ ![image/png](https://i.ibb.co/9VB5SHL/OIG1-3.jpg)

+ # ConvexAI/Luminex-34B-v0.1
+ This model is [Smaug-34B](https://huggingface.co/abacusai/Smaug-34B-v0.1) with LaserRMT applied.
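For context, a minimal loading sketch is shown below. It assumes the standard ``transformers`` ``AutoTokenizer``/``AutoModelForCausalLM`` API and fp16 weights; the prompt is purely illustrative, and this snippet is not part of the model card itself:

```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "ConvexAI/Luminex-34B-v0.1"

# device_map="auto" spreads the 34B weights across available GPUs/CPU memory.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Write a short haiku about lasers."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```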
+ ### Evaluation Results

+ Coming Soon

+ ### Contamination Results

+ Coming Soon