Naphula committed (verified) · commit a5709f7 · 1 parent: bbbfeda

Update README.md

Files changed (1): README.md (+21 −4)

README.md CHANGED
@@ -8,7 +8,7 @@ pinned: false
  ---
 
  # Model Tools by Naphula
- Tools to enhance LLM quantizations and merging
+ Tools to enhance LLM quantizations and merging. Merge and audit large language models with low VRAM.
 
  # [graph_v18.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/graph_v18.py)
  - Merge models in minutes instead of hours on low VRAM. For a 3060/3060 Ti user: this script enables functionality that is otherwise impossible (merging 70B models or large 7B merges with `--cuda`) without OOM. [More details here](https://huggingface.co/spaces/Naphula/model_tools/blob/main/mergekit_low-VRAM-graph_patch.md)
@@ -17,6 +17,14 @@ Tools to enhance LLM quantizations and merging
  # config.py
  - Simply replace line 13 | BEFORE `ScalarOrGradient: TypeAlias = Union[float, List[float]]` → AFTER `ScalarOrGradient: TypeAlias = Union[float, List[float], str, bool]` | to allow for custom filepath strings within parameter settings.
 
+ # [enable_fix_mistral_regex_true.md](https://huggingface.co/spaces/Naphula/model_tools/blob/main/enable_fix_mistral_regex_true.md)
+ - Merge models with extreme tokenizer incompatibility. Requires modifying the `mergekit.yaml` `tokenizer` section and adding `--fix-mistral-regex` to your merge commands. (Note: do not use `token_surgeon.py`, `gen_id_patcher.py`, or `vocab_id_patcher.py` with this; they are now obsolete.) Configured for MN 12B by default. Follow the steps in this guide to modify these scripts:
+   - `mergekit/merge.py`
+   - `mergekit/options.py`
+   - `mergekit/scripts/moe.py`
+   - `mergekit/scripts/tokensurgeon.py`
+   - `mergekit/tokenizer/build.py`
+
  # [audit_della.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/Audits/audit_della.py)
  - Audit the compatibility of donor models for `Della` merges before merging. See: [example chart Asmodeus](https://huggingface.co/spaces/Naphula/model_tools/blob/main/Audits/Asmodeus_Audit.png), [example log Asmodeus](https://huggingface.co/spaces/Naphula/model_tools/blob/main/Audits/Asmodeus_Audit.log), [example chart Slimaki](https://huggingface.co/spaces/Naphula/model_tools/blob/main/Audits/Slimaki_Audit.png), [example log Slimaki](https://huggingface.co/spaces/Naphula/model_tools/blob/main/Audits/Slimaki_Audit.log)
 
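The `config.py` edit above only widens a type alias; nothing else changes. Below is a standalone before/after sketch plus a runtime check that mirrors the widened alias. The `accepts` helper is hypothetical, for illustration only; it is not mergekit's actual validation code.

```python
from typing import List, Union

# BEFORE (config.py line 13; the real line uses a `TypeAlias` annotation):
#   ScalarOrGradient: TypeAlias = Union[float, List[float]]
# AFTER: strings (custom filepaths) and booleans are also accepted:
ScalarOrGradient = Union[float, List[float], str, bool]

def accepts(value):
    """Runtime mirror of the widened alias (hypothetical helper, illustration only)."""
    if isinstance(value, (bool, float, str)):
        return True
    return isinstance(value, list) and all(isinstance(x, float) for x in value)

# A filepath string now passes where previously only numbers (or lists) would:
assert accepts("/path/to/custom_gradient.json")
assert accepts(0.5) and accepts([0.1, 0.9]) and accepts(True)
assert not accepts({"not": "allowed"})
```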
@@ -40,13 +48,16 @@ Tools to enhance LLM quantizations and merging
  - Then assign the num_experts_per_tok in config.json (or the config.yaml)
 
  # [tokensurgeon.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/tokensurgeon.py)
- - Uses adaptive VRAM from Grim Jim's `measure.py`, like `graph_v18`, to prevent OOM. Use the recommended [batch file](https://huggingface.co/spaces/Naphula/model_tools/blob/main/fix_tokenizers.bat) here or modify the sh equivalent. This supposedly avoids 'Potemkin village' fake patches like `gen_id_patcher` and `vocab_id_patcher`.
+ - Uses adaptive VRAM from Grim Jim's `measure.py`, like `graph_v18`, to prevent OOM. Use the recommended [batch file](https://huggingface.co/spaces/Naphula/model_tools/blob/main/fix_tokenizers.bat) here or modify the sh equivalent. This avoids 'Potemkin village' fake patches like `gen_id_patcher` and `vocab_id_patcher`. For this to work properly, you must also run [shield_embeddings.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/shield_embeddings.py) and [shield_norms.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/shield_norms.py) on any merges made from models patched with tokensurgeon.
 
  # [tokeninspector.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/tokeninspector.py)
  - Audit your tokensurgeon results.
 
+ # [arcee_fusion_salience_scanner.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/arcee_fusion_salience_scanner.py)
+ - Scan the salience % of your arcee_fusion merges. The default `tukey_fence` value is 1.5, which results in 12.5% salience, but [this can be adjusted (see guide here)](modify_arcee_fusion_tukey_fence_parameter.md).
+
  # [eos_scanner.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/eos_scanner.py)
- - Updated! This tool scans the tokenizer JSONs to detect any mismatches with EOS tokens, which cause early-termination bugs. You can then use [gen_id_patcher.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/gen_id_patcher.py) to patch missing `generation_config.json` files for the EOS token. See [this post](https://huggingface.co/Naphula/Q0_Bench/discussions/1?not-for-all-audiences=true#6987717c762f0a45f672e250) as well as the [EOS Scanner ReadMe](https://huggingface.co/spaces/Naphula/model_tools/blob/main/eos_scanner_readme.md) for more info.
+ - Updated! This tool scans the tokenizer JSONs to detect any mismatches with EOS tokens, which cause early-termination bugs. You can then use [gen_id_patcher.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/gen_id_patcher.py) and [vocab_id_patcher.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/vocab_id_patcher.py), or [chatml_to_mistral.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/chatml_to_mistral.py), to patch missing `generation_config.json` files for the EOS token. See [this post](https://huggingface.co/Naphula/Q0_Bench/discussions/1?not-for-all-audiences=true#6987717c762f0a45f672e250) as well as the [EOS Scanner ReadMe](https://huggingface.co/spaces/Naphula/model_tools/blob/main/eos_scanner_readme.md) for more info.
 
  # [weight_counter.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/weight_counter.py)
  - This counts the number of models in a YAML and adds up the total weight values. Useful for large della/ties merges.
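The weight-counter idea above can be sketched with stdlib regex alone: find every `model:` entry and every `weight:` value in a mergekit-style YAML, then count and sum. This is assumed behavior, not the actual `weight_counter.py`, which may parse the YAML properly and handle more layouts.

```python
import re

def count_models_and_weights(yaml_text):
    """Count `model:` entries and sum `weight:` values in a mergekit-style YAML.

    A stdlib-regex sketch; the real weight_counter.py may differ.
    """
    models = re.findall(r"^\s*-?\s*model:\s*(\S+)", yaml_text, flags=re.MULTILINE)
    weights = re.findall(r"^\s*weight:\s*([0-9]*\.?[0-9]+)", yaml_text, flags=re.MULTILINE)
    return len(models), sum(float(w) for w in weights)

# Made-up two-model della config for demonstration:
example = """\
models:
  - model: org/model-a
    parameters:
      weight: 0.4
  - model: org/model-b
    parameters:
      weight: 0.6
merge_method: della
"""

n_models, total_weight = count_models_and_weights(example)
assert n_models == 2
assert abs(total_weight - 1.0) < 1e-9
```

For a large della/ties merge, the sum makes it easy to spot weights that do not add up to the intended total.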
@@ -63,6 +74,9 @@ Tools to enhance LLM quantizations and merging
  # [textonly_ripper_v2.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/textonly_ripper_v2.py)
  - Converts a sharded, multimodal (text and vision) model into a text-only version. Readme at [textonly_ripper.md](https://huggingface.co/spaces/Naphula/model_tools/blob/main/textonly_ripper.md)
 
+ # [json_reverter.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/json_reverter.py)
+ - Reverts changes made to all JSON files by `gen_id_patcher.py`, `vocab_id_patcher.py`, or other scripts, within a specified root folder, by re-downloading the source files from the HF repo.
+
  # [vocab_resizer.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/vocab_resizer.py)
  - Converts models with larger vocab_sizes to a standard size (default 131072, Mistral 24B) for use with mergekit. Note that `tokenizer.model` must be manually copied into the `/fixed/` folder.
 
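Conceptually, the vocab resize described above means truncating (or zero-padding) the vocab-sized matrices, such as the token embeddings and `lm_head`, to the target row count. A toy, pure-Python illustration of the row operation only; the actual `vocab_resizer.py` works on safetensors tensors and presumably also updates the config's `vocab_size`:

```python
def resize_rows(matrix, target_rows, hidden_dim):
    """Truncate extra token rows, or pad with zero rows, to reach target_rows."""
    resized = matrix[:target_rows]
    while len(resized) < target_rows:
        resized.append([0.0] * hidden_dim)
    return resized

# Toy embedding matrix: vocab of 6 tokens, hidden dim 4.
# (A real case would shrink e.g. 131074 rows down to the standard 131072.)
embed = [[float(i)] * 4 for i in range(6)]
trimmed = resize_rows(embed, 5, 4)

assert len(trimmed) == 5                    # extra token row dropped
assert trimmed[0] == [0.0, 0.0, 0.0, 0.0]   # existing rows untouched
```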
@@ -70,11 +84,14 @@ Tools to enhance LLM quantizations and merging
  - This script will load a "fat" 18.9GB model (default Gemma 9B), force it to tie the weights (deduplicating the lm_head), and re-save it. This will drop the file size to ~17.2GB and make it compatible with the others.
 
  # [model_index_json_generator.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/model_index_json_generator.py)
- - Generates a missing `model.safetensors.index.json` file. Useful for cases where safetensors may have been sharded at the wrong size.
+ - Generates a missing `model.safetensors.index.json` file. Useful for cases where safetensors may have been sharded at the wrong size. [Single-tensor variant here.](https://huggingface.co/spaces/Naphula/model_tools/blob/main/model_index_json_generator_SingleTensor.py)
 
  # [folder_content_combiner_anyfiles.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/folder_content_combiner_anyfiles.py)
  - Combines all files in the script's current directory into a single output file, sorted alphabetically.
 
+ # [folder+subfolder_content_combiner_anyfiles.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/folder+subfolder_content_combiner_anyfiles.py)
+ - Combines all files in the script's directory, including all files within subdirectories (excluding blacklisted formats), into a single output file, sorted alphabetically.
+
  # [GGUF Repo Suite](https://huggingface.co/spaces/Naphula/gguf-repo-suite)
  - Create and quantize Hugging Face models
 
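For reference, the `model.safetensors.index.json` that `model_index_json_generator.py` (listed above) recreates is just a `weight_map` from tensor names to shard filenames plus a `total_size`. A minimal sketch of building one, with made-up tensor names and byte sizes; the real script would read these from the actual shard headers:

```python
import json

# Hypothetical shard contents: tensor name -> byte size, per shard file.
shards = {
    "model-00001-of-00002.safetensors": {
        "model.embed_tokens.weight": 1024,
        "model.layers.0.mlp.weight": 2048,
    },
    "model-00002-of-00002.safetensors": {
        "model.layers.1.mlp.weight": 2048,
        "lm_head.weight": 1024,
    },
}

# Map each tensor to the shard that holds it, and sum all tensor sizes.
weight_map = {name: shard for shard, tensors in shards.items() for name in tensors}
total_size = sum(size for tensors in shards.values() for size in tensors.values())

index = {"metadata": {"total_size": total_size}, "weight_map": weight_map}
with open("model.safetensors.index.json", "w") as f:
    json.dump(index, f, indent=2)
```

Loaders use `weight_map` to find which shard to open for each tensor, which is why a model with a wrong or missing index fails to load.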