---
base_model:
  - codellama/CodeLlama-70b-Instruct-hf
tags:
  - mergekit
  - merge
  - code
license: mit
pipeline_tag: conversational
---

# BigCodeLlama 92b LFG 🚀

An experimental 92B CodeLlama frankenmerge, built to see how it benchmarks.

This is a passthrough merge built on top of the base model codellama/CodeLlama-70b-Instruct-hf.

## Models Merged

The following models were included in the merge:

* ../CodeLlama-70b-Python-hf
* ../CodeLlama-70b-Instruct-hf

## Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 69]
    model:
      model:
        path: ../CodeLlama-70b-Instruct-hf
- sources:
  - layer_range: [42, 80]
    model:
      model:
        path: ../CodeLlama-70b-Python-hf
```
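
To reproduce the merge, a minimal sketch using the mergekit CLI could look like the following (assuming the YAML above is saved as `bigcodellama.yaml` and the two source checkpoints are available at the relative paths it references):

```sh
# Install mergekit, then run the passthrough merge defined above.
pip install mergekit

# --cuda uses GPU acceleration; --lazy-unpickle reduces peak RAM usage.
mergekit-yaml bigcodellama.yaml ./BigCodeLlama-92b --cuda --lazy-unpickle
```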

To put the .gguf together, grab the two part files and concatenate them into a single 8-bit (~92 GB) gguf:

```sh
cat BigCodeLlama-92b-q8.gguf.part0 BigCodeLlama-92b-q8.gguf.part1 > BigCodeLlama-92b-q8.gguf
```
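
Once assembled, the q8 file can be smoke-tested with llama.cpp. A minimal sketch, assuming a llama.cpp build from around the time of this upload, when the example binary was still named `main`:

```sh
# Generate up to 256 tokens from a coding prompt with the merged model
# (-m model path, -p prompt, -n max tokens to generate)
./main -m BigCodeLlama-92b-q8.gguf \
  -p "Write a Python function that checks whether a number is prime." \
  -n 256
```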