---
base_model: 01-ai/Yi-Coder-9B-Chat
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
quantized_by: stelterlab
---
## Yi Coder 9B Chat by 01-ai
**Model creator:** [01-ai](https://huggingface.co/01-ai)<br>
**Original model**: [Yi-Coder-9B-Chat](https://huggingface.co/01-ai/Yi-Coder-9B-Chat)<br>
**AWQ quantization:** done by [stelterlab](https://huggingface.co/stelterlab) in [INT4 GEMM](https://github.com/casper-hansen/AutoAWQ/?tab=readme-ov-file#int4-gemm-vs-int4-gemv-vs-fp16) with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ/) by casper-hansen<br>
## Model Summary
Yi Coder 9B Chat is a coding model from 01-ai that supports a staggering 52 programming languages and features a maximum context length of 128k tokens, making it well suited to ingesting large codebases.<br>The model is tuned for chat rather than code autocompletion, so pose programming questions to it conversationally.<br>It is the first model under 10B parameters to pass 20% on LiveCodeBench.
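Since the model is chat-tuned, inference should go through the chat template. Below is a minimal sketch using `transformers` (AWQ checkpoints load directly once `autoawq` is installed); the repo id is a placeholder assumption, not confirmed by this card, so replace it with the actual id of this repository.

```python
# Placeholder id for this AWQ quant -- adjust to the actual repo.
MODEL_ID = "stelterlab/Yi-Coder-9B-Chat-AWQ"

def chat(prompt: str, max_new_tokens: int = 512) -> str:
    """Send one user turn through the chat template and decode the reply."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens, keep only the generated answer.
    return tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
```

For example, `chat("Write a Python function that reverses a linked list.")` returns the model's answer as a plain string.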
## Technical Details
Trained on an extensive set of languages:
- java
- markdown
- python
- php
- javascript
- c++
- c#
- c
- typescript
- html
- go
- java_server_pages
- dart
- objective-c
- kotlin
- tex
- swift
- ruby
- sql
- rust
- css
- yaml
- matlab
- lua
- json
- shell
- visual_basic
- scala
- rmarkdown
- pascal
- fortran
- haskell
- assembly
- perl
- julia
- cmake
- groovy
- ocaml
- powershell
- elixir
- clojure
- makefile
- coffeescript
- erlang
- lisp
- toml
- batchfile
- cobol
- dockerfile
- r
- prolog
- verilog
With its 128k context length, it achieves a 23% pass rate on LiveCodeBench, surpassing even some SOTA models in the 15B-33B range.

For more information, see the original model card: [Yi-Coder-9B-Chat](https://huggingface.co/01-ai/Yi-Coder-9B-Chat)
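## Reproducing the quantization

The INT4 GEMM quantization described above can be reproduced roughly as follows with AutoAWQ. This is a sketch, not the exact script used for this repo: the `quant_config` values shown are AutoAWQ's common defaults (4-bit, group size 128, GEMM kernel) and are assumptions on my part.

```python
# Assumed AutoAWQ settings: 4-bit weights, group size 128, GEMM kernel.
QUANT_CONFIG = {
    "zero_point": True,
    "q_group_size": 128,
    "w_bit": 4,
    "version": "GEMM",
}

def quantize_yi_coder(output_dir: str = "Yi-Coder-9B-Chat-AWQ") -> None:
    """Calibrate and save an AWQ quant of the original FP16 checkpoint."""
    from awq import AutoAWQForCausalLM
    from transformers import AutoTokenizer

    model_path = "01-ai/Yi-Coder-9B-Chat"
    model = AutoAWQForCausalLM.from_pretrained(model_path)
    tokenizer = AutoTokenizer.from_pretrained(model_path)

    # Runs AWQ's activation-aware calibration, then rewrites the weights.
    model.quantize(tokenizer, quant_config=QUANT_CONFIG)
    model.save_quantized(output_dir)
    tokenizer.save_pretrained(output_dir)
```

Running `quantize_yi_coder()` requires a GPU with enough memory to hold the FP16 model during calibration.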