alokabhishek committed
Commit a78c960
1 Parent(s): 22377b2

Updated Readme

Files changed (1)
  1. README.md +3 -2
README.md CHANGED

@@ -30,7 +30,7 @@ This repo contains 4-bit quantized (using ExLlamaV2) model of Meta's meta-llama/
  - Original model: [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
 
 
- ### About 4 bit quantization using bitsandbytes
+ ### About 4 bit quantization using ExLlamaV2
 
 
  - ExLlamaV2 github repo: [ExLlamaV2 github repo](https://github.com/turboderp/exllamav2)
@@ -39,9 +39,10 @@ This repo contains 4-bit quantized (using ExLlamaV2) model of Meta's meta-llama/
 # How to Get Started with the Model
 
 Use the code below to get started with the model.
+ I will update how to inference using Python code later.
 
 
- ## How to run from Python code
+ ## How to run using ExLlamaV2
 
 #### First install the package
 ```shell