johnrachwanpruna committed
Commit: 3b64c66
Parent(s): d209bdd
Update README.md

README.md	CHANGED
@@ -38,13 +38,6 @@ metrics:
 ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
 
 ## Usage
-These are several general ways to use the DBRX models:
-* DBRX Base and DBRX Instruct are available for download on HuggingFace (see our Quickstart guide below). This is the HF repository for DBRX Base; DBRX Instruct can be found [here](https://huggingface.co/databricks/dbrx-instruct).
-* The DBRX model repository can be found on GitHub [here](https://github.com/databricks/dbrx).
-* DBRX Base and DBRX Instruct are available with [Databricks Foundation Model APIs](https://docs.databricks.com/en/machine-learning/foundation-models/index.html) via both *Pay-per-token* and *Provisioned Throughput* endpoints. These are enterprise-ready deployments.
-* For more information on how to fine-tune using LLM-Foundry, please take a look at our LLM pretraining and fine-tuning [documentation](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/README.md).
-
-
 ## Quickstart Guide
 
 Getting started with DBRX models is easy with the `transformers` library. The model requires ~264GB of RAM and the following packages:
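The quickstart that the retained README lines refer to boils down to loading the checkpoint with `transformers`. Below is a minimal sketch of that flow; the `databricks/dbrx-base` repository id, the token placeholder, and the `bfloat16`/`device_map="auto"` loading choices are illustrative assumptions rather than the README's pinned instructions, and the exact package versions are listed in the README itself. Swap `model_id` for the repository this README accompanies if you want to load those weights instead.

```python
# Minimal sketch: loading DBRX Base with transformers.
# Assumes access to the gated databricks/dbrx-base repo, a valid Hugging Face
# token, and enough memory (~264GB of weights) across the available devices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "databricks/dbrx-base"  # assumption: the base checkpoint referenced in the diff

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    trust_remote_code=True,  # DBRX originally shipped custom model code
    token="hf_...",          # replace with your own access token
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # shard the weights across available GPUs/CPU
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    token="hf_...",
)

# Generate a short continuation as a smoke test.
inputs = tokenizer("Databricks was founded in", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```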