SUBMIT_TEXT = f"""
# 🏎️ Submit
Models added here will be queued for evaluation on the Intel Developer Cloud ☁️. Depending on the queue, your model may take up to 10 days to show up on the leaderboard.
We will work to create greater transparency as our leaderboard community grows.
## First steps before submitting a model
### 1) Make sure you can load your model and tokenizer using AutoClasses:
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# "your model name" is your Hub repo id; revision is the branch or commit you want evaluated.
config = AutoConfig.from_pretrained("your model name", revision=revision)
model = AutoModel.from_pretrained("your model name", revision=revision)
tokenizer = AutoTokenizer.from_pretrained("your model name", revision=revision)
```
If this step fails, follow the error messages to debug your model before submitting it. It's likely your model has been improperly uploaded.
Note: Make sure your model is public!
Note: If your model needs `trust_remote_code=True`, we do not support this option yet, but we are working on adding it.
### 2) Convert your model weights to [safetensors](https://huggingface.co/docs/safetensors/index)
Safetensors is a newer format for storing weights that is safer and faster to load and use. It will also allow us to add the number of parameters of your model to the `Extended Viewer`.
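If your weights are currently stored as PyTorch `.bin` files, one simple way to convert them (a sketch, assuming your model already loads with the AutoClasses above; the output directory is a placeholder) is to reload the checkpoint and re-save it with `safe_serialization=True`:
```python
from transformers import AutoModel

# Reload your checkpoint and re-save it in safetensors format.
model = AutoModel.from_pretrained("your model name")
model.save_pretrained("your-local-output-dir", safe_serialization=True)
```
You can then upload the resulting `.safetensors` file(s) to your model repository.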
### 3) Make sure your model has an open license.
This is a leaderboard for Open LLMs, and we'd love for as many people as possible to know they can use your model 🤗 A good example of an open source license is apache-2.0.
Typically, model licenses that allow for commercial and research use tend to be the most attractive to other developers in the ecosystem.
### 4) Fill out your model card
We use your model card to better understand the properties of your model and make them more easily discoverable for other users.
Model cards are required to mention the hardware, software, and infrastructure used for training; without this information
we cannot accept your model as a valid submission. Remember, only models trained on the following processors are eligible for evaluation:
Intel® Gaudi Accelerators, Intel® Xeon® Processors, Intel® Data Center GPU Max Series, Intel® ARC GPUs, and Intel® Core Ultra.
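If you want to double-check your card programmatically before submitting (a sketch using `huggingface_hub`; the repo id is the same placeholder as above), you can load it and inspect its metadata and text:
```python
from huggingface_hub import ModelCard

# Load the card for your Hub repo and check the license metadata
# and whether the training hardware is mentioned in the text.
card = ModelCard.load("your model name")
print("license:", card.data.license)
print("mentions training hardware:", "Gaudi" in card.text or "Xeon" in card.text)
```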
### 5) Select the correct precision
Not all models are converted properly from `float16` to `bfloat16`, and selecting the wrong precision can sometimes cause evaluation errors (loading a `bf16` model in `fp16` can generate NaNs, depending on the weight range).
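A quick sanity check (a sketch, assuming the model fits in local memory; swap `torch.bfloat16` for `torch.float16` to match the precision you plan to select) is to load the weights in that precision and look for NaN or inf values:
```python
import torch
from transformers import AutoModel

# Load the checkpoint in the precision you plan to submit and verify
# that no weight tensor contains NaN or inf values.
model = AutoModel.from_pretrained("your model name", torch_dtype=torch.bfloat16)
bad_weights = any(
    torch.isnan(p).any() or torch.isinf(p).any() for p in model.parameters()
)
print("Found NaN/inf weights:", bad_weights)
```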
## In case of model failure
If your model fails evaluation 😔, we will contact you by opening a new discussion in your model repository. Let's work together to get your model the love it deserves ❤️!
""" |