llama-3-debug
This model is intended for debugging; its parameters are randomly initialized.
It is tiny (only ~32 MB), so it is fast to download and convenient for debugging pipelines.
The model config is modified as follows:
```python
config.intermediate_size = 128
config.hidden_size = 64
config.num_attention_heads = 2
config.num_key_value_heads = 2
config.num_hidden_layers = 1
```
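As a rough sanity check on the ~32 MB figure, here is a minimal sketch (not from the model card) that rebuilds a model with the same shrunken config and counts its parameters; the vocab size of 128256 is an assumption based on Llama 3:

```python
import torch
from transformers import LlamaConfig, LlamaForCausalLM

# Assumed Llama 3 vocab size; the remaining values match the card's config.
config = LlamaConfig(
    vocab_size=128256,
    intermediate_size=128,
    hidden_size=64,
    num_attention_heads=2,
    num_key_value_heads=2,
    num_hidden_layers=1,
)
model = LlamaForCausalLM(config).to(torch.bfloat16)

# The two 128256 x 64 embedding matrices dominate; the single tiny
# transformer layer contributes almost nothing.
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params:,} parameters (~{num_params * 2 / 1e6:.0f} MB in bfloat16)")
```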
You can load it with the following code:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'xiaodongguaAIGC/llama-3-debug'
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)

print(model)
print(tokenizer)
```
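As a quick smoke test you can run generation end to end (the prompt is arbitrary, and since the weights are random the output will be gibberish):

```python
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```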