
Quantization made by Richard Erkhov.

Github | Discord | Request more models

TinyNaughtyLlama-v1.0 - GGUF

Original model description:

```yaml
license: apache-2.0
model-index:
- name: TinyNaughtyLlama-v1.0
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 35.92
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/TinyNaughtyLlama-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 61.04
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/TinyNaughtyLlama-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 25.82
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/TinyNaughtyLlama-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 36.77
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/TinyNaughtyLlama-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 60.22
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/TinyNaughtyLlama-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 2.43
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kevin009/TinyNaughtyLlama-v1.0
      name: Open LLM Leaderboard
```

Model Card for TinyLlama 1.1B

A DPO (Direct Preference Optimization) fine-tune of TinyLlama

Model Information

  • Model Name: TinyLlama 1.1B Chat
  • Model Version: 1.0
  • Model Type: Llama
  • Transformers Version: 4.35.2
  • Architecture: LlamaForCausalLM
  • Vocabulary Size: 32,000
  • Hidden Size: 2,048
  • Intermediate Size: 5,632
  • Number of Attention Heads: 32
  • Number of Hidden Layers: 22
  • Number of Key-Value Heads: 4
  • Attention Bias: False
  • Tie Word Embeddings: False
  • Max Position Embeddings: 2,048
  • BOS Token ID: 1
  • EOS Token ID: 2
  • Hidden Activation Function: silu
  • Initializer Range: 0.02
  • RMS Normalization Epsilon: 1e-05
  • Rope Scaling: Not specified
  • Rope Theta: 10,000.0
  • Torch Data Type: float16
  • Use Cache: True
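
For orientation, here is a minimal sketch of how the values above would map onto a transformers LlamaConfig. It assumes a one-to-one correspondence between the listed hyperparameters and the config arguments; it is illustrative, not the repo's shipped configuration file.

```python
# Sketch: the hyperparameters listed above expressed as a transformers LlamaConfig.
from transformers import LlamaConfig

config = LlamaConfig(
    vocab_size=32000,
    hidden_size=2048,
    intermediate_size=5632,
    num_attention_heads=32,
    num_hidden_layers=22,
    num_key_value_heads=4,        # grouped-query attention: 32 query heads share 4 KV heads
    attention_bias=False,
    tie_word_embeddings=False,
    max_position_embeddings=2048,
    bos_token_id=1,
    eos_token_id=2,
    hidden_act="silu",
    initializer_range=0.02,
    rms_norm_eps=1e-5,
    rope_scaling=None,
    rope_theta=10000.0,
    use_cache=True,
)
print(config)
```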

Overview

Fine-tuned from TinyLlama 1.1B Chat, this language model is designed for various natural language processing tasks. It uses the Llama architecture with a substantial number of hidden layers and attention heads and a large vocabulary. The model generates text in a causal manner, meaning it predicts the next token in a sequence from the preceding context.
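
As a sketch of that causal generation in practice, the snippet below loads the fine-tuned weights (assuming the kevin009/TinyNaughtyLlama-v1.0 repo referenced by the leaderboard links, and that its tokenizer inherits TinyLlama Chat's chat template) and decodes a chat-formatted prompt token by token:

```python
# Minimal generation sketch; repo id and chat template are assumptions, see above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kevin009/TinyNaughtyLlama-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a haiku about small language models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Causal decoding: each step predicts the next token from the preceding context.
# use_cache=True reuses cached key/value states so earlier tokens are not recomputed.
output = model.generate(
    inputs, max_new_tokens=128, do_sample=True, temperature=0.7, use_cache=True
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```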

Key Features

  • Large vocabulary size of 32,000 words.
  • High hidden size (2,048) and intermediate size (5,632) for enhanced modeling capability.
  • 32 attention heads for capturing complex relationships in text.
  • 22 hidden layers for deep context understanding.
  • Utilizes silu (Sigmoid Linear Unit) as the hidden activation function.
  • Transformers version 4.35.2.
  • Supports KV caching (use_cache=True) for faster text generation.

Pretraining

The model has undergone pretraining with a pretraining task weight of 1.

Additional Information

  • Attention Bias is disabled in this model.
  • Word embeddings are not tied.
  • Max position embeddings support sequences up to 2,048 tokens.
  • RMS normalization epsilon is set to 1e-05.
  • Rope scaling and theta are specified as null and 10,000.0, respectively.
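
As a small illustration of what rope_theta = 10,000.0 means in practice, the sketch below computes the standard RoPE inverse frequencies for one attention head (head dimension 2048 / 32 = 64). With rope scaling unset, these frequencies are presumably used unscaled up to the 2,048-token position limit.

```python
# Sketch: how rope_theta sets the rotary-embedding frequencies for this config.
import torch

head_dim = 2048 // 32          # hidden size / attention heads = 64
rope_theta = 10000.0

# Standard RoPE inverse frequencies: theta^(-2i/d) for i = 0 .. d/2 - 1.
inv_freq = 1.0 / (rope_theta ** (torch.arange(0, head_dim, 2).float() / head_dim))
print(inv_freq.shape)          # torch.Size([32]) -> one frequency per rotated pair
```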

Disclaimer

While this model can generate text, it should be used responsibly and ethically. It may generate inappropriate or biased content, and it is the responsibility of users to filter and moderate the output.

Model Author

The base model is available at TinyLlama/TinyLlama-1.1B-Chat-v1.0.

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 37.03 |
| AI2 Reasoning Challenge (25-Shot) | 35.92 |
| HellaSwag (10-Shot)               | 61.04 |
| MMLU (5-Shot)                     | 25.82 |
| TruthfulQA (0-shot)               | 36.77 |
| Winogrande (5-shot)               | 60.22 |
| GSM8k (5-shot)                    |  2.43 |
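
The reported average is the unweighted mean of the six benchmark scores, which a quick check confirms:

```python
# Quick check: the leaderboard average is the unweighted mean of the six scores.
scores = [35.92, 61.04, 25.82, 36.77, 60.22, 2.43]
print(round(sum(scores) / len(scores), 2))  # 37.03
```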
GGUF Files

  • Model size: 1.1B params
  • Architecture: llama
  • Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
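
As a usage sketch, one of these GGUF files can be loaded with llama-cpp-python; the filename below is hypothetical, so substitute the actual file from this repo's file list. Lower-bit quantizations are smaller and faster but typically lose more quality than the 6-bit and 8-bit variants.

```python
# Sketch: loading one of the GGUF quantizations with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="TinyNaughtyLlama-v1.0.Q4_K_M.gguf",  # hypothetical 4-bit file name
    n_ctx=2048,  # matches the model's max position embeddings
)
out = llm("Q: What is the capital of France?\nA:", max_tokens=32, stop=["\n"])
print(out["choices"][0]["text"])
```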
