---
license: mit
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
base_model: facebook/bart-large-cnn
model-index:
- name: theus_concepttagger
  results:
  - task:
      type: text2text-generation
      name: Sequence-to-sequence Language Modeling
    dataset:
      name: xsum
      type: xsum
      config: default
      split: validation
      args: default
    metrics:
    - type: rouge
      value: 34.8663
      name: Rouge1
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 24.57
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=namanpundir/theus_concepttagger
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 25.5
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=namanpundir/theus_concepttagger
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 23.12
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=namanpundir/theus_concepttagger
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 48.25
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=namanpundir/theus_concepttagger
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 48.3
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=namanpundir/theus_concepttagger
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.0
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=namanpundir/theus_concepttagger
      name: Open LLM Leaderboard
---
# theus_concepttagger
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6249
- Rouge1: 34.8663
- Rouge2: 15.1526
- Rougel: 26.1224
- Rougelsum: 26.5164
- Gen Len: 62.4475
## Model description

theus_concepttagger is a sequence-to-sequence summarization model obtained by fine-tuning [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) for one epoch on XSum, a dataset of BBC news articles paired with single-sentence abstractive summaries.
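
Since this is a standard BART checkpoint, it can be loaded through the 🤗 Transformers `pipeline` API. A minimal sketch, assuming the repo id `namanpundir/theus_concepttagger` from the leaderboard links in this card and an illustrative input article:

```python
from transformers import pipeline

# Load the checkpoint as a standard summarization pipeline.
# Repo id taken from the leaderboard links in this card.
summarizer = pipeline("summarization", model="namanpundir/theus_concepttagger")

# Illustrative XSum-style news snippet (not from the dataset itself).
article = (
    "Heavy rain over the weekend left several towns in the region without "
    "power. Engineers say repairs to flooded substations could take days, "
    "and residents have been urged to avoid travel where possible."
)

# Mean generation length on the eval set was ~62 tokens, so cap around there.
summary = summarizer(article, max_length=64, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```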
## Intended uses & limitations

The model is intended for abstractive, single-document news summarization in the style of XSum. It inherits the limitations of its base model and training data: generated summaries can state details not supported by the source article, and quality will likely degrade on text far from the BBC news domain. The Open LLM Leaderboard results below also make clear that, as a summarization fine-tune, it is not suited to general-purpose reasoning or question answering (near-chance accuracy on ARC, HellaSwag, and MMLU, and 0.0 on GSM8k).
## Training and evaluation data

The model was fine-tuned on the XSum training split and evaluated on the validation split (the split named in the model-index above). The step count is consistent with this: 12,753 steps at batch size 16 covers roughly 204k examples, which matches the size of the XSum training set.
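
For reference, a sketch of loading the dataset with the 🤗 Datasets library (version 2.14.5, per the framework list below, loads `xsum` directly by name):

```python
from datasets import load_dataset

# XSum as referenced in the card metadata.
dataset = load_dataset("xsum")

print(dataset)                   # DatasetDict with train/validation/test splits
print(dataset["validation"][0])  # fields: "document", "summary", "id"
```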
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a code reconstruction follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
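
A hedged reconstruction of these settings as `Seq2SeqTrainingArguments`; `output_dir` and `predict_with_generate` are assumptions not stated in the card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="theus_concepttagger",  # assumption: not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08, as listed above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    predict_with_generate=True,  # assumption: needed to compute ROUGE during eval
)
```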
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4096 | 1.0 | 12753 | 1.6249 | 34.8663 | 15.1526 | 26.1224 | 26.5164 | 62.4475 |
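
The ROUGE columns can be recomputed with the 🤗 Evaluate library. A toy sketch with placeholder strings, not the exact metric code from training (note that the Trainer reports scores multiplied by 100):

```python
import evaluate

rouge = evaluate.load("rouge")

# Toy placeholders: in practice, predictions are the model's generated
# summaries and references are the gold XSum summaries.
predictions = ["Heavy rain left towns in the region without power."]
references = ["Flooding knocked out power to several towns over the weekend."]

scores = rouge.compute(predictions=predictions, references=references)
# evaluate returns fractions in [0, 1]; the table above reports them x 100.
print({k: round(v * 100, 4) for k, v in scores.items()})
```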
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_namanpundir__theus_concepttagger).
| Metric |Value|
|---------------------------------|----:|
|Avg. |28.29|
|AI2 Reasoning Challenge (25-Shot)|24.57|
|HellaSwag (10-Shot) |25.50|
|MMLU (5-Shot) |23.12|
|TruthfulQA (0-shot) |48.25|
|Winogrande (5-shot) |48.30|
|GSM8k (5-shot) | 0.00|