---
base_model: huihui-ai/Falcon3-10B-Instruct-abliterated
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
datasets:
- mlabonne/orpo-dpo-mix-40k
model-index:
- name: BuddyGlassUncensored2025.2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 77.31
      name: averaged accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=darkc0de%2FBuddyGlassUncensored2025.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: SaylorTwift/bbh
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 43.57
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=darkc0de%2FBuddyGlassUncensored2025.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: lighteval/MATH-Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 22.89
      name: exact match
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=darkc0de%2FBuddyGlassUncensored2025.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 10.4
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=darkc0de%2FBuddyGlassUncensored2025.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 9.39
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=darkc0de%2FBuddyGlassUncensored2025.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 37.07
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=darkc0de%2FBuddyGlassUncensored2025.2
      name: Open LLM Leaderboard
---
`huihui-ai/Falcon3-10B-Instruct-abliterated` trained on `mlabonne/orpo-dpo-mix-40k` for one full epoch.
# Model Card for darkc0de/BuddyGlassUncensored2025.2

## Model Details

### Model Description
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Causal decoder-only language model
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model [optional]:** huihui-ai/Falcon3-10B-Instruct-abliterated
### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
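As a hedged starting point, the standard `transformers` chat pattern should apply. The repo id `darkc0de/BuddyGlassUncensored2025.2` is inferred from the leaderboard link in the metadata, and the chat template is assumed to be inherited from the Falcon3-Instruct base:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the Open LLM Leaderboard search link above.
MODEL_ID = "darkc0de/BuddyGlassUncensored2025.2"

def chat(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a single-turn reply using the model's chat template."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens; decode only the newly generated reply.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Calling `chat("Hello")` downloads the roughly 10B-parameter checkpoint, so a GPU with sufficient memory (or a quantized variant) is advisable.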
## Training Details

### Training Data

The model was trained for one full epoch on [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k), a mixed ORPO/DPO preference dataset.

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]
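The exact training setup is undocumented beyond "one epoch on orpo-dpo-mix-40k". Given the `trl` and `unsloth` tags, a run of this shape is plausible; this is a hypothetical sketch, not the author's actual script, and every hyperparameter shown (`beta`, batch size) is an assumption:

```python
def build_trainer():
    """Hypothetical ORPO fine-tuning setup; hyperparameters are assumptions."""
    from datasets import load_dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from trl import ORPOConfig, ORPOTrainer

    base = "huihui-ai/Falcon3-10B-Instruct-abliterated"
    model = AutoModelForCausalLM.from_pretrained(base)
    tokenizer = AutoTokenizer.from_pretrained(base)
    # orpo-dpo-mix-40k provides prompt/chosen/rejected preference pairs.
    dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

    config = ORPOConfig(
        output_dir="BuddyGlassUncensored2025.2",
        num_train_epochs=1,              # "one full epoch", per the summary above
        beta=0.1,                        # assumed ORPO odds-ratio weight
        per_device_train_batch_size=2,   # assumed
    )
    # Note: recent trl releases take processing_class= instead of tokenizer=.
    return ORPOTrainer(model=model, args=config,
                       train_dataset=dataset, tokenizer=tokenizer)

# build_trainer().train() would launch the run (downloads the 10B base model).
```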
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

See the Open LLM Leaderboard evaluation results below.

#### Summary

## Model Examination [optional]

[More Information Needed]
## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here! Summarized results can be found here!

| Metric | Value (%) |
|--------|----------:|
| **Average** | **33.44** |
| IFEval (0-Shot) | 77.31 |
| BBH (3-Shot) | 43.57 |
| MATH Lvl 5 (4-Shot) | 22.89 |
| GPQA (0-shot) | 10.40 |
| MuSR (0-shot) | 9.39 |
| MMLU-PRO (5-shot) | 37.07 |
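The headline Average appears to be the unweighted mean of the six benchmark scores; a quick sanity check:

```python
# Per-benchmark scores from the table above, in row order.
scores = [77.31, 43.57, 22.89, 10.40, 9.39, 37.07]
average = round(sum(scores) / len(scores), 2)
print(average)  # 33.44, matching the reported Average
```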