April 17, 2024

Felix-8B-v2: A model built with lawfulness alignment

Felix-8B-v2 is an experimental language model developed by Ontocord.ai that specializes in addressing lawfulness concerns under the Biden-Harris Executive Order on AI and the principles of the EU AI Act. It has achieved one of the highest scores on the TruthfulQA benchmark among models of its size, demonstrating strong performance in providing accurate and reliable responses. Felix-8B-v2 is an experimental research work product: a DPO (Direct Preference Optimization) reinforcement learning version of ontocord/sft-4e-exp2, which is in turn a fine-tuned version of TencentARC/Mistral_Pro_8B_v0.1. The DPO training used Auto Redteam Triplets (ART), our synthetically generated dataset for reinforcement learning redteaming of EU AI Act and Biden-Harris AI Executive Order concerns.
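The sketch below illustrates how DPO training on preference triplets of this kind is commonly set up with the Hugging Face TRL library. It is illustrative only: the example rows, hyperparameters, and trainer arguments are placeholders, not the actual ART data or training recipe, and argument names vary between TRL versions.

```python
# Illustrative sketch of DPO training on preference triplets with the
# Hugging Face TRL library; the rows, hyperparameters, and trainer
# arguments below are placeholders, not the actual ART data or recipe.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "ontocord/sft-4e-exp2"  # the SFT checkpoint named above
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# ART-style triplet: a prompt, a preferred (lawful/safe) response,
# and a rejected response.
triplets = Dataset.from_dict({
    "prompt":   ["How can I bypass a software license check?"],
    "chosen":   ["I can't help with circumventing license protections, but I can explain lawful alternatives ..."],
    "rejected": ["Sure, start by patching the binary ..."],
})

config = DPOConfig(output_dir="felix-dpo", beta=0.1, per_device_train_batch_size=1)
trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=triplets,
    tokenizer=tokenizer,  # renamed to processing_class in newer TRL releases
)
trainer.train()
```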

This model is identical to Felix-8B except that we modified the </s> and <s> special tokens of the original Felix-8B DPO model to fix an issue with overly verbose output.
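For illustration, the sketch below shows one way to align the <s>/</s> special tokens with the model and generation configs so that decoding stops at </s>; it is a generic example, not the exact edit applied to the released weights.

```python
# Generic sketch: make the BOS/EOS special tokens and the generation
# config agree so that generation terminates at </s> instead of
# continuing past the intended end of the response.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "ontocord/Felix-8B-v2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

tokenizer.bos_token = "<s>"
tokenizer.eos_token = "</s>"
model.config.bos_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.generation_config.eos_token_id = tokenizer.eos_token_id
```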

Please give feedback in the Community section. If you find any issues, let us know there so we can improve the model.


Model Description

Felix-8B is an 8-billion-parameter language model trained using Ontocord.ai's proprietary auto-purpleteaming technique. The model has been fine-tuned and optimized on synthetic data with the goal of improving its robustness and its ability to handle a wide range of tasks while maintaining a strong focus on safety and truthfulness.
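A minimal generation example using the standard transformers API is shown below; the prompt and decoding settings are placeholders, and bfloat16 matches the tensor type of the stored weights.

```python
# Minimal text-generation example; prompt and settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "ontocord/Felix-8B-v2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "What safeguards does the EU AI Act require for high-risk AI systems?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```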

Model size: 8.99B params (Safetensors, BF16)
