---
license: wtfpl
---

# CausalLM 34B β

## MT-Bench: 8.5

![mt-bench](https://cdn-uploads.huggingface.co/production/uploads/63468a143ea42ee2cb49ddd1/2vv2_nGbfWuOM8jwy40dn.png)

## Knowledge cutoff:

ALL: 2023-06 (pretrained)

**Some** General Knowledge and Politics: 2024-02-01

## Some contamination detection if you want to check:

| Models                    | MMLU (ref: llama7b) | TBA |
| ------------------------- | ------------------- | --- |
| microsoft/Orca-2-7b       | 0.77                |     |
| mistralai/Mistral-7B-v0.1 | 0.46                |     |
| **CausalLM/34b-beta**     | **0.38**            |     |
| 01-ai/Yi-6B-200K          | 0.30                |     |

Data from https://huggingface.co/spaces/Yeyito/llm_contamination_detector

It should be *safe*: the model was not trained on the benchmark itself, but some contamination of the training dataset is unavoidable due to cost constraints.
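
## Usage

A minimal loading-and-generation sketch with 🤗 `transformers`, not taken from the original card: it assumes the model follows the standard causal-LM interface, that the tokenizer ships a usable chat template, and that bf16 weights fit your hardware. Adjust `device_map`, dtype, and sampling parameters as needed; a 34B model typically requires multiple GPUs or quantization.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CausalLM/34b-beta"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 fits your setup
    device_map="auto",           # spreads layers across available devices
)

# Assumption: the tokenizer's bundled chat template matches the model's
# expected prompt format; verify against the repo before relying on it.
messages = [{"role": "user", "content": "Briefly explain benchmark contamination."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```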