File size: 1,731 Bytes
d36aad2
 
 
 
fd075f4
d36aad2
 
c6c5a05
d36aad2
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
43dd5f9
d36aad2
 
43dd5f9
d36aad2
 
74785b6
43dd5f9
 
 
 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
---
license: mit
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-70B
library_name: transformers
---

# R1 1776 Distill Llama 70B

Blog link: [https://perplexity.ai/hub/blog/open-sourcing-r1-1776](https://perplexity.ai/hub/blog/open-sourcing-r1-1776 ) 

This is a Llama 70B distilled version of [R1 1776](https://huggingface.co/perplexity-ai/r1-1776).

R1 1776 is a DeepSeek-R1 reasoning model that has been post-trained by Perplexity AI to remove Chinese Communist Party censorship. 
The model provides unbiased, accurate, and factual information while maintaining high reasoning capabilities.

## Evals

To ensure our model remains fully “uncensored” and capable of engaging with a broad spectrum of sensitive topics, 
we curated a diverse, multilingual evaluation set of over a 1000 of examples that comprehensively cover such subjects. 
We then use human annotators as well as carefully designed LLM judges to measure the likelihood a model will evade or 
provide overly sanitized responses to the queries.

We also ensured that the model’s math and reasoning abilities remained intact after the decensoring process. 
Evaluations on multiple benchmarks showed that our post-trained model performed on par with the base R1 model, 
indicating that the decensoring had no impact on its core reasoning capabilities.

| Benchmark | R1-Distill-Llama-70B | R1-1776-Distill-Llama-70B |
| --- | --- | --- |
| China Censorship |  80.53 | 0.2 |
| Internal Benchmarks (avg) | 47.64 |  48.4 |
| AIME 2024 | 70 | 70 |
| MATH-500 | 94.5 | 94.8 |
| MMLU | 88.52 * | 88.40 |
| DROP | 84.55 * | 84.83 |
| GPQA | 65.2 | 65.05 |

\* Evaluated by Perplexity AI since they were not reported in the [paper](https://arxiv.org/abs/2501.12948).