|
--- |
|
license: other |
|
license_link: https://huggingface.co/microsoft/phi-4/resolve/main/LICENSE |
|
language: |
|
- en |
|
pipeline_tag: text-generation |
|
tags: |
|
- phi |
|
- nlp |
|
- math |
|
- code |
|
- chat |
|
- conversational |
|
inference: |
|
parameters: |
|
temperature: 0 |
|
widget: |
|
- messages: |
|
- role: user |
|
content: How should I explain the Internet? |
|
library_name: transformers |
|
--- |
|
|
|
**THIS IS A MIRROR OF https://ai.azure.com/explore/models/Phi-4/ ALONG WITH A CONVERTED TOKENIZER FOR llama.cpp** |
|
|
|
|
|
... OK, the tokenizer seems a bit off. In the log below, note `tokenizer.ggml.model str = llama` (i.e. SentencePiece) and vocabulary entries such as `"▁Ġ"`, which suggest a BPE vocabulary mis-converted to SPM; the generated output is garbled accordingly:
|
|
|
``` |
|
llama-cli -m phi-4.etf16-Q6_K.gguf -p "Tell me a joke." -n 256 -t 8 -c 2048 --temp 0.8 -ngl 99 |
|
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no |
|
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no |
|
ggml_cuda_init: found 2 CUDA devices: |
|
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes |
|
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes |
|
build: 1153 (d583cd03) with cc (GCC) 14.2.1 20240912 (Red Hat 14.2.1-3) for x86_64-redhat-linux |
|
main: llama backend init |
|
main: load the model and apply lora adapter, if any |
|
llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3090) - 24111 MiB free |
|
llama_load_model_from_file: using device CUDA1 (NVIDIA GeForce RTX 3090) - 24111 MiB free |
|
llama_model_loader: loaded meta data with 29 key-value pairs and 243 tensors from phi-4.etf16-Q6_K.gguf (version GGUF V3 (latest)) |
|
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. |
|
llama_model_loader: - kv 0: general.architecture str = phi3 |
|
llama_model_loader: - kv 1: general.type str = model |
|
llama_model_loader: - kv 2: general.name str = Phi 4 |
|
llama_model_loader: - kv 3: general.version str = 4 |
|
llama_model_loader: - kv 4: general.organization str = Microsoft |
|
llama_model_loader: - kv 5: general.basename str = phi |
|
llama_model_loader: - kv 6: general.size_label str = 15B |
|
llama_model_loader: - kv 7: phi3.context_length u32 = 16384 |
|
llama_model_loader: - kv 8: phi3.rope.scaling.original_context_length u32 = 16384 |
|
llama_model_loader: - kv 9: phi3.embedding_length u32 = 5120 |
|
llama_model_loader: - kv 10: phi3.feed_forward_length u32 = 17920 |
|
llama_model_loader: - kv 11: phi3.block_count u32 = 40 |
|
llama_model_loader: - kv 12: phi3.attention.head_count u32 = 40 |
|
llama_model_loader: - kv 13: phi3.attention.head_count_kv u32 = 10 |
|
llama_model_loader: - kv 14: phi3.attention.layer_norm_rms_epsilon f32 = 0.000010 |
|
llama_model_loader: - kv 15: phi3.rope.dimension_count u32 = 128 |
|
llama_model_loader: - kv 16: phi3.rope.freq_base f32 = 250000.000000 |
|
llama_model_loader: - kv 17: general.file_type u32 = 18 |
|
llama_model_loader: - kv 18: phi3.attention.sliding_window u32 = 100352 |
|
llama_model_loader: - kv 19: tokenizer.ggml.model str = llama |
|
llama_model_loader: - kv 20: tokenizer.ggml.pre str = default |
|
llama_model_loader: - kv 21: tokenizer.ggml.tokens arr[str,100352] = ["<unk>", "▁Ġ", "er", "in", "on", ... |
|
llama_model_loader: - kv 22: tokenizer.ggml.scores arr[f32,100352] = [0.000000, -0.000000, -1.000000, -2.0... |
|
llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,100352] = [2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... |
|
llama_model_loader: - kv 24: tokenizer.ggml.bos_token_id u32 = 100257 |
|
llama_model_loader: - kv 25: tokenizer.ggml.eos_token_id u32 = 100257 |
|
llama_model_loader: - kv 26: tokenizer.ggml.padding_token_id u32 = 100257 |
|
llama_model_loader: - kv 27: tokenizer.chat_template str = {% for message in messages %}{% if (m... |
|
llama_model_loader: - kv 28: general.quantization_version u32 = 2 |
|
llama_model_loader: - type f32: 81 tensors |
|
llama_model_loader: - type f16: 1 tensors |
|
llama_model_loader: - type q6_K: 161 tensors |
|
llm_load_vocab: SPM vocabulary, but newline token not found: unordered_map::at! Using special_pad_id instead.
llm_load_vocab: special tokens cache size = 97
|
llm_load_vocab: token to piece cache size = 0.7072 MB |
|
llm_load_print_meta: format = GGUF V3 (latest) |
|
llm_load_print_meta: arch = phi3 |
|
llm_load_print_meta: vocab type = SPM |
|
llm_load_print_meta: n_vocab = 100352 |
|
llm_load_print_meta: n_merges = 0 |
|
llm_load_print_meta: vocab_only = 0 |
|
llm_load_print_meta: n_ctx_train = 16384 |
|
llm_load_print_meta: n_embd = 5120 |
|
llm_load_print_meta: n_layer = 40 |
|
llm_load_print_meta: n_head = 40 |
|
llm_load_print_meta: n_head_kv = 10 |
|
llm_load_print_meta: n_rot = 128 |
|
llm_load_print_meta: n_swa = 100352 |
|
llm_load_print_meta: n_embd_head_k = 128 |
|
llm_load_print_meta: n_embd_head_v = 128 |
|
llm_load_print_meta: n_gqa = 4 |
|
llm_load_print_meta: n_embd_k_gqa = 1280 |
|
llm_load_print_meta: n_embd_v_gqa = 1280 |
|
llm_load_print_meta: f_norm_eps = 0.0e+00 |
|
llm_load_print_meta: f_norm_rms_eps = 1.0e-05 |
|
llm_load_print_meta: f_clamp_kqv = 0.0e+00 |
|
llm_load_print_meta: f_max_alibi_bias = 0.0e+00 |
|
llm_load_print_meta: f_logit_scale = 0.0e+00 |
|
llm_load_print_meta: n_ff = 17920 |
|
llm_load_print_meta: n_expert = 0 |
|
llm_load_print_meta: n_expert_used = 0 |
|
llm_load_print_meta: causal attn = 1 |
|
llm_load_print_meta: pooling type = 0 |
|
llm_load_print_meta: rope type = 2 |
|
llm_load_print_meta: rope scaling = linear |
|
llm_load_print_meta: freq_base_train = 250000.0 |
|
llm_load_print_meta: freq_scale_train = 1 |
|
llm_load_print_meta: n_ctx_orig_yarn = 16384 |
|
llm_load_print_meta: rope_finetuned = unknown |
|
llm_load_print_meta: ssm_d_conv = 0 |
|
llm_load_print_meta: ssm_d_inner = 0 |
|
llm_load_print_meta: ssm_d_state = 0 |
|
llm_load_print_meta: ssm_dt_rank = 0 |
|
llm_load_print_meta: ssm_dt_b_c_rms = 0 |
|
llm_load_print_meta: model type = 14B |
|
llm_load_print_meta: model ftype = Q6_K |
|
llm_load_print_meta: model params = 14.66 B |
|
llm_load_print_meta: model size = 11.77 GiB (6.89 BPW) |
|
llm_load_print_meta: general.name = Phi 4 |
|
llm_load_print_meta: BOS token = 100257 '<|endoftext|>' |
|
llm_load_print_meta: EOS token = 100257 '<|endoftext|>' |
|
llm_load_print_meta: EOT token = 100265 '<|im_end|>' |
|
llm_load_print_meta: UNK token = 0 '<unk>' |
|
llm_load_print_meta: PAD token = 100257 '<|endoftext|>' |
|
llm_load_print_meta: FIM PRE token = 100258 '<|fim_prefix|>' |
|
llm_load_print_meta: FIM SUF token = 100260 '<|fim_suffix|>' |
|
llm_load_print_meta: FIM MID token = 100259 '<|fim_middle|>' |
|
llm_load_print_meta: EOG token = 100257 '<|endoftext|>' |
|
llm_load_print_meta: EOG token = 100265 '<|im_end|>' |
|
llm_load_print_meta: max token length = 33 |
|
llm_load_tensors: offloading 40 repeating layers to GPU |
|
llm_load_tensors: offloading output layer to GPU |
|
llm_load_tensors: offloaded 41/41 layers to GPU |
|
llm_load_tensors: CPU_Mapped model buffer size = 980.00 MiB |
|
llm_load_tensors: CUDA0 model buffer size = 5599.45 MiB |
|
llm_load_tensors: CUDA1 model buffer size = 5468.14 MiB |
|
................................................................................... |
|
llama_new_context_with_model: n_seq_max = 1 |
|
llama_new_context_with_model: n_ctx = 2048 |
|
llama_new_context_with_model: n_ctx_per_seq = 2048 |
|
llama_new_context_with_model: n_batch = 2048 |
|
llama_new_context_with_model: n_ubatch = 512 |
|
llama_new_context_with_model: flash_attn = 0 |
|
llama_new_context_with_model: freq_base = 250000.0 |
|
llama_new_context_with_model: freq_scale = 1 |
|
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (16384) -- the full capacity of the model will not be utilized |
|
llama_kv_cache_init: CUDA0 KV buffer size = 210.00 MiB |
|
llama_kv_cache_init: CUDA1 KV buffer size = 190.00 MiB |
|
llama_new_context_with_model: KV self size = 400.00 MiB, K (f16): 200.00 MiB, V (f16): 200.00 MiB |
|
llama_new_context_with_model: CUDA_Host output buffer size = 0.38 MiB |
|
llama_new_context_with_model: pipeline parallelism enabled (n_copies=6) |
|
llama_new_context_with_model: CUDA0 compute buffer size = 289.01 MiB |
|
llama_new_context_with_model: CUDA1 compute buffer size = 310.02 MiB |
|
llama_new_context_with_model: CUDA_Host compute buffer size = 34.04 MiB |
|
llama_new_context_with_model: graph nodes = 1606 |
|
llama_new_context_with_model: graph splits = 3 |
|
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable) |
|
main: llama threadpool init, n_threads = 8 |
|
|
|
system_info: n_threads = 8 (n_threads_batch = 8) / 24 | CUDA : ARCHS = 860 | F16 = 1 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 512 | FA_ALL_QUANTS = 1 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | |
|
|
|
sampler seed: 96750315 |
|
sampler params: |
|
repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000 |
|
dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = -1 |
|
top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, temp = 0.800 |
|
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000 |
|
sampler chain: logits -> logit-bias -> penalties -> dry -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist |
|
generate: n_ctx = 2048, n_batch = 2048, n_predict = 256, n_keep = 1 |
|
|
|
Tell me a joke.ordord Ġteaspoon ĠI Ġteaspoon g Ġteaspoon ĠV Ġteaspoon g Ġteaspoonart Ġteaspoon Ġk Ġteaspoon Aord Ġteaspoonill Ġteaspoon g Ġteaspoon i Ġteaspoonher Ġteaspoon g Ġv Ġtriplet Ġteaspoonart Ġteaspoon Ġk Ġteaspoon ĠIord Ġteaspoon i Ġteaspoon g Ġteaspoon ĠV Ġteaspoonher Ġteaspoon Ġk Ġteaspoonra Ġteaspoon , Ġteaspoon Ġk Ġteaspoon1⁄4ord Ġteaspoon Ġun Ġteaspoon Ġk ĠteaspoonRE Ġteaspoonher Ġteaspoon g Ġteaspoon , Ġteaspoon Ġkord Ġteaspoon1⁄4 Ġteaspoon A Ġteaspoon , Ġteaspoon Ġk Ġteaspoon Aord Ġteaspoon i Ġteaspoon g Ġteaspoonill Ġteaspoonell Ġteaspoon g Ġteaspoon ĠVord Ġteaspoon1⁄4 Ġteaspoonill Ġv Ġtriplet Ġteaspoon ĠD Ġteaspoon1⁄4 Ġteaspoon); Ġteaspoon1⁄4 Ġteaspoon Aord Ġteaspoonell Ġteaspoon1⁄4 Ġteaspoonher Ġteaspoon1⁄4 Ġteaspoonell Ġteaspoon ĠV Ġteaspoon Ġk Ġteaspoon); Ġv Ġtriplet Ġteaspoon ĠIord Ġteaspoonell Ġteaspoon1⁄4ord Ġteaspoon1⁄4 Ġteaspoon A Ġteaspoon , Ġteaspoon g Ġteaspoonart Ġteaspoon Ġk Ġteaspoon Aord Ġteaspoon); Ġv Ġtriplet Ġteaspoonill Ġteaspoon g ĠteaspoonRE Ġteaspoon g Ġteaspoonart Ġteaspoon Aord Ġteaspoon i Ġteaspoon g Ġteaspoonher Ġteaspoon A Ġteaspoon1⁄4 Ġteaspoonher Ġteaspoon , Ġv Ġtriplet Ġteaspoon ĠIord Ġteaspoon A Ġteaspoon1⁄4 Ġteaspoon Ġk Ġteaspoonell Ġteaspoon g Ġteaspoon); Ġteaspoonest Ġteaspoon Ġk Ġteaspoon Ġg Ġteaspoon Ġk Ġteaspoonct Ġteaspoon1⁄4 Ġteaspoon ĠD Ġteaspoon Ġk Ġv Ġtripletord ĠteaspoonRE Ġteaspoon Ġk Ġteaspoon ĠD Ġteaspoonop Ġteaspoonher Ġteaspoon g Ġteaspoonart Ġteaspoon Ġk Ġteaspoon ĠIar [end of text] |
|
|
|
|
|
llama_perf_sampler_print: sampling time = 6.05 ms / 246 runs ( 0.02 ms per token, 40634.29 tokens per second) |
|
llama_perf_context_print: load time = 1693.08 ms |
|
llama_perf_context_print: prompt eval time = 26.42 ms / 7 tokens ( 3.77 ms per token, 264.96 tokens per second) |
|
llama_perf_context_print: eval time = 3993.62 ms / 238 runs ( 16.78 ms per token, 59.60 tokens per second) |
|
llama_perf_context_print: total time = 4034.65 ms / 245 tokens |
|
``` |
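If you want to verify the conversion yourself, the token ids produced by the GGUF file can be compared against the reference `transformers` tokenizer. A minimal sketch, assuming `llama-cpp-python` and `transformers` are installed and the quant file above is in the working directory:

```python
# Sanity-check the converted GGUF tokenizer against the reference HF tokenizer.
# A sketch, not a definitive test: assumes llama-cpp-python and transformers
# are installed and phi-4.etf16-Q6_K.gguf is in the working directory.
from llama_cpp import Llama
from transformers import AutoTokenizer

text = "Tell me a joke."

# Reference tokenization from the original model repo.
hf_tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")
print("HF tokens:  ", hf_tokenizer.encode(text))

# Tokenization from the converted GGUF (vocab_only=True skips loading weights).
llm = Llama(model_path="phi-4.etf16-Q6_K.gguf", vocab_only=True)
print("GGUF tokens:", llm.tokenize(text.encode("utf-8"), add_bos=False))

# If the two id sequences disagree, the conversion is broken.
```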
|
|
|
---- |
|
|
|
The original Microsoft model card follows.
|
|
|
|
|
# Phi-4 Model Card |
|
|
|
## Model Summary |
|
|
|
| | | |
|
|-------------------------|-------------------------------------------------------------------------------| |
|
| **Developers** | Microsoft Research | |
|
| **Description** | `phi-4` is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small capable models were trained with data focused on high quality and advanced reasoning.<br><br>`phi-4` underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. |
|
| **Architecture** | 14B parameters, dense decoder-only Transformer model | |
|
| **Inputs** | Text, best suited for prompts in the chat format | |
|
| **Context length** | 16K tokens | |
|
| **GPUs** | 1920 H100-80G | |
|
| **Training time** | 21 days | |
|
| **Training data** | 9.8T tokens | |
|
| **Outputs** | Generated text in response to input | |
|
| **Dates** | October 2024 – November 2024 | |
|
| **Status** | Static model trained on an offline dataset with cutoff dates of June 2024 and earlier for publicly available data | |
|
| **Release date** | December 12, 2024 | |
|
| **License** | MSRLA | |
|
|
|
## Intended Use |
|
|
|
| | | |
|
|-------------------------------|-------------------------------------------------------------------------| |
|
| **Primary Use Cases** | Our model is designed to accelerate research on language models, for use as a building block for generative AI-powered features. It is intended for general-purpose AI systems and applications (primarily in English) that require:<br><br>1. Memory/compute constrained environments.<br>2. Latency bound scenarios.<br>3. Reasoning and logic. |
|
| **Out-of-Scope Use Cases** | Our model is not specifically designed or evaluated for all downstream purposes, thus:<br><br>1. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios.<br>2. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case, including the model’s focus on English.<br>3. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. |
|
|
|
## Data Overview |
|
|
|
### Training Datasets |
|
|
|
Our training data is an extension of the data used for Phi-3 and includes a wide variety of sources from: |
|
|
|
1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code. |
|
|
|
2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.). |
|
|
|
3. Acquired academic books and Q&A datasets. |
|
|
|
4. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty, and helpfulness.
|
|
|
Multilingual data constitutes about 8% of our overall data. We focused on data quality with the potential to improve the model's reasoning ability, and we filtered the publicly available documents to contain the correct level of knowledge.
|
|
|
#### Benchmark datasets |
|
|
|
We evaluated `phi-4` using [OpenAI’s SimpleEval](https://github.com/openai/simple-evals) and our own internal benchmarks to understand the model’s capabilities. More specifically:
|
|
|
* **MMLU:** Popular aggregated dataset for multitask language understanding. |
|
|
|
* **MATH:** Challenging competition math problems. |
|
|
|
* **GPQA:** Complex, graduate-level science questions. |
|
|
|
* **DROP:** Complex comprehension and reasoning. |
|
|
|
* **MGSM:** Multi-lingual grade-school math. |
|
|
|
* **HumanEval:** Functional code generation. |
|
|
|
* **SimpleQA:** Factual responses. |
|
|
|
## Safety |
|
|
|
### Approach |
|
|
|
`phi-4` has adopted a robust safety post-training approach. This approach leverages a variety of both open-source and in-house generated synthetic datasets. The overall technique employed for safety alignment is a combination of SFT (Supervised Fine-Tuning) and iterative DPO (Direct Preference Optimization), including publicly available datasets focusing on helpfulness and harmlessness as well as various questions and answers targeted at multiple safety categories.
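For reference, the standard DPO objective (the textbook formulation from Rafailov et al., given here for context rather than quoted from the phi-4 technical report) trains the policy $\pi_\theta$ against a frozen reference $\pi_{\text{ref}}$ on preference pairs:

$$
\mathcal{L}_{\text{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]
$$

where $y_w$ and $y_l$ are the preferred and rejected responses for prompt $x$, $\sigma$ is the logistic function, and $\beta$ controls how far the policy may drift from the reference.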
|
|
|
### Safety Evaluation and Red-Teaming |
|
|
|
Prior to release, `phi-4` followed a multi-faceted evaluation approach. Quantitative evaluation was conducted with multiple open-source safety benchmarks and in-house tools utilizing adversarial conversation simulation. For qualitative safety evaluation, we collaborated with the independent AI Red Team (AIRT) at Microsoft to assess safety risks posed by `phi-4` in both average and adversarial user scenarios. In the average user scenario, AIRT emulated typical single-turn and multi-turn interactions to identify potentially risky behaviors. The adversarial user scenario tested a wide range of techniques aimed at intentionally subverting the model’s safety training including jailbreaks, encoding-based attacks, multi-turn attacks, and adversarial suffix attacks. |
|
|
|
Please refer to the technical report for more details on safety alignment. |
|
|
|
## Model Quality |
|
|
|
To understand its capabilities, we compare `phi-4` with a set of models on OpenAI’s SimpleEval benchmark.
|
|
|
Below is a high-level overview of the model quality on representative benchmarks. For the table below, higher numbers indicate better performance:
|
|
|
| **Category** | **Benchmark** | **phi-4** (14B) | **phi-3** (14B) | **Qwen 2.5** (14B instruct) | **GPT-4o-mini** | **Llama-3.3** (70B instruct) | **Qwen 2.5** (72B instruct) | **GPT-4o** | |
|
|------------------------------|---------------|-----------|-----------------|----------------------|----------------------|--------------------|-------------------|-----------------| |
|
| Popular Aggregated Benchmark | MMLU | 84.8 | 77.9 | 79.9 | 81.8 | 86.3 | 85.3 | **88.1** | |
|
| Science | GPQA | **56.1** | 31.2 | 42.9 | 40.9 | 49.1 | 49.0 | 50.6 | |
|
| Math | MGSM<br>MATH | 80.6<br>**80.4** | 53.5<br>44.6 | 79.6<br>75.6 | 86.5<br>73.0 | 89.1<br>66.3* | 87.3<br>80.0 | **90.4**<br>74.6 | |
|
| Code Generation | HumanEval | 82.6 | 67.8 | 72.1 | 86.2 | 78.9* | 80.4 | **90.6** | |
|
| Factual Knowledge | SimpleQA | 3.0 | 7.6 | 5.4 | 9.9 | 20.9 | 10.2 | **39.4** | |
|
| Reasoning | DROP | 75.5 | 68.3 | 85.5 | 79.3 | **90.2** | 76.7 | 80.9 | |
|
|
|
\* These scores are lower than those reported by Meta, perhaps because simple-evals has a strict formatting requirement that Llama models have particular trouble following. We use the simple-evals framework because it is reproducible, but Meta reports 77 for MATH and 88 for HumanEval on Llama-3.3-70B. |
|
|
|
## Usage |
|
|
|
### Input Formats |
|
|
|
Given the nature of the training data, `phi-4` is best suited for prompts using the chat format as follows: |
|
|
|
```
|
<|im_start|>system<|im_sep|> |
|
You are a medieval knight and must provide explanations to modern people.<|im_end|> |
|
<|im_start|>user<|im_sep|> |
|
How should I explain the Internet?<|im_end|> |
|
<|im_start|>assistant<|im_sep|> |
|
``` |
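The same prompt can be built programmatically from the tokenizer's chat template, which avoids hand-writing the special tokens. A minimal sketch using the `transformers` tokenizer for `microsoft/phi-4`:

```python
# Build the chat-format prompt from the tokenizer's chat template
# instead of concatenating <|im_start|>/<|im_sep|>/<|im_end|> by hand.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")

messages = [
    {"role": "system", "content": "You are a medieval knight and must provide explanations to modern people."},
    {"role": "user", "content": "How should I explain the Internet?"},
]

# add_generation_prompt=True appends the assistant header so the model
# continues with its reply; the printed string should match the format above.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```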
|
|
|
### With `transformers` |
|
|
|
```python |
|
import transformers |
|
|
|
pipeline = transformers.pipeline( |
|
"text-generation", |
|
model="microsoft/phi-4", |
|
model_kwargs={"torch_dtype": "auto"}, |
|
device_map="auto", |
|
) |
|
|
|
messages = [ |
|
{"role": "system", "content": "You are a medieval knight and must provide explanations to modern people."}, |
|
{"role": "user", "content": "How should I explain the Internet?"}, |
|
] |
|
|
|
outputs = pipeline(messages, max_new_tokens=128) |
|
print(outputs[0]["generated_text"][-1]) |
|
``` |
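When the pipeline is given a list of messages, `generated_text` contains the full conversation, so indexing with `[-1]` returns the assistant's reply as a `{"role": ..., "content": ...}` dict.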
|
|
|
## Responsible AI Considerations |
|
|
|
Like other language models, `phi-4` can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: |
|
|
|
* **Quality of Service:** The model is trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. `phi-4` is not intended to support multilingual use. |
|
|
|
* **Representation of Harms & Perpetuation of Stereotypes:** These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. |
|
|
|
* **Inappropriate or Offensive Content:** These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. |
|
|
|
* **Information Reliability:** Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. |
|
|
|
* **Limited Scope for Code:** The majority of `phi-4` training data is based in Python and uses common packages such as `typing`, `math`, `random`, `collections`, `datetime`, `itertools`. If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
|
|
|
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Using safety services like [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety) that have advanced guardrails is highly recommended. Important areas for consideration include: |
|
|
|
* **Allocation:** Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. |
|
|
|
* **High-Risk Scenarios:** Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. |
|
|
|
* **Misinformation:** Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). |
|
|
|
* **Generation of Harmful Content:** Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. |
|
|
|
* **Misuse:** Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. |