---
datasets:
- instruction-pretrain/ft-instruction-synthesizer-collection
language:
- en
license: apache-2.0
tags:
- mlx
---

# mlx-community/instruction-pretrain-instruction-synthesizer

The Model [mlx-community/instruction-pretrain-instruction-synthesizer](https://huggingface.co/mlx-community/instruction-pretrain-instruction-synthesizer) was converted to MLX format from [instruction-pretrain/instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer) using mlx-lm version **0.14.3**.

Original paper: [Instruction Pre-Training: Language Models are Supervised Multitask Learners](https://huggingface.co/papers/2406.14491)

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/instruction-pretrain-instruction-synthesizer")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
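
Since this model is an instruction synthesizer rather than a chat model, a more representative prompt is a passage of raw text from which instruction-response pairs are generated. The sketch below is a minimal example under that assumption: the `raw_text` string and `max_tokens` value are illustrative only, and the canonical context template and pair-parsing code live in the original [instruction-pretrain](https://huggingface.co/instruction-pretrain/instruction-synthesizer) repository.

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/instruction-pretrain-instruction-synthesizer")

# Hypothetical raw text; the synthesizer is trained to turn context like
# this into instruction-response pairs (see the original paper and repo).
raw_text = (
    "Photosynthesis is the process by which green plants convert light "
    "energy into chemical energy stored in glucose."
)

# Passing the raw text directly as the prompt is an assumption; the
# upstream instruction-pretrain repo defines the exact context template
# and how to parse the generated pairs from the output.
response = generate(model, tokenizer, prompt=raw_text, max_tokens=400, verbose=True)
```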