import gradio as gr
from huggingface_hub import InferenceClient

"""
For more information on `huggingface_hub` Inference API support, please check the docs: https://huggingface.co/docs/huggingface_hub/v0.22.2/en/guides/inference
"""

DESCRIPTION = '''
This Space demonstrates the instruction-tuned model zephyr-7b-beta by Hugging Face. zephyr-7b-beta is an open 7B-parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. Feel free to play with it, or duplicate it to run privately!
🔎 Zephyr is a series of language models trained to act as helpful assistants. Zephyr-7B-β is the second model in the series: a fine-tuned version of mistralai/Mistral-7B-v0.1 trained on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO). We found that removing the in-built alignment of these datasets boosted performance on MT Bench and made the model more helpful.
🦕 Looking for an even more powerful model? Check out the Hugging Chat integration for Meta Llama 3 70B!
Ask me anything...