---
license: other
library_name: transformers
tags:
- mlx
widget:
- text: |
    <start_of_turn>user
    ## Instructions
    You are a helpful AI assistant.

    ## User
    How does the brain work?<end_of_turn>
    <start_of_turn>model
inference:
  parameters:
    max_new_tokens: 200
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged-in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
datasets:
- teknium/trismegistus-project
language:
- en
---

# alexweberk/gemma-7b-it-trismegistus

This model was converted to MLX format from [`google/gemma-7b-it`](https://huggingface.co/google/gemma-7b-it). Refer to the [original model card](https://huggingface.co/google/gemma-7b-it) for more details on the model.

To run the model with MLX, first install `mlx-lm`:

```bash
pip install mlx-lm
```

The model was LoRA fine-tuned on the [trismegistus-project from teknium](https://huggingface.co/datasets/teknium/trismegistus-project) using the prompt format below, for 600 steps (2M tokens) using `mlx` (a hypothetical reproduction command is sketched at the end of this card).

```python
def format_prompt(system_prompt: str, question: str) -> str:
    """Format the question into the prompt format the model was fine-tuned on."""
    return """<start_of_turn>user
## Instructions
{}

## User
{}<end_of_turn>
<start_of_turn>model
""".format(system_prompt, question)
```

Here's an example of the `system_prompt` from the dataset:

```text
You are a master of the esoteric, occult, 'Big Man' society, you complete tasks to the absolute best of your ability, no matter if you think you were not trained to do the task specifically, you will attempt to do it anyways, since you have performed the tasks you are given with great mastery, accuracy, and deep understanding of what is requested. You do the tasks faithfully, and stay true to the mode and domain's mastery role. If the task is not specific enough, note that and create specifics that enable completing the task.
```

## Loading the model using `mlx_lm`

```python
from mlx_lm import generate, load

model_, tokenizer_ = load("alexweberk/gemma-7b-it-trismegistus")

# Example inputs; use the full system prompt shown above.
system_prompt = "You are a master of the esoteric, occult, 'Big Man' society..."  # truncated for brevity
question = "How does the brain work?"

response = generate(
    model_,
    tokenizer_,
    prompt=format_prompt(system_prompt, question),
    verbose=True,  # Set to True to see the prompt and response
    temp=0.0,
    max_tokens=512,
)
```

## Loading the model using `transformers`

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "alexweberk/gemma-7b-it-trismegistus"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
model.to("mps")  # Apple Silicon; use "cuda" or "cpu" on other hardware

# `system_prompt` and `question` as defined above
input_text = format_prompt(system_prompt, question)
inputs = tokenizer(input_text, return_tensors="pt").to("mps")

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
)
print(tokenizer.decode(outputs[0]))
```
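
Note that `model.generate` returns the prompt tokens followed by the continuation, so the decoded output above echoes the full prompt. If you only want the model's reply, you can slice off the prompt tokens before decoding (a minimal sketch, reusing `inputs` and `outputs` from the block above):

```python
# `outputs[0]` holds the prompt tokens followed by the generated ones;
# skip the prompt so only the model's reply is decoded.
prompt_length = inputs["input_ids"].shape[-1]
print(tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True))
```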
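
## Reproducing the fine-tune (sketch)

The exact training invocation isn't included in this card. Below is a rough, hypothetical sketch of an equivalent `mlx_lm` LoRA run; the `--data` directory, batch size, and any omitted hyperparameters are assumptions rather than the author's actual settings, and only the base model, dataset, and 600-step count come from the description above.

```bash
# Hypothetical reconstruction, not the author's actual command.
# Assumes train.jsonl / valid.jsonl in ./data, with each example
# already rendered through format_prompt above.
python -m mlx_lm.lora \
  --model google/gemma-7b-it \
  --train \
  --data ./data \
  --iters 600 \
  --batch-size 4
```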