---
license: mit
datasets:
- emre/llama-2-instruct-121k-code
language:
- en
---

## Overview

This model, elucidator8918/apigen-prototype-0.1, is tailored for API generation. It is based on the Mistral-7B-Instruct-v0.1-sharded architecture and fine-tuned on the LLAMA-2 Instruct 121k Code dataset.

## Key Information

- **Base Model**: Mistral-7B-Instruct-v0.1-sharded
- **Fine-tuned Model Name**: elucidator8918/apigen-prototype-0.1
- **Dataset**: emre/llama-2-instruct-121k-code
- **Language**: English (en)

## Model Details

- **LoRA Parameters (QLoRA):**
  - LoRA attention dimension: 64
  - Alpha parameter for LoRA scaling: 16
  - Dropout probability for LoRA layers: 0.1
- **bitsandbytes Parameters:**
  - 4-bit precision base model loading: Yes
  - Compute dtype for 4-bit base models: float16
  - Quantization type: nf4
  - Nested quantization for 4-bit base models: No
- **TrainingArguments Parameters:**
  - Number of training epochs: 1
  - Batch size per GPU for training: 4
  - Batch size per GPU for evaluation: 4
  - Gradient accumulation steps: 1
  - Gradient checkpointing: Yes
  - Maximum gradient norm: 0.3
  - Initial learning rate: 2e-4
  - Weight decay: 0.001
  - Optimizer: paged_adamw_32bit
  - Learning rate scheduler type: cosine
  - Warm-up ratio: 0.03
  - Group sequences of the same length into batches: Yes

A sketch of how these settings map onto a QLoRA training setup is given after the Usage section below.

## Usage

- **Example Code (API Generation):**

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

  # Load the fine-tuned model and its tokenizer
  model_name = "elucidator8918/apigen-prototype-0.1"
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

  # Run the text-generation pipeline with an instruction-formatted prompt
  prompt = "Write code to do a POST request in FastAPI framework to find the multiplication of two matrices using NumPy"
  pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=500)
  result = pipe(f"[INST] {prompt} [/INST]")
  print(result[0]["generated_text"])
  ```

- **Example Output (API Generation):**

  ```
  [INST] Write code to do a POST request in fastapi framework to find the multiplication of two matrices using numpy [/INST]
  Below is an example of how to make a POST request in FastAPI to find the multiplication of two matrices using numpy:
  ```

  ```python
  from fastapi import FastAPI, HTTPException
  import numpy as np

  app = FastAPI()

  @app.post("/matrix_multiplication")
  async def matrix_multiplication(matrix1: np.ndarray, matrix2: np.ndarray):
      if matrix1.shape[1] != matrix2.shape[0]:
          raise HTTPException(status_code=400, detail="The number of columns in matrix1 must be equal to the number of rows in matrix2")
      result = np.matmul(matrix1, matrix2)
      return {"result": result}
  ```

  This code defines a FastAPI endpoint at `/matrix_multiplication` that takes two matrices as input and returns the multiplication of the two matrices. The `np.matmul` function is used to perform the multiplication. The endpoint also includes a check to ensure that the number of columns in the first matrix is equal to the number of rows in the second matrix.

  To use this endpoint, you can make a POST request to `http://localhost:8000/matrix_multiplication` with the two matrices as input. The response will include the multiplication of the two matrices.

  ```python
  import requests

  matrix1 = np.array([[1, 2], [3, 4]])
  matrix2 = np.array([[5, 6], [7, 8]])

  response = requests.post("http://localhost:8000/matrix_multiplication", json={"matrix1": matrix1, "matrix2": matrix2})
  print(response.json())
  ```

  This code makes a POST request to the endpoint with the two matrices as input and prints the response. The response should include the multiplication of the two matrices, which is `[[11, 14], [29, 36]]`.
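## Fine-tuning Configuration (Sketch)

The hyperparameters listed under Model Details correspond to a standard QLoRA fine-tuning setup with `bitsandbytes`, `peft`, and `trl`. The sketch below is illustrative rather than the exact training script used for this model: the base checkpoint repo ID, the dataset text column, and the maximum sequence length are assumptions, and the `SFTTrainer` keywords follow the older TRL interface that accepts `dataset_text_field` and `max_seq_length` directly.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from trl import SFTTrainer

# Assumed base checkpoint; the card uses a sharded variant of this model
base_model = "mistralai/Mistral-7B-Instruct-v0.1"

# 4-bit quantization settings from the bitsandbytes parameters above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=False,  # nested quantization disabled
)

model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# LoRA adapter settings from the QLoRA parameters above
peft_config = LoraConfig(
    r=64,              # LoRA attention dimension
    lora_alpha=16,     # alpha scaling parameter
    lora_dropout=0.1,  # dropout on LoRA layers
    bias="none",
    task_type="CAUSAL_LM",
)

# Training hyperparameters from the TrainingArguments parameters above
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    group_by_length=True,
)

dataset = load_dataset("emre/llama-2-instruct-121k-code", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # assumed column name; check the dataset schema
    max_seq_length=512,         # assumed; not stated on the card
    tokenizer=tokenizer,
    args=training_args,
)
trainer.train()
```

Before running this sketch, swap in the actual sharded base checkpoint and the dataset's real text column; the quantization, LoRA, and optimizer values themselves are taken directly from the Model Details section.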
## License

This model is released under the MIT License.