---
license: apache-2.0
tags:
- finetuned
pipeline_tag: text-generation
model-index:
- name: deepseek-coder-6.7b-chat-and-function-calling
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 36.09
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/deepseek-coder-6.7b-chat-and-function-calling
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 53.8
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/deepseek-coder-6.7b-chat-and-function-calling
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 38.29
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/deepseek-coder-6.7b-chat-and-function-calling
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 42.83
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/deepseek-coder-6.7b-chat-and-function-calling
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 57.22
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/deepseek-coder-6.7b-chat-and-function-calling
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 17.21
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/deepseek-coder-6.7b-chat-and-function-calling
      name: Open LLM Leaderboard
---

# deepseek-coder-6.7b-chat-and-function-calling

This model was created by starting from deepseek-coder-6.7b, fine-tuning it on the OpenAssistant dataset, and then fine-tuning that result on function calling. The wandb report is attached as a PDF so the training run can be reviewed at a glance.

# Reason

This model was fine-tuned to work with the OpenAI function-calling syntax and will return a function call when appropriate.

# Template

Use the following template when interacting with the fine-tuned model.

# Referrals

RunPod - This is who I use to train the models on Hugging Face. If you use it, we both get free credits. - Visit RunPod's Website!

PayPal - If you want to leave a tip, it is appreciated. - Visit My PayPal!

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AIGym__deepseek-coder-6.7b-chat-and-function-calling)

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |40.91|
|AI2 Reasoning Challenge (25-Shot)|36.09|
|HellaSwag (10-Shot)              |53.80|
|MMLU (5-Shot)                    |38.29|
|TruthfulQA (0-shot)              |42.83|
|Winogrande (5-shot)              |57.22|
|GSM8k (5-shot)                   |17.21|
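Because the model is expected to return a function call in OpenAI-style JSON when one is appropriate, and plain text otherwise, the caller has to tell the two apart. A minimal sketch of that check is below; it is not part of this model's code, and the `get_weather` function name in the usage example is purely hypothetical:

```python
import json


def parse_function_call(reply: str):
    """Return the function call dict if `reply` is an OpenAI-style
    function call (JSON object with "name" and "arguments" keys),
    otherwise return None for a plain-text reply."""
    try:
        call = json.loads(reply)
    except json.JSONDecodeError:
        return None
    if isinstance(call, dict) and "name" in call and "arguments" in call:
        return call
    return None


# Usage: a JSON reply with both keys parses as a call...
call = parse_function_call('{"name": "get_weather", "arguments": {"city": "Paris"}}')
print(call["name"])  # get_weather

# ...while ordinary chat text falls through to None.
print(parse_function_call("The weather in Paris is mild today."))  # None
```

In a real loop, a `None` result would be shown to the user as-is, while a parsed call would be dispatched to the matching local function and its result fed back to the model.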