---
license: other
datasets:
- rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored
---

- [wandb](https://wandb.ai/open-assistant/epfl-mt-sft/runs/run17_megacode_min100)

This is one of the first models trained on the LosslessMegaCodeTrainingV2_1m_Evol_Uncensored dataset. The version of the dataset used for this model was filtered with only loose parameters that aren't anything to write home about, but plans for much more refined filtering are in the works.

This model was made as a collaboration between me and andreaskoepf, who is an affiliate of Open Assistant. This model is extremely good at coding and might be one of the best coding models for its size, much better than any other 7b-parameter model. Plans for bigger models are coming in the future.

### Prompt template

The [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) format is used:

"<|im_start|>system\n{system message}<|im_end|>\n<|im_start|>user\n{user prompt}<|im_end|>\n<|im_start|>assistant\n{Assistant answer}<|im_end|>\n"

Multi-line:

```
"""
<|im_start|>system
{system message}<|im_end|>
<|im_start|>user
{user prompt}<|im_end|>
<|im_start|>assistant
{Assistant answer}<|im_end|>
"""
```

Benchmarks for the model can be found at the link below; the model is listed there as andreaskoepf/llama2-7b-megacode2_min100:
- https://tju01.github.io/FastEval-OpenAssistant/

The link for the full dataset is below:
- https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored

The link for the filtered dataset used to make this model is below:
- https://huggingface.co/datasets/andreaskoepf/megacode2-min100

The original posting for this model was uploaded at the link below:
- https://huggingface.co/andreaskoepf/llama2-7b-megacode2_min100
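
### Example usage

As a rough sketch of how the chatml prompt template above can be applied in practice, the snippet below builds the prompt string by hand and runs it through the `transformers` library. The model id is taken from the original posting linked above; the generation parameters and example messages are illustrative assumptions, not part of the original training or evaluation setup.

```python
# A minimal sketch of inference with the chatml prompt format described above.
# Assumptions: the model id matches the original posting, the `accelerate`
# package is installed for device_map="auto", and enough memory is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "andreaskoepf/llama2-7b-megacode2_min100"  # from the original posting

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

def build_chatml_prompt(system_message: str, user_prompt: str) -> str:
    """Assemble the chatml string exactly as shown in the template above,
    leaving the assistant turn open for the model to complete."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Hypothetical example messages, chosen only to illustrate the format.
prompt = build_chatml_prompt(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens (the assistant's answer).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```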