---
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
- Vipitis/Shadertoys-fine
pipeline_tag: text-generation
---

[Santacoder](https://huggingface.co/bigcode/santacoder) finetuned on [Shadertoys-fine](https://huggingface.co/datasets/Vipitis/Shadertoys-fine) for 1000 steps with a batch size of 2 and a full sequence length of 2048. The original finetuning script can be found [here](https://github.com/loubnabnl/santacoder-finetuning); an adapted version will follow (soon^^).

The main purpose of this model is to explore whether finetuning improves performance on [ShaderEval](https://huggingface.co/spaces/Vipitis/ShaderEval); results to follow (sooner).

The license is carried over from the base model; however, the training data has an undefined license. Check the details in [Shadertoys](https://huggingface.co/datasets/Vipitis/Shadertoys).
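
Below is a minimal sketch of loading the checkpoint for text generation with 🤗 Transformers. The repo id `Vipitis/santacoder-finetuned-Shadertoys-fine` is assumed here for illustration and may differ from the actual model name; like the base Santacoder, the custom architecture is assumed to require `trust_remote_code=True`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id for illustration; replace with the actual checkpoint name.
checkpoint = "Vipitis/santacoder-finetuned-Shadertoys-fine"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# Santacoder uses a custom model implementation, so remote code must be trusted.
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# Complete a GLSL shader entry point, matching the Shadertoys-style training data.
prompt = "void mainImage( out vec4 fragColor, in vec2 fragCoord )\n{\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```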