---
title: README
emoji: ✨
colorFrom: gray
colorTo: red
sdk: static
pinned: false
---
# BigCode
BigCode is an open scientific collaboration working on the responsible training of large language models for coding applications. You can find more information on the main website or follow BigCode on Twitter. This organization hosts the artifacts of the collaboration: StarCoder, a state-of-the-art language model for code; OctoPack, artifacts for instruction tuning large code models; The Stack, the largest available pretraining dataset of permissively licensed code; and SantaCoder, a 1.1B-parameter model for code.
## 💫 StarCoder
StarCoder is a 15.5B-parameter language model for code, trained on 1 trillion tokens from 80+ programming languages. It uses multi-query attention (MQA) for efficient generation, has an 8,192-token context window, and supports fill-in-the-middle (FIM).
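The snippet below is a minimal sketch of fill-in-the-middle generation with `transformers`, assuming access to the gated `bigcode/starcoder` checkpoint; the `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>` special tokens follow the model card, and the prompt itself is purely illustrative.

```python
# Minimal fill-in-the-middle (FIM) sketch for StarCoder.
# Assumes you have accepted the model license on the Hub and have
# enough memory for a 15.5B-parameter checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# The model generates the code that belongs between prefix and suffix.
prompt = "<fim_prefix>def greet(name):\n    <fim_suffix>\n    return greeting<fim_middle>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```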
### Models
- Paper: A technical report about StarCoder.
- GitHub: All you need to know about using or fine-tuning StarCoder.
- StarCoder: StarCoderBase further trained on Python.
- StarCoderBase: Trained on 80+ languages from The Stack.
- StarCoder+: StarCoderBase further trained on English web data.
- StarEncoder: Encoder model trained on The Stack.
- StarPii: StarEncoder-based PII detector.
### Tools & Demos
- StarCoder Playground: Write with StarCoder Models!
- VSCode Extension: Code with StarCoder!
- StarChat: Chat with StarCoder!
- Tech Assistant Prompt: With this prompt you can turn StarCoder into a tech assistant.
- StarCoder Editor: Edit with StarCoder!
### Data & Governance
- Governance Card: A card outlining the governance of the model.
- StarCoder License Agreement: The model is licensed under the BigCode OpenRAIL-M v1 license agreement.
- StarCoder Data: The pretraining dataset of StarCoder (see the loading sketch after this list).
- StarCoder Search: Full-text search over code in the pretraining dataset.
- StarCoder Membership Test: Blazing-fast check whether code was present in the pretraining dataset.
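As an illustration of how the pretraining data can be inspected, here is a hedged sketch of streaming one language subset of StarCoder Data with `datasets`; the `data_dir` layout and the `content` field follow the `bigcode/starcoderdata` dataset card and should be treated as assumptions if the card has changed.

```python
# Stream a single language subset of StarCoder Data instead of
# downloading the full corpus.
from datasets import load_dataset

ds = load_dataset(
    "bigcode/starcoderdata",
    data_dir="python",   # one per-language subdirectory of the dataset
    split="train",
    streaming=True,
)
for example in ds.take(3):
    print(example["content"][:200])  # raw source text
```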
## 🐙 OctoPack
OctoPack consists of data, evaluations, and models for Code LLMs that follow human instructions.
- Paper: Research paper with details about all components of OctoPack.
- GitHub: All code used for the creation of OctoPack.
- CommitPack: 4TB of Git commits.
- Am I in the CommitPack: Check if your code is in the CommitPack.
- CommitPackFT: 2GB of high-quality Git commits that resemble instructions (see the loading sketch after this list).
- HumanEvalPack: Benchmark for Code Fixing/Explaining/Synthesizing across Python/JavaScript/Java/Go/C++/Rust.
- OctoCoder: StarCoder instruction-tuned on CommitPackFT.
- OctoCoder Demo: Play with OctoCoder.
- OctoGeeX: CodeGeeX2 instruction-tuned on CommitPackFT.
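To make the instruction-tuning data concrete, below is a sketch of loading one language slice of CommitPackFT; the `python` config name and the `message`/`old_contents`/`new_contents` fields follow the dataset card and are assumptions here, not guarantees.

```python
# Load the Python slice of CommitPackFT: each record pairs a commit
# message (the natural-language "instruction") with the code before
# and after the commit.
from datasets import load_dataset

ds = load_dataset("bigcode/commitpackft", "python", split="train")
sample = ds[0]
print(sample["message"])             # commit message / instruction
print(sample["old_contents"][:200])  # code before the commit
print(sample["new_contents"][:200])  # code after the commit
```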
## 📑 The Stack
The Stack is a 6.4TB dataset of source code in 358 programming languages under permissive licenses.
- The Stack: Exact-deduplicated version of The Stack.
- The Stack dedup: Near-deduplicated version of The Stack, recommended for training (see the streaming sketch after this list).
- The Stack issues: Collection of GitHub issues.
- The Stack Metadata: Metadata of the repositories in The Stack.
- Am I in the Stack: Check if your data is in The Stack and request opt-out.
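The sketch below shows one way to stream a single language directory from The Stack dedup with `datasets`, following the access pattern on the dataset card (accepting the terms of use on the Hub is required); the `data/python` path and the `content` field are taken from the card and may have drifted.

```python
# Stream one language directory of The Stack (near-deduplicated)
# rather than downloading multiple terabytes.
from datasets import load_dataset

ds = load_dataset(
    "bigcode/the-stack-dedup",
    data_dir="data/python",  # per-language directory layout from the card
    split="train",
    streaming=True,
)
for row in ds.take(2):
    print(len(row["content"]))  # size of each source file in characters
```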
## 🎅 SantaCoder
SantaCoder, a.k.a. smol StarCoder: the same architecture as StarCoder, but trained only on Python, Java, and JavaScript.
- SantaCoder: The SantaCoder model (see the generation sketch below).
- SantaCoder Demo: Write with SantaCoder.
- SantaCoder Search: Search code in the pretraining dataset.
- SantaCoder License: The OpenRAIL license for SantaCoder.
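For completeness, here is a minimal generation sketch with SantaCoder; per the model card the checkpoint ships custom modeling code, hence `trust_remote_code=True`. The prompt is illustrative only.

```python
# Minimal code-completion sketch with SantaCoder (1.1B parameters).
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/santacoder"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# SantaCoder uses custom architecture code bundled with the checkpoint.
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

inputs = tokenizer("def print_hello_world():", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=24)
print(tokenizer.decode(outputs[0]))
```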