Tabby is an open-source, self-hosted AI coding assistant. With Tabby, every team can set up its own LLM-powered code completion server with ease.
In this guide, you will learn how to deploy your own Tabby instance and use it for development directly from the Hugging Face website.
In this section, you will learn how to deploy a Tabby Space and use it for yourself or your organization.
You can deploy Tabby on Spaces with just a few clicks:
You need to define the Owner (your personal account or an organization), a Space name, and the Visibility. To secure the API endpoint, set the visibility to Private.
You’ll see the Building status. Once it becomes Running, your Space is ready to go. If you don’t see the Tabby Swagger UI, try refreshing the page.
If you want to customize the title, emojis, and colors of your space, go to “Files and Versions” and edit the metadata of your README.md file.
Once Tabby is up and running, for a Space link such as https://huggingface.co/spaces/TabbyML/tabby, the direct URL will be https://tabbyml-tabby.hf.space. This URL provides access to a stable Tabby instance in full-screen mode and serves as the API endpoint for IDE/editor extensions to talk to.
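The mapping from a Space link to its direct URL can be sketched as a small shell helper. This is an illustrative assumption about the pattern shown above (owner and Space name lowercased and joined with a hyphen under `hf.space`), not an official Hugging Face utility:

```shell
# Hypothetical helper: derive the direct *.hf.space URL from a Space's
# owner and name, assuming the lowercased "owner-name.hf.space" pattern.
space_direct_url() {
  owner="$1"
  name="$2"
  printf 'https://%s-%s.hf.space\n' \
    "$(printf '%s' "$owner" | tr '[:upper:]' '[:lower:]')" \
    "$(printf '%s' "$name" | tr '[:upper:]' '[:lower:]')"
}

# For the example Space above:
space_direct_url TabbyML tabby
# → https://tabbyml-tabby.hf.space
```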
Open the Tabby agent config file at `~/.tabby-client/agent/config.toml`. Uncomment both the `[server]` section and the `[server.requestHeaders]` section, and set the endpoint to your Space's direct URL, e.g. `https://UserName-SpaceName.hf.space`. You'll notice a ✓ icon indicating a successful connection.
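A filled-in config might look like the sketch below. The endpoint is the placeholder direct URL from the step above; the `Authorization` header is an assumption for a Private Space, where requests typically need a Hugging Face access token:

```toml
# ~/.tabby-client/agent/config.toml (placeholder values)
[server]
endpoint = "https://UserName-SpaceName.hf.space"

[server.requestHeaders]
# Assumed: a Hugging Face access token, required when the Space is Private.
Authorization = "Bearer YOUR_HF_TOKEN"
```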
You've completed the setup; now enjoy tabbing!
You can also utilize Tabby extensions in other IDEs, such as JetBrains.
If you have improvement suggestions or need specific support, please join the Tabby Slack community or reach out on Tabby's GitHub repository.