Cole Medin committed on
Commit 50de8d0 · 1 Parent(s): 88700c2

Updating README with new features and a link to our community

Files changed (1)
  1. README.md +12 -6
README.md CHANGED
@@ -1,8 +1,12 @@
 [![Bolt.new: AI-Powered Full-Stack Web Development in the Browser](./public/social_preview_index.jpg)](https://bolt.new)
 
-# Bolt.new Fork by Cole Medin
+# Bolt.new Fork by Cole Medin - oTToDev
 
-This fork of Bolt.new allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
+This fork of Bolt.new (oTToDev) allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
+
+Join the community for oTToDev!
+
+https://thinktank.ottomator.ai
 
 # Requested Additions to this Fork - Feel Free to Contribute!!
 
@@ -20,21 +24,23 @@ This fork of Bolt.new allows you to choose the LLM that you use for each prompt!
 - ✅ Publish projects directly to GitHub (@goncaloalves)
 - ✅ Ability to enter API keys in the UI (@ali00209)
 - ✅ xAI Grok Beta Integration (@milutinke)
+- ✅ LM Studio Integration (@karrot0)
+- ✅ HuggingFace Integration (@ahsan3219)
+- ✅ Bolt terminal to see the output of LLM run commands (@thecodacus)
+- ✅ Streaming of code output (@thecodacus)
+- ✅ Ability to revert code to earlier version (@wonderwhy-er)
 - ⬜ **HIGH PRIORITY** - Prevent Bolt from rewriting files as often (file locking and diffs)
 - ⬜ **HIGH PRIORITY** - Better prompting for smaller LLMs (code window sometimes doesn't start)
-- ⬜ **HIGH PRIORITY** Load local projects into the app
+- ⬜ **HIGH PRIORITY** - Load local projects into the app
 - ⬜ **HIGH PRIORITY** - Attach images to prompts
 - ⬜ **HIGH PRIORITY** - Run agents in the backend as opposed to a single model call
 - ⬜ Mobile friendly
-- ⬜ LM Studio Integration
 - ⬜ Together Integration
 - ⬜ Azure Open AI API Integration
-- ⬜ HuggingFace Integration
 - ⬜ Perplexity Integration
 - ⬜ Vertex AI Integration
 - ⬜ Cohere Integration
 - ⬜ Deploy directly to Vercel/Netlify/other similar platforms
-- ⬜ Ability to revert code to earlier version
 - ⬜ Prompt caching
 - ⬜ Better prompt enhancing
 - ⬜ Have LLM plan the project in a MD file for better results/transparency
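
The README paragraph above claims the fork is "easily extended to use any other model supported by the Vercel AI SDK". In practice that comes down to mapping a provider name chosen in the UI to a model instance. The sketch below is purely illustrative — `ModelRef`, `registry`, and `resolveModel` are hypothetical names, not oTToDev's actual code — but it shows the shape of a per-prompt provider registry where supporting a new provider is a single registration:

```typescript
// Hypothetical sketch of per-prompt model selection via a provider registry.
// In the real app each entry would wrap a Vercel AI SDK provider
// (e.g. createOpenAI(), createAnthropic()) rather than a plain object.

type ModelRef = { provider: string; model: string };
type ProviderFactory = (model: string) => ModelRef;

const registry = new Map<string, ProviderFactory>();

// Providers named in the README; extending the app means adding one entry here.
for (const name of [
  "OpenAI", "Anthropic", "Ollama", "OpenRouter", "Gemini",
  "LMStudio", "Mistral", "xAI", "HuggingFace", "DeepSeek", "Groq",
]) {
  registry.set(name, (model) => ({ provider: name, model }));
}

// Resolve the provider/model pair the user picked for a single prompt.
function resolveModel(providerName: string, model: string): ModelRef {
  const factory = registry.get(providerName);
  if (!factory) throw new Error(`Unknown provider: ${providerName}`);
  return factory(model);
}
```

Because each prompt resolves its model through the registry independently, switching providers between prompts needs no global state change — which is what makes the per-prompt selection cheap to support.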