UI Glow - merge Stable Additions first as this is based on it #492

Closed
35 commits
604ab87
feat: add custom unique filename when download as zip
lassecapel Nov 17, 2024
399affd
update comment to reflect the codeline
lassecapel Nov 17, 2024
8978ed0
use a descriptive unique filename when downloading the files to zip
lassecapel Nov 20, 2024
43839b1
[fix]: artifact actionlist rendering in chat
PuneetP16 Nov 26, 2024
394d143
feat: prompt caching
SujalXplores Nov 26, 2024
62bdbc4
chore(doc): add prompt caching to README
SujalXplores Nov 26, 2024
651a4f8
Updated README Headings and Ollama Section
dustinwloring1988 Dec 1, 2024
cb1fd38
linting fix
dustinwloring1988 Dec 1, 2024
88479fe
Merge pull request #9 from lassecapel/feat-add-custom-project-name
dustinwloring1988 Dec 1, 2024
416f7f9
Update constants.ts
dustinwloring1988 Dec 1, 2024
9d82637
Update docker-compose.yaml
dustinwloring1988 Dec 1, 2024
c4347cb
added collapsible chat area
dustinwloring1988 Dec 1, 2024
35107ad
Update ExamplePrompts.tsx
dustinwloring1988 Dec 1, 2024
2803f85
Merge pull request #11 from PuneetP16/fix-artifact-code-block-rendering
dustinwloring1988 Dec 1, 2024
78d5202
Merge pull request #10 from SujalXplores/feat/prompt-caching
dustinwloring1988 Dec 1, 2024
a35e493
Update BaseChat.module.scss
dustinwloring1988 Dec 1, 2024
d85dff6
Update BaseChat.tsx
dustinwloring1988 Dec 1, 2024
b3d6181
Merge pull request #16 from dustinwloring1988/default-prompt-change
dustinwloring1988 Dec 1, 2024
7066bfc
Merge pull request #17 from dustinwloring1988/collapsible-model-and-p…
dustinwloring1988 Dec 1, 2024
a70c0ea
Merge pull request #18 from dustinwloring1988/pretty-up
dustinwloring1988 Dec 1, 2024
ca4bcba
Merge pull request #20 from dustinwloring1988/readme-heading-ollama-s…
dustinwloring1988 Dec 1, 2024
5a7f491
Merge pull request #15 from dustinwloring1988/artifact-code-block
dustinwloring1988 Dec 1, 2024
a79aa1f
Merge pull request #19 from dustinwloring1988/unique-name-on-download…
dustinwloring1988 Dec 1, 2024
54cb475
Merge branch 'stable-additions' into linting-fix
dustinwloring1988 Dec 1, 2024
8a6d32f
Merge pull request #21 from dustinwloring1988/linting-fix
dustinwloring1988 Dec 1, 2024
b0b617d
lint fix
dustinwloring1988 Dec 1, 2024
7574337
fixed path
dustinwloring1988 Dec 1, 2024
44c3b59
Merge pull request #22 from dustinwloring1988/stable-additions
dustinwloring1988 Dec 1, 2024
e187e48
Merge pull request #23 from dustinwloring1988/prompt-caching
dustinwloring1988 Dec 1, 2024
0dbb155
Merge branch 'dev' into ui-glow
dustinwloring1988 Dec 1, 2024
dd196d3
Merge pull request #26 from dustinwloring1988/stable-additions
dustinwloring1988 Dec 1, 2024
df58e86
Merge pull request #25 from dustinwloring1988/ui-glow
dustinwloring1988 Dec 1, 2024
1118a4c
small fixes
dustinwloring1988 Dec 1, 2024
2439e1c
hotfix
dustinwloring1988 Dec 1, 2024
143ba5d
hotfix for test and lint done
dustinwloring1988 Dec 1, 2024
README.md (40 changes: 11 additions & 29 deletions)
@@ -4,11 +4,11 @@

This fork of Bolt.new (oTToDev) allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.

Join the community for oTToDev!
## Join the community for oTToDev!

https://thinktank.ottomator.ai

# Requested Additions to this Fork - Feel Free to Contribute!!
## Requested Additions to this Fork - Feel Free to Contribute!!

- ✅ OpenRouter Integration (@coleam00)
- ✅ Gemini Integration (@jonathands)
@@ -31,6 +31,7 @@ https://thinktank.ottomator.ai
- ✅ Ability to revert code to earlier version (@wonderwhy-er)
- ✅ Cohere Integration (@hasanraiyan)
- ✅ Dynamic model max token length (@hasanraiyan)
- ✅ Prompt caching (@SujalXplores)
- ⬜ **HIGH PRIORITY** - Prevent Bolt from rewriting files as often (file locking and diffs)
- ⬜ **HIGH PRIORITY** - Better prompting for smaller LLMs (code window sometimes doesn't start)
- ⬜ **HIGH PRIORITY** - Load local projects into the app
@@ -42,14 +43,13 @@ https://thinktank.ottomator.ai
- ⬜ Perplexity Integration
- ⬜ Vertex AI Integration
- ⬜ Deploy directly to Vercel/Netlify/other similar platforms
- ⬜ Prompt caching
- ⬜ Better prompt enhancing
- ⬜ Have LLM plan the project in a MD file for better results/transparency
- ⬜ VSCode Integration with git-like confirmations
- ⬜ Upload documents for knowledge - UI design templates, a code base to reference coding style, etc.
- ⬜ Voice prompting

# Bolt.new: AI-Powered Full-Stack Web Development in the Browser
## Bolt.new: AI-Powered Full-Stack Web Development in the Browser

Bolt.new is an AI-powered web development agent that allows you to prompt, run, edit, and deploy full-stack applications directly from your browser—no local setup required. If you're here to build your own AI-powered web dev agent using the Bolt open source codebase, [click here to get started!](./CONTRIBUTING.md)

@@ -124,6 +124,13 @@ Optionally, you can set the debug level:
VITE_LOG_LEVEL=debug
```

And if using Ollama, set DEFAULT_NUM_CTX; the example below uses an 8K context window with Ollama running on localhost port 11434:

```
OLLAMA_API_BASE_URL=http://localhost:11434
DEFAULT_NUM_CTX=8192
```

**Important**: Never commit your `.env.local` file to version control. It's already included in .gitignore.

## Run with Docker
@@ -192,31 +199,6 @@ sudo npm install -g pnpm
pnpm run dev
```

## Super Important Note on Running Ollama Models

Ollama models default to a 2048-token context window, even for large models that can easily handle far more. That is not large enough for the Bolt.new/oTToDev prompt! You have to create a version of any model you want
to use with a larger context window. Luckily, it's super easy to do that.

All you have to do is:

- Create a file called "Modelfile" (no file extension) anywhere on your computer
- Put in the two lines:

```
FROM [Ollama model ID such as qwen2.5-coder:7b]
PARAMETER num_ctx 32768
```

- Run the command:

```
ollama create -f Modelfile [your new model ID, can be whatever you want (example: qwen2.5-coder-extra-ctx:7b)]
```

Now you have a new Ollama model that isn't limited to Ollama's default context length.
You'll see this new model in the list of Ollama models along with all the others you've pulled!

## Adding New LLMs:

To make new LLMs available to use in this version of Bolt.new, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
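
For illustration, a new entry might look like the minimal sketch below. This is an assumption based on the description above: the exact shape of the entries in `app/utils/constants.ts` may differ, and the model ID and label shown are only examples.

```
// Hypothetical sketch of a MODEL_LIST entry; field names follow the
// description above, but check the actual type in app/utils/constants.ts
// before copying this verbatim.
const MODEL_LIST = [
  {
    name: 'qwen2.5-coder:7b',   // model ID from the provider's API documentation
    label: 'Qwen 2.5 Coder 7B', // text shown in the frontend model dropdown
    provider: 'Ollama',         // must match one of the configured provider names
  },
  // ...the existing entries
];
```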
app/components/chat/BaseChat.module.scss (104 changes: 104 additions & 0 deletions)
@@ -17,3 +17,107 @@
.Chat {
opacity: 1;
}

.RayContainer {
--gradient-opacity: 0.85;
--ray-gradient: radial-gradient(rgba(83, 196, 255, var(--gradient-opacity)) 0%, rgba(43, 166, 255, 0) 100%);
transition: opacity 0.25s linear;
position: fixed;
inset: 0;
pointer-events: none;
user-select: none;
}

.LightRayOne {
width: 480px;
height: 680px;
transform: rotate(80deg);
top: -540px;
left: 250px;
filter: blur(110px);
position: absolute;
border-radius: 100%;
background: var(--ray-gradient);
}

.LightRayTwo {
width: 110px;
height: 400px;
transform: rotate(-20deg);
top: -280px;
left: 350px;
mix-blend-mode: overlay;
opacity: 0.6;
filter: blur(60px);
position: absolute;
border-radius: 100%;
background: var(--ray-gradient);
}

.LightRayThree {
width: 400px;
height: 370px;
top: -350px;
left: 200px;
mix-blend-mode: overlay;
opacity: 0.6;
filter: blur(21px);
position: absolute;
border-radius: 100%;
background: var(--ray-gradient);
}

.LightRayFour {
position: absolute;
width: 330px;
height: 370px;
top: -330px;
left: 50px;
mix-blend-mode: overlay;
opacity: 0.5;
filter: blur(21px);
border-radius: 100%;
background: var(--ray-gradient);
}

.LightRayFive {
position: absolute;
width: 110px;
height: 400px;
transform: rotate(-40deg);
top: -280px;
left: -10px;
mix-blend-mode: overlay;
opacity: 0.8;
filter: blur(60px);
border-radius: 100%;
background: var(--ray-gradient);
}

.PromptEffectContainer {
--prompt-container-offset: 50px;
--prompt-line-stroke-width: 1px;
position: absolute;
pointer-events: none;
inset: calc(var(--prompt-container-offset) / -2);
width: calc(100% + var(--prompt-container-offset));
height: calc(100% + var(--prompt-container-offset));
}

.PromptEffectLine {
width: calc(100% - var(--prompt-container-offset) + var(--prompt-line-stroke-width));
height: calc(100% - var(--prompt-container-offset) + var(--prompt-line-stroke-width));
x: calc(var(--prompt-container-offset) / 2 - var(--prompt-line-stroke-width) / 2);
y: calc(var(--prompt-container-offset) / 2 - var(--prompt-line-stroke-width) / 2);
rx: calc(8px - var(--prompt-line-stroke-width));
fill: transparent;
stroke-width: var(--prompt-line-stroke-width);
stroke: url(#line-gradient);
stroke-dasharray: 35px 65px;
stroke-dashoffset: 10;
}

.PromptShine {
fill: url(#shine-gradient);
mix-blend-mode: overlay;
}
app/components/chat/BaseChat.tsx (94 changes: 73 additions & 21 deletions)
@@ -47,7 +47,7 @@ const ModelSelector = ({ model, setModel, provider, setProvider, modelList, prov
key={provider?.name}
value={model}
onChange={(e) => setModel(e.target.value)}
className="flex-1 p-2 rounded-lg border border-bolt-elements-borderColor bg-bolt-elements-prompt-background text-bolt-elements-textPrimary focus:outline-none focus:ring-2 focus:ring-bolt-elements-focus transition-all lg:max-w-[70%] "
className="flex-1 p-2 rounded-lg border border-bolt-elements-borderColor bg-bolt-elements-prompt-background text-bolt-elements-textPrimary focus:outline-none focus:ring-2 focus:ring-bolt-elements-focus transition-all lg:max-w-[70%]"
>
{[...modelList]
.filter((e) => e.provider == provider?.name && e.name)
@@ -116,6 +116,7 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
const TEXTAREA_MAX_HEIGHT = chatStarted ? 400 : 200;
const [apiKeys, setApiKeys] = useState<Record<string, string>>({});
const [modelList, setModelList] = useState(MODEL_LIST);
const [isModelSettingsCollapsed, setIsModelSettingsCollapsed] = useState(false);

useEffect(() => {
// Load API keys from cookies on component mount
@@ -167,6 +168,13 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
)}
data-chat-visible={showChat}
>
<div className={classNames(styles.RayContainer)}>
<div className={classNames(styles.LightRayOne)}></div>
<div className={classNames(styles.LightRayTwo)}></div>
<div className={classNames(styles.LightRayThree)}></div>
<div className={classNames(styles.LightRayFour)}></div>
<div className={classNames(styles.LightRayFive)}></div>
</div>
<ClientOnly>{() => <Menu />}</ClientOnly>
<div ref={scrollRef} className="flex flex-col lg:flex-row overflow-y-auto w-full h-full">
<div className={classNames(styles.Chat, 'flex flex-col flex-grow lg:min-w-[var(--chat-min-width)] h-full')}>
@@ -199,39 +207,83 @@
</ClientOnly>
<div
className={classNames(
' bg-bolt-elements-background-depth-2 p-3 rounded-lg border border-bolt-elements-borderColor relative w-full max-w-chat mx-auto z-prompt mb-6',
'bg-bolt-elements-background-depth-2 p-3 rounded-lg border border-bolt-elements-borderColor relative w-full max-w-chat mx-auto z-prompt mb-6',
{
'sticky bottom-2': chatStarted,
},
)}
>
<ModelSelector
key={provider?.name + ':' + modelList.length}
model={model}
setModel={setModel}
modelList={modelList}
provider={provider}
setProvider={setProvider}
providerList={PROVIDER_LIST}
apiKeys={apiKeys}
/>
<svg className={classNames(styles.PromptEffectContainer)}>
<defs>
<linearGradient
id="line-gradient"
x1="20%"
y1="0%"
x2="-14%"
y2="10%"
gradientUnits="userSpaceOnUse"
gradientTransform="rotate(-45)"
>
<stop offset="0%" stopColor="#1488fc" stopOpacity="0%"></stop>
<stop offset="40%" stopColor="#1488fc" stopOpacity="80%"></stop>
<stop offset="50%" stopColor="#1488fc" stopOpacity="80%"></stop>
<stop offset="100%" stopColor="#1488fc" stopOpacity="0%"></stop>
</linearGradient>
<linearGradient id="shine-gradient">
<stop offset="0%" stopColor="white" stopOpacity="0%"></stop>
<stop offset="40%" stopColor="#8adaff" stopOpacity="80%"></stop>
<stop offset="50%" stopColor="#8adaff" stopOpacity="80%"></stop>
<stop offset="100%" stopColor="white" stopOpacity="0%"></stop>
</linearGradient>
</defs>
<rect className={classNames(styles.PromptEffectLine)} pathLength="100" strokeLinecap="round"></rect>
<rect className={classNames(styles.PromptShine)} x="48" y="24" width="70" height="1"></rect>
</svg>
<div>
<div className="flex justify-between items-center mb-2">
<button
onClick={() => setIsModelSettingsCollapsed(!isModelSettingsCollapsed)}
className={classNames('flex items-center gap-2 p-2 rounded-lg transition-all', {
'bg-bolt-elements-item-backgroundAccent text-bolt-elements-item-contentAccent':
isModelSettingsCollapsed,
'bg-bolt-elements-item-backgroundDefault text-bolt-elements-item-contentDefault':
!isModelSettingsCollapsed,
})}
>
<div className={`i-ph:caret-${isModelSettingsCollapsed ? 'right' : 'down'} text-lg`} />
<span>Model Settings</span>
</button>
</div>

{provider && (
<APIKeyManager
provider={provider}
apiKey={apiKeys[provider.name] || ''}
setApiKey={(key) => updateApiKey(provider.name, key)}
/>
)}
<div className={isModelSettingsCollapsed ? 'hidden' : ''}>
<ModelSelector
key={provider?.name + ':' + modelList.length}
model={model}
setModel={setModel}
modelList={modelList}
provider={provider}
setProvider={setProvider}
providerList={PROVIDER_LIST}
apiKeys={apiKeys}
/>
{provider && (
<APIKeyManager
provider={provider}
apiKey={apiKeys[provider.name] || ''}
setApiKey={(key) => updateApiKey(provider.name, key)}
/>
)}
</div>
</div>

<div
className={classNames(
'shadow-lg border border-bolt-elements-borderColor bg-bolt-elements-prompt-background backdrop-filter backdrop-blur-[8px] rounded-lg overflow-hidden transition-all',
'relative shadow-xs border border-bolt-elements-borderColor backdrop-blur rounded-lg',
)}
>
<textarea
ref={textareaRef}
className={`w-full pl-4 pt-4 pr-16 focus:outline-none focus:ring-0 focus:border-none focus:shadow-none resize-none text-md text-bolt-elements-textPrimary placeholder-bolt-elements-textTertiary bg-transparent transition-all`}
className={`w-full pl-4 pt-4 pr-16 focus:outline-none resize-none text-bolt-elements-textPrimary placeholder-bolt-elements-textTertiary bg-transparent text-sm`}
onKeyDown={(event) => {
if (event.key === 'Enter') {
if (event.shiftKey) {