Welcome to the avante.nvim wiki!
To use Copilot, just set `provider = "copilot"` 😄:

```lua
opts = {
  provider = "copilot",
}
```
To build the binary from source instead of downloading a prebuilt one, set `BUILD_FROM_SOURCE=true` in the build command:

```lua
{
  "yetone/avante.nvim",
  build = "make BUILD_FROM_SOURCE=true luajit",
  ...
}
```
A more secure way to set the API key is through a secret manager. You can do that by prefixing the `api_key_name` value with `cmd:`, like so:
```lua
{
  "yetone/avante.nvim",
  opts = {
    provider = "claude",
    claude = {
      api_key_name = "cmd:bw get notes anthropic-api-key", -- the shell command must be prefixed with `cmd:`
    },
  },
}
```
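For comparison, `api_key_name` also accepts a plain environment-variable name; the `cmd:` prefix is only needed when shelling out to a secret manager. A sketch of both styles (the `pass` entry below is a hypothetical example, not a required tool):

```lua
claude = {
  -- plain style: read the key from the ANTHROPIC_API_KEY environment variable
  api_key_name = "ANTHROPIC_API_KEY",
},
-- or, secret-manager style:
-- claude = {
--   api_key_name = "cmd:pass show anthropic-api-key", -- hypothetical `pass` entry name
-- },
```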
To set up the development environment:

- Install StyLua for Lua code formatting.
- Install pre-commit for managing and maintaining pre-commit hooks.
- After cloning the repository, run the following command to set up the pre-commit hooks:

```sh
pre-commit install --install-hooks
```
For setting up `lua_ls` you can use the following with nvim-lspconfig:
```lua
lua_ls = {
  settings = {
    Lua = {
      runtime = {
        version = "LuaJIT",
        special = { reload = "require" },
      },
      workspace = {
        library = {
          vim.fn.expand "$VIMRUNTIME/lua",
          vim.fn.expand "$VIMRUNTIME/lua/vim/lsp",
          vim.fn.stdpath "data" .. "/lazy/lazy.nvim/lua/lazy",
        },
      },
    },
  },
},
```
You can also use the following config for lazydev.nvim:
```lua
{
  "folke/lazydev.nvim",
  ft = "lua",
  cmd = "LazyDev",
  dependencies = {
    -- Manage libuv types with lazy. Plugin will never be loaded
    { "Bilal2453/luvit-meta", lazy = true },
  },
  opts = {
    library = {
      { path = "~/workspace/avante.nvim/lua", words = { "avante" } },
      { path = "luvit-meta/library", words = { "vim%.uv" } },
    },
  },
},
```
Then you can set `dev = true` in your lazy config for development.
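For example, assuming avante.nvim is checked out under your lazy.nvim `dev.path`, the plugin spec would look like:

```lua
{
  "yetone/avante.nvim",
  dev = true, -- load the plugin from the local checkout instead of the installed copy
}
```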
To add support for custom providers, add an `AvanteProvider` spec into `opts.vendors`:
```lua
{
  provider = "my-custom-provider", -- You can then change this provider here
  vendors = {
    ["my-custom-provider"] = {...},
  },
  windows = {
    wrap_line = true,
    width = 30, -- default % based on available width
  },
  --- @class AvanteConflictUserConfig
  diff = {
    debug = false,
    autojump = true,
    ---@type string | fun(): any
    list_opener = "copen",
  },
}
```
A custom provider should follow this spec:
```lua
---@type AvanteProvider
{
  endpoint = "https://api.openai.com/v1/chat/completions", -- The full endpoint of the provider
  model = "gpt-4o", -- The model name to use with this provider
  api_key_name = "OPENAI_API_KEY", -- The name of the environment variable that contains the API key
  --- The function below is used to build the cURL arguments.
  --- It takes the provider options as the first argument, followed by code_opts retrieved from the given buffer.
  --- code_opts includes:
  --- - question: input from the user
  --- - code_lang: the language of the given code buffer
  --- - code_content: the content of the code buffer
  --- - selected_code_content: (optional) the code content selected in visual mode as context
  ---@type fun(opts: AvanteProvider, code_opts: AvantePromptOptions): AvanteCurlOutput
  parse_curl_args = function(opts, code_opts) end,
  --- This function is used to parse the incoming SSE stream.
  --- It takes the data stream as the first argument, followed by the SSE event state, and opts
  --- retrieved from the given buffer.
  --- opts includes:
  --- - on_chunk: (fun(chunk: string): any) invoked on each correctly parsed delta chunk
  --- - on_complete: (fun(err: string|nil): any) invoked on completion or on an error chunk
  ---@type fun(data_stream: string, event_state: string, opts: ResponseParser): nil
  parse_response_data = function(data_stream, event_state, opts) end,
  --- The following function SHOULD only be used when the provider doesn't follow the SSE spec [ADVANCED].
  --- It is mutually exclusive with parse_response_data.
  ---@type fun(data: string, handler_opts: AvanteHandlerOptions): nil
  parse_stream_data = function(data, handler_opts) end,
}
```
A few examples include Perplexity, Groq, and DeepSeek:
```lua
---@type AvanteProvider
perplexity = {
  endpoint = "https://api.perplexity.ai/chat/completions",
  model = "llama-3.1-sonar-large-128k-online",
  api_key_name = "cmd:bw get notes perplexity-api-key",
  parse_curl_args = function(opts, code_opts)
    return {
      url = opts.endpoint,
      headers = {
        ["Accept"] = "application/json",
        ["Content-Type"] = "application/json",
        ["Authorization"] = "Bearer " .. os.getenv(opts.api_key_name),
      },
      body = {
        model = opts.model,
        messages = require("avante.providers").azure.parse_message(code_opts), -- you can make your own message, but this is very advanced
        temperature = 0,
        max_tokens = 8192,
        stream = true, -- this will be set by default.
      },
    }
  end,
  -- The function below is used if the vendor has an SSE spec that differs from Claude's or OpenAI's.
  parse_response_data = function(data_stream, event_state, opts)
    require("avante.providers").azure.parse_response(data_stream, event_state, opts)
  end,
},
---@type AvanteProvider
groq = {
  endpoint = "https://api.groq.com/openai/v1/chat/completions",
  model = "llama-3.1-70b-versatile",
  api_key_name = "GROQ_API_KEY",
  parse_curl_args = function(opts, code_opts)
    return {
      url = opts.endpoint,
      headers = {
        ["Accept"] = "application/json",
        ["Content-Type"] = "application/json",
        ["Authorization"] = "Bearer " .. os.getenv(opts.api_key_name),
      },
      body = {
        model = opts.model,
        messages = require("avante.providers").azure.parse_message(code_opts), -- you can make your own message, but this is very advanced
        temperature = 0,
        max_tokens = 4096,
        stream = true, -- this will be set by default.
      },
    }
  end,
  parse_response_data = function(data_stream, event_state, opts)
    require("avante.providers").azure.parse_response(data_stream, event_state, opts)
  end,
},
---@type AvanteProvider
deepseek = {
  endpoint = "https://api.deepseek.com/chat/completions",
  model = "deepseek-coder",
  api_key_name = "DEEPSEEK_API_KEY",
  parse_curl_args = function(opts, code_opts)
    return {
      url = opts.endpoint,
      headers = {
        ["Accept"] = "application/json",
        ["Content-Type"] = "application/json",
        ["Authorization"] = "Bearer " .. os.getenv(opts.api_key_name),
      },
      body = {
        model = opts.model,
        messages = require("avante.providers").azure.parse_message(code_opts), -- you can make your own message, but this is very advanced
        temperature = 0,
        max_tokens = 4096,
        stream = true, -- this will be set by default.
      },
    }
  end,
  parse_response_data = function(data_stream, event_state, opts)
    require("avante.providers").azure.parse_response(data_stream, event_state, opts)
  end,
},
```
If a provider doesn't follow the SSE streaming spec, you might want to implement `parse_stream_data` for your custom provider. See the `parse_and_call` implementation for more information.
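As a rough sketch only: the newline-delimited JSON framing and the `delta`/`done` fields below are invented for illustration, and this assumes `handler_opts` exposes `on_chunk`/`on_complete` callbacks like the response parser does:

```lua
---@type fun(data: string, handler_opts: AvanteHandlerOptions): nil
parse_stream_data = function(data, handler_opts)
  -- Hypothetical provider that streams newline-delimited JSON instead of SSE.
  for line in data:gmatch("[^\r\n]+") do
    local ok, json = pcall(vim.json.decode, line)
    if ok and json.delta then
      handler_opts.on_chunk(json.delta) -- forward each text delta to the UI
    elseif ok and json.done then
      handler_opts.on_complete(nil) -- signal successful completion
    end
  end
end,
```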
If you want to use a local LLM that has an OpenAI-compatible server, set `["local"] = true`:
```lua
provider = "ollama",
---@type AvanteProvider
ollama = {
  ["local"] = true,
  endpoint = "127.0.0.1:11434/v1",
  model = "codegemma",
  parse_curl_args = function(opts, code_opts)
    return {
      url = opts.endpoint .. "/chat/completions",
      headers = {
        ["Accept"] = "application/json",
        ["Content-Type"] = "application/json",
      },
      body = {
        model = opts.model,
        messages = require("avante.providers").copilot.parse_message(code_opts), -- you can make your own message, but this is very advanced
        max_tokens = 2048,
        stream = true,
      },
    }
  end,
  parse_response_data = function(data_stream, event_state, opts)
    require("avante.providers").openai.parse_response(data_stream, event_state, opts)
  end,
},
```
You will be responsible for setting up the server yourself before using Neovim.
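With Ollama, for instance, that means pulling the model configured above and starting the server before launching Neovim, roughly like this:

```sh
# pull the model once, then serve the OpenAI-compatible API on 127.0.0.1:11434
ollama pull codegemma
ollama serve
```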
Since #346, we expose certain functions that are considered "public" API through `avante.api`. Additionally, we safely add keymaps for core functionality, including `AvanteAsk`, `AvanteEdit`, and `AvanteRefresh`, if users have not already set them (this only applies to lazy.nvim users).
Important

This means `<Leader>aa` won't be mapped to `AvanteAsk` if you have already set that mapping yourself.
The following `<Plug>` mappings will also be available for compatibility's sake:

- `<Plug>(AvanteAsk)`
- `<Plug>(AvanteEdit)`
- `<Plug>(AvanteRefresh)`
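You can bind these to keys of your own; the `<leader>ca` key below is just an illustration, not a default:

```lua
vim.keymap.set("n", "<leader>ca", "<Plug>(AvanteAsk)", { desc = "avante: ask" })
```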