
Welcome to the avante.nvim wiki!

secrets.

A more secure way to set your API key is through a secret manager. You can do that by prefixing api_key_name with cmd: like so:

{
  "yetone/avante.nvim",
  opts = {
    provider = "claude",
    claude = {
      api_key_name = "cmd:bw get notes anthropic-api-key", -- the shell command must be prefixed with `cmd:` (matched against `^cmd:(.*)`)
      -- api_key_name = {"bw", "get", "notes", "anthropic-api-key"}, -- if it is a table of strings, it is treated as a command by default.
    }
  }
}
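
The same pattern works with any secret manager whose CLI prints the key to stdout. A sketch with the pass CLI, assuming a hypothetical anthropic/api-key entry:

{
  "yetone/avante.nvim",
  opts = {
    provider = "claude",
    claude = {
      api_key_name = "cmd:pass show anthropic/api-key", -- hypothetical entry name; any command that prints the key works
    }
  }
}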

slash commands

In the input box, we support a few slash commands. Try /help for more information.

error when sending request to LLM?

Make sure that you have credits in your accounts 😃

copilot?

Set provider = "copilot".
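
For example, a minimal lazy.nvim sketch (Copilot authentication is assumed to be set up separately):

{
  "yetone/avante.nvim",
  opts = {
    provider = "copilot",
  },
}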

pass in additional generation parameters.

You can add any accepted request-body fields to the curl call for a given LLM provider:

opts = {
  gemini = { -- see https://ai.google.dev/api/generate-content#request-body_1
    generationConfig = {
      stopSequences = {"test"},
    }
  }
}
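
Similarly for Claude, a sketch that assumes these fields are forwarded to the Anthropic Messages API request body:

opts = {
  claude = { -- any field accepted by the Anthropic Messages API request body
    temperature = 0,
    max_tokens = 4096,
  }
}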

development.

To set up the development environment:

  1. Install StyLua for Lua code formatting.
  2. Install pre-commit for managing and maintaining pre-commit hooks.
  3. After cloning the repository, run the following command to set up pre-commit hooks:
pre-commit install --install-hooks

To set up lua_ls, you can use the following for nvim-lspconfig:

lua_ls = {
  settings = {
    Lua = {
      runtime = {
        version = "LuaJIT",
        special = { reload = "require" },
      },
      workspace = {
        library = {
          vim.fn.expand "$VIMRUNTIME/lua",
          vim.fn.expand "$VIMRUNTIME/lua/vim/lsp",
          vim.fn.stdpath "data" .. "/lazy/lazy.nvim/lua/lazy",
        },
      },
    },
  },
},

You can also use the following config for lazydev.nvim:

{
  "folke/lazydev.nvim",
  ft = "lua",
  cmd = "LazyDev",
  dependencies = {
    -- Manage libuv types with lazy. Plugin will never be loaded
    { "Bilal2453/luvit-meta", lazy = true },
  },
  opts = {
    library = {
      { path = "~/workspace/avante.nvim/lua", words = { "avante" } },
      { path = "luvit-meta/library", words = { "vim%.uv" } },
    },
  },
},

Then you can set dev = true in your lazy config for development.
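
For example, a sketch (lazy.nvim resolves the plugin from its dev.path, so point that at your local checkout):

{
  "yetone/avante.nvim",
  dev = true, -- use your local clone instead of the pinned install
  opts = {...},
}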

custom providers.

To add support for custom providers, add an AvanteProvider spec into opts.vendors:

{
  provider = "my-custom-provider", -- You can then change this provider here
  vendors = {
    ["my-custom-provider"] = {...}
  },
}

A custom provider should conform to the following spec:

---@type AvanteProvider
["my-custom-provider"] = {
  endpoint = "https://api.openai.com/v1/chat/completions", -- The full endpoint of the provider
  model = "gpt-4o", -- The model name to use with this provider
  api_key_name = "OPENAI_API_KEY", -- The name of the environment variable that contains the API key
  --- The function below is used to build the cURL arguments.
  --- It takes the provider options as the first argument, followed by code_opts retrieved from the given buffer.
  --- code_opts includes:
  --- - question: input from the user
  --- - code_lang: the language of the given code buffer
  --- - code_content: content of the code buffer
  --- - selected_code_content: (optional) the selected code content, if code is selected in visual mode as context
  ---@type fun(opts: AvanteProvider, code_opts: AvantePromptOptions): AvanteCurlOutput
  parse_curl_args = function(opts, code_opts) end,
  --- This function is used to parse the incoming SSE stream.
  --- It takes the data stream as the first argument, followed by the SSE event state, and opts
  --- retrieved from the given buffer.
  --- opts includes:
  --- - on_chunk: (fun(chunk: string): any) invoked on parsing a correct delta chunk
  --- - on_complete: (fun(err: string|nil): any) invoked on either a completed call or an error chunk
  ---@type fun(data_stream: string, event_state: string, opts: ResponseParser): nil
  parse_response_data = function(data_stream, event_state, opts) end,
  --- The following function SHOULD only be used when a provider doesn't follow the SSE spec [ADVANCED]
  --- It is mutually exclusive with parse_response_data.
  ---@type fun(data: string, handler_opts: AvanteHandlerOptions): nil
  parse_stream_data = function(data, handler_opts) end,
}
A few examples include perplexity, groq, and deepseek:
vendors = {
  ---@type AvanteProvider
  perplexity = {
    endpoint = "https://api.perplexity.ai/chat/completions",
    model = "llama-3.1-sonar-large-128k-online",
    api_key_name = "cmd:bw get notes perplexity-api-key",
    parse_curl_args = function(opts, code_opts)
      return {
        url = opts.endpoint,
        headers = {
          ["Accept"] = "application/json",
          ["Content-Type"] = "application/json",
          ["Authorization"] = "Bearer " .. os.getenv(opts.api_key_name),
        },
        body = {
          model = opts.model,
          messages = { -- you can make your own message, but this is very advanced
            { role = "system", content = code_opts.system_prompt },
            { role = "user", content = require("avante.providers.openai").get_user_message(code_opts) },
          },
          temperature = 0,
          max_tokens = 8192,
          stream = true, -- this will be set by default.
        },
      }
    end,
    -- The function below is used if the vendor has a specific SSE spec that is not claude or openai.
    parse_response_data = function(data_stream, event_state, opts)
      require("avante.providers").openai.parse_response(data_stream, event_state, opts)
    end,
  },
  ---@type AvanteProvider
  groq = {
    endpoint = "https://api.groq.com/openai/v1/chat/completions",
    model = "llama-3.1-70b-versatile",
    api_key_name = "GROQ_API_KEY",
    parse_curl_args = function(opts, code_opts)
      return {
        url = opts.endpoint,
        headers = {
          ["Accept"] = "application/json",
          ["Content-Type"] = "application/json",
          ["Authorization"] = "Bearer " .. os.getenv(opts.api_key_name),
        },
        body = {
          model = opts.model,
          messages = { -- you can make your own message, but this is very advanced
            { role = "system", content = code_opts.system_prompt },
            { role = "user", content = require("avante.providers.openai").get_user_message(code_opts) },
          },
          temperature = 0,
          max_tokens = 4096,
          stream = true, -- this will be set by default.
        },
      }
    end,
    parse_response_data = function(data_stream, event_state, opts)
      require("avante.providers").openai.parse_response(data_stream, event_state, opts)
    end,
  },
  ---@type AvanteProvider
  deepseek = {
    endpoint = "https://api.deepseek.com/chat/completions",
    model = "deepseek-coder",
    api_key_name = "DEEPSEEK_API_KEY",
    parse_curl_args = function(opts, code_opts)
      return {
        url = opts.endpoint,
        headers = {
          ["Accept"] = "application/json",
          ["Content-Type"] = "application/json",
          ["Authorization"] = "Bearer " .. os.getenv(opts.api_key_name),
        },
        body = {
          model = opts.model,
          messages = { -- you can make your own message, but this is very advanced
            { role = "system", content = code_opts.system_prompt },
            { role = "user", content = require("avante.providers.openai").get_user_message(code_opts) },
          },
          temperature = 0,
          max_tokens = 4096,
          stream = true, -- this will be set by default.
        },
      }
    end,
    parse_response_data = function(data_stream, event_state, opts)
      require("avante.providers").openai.parse_response(data_stream, event_state, opts)
    end,
  },
}

custom parser for line call [ADVANCED ONLY]

If certain providers don't follow the SSE streaming spec, you might want to implement parse_stream_data for your custom providers.

See parse_and_call implementation for more information.
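
A minimal sketch of such a parser, assuming a hypothetical provider that streams newline-delimited JSON with a text field and a literal [DONE] sentinel, and assuming handler_opts carries the same on_chunk/on_complete callbacks described above:

---@type AvanteProvider
["my-ndjson-provider"] = { -- hypothetical, non-SSE provider
  -- endpoint, model, api_key_name, parse_curl_args as in the spec above
  parse_stream_data = function(data, handler_opts)
    for line in data:gmatch("[^\r\n]+") do
      if line == "[DONE]" then
        handler_opts.on_complete(nil)
      else
        local ok, json = pcall(vim.json.decode, line)
        if ok and json and json.text then handler_opts.on_chunk(json.text) end
      end
    end
  end,
},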

local llms.

If you want to use a local LLM that has an OpenAI-compatible server, set ["local"] = true:

provider = "ollama",
vendors = {
  ---@type AvanteProvider
  ollama = {
    ["local"] = true,
    endpoint = "127.0.0.1:11434/v1",
    model = "codegemma",
    parse_curl_args = function(opts, code_opts)
      return {
        url = opts.endpoint .. "/chat/completions",
        headers = {
          ["Accept"] = "application/json",
          ["Content-Type"] = "application/json",
        },
        body = {
          model = opts.model,
          messages = require("avante.providers").copilot.parse_message(code_opts), -- you can make your own message, but this is very advanced
          max_tokens = 2048,
          stream = true,
        },
      }
    end,
    parse_response_data = function(data_stream, event_state, opts)
      require("avante.providers").openai.parse_response(data_stream, event_state, opts)
    end,
  },
},

You will be responsible for setting up the server yourself before using Neovim.

keymaps and API, i guess.

Since #346, we will expose certain functions that are considered "public" API through avante.api.

Additionally, we will safely add certain keymaps for core functionality (AvanteAsk, AvanteEdit, and AvanteRefresh) if you have yet to set them (this only applies to lazy.nvim users).

Important

This means <Leader>aa won't be set to AvanteAsk if you already set this mapping.

The following <Plug> mappings will also be available for compatibility's sake:

  • <Plug>(AvanteAsk)
  • <Plug>(AvanteEdit)
  • <Plug>(AvanteRefresh)

Example keys settings in lazy.nvim:

    keys = function(_, keys)
      ---@type avante.Config
      local opts =
        require("lazy.core.plugin").values(require("lazy.core.config").spec.plugins["avante.nvim"], "opts", false)

      local mappings = {
        {
          opts.mappings.ask,
          function() require("avante.api").ask() end,
          desc = "avante: ask",
          mode = { "n", "v" },
        },
        {
          opts.mappings.refresh,
          function() require("avante.api").refresh() end,
          desc = "avante: refresh",
          mode = "v",
        },
        {
          opts.mappings.edit,
          function() require("avante.api").edit() end,
          desc = "avante: edit",
          mode = { "n", "v" },
        },
      }
      mappings = vim.tbl_filter(function(m) return m[1] and #m[1] > 0 end, mappings)
      return vim.list_extend(mappings, keys)
    end,

Important

If you have different keybindings, then update opts.mappings so that the hints work accordingly.

If you are using lazy.nvim then use the snippet above.

{
  opts = {
    mappings = {
      ask = "<leader>ua", -- ask
      edit = "<leader>ue", -- edit
      refresh = "<leader>ur", -- refresh
    },
  }
}

extends apis and keybindings.

Read https://github.com/yetone/avante.nvim/blob/main/lua/avante/api.lua
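
For example, you can bind these public functions yourself outside of lazy.nvim's keys (the mappings below are hypothetical):

vim.keymap.set({ "n", "v" }, "<leader>aa", function() require("avante.api").ask() end, { desc = "avante: ask" })
vim.keymap.set({ "n", "v" }, "<leader>ae", function() require("avante.api").edit() end, { desc = "avante: edit" })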

custom .avanterules

All fields for all .avanterules are available here

clipboard.

If you wish to load img-clip.nvim via keys, you can use the following:

    keys = {
      {
        "<leader>ip",
        function()
          return vim.bo.filetype == "AvanteInput" and require("avante.clipboard").paste_image()
            or require("img-clip").paste_image()
        end,
        desc = "clip: paste image",
      },
    }

curl failed writing to disk error

See https://github.com/yetone/avante.nvim/issues/315#issuecomment-2315957174

convert generated conflicts to quickfix items

_G.convert_to_qf = function()
  require('avante.diff').conflicts_to_qf_items(function(items)
    if #items > 0 then
      vim.fn.setqflist(items, "r")
      vim.cmd('copen')
    end
  end)
end

Then you can call this function from your mappings or however else you want to use it.
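
For example, a hypothetical mapping:

vim.keymap.set("n", "<leader>qc", _G.convert_to_qf, { desc = "avante: conflicts to quickfix" })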

dynamic window position

See https://github.com/yetone/avante.nvim/pull/527

wsl

Try setting XDG_RUNTIME_DIR="/tmp/".
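
If you prefer doing this from your Neovim config instead of your shell, a sketch (this assumes the plugin spawns curl after your config has run, so the variable is inherited):

vim.env.XDG_RUNTIME_DIR = "/tmp/"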