A Neovim plugin for interacting with large language models (LLMs) such as ChatGPT and Copilot, as well as local models, directly within the editor.
llm.nvim is a Neovim plugin that provides a unified interface to interact with various large language models directly within the editor. It reduces context switching by allowing developers to chat with AI, generate code, explain code, translate text, and perform other AI-assisted tasks without leaving their coding environment. The plugin supports cloud-based models such as ChatGPT and Copilot as well as local models via Ollama.
Neovim users and developers who want to integrate AI-powered assistance into their editing workflow, including those working with code generation, documentation, translation, and code review tasks.
Developers choose llm.nvim for its extensive model support, customizable AI tools, and seamless Neovim integration, offering a flexible and powerful alternative to standalone AI applications or limited editor extensions.
A large language model (LLM) plugin for Neovim that provides commands to interact with LLMs (such as ChatGPT, Copilot, ChatGLM, Kimi, DeepSeek, OpenRouter, and local LLMs). GitHub Models are also supported.
Integrates with a wide range of LLMs including OpenAI, local models via Ollama, and free platforms like GitHub Models and Cloudflare, as detailed in the key features table.
Allows defining custom tools with specific prompts and models for tasks like translation, code explanation, and test generation, shown in the examples directory.
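As a rough illustration, a custom tool is declared as a named entry with its own prompt and handler. The option names below (`app_handler`, `flexi_handler`, the `:LLMAppHandler` invocation) are assumptions modelled on the plugin's examples directory and should be verified there before use.

```lua
-- Hypothetical custom-tool configuration; the app_handler key and
-- flexi_handler helper are assumptions based on the examples directory,
-- not a verified API.
local tools = require("llm.common.tools")

require("llm").setup({
  app_handler = {
    WordTranslate = {
      handler = tools.flexi_handler,
      prompt = "Translate the following text into English; output only the translation.",
    },
  },
})
```

With a definition like this, the tool would typically be run on a visual selection, for example via a command along the lines of `:LLMAppHandler WordTranslate`.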
Offers both floating and split-window chat interfaces with customizable keymaps and layouts, providing adaptability to different workflow preferences.
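A hedged sketch of what tuning the chat UI might look like: the `style` value and the `keys` action names below are assumptions drawn from the plugin's example configurations, and the exact names should be checked against the documentation.

```lua
-- Sketch of UI and keymap customization; option and action names are assumptions.
require("llm").setup({
  style = "float", -- or a split-style layout, depending on preference
  keys = {
    -- send the typed prompt from the input window
    ["Input:Submit"]   = { mode = "n", key = "<CR>" },
    -- open or close the chat session
    ["Session:Toggle"] = { mode = "n", key = "<leader>ac" },
    ["Session:Close"]  = { mode = "n", key = "<Esc>" },
  },
})
```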
Supports free models from platforms like SiliconFlow and Cloudflare Workers AI, with detailed setup instructions for API key acquisition.
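For instance, pointing the plugin at a free OpenAI-compatible endpoint is largely a matter of swapping the URL and model name. The SiliconFlow values below are illustrative assumptions; the README's setup instructions give the exact endpoints and explain how to obtain an API key.

```lua
-- Illustrative free-tier configuration; the URL and model name are assumptions,
-- and the API key is still read from the LLM_KEY environment variable.
require("llm").setup({
  url = "https://api.siliconflow.cn/v1/chat/completions",
  model = "Qwen/Qwen2.5-7B-Instruct",
  api_type = "openai",
})
```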
Requires environment variable configuration (LLM_KEY), dependency management (curl, fzf), and Lua-based setup, which can be daunting for less technical users.
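A minimal sketch of what installation might look like with lazy.nvim, assuming the commonly used plenary.nvim and nui.nvim dependencies; the plugin path, commands, and option names are taken from the project's examples and should be verified there. LLM_KEY must be exported in the shell (e.g. `export LLM_KEY=<your-key>`) before Neovim starts.

```lua
-- Hypothetical lazy.nvim spec; dependency list, commands, and option names
-- are assumptions based on the plugin's examples.
{
  "Kurama622/llm.nvim",
  dependencies = { "nvim-lua/plenary.nvim", "MunifTanjim/nui.nvim" },
  cmd = { "LLMSessionToggle", "LLMSelectedTextHandler", "LLMAppHandler" },
  config = function()
    require("llm").setup({
      -- any OpenAI-compatible endpoint; the key is read from $LLM_KEY
      url = "https://api.openai.com/v1/chat/completions",
      model = "gpt-4o-mini",
      api_type = "openai",
    })
  end,
}
```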
Critical information is scattered across the examples directory, the wiki, and the docs; the README directs users to check the examples first, which can cause confusion.
For non-standard backends such as Ollama, users must implement custom streaming or parsing handlers, which adds development effort, as shown in the examples.
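To make that cost concrete, the sketch below shows the general shape of such a handler for Ollama's newline-delimited JSON stream. The function signature and the `ctx` fields are hypothetical; the real handler contract is defined in the plugin's examples.

```lua
-- Hypothetical streaming handler for Ollama's NDJSON output; the signature
-- and ctx fields are assumptions, only the parsing logic is illustrative.
local function ollama_streaming_handler(chunk, ctx)
  ctx.assistant_output = ctx.assistant_output or ""
  if not chunk or chunk == "" then
    return ctx.assistant_output
  end
  -- Ollama emits one JSON object per line; accumulate each content delta.
  for line in chunk:gmatch("[^\r\n]+") do
    local ok, data = pcall(vim.json.decode, line)
    if ok and data.message and data.message.content then
      ctx.assistant_output = ctx.assistant_output .. data.message.content
    end
  end
  return ctx.assistant_output
end
```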