A Neovim plugin for generating and editing text using local LLMs like Llama and Mistral via Ollama.
gen.nvim is a Neovim plugin that enables text generation and editing using local Large Language Models (LLMs) via Ollama. It allows developers to run models like Llama and Mistral directly in their editor for tasks such as code fixing, grammar enhancement, and conversational AI, all without relying on external APIs. The plugin integrates seamlessly with Neovim's workflow, offering customizable prompts and multiple display modes.
Neovim users who want AI-assisted coding and text editing with local LLMs for privacy, offline use, or customization. It's ideal for developers working in environments where cloud-based AI services are restricted or undesirable.
Developers choose gen.nvim for its tight integration with Neovim, its support for local LLMs via Ollama (keeping data on the machine), and a highly customizable prompt system that adapts to specific editing tasks without leaving the editor.
Neovim plugin to generate text using LLMs with customizable prompts
Integrates with Ollama to run LLMs entirely on your machine, so no data leaves your system, making it well suited to offline and privacy-sensitive use, as emphasized in the project description.
Allows users to define prompts with dynamic placeholders like $text and $filetype, enabling tasks from code fixing to grammar enhancement, as shown in the custom prompts section.
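A custom prompt might be defined as follows. This is a sketch based on the prompt format in gen.nvim's README; the exact field names (`prompt`, `replace`, `extract`) and placeholder behavior should be checked against your installed version:

```lua
-- Define a custom prompt that fixes the selected code.
-- $text is replaced with the visual selection and $filetype with the
-- current buffer's filetype; the 'extract' pattern pulls the code out
-- of the model's fenced reply, and 'replace = true' substitutes it
-- for the selection in place.
require('gen').prompts['Fix_Code'] = {
  prompt = "Fix the following code. Only output the result in format "
    .. "```$filetype\n...\n```:\n```$filetype\n$text\n```",
  replace = true,
  extract = "```$filetype\n(.-)```",
}
```

After this, the prompt can be invoked on a visual selection with `:Gen Fix_Code`.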
Offers native Neovim commands like :Gen and display modes such as float or split, fitting directly into the editor's workflow without external windows.
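A minimal setup call illustrating these display options might look like the snippet below. The option names follow the README's setup example, but defaults and available fields may vary between versions:

```lua
require('gen').setup({
  model = "mistral",       -- the Ollama model to run
  display_mode = "float",  -- "float" or "split" for the output window
  show_prompt = false,     -- whether to echo the prompt in the output
  show_model = false,      -- whether to show the model name in the output
  no_auto_close = false,   -- keep the output window open when generation ends
})
```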
Maintains session history with :Gen Chat, allowing follow-up questions that build on previous interactions, enhancing usability for iterative tasks.
Requires Ollama to be installed and running, adding an extra layer of setup and maintenance that might not be trivial for all users, as noted in the Requires section.
Relies on Ollama's model library, which may lack the breadth or updates of cloud-based services, potentially restricting access to newer or specialized models.
Running LLMs locally can demand significant CPU/GPU resources, leading to performance issues on less powerful machines, a trade-off for privacy that isn't addressed in the README.
Setting up custom prompts and integrating with tools like Telescope requires Lua scripting, which could be a barrier for users not versed in Neovim configuration.
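The Lua involved is typically small. For instance, a common pattern from the README is to bind `:Gen` to a key; invoked without a prompt name, it opens a prompt picker via `vim.ui.select`, which Telescope can back through its ui-select extension. The mappings below are a sketch and the key choices are arbitrary:

```lua
-- Trigger gen.nvim from normal and visual mode; with no prompt name,
-- :Gen presents the prompt list via vim.ui.select (Telescope can
-- provide this UI through telescope-ui-select).
vim.keymap.set({ 'n', 'v' }, '<leader>]', ':Gen<CR>')

-- Or bind one of the built-in prompts directly, e.g. Chat:
vim.keymap.set('v', '<leader>gc', ':Gen Chat<CR>')
```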