From 4bc19d10c7b8026be5e943cccd98a1d5b086d5af Mon Sep 17 00:00:00 2001 From: Oli Morris Date: Thu, 17 Oct 2024 22:41:35 +0100 Subject: [PATCH] feat: :sparkles: improved workflows --- README.md | 22 ++--- doc/RECIPES.md | 63 ++++++++++++ doc/codecompanion-recipes.txt | 74 +++++++++++++- doc/codecompanion.txt | 64 +++++++----- lua/codecompanion/actions/static.lua | 135 -------------------------- lua/codecompanion/config.lua | 66 +++++++++++-- lua/codecompanion/strategies.lua | 57 +++++++++++ lua/codecompanion/strategies/chat.lua | 46 ++++++++- lua/codecompanion/workflow.lua | 99 ------------------- 9 files changed, 345 insertions(+), 281 deletions(-) delete mode 100644 lua/codecompanion/workflow.lua diff --git a/README.md b/README.md index f098e0bd..0482d75a 100644 --- a/README.md +++ b/README.md @@ -112,7 +112,7 @@ EOF > [!IMPORTANT] > The plugin requires the markdown Tree-sitter parser to be installed with `:TSInstall markdown` -[Telescope.nvim](https://github.com/nvim-telescope/telescope.nvim) is a suggested inclusion in order to leverage Slash Commands. However other providers are available. Please refer to the [Chat Buffer](#speech_balloon-the-chat-buffer) section for more information. +[Telescope.nvim](https://github.com/nvim-telescope/telescope.nvim) is a suggested inclusion in order to leverage Slash Commands. However, other providers are available. Please refer to the [Chat Buffer](#speech_balloon-the-chat-buffer) section for more information. ## :rocket: Quickstart @@ -196,7 +196,7 @@ There are keymaps available to accept or reject edits from the LLM in the [inlin -Run `:CodeCompanionActions` to open the action palette, which gives you access to all functionality of the plugin. By default the plugin uses `vim.ui.select` however you can change the provider by altering the `display.action_palette.provider` config value to be `telescope` or `mini_pick`. You can also call the Telescope extension with `:Telescope codecompanion`. +Run `:CodeCompanionActions` to open the action palette, which gives you access to all functionality of the plugin. By default the plugin uses `vim.ui.select`, however, you can change the provider by altering the `display.action_palette.provider` config value to be `telescope` or `mini_pick`. You can also call the Telescope extension with `:Telescope codecompanion`. > [!NOTE] > Some actions and prompts will only be visible if you're in _Visual mode_. @@ -209,7 +209,7 @@ The plugin has three core commands: - `CodeCompanionChat` - Open a chat buffer - `CodeCompanionActions` - Open the _Action Palette_ -However there are multiple options available: +However, there are multiple options available: - `CodeCompanion ` - Prompt the inline assistant - `CodeCompanion /` - Use the [prompt library](#clipboard-prompt-library) with the inline assistant e.g. `/commit` @@ -218,9 +218,9 @@ However there are multiple options available: - `CodeCompanionChat Toggle` - Toggle a chat buffer - `CodeCompanionChat Add` - Add visually selected chat to the current chat buffer -**Suggested workflow** +**Suggested plugin workflow** -For an optimum workflow, I recommend the following keymaps: +For an optimum plugin workflow, I recommend the following: ```lua vim.api.nvim_set_keymap("n", "", "CodeCompanionActions", { noremap = true, silent = true }) @@ -248,7 +248,7 @@ The plugin uses adapters to connect to LLMs. Out of the box, the plugin supports - Ollama (`ollama`) - Both local and remotely hosted - OpenAI (`openai`) - Requires an API key -The plugin utilises objects called Strategies. 
These are the different ways that a user can interact with the plugin. The _chat_ strategy harnesses a buffer to allow direct conversation with the LLM. The _inline_ strategy allows for output from the LLM to be written directly into a pre-existing Neovim buffer.
+The plugin utilises objects called Strategies. These are the different ways that a user can interact with the plugin. The _chat_ strategy harnesses a buffer to allow direct conversation with the LLM. The _inline_ strategy allows for output from the LLM to be written directly into a pre-existing Neovim buffer. The _workflow_ strategy is a wrapper for the _chat_ strategy, allowing for [agentic workflows](#world_map-workflows).

 The plugin allows you to specify adapters for each strategy and also for each [prompt library](#clipboard-prompt-library) entry.

@@ -538,15 +538,9 @@ More information on how tools work and how you can create your own can be found

 ### :world_map: Workflows

-> [!WARNING]
-> Workflows may result in the significant consumption of tokens if you're using an external LLM.
+Workflows prompt an LLM multiple times, giving it the ability to build its answer step-by-step rather than all at once. This leads to much better output, as [outlined](https://www.deeplearning.ai/the-batch/issue-242/) by Andrew Ng. In fact, it's possible for older models like GPT 3.5 to outperform newer models (using traditional zero-shot inference).

-As [outlined](https://www.deeplearning.ai/the-batch/issue-242/) by Andrew Ng, agentic workflows have the ability to dramatically improve the output of an LLM. Infact, it's possible for older models like GPT 3.5 to outperform newer models (using traditional zero-shot inference). Andrew [discussed](https://www.youtube.com/watch?v=sal78ACtGTc&t=249s) how an agentic workflow can be utilised via multiple prompts that invoke the LLM to self reflect. Implementing Andrew's advice, the plugin supports this notion via the use of workflows. At various stages of a pre-defined workflow, the plugin will automatically prompt the LLM without any input or triggering required from the user.
-
-Currently, the plugin comes with the following workflows:
-
-- Adding a new feature
-- Refactoring code
+Implementing Andrew's advice, at various stages of a pre-defined workflow, the plugin will automatically prompt the LLM without any input or triggering required from the user. The plugin contains a default `Code workflow`, as part of the prompt library, which guides the LLM in writing better code.

 Of course you can add new workflows by following the [RECIPES](doc/RECIPES.md) guide.

diff --git a/doc/RECIPES.md b/doc/RECIPES.md
index bd69e6c3..8b8bb0da 100644
--- a/doc/RECIPES.md
+++ b/doc/RECIPES.md
@@ -327,6 +327,69 @@ As outlined in the [README](README.md), an inline prompt can place its response

 In this example, the LLM response will be placed in a new buffer and the user's code will not be returned back to them.

+## Workflows
+
+Workflows, at their core, are simply multiple prompts which are sent to the LLM in a turn-based manner. I highly recommend reading [Issue 242](https://www.deeplearning.ai/the-batch/issue-242/) of The Batch to understand their use. Workflows are set up in exactly the same way as prompts in the prompt library.
Take the `code workflow` as an example: + +```lua +["Code workflow"] = { + strategy = "workflow", + description = "Use a workflow to guide an LLM in writing code", + opts = { + index = 4, + is_default = true, + short_name = "workflow", + }, + prompts = { + { + -- We can group prompts together to make a workflow + -- This is the first prompt in the workflow + { + role = constants.SYSTEM_ROLE, + content = function(context) + return fmt( + "You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. Always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question. Don't be verbose in your answers, but do provide details and examples where it might help the explanation. You are an expert software engineer for the %s language", + context.filetype + ) + end, + opts = { + visible = false, + }, + }, + { + role = constants.USER_ROLE, + content = "I want you to ", + opts = { + auto_submit = false, + }, + }, + }, + -- This is the second group of prompts + { + { + role = constants.USER_ROLE, + content = "Great. Now let's consider your code. I'd like you to check it carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it.", + opts = { + auto_submit = false, + }, + }, + }, + -- This is the final group of prompts + { + { + role = constants.USER_ROLE, + content = "Thanks. Now let's revise the code based on the feedback, without additional explanations.", + opts = { + auto_submit = false, + }, + }, + }, + }, +}, +``` + +You'll notice that the comments use the notion of "groups". These are collections of prompts which are added to a chat buffer in a timely manner. Infact, the second group will only be added once the LLM has responded to the first group...and so on. + ## Conclusion Hopefully this serves as a useful introduction on how you can expand CodeCompanion to create prompts that suit your workflow. It's worth checking out the [actions.lua](https://github.com/olimorris/codecompanion.nvim/blob/main/lua/codecompanion/actions.lua) and [config.lua](https://github.com/olimorris/codecompanion.nvim/blob/main/lua/codecompanion/config.lua) files for more complex examples. diff --git a/doc/codecompanion-recipes.txt b/doc/codecompanion-recipes.txt index 2bdb1971..7953a453 100644 --- a/doc/codecompanion-recipes.txt +++ b/doc/codecompanion-recipes.txt @@ -1,4 +1,4 @@ -*codecompanion-recipes.txt* For NVIM v0.10.0 Last change: 2024 October 08 +*codecompanion-recipes.txt* For NVIM v0.10.0 Last change: 2024 October 17 ============================================================================== Table of Contents *codecompanion-recipes-table-of-contents* @@ -8,6 +8,7 @@ Table of Contents *codecompanion-recipes-table-of-contents* - Recipe #1: Creating boilerplate code|codecompanion-recipes-recipe-#1:-creating-boilerplate-code| - Recipe #2: Using context in your prompts|codecompanion-recipes-recipe-#2:-using-context-in-your-prompts| - Other Configuration Options|codecompanion-recipes-other-configuration-options| + - Workflows |codecompanion-recipes-workflows| - Conclusion |codecompanion-recipes-conclusion| ============================================================================== @@ -393,6 +394,77 @@ In this example, the LLM response will be placed in a new buffer and the user’s code will not be returned back to them. 
+WORKFLOWS *codecompanion-recipes-workflows* + +Workflows, at their core, are simply multiple prompts which are sent to the LLM +in a turn-based manner. I fully recommend reading Issue 242 + of The Batch to understand +their use. Workflows are setup in exactly the same way as prompts in the prompt +library. Take the `code workflow` as an example: + +>lua + ["Code workflow"] = { + strategy = "workflow", + description = "Use a workflow to guide an LLM in writing code", + opts = { + index = 4, + is_default = true, + short_name = "workflow", + }, + prompts = { + { + -- We can group prompts together to make a workflow + -- This is the first prompt in the workflow + { + role = constants.SYSTEM_ROLE, + content = function(context) + return fmt( + "You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. Always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question. Don't be verbose in your answers, but do provide details and examples where it might help the explanation. You are an expert software engineer for the %s language", + context.filetype + ) + end, + opts = { + visible = false, + }, + }, + { + role = constants.USER_ROLE, + content = "I want you to ", + opts = { + auto_submit = false, + }, + }, + }, + -- This is the second group of prompts + { + { + role = constants.USER_ROLE, + content = "Great. Now let's consider your code. I'd like you to check it carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it.", + opts = { + auto_submit = false, + }, + }, + }, + -- This is the final group of prompts + { + { + role = constants.USER_ROLE, + content = "Thanks. Now let's revise the code based on the feedback, without additional explanations.", + opts = { + auto_submit = false, + }, + }, + }, + }, + }, +< + +You’ll notice that the comments use the notion of "groups". These are +collections of prompts which are added to a chat buffer in a timely manner. +Infact, the second group will only be added once the LLM has responded to the +first group…and so on. + + CONCLUSION *codecompanion-recipes-conclusion* Hopefully this serves as a useful introduction on how you can expand diff --git a/doc/codecompanion.txt b/doc/codecompanion.txt index 50c7faa1..b8b3c527 100644 --- a/doc/codecompanion.txt +++ b/doc/codecompanion.txt @@ -1,4 +1,4 @@ -*codecompanion.txt* For NVIM v0.10.0 Last change: 2024 October 14 +*codecompanion.txt* For NVIM v0.10.0 Last change: 2024 October 17 ============================================================================== Table of Contents *codecompanion-table-of-contents* @@ -92,7 +92,7 @@ Install the plugin with your preferred package manager: [!IMPORTANT] The plugin requires the markdown Tree-sitter parser to be installed with `:TSInstall markdown` Telescope.nvim is a -suggested inclusion in order to leverage Slash Commands. However other +suggested inclusion in order to leverage Slash Commands. However, other providers are available. Please refer to the |codecompanion-chat-buffer| section for more information. @@ -166,8 +166,8 @@ There are keymaps available to accept or reject edits from the LLM in the **Action Palette** Run `:CodeCompanionActions` to open the action palette, which gives you access -to all functionality of the plugin. 
By default the plugin uses `vim.ui.select` -however you can change the provider by altering the +to all functionality of the plugin. By default the plugin uses `vim.ui.select`, +however, you can change the provider by altering the `display.action_palette.provider` config value to be `telescope` or `mini_pick`. You can also call the Telescope extension with `:Telescope codecompanion`. @@ -183,7 +183,7 @@ The plugin has three core commands: - `CodeCompanionChat` - Open a chat buffer - `CodeCompanionActions` - Open the _Action Palette_ -However there are multiple options available: +However, there are multiple options available: - `CodeCompanion ` - Prompt the inline assistant - `CodeCompanion /` - Use the |codecompanion-prompt-library| with the inline assistant e.g. `/commit` @@ -192,9 +192,9 @@ However there are multiple options available: - `CodeCompanionChat Toggle` - Toggle a chat buffer - `CodeCompanionChat Add` - Add visually selected chat to the current chat buffer -**Suggested workflow** +**Suggested plugin workflow** -For an optimum workflow, I recommend the following keymaps: +For an optimum plugin workflow, I recommend the following: >lua vim.api.nvim_set_keymap("n", "", "CodeCompanionActions", { noremap = true, silent = true }) @@ -229,7 +229,8 @@ The plugin utilises objects called Strategies. These are the different ways that a user can interact with the plugin. The _chat_ strategy harnesses a buffer to allow direct conversation with the LLM. The _inline_ strategy allows for output from the LLM to be written directly into a pre-existing Neovim -buffer. +buffer. The _workflow_ strategy is a wrapper for the _chat_ strategy, allowing +for |codecompanion-agentic-workflows|. The plugin allows you to specify adapters for each strategy and also for each |codecompanion-prompt-library| entry. @@ -381,6 +382,27 @@ pass it via an "Authorization" header: }) < +**Using OpenAI compatible Models like LMStudio or self-hosted models** + +To use any other OpenAI compatible models, change the URL in the `env` table, +set an API key: + +>lua + require("codecompanion").setup({ + adapters = { + ollama = function() + return require("codecompanion.adapters").extend("openai_compatible", { + env = { + url = "http[s]://open_compatible_ai_url", -- optional: default value is ollama url http://127.0.0.1:11434 + api_key = "OpenAI_API_KEY", -- optional: if your endpoint is authenticated + chat_url = "/v1/chat/completions", -- optional: default value, override if different + }, + }) + end, + }, + }) +< + **Connecting via a Proxy** You can also connect via a proxy: @@ -576,24 +598,16 @@ in the TOOLS guide. WORKFLOWS ~ +Workflows prompt an LLM multiple times, giving them the ability to build their +answer step-by-step instead of at once. This leads to much better output as +outlined by Andrew Ng. +Infact, it’s possible for older models like GPT 3.5 to outperform newer +models (using traditional zero-shot inference). - [!WARNING] Workflows may result in the significant consumption of tokens if - you’re using an external LLM. -As outlined by Andrew Ng, -agentic workflows have the ability to dramatically improve the output of an -LLM. Infact, it’s possible for older models like GPT 3.5 to outperform newer -models (using traditional zero-shot inference). Andrew discussed - how an agentic workflow -can be utilised via multiple prompts that invoke the LLM to self reflect. -Implementing Andrew’s advice, the plugin supports this notion via the use of -workflows. 
At various stages of a pre-defined workflow, the plugin will -automatically prompt the LLM without any input or triggering required from the -user. - -Currently, the plugin comes with the following workflows: - -- Adding a new feature -- Refactoring code +Implementing Andrew’s advice, at various stages of a pre-defined workflow, +the plugin will automatically prompt the LLM without any input or triggering +required from the user. The plugin contains a default `Code workflow`, as part +of the prompt library, which guides the LLM into writing better code. Of course you can add new workflows by following the RECIPES guide. diff --git a/lua/codecompanion/actions/static.lua b/lua/codecompanion/actions/static.lua index 1481115e..fbd4aab9 100644 --- a/lua/codecompanion/actions/static.lua +++ b/lua/codecompanion/actions/static.lua @@ -75,139 +75,4 @@ return { end, }, }, - { - name = "Workflows ...", - strategy = " ", - description = "Workflows to improve the performance of your LLM", - opts = { - index = 10, - }, - picker = { - prompt = "Select a workflow", - items = { - { - name = "Code a feature - Outline, draft, consider and then revise", - callback = function(context) - local agent = require("codecompanion.workflow") - return agent - .new({ - context = context, - strategy = "chat", - }) - :workflow({ - { - role = config.constants.SYSTEM_ROLE, - content = "You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. Always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question. Don't be verbose in your answers, but do provide details and examples where it might help the explanation. You are an expert software engineer for the " - .. context.filetype - .. " language.", - opts = { - start = true, - }, - }, - { - condition = function() - return context.is_visual - end, - role = config.constants.USER_ROLE, - content = "Here is some relevant context: " .. send_code(context), - opts = { - contains_code = true, - start = true, - }, - }, - { - role = config.constants.USER_ROLE, - content = "I want you to help me code a feature. Before we write any code let's outline how we'll architect and implement the feature with the context you already have. The feature I'd like to add is ", - opts = { - start = true, - }, - }, - { - role = config.constants.USER_ROLE, - content = "Thanks. Now let's draft the code for the feature.", - opts = { - auto_submit = true, - }, - }, - { - role = config.constants.USER_ROLE, - content = "Great. Now let's consider the code. I'd like you to check it carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it.", - opts = { - auto_submit = true, - }, - }, - { - role = config.constants.USER_ROLE, - content = "Thanks. Now let's revise the code based on the feedback, without additional explanations.", - opts = { - auto_submit = true, - }, - }, - }) - end, - }, - { - name = "Refactor some code - Outline, draft, consider and then revise", - callback = function(context) - local agent = require("codecompanion.workflow") - return agent - .new({ - context = context, - strategy = "chat", - }) - :workflow({ - { - role = config.constants.SYSTEM_ROLE, - content = "You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. 
Always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question. Don't be verbose in your answers, but do provide details and examples where it might help the explanation. You are an expert software engineer for the " - .. context.filetype - .. " language.", - opts = { - start = true, - }, - }, - { - condition = function() - return context.is_visual - end, - role = config.constants.USER_ROLE, - content = "Here is some relevant context: " .. send_code(context), - opts = { - contains_code = true, - start = true, - }, - }, - { - role = config.constants.USER_ROLE, - content = "I want you to help me with a refactor. Before we write any code let's outline how we'll architect and implement the code with the context you already have. What I'm looking to achieve is ", - opts = { - start = true, - }, - }, - { - role = config.constants.USER_ROLE, - content = "Thanks. Now let's draft the code for the refactor.", - opts = { - auto_submit = true, - }, - }, - { - role = config.constants.USER_ROLE, - content = "Great. Now let's consider the code. I'd like you to check it carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it.", - opts = { - auto_submit = true, - }, - }, - { - role = config.constants.USER_ROLE, - content = "Thanks. Now let's revise the code based on the feedback, without additional explanations.", - opts = { - auto_submit = true, - }, - }, - }) - end, - }, - }, - }, - }, } diff --git a/lua/codecompanion/config.lua b/lua/codecompanion/config.lua index f198754c..448883e3 100644 --- a/lua/codecompanion/config.lua +++ b/lua/codecompanion/config.lua @@ -332,11 +332,65 @@ Points to note: }, }, }, + ["Code workflow"] = { + strategy = "workflow", + description = "Use a workflow to guide an LLM in writing code", + opts = { + index = 4, + is_default = true, + short_name = "workflow", + }, + prompts = { + { + -- We can group prompts together to make a workflow + -- This is the first prompt in the workflow + { + role = constants.SYSTEM_ROLE, + content = function(context) + return fmt( + "You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. Always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question. Don't be verbose in your answers, but do provide details and examples where it might help the explanation. You are an expert software engineer for the %s language", + context.filetype + ) + end, + opts = { + visible = false, + }, + }, + { + role = constants.USER_ROLE, + content = "I want you to ", + opts = { + auto_submit = false, + }, + }, + }, + -- This is the second group of prompts + { + { + role = constants.USER_ROLE, + content = "Great. Now let's consider your code. I'd like you to check it carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it.", + opts = { + auto_submit = false, + }, + }, + }, + -- This is the final group of prompts + { + { + role = constants.USER_ROLE, + content = "Thanks. 
Now let's revise the code based on the feedback, without additional explanations.",
+          opts = {
+            auto_submit = false,
+          },
+        },
+      },
+    },
+  },
   ["Explain"] = {
     strategy = "chat",
     description = "Explain how code in a buffer works",
     opts = {
-      index = 4,
+      index = 5,
       is_default = true,
       is_slash_cmd = false,
       modes = { "v" },
@@ -386,7 +440,7 @@ Points to note:
     strategy = "chat",
     description = "Generate unit tests for the selected code",
     opts = {
-      index = 5,
+      index = 6,
       is_default = true,
       is_slash_cmd = false,
       modes = { "v" },
@@ -440,7 +494,7 @@ Points to note:
     strategy = "chat",
     description = "Fix the selected code",
     opts = {
-      index = 6,
+      index = 7,
       is_default = true,
       is_slash_cmd = false,
       modes = { "v" },
@@ -498,7 +552,7 @@ Use Markdown formatting and include the programming language name at the start o
     strategy = "inline",
     description = "Send the current buffer to the LLM as part of an inline prompt",
     opts = {
-      index = 7,
+      index = 8,
       modes = { "v" },
       is_default = true,
       is_slash_cmd = false,
@@ -563,7 +617,7 @@ Use Markdown formatting and include the programming language name at the start o
     strategy = "chat",
     description = "Explain the LSP diagnostics for the selected code",
     opts = {
-      index = 8,
+      index = 9,
       is_default = true,
       is_slash_cmd = false,
       modes = { "v" },
@@ -646,7 +700,7 @@ This is the code, for context:
     strategy = "chat",
     description = "Generate a commit message",
     opts = {
-      index = 9,
+      index = 10,
       is_default = true,
       is_slash_cmd = true,
       short_name = "commit",
diff --git a/lua/codecompanion/strategies.lua b/lua/codecompanion/strategies.lua
index 70560c62..4f214b17 100644
--- a/lua/codecompanion/strategies.lua
+++ b/lua/codecompanion/strategies.lua
@@ -108,6 +108,64 @@ function Strategies:chat()
   return chat()
 end
 
+---@return CodeCompanion.Chat|nil
+function Strategies:workflow()
+  local workflow = self.selected
+  local prompts = workflow.prompts
+  local opts = workflow.opts
+  local stages = #prompts
+
+  -- Expand the prompts
+  local eval_prompts = vim
+    .iter(prompts)
+    :map(function(prompt_group)
+      return vim
+        .iter(prompt_group)
+        :map(function(prompt)
+          local new_prompt = vim.deepcopy(prompt)
+          if type(new_prompt.content) == "function" then
+            new_prompt.content = new_prompt.content(self.context)
+          end
+          return new_prompt
+        end)
+        :totable()
+    end)
+    :totable()
+
+  local messages = eval_prompts[1]
+
+  -- Send the first group of prompts to the chat buffer as messages; the last prompt in the group determines auto-submission
+  local chat = require("codecompanion.strategies.chat").new({
+    adapter = self.selected.adapter,
+    context = self.context,
+    messages = messages,
+    auto_submit = (messages[#messages].opts and messages[#messages].opts.auto_submit) or false,
+  })
+  table.remove(eval_prompts, 1)
+
+  -- Then when it completes we send the next batch and so on
+  if stages > 1 then
+    local order = 1
+    vim.iter(eval_prompts):each(function(prompt)
+      prompt = prompt[1]
+      local event = {
+        id = math.random(10000000),
+        order = order,
+        type = "once",
+        callback = function(chat_obj)
+          chat_obj:append_to_buf(prompt)
+          if prompt.opts and prompt.opts.auto_submit then
+            chat_obj:submit()
+          end
+        end,
+      }
+      chat:subscribe(event)
+      order = order + 1
+    end)
+  end
+  return chat
+end
+
 ---@return CodeCompanion.Inline|nil
 function Strategies:inline()
   log:info("Strategy: Inline")
diff --git a/lua/codecompanion/strategies/chat.lua b/lua/codecompanion/strategies/chat.lua
index d0db3603..9ccb41cf 100644
--- a/lua/codecompanion/strategies/chat.lua
+++ b/lua/codecompanion/strategies/chat.lua
@@ -211,11 +211,13 @@ local last_chat = {}
 ---@field context table The context of the buffer
that the chat was initiated from
 ---@field current_request table|nil The current request being executed
 ---@field current_tool table The current tool being executed
+---@field cycle number The number of times the chat has been sent to the LLM
 ---@field header_ns integer The namespace for the virtual text that appears in the header
 ---@field id integer The unique identifier for the chat
 ---@field intro_message? boolean Whether the welcome message has been shown
 ---@field messages? table The table containing the messages in the chat buffer
 ---@field settings? table The settings that are used in the adapter of the chat buffer
+---@field subscribers table The subscribers to the chat buffer
 ---@field tokens? nil|number The number of tokens in the chat
 ---@field tools? CodeCompanion.Tools The tools available to the user
 ---@field tools_in_use? nil|table The tools that are currently being used in the chat
@@ -241,11 +243,13 @@ function Chat.new(args)
   local self = setmetatable({
     opts = args,
     context = args.context,
+    cycle = 0,
     header_ns = api.nvim_create_namespace(CONSTANTS.NS_HEADER),
     id = id,
     last_role = args.last_role or config.constants.USER_ROLE,
     messages = args.messages or {},
     status = "",
+    subscribers = {},
     tokens = args.tokens,
     tools_in_use = {},
     create_buf = function()
@@ -798,6 +802,8 @@ function Chat:submit(opts)
   log:debug("Messages:\n%s", self.messages)
 
   lock_buf(bufnr)
+  self.cycle = self.cycle + 1
+
   log:info("Chat request started")
   self.current_request = client
     .new({ adapter = settings })
@@ -841,7 +847,24 @@ function Chat:done(request)
   end
 
   log:info("Chat request completed")
-  return self:reset()
+  self:reset()
+
+  if self:has_subscribers() then
+    local function action_subscription(subscriber)
+      subscriber.callback(self)
+      if subscriber.type == "once" then
+        self:unsubscribe(subscriber.id)
+      end
+    end
+
+    vim.iter(self.subscribers):each(function(subscriber)
+      if subscriber.order and subscriber.order <= self.cycle then
+        action_subscription(subscriber)
+      elseif not subscriber.order then
+        action_subscription(subscriber)
+      end
+    end)
+  end
 end
 
 ---Regenerate the response from the LLM
@@ -1178,6 +1201,27 @@ function Chat:complete_models(request, callback)
   callback({ items = items, isIncomplete = false })
 end
 
+---Subscribe to a chat buffer
+---@param event table {id: number, order?: number, type: string, callback: fun(chat: CodeCompanion.Chat)}
+function Chat:subscribe(event)
+  table.insert(self.subscribers, event)
+end
+
+---Does the chat buffer have any subscribers?
+function Chat:has_subscribers() + return #self.subscribers > 0 +end + +---Unsubscribe an object from a chat buffer +---@param id integer|string +function Chat:unsubscribe(id) + for i, subscriber in ipairs(self.subscribers) do + if subscriber.id == id then + table.remove(self.subscribers, i) + end + end +end + ---Clear the chat buffer ---@return nil function Chat:clear() diff --git a/lua/codecompanion/workflow.lua b/lua/codecompanion/workflow.lua deleted file mode 100644 index 9f1d83c2..00000000 --- a/lua/codecompanion/workflow.lua +++ /dev/null @@ -1,99 +0,0 @@ -local config = require("codecompanion.config") -local log = require("codecompanion.utils.log") - -local api = vim.api - ----@class CodeCompanion.Workflow -local Workflow = {} - ----@class CodeCompanion.WorkflowArgs ----@field context table ----@field strategy string - ----@param args table ----@return CodeCompanion.Workflow -function Workflow.new(args) - return setmetatable(args, { __index = Workflow }) -end - ----@param prompts table -function Workflow:workflow(prompts) - log:trace("Initiating workflow") - - local starting_prompts = {} - local workflow_prompts = {} - - for _, prompt in ipairs(prompts) do - if prompt.opts and prompt.opts.start then - if - (type(prompt.condition) == "function" and not prompt.condition()) - or (prompt.opts and prompt.opts.contains_code and not config.opts.send_code) - then - goto continue - end - - table.insert(starting_prompts, { - role = prompt.role, - content = prompt.content, - }) - else - table.insert(workflow_prompts, { - role = prompt.role, - content = prompt.content, - opts = { - auto_submit = prompt.opts and prompt.opts.auto_submit, - }, - }) - end - ::continue:: - end - - local function send_prompt(chat) - log:trace("Sending agentic prompt to chat buffer") - - if #workflow_prompts == 0 then - return - end - - local prompt = workflow_prompts[1] - chat:append_to_buf(prompt) - - if prompt.opts and prompt.opts.auto_submit then - chat:submit() - end - - return table.remove(workflow_prompts, 1) - end - - local chat = require("codecompanion.strategies.chat").new({ - type = "chat", - messages = starting_prompts, - }) - - if not chat then - return - end - - local group = api.nvim_create_augroup("CodeCompanionWorkflow", { - clear = false, - }) - - api.nvim_create_autocmd("User", { - desc = "Listen for CodeCompanion agent messages", - group = group, - pattern = "CodeCompanionChat", - callback = function(request) - if request.buf ~= chat.bufnr or request.data.status ~= "finished" then - return - end - - send_prompt(chat) - - if #workflow_prompts == 0 then - api.nvim_del_augroup_by_id(group) - end - end, - }) -end - -return Workflow
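
To make the new strategy concrete, here is a minimal sketch of how a custom workflow could be registered from a user's own config, following the same group-based structure as the `Code workflow` above. It assumes custom entries are added under the `prompt_library` key of `require("codecompanion").setup()` (as with the other prompt library entries) and uses the literal `"system"`/`"user"` role strings in place of the internal `constants` table; the `"Review workflow"` name and `review` short name are purely illustrative.

```lua
require("codecompanion").setup({
  prompt_library = {
    ["Review workflow"] = {
      strategy = "workflow",
      description = "Draft some code and then review it",
      opts = {
        short_name = "review",
      },
      prompts = {
        {
          -- First group: added to the chat buffer as soon as the workflow starts
          {
            role = "system",
            content = "You are an expert software engineer. Be concise and favour working code over prose.",
            opts = { visible = false },
          },
          {
            -- auto_submit = false leaves the chat buffer waiting for the user to finish the sentence
            role = "user",
            content = "I want you to ",
            opts = { auto_submit = false },
          },
        },
        -- Second group: only added once the LLM has responded to the first group
        {
          {
            -- auto_submit = true sends this follow-up without any further input from the user
            role = "user",
            content = "Now review the code you have just written for correctness, style and efficiency.",
            opts = { auto_submit = true },
          },
        },
      },
    },
  },
})
```

Each top-level table inside `prompts` is one group. Under the hood, `Strategies:workflow()` sends the first group to a new chat buffer and registers the remaining groups as one-shot subscribers on it; after each completed request, `Chat:done()` appends the next group and, if that group's prompt sets `auto_submit`, submits it automatically.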