
feat: ✨ improved workflows
olimorris committed Oct 17, 2024
1 parent 6a2c924 commit 4bc19d1
Showing 9 changed files with 345 additions and 281 deletions.
22 changes: 8 additions & 14 deletions README.md
@@ -112,7 +112,7 @@ EOF
> [!IMPORTANT]
> The plugin requires the markdown Tree-sitter parser to be installed with `:TSInstall markdown`
[Telescope.nvim](https://github.com/nvim-telescope/telescope.nvim) is a suggested inclusion in order to leverage Slash Commands. However other providers are available. Please refer to the [Chat Buffer](#speech_balloon-the-chat-buffer) section for more information.
[Telescope.nvim](https://github.com/nvim-telescope/telescope.nvim) is a suggested inclusion in order to leverage Slash Commands. However, other providers are available. Please refer to the [Chat Buffer](#speech_balloon-the-chat-buffer) section for more information.

## :rocket: Quickstart

@@ -196,7 +196,7 @@ There are keymaps available to accept or reject edits from the LLM in the [inlin

<!-- panvimdoc-ignore-end -->

Run `:CodeCompanionActions` to open the action palette, which gives you access to all functionality of the plugin. By default the plugin uses `vim.ui.select` however you can change the provider by altering the `display.action_palette.provider` config value to be `telescope` or `mini_pick`. You can also call the Telescope extension with `:Telescope codecompanion`.
Run `:CodeCompanionActions` to open the action palette, which gives you access to all functionality of the plugin. By default, the plugin uses `vim.ui.select`; however, you can change the provider by altering the `display.action_palette.provider` config value to be `telescope` or `mini_pick`. You can also call the Telescope extension with `:Telescope codecompanion`.

> [!NOTE]
> Some actions and prompts will only be visible if you're in _Visual mode_.
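
For example, a minimal configuration sketch for switching the provider (the nesting follows the `display.action_palette.provider` path described above; choosing `telescope` here is purely illustrative):

```lua
require("codecompanion").setup({
  display = {
    action_palette = {
      provider = "telescope", -- or "mini_pick"; defaults to vim.ui.select
    },
  },
})
```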
@@ -209,7 +209,7 @@ The plugin has three core commands:
- `CodeCompanionChat` - Open a chat buffer
- `CodeCompanionActions` - Open the _Action Palette_

However there are multiple options available:
However, there are multiple options available:

- `CodeCompanion <your prompt>` - Prompt the inline assistant
- `CodeCompanion /<prompt library>` - Use the [prompt library](#clipboard-prompt-library) with the inline assistant e.g. `/commit`
@@ -218,9 +218,9 @@ However there are multiple options available:
- `CodeCompanionChat Toggle` - Toggle a chat buffer
- `CodeCompanionChat Add` - Add visually selected chat to the current chat buffer

**Suggested workflow**
**Suggested plugin workflow**

For an optimum workflow, I recommend the following keymaps:
For an optimum plugin workflow, I recommend the following:

```lua
vim.api.nvim_set_keymap("n", "<C-a>", "<cmd>CodeCompanionActions<cr>", { noremap = true, silent = true })
@@ -248,7 +248,7 @@ The plugin uses adapters to connect to LLMs. Out of the box, the plugin supports
- Ollama (`ollama`) - Both local and remotely hosted
- OpenAI (`openai`) - Requires an API key

The plugin utilises objects called Strategies. These are the different ways that a user can interact with the plugin. The _chat_ strategy harnesses a buffer to allow direct conversation with the LLM. The _inline_ strategy allows for output from the LLM to be written directly into a pre-existing Neovim buffer.
The plugin utilises objects called Strategies. These are the different ways that a user can interact with the plugin. The _chat_ strategy harnesses a buffer to allow direct conversation with the LLM. The _inline_ strategy allows for output from the LLM to be written directly into a pre-existing Neovim buffer. The _workflow_ strategy is a wrapper for the _chat_ strategy, allowing for [agentic workflows](#world_map-workflows).

The plugin allows you to specify adapters for each strategy and also for each [prompt library](#clipboard-prompt-library) entry.
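
As a sketch of what this looks like (assuming the `strategies` table exposes an `adapter` field per strategy; the specific adapter names below are just placeholders):

```lua
require("codecompanion").setup({
  strategies = {
    chat = {
      adapter = "anthropic", -- adapter used by the chat strategy
    },
    inline = {
      adapter = "openai", -- adapter used by the inline strategy
    },
  },
})
```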

@@ -538,15 +538,9 @@ More information on how tools work and how you can create your own can be found

### :world_map: Workflows

> [!WARNING]
> Workflows may result in the significant consumption of tokens if you're using an external LLM.
Workflows prompt an LLM multiple times, giving it the ability to build its answer step-by-step rather than all at once. This leads to much better output, as [outlined](https://www.deeplearning.ai/the-batch/issue-242/) by Andrew Ng. In fact, it's possible for older models like GPT 3.5 to outperform newer models (using traditional zero-shot inference).

As [outlined](https://www.deeplearning.ai/the-batch/issue-242/) by Andrew Ng, agentic workflows have the ability to dramatically improve the output of an LLM. In fact, it's possible for older models like GPT 3.5 to outperform newer models (using traditional zero-shot inference). Andrew [discussed](https://www.youtube.com/watch?v=sal78ACtGTc&t=249s) how an agentic workflow can be utilised via multiple prompts that invoke the LLM to self-reflect. Implementing Andrew's advice, the plugin supports this notion via the use of workflows. At various stages of a pre-defined workflow, the plugin will automatically prompt the LLM without any input or triggering required from the user.

Currently, the plugin comes with the following workflows:

- Adding a new feature
- Refactoring code
Implementing Andrew's advice, the plugin will automatically prompt the LLM at various stages of a pre-defined workflow, without any input or triggering required from the user. The plugin contains a default `Code workflow`, as part of the prompt library, which guides the LLM in writing better code.

Of course you can add new workflows by following the [RECIPES](doc/RECIPES.md) guide.

63 changes: 63 additions & 0 deletions doc/RECIPES.md
@@ -327,6 +327,69 @@ As outlined in the [README](README.md), an inline prompt can place its response

In this example, the LLM response will be placed in a new buffer and the user's code will not be returned to them.

## Workflows

Workflows, at their core, are simply multiple prompts that are sent to the LLM in a turn-based manner. I highly recommend reading [Issue 242](https://www.deeplearning.ai/the-batch/issue-242/) of The Batch to understand their use. Workflows are set up in exactly the same way as prompts in the prompt library. Take the `code workflow` as an example:

```lua
["Code workflow"] = {
strategy = "workflow",
description = "Use a workflow to guide an LLM in writing code",
opts = {
index = 4,
is_default = true,
short_name = "workflow",
},
prompts = {
{
-- We can group prompts together to make a workflow
-- This is the first prompt in the workflow
{
role = constants.SYSTEM_ROLE,
content = function(context)
return fmt(
"You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. Always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question. Don't be verbose in your answers, but do provide details and examples where it might help the explanation. You are an expert software engineer for the %s language",
context.filetype
)
end,
opts = {
visible = false,
},
},
{
role = constants.USER_ROLE,
content = "I want you to ",
opts = {
auto_submit = false,
},
},
},
-- This is the second group of prompts
{
{
role = constants.USER_ROLE,
content = "Great. Now let's consider your code. I'd like you to check it carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it.",
opts = {
auto_submit = false,
},
},
},
-- This is the final group of prompts
{
{
role = constants.USER_ROLE,
content = "Thanks. Now let's revise the code based on the feedback, without additional explanations.",
opts = {
auto_submit = false,
},
},
},
},
},
```

You'll notice that the comments use the notion of "groups". These are collections of prompts that are added to the chat buffer in sequence; in fact, the second group will only be added once the LLM has responded to the first group, and so on.
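
As a rough sketch of adding your own entry (this assumes user-defined workflows are registered under a `prompt_library` key in `setup()` and that plain string roles are accepted; the entry name and prompt wording are purely illustrative), a tiny two-group workflow might look like:

```lua
require("codecompanion").setup({
  prompt_library = {
    -- Hypothetical entry; name, description and prompt text are placeholders
    ["Explain then improve"] = {
      strategy = "workflow",
      description = "Explain the selected code, then suggest improvements",
      prompts = {
        {
          -- First group: added to the chat buffer when the workflow starts
          {
            role = "user",
            content = "Explain what the selected code does.",
            opts = { auto_submit = false },
          },
        },
        {
          -- Second group: only added once the LLM has replied to the first
          {
            role = "user",
            content = "Now suggest one concrete improvement to it.",
            opts = { auto_submit = false },
          },
        },
      },
    },
  },
})
```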

## Conclusion

Hopefully, this serves as a useful introduction to how you can expand CodeCompanion to create prompts that suit your workflow. It's worth checking out the [actions.lua](https://github.com/olimorris/codecompanion.nvim/blob/main/lua/codecompanion/actions.lua) and [config.lua](https://github.com/olimorris/codecompanion.nvim/blob/main/lua/codecompanion/config.lua) files for more complex examples.
74 changes: 73 additions & 1 deletion doc/codecompanion-recipes.txt
@@ -1,4 +1,4 @@
*codecompanion-recipes.txt* For NVIM v0.10.0 Last change: 2024 October 08
*codecompanion-recipes.txt* For NVIM v0.10.0 Last change: 2024 October 17

==============================================================================
Table of Contents *codecompanion-recipes-table-of-contents*
@@ -8,6 +8,7 @@ Table of Contents *codecompanion-recipes-table-of-contents*
- Recipe #1: Creating boilerplate code |codecompanion-recipes-recipe-#1:-creating-boilerplate-code|
- Recipe #2: Using context in your prompts |codecompanion-recipes-recipe-#2:-using-context-in-your-prompts|
- Other Configuration Options |codecompanion-recipes-other-configuration-options|
- Workflows |codecompanion-recipes-workflows|
- Conclusion |codecompanion-recipes-conclusion|

==============================================================================
@@ -393,6 +394,77 @@ In this example, the LLM response will be placed in a new buffer and the
user’s code will not be returned to them.


WORKFLOWS *codecompanion-recipes-workflows*

Workflows, at their core, are simply multiple prompts that are sent to the LLM
in a turn-based manner. I highly recommend reading Issue 242
<https://www.deeplearning.ai/the-batch/issue-242/> of The Batch to understand
their use. Workflows are set up in exactly the same way as prompts in the
prompt library. Take the `code workflow` as an example:

>lua
["Code workflow"] = {
strategy = "workflow",
description = "Use a workflow to guide an LLM in writing code",
opts = {
index = 4,
is_default = true,
short_name = "workflow",
},
prompts = {
{
-- We can group prompts together to make a workflow
-- This is the first prompt in the workflow
{
role = constants.SYSTEM_ROLE,
content = function(context)
return fmt(
"You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. Always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question. Don't be verbose in your answers, but do provide details and examples where it might help the explanation. You are an expert software engineer for the %s language",
context.filetype
)
end,
opts = {
visible = false,
},
},
{
role = constants.USER_ROLE,
content = "I want you to ",
opts = {
auto_submit = false,
},
},
},
-- This is the second group of prompts
{
{
role = constants.USER_ROLE,
content = "Great. Now let's consider your code. I'd like you to check it carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it.",
opts = {
auto_submit = false,
},
},
},
-- This is the final group of prompts
{
{
role = constants.USER_ROLE,
content = "Thanks. Now let's revise the code based on the feedback, without additional explanations.",
opts = {
auto_submit = false,
},
},
},
},
},
<

You’ll notice that the comments use the notion of "groups". These are
collections of prompts that are added to the chat buffer in sequence; in
fact, the second group will only be added once the LLM has responded to the
first group, and so on.


CONCLUSION *codecompanion-recipes-conclusion*

Hopefully, this serves as a useful introduction to how you can expand
64 changes: 39 additions & 25 deletions doc/codecompanion.txt
@@ -1,4 +1,4 @@
*codecompanion.txt* For NVIM v0.10.0 Last change: 2024 October 14
*codecompanion.txt* For NVIM v0.10.0 Last change: 2024 October 17

==============================================================================
Table of Contents *codecompanion-table-of-contents*
@@ -92,7 +92,7 @@ Install the plugin with your preferred package manager:
[!IMPORTANT] The plugin requires the markdown Tree-sitter parser to be
installed with `:TSInstall markdown`
Telescope.nvim <https://github.com/nvim-telescope/telescope.nvim> is a
suggested inclusion in order to leverage Slash Commands. However other
suggested inclusion in order to leverage Slash Commands. However, other
providers are available. Please refer to the |codecompanion-chat-buffer|
section for more information.

@@ -166,8 +166,8 @@ There are keymaps available to accept or reject edits from the LLM in the
**Action Palette**

Run `:CodeCompanionActions` to open the action palette, which gives you access
to all functionality of the plugin. By default the plugin uses `vim.ui.select`
however you can change the provider by altering the
to all functionality of the plugin. By default, the plugin uses `vim.ui.select`;
however, you can change the provider by altering the
`display.action_palette.provider` config value to be `telescope` or
`mini_pick`. You can also call the Telescope extension with `:Telescope
codecompanion`.
@@ -183,7 +183,7 @@ The plugin has three core commands:
- `CodeCompanionChat` - Open a chat buffer
- `CodeCompanionActions` - Open the _Action Palette_

However there are multiple options available:
However, there are multiple options available:

- `CodeCompanion <your prompt>` - Prompt the inline assistant
- `CodeCompanion /<prompt library>` - Use the |codecompanion-prompt-library| with the inline assistant e.g. `/commit`
@@ -192,9 +192,9 @@ However there are multiple options available:
- `CodeCompanionChat Toggle` - Toggle a chat buffer
- `CodeCompanionChat Add` - Add visually selected chat to the current chat buffer

**Suggested workflow**
**Suggested plugin workflow**

For an optimum workflow, I recommend the following keymaps:
For an optimum plugin workflow, I recommend the following:

>lua
vim.api.nvim_set_keymap("n", "<C-a>", "<cmd>CodeCompanionActions<cr>", { noremap = true, silent = true })
@@ -229,7 +229,8 @@ The plugin utilises objects called Strategies. These are the different ways
that a user can interact with the plugin. The _chat_ strategy harnesses a
buffer to allow direct conversation with the LLM. The _inline_ strategy allows
for output from the LLM to be written directly into a pre-existing Neovim
buffer.
buffer. The _workflow_ strategy is a wrapper for the _chat_ strategy, allowing
for |codecompanion-agentic-workflows|.

The plugin allows you to specify adapters for each strategy and also for each
|codecompanion-prompt-library| entry.
@@ -381,6 +382,27 @@ pass it via an "Authorization" header:
})
<

**Using OpenAI-compatible models like LMStudio or self-hosted models**

To use any other OpenAI-compatible model, change the URL in the `env` table
and set an API key:

>lua
require("codecompanion").setup({
adapters = {
ollama = function()
return require("codecompanion.adapters").extend("openai_compatible", {
env = {
url = "http[s]://open_compatible_ai_url", -- optional: default value is ollama url http://127.0.0.1:11434
api_key = "OpenAI_API_KEY", -- optional: if your endpoint is authenticated
chat_url = "/v1/chat/completions", -- optional: default value, override if different
},
})
end,
},
})
<

**Connecting via a Proxy**

You can also connect via a proxy:
@@ -576,24 +598,16 @@ in the TOOLS <doc/TOOLS.md> guide.

WORKFLOWS ~

Workflows prompt an LLM multiple times, giving it the ability to build its
answer step-by-step rather than all at once. This leads to much better
output, as outlined <https://www.deeplearning.ai/the-batch/issue-242/> by
Andrew Ng. In fact, it’s possible for older models like GPT 3.5 to
outperform newer models (using traditional zero-shot inference).

[!WARNING] Workflows may result in the significant consumption of tokens if
you’re using an external LLM.
As outlined <https://www.deeplearning.ai/the-batch/issue-242/> by Andrew Ng,
agentic workflows have the ability to dramatically improve the output of an
LLM. In fact, it’s possible for older models like GPT 3.5 to outperform newer
models (using traditional zero-shot inference). Andrew discussed
<https://www.youtube.com/watch?v=sal78ACtGTc&t=249s> how an agentic workflow
can be utilised via multiple prompts that invoke the LLM to self-reflect.
Implementing Andrew’s advice, the plugin supports this notion via the use of
workflows. At various stages of a pre-defined workflow, the plugin will
automatically prompt the LLM without any input or triggering required from the
user.

Currently, the plugin comes with the following workflows:

- Adding a new feature
- Refactoring code
Implementing Andrew’s advice, the plugin will automatically prompt the LLM at
various stages of a pre-defined workflow, without any input or triggering
required from the user. The plugin contains a default `Code workflow`, as part
of the prompt library, which guides the LLM in writing better code.

Of course you can add new workflows by following the RECIPES <doc/RECIPES.md>
guide.