
Commit

docs: remove /docs routing
Patrick-Erichsen committed Sep 16, 2024
1 parent ee99824 commit b1aeb12
Showing 37 changed files with 122 additions and 110 deletions.
@@ -7,7 +7,7 @@ sidebar_position: 3

## Slash commands

Slash commands can be combined with additional instructions, including [context providers](../chat/context-selection.md) or [highlighted code](../chat/context-selection.md).
Slash commands can be combined with additional instructions, including [context providers](chat/context-selection.md) or [highlighted code](chat/context-selection.md).

For example, with the “/edit” slash command, you should describe the edit that you want the LLM to perform.

@@ -7,6 +7,6 @@ sidebar_position: 4

While many Actions are based on templated prompts and can be customized with `.prompt` files, there are a number that execute more complex code under the hood.

Actions that generate inline diffs, like “/edit”, “/comment”, or right-click actions, use the same prompt and response processing logic as [Edit](../edit/how-it-works.md).
Actions that generate inline diffs, like “/edit”, “/comment”, or right-click actions, use the same prompt and response processing logic as [Edit](edit/how-it-works.md).

To learn how other slash commands work, see the full reference [here](../../customize/slash-commands.md).
To learn how other slash commands work, see the full reference [here](../customize/slash-commands.md).
@@ -7,20 +7,20 @@ sidebar_position: 5

## Built-in slash commands

Continue has a large library of built-in slash commands, but when you first install, we only display the most commonly used ones, like “/edit”, “/comment”, and “/share”. To add more actions, you can open [config.json](../../customize/config.mdx) and add them to the `slashCommands` array.
Continue has a large library of built-in slash commands, but when you first install, we only display the most commonly used ones, like “/edit”, “/comment”, and “/share”. To add more actions, you can open [config.json](../customize/config.mdx) and add them to the `slashCommands` array.
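
For instance, enabling a couple of the built-in commands is a one-line entry per command. A minimal sketch, assuming the built-in “commit” and “share” commands (see the reference above for the full list and exact descriptions):

```json title="config.json"
"slashCommands": [
  { "name": "commit", "description": "Generate a git commit message" },
  { "name": "share", "description": "Export the current chat session to markdown" }
]
```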

## Custom slash commands

There are two ways to add custom slash commands:

1. With `.prompt` files - this is recommended in most cases. See the full reference [here](../../customize/deep-dives/prompt-files.md).
1. With `.prompt` files - this is recommended in most cases. See the full reference [here](../customize/deep-dives/prompt-files.md).
2. With `config.ts` - this gives you full programmatic access to the LLM, IDE, and other important entities by writing a JavaScript/TypeScript function.

### Custom Slash Commands with `config.ts`

<!-- TODO: We need a config.ts reference -->
<!-- :::tip[config.ts]
Before adding a custom slash command, we recommend reading the [introduction to `config.ts`](../../customize/config.mdx).
Before adding a custom slash command, we recommend reading the [introduction to `config.ts`](../customize/config.mdx).
::: -->

If you want to go a step further than writing custom commands with natural language, you can write a custom function that returns the response. This requires using `config.ts` instead of `config.json`.
@@ -49,7 +49,7 @@ export function modifyConfig(config: Config): Config {
```

<!-- TODO: We need a config.ts reference -->
<!-- For full `config.ts` reference, see [here](../reference/config-ts.md). -->
<!-- For full `config.ts` reference, see [here](reference/config-ts.md). -->
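
As a sketch of what the collapsed example above looks like, the following `modifyConfig` adds a hypothetical “commit” command. It assumes the `Config` type and the `sdk.ide.getDiff()` / `sdk.llm.streamComplete()` helpers provided by Continue; treat the command name and prompt as illustrative:

```typescript
export function modifyConfig(config: Config): Config {
  config.slashCommands?.push({
    name: "commit",
    description: "Write a commit message for the current changes",
    run: async function* (sdk) {
      // Assumed helper: fetch the working-tree diff from the IDE
      const diff = await sdk.ide.getDiff();
      // Stream the LLM's completion back into the sidebar chunk by chunk
      for await (const chunk of sdk.llm.streamComplete(
        `${diff}\n\nWrite a commit message for the above set of changes:`,
      )) {
        yield chunk;
      }
    },
  });
  return config;
}
```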

## Other custom actions

@@ -18,13 +18,13 @@ The most common way to invoke an action is with a slash command. These are short

![slash-commands](/img/slash-commands.png)

A few of the most useful slash commands are available by default, like “/edit”, “/comment”, and “/share”, but Continue has a large built-in library of other options. To enable these, learn more [here](../../customize/slash-commands.md).
A few of the most useful slash commands are available by default, like “/edit”, “/comment”, and “/share”, but Continue has a large built-in library of other options. To enable these, learn more [here](../customize/slash-commands.md).

### Prompt files

It is also possible to write your own slash command by defining a “.prompt file.” Prompt files can be as simple as a text file, but they can also include templating so that you can refer to files, URLs, highlighted code, and more.

The full .prompt file reference can be found [here](../../customize/deep-dives/prompt-files.md).
The full .prompt file reference can be found [here](../customize/deep-dives/prompt-files.md).
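
For example, a minimal `.prompt` file might look like this. A sketch: the options header above the `---` and the `{{{ input }}}` placeholder (filled with whatever you type after the command) follow the prompt-file format described in the reference above:

```
temperature: 0.5
---
<system>
You are an expert reviewer. Point out bugs and style issues.
</system>

{{{ input }}}
```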

:::tip[Prompt library]
To assist you in getting started, [we've curated a small library of `.prompt` files](https://github.com/continuedev/prompt-file-examples). We encourage community contributions to this repository, so please consider opening up a pull request with your own prompts!
8 changes: 8 additions & 0 deletions docs/docs/actions/model-setup.md
@@ -0,0 +1,8 @@
---
title: Model setup
description: Actions model setup
keywords: [model]
sidebar_position: 2
---

By default, Actions uses the same model as [Chat](chat/model-setup.mdx), since we recommend a similar 400B+ parameter model or one of the frontier models for the complex instructions being carried out.
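
In practice this means a single entry in the `models` array of `config.json` serves both. A minimal sketch, where the model string and API key are placeholders:

```json title="config.json"
"models": [
  {
    "title": "Claude 3.5 Sonnet",
    "provider": "anthropic",
    "model": "claude-3-5-sonnet-20240620",
    "apiKey": "YOUR_ANTHROPIC_API_KEY"
  }
]
```
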
File renamed without changes.
@@ -28,4 +28,4 @@ Language models aren't perfect, but can be made much closer by adjusting their o

We will also occasionally filter out responses entirely if they are bad, most often due to extreme repetition.

You can learn more about how it works in the [Autocomplete deep dive](../../customize/deep-dives/autocomplete.md).
You can learn more about how it works in the [Autocomplete deep dive](../customize/deep-dives/autocomplete.md).
@@ -5,7 +5,7 @@ keywords: [customize]
sidebar_position: 5
---

Continue offers a handful of parameters in [`config.json`](../../customize/config.mdx) that can be tuned to find the perfect balance between suggestion quality and system performance for your specific needs and hardware capabilities:
Continue offers a handful of parameters in [`config.json`](../customize/config.mdx) that can be tuned to find the perfect balance between suggestion quality and system performance for your specific needs and hardware capabilities:

```json title="config.json"
"tabAutocompleteOptions": {
@@ -21,4 +21,4 @@ Continue offers a handful of parameters in [`config.json`](../../customize/confi
- `prefixPercentage`: Defines the proportion of the prompt dedicated to the code before the cursor.
- `multilineCompletions`: Controls whether suggestions can span multiple lines ("always", "never", or "auto").

For a comprehensive guide on all configuration options and their impacts, see the [Autocomplete deep dive](../../customize/deep-dives/autocomplete.md).
For a comprehensive guide on all configuration options and their impacts, see the [Autocomplete deep dive](../customize/deep-dives/autocomplete.md).
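
Filled in, the collapsed block above might look like the following sketch; the values are illustrative, not recommendations:

```json title="config.json"
"tabAutocompleteOptions": {
  "debounceDelay": 350,
  "maxPromptTokens": 1024,
  "prefixPercentage": 0.5,
  "multilineCompletions": "auto"
}
```
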
File renamed without changes.
@@ -51,4 +51,4 @@ For LM Studio users, navigate to the "My Models" section, find your desired mode

## Other experiences

There are many more models and providers you can use with Autocomplete. Check them out [here](../../customize/model-types/autocomplete.md).
There are many more models and providers you can use with Autocomplete. Check them out [here](../customize/model-types/autocomplete.md).
@@ -19,28 +19,28 @@ You can include the currently open file as context by pressing `cmd/ctrl + opt +

## Specific file

You can include a specific file in your current workspace as context by typing [`@files`](../../customize/context-providers.md#files) and selecting the file.
You can include a specific file in your current workspace as context by typing [`@files`](../customize/context-providers.md#files) and selecting the file.

## Specific folder

You can include a folder in your current workspace as context by typing [`@directory`](../../customize/context-providers.md#folders) and selecting the directory. It [works like `@codebase`](../../customize/deep-dives/codebase.md) but only includes the files in the selected directory.
You can include a folder in your current workspace as context by typing [`@directory`](../customize/context-providers.md#folders) and selecting the directory. It [works like `@codebase`](../customize/deep-dives/codebase.md) but only includes the files in the selected directory.

## Entire codebase

You can include your entire codebase as context by typing [`@codebase`](../../customize/context-providers.md#codebase-retrieval). You can learn about how @codebase works [here](../../customize/deep-dives/codebase.md).
You can include your entire codebase as context by typing [`@codebase`](../customize/context-providers.md#codebase-retrieval). You can learn about how @codebase works [here](../customize/deep-dives/codebase.md).

## Documentation site

You can include a documentation site as context by typing [`@docs`](../../customize/context-providers.md#documentation) and selecting the documentation site. You can learn about how @docs works [here](../../customize/deep-dives/docs.md).
You can include a documentation site as context by typing [`@docs`](../customize/context-providers.md#documentation) and selecting the documentation site. You can learn about how @docs works [here](../customize/deep-dives/docs.md).

## Terminal contents

You can include the contents of the terminal in your IDE as context by typing [`@terminal`](../../customize/context-providers.md#terminal).
You can include the contents of the terminal in your IDE as context by typing [`@terminal`](../customize/context-providers.md#terminal).

## Git diff

You can include all of the changes you've made to your current branch by typing [`@diff`](../../customize/context-providers.md#git-diff).
You can include all of the changes you've made to your current branch by typing [`@diff`](../customize/context-providers.md#git-diff).

## Other context

You can see a full list of built-in context providers [here](../../customize/context-providers.md) and how to create your own custom context provider [here](../../customize/tutorials/build-your-own-context-provider.md).
You can see a full list of built-in context providers [here](../customize/context-providers.md) and how to create your own custom context provider [here](../customize/tutorials/build-your-own-context-provider.md).
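
Many of these providers are enabled by adding entries to the `contextProviders` array in `config.json`. A minimal sketch, with provider names following the snippets in the context providers reference:

```json title="config.json"
"contextProviders": [
  { "name": "docs" },
  { "name": "terminal" },
  { "name": "diff" }
]
```
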
@@ -9,6 +9,6 @@ Using any selected code sections, all context that you have selected with @, and

The model response is then streamed directly back to the sidebar. Each code section included in the response will be placed into its own code block, each of which gives you buttons to “Apply to current file”, “Insert at cursor”, or “Copy”. When you press `cmd/ctrl + L` (VS Code) or `cmd/ctrl + J` (JetBrains) at the end of a session, all context is cleared and a new session is started, so that you can begin a new task.

If you would like to view the exact prompt that is sent to the model during Chat, you can [view this in the prompt logs](../troubleshooting.md#llm-prompt-logs).
If you would like to view the exact prompt that is sent to the model during Chat, you can [view this in the prompt logs](troubleshooting.md#llm-prompt-logs).

You can learn more about how `@codebase` works [here](../../customize/deep-dives/codebase.md) and `@docs` [here](../../customize/deep-dives/docs.md).
You can learn more about how `@codebase` works [here](../customize/deep-dives/codebase.md) and `@docs` [here](../customize/deep-dives/docs.md).
13 changes: 13 additions & 0 deletions docs/docs/chat/how-to-customize.md
@@ -0,0 +1,13 @@
---
title: How to customize
description: How to customize Chat
keywords: [customize, chat]
sidebar_position: 5
---

There are a number of different ways to customize Chat:

- You can configure [`@codebase`](../customize/deep-dives/codebase.md)
- You can create your own [custom code RAG](../customize/tutorials/custom-code-rag.md)
- You can configure [`@docs`](../customize/deep-dives/docs.md)
- You can [build your own context provider](../customize/tutorials/build-your-own-context-provider.md)
File renamed without changes.
@@ -9,7 +9,7 @@ import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

:::info
This page recommends models and providers for Chat. Read more about how to set up your `config.json` [here](../../customize/config.mdx).
This page recommends models and providers for Chat. Read more about how to set up your `config.json` [here](../customize/config.mdx).
:::

## Best overall experience
@@ -18,7 +18,7 @@ For the best overall Chat experience, you will want to use a 400B+ parameter mod

### Claude 3.5 Sonnet from Anthropic

Our current top recommendation is Claude 3.5 Sonnet from [Anthropic](../../customize/model-providers/top-level/anthropic.md).
Our current top recommendation is Claude 3.5 Sonnet from [Anthropic](../customize/model-providers/top-level/anthropic.md).

```json title="config.json"
"models": [
@@ -33,7 +33,7 @@ Our current top recommendation is Claude 3.5 Sonnet from [Anthropic](../../custo

### Llama 3.1 405B from Meta

If you prefer to use an open-weight model, then Llama 3.1 405B from Meta is your best option right now. You will need to decide whether to use it through a SaaS model provider (e.g. [Together](../../customize/model-providers/more/together.md) or [Groq](../../customize/model-providers/more/groq.md)) or to self-host it (e.g. using [vLLM](../../customize/model-providers/more/vllm.md) or [Ollama](../../customize/model-providers/top-level/ollama.md)).
If you prefer to use an open-weight model, then Llama 3.1 405B from Meta is your best option right now. You will need to decide whether to use it through a SaaS model provider (e.g. [Together](../customize/model-providers/more/together.md) or [Groq](../customize/model-providers/more/groq.md)) or to self-host it (e.g. using [vLLM](../customize/model-providers/more/vllm.md) or [Ollama](../customize/model-providers/top-level/ollama.md)).

<Tabs groupId="providers">
<TabItem value="Together">
@@ -86,7 +86,7 @@ If you prefer to use an open-weight model, then Llama 3.1 405B from Meta is your

### GPT-4o from OpenAI

If you prefer to use a model from [OpenAI](../../customize/model-providers/top-level/openai.md), then we recommend GPT-4o.
If you prefer to use a model from [OpenAI](../customize/model-providers/top-level/openai.md), then we recommend GPT-4o.

```json title="config.json"
"models": [
@@ -101,7 +101,7 @@ If you prefer to use a model from [OpenAI](../../customize/model-providers/top-l

### Gemini 1.5 Pro from Google

If you prefer to use a model from [Google](../../customize/model-providers/top-level/gemini.md), then we recommend Gemini 1.5 Pro.
If you prefer to use a model from [Google](../customize/model-providers/top-level/gemini.md), then we recommend Gemini 1.5 Pro.

```json title="config.json"
"models": [
@@ -120,7 +120,7 @@ For the best local, offline Chat experience, you will want to use a model that i

### Llama 3.1 8B

If your local machine can run an 8B parameter model, then we recommend running Llama 3.1 8B on your machine (e.g. using [Ollama](../../customize/model-providers/top-level/ollama.md) or [LM Studio](../../customize/model-providers/more/lmstudio.md)).
If your local machine can run an 8B parameter model, then we recommend running Llama 3.1 8B on your machine (e.g. using [Ollama](../customize/model-providers/top-level/ollama.md) or [LM Studio](../customize/model-providers/more/lmstudio.md)).

<Tabs groupId="providers">
<TabItem value="Ollama">
@@ -149,7 +149,7 @@ If your local machine can run an 8B parameter model, then we recommend running L

### DeepSeek Coder 2 16B

If your local machine can run a 16B parameter model, then we recommend running DeepSeek Coder 2 16B (e.g. using [Ollama](../../customize/model-providers/top-level/ollama.md) or [LM Studio](../../customize/model-providers/more/lmstudio.md)).
If your local machine can run a 16B parameter model, then we recommend running DeepSeek Coder 2 16B (e.g. using [Ollama](../customize/model-providers/top-level/ollama.md) or [LM Studio](../customize/model-providers/more/lmstudio.md)).

<Tabs groupId="providers">
<TabItem value="Ollama">
@@ -181,4 +181,4 @@ If your local machine can run a 16B parameter model, then we recommend running D

## Other experiences

There are many more models and providers you can use with Chat beyond those mentioned above. Read more [here](../../customize/model-types/chat.md).
There are many more models and providers you can use with Chat beyond those mentioned above. Read more [here](../customize/model-types/chat.md).
4 changes: 2 additions & 2 deletions docs/docs/customize/context-providers.md
@@ -54,7 +54,7 @@ Type `@docs` to index and retrieve snippets from any documentation site.
{ "name": "docs" }
```

To learn more, visit [`@docs`](../customize/deep-dives/docs.md).
To learn more, visit [`@docs`](customize/deep-dives/docs.md).

### Open Files

@@ -66,7 +66,7 @@ Type '@open' to reference the contents of all of your open files. Set `onlyPinne
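
Following the same pattern as the other providers, and assuming the `onlyPinned` option mentioned above is passed via `params`:

```json
{ "name": "open", "params": { "onlyPinned": true } }
```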

### Codebase Retrieval

Type `@codebase` to automatically retrieve the most relevant snippets from your codebase. Read more about indexing and retrieval [here](../customize/deep-dives/codebase.md).
Type `@codebase` to automatically retrieve the most relevant snippets from your codebase. Read more about indexing and retrieval [here](customize/deep-dives/codebase.md).

```json
{ "name": "codebase" }
4 changes: 2 additions & 2 deletions docs/docs/customize/model-types/autocomplete.md
@@ -7,10 +7,10 @@ sidebar_position: 1

An "autocomplete model" is an LLM that is trained on a special format called fill-in-the-middle (FIM). This format is designed to be given the prefix and suffix of a code file and predict what goes between. This task is very specific, which on one hand means that the models can be smaller (even a 3B parameter model can perform well). On the other hand, this means that Chat models, though larger, will perform poorly.

In Continue, these models are used to display inline [Autocomplete](../../docs/chat/how-to-use-it.md) suggestions as you type.
In Continue, these models are used to display inline [Autocomplete](../../chat/how-to-use-it.md) suggestions as you type.

## Recommended Autocomplete models

If you have the ability to use any model, we recommend [Codestral](../model-providers/top-level/mistral.md) from Mistral.

If you want to run a model locally, we recommend [Starcoder2-3B] with [Ollama](../model-providers/top-level/ollama.md).
If you want to run a model locally, we recommend [Starcoder2-3B] with [Ollama](../model-providers/top-level/ollama.md).
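
With Ollama, that would be a `tabAutocompleteModel` entry along these lines (a sketch; the exact model tag may differ):

```json title="config.json"
"tabAutocompleteModel": {
  "title": "StarCoder2 3B",
  "provider": "ollama",
  "model": "starcoder2:3b"
}
```
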
4 changes: 2 additions & 2 deletions docs/docs/customize/model-types/chat.md
@@ -5,7 +5,7 @@ keywords: [chat]
sidebar_position: 2
---

A "chat model" is an LLM that is trained to respond in a conversational format. Because they should be able to answer general questions and generate complex code, the best chat models are typically large, often 405B+ parameters. In Continue, these models are used for [Chat](../../docs/chat/how-to-use-it.md), [Edit](../../docs/edit/how-to-use-it.md), and [Actions](../../docs/actions/how-to-use-it.md).
A "chat model" is an LLM that is trained to respond in a conversational format. Because they should be able to answer general questions and generate complex code, the best chat models are typically large, often 405B+ parameters. In Continue, these models are used for [Chat](../../chat/how-to-use-it.md), [Edit](../../edit/how-to-use-it.md), and [Actions](../../actions/how-to-use-it.md).

## Recommended Chat models

@@ -15,4 +15,4 @@ Otherwise, some of the next best options are:

- [GPT-4o](../model-providers/top-level/openai.md)
- [Gemini 1.5 Pro](../model-providers/top-level/gemini.md)
- [Llama3.1 405B](../tutorials/llama3.1.md)
- [Llama3.1 405B](../tutorials/llama3.1.md)
2 changes: 1 addition & 1 deletion docs/docs/customize/tutorials/llama3.1.md
@@ -110,4 +110,4 @@ SambaNova Cloud provides world record Llama3.1 70B/405B serving.
}
]
}
```
```