
Commit 889e552

📝 Recommend starcoder 2 7b
1 parent ebb9650 commit 889e552

File tree

1 file changed (+22, -6 lines)

docs/docs/walkthroughs/tab-autocomplete.md

Lines changed: 22 additions & 6 deletions
@@ -2,12 +2,28 @@
 
 Continue now provides support for tab autocomplete in [VS Code](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and [JetBrains IDEs](https://plugins.jetbrains.com/plugin/22707-continue/edit). We will be greatly improving the experience over the next few releases, and it is always helpful to hear feedback. If you have any problems or suggestions, please let us know in our [Discord](https://discord.gg/vapESyrFmJ).
 
+## Setting up with Starcoder 2 7b (recommended)
+
+If you want to have the best autocomplete experience, we recommend using Starcoder 2 7b, which is available through [Fireworks AI](https://fireworks.ai/models/fireworks/starcoder-7b). To do this, obtain an API key and add it to your `config.json`:
+
+```json
+{
+  "tabAutocompleteModel": {
+    "title": "Starcoder 2",
+    "provider": "openai",
+    "model": "accounts/fireworks/models/starcoder-7b",
+    "apiBase": "https://api.fireworks.ai/inference/v1",
+    "apiKey": "YOUR_API_KEY"
+  }
+}
+```
+
 ## Setting up with Ollama (default)
 
 We recommend setting up tab-autocomplete with a local Ollama instance. To do this, first download the latest version of Ollama from [here](https://ollama.ai). Then, run the following command to download our recommended model:
 
 ```bash
-ollama run starcoder:3b
+ollama run starcoder2:3b
 ```
 
 Once it has been downloaded, you should begin to see completions in VS Code.
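If completions don't show up after adding the Fireworks config, it can help to test the endpoint outside the editor first. Below is a minimal sketch, assuming the `apiBase` above exposes an OpenAI-compatible `/completions` route (which the `"provider": "openai"` setting implies); the prompt string is only an illustrative placeholder:

```bash
# Smoke-test the Fireworks endpoint with the same model ID and key as config.json;
# a JSON completion in the response confirms the key and model before involving the IDE.
curl https://api.fireworks.ai/inference/v1/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "accounts/fireworks/models/starcoder-7b",
    "prompt": "def fibonacci(n):",
    "max_tokens": 32
  }'
```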
@@ -21,7 +37,7 @@ All of the configuration options available for chat models are available to use
   "tabAutocompleteModel": {
     "title": "Tab Autocomplete Model",
     "provider": "ollama",
-    "model": "starcoder:3b",
+    "model": "starcoder2:3b",
     "apiBase": "https://<my endpoint>"
   },
   ...
@@ -32,7 +48,7 @@ If you aren't yet familiar with the available options, you can learn more in our
 
 ### What model should I use?
 
-If you are running the model locally, we recommend `starcoder:3b`.
+If you are running the model locally, we recommend `starcoder2:3b`.
 
 If you find it to be too slow, you should try `deepseek-coder:1.3b-base`.
 
@@ -46,7 +62,7 @@ The following can be configured in `config.json`:
 
 ### `tabAutocompleteModel`
 
-This is just another object like the ones in the `"models"` array of `config.json`. You can choose and configure any model you would like, but we strongly suggest using a small model made for tab-autocomplete, such as `deepseek-1b`, `starcoder-1b`, or `starcoder-3b`.
+This is just another object like the ones in the `"models"` array of `config.json`. You can choose and configure any model you would like, but we strongly suggest using a small model made for tab-autocomplete, such as `deepseek-1b`, `starcoder-1b`, or `starcoder2-3b`.
 
 ### `tabAutocompleteOptions`
 
@@ -70,7 +86,7 @@ This object allows you to customize the behavior of tab-autocomplete. The available
   "tabAutocompleteModel": {
     "title": "Tab Autocomplete Model",
     "provider": "ollama",
-    "model": "starcoder:3b",
+    "model": "starcoder2:3b",
     "apiBase": "https://<my endpoint>"
   },
   "tabAutocompleteOptions": {
@@ -93,7 +109,7 @@ Follow these steps to ensure that everything is set up correctly:
 
 1. Make sure you have the "Enable Tab Autocomplete" setting checked (in VS Code, you can toggle by clicking the "Continue" button in the status bar).
 2. Make sure you have downloaded Ollama.
-3. Run `ollama run starcoder:3b` to verify that the model is downloaded.
+3. Run `ollama run starcoder2:3b` to verify that the model is downloaded.
 4. Make sure that any other completion providers are disabled (e.g. Copilot), as they may interfere.
 5. Make sure that you aren't also using another Ollama model for chat. This will cause Ollama to constantly load and unload the models from memory, resulting in slow responses (or none at all) for both.
 6. Check the output of the logs to find any potential errors (cmd/ctrl+shift+p -> "Toggle Developer Tools" -> "Console" tab in VS Code, ~/.continue/core.log in JetBrains).
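For steps 2 and 3 of the list above, the Ollama side can also be verified from a terminal, independently of the editor. A quick sketch assuming Ollama's standard CLI; the one-line prompt is just an arbitrary test input:

```bash
# Is the model present locally? (covers steps 2 and 3)
ollama list | grep starcoder2

# One-off generation to confirm the model loads and responds
ollama run starcoder2:3b "def add(a, b):"
```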
