|
1 | | -# CodeShell VSCode Extension |
| 1 | +# GAI Choy VSCode Extension |
2 | 2 |
|
3 | | -[](README_EN.md) |
| 3 | +GAI Choy stands for G̲enerative A̲I̲ empowered, C̲ode H̲elper O̲n Y̲our side. |
4 | 4 |
|
5 | | -The `codeshell-vscode` project is an intelligent coding assistant plugin for [Visual Studio Code](https://code.visualstudio.com/Download) built on the [CodeShell LLM](https://github.com/WisdomShell/codeshell). It supports multiple programming languages such as Python, Java, C/C++, JavaScript, and Go, and provides code completion, code explanation, code optimization, comment generation, and conversational Q&A to help developers code more efficiently.
| 5 | +Gai Choy, also known as Chinese mustard greens, is a leafy vegetable with a distinct, pungent flavor often described as spicy, slightly bitter, or peppery. Its strong flavor makes it a popular choice for adding depth and complexity to a variety of dishes. Despite its toughness, it becomes tender and more palatable when cooked, making it a versatile ingredient in the kitchen.
6 | 6 |
|
7 | | -## Requirements
| 7 | +<p align="center"><img src="assets/logo.png"></p> |
8 | 8 |
|
9 | | -- [node](https://nodejs.org/en) version v18 or above
10 | | -- Visual Studio Code version 1.68.1 or above
11 | | -- The [CodeShell model service](https://github.com/WisdomShell/llama_cpp_for_codeshell) is up and running
| 9 | +This project is forked from [codeshell-vscode](https://github.com/WisdomShell/codeshell-vscode), with additional support for Azure OpenAI (AOAI) service integration and a couple of other enhancements. See [NOTICE](NOTICE) for more details. |
12 | 10 |
|
13 | | -## Compile the Plugin
| 11 | +The `GAI Choy` project is an open-source plugin developed based on the [CodeShell LLM](https://github.com/WisdomShell/codeshell) and Azure OpenAI service that supports [Visual Studio Code](https://code.visualstudio.com/Download). It serves as an intelligent coding assistant, offering support for various programming languages such as Python, Java, C/C++, JavaScript, Go, and more. This plugin provides features like code completion, code interpretation, code optimization, comment generation, and conversational Q&A to help developers enhance their coding efficiency in an intelligent manner. |
14 | 12 |
|
15 | | -To package the plugin from source, install `node` v18 or above and run the following commands:
| 13 | +## Why another extension for AOAI? |
16 | 14 |
|
17 | | -```zsh |
18 | | -git clone https://github.com/WisdomShell/codeshell-vscode.git |
19 | | -cd codeshell-vscode |
20 | | -npm install |
21 | | -npm exec vsce package |
22 | | -``` |
23 | | - |
24 | | -This produces a file named `codeshell-vscode-${VERSION_NAME}.vsix`.
25 | | - |
26 | | -## Model Service
27 | | - |
28 | | -The [`llama_cpp_for_codeshell`](https://github.com/WisdomShell/llama_cpp_for_codeshell) project provides a 4-bit quantized version of the [CodeShell LLM](https://github.com/WisdomShell/codeshell), named `codeshell-chat-q4_0.gguf`. The steps to deploy the model service are as follows:
29 | | - |
30 | | -### Compile the Code
31 | | - |
32 | | -+ Linux / Mac (Apple Silicon devices)
33 | | - |
34 | | - ```bash |
35 | | - git clone https://github.com/WisdomShell/llama_cpp_for_codeshell.git |
36 | | - cd llama_cpp_for_codeshell |
37 | | - make |
38 | | - ``` |
39 | | - |
40 | | - On macOS, Metal is enabled by default; enabling Metal loads the model onto the GPU, which significantly improves performance.
41 | | - |
42 | | -+ Mac (non-Apple Silicon devices)
43 | | - |
44 | | - ```bash |
45 | | - git clone https://github.com/WisdomShell/llama_cpp_for_codeshell.git |
46 | | - cd llama_cpp_for_codeshell |
47 | | - LLAMA_NO_METAL=1 make |
48 | | - ``` |
49 | | - |
50 | | - For Macs without Apple Silicon chips, use the `LLAMA_NO_METAL=1` or `LLAMA_METAL=OFF` CMake option at compile time to disable the Metal build so the model runs properly.
51 | | - |
52 | | -+ Windows |
53 | | - |
54 | | - You can compile the code following the Linux method in [Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/about), or follow the approach in the [llama.cpp repository](https://github.com/ggerganov/llama.cpp#build): set up [w64devkit](https://github.com/skeeto/w64devkit/releases) first, then compile as on Linux.
| 15 | +Here's an exhaustive list of the extensions I tried:
55 | 16 |
|
56 | | -### Download the Model
| 17 | +- [openai-vscode](https://marketplace.visualstudio.com/items?itemName=AndrewButson.vscode-openai) |
| 18 | + - No code-completion feature |
| 19 | + - Does not seem to support [clustered AOAI setup behind Azure Application Gateway](https://github.com/denlai-mshk/aoai-fwdproxy-funcapp) |
| 20 | + - Not open sourced |
| 21 | +- [Code GPT](https://marketplace.visualstudio.com/items?itemName=DanielSanMedium.dscodegpt) |
| 22 | + - Similar to the above. Although it provides an auto code completion feature, the supported models are limited without a Plus subscription.
| 23 | + <img src="assets/codegpt_autocomplete_provider.png" width="60%" height="60%"/> |
57 | 24 |
|
58 | | -On the [Hugging Face Hub](https://huggingface.co/WisdomShell), we provide three models: [CodeShell-7B](https://huggingface.co/WisdomShell/CodeShell-7B), [CodeShell-7B-Chat](https://huggingface.co/WisdomShell/CodeShell-7B-Chat), and [CodeShell-7B-Chat-int4](https://huggingface.co/WisdomShell/CodeShell-7B-Chat-int4). The download steps are as follows.
| 25 | +## Requirements |
59 | 26 |
|
60 | | -- To run inference with the [CodeShell-7B-Chat-int4](https://huggingface.co/WisdomShell/CodeShell-7B-Chat-int4) model, download it and place it in the `llama_cpp_for_codeshell/models` folder from the code above
| 27 | +- [node](https://nodejs.org/en) version v18 and above |
| 28 | +- Visual Studio Code version 1.68.1 and above |
| 29 | +- The [CodeShell](https://github.com/WisdomShell/llama_cpp_for_codeshell) service is running (not required for AOAI integration) |
61 | 30 |
|
62 | | - ``` |
63 | | - git clone https://huggingface.co/WisdomShell/CodeShell-7B-Chat-int4/blob/main/codeshell-chat-q4_0.gguf |
64 | | - ``` |
| 31 | +## Compile the Plugin |
65 | 32 |
|
66 | | -- To run inference with [CodeShell-7B](https://huggingface.co/WisdomShell/CodeShell-7B) or [CodeShell-7B-Chat](https://huggingface.co/WisdomShell/CodeShell-7B-Chat), place the model in a local folder, then load it with [TGI](https://github.com/WisdomShell/text-generation-inference.git) to start the model service
| 33 | +To package the plugin from source, run the following commands:
67 | 34 |
|
68 | | -```bash |
69 | | -git clone https://huggingface.co/WisdomShell/CodeShell-7B-Chat |
70 | | -git clone https://huggingface.co/WisdomShell/CodeShell-7B |
71 | | -``` |
72 | | - |
73 | | -### Load the Model
74 | | - |
75 | | -- For the `CodeShell-7B-Chat-int4` model, the `server` command in the `llama_cpp_for_codeshell` project provides the API service
76 | | - |
77 | | -```bash |
78 | | -./server -m ./models/codeshell-chat-q4_0.gguf --host 127.0.0.1 --port 8080 |
| 35 | +```zsh |
| 36 | +git clone https://github.com/carusyte/GAI-Choy.git |
| 37 | +cd GAI-Choy |
| 38 | +npm install |
| 39 | +npm exec vsce package |
79 | 40 | ``` |
80 | 41 |
|
81 | | -Note: if Metal was enabled at compile time but you hit a runtime exception, add the `-ngl 0` argument on the command line to explicitly disable Metal GPU inference so the model runs properly.
82 | | - |
83 | | -- For the [CodeShell-7B](https://huggingface.co/WisdomShell/CodeShell-7B) and [CodeShell-7B-Chat](https://huggingface.co/WisdomShell/CodeShell-7B-Chat) models, use [TGI](https://github.com/WisdomShell/text-generation-inference.git) to load the local model and start the model service
| 42 | +This will produce a vsix package file named `gai-choy-${VERSION_NAME}.vsix`.
84 | 43 |
|
85 | | -## Model Service [NVIDIA GPU]
| 44 | +## Model Service |
86 | 45 |
|
87 | | -Users who want to run inference on NVIDIA GPUs can deploy the [CodeShell LLM](https://github.com/WisdomShell/codeshell) with the [`text-generation-inference`](https://github.com/huggingface/text-generation-inference) project. The deployment steps are as follows:
| 46 | +### Azure OpenAI (AOAI) service |
88 | 47 |
|
89 | | -### Download the Model
| 48 | +The [AOAI service](https://azure.microsoft.com/en-us/products/ai-services/openai-service) setup varies depending on how your cloud infrastructure is designed and implemented. Here's a [how-to article](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource) to get you started. For a more production-grade setup, you may want to consult a cloud architect, engineer, or SRE.
90 | 49 |
|
91 | | -After downloading the model from the [Hugging Face Hub](https://huggingface.co/WisdomShell/CodeShell-7B-Chat), place it under the `$HOME/models` folder so it can be loaded locally.
| 50 | +### CodeShell model |
92 | 51 |
|
93 | | -```bash |
94 | | -git clone https://huggingface.co/WisdomShell/CodeShell-7B-Chat |
95 | | -``` |
96 | | - |
97 | | -### Deploy the Model
98 | | -
99 | | -Run the following command to deploy GPU-accelerated inference with text-generation-inference:
100 | | - |
101 | | -```bash |
102 | | -docker run --gpus 'all' --shm-size 1g -p 9090:80 -v $HOME/models:/data \ |
103 | | - --env LOG_LEVEL="info,text_generation_router=debug" \ |
104 | | - ghcr.nju.edu.cn/huggingface/text-generation-inference:1.0.3 \ |
105 | | - --model-id /data/CodeShell-7B-Chat --num-shard 1 \ |
106 | | - --max-total-tokens 5000 --max-input-length 4096 \ |
107 | | - --max-stop-sequences 12 --trust-remote-code |
108 | | -``` |
109 | | - |
110 | | -For more detailed parameter descriptions, see the [text-generation-inference project documentation](https://github.com/huggingface/text-generation-inference).
| 52 | +Note that this step is not required for AOAI integration. Please refer to the [source repo's README.md](https://github.com/WisdomShell/codeshell-vscode/blob/main/README_EN.md#model-service) for details.
111 | 53 |
|
| 54 | +## Configure the Plugin |
112 | 55 |
|
113 | | -## Configure the Plugin
| 56 | +- Set the address for the CodeShell / AOAI service |
| 57 | +- Configure whether to enable automatic code completion suggestions |
| 58 | +- Set the time delay for triggering automatic code completion suggestions |
| 59 | +- Specify the maximum number of tokens for code completion |
| 60 | +- Specify the maximum number of tokens for Q&A |
| 61 | +- Configure the model runtime environment |
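For orientation, these options live in VS Code's `settings.json`. The keys below are illustrative placeholders mirroring the list above, not the plugin's verbatim identifiers; check the extension's contributed settings in the VS Code Settings UI for the authoritative names:

```json
{
  "GAIChoy.ServerAddress": "http://127.0.0.1:8080",
  "GAIChoy.AutoTriggerCompletion": true,
  "GAIChoy.AutoCompletionDelay": 2,
  "GAIChoy.CompletionMaxTokens": 64,
  "GAIChoy.ChatMaxTokens": 2048,
  "GAIChoy.RunEnvForLLMs": "CPU with llama.cpp"
}
```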
114 | 62 |
|
115 | | -In VSCode, run the `Install from VSIX...` command, select `codeshell-vscode-${VERSION_NAME}.vsix`, and finish installing the plugin.
| 63 | +Note: Different model runtime environments can be configured within the plugin. For the [CodeShell-7B-Chat-int4](https://huggingface.co/WisdomShell/CodeShell-7B-Chat-int4) model, you can choose the `CPU with llama.cpp` option in the `Code Shell: Run Env For LLMs` menu. However, for the [CodeShell-7B](https://huggingface.co/WisdomShell/CodeShell-7B) and [CodeShell-7B-Chat](https://huggingface.co/WisdomShell/CodeShell-7B-Chat) models, you should select the `GPU with TGI toolkit` option.
116 | 64 |
|
117 | | -- Set the address of the CodeShell model service
118 | | -- Configure whether code completion suggestions are triggered automatically
119 | | -- Configure the delay before automatically triggering code completion suggestions
120 | | -- Configure the maximum number of tokens for completion
121 | | -- Configure the maximum number of tokens for Q&A
122 | | -- Configure the model runtime environment
| 65 | +To use the Azure OpenAI service as the LLM backend, there are additional parameters that need to be configured:
123 | 66 |
|
124 | | -Note: different model runtime environments can be configured in the plugin. For the [CodeShell-7B-Chat-int4](https://huggingface.co/WisdomShell/CodeShell-7B-Chat-int4) model, choose the `CPU with llama.cpp` option under `Code Shell: Run Env For LLMs`. For the [CodeShell-7B](https://huggingface.co/WisdomShell/CodeShell-7B) and [CodeShell-7B-Chat](https://huggingface.co/WisdomShell/CodeShell-7B-Chat) models, select the `GPU with TGI toolkit` option.
| 67 | +- Chat model deployed in Azure |
| 68 | +- Completion model deployed in Azure |
| 69 | +- API Key |
| 70 | +- API version |
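As a rough sketch of how these four values fit together: the deployment name and API version select the model endpoint, and the key is sent as a header. The resource and deployment names below are placeholders; the URL shape follows Azure's documented REST pattern for chat completions.

```typescript
// Build the Azure OpenAI chat-completions endpoint from an AOAI resource name,
// a model deployment name, and an API version. Placeholder values only; a
// private or gateway-fronted setup would use a different host.
function buildChatCompletionsUrl(
  resource: string,    // Azure OpenAI resource name (public-cloud endpoint assumed)
  deployment: string,  // name of the chat or completion model deployment
  apiVersion: string   // e.g. "2023-05-15"
): string {
  return `https://${resource}.openai.azure.com/openai/deployments/${deployment}` +
         `/chat/completions?api-version=${encodeURIComponent(apiVersion)}`;
}

// AOAI expects the key in an "api-key" header rather than a Bearer token.
function buildAoaiHeaders(apiKey: string): Record<string, string> {
  return { "Content-Type": "application/json", "api-key": apiKey };
}
```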
125 | 71 |
|
126 | | - |
| 72 | +<img src="assets/settings.png" width="60%" height="60%" /> |
127 | 73 |
|
128 | | -## Features
| 74 | +## Features |
129 | 75 |
|
130 | | -### 1. Code Completion
| 76 | +### 1. Code Completion |
131 | 77 |
|
132 | | -- Automatically triggered code suggestions
133 | | -- Hotkey-triggered code suggestions
| 78 | +- Automatic Code Suggestions |
| 79 | +- Keyboard Shortcut for Code Suggestions |
134 | 80 |
|
135 | | -While coding, completion suggestions can trigger automatically when you stop typing (the delay is configurable to 1-3 seconds via the `Auto Completion Delay` option), or you can trigger them manually with the shortcut `Alt+\` (on `Windows`) or `option+\` (on `Mac`).
| 81 | +During the coding process, code completion suggestions can automatically trigger when you pause input (configurable with the `Auto Completion Delay` option, set to 1-3 seconds). Alternatively, you can manually trigger code completion suggestions using the shortcut key `Alt+\` (for Windows) or `Option+\` (for Mac). |
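The pause-based trigger is essentially a debounce: every keystroke resets a timer, and the completion request fires only once typing has stopped for the configured delay. A minimal sketch of that pattern (not the plugin's actual implementation):

```typescript
// Debounce helper: postpones `fn` until `delayMs` ms pass with no new calls,
// mirroring how a pause in typing can trigger a completion request.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  delayMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer); // a new keystroke resets the timer
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Example: only the last call in a burst of "keystrokes" fires.
const requestCompletion = debounce((prefix: string) => {
  console.log(`requesting completion for: ${prefix}`);
}, 1000); // e.g. a 1-second Auto Completion Delay
```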
136 | 82 |
|
137 | | -When the plugin offers a code suggestion, the suggested content is shown in gray at the editor's cursor position. Press Tab to accept it, or keep typing to ignore it.
| 83 | +When the plugin provides code suggestions, the suggested content appears in gray at the editor's cursor position. You can press the Tab key to accept the suggestion or continue typing to ignore it. |
138 | 84 |
|
139 | 85 |  |
140 | 86 |
|
141 | | -### 2. Code Assistance
| 87 | +### 2. Code Assistance |
142 | 88 |
|
143 | | -- Explain/optimize/clean up a code segment
144 | | -- Generate comments/unit tests for a code segment
145 | | -- Check a code segment for performance/security issues
| 89 | +- Explain/Optimize/Cleanse a Code Segment |
| 90 | +- Generate Comments/Unit Tests for Code |
| 91 | +- Check Code for Performance/Security Issues |
146 | 92 |
|
147 | | -Open the plugin's Q&A interface in the VSCode sidebar, select a code segment in the editor, and pick the corresponding function from the right-click CodeShell menu; the plugin will reply in the Q&A interface.
| 93 | +In the VSCode sidebar, open the plugin's Q&A interface. Select a portion of code in the editor, right-click to access the CodeShell menu, and choose the corresponding function. The plugin will provide relevant responses in the Q&A interface. |
148 | 94 |
|
149 | 95 |  |
150 | 96 |
|
151 | | -### 3. Smart Q&A
| 97 | +### 3. Code Q&A |
152 | 98 |
|
153 | | -- Support for multi-turn conversations
154 | | -- Support for conversation history
155 | | -- Multi-turn dialogue based on conversation history (as context)
156 | | -- Edit a question and ask it again
157 | | -- Regenerate the answer to any question
158 | | -- Interrupt while an answer is being generated
| 99 | +- Support for Multi-turn Conversations |
| 100 | +- Maintain Conversation History |
| 101 | +- Engage in Multi-turn Dialogues Based on Previous Conversations |
| 102 | +- Edit Questions and Rephrase Inquiries |
| 103 | +- Request Fresh Responses for Any Question |
| 104 | +- Interrupt During the Answering Process |
159 | 105 |
|
160 | 106 |  |
161 | 107 |
|
162 | | -In a code block in the Q&A interface, click the copy button to copy the block, or click the insert button to insert its content at the editor's cursor.
| 108 | +Within the Q&A interface's code block, you can click the copy button to copy the code block or use the insert button to insert the code block's content at the editor's cursor location. |
163 | 109 |
|
164 | | -## License
| 110 | +## License |
165 | 111 |
|
166 | 112 | Apache 2.0 |
167 | 113 |
|
| 114 | +## Attribution |
| 115 | + |
| 116 | +- [Illustration Vectors by Vecteezy](https://www.vecteezy.com/free-vector/illustration) |
| 117 | +- [Mustard greens by iconnut from Noun Project (CC BY 3.0)](https://thenounproject.com/browse/icons/term/mustard-greens/) |
| 118 | + |
168 | 119 | ## Star History |
169 | 120 |
|
170 | | -[](https://star-history.com/#WisdomShell/codeshell-vscode&Date) |
| 121 | +[](https://star-history.com/#carusyte/GAI-Choy.git&Date) |