Commit def5b68

rebranded and finetune prompts
1 parent cbf5d39 commit def5b68

24 files changed: 371 additions & 512 deletions

.gitignore

Lines changed: 1 addition & 1 deletion
@@ -2,4 +2,4 @@ out
 dist
 node_modules
 .vscode-test/
-# *.vsix
+*.vsix

CHANGELOG.md

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+# Changelog
+
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
+and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+## [0.1.0] - 2023-11-13
+
+First beta release.

NOTICE

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+# Notice @2023-11-10, author of GAI-Choy
+
+This work is based on the Apache 2.0 licensed project https://github.com/WisdomShell/codeshell-vscode
+
+Major modifications include:
+
+- Add support for Azure OpenAI service integration
+- Enhancements and bugfixes
+- Tweaked appearance of the UI
+- Rebranding of the extension

README.md

Lines changed: 72 additions & 121 deletions
@@ -1,170 +1,121 @@
-# CodeShell VSCode Extension
+# GAI Choy VSCode Extension
 
-[![English readme](https://img.shields.io/badge/README-English-blue)](README_EN.md)
+GAI Choy stands for G̲enerative A̲I̲ empowered, C̲ode H̲elper O̲n Y̲our side.
 
-The `codeshell-vscode` project is an intelligent coding assistant plugin for [Visual Studio Code](https://code.visualstudio.com/Download) built on the [CodeShell LLM](https://github.com/WisdomShell/codeshell). It supports multiple programming languages such as Python, Java, C/C++, JavaScript, and Go, and provides code completion, code explanation, code optimization, comment generation, and conversational Q&A, aiming to help developers improve their programming efficiency through intelligent assistance.
+Gai Choy, also known as Chinese mustard greens, is a type of leafy vegetable with a distinct, pungent flavor often described as spicy, slightly bitter, or peppery. Its strong flavor makes it a popular choice for adding depth and complexity to a variety of dishes. Despite its toughness, it becomes tender and more palatable when cooked, making it a versatile ingredient in the kitchen.
 
-## Requirements
+<p align="center"><img src="assets/logo.png"></p>
 
-- [node](https://nodejs.org/en) version v18 and above
-- Visual Studio Code version 1.68.1 and above
-- The [CodeShell model service](https://github.com/WisdomShell/llama_cpp_for_codeshell) has been started
+This project is forked from [codeshell-vscode](https://github.com/WisdomShell/codeshell-vscode), with additional support for Azure OpenAI (AOAI) service integration and a couple of other enhancements. See [NOTICE](NOTICE) for more details.
 
-## Compile the Plugin
+The `GAI Choy` project is an open-source plugin developed based on the [CodeShell LLM](https://github.com/WisdomShell/codeshell) and the Azure OpenAI service that supports [Visual Studio Code](https://code.visualstudio.com/Download). It serves as an intelligent coding assistant, offering support for various programming languages such as Python, Java, C/C++, JavaScript, Go, and more. This plugin provides features like code completion, code interpretation, code optimization, comment generation, and conversational Q&A to help developers enhance their coding efficiency in an intelligent manner.
 
-To package the plugin from source, install `node` v18 or above and run the following commands:
+## Why another extension for AOAI?
 
-```zsh
-git clone https://github.com/WisdomShell/codeshell-vscode.git
-cd codeshell-vscode
-npm install
-npm exec vsce package
-```
-
-You will then get a file named `codeshell-vscode-${VERSION_NAME}.vsix`.
-
-## Model Service
-
-The [`llama_cpp_for_codeshell`](https://github.com/WisdomShell/llama_cpp_for_codeshell) project provides the 4-bit quantized [CodeShell LLM](https://github.com/WisdomShell/codeshell), named `codeshell-chat-q4_0.gguf`. The steps to deploy the model service are as follows:
-
-### Compile the Code
-
-+ Linux / Mac (Apple Silicon devices)
-
-```bash
-git clone https://github.com/WisdomShell/llama_cpp_for_codeshell.git
-cd llama_cpp_for_codeshell
-make
-```
-
-On macOS, Metal is enabled by default; enabling Metal loads the model onto the GPU, significantly improving performance.
-
-+ Mac (non-Apple Silicon devices)
-
-```bash
-git clone https://github.com/WisdomShell/llama_cpp_for_codeshell.git
-cd llama_cpp_for_codeshell
-LLAMA_NO_METAL=1 make
-```
-
-For Mac users without Apple Silicon chips, the `LLAMA_NO_METAL=1` or `LLAMA_METAL=OFF` CMake options can be used at compile time to disable the Metal build so the model runs correctly.
-
-+ Windows
-
-You can either compile the code in [Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/about) following the Linux method, or set up [w64devkit](https://github.com/skeeto/w64devkit/releases) as described in the [llama.cpp repository](https://github.com/ggerganov/llama.cpp#build) and then compile following the Linux method.
+Here's an exhaustive list of extensions I tried:
 
-### Download the Model
+- [openai-vscode](https://marketplace.visualstudio.com/items?itemName=AndrewButson.vscode-openai)
+  - No code-completion feature
+  - Does not seem to support a [clustered AOAI setup behind Azure Application Gateway](https://github.com/denlai-mshk/aoai-fwdproxy-funcapp)
+  - Not open sourced
+- [Code GPT](https://marketplace.visualstudio.com/items?itemName=DanielSanMedium.dscodegpt)
+  - Similar to the above. Although it provides an auto-code-completion feature, the supported models are limited without a plus subscription.
+    <img src="assets/codegpt_autocomplete_provider.png" width="60%" height="60%"/>
 
-On the [Hugging Face Hub](https://huggingface.co/WisdomShell), we provide three different models: [CodeShell-7B](https://huggingface.co/WisdomShell/CodeShell-7B), [CodeShell-7B-Chat](https://huggingface.co/WisdomShell/CodeShell-7B-Chat), and [CodeShell-7B-Chat-int4](https://huggingface.co/WisdomShell/CodeShell-7B-Chat-int4). The steps to download the models are as follows.
+## Requirements
 
-- To run inference with the [CodeShell-7B-Chat-int4](https://huggingface.co/WisdomShell/CodeShell-7B-Chat-int4) model, download it and place it under the `llama_cpp_for_codeshell/models` folder of the code above
+- [node](https://nodejs.org/en) version v18 and above
+- Visual Studio Code version 1.68.1 and above
+- The [CodeShell](https://github.com/WisdomShell/llama_cpp_for_codeshell) service is running (not required for AOAI integration)
 
-```
-git clone https://huggingface.co/WisdomShell/CodeShell-7B-Chat-int4/blob/main/codeshell-chat-q4_0.gguf
-```
+## Compile the Plugin
 
-- To run inference with [CodeShell-7B](https://huggingface.co/WisdomShell/CodeShell-7B) or [CodeShell-7B-Chat](https://huggingface.co/WisdomShell/CodeShell-7B-Chat), place the model in a local folder and use [TGI](https://github.com/WisdomShell/text-generation-inference.git) to load the local model and start the model service
+If you want to build the package from source code, you need to execute the following commands:
 
-```bash
-git clone https://huggingface.co/WisdomShell/CodeShell-7B-Chat
-git clone https://huggingface.co/WisdomShell/CodeShell-7B
-```
-
-### Load the Model
-
-- For the `CodeShell-7B-Chat-int4` model, the `server` command in the `llama_cpp_for_codeshell` project provides the API service
-
-```bash
-./server -m ./models/codeshell-chat-q4_0.gguf --host 127.0.0.1 --port 8080
+```zsh
+git clone https://github.com/carusyte/GAI-Choy.git
+cd GAI-Choy
+npm install
+npm exec vsce package
 ```
 
-Note: if Metal was enabled at compile time but an exception occurs at runtime, you can add the `-ngl 0` argument on the command line to explicitly disable Metal GPU inference so the model runs correctly.
-
-- For the [CodeShell-7B](https://huggingface.co/WisdomShell/CodeShell-7B) and [CodeShell-7B-Chat](https://huggingface.co/WisdomShell/CodeShell-7B-Chat) models, use [TGI](https://github.com/WisdomShell/text-generation-inference.git) to load the local model and start the model service
+and it will create a vsix package file like: `gai-choy-${VERSION_NAME}.vsix`
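
Editor's note: once packaging succeeds, the generated file can be installed from the command line as well as through VS Code's `Install from VSIX...` command. A minimal sketch, assuming the extension name is `gai-choy` and the version is `0.1.0` (both hypothetical; check `package.json` for the real values):

```shell
# vsce names the package <name>-<version>.vsix, taken from package.json.
NAME="gai-choy"      # hypothetical extension name
VERSION="0.1.0"      # hypothetical version
VSIX="${NAME}-${VERSION}.vsix"
echo "$VSIX"         # gai-choy-0.1.0.vsix

# Install the package into VS Code (requires the `code` CLI on PATH):
# code --install-extension "$VSIX"
```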
 
-## Model Service [NVIDIA GPU]
+## Model Service
 
-For users who want to run inference on NVIDIA GPUs, the [`text-generation-inference`](https://github.com/huggingface/text-generation-inference) project can be used to deploy the [CodeShell LLM](https://github.com/WisdomShell/codeshell). The steps to deploy the model service are as follows:
+### Azure OpenAI (AOAI) service
 
-### Download the Model
+The [AOAI service](https://azure.microsoft.com/en-us/products/ai-services/openai-service) setup varies depending on how your cloud infrastructure is designed and implemented. Here's a [how-to article](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource) to get you started. For a more production-grade setup, you may want to consult a cloud architect, engineer, or SRE.
 
-After downloading the model from the [Hugging Face Hub](https://huggingface.co/WisdomShell/CodeShell-7B-Chat), place it under the `$HOME/models` folder to load it from a local path.
+### CodeShell model
 
-```bash
-git clone https://huggingface.co/WisdomShell/CodeShell-7B-Chat
-```
-
-### Deploy the Model
-
-Use the following command to deploy GPU-accelerated inference with text-generation-inference:
-
-```bash
-docker run --gpus 'all' --shm-size 1g -p 9090:80 -v $HOME/models:/data \
-        --env LOG_LEVEL="info,text_generation_router=debug" \
-        ghcr.nju.edu.cn/huggingface/text-generation-inference:1.0.3 \
-        --model-id /data/CodeShell-7B-Chat --num-shard 1 \
-        --max-total-tokens 5000 --max-input-length 4096 \
-        --max-stop-sequences 12 --trust-remote-code
-```
-
-For more detailed parameter descriptions, please refer to the [text-generation-inference project documentation](https://github.com/huggingface/text-generation-inference).
+Note that this step is not required for AOAI integration. Please refer to the [source repo's README.md](https://github.com/WisdomShell/codeshell-vscode/blob/main/README_EN.md#model-service) for details.
 
+## Configure the Plugin
 
-## Configure the Plugin
+- Set the address for the CodeShell / AOAI service
+- Configure whether to enable automatic code completion suggestions
+- Set the time delay for triggering automatic code completion suggestions
+- Specify the maximum number of tokens for code completion
+- Specify the maximum number of tokens for Q&A
+- Configure the model runtime environment
 
-In VSCode, run the `Install from VSIX...` command, select `codeshell-vscode-${VERSION_NAME}.vsix`, and complete the plugin installation.
+Note: Different model runtime environments can be configured within the plugin. For the [CodeShell-7B-Chat-int4](https://huggingface.co/WisdomShell/CodeShell-7B-Chat-int4) model, you can choose the `CPU with llama.cpp` option in the `Code Shell: Run Env For LLMs` menu. However, for the [CodeShell-7B](https://huggingface.co/WisdomShell/CodeShell-7B) and [CodeShell-7B-Chat](https://huggingface.co/WisdomShell/CodeShell-7B-Chat) models, you should select the `GPU with TGI toolkit` option.
 
-- Set the address of the CodeShell LLM service
-- Configure whether to automatically trigger code completion suggestions
-- Configure the delay for automatically triggering code completion suggestions
-- Configure the maximum number of tokens for completion
-- Configure the maximum number of tokens for Q&A
-- Configure the model runtime environment
+To use the Azure OpenAI service as the LLM backend, there are additional parameters that need to be configured:
 
-Note: Different model runtime environments can be configured in the plugin. For the [CodeShell-7B-Chat-int4](https://huggingface.co/WisdomShell/CodeShell-7B-Chat-int4) model, you can choose the `CPU with llama.cpp` option in the `Code Shell: Run Env For LLMs` setting. For the [CodeShell-7B](https://huggingface.co/WisdomShell/CodeShell-7B) and [CodeShell-7B-Chat](https://huggingface.co/WisdomShell/CodeShell-7B-Chat) models, select the `GPU with TGI toolkit` option.
+- Chat model deployed in Azure
+- Completion model deployed in Azure
+- API Key
+- API version
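
Editor's note: before wiring these values into the plugin settings, it can help to smoke-test them against the AOAI REST endpoint directly. A hedged sketch — the resource name, deployment name, and API version below are placeholders, not values taken from this project:

```shell
# Azure OpenAI chat-completions endpoint layout:
#   https://<resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=<version>
RESOURCE="my-aoai-resource"        # placeholder: your AOAI resource name
DEPLOYMENT="my-chat-deployment"    # placeholder: your chat model deployment name
API_VERSION="2023-05-15"           # placeholder: an api-version your resource accepts
URL="https://${RESOURCE}.openai.azure.com/openai/deployments/${DEPLOYMENT}/chat/completions?api-version=${API_VERSION}"
echo "$URL"

# With a real key exported as AOAI_API_KEY, a one-off request looks like:
# curl -s "$URL" \
#   -H "Content-Type: application/json" \
#   -H "api-key: ${AOAI_API_KEY}" \
#   -d '{"messages":[{"role":"user","content":"ping"}]}'
```

If the request returns a JSON body containing a `choices` array, the deployment name, key, and API version are consistent.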
 
-![Plugin configuration screenshot](https://resource.zsmarter.cn/appdata/codeshell-vscode/screenshots/docs_settings_new.png)
+<img src="assets/settings.png" width="60%" height="60%" />
 
-## Features
+## Features
 
-### 1. Code Completion
+### 1. Code Completion
 
-- Automatically triggered code suggestions
-- Hotkey-triggered code suggestions
+- Automatic Code Suggestions
+- Keyboard Shortcut for Code Suggestions
 
-During coding, code completion suggestions can be triggered automatically when you stop typing (the delay can be set to 1-3 seconds in the `Auto Completion Delay` option), or you can trigger them manually with the shortcut `Alt+\` (on `Windows` machines) or `option+\` (on `Mac` machines).
+During the coding process, code completion suggestions can automatically trigger when you pause input (configurable with the `Auto Completion Delay` option, set to 1-3 seconds). Alternatively, you can manually trigger code completion suggestions using the shortcut key `Alt+\` (for Windows) or `Option+\` (for Mac).
 
-When the plugin provides code suggestions, the suggested content is displayed in gray at the editor's cursor position; you can press the Tab key to accept it, or continue typing to ignore it.
+When the plugin provides code suggestions, the suggested content appears in gray at the editor's cursor position. You can press the Tab key to accept the suggestion or continue typing to ignore it.
 
 ![Code suggestion screenshot](https://resource.zsmarter.cn/appdata/codeshell-vscode/screenshots/docs_completion.png)
 
-### 2. Code Assistance
+### 2. Code Assistance
 
-- Explain/optimize/clean up a code segment
-- Generate comments/unit tests for a code segment
-- Check a code segment for performance/security issues
+- Explain/Optimize/Cleanse a Code Segment
+- Generate Comments/Unit Tests for Code
+- Check Code for Performance/Security Issues
 
-Open the plugin's Q&A interface in the VSCode sidebar, select a piece of code in the editor, and choose the corresponding function from the right-click CodeShell menu; the plugin will provide the corresponding response in the Q&A interface.
+In the VSCode sidebar, open the plugin's Q&A interface. Select a portion of code in the editor, right-click to access the CodeShell menu, and choose the corresponding function. The plugin will provide relevant responses in the Q&A interface.
 
 ![Code assistance screenshot](https://resource.zsmarter.cn/appdata/codeshell-vscode/screenshots/docs_assistants.png)
 
-### 3. Intelligent Q&A
+### 3. Code Q&A
 
-- Support for multi-turn conversations
-- Support for conversation history
-- Multi-turn conversations based on the session history (as context)
-- Edit questions and re-ask
-- Regenerate the answer to any question
-- Interrupt the answer while it is being generated
+- Support for Multi-turn Conversations
+- Maintain Conversation History
+- Engage in Multi-turn Dialogues Based on Previous Conversations
+- Edit Questions and Rephrase Inquiries
+- Request Fresh Responses for Any Question
+- Interrupt During the Answering Process
 
 ![Intelligent Q&A screenshot](https://resource.zsmarter.cn/appdata/codeshell-vscode/screenshots/docs_chat.png)
 
-In the code blocks of the Q&A interface, you can click the copy button to copy the code block, or click the insert button to insert its content at the editor's cursor position.
+Within the Q&A interface's code block, you can click the copy button to copy the code block or use the insert button to insert the code block's content at the editor's cursor location.
 
-## License
+## License
 
 Apache 2.0
 
+## Attribution
+
+- [Illustration Vectors by Vecteezy](https://www.vecteezy.com/free-vector/illustration)
+- [Mustard greens by iconnut from Noun Project (CC BY 3.0)](https://thenounproject.com/browse/icons/term/mustard-greens/)
+
 ## Star History
 
-[![Star History Chart](https://api.star-history.com/svg?repos=WisdomShell/codeshell-vscode&type=Date)](https://star-history.com/#WisdomShell/codeshell-vscode&Date)
+[![Star History Chart](https://api.star-history.com/svg?repos=carusyte/GAI-Choy.git&type=Date)](https://star-history.com/#carusyte/GAI-Choy.git&Date)
