Commit 5735467

Author: Aliaksei Bialiauski
Merge pull request #75 from tracehubpm/stabilization
feat(#73): huge code duplication removed, objects refactored, packages
2 parents 6178928 + d113943 · commit 5735467


44 files changed: +315 / -440 lines

.pdd

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,6 +1,6 @@
 --source=.
 --verbose
---exclude src/pdd-prompt.ts
+--exclude src/prompts/pdd-prompt.ts
 --exclude target/**
 --exclude xargs/**
 --rule min-words:20
```

README.md

Lines changed: 22 additions & 6 deletions

````diff
@@ -134,10 +134,26 @@ format using [JSON Packing Method](#json-packing-method):
 #### Cap Top 3
 
 Some analysis results contains many problems.
+Consider this example:
+
+```json
+{
+  "size": 6,
+  "problems": [
+    "1. Lack of a clear description: The report lacks a clear and concise description of the problem. It simply states there are typos but does not specify what the typos are or how they impact the system.",
+    "2. Missing steps to reproduce: There are no steps provided to reproduce the issue. This makes it difficult for developers to identify if they have fixed the issue correctly.",
+    "3. No severity level: The severity level of the issue is not stated. This is important information for developers to prioritize how soon the issue should be resolved.",
+    "4. Lack of environment details: The report does not mention which environment this issue occurs in (e.g., which version of the software, which operating system).",
+    "5. Use of shorthand: The term 'take a look here' is used, which is not clear or professional. It is best to avoid using shorthand or colloquial language in formal documentation.",
+    "6. Incomplete code block: The code block is not complete (it is cut off after the relevant lines). This makes it difficult for developers to understand the context of the issue."
+  ]
+}
+```
+
 In order to make programmers not ignore the feedback reports by this action,
 we **minimize** amount of problems to just 3 or less.
 LLM at this stage picks the most important problems from previous analysis
-and adds them into new response:
+and adds them into new response:
 
 ```json
 {
@@ -223,16 +239,16 @@ them into JSON object:
 }
 ```
 
+In the [UML](https://en.wikipedia.org/wiki/Unified_Modeling_Language) notation, the full process looks like this:
+
+![method.svg](/doc/method.svg)
+
 #### JSON Packing Method
 
 LLMs often produce suboptimal results when directly prompted to output in JSON format.
 That's why we let LLM "think" in English and ask to summarize JSON only at the final step of the operation.
 At this stage we pack previous LLM response to JSON object format.
 
-In the [UML](https://en.wikipedia.org/wiki/Unified_Modeling_Language) notation, the process internals look like this:
-
-![method.svg](/doc/method.svg)
-
 ### Puzzle (PDD) Analysis
 
 This action supports analysis not only for issues created manually, but also for puzzles, a.k.a `todo` in your code.
@@ -247,7 +263,7 @@ Issue is treated as puzzle if it satisfies the following regex:
 The puzzle `(.+)` from #(\d+) has to be resolved:.+
 ```
 
-Then we are parsing the issue to find a tree path where puzzle is hidden.
+Then we parse the issue to find a tree path where puzzle is hidden.
 
 This one
 ```text
````
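The puzzle regex quoted in the README hunk above is easy to try on its own. Below is a minimal sketch of matching an issue body against it; the sample body text and the puzzle id `73-abcdef12` are hypothetical illustrations, not taken from the repository:

```typescript
// Sketch: checking whether an issue body looks like a PDD puzzle,
// using the regex from the README diff above. Sample text is invented.
const puzzle = /The puzzle `(.+)` from #(\d+) has to be resolved:.+/;

const body =
  "The puzzle `73-abcdef12` from #73 has to be resolved: refactor the prompts.";

const match = body.match(puzzle);
if (match) {
  // match[1] is the puzzle id, match[2] is the originating issue number
  console.log(`puzzle ${match[1]} came from issue #${match[2]}`);
}
```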

src/chat-gpt.ts

Lines changed: 9 additions & 8 deletions

```diff
@@ -32,38 +32,39 @@ export class ChatGpt implements Model {
    * Ctor.
    * @param open Open AI
    * @param model Model name
-   * @param system System prompt
-   * @param prompt User prompt
    * @param temperature Temperature
    * @param max Max new tokens
    */
   constructor(
     private readonly open: OpenAI,
     private readonly model: string,
-    private readonly system: Scalar<string>,
-    private readonly prompt: Scalar<string>,
     private readonly temperature: number,
     private readonly max: number
   ) {
     this.open = open;
     this.model = model;
   }
 
-  async analyze() {
+  async analyze(system: Scalar<string>, user: Scalar<string>) {
     const response = await this.open.chat.completions.create({
       model: this.model,
-      temperature: 0.5,
+      temperature: this.temperature,
+      max_tokens: this.max,
       messages: [
         {
           role: "system",
-          content: this.system.value()
+          content: system.value()
         },
         {
           role: "user",
-          content: this.prompt.value()
+          content: user.value()
         }
       ]
     });
     return response.choices[0].message.content?.trim();
   }
+
+  name(): string {
+    return this.model;
+  }
 }
```
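After this change the prompts travel with each `analyze()` call instead of the constructor, which presumably lets a single `ChatGpt` instance be reused across analysis stages. A minimal usage sketch, assuming `Scalar<string>` is the repo's lazy string wrapper exposing `value()`; the model name, token budget, and prompt texts are placeholders:

```typescript
// Hypothetical usage of the refactored ChatGpt; prompt texts are placeholders.
import OpenAI from "openai";
import {ChatGpt} from "./chat-gpt";

// Assumed shape of the repo's Scalar<T>: a lazy value exposing value().
const text = (value: string) => ({value: () => value});

const model = new ChatGpt(
  new OpenAI({apiKey: process.env.OPENAI_API_KEY}),
  "gpt-4", // placeholder model name
  0.5,     // temperature, now actually forwarded to the API call
  512      // max new tokens, likewise forwarded as max_tokens
);

// The same instance can now run different stages with different prompts.
const report = await model.analyze(
  text("You are a quality analyst reviewing bug reports."),
  text("Review this bug report: ...")
);
console.log(model.name(), report);
```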

src/context-expert.ts

Lines changed: 0 additions & 33 deletions
This file was deleted.

src/context-prompt.ts

Lines changed: 0 additions & 38 deletions
This file was deleted.

src/deep-infra.ts

Lines changed: 7 additions & 7 deletions

```diff
@@ -31,22 +31,18 @@ export class DeepInfra implements Model {
    * Ctor.
    * @param token Token
    * @param model Model
-   * @param system System prompt
-   * @param prompt User prompt
    * @param temperature Temperature
    * @param max Max new tokens
    */
   constructor(
     private readonly token: string,
     private readonly model: string,
-    private readonly system: Scalar<string>,
-    private readonly prompt: Scalar<string>,
     private readonly temperature: number,
     private readonly max: number
   ) {
   }
 
-  async analyze() {
+  async analyze(system: Scalar<string>, prompt: Scalar<string>) {
     const response = await fetch(
       'https://api.deepinfra.com/v1/openai/chat/completions', {
         method: 'POST',
@@ -57,11 +53,11 @@ export class DeepInfra implements Model {
         messages: [
           {
             role: "system",
-            content: this.system.value()
+            content: system.value()
           },
           {
             role: "user",
-            content: this.prompt.value()
+            content: prompt.value()
           }
         ],
       }),
@@ -76,4 +72,8 @@ export class DeepInfra implements Model {
     );
     return answer.choices[0].message.content;
   }
+
+  name(): string {
+    return this.model;
+  }
 }
```
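Both `ChatGpt` and `DeepInfra` now implement the same `analyze(system, user)` / `name()` pair, which is what removes the duplicated prompt plumbing from the two classes. The shared `Model` interface itself is not shown in this commit, so the following is only an assumed sketch of its shape, inferred from the two diffs above:

```typescript
// Assumed shape of the Model contract implied by the diffs above;
// the real interface in the repository may differ in naming and docs.
interface Scalar<T> {
  value(): T;
}

interface Model {
  // Runs one analysis round with the given system and user prompts.
  analyze(system: Scalar<string>, user: Scalar<string>): Promise<string | undefined>;

  // Human-readable model identifier, e.g. for logging in feedback reports.
  name(): string;
}
```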

src/feedback.ts

Lines changed: 1 addition & 1 deletion

```diff
@@ -21,7 +21,7 @@
  * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
  * SOFTWARE.
  */
-import {Comment} from "./comment";
+import {Comment} from "./github/comment";
 import {Covered} from "./covered";
 import {WithSummary} from "./with-summary";
 import * as core from "@actions/core";
```
2 files renamed without changes.
