Commit 4cacb93

feat: package request within escaped json (#13)
* reusable terraform-based ci workflow
* support all body parameters of the inference request api and package within json payload in order to properly escape special characters
* update docs to reflect the name change as well as input parameters

Signed-off-by: Rishav Dhar <19497993+rdhar@users.noreply.github.com>
1 parent 8db130e commit 4cacb93
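The core idea of this commit is that interpolating prompt text directly into a request body breaks as soon as the text contains quotes, newlines, or backslashes, whereas serializing the whole body as JSON escapes them automatically. The following is an illustrative Python sketch of that principle, not the action's actual implementation (the `build_payload` helper and its parameters are hypothetical):

```python
import json

def build_payload(model, system_prompt, user_prompt, **params):
    """Assemble an inference-request body as escaped JSON.

    Hypothetical helper: json.dumps escapes the quotes, newlines, and
    backslashes that would corrupt a hand-interpolated request body.
    """
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }
    body.update(params)  # e.g. max_tokens, temperature, top_p
    return json.dumps(body)

payload = build_payload(
    "openai/gpt-4.1-mini",
    "You are a helpful assistant",
    'Tricky input: "quotes", a\nnewline, and a \\backslash',
    max_tokens=100,
)
# The body round-trips cleanly because every special character was escaped.
assert json.loads(payload)["messages"][1]["content"].endswith("\\backslash")
```

Naive string templating of the same user prompt would have produced invalid JSON at the unescaped quote characters.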

File tree: 4 files changed, +277 −88 lines

.github/workflows/ci.yml

Lines changed: 13 additions & 15 deletions
```diff
@@ -25,20 +25,21 @@ jobs:
           persist-credentials: false
           sparse-checkout: action.yml

-      - name: Inference request
+      - name: Setup Terraform
+        uses: hashicorp/setup-terraform@v3
+
+      - name: Run Terraform
+        id: terraform
+        continue-on-error: true
+        run: terraform plan
+
+      - name: AI inference request
         id: prompt
         uses: ./
         with:
-          payload: |
-            model: openai/gpt-4.1-mini
-            messages:
-              - role: system
-                content: You are a helpful assistant
-              - role: user
-                content: What is the capital of France
-            max_tokens: 100
-            temperature: 0.9
-            top_p: 0.9
+          system-prompt: You are a helpful DevOps assistant and expert at debugging Terraform errors.
+          user-prompt: Troubleshoot the following Terraform output; ${{ steps.terraform.outputs.stderr }}
+          show-payload: true

       - name: Echo outputs
         run: |
@@ -49,7 +50,4 @@ jobs:
           echo "${{ steps.prompt.outputs.response-file }}"

           echo "response-file contents:"
-          cat "${{ steps.prompt.outputs.response-file }}" | jq
-
-          echo "payload:"
-          echo "${{ steps.prompt.outputs.payload }}"
+          cat "${{ steps.prompt.outputs.response-file }}"
```
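The `Echo outputs` step above prints the raw response file. A minimal sketch of extracting the assistant text from such a file, assuming the response follows the familiar chat-completions shape (the exact field layout here is an assumption, and `raw_response` is hypothetical sample data):

```python
import json

# Hypothetical response in the chat-completions shape; the exact
# fields returned by the inference endpoint are an assumption here.
raw_response = '''{
  "choices": [
    {"message": {"role": "assistant", "content": "Paris"}}
  ]
}'''

def extract_content(raw):
    """Pull the assistant text out of a raw inference response."""
    data = json.loads(raw)
    return data["choices"][0]["message"]["content"]

print(extract_content(raw_response))  # → Paris
```

This is roughly what the action's `response` output provides already; reading `response-file` directly matters when you need fields beyond the message content, such as token usage.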

README.md

Lines changed: 96 additions & 38 deletions
````diff
@@ -1,19 +1,21 @@
-[![GitHub license](https://img.shields.io/github/license/op5dev/ai-inference-request?logo=apache&label=License)](LICENSE "Apache License 2.0.")
-[![GitHub release tag](https://img.shields.io/github/v/release/op5dev/ai-inference-request?logo=semanticrelease&label=Release)](https://github.com/op5dev/ai-inference-request/releases "View all releases.")
+[![GitHub license](https://img.shields.io/github/license/op5dev/prompt-ai?logo=apache&label=License)](LICENSE "Apache License 2.0.")
+[![GitHub release tag](https://img.shields.io/github/v/release/op5dev/prompt-ai?logo=semanticrelease&label=Release)](https://github.com/op5dev/prompt-ai/releases "View all releases.")
 *
-[![GitHub repository stargazers](https://img.shields.io/github/stars/op5dev/ai-inference-request)](https://github.com/op5dev/ai-inference-request "Become a stargazer.")
+[![GitHub repository stargazers](https://img.shields.io/github/stars/op5dev/prompt-ai)](https://github.com/op5dev/prompt-ai "Become a stargazer.")

-# AI Inference Request via GitHub Action
+# Prompt GitHub AI Models via GitHub Action

 > [!TIP]
-> [AI inference request](https://docs.github.com/en/rest/models/inference?apiVersion=2022-11-28#run-an-inference-request "GitHub API documentation.") GitHub Models via this [GitHub Action](https://github.com/marketplace/actions/ai-inference-request-via-github-action "GitHub Actions marketplace.").
+> Prompt GitHub AI Models using [inference request](https://docs.github.com/en/rest/models/inference?apiVersion=2022-11-28#run-an-inference-request "GitHub API documentation.") via GitHub Action API.

 </br>

 ## Usage Examples

 [Compare available AI models](https://docs.github.com/en/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task "Comparison of AI models for GitHub.") to choose the best one for your use-case.

+### Summarize GitHub Issues
+
 ```yml
 on:
   issues:
@@ -30,18 +32,13 @@ jobs:
     steps:
       - name: Summarize issue
         id: prompt
-        uses: op5dev/ai-inference-request@v2
+        uses: op5dev/prompt-ai@v2
         with:
-          payload: |
-            model: openai/gpt-4.1-mini
-            messages:
-              - role: system
-                content: You are a helpful assistant running within GitHub CI.
-              - role: user
-                content: Concisely summarize this GitHub issue titled ${{ github.event.issue.title }}: ${{ github.event.issue.body }}
-            max_tokens: 100
-            temperature: 0.9
-            top_p: 0.9
+          user-prompt: |
+            Concisely summarize the GitHub issue
+            with title '${{ github.event.issue.title }}'
+            and body: ${{ github.event.issue.body }}
+          max_tokens: 250

       - name: Comment summary
         run: gh issue comment $NUMBER --body "$SUMMARY"
@@ -51,31 +48,92 @@ jobs:
           SUMMARY: ${{ steps.prompt.outputs.response }}
 ```

+### Troubleshoot Terraform Deployments
+
+```yml
+on:
+  pull_request:
+  push:
+    branches: main
+
+jobs:
+  provision:
+    runs-on: ubuntu-latest
+
+    permissions:
+      actions: read
+      checks: write
+      contents: read
+      pull-requests: write
+      models: read
+
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v4
+
+      - name: Setup Terraform
+        uses: hashicorp/setup-terraform@v3
+
+      - name: Provision Terraform
+        id: provision
+        uses: op5dev/tf-via-pr@v13
+        with:
+          working-directory: env/dev
+          command: ${{ github.event_name == 'push' && 'apply' || 'plan' }}
+
+      - name: Troubleshoot Terraform
+        if: failure()
+        uses: op5dev/prompt-ai@v2
+        with:
+          model: openai/gpt-4.1-mini
+          system-prompt: You are a helpful DevOps assistant and expert at debugging Terraform errors.
+          user-prompt: Troubleshoot the following Terraform output; ${{ steps.provision.outputs.result }}
+          max-tokens: 500
+          temperature: 0.7
+          top_p: 0.9
+```
+
 </br>

 ## Inputs

-Either `payload` or `payload-file` is required with at least `model` and `messages` parameters, per [documentation](https://docs.github.com/en/rest/models/inference?apiVersion=2022-11-28#run-an-inference-request "GitHub API documentation.").
-
-| Type   | Name                 | Description |
-| ------ | -------------------- | ----------- |
-| Data   | `payload`            | Body parameters of the inference request in YAML format.</br>Example: `model…` |
-| Data   | `payload-file`       | Path to a file containing the body parameters of the inference request.</br>Example: `./payload.{json,yml}` |
-| Config | `show-payload`       | Whether to show the payload in the logs.</br>Default: `true` |
-| Config | `show-response`      | Whether to show the response content in the logs.</br>Default: `true` |
-| Admin  | `github-api-version` | GitHub API version.</br>Default: `2022-11-28` |
-| Admin  | `github-token`       | GitHub token.</br>Default: `github.token` |
-| Admin  | `org`                | Organization for request attribution.</br>Example: `github.repository_owner` |
+The only required input is `user-prompt`, while every parameter can be tuned per [documentation](https://docs.github.com/en/rest/models/inference?apiVersion=2022-11-28#run-an-inference-request "GitHub API documentation.").
+
+| Type       | Name                   | Description |
+| ---------- | ---------------------- | ----------- |
+| Common     | `model`                | Model ID to use for the inference request.</br>(e.g., `openai/gpt-4.1-mini`) |
+| Common     | `system-prompt`        | Prompt associated with the `system` role.</br>(e.g., `You are a helpful software engineering assistant`) |
+| Common     | `user-prompt`          | Prompt associated with the `user` role.</br>(e.g., `List best practices for workflows with GitHub Actions`) |
+| Common     | `max-tokens`           | The maximum number of tokens to generate in the completion. The token count of your prompt plus `max-tokens` cannot exceed the model's context length.</br>(e.g., `100`) |
+| Common     | `temperature`          | The sampling temperature to use that controls the apparent creativity of generated completions. Higher values will make output more random while lower values will make results more focused and deterministic.</br>(e.g., range is `[0, 1]`) |
+| Common     | `top-p`                | An alternative to sampling with temperature called nucleus sampling. This value causes the model to consider the results of tokens with the provided probability mass.</br>(e.g., range is `[0, 1]`) |
+| Additional | `frequency-penalty`    | A value that influences the probability of generated tokens appearing based on their cumulative frequency in generated text.</br>(e.g., range is `[-2, 2]`) |
+| Additional | `modalities`           | The modalities that the model is allowed to use for the chat completions response.</br>(e.g., from `text` and `audio`) |
+| Additional | `org`                  | Organization to which the request is to be attributed.</br>(e.g., `github.repository_owner`) |
+| Additional | `presence-penalty`     | A value that influences the probability of generated tokens appearing based on their existing presence in generated text.</br>(e.g., range is `[-2, 2]`) |
+| Additional | `seed`                 | If specified, the system will make a best effort to sample deterministically such that repeated requests with the same seed and parameters should return the same result.</br>(e.g., `123456789`) |
+| Additional | `stop`                 | A collection of textual sequences that will end completion generation.</br>(e.g., `["\n\n", "END"]`) |
+| Additional | `stream`               | A value indicating whether chat completions should be streamed for this request.</br>(e.g., `false`) |
+| Additional | `stream-include-usage` | Whether to include usage information in the response.</br>(e.g., `false`) |
+| Additional | `tool-choice`          | If specified, the model will configure which of the provided tools it can use for the chat completions response.</br>(e.g., 'auto', 'required', or 'none') |
+| Payload    | `payload`              | Body parameters of the inference request in JSON format.</br>(e.g., `{"model"…`) |
+| Payload    | `payload-file`         | Path to a JSON file containing the body parameters of the inference request.</br>(e.g., `./payload.json`) |
+| Payload    | `show-payload`         | Whether to show the body parameters in the workflow log.</br>(e.g., `false`) |
+| Payload    | `show-response`        | Whether to show the response content in the workflow log.</br>(e.g., `true`) |
+| GitHub     | `github-api-version`   | GitHub API version.</br>(e.g., `2022-11-28`) |
+| GitHub     | `github-token`         | GitHub token for authorization.</br>(e.g., `github.token`) |

 </br>

 ## Outputs

-| Name            | Description                                              |
-| --------------- | -------------------------------------------------------- |
-| `response`      | Response content from the inference request.             |
-| `response-file` | File path containing the complete, raw response.         |
-| `payload`       | Body parameters of the inference request in JSON format. |
+Due to GitHub's API limitations, the `response` content is truncated to 262,144 (2^18) characters so the complete, raw response is saved to `response-file`.
+
+| Name            | Description                                                     |
+| --------------- | --------------------------------------------------------------- |
+| `response`      | Response content from the inference request.                    |
+| `response-file` | File path containing the complete, raw response in JSON format. |
+| `payload`       | Body parameters of the inference request in JSON format.        |

 </br>

@@ -91,21 +149,21 @@ View [security policy and reporting instructions](SECURITY.md).

 ## Changelog

-View [all notable changes](https://github.com/op5dev/ai-inference-request/releases "Releases.") to this project in [Keep a Changelog](https://keepachangelog.com "Keep a Changelog.") format, which adheres to [Semantic Versioning](https://semver.org "Semantic Versioning.").
+View [all notable changes](https://github.com/op5dev/prompt-ai/releases "Releases.") to this project in [Keep a Changelog](https://keepachangelog.com "Keep a Changelog.") format, which adheres to [Semantic Versioning](https://semver.org "Semantic Versioning.").

 > [!TIP]
 >
 > All forms of **contribution are very welcome** and deeply appreciated for fostering open-source projects.
 >
-> - [Create a PR](https://github.com/op5dev/ai-inference-request/pulls "Create a pull request.") to contribute changes you'd like to see.
-> - [Raise an issue](https://github.com/op5dev/ai-inference-request/issues "Raise an issue.") to propose changes or report unexpected behavior.
-> - [Open a discussion](https://github.com/op5dev/ai-inference-request/discussions "Open a discussion.") to discuss broader topics or questions.
-> - [Become a stargazer](https://github.com/op5dev/ai-inference-request/stargazers "Become a stargazer.") if you find this project useful.
+> - [Create a PR](https://github.com/op5dev/prompt-ai/pulls "Create a pull request.") to contribute changes you'd like to see.
+> - [Raise an issue](https://github.com/op5dev/prompt-ai/issues "Raise an issue.") to propose changes or report unexpected behavior.
+> - [Open a discussion](https://github.com/op5dev/prompt-ai/discussions "Open a discussion.") to discuss broader topics or questions.
+> - [Become a stargazer](https://github.com/op5dev/prompt-ai/stargazers "Become a stargazer.") if you find this project useful.

 </br>

 ## License

 - This project is licensed under the **permissive** [Apache License 2.0](LICENSE "Apache License 2.0.").
-- All works herein are my own, shared of my own volition, and [contributors](https://github.com/op5dev/ai-inference-request/graphs/contributors "Contributors.").
+- All works herein are my own, shared of my own volition, and [contributors](https://github.com/op5dev/prompt-ai/graphs/contributors "Contributors.").
 - Copyright 2016-present [Rishav Dhar](https://rdhar.dev "Rishav Dhar's profile.") — All wrongs reserved.
````
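The README's Outputs note says the `response` content is truncated to 262,144 (2^18) characters because of GitHub's step-output limit, with the full text preserved in `response-file`. A hedged sketch of that truncation logic, assuming a hypothetical `safe_output` helper rather than the action's real code:

```python
LIMIT = 2 ** 18  # 262,144 characters, the per-output ceiling cited in the README

def safe_output(content, limit=LIMIT):
    """Truncate content destined for a step output (hypothetical helper).

    When this trips, the complete text should be read from the
    response-file output instead of the truncated response output.
    """
    return content[:limit]

assert LIMIT == 262_144
long_text = "x" * (LIMIT + 10)
assert len(safe_output(long_text)) == LIMIT  # over-long text is clipped
```

Short responses pass through unchanged, so consumers only need the file-based fallback for unusually large completions.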

SECURITY.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -17,4 +17,4 @@ Integrating security in your CI/CD pipeline is critical to practicing DevSecOps.

 ## Reporting a Vulnerability

-You must never report security related issues, vulnerabilities or bugs including sensitive information to the issue tracker, or elsewhere in public. Instead, sensitive bugs must be sent by email to <security@OP5.dev> or reported via [Security Advisory](https://github.com/op5dev/ai-inference-request/security/advisories/new "Create a new security advisory.").
+You must never report security related issues, vulnerabilities or bugs including sensitive information to the issue tracker, or elsewhere in public. Instead, sensitive bugs must be sent by email to <security@OP5.dev> or reported via [Security Advisory](https://github.com/op5dev/prompt-ai/security/advisories/new "Create a new security advisory.").
```
