* reusable terraform-based ci workflow
* support all body parameters of the inference request api and package within json payload in order to properly escape special characters
* update docs to reflect the name change as well as input parameters

Signed-off-by: Rishav Dhar <19497993+rdhar@users.noreply.github.com>
[](https://github.com/op5dev/prompt-ai/releases "View all releases.")
[](https://github.com/op5dev/prompt-ai "Become a stargazer.")

# Prompt GitHub AI Models via GitHub Action
> [!TIP]
>
> Prompt GitHub AI Models using the [inference request](https://docs.github.com/en/rest/models/inference?apiVersion=2022-11-28#run-an-inference-request "GitHub API documentation.") API via this GitHub Action.
</br>
## Usage Examples
[Compare available AI models](https://docs.github.com/en/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task"Comparison of AI models for GitHub.") to choose the best one for your use-case.
### Summarize GitHub Issues
```yml
on:
  issues:
    # …

jobs:
  # … (job definition elided)
    steps:
      - name: Summarize issue
        id: prompt
        uses: op5dev/prompt-ai@v2
        with:
          # …
```

### Troubleshoot Terraform Errors

```yml
      - name: Troubleshoot Terraform
        id: prompt
        uses: op5dev/prompt-ai@v2
        with:
          system-prompt: You are a helpful DevOps assistant and expert at debugging Terraform errors.
          user-prompt: Troubleshoot the following Terraform output; ${{ steps.provision.outputs.result }}
          max-tokens: 500
          temperature: 0.7
          top-p: 0.9
```
</br>
## Inputs
The only required input is `user-prompt`, while every parameter can be tuned per [documentation](https://docs.github.com/en/rest/models/inference?apiVersion=2022-11-28#run-an-inference-request "GitHub API documentation.").
| Type | Name | Description |
| --- | --- | --- |
| Common | `model` | Model ID to use for the inference request.</br>(e.g., `openai/gpt-4.1-mini`) |
| Common | `system-prompt` | Prompt associated with the `system` role.</br>(e.g., `You are a helpful software engineering assistant`) |
| Common | `user-prompt` | Prompt associated with the `user` role.</br>(e.g., `List best practices for workflows with GitHub Actions`) |
| Common | `max-tokens` | The maximum number of tokens to generate in the completion. The token count of your prompt plus `max-tokens` cannot exceed the model's context length.</br>(e.g., `100`) |
| Common | `temperature` | The sampling temperature, which controls the apparent creativity of generated completions. Higher values make output more random, while lower values make results more focused and deterministic.</br>(e.g., range is `[0, 1]`) |
| Common | `top-p` | An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens within the provided probability mass.</br>(e.g., range is `[0, 1]`) |
| Additional | `frequency-penalty` | A value that influences the probability of generated tokens appearing based on their cumulative frequency in generated text.</br>(e.g., range is `[-2, 2]`) |
| Additional | `modalities` | The modalities that the model is allowed to use for the chat completions response.</br>(e.g., from `text` and `audio`) |
| Additional | `org` | Organization to which the request is to be attributed.</br>(e.g., `github.repository_owner`) |
| Additional | `presence-penalty` | A value that influences the probability of generated tokens appearing based on their existing presence in generated text.</br>(e.g., range is `[-2, 2]`) |
| Additional | `seed` | If specified, the system makes a best effort to sample deterministically, such that repeated requests with the same seed and parameters return the same result.</br>(e.g., `123456789`) |
| Additional | `stop` | A collection of textual sequences that will end completion generation.</br>(e.g., `["\n\n", "END"]`) |
| Additional | `stream` | A value indicating whether chat completions should be streamed for this request.</br>(e.g., `false`) |
| Additional | `stream-include-usage` | Whether to include usage information in the response.</br>(e.g., `false`) |
| Additional | `tool-choice` | If specified, the model will configure which of the provided tools it can use for the chat completions response.</br>(e.g., `auto`, `required`, or `none`) |
| Payload | `payload` | Body parameters of the inference request in JSON format.</br>(e.g., `{"model"…`) |
| Payload | `payload-file` | Path to a JSON file containing the body parameters of the inference request.</br>(e.g., `./payload.json`) |
| Payload | `show-payload` | Whether to show the body parameters in the workflow log.</br>(e.g., `false`) |
| Payload | `show-response` | Whether to show the response content in the workflow log.</br>(e.g., `true`) |
| GitHub | `github-api-version` | GitHub API version.</br>(e.g., `2022-11-28`) |
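As an alternative to the individual inputs above, the body parameters can be passed whole via the `payload` input; a minimal sketch, assuming the same chat-completions parameters shown in the table (step name is illustrative):

```yml
      - name: Prompt via raw payload
        id: prompt
        uses: op5dev/prompt-ai@v2
        with:
          # Body parameters as JSON, so special characters are escaped properly.
          payload: |
            {
              "model": "openai/gpt-4.1-mini",
              "messages": [
                { "role": "system", "content": "You are a helpful assistant running within GitHub CI." },
                { "role": "user", "content": "List best practices for workflows with GitHub Actions." }
              ]
            }
```

Equivalently, the same JSON can be stored in a file and referenced with `payload-file` (e.g., `./payload.json`).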
## Outputs

Due to GitHub's API limitations, the `response` content is truncated to 262,144 (2^18) characters, so the complete, raw response is saved to `response-file`.

| Name | Description |
| --- | --- |
| `response` | Response content from the inference request. |
| `response-file` | File path containing the complete, raw response in JSON format. |
| `payload` | Body parameters of the inference request in JSON format. |
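Downstream steps can consume these outputs by step `id`; a sketch, assuming the `id: prompt` step from the examples above and a chat-completions-style JSON body in `response-file` (the `jq` path is an assumption about that shape):

```yml
      - name: Use response
        env:
          RESPONSE: ${{ steps.prompt.outputs.response }}
          RESPONSE_FILE: ${{ steps.prompt.outputs.response-file }}
        run: |
          # Truncated response content, directly usable in shell.
          echo "$RESPONSE"
          # Complete, raw response; extract the message content with jq.
          jq -r '.choices[0].message.content' "$RESPONSE_FILE"
```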
</br>
View [security policy and reporting instructions](SECURITY.md).
## Changelog
View [all notable changes](https://github.com/op5dev/prompt-ai/releases "Releases.") to this project in [Keep a Changelog](https://keepachangelog.com "Keep a Changelog.") format, which adheres to [Semantic Versioning](https://semver.org "Semantic Versioning.").
> [!TIP]
>
> All forms of **contribution are very welcome** and deeply appreciated for fostering open-source projects.
>
> - [Create a PR](https://github.com/op5dev/prompt-ai/pulls "Create a pull request.") to contribute changes you'd like to see.
> - [Raise an issue](https://github.com/op5dev/prompt-ai/issues "Raise an issue.") to propose changes or report unexpected behavior.
> - [Open a discussion](https://github.com/op5dev/prompt-ai/discussions "Open a discussion.") to discuss broader topics or questions.
> - [Become a stargazer](https://github.com/op5dev/prompt-ai/stargazers "Become a stargazer.") if you find this project useful.
</br>
## License
- This project is licensed under the **permissive** [Apache License 2.0](LICENSE "Apache License 2.0.").
- All works herein are my own, shared of my own volition, and [contributors](https://github.com/op5dev/prompt-ai/graphs/contributors "Contributors.").
**SECURITY.md** (1 addition, 1 deletion)
Integrating security in your CI/CD pipeline is critical to practicing DevSecOps.
## Reporting a Vulnerability
You must never report security-related issues, vulnerabilities, or bugs that include sensitive information to the issue tracker or elsewhere in public. Instead, sensitive bugs must be sent by email to <security@OP5.dev> or reported via [Security Advisory](https://github.com/op5dev/prompt-ai/security/advisories/new "Create a new security advisory.").