feat(lark): Add /ask command using OpenAI #251
base: main
Conversation
Integrates with the OpenAI API to allow users to ask questions directly within Lark chat. The command handles basic argument parsing, help requests, and sends the query to the configured AI model.
Skipping CI for Draft Pull Request.
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Hello @wuhuizuo, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
This pull request introduces the `/ask` command to the Lark chat application, enabling users to query OpenAI's language models directly within Lark. It adds the necessary dependencies, defines the command handler, and integrates with the OpenAI API to process user questions and return AI-generated responses. The implementation includes argument parsing, help text, and a placeholder for future integration with internal tools.
Highlights
- New Feature: Adds the `/ask` command to Lark, allowing users to ask questions to an AI assistant.
- OpenAI Integration: Integrates with the OpenAI API to process questions and generate responses.
- Command Handling: Implements argument parsing and help text for the `/ask` command.
- Context Passing: Sets up the context with OpenAI configuration, model, and system prompt.
Changelog
- chatops-lark/go.mod
  - Added `github.com/sashabaranov/go-openai` as a dependency.
- chatops-lark/go.sum
  - Updated checksums to include `github.com/sashabaranov/go-openai`.
- chatops-lark/pkg/events/handler/ask.go
  - Created a new file `ask.go` to handle the `/ask` command logic.
  - Implemented the `runCommandAsk` function to process user questions and interact with the OpenAI API.
  - Added help text and detailed help text for the `/ask` command.
  - Implemented the `processAskRequest` function to send the question to OpenAI and return the response.
  - Added logging for debugging purposes.
- chatops-lark/pkg/events/handler/root.go
  - Imported the `github.com/sashabaranov/go-openai` package.
  - Added constants for OpenAI configuration keys.
  - Added the `/ask` command to the `commandConfigs` map.
  - Implemented the `SetupContext` function for the `/ask` command to configure the OpenAI client.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
A bot in the chat,
Answers questions just like that.
With AI's might,
It sheds some light,
Making knowledge easy to grab.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Code Review
This pull request introduces the `/ask` command, enabling users to query OpenAI's API directly within Lark chat. The implementation includes argument parsing, help requests, and interaction with the configured AI model. Overall, the changes are well-structured and add significant functionality to the bot. However, there are a few areas that could be improved for better error handling, context management, and code clarity.
Summary of Findings
- Error Handling in OpenAI API Call: The error handling for the OpenAI API call could be improved by providing more context in the error message. This would make it easier to debug issues when the API call fails.
- Context Management for OpenAI Configuration: The way OpenAI configuration is handled in the `SetupContext` function could be made more robust by adding error checking and validation for the configuration values.
- TODO in `processAskRequest`: The `processAskRequest` function contains a `TODO` comment indicating that the actual interaction with the AI model and MCP tools needs to be implemented. This should be addressed before merging the pull request.
Merge Readiness
The pull request introduces a valuable feature by integrating OpenAI with Lark. However, there are some improvements that should be addressed before merging, particularly regarding error handling and context management. While I am unable to directly approve this pull request, I recommend addressing the identified issues to ensure the stability and reliability of the new feature. Ensure that others review and approve this code before merging.
// processAskRequest is a placeholder for the core logic interacting with the LLM and tools.
// TODO: Implement the actual interaction with the AI model and MCP tools.
This is an important `TODO` that needs to be addressed before merging. Ensure that the actual interaction with the AI model and MCP tools is implemented.
// TODO: Implement the actual interaction with the AI model and MCP tools.
// This implementation should include parsing the question, querying MCP tools,
// formatting a prompt, sending the prompt to the LLM, receiving the response,
// and formatting the response for Lark.
if err != nil {
    return "", fmt.Errorf("failed to process ask request: %w", err)
log.Err(err).Msg("failed to create chat completion")
return "", fmt.Errorf("failed to create chat completion: %w", err)
case map[string]any:
    openaiCfg = openai.DefaultConfig(v["api_key"].(string))
    openaiCfg.BaseURL = v["base_url"].(string)
    openaiCfg.APIType = openai.APIType(v["api_type"].(string))
    openaiCfg.APIVersion = v["api_version"].(string)
    openaiCfg.Engine = v["engine"].(string)
It would be beneficial to add error checking after each type assertion to ensure the values are of the expected type. This can prevent unexpected panics if the configuration is not set up correctly.
if apiKey, ok := v["api_key"].(string); ok {
openaiCfg = openai.DefaultConfig(apiKey)
} else {
log.Error().Msg("api_key not found or not a string")
return ctx // or return an error, depending on desired behavior
}
if baseURL, ok := v["base_url"].(string); ok {
openaiCfg.BaseURL = baseURL
} else {
log.Error().Msg("base_url not found or not a string")
}
if apiType, ok := v["api_type"].(string); ok {
openaiCfg.APIType = openai.APIType(apiType)
} else {
log.Error().Msg("api_type not found or not a string")
}
if apiVersion, ok := v["api_version"].(string); ok {
openaiCfg.APIVersion = apiVersion
} else {
log.Error().Msg("api_version not found or not a string")
}
if engine, ok := v["engine"].(string); ok {
openaiCfg.Engine = engine
} else {
log.Error().Msg("engine not found or not a string")
}
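One way to cut the repetition in that suggestion is a small lookup helper; `requireString` is a hypothetical name, and whether the caller logs or propagates the error is a design choice:

```go
package main

import "fmt"

// requireString fetches a key from a generic config map and verifies it
// is a string, turning a missing or mistyped key into an error instead
// of a runtime panic.
func requireString(m map[string]any, key string) (string, error) {
	v, ok := m[key]
	if !ok {
		return "", fmt.Errorf("config key %q not found", key)
	}
	s, ok := v.(string)
	if !ok {
		return "", fmt.Errorf("config key %q is %T, want string", key, v)
	}
	return s, nil
}

func main() {
	cfg := map[string]any{"api_key": "sk-test", "api_version": 2}
	if key, err := requireString(cfg, "api_key"); err == nil {
		fmt.Println("api_key:", key)
	}
	if _, err := requireString(cfg, "api_version"); err != nil {
		fmt.Println(err) // mistyped value reported, not panicked on
	}
}
```

With such a helper, `SetupContext` could validate all five keys in a short loop and fail fast on the first bad one.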
newCtx = context.WithValue(newCtx, ctxKeyOpenAIModel, config["openai.model"])
newCtx = context.WithValue(newCtx, ctxKeyOpenAISystemPrompt, config["openai.system_prompt"])
Consider adding error checking to ensure that `config["openai.model"]` and `config["openai.system_prompt"]` exist and are of the expected type (string).
model, ok := config["openai.model"].(string)
if !ok {
log.Error().Msg("openai.model not found or not a string")
return ctx // or return an error
}
systemPrompt, ok := config["openai.system_prompt"].(string)
if !ok {
log.Error().Msg("openai.system_prompt not found or not a string")
return ctx // or return an error
}
newCtx := context.WithValue(ctx, ctxKeyOpenAIConfig, openaiCfg)
newCtx = context.WithValue(newCtx, ctxKeyOpenAIModel, model)
newCtx = context.WithValue(newCtx, ctxKeyOpenAISystemPrompt, systemPrompt)
return newCtx
@gemini-code-assist[bot]: changing LGTM is restricted to collaborators. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Integrate with the Model Context Protocol (MCP) via the mcp-go client to list available tools. The list is formatted as YAML. Adds required dependencies including mcp-go and testify.
- Added new configuration options for LLM and MCP tools in README.md
- Updated go.mod and go.sum with new dependencies
- Implemented LLM and MCP tool interaction in ask.go
- Added new file ask_mcp.go for MCP tool handling
- Refactored context setup functions for commands
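The YAML formatting of the tool list could be sketched without assuming mcp-go's exact API; `toolInfo` and `formatToolsYAML` below are hypothetical stand-ins, and a real `ask_mcp.go` would likely use a YAML library rather than hand-formatting:

```go
package main

import (
	"fmt"
	"strings"
)

// toolInfo is a hypothetical, minimal view of an MCP tool; the real
// type would come from the mcp-go client.
type toolInfo struct {
	Name        string
	Description string
}

// formatToolsYAML renders the tool list as a YAML sequence by hand,
// keeping this sketch dependency-free.
func formatToolsYAML(tools []toolInfo) string {
	var b strings.Builder
	b.WriteString("tools:\n")
	for _, t := range tools {
		fmt.Fprintf(&b, "  - name: %s\n    description: %s\n", t.Name, t.Description)
	}
	return b.String()
}

func main() {
	fmt.Print(formatToolsYAML([]toolInfo{
		{Name: "list_prs", Description: "List open pull requests"},
	}))
}
```

The resulting YAML block is what the prompt-formatting step would embed under "Available tools" before sending the question to the LLM.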