feat(lark): Add /ask command using OpenAI #251

Open · wants to merge 5 commits into base: main

Conversation

@wuhuizuo (Contributor) commented Apr 1, 2025

Integrates with the OpenAI API to allow users to ask questions directly within Lark chat. The command handles basic argument parsing, help requests, and sends the query to the configured AI model.
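
A minimal sketch of the handler shape this description implies (a sketch only, assuming hypothetical flag and help-text strings; processAskRequest is the PR's own function, sketched further down in this thread):

package handler

import (
	"context"
	"fmt"
	"strings"
)

// runCommandAsk parses the /ask arguments, answers help requests locally,
// and forwards everything else to the configured model.
func runCommandAsk(ctx context.Context, args []string) (string, error) {
	// Help requests are answered without calling the model.
	if len(args) == 0 || args[0] == "-h" || args[0] == "--help" {
		return "Usage: /ask <question> -- ask the configured AI model a question.", nil
	}
	// Everything else is treated as the question text.
	question := strings.Join(args, " ")
	answer, err := processAskRequest(ctx, question)
	if err != nil {
		return "", fmt.Errorf("failed to process ask request: %w", err)
	}
	return answer, nil
}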


ti-chi-bot (bot) commented Apr 1, 2025

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all


ti-chi-bot (bot) commented Apr 1, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask wuhuizuo for approval before proceeding. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

gemini-code-assist bot (Contributor) left a comment


Hello @wuhuizuo, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

This pull request introduces the /ask command to the Lark chat application, enabling users to query OpenAI's language models directly within Lark. It adds the necessary dependencies, defines the command handler, and integrates with the OpenAI API to process user questions and return AI-generated responses. The implementation includes argument parsing, help text, and a placeholder for future integration with internal tools.

Highlights

  • New Feature: Adds the /ask command to Lark, allowing users to ask questions to an AI assistant.
  • OpenAI Integration: Integrates with the OpenAI API to process questions and generate responses.
  • Command Handling: Implements argument parsing and help text for the /ask command.
  • Context Passing: Sets up the context with OpenAI configuration, model, and system prompt.

Changelog

  • chatops-lark/go.mod
    • Added github.com/sashabaranov/go-openai as a dependency.
  • chatops-lark/go.sum
    • Updated checksums to include github.com/sashabaranov/go-openai.
  • chatops-lark/pkg/events/handler/ask.go
    • Created a new file ask.go to handle the /ask command logic.
    • Implemented runCommandAsk function to process user questions and interact with the OpenAI API.
    • Added help text and detailed help text for the /ask command.
    • Implemented processAskRequest function to send the question to OpenAI and return the response.
    • Added logging for debugging purposes.
  • chatops-lark/pkg/events/handler/root.go
    • Imported the github.com/sashabaranov/go-openai package.
    • Added constants for OpenAI configuration keys.
    • Added the /ask command to the commandConfigs map.
    • Implemented the SetupContext function for the /ask command to configure the OpenAI client (a sketch of this registration follows below).
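
To make the changelog concrete, here is a rough sketch of how such a registration could look; the commandConfig struct and the setupAskCtx name are assumptions inferred from the changelog, not the repository's actual definitions.

// Hypothetical shapes, same package as the sketch above.
type commandConfig struct {
	Handler      func(ctx context.Context, args []string) (string, error)
	SetupContext func(ctx context.Context, config map[string]any) context.Context
}

var commandConfigs = map[string]commandConfig{
	"/ask": {
		Handler:      runCommandAsk,
		SetupContext: setupAskCtx, // injects the OpenAI config, model, and system prompt
	},
}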
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


A bot in the chat,
Answers questions just like that.
With AI's might,
It sheds some light,
Making knowledge easy to grab.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@ti-chi-bot ti-chi-bot bot added the size/L label Apr 1, 2025
gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request introduces the /ask command, enabling users to query OpenAI's API directly within Lark chat. The implementation includes argument parsing, help requests, and interaction with the configured AI model. Overall, the changes are well-structured and add significant functionality to the bot. However, there are a few areas that could be improved for better error handling, context management, and code clarity.

Summary of Findings

  • Error Handling in OpenAI API Call: The error handling for the OpenAI API call could be improved by providing more context in the error message. This would make it easier to debug issues when the API call fails.
  • Context Management for OpenAI Configuration: The way OpenAI configuration is handled in the SetupContext function could be made more robust by adding error checking and validation for the configuration values.
  • TODO in processAskRequest: The processAskRequest function contains a TODO comment indicating that the actual interaction with the AI model and MCP tools needs to be implemented. This should be addressed before merging the pull request.

Merge Readiness

The pull request introduces a valuable feature by integrating OpenAI with Lark. However, there are some improvements that should be addressed before merging, particularly regarding error handling and context management. While I am unable to directly approve this pull request, I recommend addressing the identified issues to ensure the stability and reliability of the new feature. Ensure that others review and approve this code before merging.

}

// processAskRequest is a placeholder for the core logic interacting with the LLM and tools.
// TODO: Implement the actual interaction with the AI model and MCP tools.

Severity: high

This is an important TODO that needs to be addressed before merging. Ensure that the actual interaction with the AI model and MCP tools is implemented.

// TODO: Implement the actual interaction with the AI model and MCP tools.
// This implementation should include parsing the question, querying MCP tools, formatting a prompt, sending the prompt to the LLM, receiving the response, and formatting the response for Lark.
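
For reference, a minimal sketch of what the completed function might look like using the sashabaranov/go-openai client added in this PR. The context keys follow the names mentioned for root.go; the MCP-tool step is deliberately elided, since that is exactly what the TODO covers, and the bare type assertions assume SetupContext has already validated the values.

func processAskRequest(ctx context.Context, question string) (string, error) {
	// Pull the configuration that SetupContext placed on the context.
	cfg := ctx.Value(ctxKeyOpenAIConfig).(openai.ClientConfig)
	model := ctx.Value(ctxKeyOpenAIModel).(string)
	systemPrompt := ctx.Value(ctxKeyOpenAISystemPrompt).(string)

	client := openai.NewClientWithConfig(cfg)
	resp, err := client.CreateChatCompletion(ctx, openai.ChatCompletionRequest{
		Model: model,
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleSystem, Content: systemPrompt},
			{Role: openai.ChatMessageRoleUser, Content: question},
		},
	})
	if err != nil {
		log.Err(err).Msg("failed to create chat completion")
		return "", fmt.Errorf("failed to create chat completion: %w", err)
	}
	if len(resp.Choices) == 0 {
		return "", fmt.Errorf("chat completion returned no choices")
	}
	return resp.Choices[0].Message.Content, nil
}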

Comment on lines +73 to +74
if err != nil {
return "", fmt.Errorf("failed to process ask request: %w", err)

Severity: medium

Consider adding more context to the error message, such as the question that was asked. This can help with debugging when the ask request fails.

return "", fmt.Errorf("failed to process ask request for question %q: %w", question, err)

Comment on lines 116 to 117
log.Err(err).Msg("failed to create chat completion")
return "", fmt.Errorf("failed to create chat completion: %w", err)

Severity: medium

Consider including the model name in the error message for better debugging.

return "", fmt.Errorf("failed to create chat completion with model %s: %w", openaiModel.(string), err)

Comment on lines 65 to 70
case map[string]any:
	openaiCfg = openai.DefaultConfig(v["api_key"].(string))
	openaiCfg.BaseURL = v["base_url"].(string)
	openaiCfg.APIType = openai.APIType(v["api_type"].(string))
	openaiCfg.APIVersion = v["api_version"].(string)
	openaiCfg.Engine = v["engine"].(string)

Severity: medium

It would be beneficial to add error checking after each type assertion to ensure the values are of the expected type. This can prevent unexpected panics if the configuration is not set up correctly.

if apiKey, ok := v["api_key"].(string); ok {
	openaiCfg = openai.DefaultConfig(apiKey)
} else {
	log.Error().Msg("api_key not found or not a string")
	return ctx // or return an error, depending on desired behavior
}
if baseURL, ok := v["base_url"].(string); ok {
	openaiCfg.BaseURL = baseURL
} else {
	log.Error().Msg("base_url not found or not a string")
}
if apiType, ok := v["api_type"].(string); ok {
	openaiCfg.APIType = openai.APIType(apiType)
} else {
	log.Error().Msg("api_type not found or not a string")
}
if apiVersion, ok := v["api_version"].(string); ok {
	openaiCfg.APIVersion = apiVersion
} else {
	log.Error().Msg("api_version not found or not a string")
}
if engine, ok := v["engine"].(string); ok {
	openaiCfg.Engine = engine
} else {
	log.Error().Msg("engine not found or not a string")
}
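
A more compact alternative (using a hypothetical helper, not code from this PR or the suggestion above) would centralize the repeated assertion-and-log pattern:

// getString is a hypothetical helper: it looks up a key in the map, asserts
// the value to string, and logs when the key is missing or mistyped.
func getString(m map[string]any, key string) (string, bool) {
	s, ok := m[key].(string)
	if !ok {
		log.Error().Str("key", key).Msg("config value missing or not a string")
	}
	return s, ok
}

Each field then becomes a two-line lookup, and the validation behavior stays in one place.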

Comment on lines 76 to 77
newCtx = context.WithValue(newCtx, ctxKeyOpenAIModel, config["openai.model"])
newCtx = context.WithValue(newCtx, ctxKeyOpenAISystemPrompt, config["openai.system_prompt"])

Severity: medium

Consider adding error checking to ensure that config["openai.model"] and config["openai.system_prompt"] exist and are of the expected type (string).

model, ok := config["openai.model"].(string)
if !ok {
	log.Error().Msg("openai.model not found or not a string")
	return ctx // or return an error
}
systemPrompt, ok := config["openai.system_prompt"].(string)
if !ok {
	log.Error().Msg("openai.system_prompt not found or not a string")
	return ctx // or return an error
}

newCtx := context.WithValue(ctx, ctxKeyOpenAIConfig, openaiCfg)
newCtx = context.WithValue(newCtx, ctxKeyOpenAIModel, model)
newCtx = context.WithValue(newCtx, ctxKeyOpenAISystemPrompt, systemPrompt)
return newCtx

ti-chi-bot (bot) commented Apr 1, 2025

@gemini-code-assist[bot]: changing LGTM is restricted to collaborators

In response to this:

> [Gemini Code Assist's code review comment, quoted in full above]
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

wuhuizuo added 2 commits April 1, 2025 21:34
Commit 1: Integrate with the Model Context Protocol (MCP) via the mcp-go client to list available tools. The list is formatted as YAML. Adds required dependencies, including mcp-go and testify. (A small sketch of the YAML formatting follows below.)

Commit 2:
  • Added new configuration options for LLM and MCP tools in README.md
  • Updated go.mod and go.sum with new dependencies
  • Implemented LLM and MCP tool interaction in ask.go
  • Added new file ask_mcp.go for MCP tool handling
  • Refactored context setup functions for commands
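
As a rough illustration of the YAML formatting the first commit describes (toolInfo and formatToolsYAML are hypothetical stand-ins, the actual mcp-go client calls live in ask_mcp.go, and gopkg.in/yaml.v3 is an assumed dependency, not necessarily the one the PR uses):

import "gopkg.in/yaml.v3"

// toolInfo is a hypothetical projection of an MCP tool for display.
type toolInfo struct {
	Name        string `yaml:"name"`
	Description string `yaml:"description"`
}

// formatToolsYAML renders the tool list as YAML, as the commit message describes.
func formatToolsYAML(tools []toolInfo) (string, error) {
	out, err := yaml.Marshal(tools)
	if err != nil {
		return "", err
	}
	return string(out), nil
}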
@ti-chi-bot ti-chi-bot bot added size/XL and removed size/L labels Apr 2, 2025
@wuhuizuo wuhuizuo marked this pull request as ready for review April 2, 2025 10:29