Add Structured Output Support #36
Diff (excerpt):

```diff
@@ -10,6 +10,8 @@
 import optuna
 import optuna_dashboard
 import plotly
+from pydantic import BaseModel
+from pydantic import Field


 class OptunaMCP(FastMCP):
@@ -59,12 +61,38 @@ class TrialToAdd:
     system_attrs: dict[str, typing.Any] | None


+class StudyResponse(BaseModel):
+    study_name: str
+    sampler_name: (
+        typing.Literal["TPESampler", "NSGAIISampler", "RandomSampler", "GPSampler"] | None
+    ) = Field(default=None, description="The name of the sampler used in the study, if available.")
+    directions: list[str] | None = Field(
```
Suggested change:

```diff
-    directions: list[str] | None = Field(
+    directions: list[typing.Literal["minimize", "maximize"]] | None = Field(
```
[Notes] Since metric names correspond to the study's directions, we might be able to include them in StudyResponse as follows:

```python
StudyResponse:
    ...
    metric_names: list[str] | None = Field(default=None, description="...")
```

But this new approach may need further discussion, so we can work on it in a follow-up PR.
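For reference, a small sketch of where those metric names come from on the Optuna side; the study setup and the example names ("loss", "accuracy") are illustrative only, not part of this PR:

```python
import optuna

# A multi-objective study: one metric name per optimization direction.
study = optuna.create_study(directions=["minimize", "maximize"])
study.set_metric_names(["loss", "accuracy"])

# Study.metric_names is list[str] | None and stays None when names were never
# set, which is why such a field would default to None in the response model.
print(study.metric_names)  # ['loss', 'accuracy']
```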
ditto.
An alternative approach is to return the list of TrialResponse. So, how about keeping structured_output=False for future updates?

Suggested change:

```diff
-@mcp.tool(structured_output=True)
+@mcp.tool()
```
The current implementation of MCP sets the default value of structured_output to None, which automatically chooses the return type (either structured or unstructured). In the case of get_trials, structured output is returned if structured_output is not specified.

To preserve the current behavior of main for future updates, I explicitly set structured_output=False. This also simplifies the test cases, as mentioned in this discussion.
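For context, a minimal sketch of the two decorator forms being discussed; the server name, tool names, bodies, and return annotations are placeholders, not this PR's actual implementation:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("optuna-example")


# structured_output left at its default (None): FastMCP decides from the return
# annotation, so a dict/model return may be sent back as structured content.
@mcp.tool()
def get_study_name() -> dict[str, str]:
    return {"study_name": "example-study"}


# structured_output=False: always return the plain, unstructured text result,
# i.e. the current behavior on main that this PR preserves for get_trials.
@mcp.tool(structured_output=False)
def get_trials_text() -> str:
    return "trials rendered as text"
```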
Hmm, I found that the proposed best_trial returns more information than the previous implementation (e.g. user_attrs and system_attrs), while the proposed best_trials returns less (it omits distributions, datetime_start, datetime_end, and intermediate_values).

This PR did not introduce this gap between best_trial and best_trials, and we can address this issue in a follow-up PR.
Thank you for pointing that out. At the moment, I don’t have a strong preference regarding what should be returned in best_trial and best_trials. This issue will be addressed in a follow-up PR.
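To make the gap concrete, here is a hedged sketch of a trial response model that carries both sets of fields mentioned above; the model name FullTrialResponse and the exact field types are assumptions for illustration, distinct from this PR's TrialResponse:

```python
import datetime
import typing

from pydantic import BaseModel
from pydantic import Field


class FullTrialResponse(BaseModel):
    number: int
    state: str | None = None
    values: list[float] | None = None
    params: dict[str, typing.Any] = Field(default_factory=dict)
    # Returned by the proposed best_trial but not by the previous implementation.
    user_attrs: dict[str, typing.Any] = Field(default_factory=dict)
    system_attrs: dict[str, typing.Any] = Field(default_factory=dict)
    # Currently missing from the proposed best_trials.
    distributions: dict[str, str] = Field(default_factory=dict)
    datetime_start: datetime.datetime | None = None
    datetime_end: datetime.datetime | None = None
    intermediate_values: dict[int, float] = Field(default_factory=dict)
```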
Alternatively, we can define the response type for the Optuna dashboard, such as:

```python
class OptunaDashboardResponse(BaseModel):
    url: str = Field(description="The URL of the Optuna dashboard.")
```

But this lacks information on whether the server has been newly started or not, so we can discuss this approach in a separate PR.
Defining a response type for the Optuna dashboard sounds like a good idea. This will also be addressed in a follow-up PR.
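As a concrete starting point for that follow-up, a hedged sketch that also carries the "newly started" information raised above; the newly_started field name is an assumption:

```python
from pydantic import BaseModel
from pydantic import Field


class OptunaDashboardResponse(BaseModel):
    url: str = Field(description="The URL of the Optuna dashboard.")
    # Hypothetical field: whether this call started a new dashboard server
    # or reused one that was already running.
    newly_started: bool = Field(
        description="True if the dashboard server was started by this call."
    )
```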
[Notes] A nit, but the supported samplers are also listed in the definition of `set_sampler`. We may define `SamplerName = typing.Literal["TPESampler", "NSGAIISampler", "RandomSampler", "GPSampler"]` to prevent the inconsistency. But we can work on it in a follow-up PR.
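A minimal sketch of that alias; the literal values and the StudyResponse excerpt mirror the diff above, while wiring the alias into set_sampler's signature is assumed rather than shown in this PR:

```python
import typing

from pydantic import BaseModel
from pydantic import Field

# Single source of truth for the supported sampler names, intended to be
# shared between StudyResponse and set_sampler.
SamplerName = typing.Literal["TPESampler", "NSGAIISampler", "RandomSampler", "GPSampler"]


class StudyResponse(BaseModel):
    study_name: str
    sampler_name: SamplerName | None = Field(
        default=None, description="The name of the sampler used in the study, if available."
    )
```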