Conversation

@mare5x (Contributor) commented Nov 11, 2025

  • With this workaround, gpt-oss is now significantly more robust: it often does not call the submit_query_id tool, but we now return the latest dataframe anyway.
  • Added a gpt-oss:20b config file.

Addresses #84

@mare5x mare5x requested a review from kosstbarz November 11, 2025 10:53
@kosstbarz (Contributor) commented

I wonder whether this change could break our existing use case where the agent asks for a clarification. Do we have this use case in Databao?

@mare5x (Contributor, Author) commented Nov 13, 2025

> I wonder whether this change could break our existing use case where the agent asks for a clarification. Do we have this use case in Databao?

I don't think that was ever properly discussed (#69). But together with changes from #87 we can sort of handle the issue with output modality hints.

Do you have an idea of how we could detect whether the LLM is asking for a clarification vs. whether it just didn't call submit?

@mare5x (Contributor, Author) commented Nov 13, 2025

I was doing some testing, and it conveniently turns out that when the LLM asks for clarification (without a tool call), no dataframe is returned by the workaround in this PR, because every ask starts with an init_state that contains df=None. Consequently, a dataframe is only returned if run_sql_query was executed in the current ask thread.

@mare5x mare5x merged commit 5768da3 into main Nov 14, 2025
2 checks passed
@mare5x mare5x deleted the mhostnik/lh-return-df branch November 14, 2025 10:31


Development

Successfully merging this pull request may close these issues.

Empty output dataframes in the sample project on a local machine with ollama
