I tried to use tool calling with gpt-oss-120b (hosted on the Scaleway inference API) and it didn't work: the model never returns any tool calls.
The exact same pipeline works well with llama-3.3-70b-instruct, so I'm wondering if there are changes needed on LangChain's side to support tool calling with this model?
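For reference, here is a minimal sketch of the kind of pipeline involved (the base_url and the tool below are placeholders, not my actual setup):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def get_weather(city: str) -> str:
    """Return the weather for a city (placeholder tool for the repro)."""
    return f"It is sunny in {city}."


llm = ChatOpenAI(
    model="gpt-oss-120b",                   # no tool calls returned
    # model="llama-3.3-70b-instruct",       # same code works with this model
    base_url="https://api.scaleway.ai/v1",  # placeholder for the Scaleway OpenAI-compatible endpoint
    api_key="...",                          # Scaleway API key
)

llm_with_tools = llm.bind_tools([get_weather])
response = llm_with_tools.invoke("What's the weather in Paris?")
print(response.tool_calls)  # empty with gpt-oss-120b, populated with llama-3.3-70b-instruct
```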