This guide covers migrating an ash_ai app from the old LangChain-based runtime to the ReqLLM-based runtime.
- LangChain runtime integration was removed.
- LLM access now goes through ReqLLM.
- Tool orchestration now goes through `AshAi.ToolLoop`.
- Prompt-backed actions (`prompt/2`) now use ReqLLM model specifications.
- Generated chat code (`mix ash_ai.gen.chat`) now uses ReqLLM.
- Update dependencies (`:langchain` out, `:req_llm` in).
- Move provider keys to `config :req_llm`.
- Replace LangChain model structs with ReqLLM model specs.
- Replace removed AshAi APIs with ReqLLM-first APIs.
- Re-run the chat generator if you use generated chat code.
- Run format, tests, and checks.
In `mix.exs`:

- Remove the LangChain dependency.
- Add the ReqLLM dependency:

```elixir
{:req_llm, "~> 1.7"}
```

Then fetch and resolve:

```sh
mix deps.get
```

Configure provider keys under `:req_llm` in `config/runtime.exs`:
```elixir
config :req_llm,
  openai_api_key: System.get_env("OPENAI_API_KEY"),
  anthropic_api_key: System.get_env("ANTHROPIC_API_KEY"),
  google_api_key: System.get_env("GOOGLE_API_KEY")
```

Set only the keys for the providers your app needs.
`prompt/2` and tool loops now use ReqLLM model specs.

Before (LangChain struct-based setup):

```elixir
LangChain.ChatModels.ChatOpenAI.new!(%{model: "gpt-4o"})
```

After (ReqLLM model spec):

```elixir
"openai:gpt-4o"
```

Model strings follow the `"provider:model-name"` format and can be browsed at https://llmdb.xyz.
| Old | New |
|---|---|
| `AshAi.setup_ash_ai/2` | `AshAi.ToolLoop.run/2` or `AshAi.ToolLoop.stream/2` |
| `AshAi.functions/1` | `AshAi.list_tools/1` or `AshAi.build_tools_and_registry/1` |
| `AshAi.iex_chat/2` | `AshAi.iex_chat/1` |
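As an illustration of the new entry point, a minimal tool-loop call might look like the following sketch. The option names (`:model`, `:tools`) and the message shape are assumptions inferred from this guide's mapping, not a verified `AshAi.ToolLoop` signature; check your installed version's docs.

```elixir
# Sketch only: option names and message shape are assumptions based on
# this guide's mapping table, not verified AshAi.ToolLoop signatures.
messages = [%{role: :user, content: "List overdue tickets"}]

{:ok, result} =
  AshAi.ToolLoop.run(messages,
    model: "openai:gpt-4o",
    # AshAi.list_tools/1 replaces AshAi.functions/1.
    tools: AshAi.list_tools(otp_app: :my_app)
  )
```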
The `prompt/2` macro remains, but the model argument now takes ReqLLM model specs:

```elixir
run prompt("openai:gpt-4o",
  prompt: "Summarize: <%= @input.arguments.text %>",
  tools: true
)
```

Supported model forms:

- A string model spec (`"provider:model"`)
- ReqLLM tuple model forms
- A function returning one of the above
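For example, the function form can resolve the model spec lazily at runtime (say, from the environment or per tenant). This is a sketch: the zero-arity function shape shown here is an assumption, so confirm the supported arity in your ash_ai version.

```elixir
# Sketch: resolve the model spec lazily. The zero-arity function form
# is an assumption; consult your ash_ai version for the supported arity.
run prompt(
  fn -> System.get_env("LLM_MODEL", "openai:gpt-4o") end,
  prompt: "Summarize: <%= @input.arguments.text %>"
)
```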
For prompt-backed actions, the new customization boundary is:

- `tools:` filters AshAi-exposed tools.
- `extra_tools:` adds arbitrary `ReqLLM.Tool`s.
- `req_llm_opts:` passes provider/request options through to ReqLLM.
- `transform_flow:` is the preferred ReqLLM-native customization hook.
Before, custom tools were often attached by mutating the LangChain chain:

```elixir
run prompt(llm,
  tools: true,
  modify_chain: fn chain, _context ->
    chain
    |> LangChain.Chains.LLMChain.add_tools([my_custom_tool])
    |> LangChain.Chains.LLMChain.update_custom_context(%{trace_id: "abc"})
  end
)
```

Now, keep AshAi tools and arbitrary ReqLLM tools separate:
```elixir
run prompt("openai:gpt-4o",
  tools: true,
  extra_tools: [
    ReqLLM.Tool.new!(
      name: "lookup_weather",
      description: "Look up weather by city",
      parameter_schema: [city: [type: :string, required: true]],
      callback: fn %{"city" => city} -> {:ok, %{city: city, forecast: "sunny"}} end
    )
  ],
  req_llm_opts: [trace_id: "abc"]
)
```

If you used `modify_chain` for prompt customization before, express those changes directly against `transform_flow`:
```elixir
run prompt("openai:gpt-4o",
  tools: [],
  transform_flow: fn flow_state, _context ->
    %{
      flow_state
      | extra_tools: flow_state.extra_tools ++ [my_custom_tool],
        req_llm_opts: Keyword.put(flow_state.req_llm_opts, :trace_id, "abc")
    }
  end
)
```

Use `AshAi.EmbeddingModels.ReqLLM` with an explicit model and dimensions:
```elixir
vectorize do
  embedding_model {AshAi.EmbeddingModels.ReqLLM,
    model: "openai:text-embedding-3-small",
    dimensions: 1536
  }
end
```

If your app uses generated chat files, re-run:

```sh
mix ash_ai.gen.chat --live
```

or your existing generator flags. The generated code now uses ReqLLM and `AshAi.ToolLoop`.
Run:

```sh
mix format
mix test
mix check
```

Optional sanity check for leftover references:

```sh
rg -n "LangChain|langchain" lib test config
```

- `verbose?` on prompt-backed actions is supported and logs tool-loop lifecycle events when set to `true`.
- Prompt-backed actions default to `max_iterations: :infinity` for tool loops; set an integer to enforce limits.
- Tool-loop failures in prompt-backed actions are returned as action errors (instead of runtime raises), including the loop reason.
- Unconstrained `:map` prompt return types use a permissive schema (`type: object`) to avoid over-constraining map keys.
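Taken together, those behaviors suggest a prompt-backed action that caps its tool loop and logs lifecycle events. This combination is a sketch, assuming both options are accepted alongside the other `prompt/2` options:

```elixir
# Sketch: assumes verbose? and max_iterations sit alongside the other
# prompt/2 options, per the notes above.
run prompt("openai:gpt-4o",
  prompt: "Resolve the user's request",
  tools: true,
  # Log tool-loop lifecycle events.
  verbose?: true,
  # Override the :infinity default with a hard cap.
  max_iterations: 8
)
```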
The old LangChain-era adapter concepts map to ReqLLM-era behavior as follows:

- `StructuredOutput` -> `ReqLLM.generate_object/4` with schema-derived typed action returns.
- `CompletionTool` -> `AshAi.ToolLoop.run/2` or `AshAi.ToolLoop.stream/2` tool-calling orchestration.
- `RequestJson` -> prompt templates/messages plus typed return schema casting in `prompt/2`.
- `Raw` -> use non-structured text generation directly via ReqLLM in custom code paths when typed action returns are not desired.
When migrating old `modify_chain` usage, move that customization into `transform_flow:`, `tools:`, `extra_tools:`, and `req_llm_opts:` directly.
`AshAi.EmbeddingModels.ReqLLM.generate/2` returns:

- `{:ok, embeddings}` on success
- `{:error, reason}` on failure
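A caller can match on those result shapes directly. In this sketch the argument order (a list of texts first, then options) is an assumption based on typical Ash embedding-model callbacks, not a confirmed signature:

```elixir
# Sketch: argument order (texts, then opts) is an assumption.
case AshAi.EmbeddingModels.ReqLLM.generate(
       ["hello world"],
       model: "openai:text-embedding-3-small",
       dimensions: 1536
     ) do
  {:ok, [embedding | _rest]} -> IO.inspect(length(embedding), label: "dims")
  {:error, reason} -> IO.inspect(reason, label: "embedding failed")
end
```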
- Missing API key errors: add the matching `:req_llm` key or environment variable for your selected provider.
- Provider schema compatibility: if a provider rejects strict tool schemas, set `strict: false` in tool loop or prompt tool options.