
OCI Generative AI docs: Quality improvements and bug fixes#2940

Merged
Mason Daugherty (mdrxy) merged 3 commits into langchain-ai:main from
fede-kamel:oci-docs-improvements
Mar 5, 2026

Conversation

@fede-kamel
Contributor

Summary

Follow-up improvements to the OCI Generative AI documentation (#2925), based on testing this morning. Apologies for the separate PR; I was working on these improvements when the original was merged.

Changes:

  1. Add example outputs - Show actual model responses so developers know what to expect
  2. Complete tool calling flow - Added full ToolMessage execution loop (was missing)
  3. Fix Gemini PDF format - Changed {"type": "media", ...} to correct {"type": "document_url", ...} format
  4. Simplify examples - Use strings/tuples where HumanMessage isn't required:
    • llm.invoke("question") instead of llm.invoke([HumanMessage(...)])
    • ("user", "..."), ("assistant", "...") tuples for multi-turn
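The simplified input forms in item 4 can be sketched with a small, hypothetical normalizer. This is illustrative only (`normalize_input` is not LangChain's actual conversion code); it just shows how a bare string and role/content tuples map to messages:

```python
# Illustrative sketch only: shows how the simplified inputs map to
# role/content pairs. The real conversion happens inside LangChain;
# normalize_input is a hypothetical helper for clarity.

def normalize_input(value):
    """Map a string or a list of (role, content) tuples to message dicts."""
    if isinstance(value, str):
        # A bare string is treated as a single user (human) message.
        return [{"role": "user", "content": value}]
    return [{"role": role, "content": content} for role, content in value]

# llm.invoke("question") style: a plain string
single = normalize_input("What is SQL injection?")

# Multi-turn style: ("user", ...), ("assistant", ...) tuples
multi = normalize_input([
    ("user", "Hi, I'm reviewing a login form."),
    ("assistant", "Happy to help. What does the query look like?"),
    ("user", "It concatenates user input into the SQL string."),
])
```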

Testing

All 13 integration tests pass against real OCI GenAI services:

  • Basic invocation, multi-turn, streaming, async
  • Tool calling with complete execution loop
  • Structured output with Pydantic
  • Vision (Llama 3.2 90B)
  • Gemini PDF processing
  • Text & image embeddings (Cohere)
  • RAG with FAISS
  • AI Agent (create_oci_agent)

Files Changed

  • src/oss/python/integrations/chat/oci_generative_ai.mdx
  • src/oss/python/integrations/providers/oci.mdx
  • src/oss/python/integrations/text_embedding/oci_generative_ai.mdx

- Add example output for invocation (shows SQL injection detection)
- Complete tool calling flow with ToolMessage execution loop
- Fix multiline strings (no more escaped newlines)
- Replace all '...' placeholders with full parameters
- Add proper imports where missing
- Show realistic output for vision and PDF examples
- Cleaner async examples with proper message format
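The "complete tool calling flow" mentioned above follows a standard pattern: invoke the model, execute any requested tools, append the results as tool messages, and invoke again until no tools are requested. A minimal, self-contained mock of that loop (`fake_model` and `get_weather` are stand-ins, not OCI or LangChain APIs; the real flow uses `ChatOCIGenAI` with `AIMessage`/`ToolMessage`):

```python
# Self-contained mock of the tool-calling execution loop. fake_model and
# get_weather are illustrative stand-ins, not real OCI GenAI calls.

def get_weather(city):
    """Pretend tool: returns a canned forecast for the given city."""
    return f"Sunny in {city}"

def fake_model(messages):
    """First call: request a tool. Second call: answer using the result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "content": "",
                "tool_calls": [{"id": "call_1", "name": "get_weather",
                                "args": {"city": "Austin"}}]}
    tool_result = next(m["content"] for m in messages if m["role"] == "tool")
    return {"role": "assistant", "content": f"Forecast: {tool_result}",
            "tool_calls": []}

tools = {"get_weather": get_weather}
messages = [{"role": "user", "content": "What's the weather in Austin?"}]

# The execution loop: keep invoking until the model stops requesting tools.
while True:
    response = fake_model(messages)
    messages.append(response)
    if not response["tool_calls"]:
        break
    for call in response["tool_calls"]:
        result = tools[call["name"]](**call["args"])
        # Equivalent of appending ToolMessage(tool_call_id=call["id"], ...)
        messages.append({"role": "tool", "tool_call_id": call["id"],
                         "content": result})
```

The key step the original docs were missing is the `role: "tool"` append: without feeding the tool result back, the model never produces the final answer.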
The OCI API expects document_url format for PDFs:
  {"type": "document_url", "document_url": {"url": "data:application/pdf;base64,..."}}

This replaces the incorrect `media` format that was previously documented.
All 12 integration tests now pass.
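The `document_url` shape above is built from raw PDF bytes by base64-encoding them into a data URL. A minimal sketch (`pdf_bytes` is a placeholder, not a real PDF):

```python
import base64

# Minimal sketch: build the document_url content block from raw bytes.
# pdf_bytes is a placeholder stand-in for real PDF file contents.
pdf_bytes = b"%PDF-1.4 example bytes"
b64 = base64.b64encode(pdf_bytes).decode("utf-8")

content_block = {
    "type": "document_url",
    "document_url": {"url": f"data:application/pdf;base64,{b64}"},
}
```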
- Multi-turn: use ("user", ...), ("assistant", ...) tuple format
- Streaming: use simple string instead of messages list
- Async: use simple string for ainvoke/astream
- Vision prompt: condense to single line

HumanMessage only used where required (multimodal content, tool loops).
@fede-kamel
Contributor Author

Hey Mason Daugherty (@mdrxy) - apologies for the separate PR! I was working on some improvements this morning when the original (#2925) got merged.

These are minor quality enhancements:

  • Added example outputs so developers know what to expect
  • Fixed the Gemini PDF format (was using wrong content type)
  • Simplified examples to use strings/tuples where HumanMessage isn't needed

All changes tested against real OCI GenAI services (13/13 tests pass).

Thanks!

