
Port main branch updates to refactored api/common/conserver structure#149

Merged
pavanputhra merged 1 commit into optimize-docker-images from port-main-to-optimize on Apr 15, 2026

Conversation

@pavanputhra
Contributor

Summary

Backports 7 commits from main (up to 5f3350c) into the optimize-docker-images split layout, adapting all file paths and imports from the old server/ structure.

  • New common/lib/openai_client.py — shared get_openai_client() / get_async_openai_client() supporting OpenAI, Azure OpenAI, and LiteLLM proxy; all links/storage now call this instead of constructing clients inline
  • deepgram_link — add LiteLLM proxy transcription path (transcribe_via_litellm), fix fd leak in audio temp file handling, make confidence check optional (not available on LiteLLM path)
  • wtf_transcribe — updated for new vfun /wtf API: simplified create_wtf_analysis (pass response body directly), file-binary field name, language option, diarize default→False, min-duration default→0, accept status 200 only
  • api/config endpoint uses Configuration.get_config() instead of reading the YAML file directly
  • tests — add mock_get_client patches to analyze_and_label and detect_engagement tests; fix test_external_ingress to patch api.index_vcon instead of api.index_vcon_parties
  • docs — add Langfuse integration and OTel Collector fan-out documentation to monitoring.md
  • .gitignore — add litellm_config.yaml (local dev file with credentials, was untracked)

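As a rough illustration of the shared factory described above (this is not the actual common/lib/openai_client.py — the option keys and defaults here are assumptions), the provider selection could come down to resolving which kwargs to hand the OpenAI SDK constructor:

```python
# Hypothetical sketch: choose client settings for plain OpenAI, Azure OpenAI,
# or a LiteLLM proxy. Option keys ("azure_endpoint", "litellm_base_url",
# "api_version") are illustrative, not the real config schema.

def resolve_client_kwargs(opts: dict) -> dict:
    """Return the kwargs a client factory would pass to the SDK constructor."""
    if opts.get("azure_endpoint"):
        # Azure OpenAI additionally needs an endpoint and an API version.
        return {
            "flavor": "azure",
            "api_key": opts["api_key"],
            "azure_endpoint": opts["azure_endpoint"],
            "api_version": opts.get("api_version", "2024-06-01"),
        }
    if opts.get("litellm_base_url"):
        # A LiteLLM proxy speaks the OpenAI wire protocol, so the plain
        # client works with base_url pointed at the proxy.
        return {
            "flavor": "openai",
            "api_key": opts["api_key"],
            "base_url": opts["litellm_base_url"],
        }
    return {"flavor": "openai", "api_key": opts["api_key"]}
```

Centralizing this in one module is what lets every link and storage backend switch providers via config instead of constructing clients inline.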
Test plan

  • Run existing test suite via docker compose run --rm conserver poetry run pytest
  • Verify /config endpoint returns config correctly
  • Verify deepgram transcription works with both direct Deepgram key and LiteLLM proxy
  • Verify wtf_transcribe works against updated vfun /wtf endpoint
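For reviewers checking the fd-leak fix in deepgram_link's temp-file handling: the classic leak is using only the path returned by tempfile.mkstemp() and never closing the OS-level descriptor it also returns. A generic sketch of the safe pattern (not the actual link code; function name and suffix are illustrative):

```python
import os
import tempfile

def write_audio_to_temp(audio_bytes: bytes) -> str:
    """Write audio to a temp file without leaking a file descriptor.

    tempfile.mkstemp() returns an already-open OS-level fd; code that uses
    only the returned path leaks one descriptor per call. Wrapping the fd
    with os.fdopen() transfers ownership so the context manager closes it.
    """
    fd, path = tempfile.mkstemp(suffix=".wav")
    try:
        with os.fdopen(fd, "wb") as f:  # closes the fd on exit
            f.write(audio_bytes)
    except Exception:
        os.unlink(path)  # don't leave a half-written file behind
        raise
    return path
```

Under load (one temp file per transcription request), the unclosed-fd variant eventually exhausts the process's descriptor limit, which is why this shows up as a fix rather than a refactor.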

🤖 Generated with Claude Code

Backport 7 commits from main (5f3350c..e98a3df) into the split layout,
adapting all file paths and imports from the old server/ structure.

Changes ported:
- Add shared openai_client.py (common/lib/) with get_openai_client() and
  get_async_openai_client() supporting OpenAI, Azure, and LiteLLM proxy
- Refactor all OpenAI-using links and storage to use get_openai_client():
  analyze, analyze_and_label, analyze_vcon, check_and_tag, detect_engagement,
  openai_transcribe, chatgpt_files, milvus
- deepgram_link: add LiteLLM proxy path (transcribe_via_litellm), fix fd
  leak in audio temp file handling, make confidence check optional
- wtf_transcribe: update for new vfun /wtf API — simplified create_wtf_analysis
  (pass response body directly), file-binary field, language option,
  diarize default→False, min-duration default→0, status 200 only
- api: /config endpoint uses Configuration.get_config() instead of reading
  the YAML file directly
- tests: add mock_get_client patches to analyze_and_label and
  detect_engagement tests; fix test_external_ingress to patch api.index_vcon
  instead of api.index_vcon_parties
- docs: add Langfuse integration and OTel Collector fan-out documentation
- .gitignore: add litellm_config.yaml (contains local credentials)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@pavanputhra pavanputhra merged commit 799d174 into optimize-docker-images Apr 15, 2026
1 check failed
