@Dubey123f Dubey123f commented Oct 25, 2025

🐞 Problem

The existing `evaluator.py` passed an unsupported `format` parameter to `provider.chat()`, causing runtime errors.
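A minimal sketch of the failure mode. The `Provider` class and its `chat()` signature here are assumptions for illustration, not the project's actual code: when `chat()` does not accept a `format` keyword, the call fails with a `TypeError` at runtime.

```python
class Provider:
    """Hypothetical stand-in for an LLM provider client."""

    def chat(self, messages):
        # Accepts only `messages`; no `format` keyword exists.
        return '{"score": 7}'

provider = Provider()
try:
    # The buggy call pattern: an unsupported kwarg crashes the evaluation.
    provider.chat(messages=[{"role": "user", "content": "hi"}], format="json")
except TypeError as e:
    print(e)  # unexpected keyword argument
```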

✅ Fix

  • Removed the invalid `format` kwarg from LLM API calls.
  • Added robust JSON parsing with graceful `JSONDecodeError` handling.
  • Added provider validation and improved logging.
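The parsing and validation fixes can be sketched as follows. Function names, the supported-provider set, and the empty-dict fallback are assumptions for illustration, not the PR's exact implementation:

```python
import json
import logging

logger = logging.getLogger(__name__)

# Assumed set of providers, for illustration only.
SUPPORTED_PROVIDERS = {"openai", "ollama"}

def parse_llm_json(raw: str) -> dict:
    """Parse an LLM response as JSON, degrading gracefully on malformed output."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        # Log and fall back instead of hard-crashing the evaluation.
        logger.warning("Malformed JSON from LLM response: %s", exc)
        return {}

def validate_provider(name: str) -> str:
    """Reject unknown providers early, before any LLM call is made."""
    if name not in SUPPORTED_PROVIDERS:
        raise ValueError(f"Unsupported provider: {name!r}")
    return name
```

Failing soft on malformed JSON (returning an empty dict) lets the caller decide how to handle an unusable response, while unknown providers still fail fast with a clear error.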

πŸ§ͺ Testing

  • Tested locally using mock LLM responses.
  • JSON parsing verified for malformed and valid responses.
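A sketch of the kind of mock-based check described above, with assumed helper names (`parse_llm_json` and the fake chat functions are illustrative, not the repo's test suite):

```python
import json

def fake_chat(messages):
    # Mock LLM response: well-formed JSON.
    return '{"score": 8, "summary": "Strong backend experience"}'

def broken_chat(messages):
    # Mock LLM response: not JSON at all.
    return "Sorry, I cannot comply."

def parse_llm_json(raw):
    """Same graceful-parsing behavior the fix introduces (assumed name)."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {}

result = parse_llm_json(fake_chat([]))    # parses cleanly
fallback = parse_llm_json(broken_chat([]))  # falls back to {}
```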

πŸ” Impact

This ensures stable resume evaluation across all model providers and prevents hard crashes during LLM calls.
