Implementation Details

Usage

1. JD-Augmented 1-to-1 Scoring

Enables users to provide both a resume and a job description for detailed fit analysis:

python score.py /path/to/resume.pdf /path/to/jd.pdf

2. Multi-Candidate Batch Processing and Shortlisting

Accepts a directory of resumes, a single job description, and a cutoff score threshold. Processes all resumes, ranks them, and outputs a shortlist of candidates meeting the minimum score:

python score.py /path/to/resume_directory /path/to/jd.pdf <cutoff_score_int>

Core Module Changes

models.py

  • Added ScoresWithJD and EvaluationDataWithJD models to structure the JD-augmented evaluation output (sketched below)
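
A minimal sketch of what these models might look like, assuming the project uses Pydantic for structured LLM output; the field names below are illustrative, not the PR's exact schema:

    from typing import List
    from pydantic import BaseModel

    class ScoresWithJD(BaseModel):
        """Per-criterion scores, extended with a JD-fit dimension (hypothetical fields)."""
        skills: int
        experience: int
        education: int
        jd_fit: int        # how well the resume matches the job description
        overall: float     # aggregate score used for ranking and shortlisting

    class EvaluationDataWithJD(BaseModel):
        """Full JD-augmented evaluation parsed from the LLM response."""
        scores: ScoresWithJD
        strengths: List[str]
        gaps: List[str]    # JD requirements not evidenced in the resume
        summary: str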

template_manager.py

  • Introduced jd_system_message and jd_evaluation_criteria prompts for fit analysis (illustrated below)
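
The shape of these prompts might resemble the following; the wording and the template variables ({job_description}, {resume_text}) are assumptions, not the PR's actual text:

    jd_system_message = (
        "You are an expert technical recruiter. Evaluate the candidate's resume "
        "strictly against the provided job description and reply with JSON "
        "matching the requested schema."
    )

    jd_evaluation_criteria = """\
    Job description:
    {job_description}

    Resume:
    {resume_text}

    Score the candidate on skills, experience, education, and overall fit to the
    job description, and list concrete strengths and gaps.
    """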

evaluator.py

  • Implemented _load_evaluation_prompt_with_jd to load the final, decorated JD-based evaluation prompt
  • Added evaluate_resume_with_jd to handle the JD-based evaluation flow (sketched below)
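
A sketch of how these two functions could fit together, assuming an OpenAI-style chat client and the prompt/model sketches above; the model name and call details are placeholders:

    from models import EvaluationDataWithJD
    from template_manager import jd_evaluation_criteria, jd_system_message

    def _load_evaluation_prompt_with_jd(resume_text: str, jd_text: str) -> str:
        """Decorate the JD evaluation criteria with the resume and JD text."""
        return jd_evaluation_criteria.format(
            job_description=jd_text,
            resume_text=resume_text,
        )

    def evaluate_resume_with_jd(client, resume_text: str, jd_text: str) -> EvaluationDataWithJD:
        """Send the JD-augmented prompt to the LLM and parse the structured reply."""
        prompt = _load_evaluation_prompt_with_jd(resume_text, jd_text)
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": jd_system_message},
                {"role": "user", "content": prompt},
            ],
            response_format={"type": "json_object"},
        )
        return EvaluationDataWithJD.model_validate_json(response.choices[0].message.content)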

transform.py

  • Created transform_evaluation_response_with_jd to parse the LLM output into CSV rows for saving (sketched below)
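
One plausible shape for this transformation, with the column set assumed from the models sketched above and the save_rows helper added purely for illustration:

    import csv
    from typing import List
    from models import EvaluationDataWithJD

    CSV_HEADER = ["candidate", "skills", "experience", "education", "jd_fit", "overall", "summary"]

    def transform_evaluation_response_with_jd(candidate: str, evaluation: EvaluationDataWithJD) -> List[str]:
        """Flatten a JD-augmented evaluation into a single CSV row."""
        s = evaluation.scores
        return [candidate, str(s.skills), str(s.experience), str(s.education),
                str(s.jd_fit), str(s.overall), evaluation.summary]

    def save_rows(rows: List[List[str]], path: str = "shortlist.csv") -> None:
        """Write the header plus the transformed rows to disk."""
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(CSV_HEADER)
            writer.writerows(rows)

Separating row construction from file writing lets a batch driver accumulate rows for a whole directory before writing a single shortlist file.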

score.py

  • Significantly refactored the driver script to support the new CLI argument patterns
  • Added _jd_context to extract the job description text from its PDF
  • Implemented evaluate_resume_with_jd as the core 1-to-1 scoring function
  • Created dir_main to orchestrate the batch-processing and ranking workflow over a resume directory
  • Added print_evaluation_results_with_jd for clean, user-facing output formatting, mirroring the base evaluation output
  • Added sanity checks for all CLI arguments (a combined sketch of the driver follows this list)
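
A combined sketch of the driver flow under the assumptions above (pypdf for PDF text extraction, an OpenAI client, and the evaluator signature sketched earlier); helper names such as _pdf_text, the error messages, and the output format are illustrative and will differ from the PR:

    import sys
    from pathlib import Path

    from openai import OpenAI
    from pypdf import PdfReader

    from evaluator import evaluate_resume_with_jd as _evaluate
    from models import EvaluationDataWithJD

    client = OpenAI()

    def _pdf_text(path: Path) -> str:
        """Extract plain text from a PDF."""
        return "\n".join(page.extract_text() or "" for page in PdfReader(str(path)).pages)

    def _jd_context(jd_path: Path) -> str:
        """Parse the job description PDF into text."""
        return _pdf_text(jd_path)

    def evaluate_resume_with_jd(resume_path: Path, jd_text: str) -> EvaluationDataWithJD:
        """Core 1-to-1 scoring: read the resume and delegate to the evaluator."""
        return _evaluate(client, _pdf_text(resume_path), jd_text)

    def print_evaluation_results_with_jd(name: str, evaluation: EvaluationDataWithJD) -> None:
        """Clean, user-facing formatting, mirroring the base evaluation output."""
        print(f"{name}: overall {evaluation.scores.overall}")
        print(f"  {evaluation.summary}")

    def dir_main(resume_dir: Path, jd_path: Path, cutoff: int) -> None:
        """Batch mode: score every resume, rank by overall score, shortlist above the cutoff."""
        jd_text = _jd_context(jd_path)
        results = [(pdf.name, evaluate_resume_with_jd(pdf, jd_text))
                   for pdf in sorted(resume_dir.glob("*.pdf"))]
        results.sort(key=lambda item: item[1].scores.overall, reverse=True)
        for name, evaluation in results:
            if evaluation.scores.overall >= cutoff:
                print_evaluation_results_with_jd(name, evaluation)

    def main() -> None:
        """Dispatch between 1-to-1 and batch modes with basic sanity checks on the CLI arguments."""
        if len(sys.argv) not in (3, 4):
            sys.exit("usage: python score.py <resume.pdf|resume_dir> <jd.pdf> [cutoff]")
        target, jd_path = Path(sys.argv[1]), Path(sys.argv[2])
        if not jd_path.is_file():
            sys.exit(f"job description not found: {jd_path}")
        if target.is_dir():
            if len(sys.argv) != 4 or not sys.argv[3].lstrip("-").isdigit():
                sys.exit("batch mode requires an integer cutoff score")
            dir_main(target, jd_path, int(sys.argv[3]))
        elif target.is_file():
            evaluation = evaluate_resume_with_jd(target, _jd_context(jd_path))
            print_evaluation_results_with_jd(target.name, evaluation)
        else:
            sys.exit(f"resume path not found: {target}")

    if __name__ == "__main__":
        main()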

Output (1-to-1): [screenshot]

Output (N-to-1, snippet): [screenshot]

If this PR is accepted, it will close #166 ([Feature Request] JD-Augmented Scoring and Multi-Candidate Batch Processing).
