[Feature Request] JD-Augmented Scoring and Multi-Candidate Batch Processing #166

@0ameyasr

Description

@0ameyasr

Problem

The hiring-agent tool currently provides robust 1-to-1 resume analysis based on the implicit job description for HackerRank's SDE Intern role. However, two critical capabilities are missing:

  1. The ability to evaluate candidates against an explicit job description provided as a PDF
  2. Support for batch processing multiple candidates against a single job description, which reflects the typical N-to-1 shortlisting workflow used in real-world hiring

Adding these capabilities would significantly expand the utility of the existing agent and better align it with practical recruitment workflows.

Proposed Solution

I propose implementing two high-impact workflows to extend the CLI functionality:

1. JD-Augmented 1-to-1 Scoring

Enables users to provide both a resume and a job description for detailed fit analysis:

python score.py /path/to/resume.pdf /path/to/jd.pdf

2. Multi-Candidate Batch Processing and Shortlisting

Accepts a directory of resumes, a single job description, and a cutoff score threshold. Processes all resumes, ranks them, and outputs a shortlist of candidates meeting the minimum score:

python score.py /path/to/resume_directory /path/to/jd.pdf <cutoff_score_int>
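The two invocation patterns above imply a small dispatch step in `score.py`. The sketch below is illustrative only: argument positions follow the commands shown, but the actual implementation's checks and names may differ.

```python
from pathlib import Path

def dispatch(argv):
    """Route CLI arguments to the 1-to-1 or batch (N-to-1) workflow.

    argv excludes the script name. This is a sketch of the argument
    patterns described above, not the real score.py entry point.
    """
    if len(argv) == 2 and Path(argv[0]).suffix == ".pdf":
        # score.py resume.pdf jd.pdf  -> JD-augmented 1-to-1 scoring
        return ("single", Path(argv[0]), Path(argv[1]))
    if len(argv) == 3:
        # score.py resume_dir jd.pdf <cutoff>  -> batch shortlisting
        if not argv[2].isdigit():
            raise SystemExit("cutoff score must be an integer")
        return ("batch", Path(argv[0]), Path(argv[1]), int(argv[2]))
    raise SystemExit("usage: score.py <resume.pdf|resume_dir> <jd.pdf> [cutoff]")
```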

Implementation Details

A comprehensive set of changes has been developed in a separate fork that integrates cleanly with the existing architecture:

Core Module Changes

models.py

  • Added ScoresWithJD and EvaluationDataWithJD models to structure the JD-enriched evaluation output

template_manager.py

  • Introduced jd_system_message and jd_evaluation_criteria prompts for fit analysis
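A minimal sketch of what these templates could look like (the wording and the `{jd_text}` / `{criteria}` placeholders are assumptions, not the actual prompts in `template_manager.py`):

```python
# Hypothetical JD-aware prompt templates.
jd_system_message = (
    "You are a technical recruiter. Evaluate the resume strictly against "
    "the following job description:\n\n{jd_text}"
)

jd_evaluation_criteria = (
    "Score the candidate on: {criteria}. "
    "Return integer scores from 0 to 100 for each criterion."
)

def build_system_prompt(jd_text: str) -> str:
    """Fill the system-message template with the extracted JD text."""
    return jd_system_message.format(jd_text=jd_text)
```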

evaluator.py

  • Implemented _load_evaluation_prompt_with_jd for loading the final, decorated JD-based evaluation prompt
  • Added evaluate_resume_with_jd to handle the JD-based evaluation flow

transform.py

  • Created transform_evaluation_response_with_jd to parse LLM output into CSV rows for persistence
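Flattening the structured response into a CSV row could look roughly like this (the response keys `candidate`, `scores`, and `summary` are assumed for illustration; the real transform matches the project's models):

```python
def transform_evaluation_response_with_jd(response: dict) -> list:
    """Flatten one JD-based evaluation response into a CSV row (sketch)."""
    scores = response.get("scores", {})
    return [
        response.get("candidate", ""),
        scores.get("technical_fit", ""),
        scores.get("overall", ""),
        response.get("summary", ""),
    ]
```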

score.py

  • Significantly refactored the driver script to support the new CLI argument patterns
  • Added a _jd_context function to extract the job description text from the PDF
  • Implemented evaluate_resume_with_jd as the core 1-to-1 scoring function
  • Created a dir_main function to orchestrate the batch processing and ranking workflow over a resume directory
  • Added print_evaluation_results_with_jd for clean, user-facing output formatting, mirroring the base evaluation output
  • Added sanity checks for all CLI argument patterns
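The batch workflow described above (score every resume, rank, apply the cutoff) can be sketched as below. `score_fn` stands in for the per-resume JD evaluation, and all names are illustrative rather than the actual `dir_main` signature:

```python
from pathlib import Path

def rank_and_shortlist(scored, cutoff):
    """Rank (name, score) pairs descending and keep those at or above cutoff."""
    ranked = sorted(scored, key=lambda r: r[1], reverse=True)
    shortlist = [r for r in ranked if r[1] >= cutoff]
    return ranked, shortlist

def dir_main(resume_dir, jd_text, cutoff, score_fn):
    """Score every PDF in resume_dir against the JD, then rank and shortlist."""
    scored = [
        (pdf.name, score_fn(pdf, jd_text))
        for pdf in sorted(Path(resume_dir).glob("*.pdf"))
    ]
    return rank_and_shortlist(scored, cutoff)
```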

Output (1-to-1): [screenshot]

Output (N-to-1, snippet): [screenshot]

Status:
The complete implementation for this feature is finished on my fork. The code is modular, follows the project's style (black), and is ready for review. I am opening this issue to discuss the feature and ensure it aligns with the project's roadmap. If this direction is welcome, I am happy to add the required smoke tests and open a formal Pull Request immediately.
