Before applying, the system must answer: "Will this resume pass the ATS filter for this specific job?" The ATS Analysis Engine scores, diagnoses, and suggests fixes.
Modern Applicant Tracking Systems (Workday, Greenhouse, Lever, iCIMS, Taleo) do the following:
- Parse the resume into structured fields (name, skills, experience)
- Extract keywords from the resume text
- Compare extracted keywords against the job requisition
- Rank candidates by match percentage
- Filter — resumes below a threshold (typically 60–75%) are never seen by a human
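The rank-and-filter step above could be sketched as follows. This is a simplified illustration; real ATS scoring is vendor-specific and proprietary, and `filter_candidates` is a hypothetical name:

```python
def filter_candidates(parsed_resumes, jd_keywords, threshold=0.65):
    """Rank parsed resumes by keyword-match percentage; drop those below threshold."""
    ranked = []
    for resume in parsed_resumes:
        matched = jd_keywords & resume["keywords"]
        score = len(matched) / len(jd_keywords) if jd_keywords else 0.0
        ranked.append((score, resume["name"]))
    ranked.sort(reverse=True)  # highest match first
    # Candidates below the threshold never reach a human reviewer
    return [entry for entry in ranked if entry[0] >= threshold]

pool = [
    {"name": "Alice", "keywords": {"python", "kubernetes", "terraform"}},
    {"name": "Bob", "keywords": {"python", "excel"}},
]
# Alice matches 3 of 4 JD keywords (0.75); Bob matches 1 of 4 (0.25)
shortlist = filter_candidates(pool, {"python", "kubernetes", "terraform", "ci/cd"})
```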
| Issue | Why It Fails |
|---|---|
| Tables / columns | Parser reads cells in wrong order |
| Headers in images | Text inside images is invisible to ATS |
| Fancy fonts | Some fonts don't map to standard Unicode |
| Missing section headers | ATS can't find "Experience" or "Education" |
| Acronyms only | ATS searches for "Machine Learning", not just "ML" |
| File format | Some ATS choke on .docx formatting; plain text or clean PDF is safer |
The engine takes two inputs:

- `candidate_resume_text` — the generated resume text
- `job_description_text` — the full JD text
| Component | Weight | Method |
|---|---|---|
| Keyword Match | 40% | TF-IDF or embedding similarity of skills/tools |
| Section Completeness | 15% | Check presence of: Summary, Experience, Education, Skills |
| Keyword Density | 15% | Ratio of JD keywords found in resume vs. total JD keywords |
| Experience Relevance | 15% | Semantic similarity of bullet points to JD responsibilities |
| Formatting Score | 15% | No tables, standard headers, parseable structure |
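Assuming the components are combined as a plain weighted sum (the engine's exact aggregation may differ), the overall score could be computed like this:

```python
# Component weights from the table above (must sum to 1.0)
WEIGHTS = {
    "keyword_match": 0.40,
    "section_completeness": 0.15,
    "keyword_density": 0.15,
    "experience_relevance": 0.15,
    "formatting": 0.15,
}

def overall_score(breakdown):
    """Weighted sum of per-component scores, each on a 0-100 scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[name] * score for name, score in breakdown.items())

score = overall_score({
    "keyword_match": 80,
    "section_completeness": 100,
    "keyword_density": 70,
    "experience_relevance": 60,
    "formatting": 90,
})
```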
# Pseudocode

```python
def calculate_keyword_score(resume_text, jd_text):
    jd_keywords = extract_keywords(jd_text)  # spaCy NER + noun chunks
    resume_keywords = extract_keywords(resume_text)

    # Exact matches
    exact_matches = jd_keywords & resume_keywords

    # Fuzzy matches (e.g., "Kubernetes" ≈ "K8s")
    fuzzy_matches = fuzzy_match(jd_keywords - exact_matches, resume_keywords)

    # Semantic matches (embeddings)
    semantic_matches = embedding_similarity(
        jd_keywords - exact_matches - fuzzy_matches,
        resume_keywords,
        threshold=0.85,
    )

    total_matched = len(exact_matches) + len(fuzzy_matches) + len(semantic_matches)
    return (total_matched / len(jd_keywords)) * 100
```

Example output:

```json
{
  "overall_score": 78,
  "breakdown": {
    "keyword_match": 82,
    "section_completeness": 100,
    "keyword_density": 71,
    "experience_relevance": 65,
    "formatting": 95
  },
  "missing_keywords": [
    {"keyword": "Terraform", "category": "DevOps Tool", "importance": "high"},
    {"keyword": "CI/CD", "category": "Practice", "importance": "medium"}
  ],
  "suggestions": [
    "Add 'Terraform' to skills section — mentioned 3 times in JD",
    "Include 'CI/CD' in at least one experience bullet",
    "Consider expanding acronym 'ML' to 'Machine Learning'"
  ],
  "formatting_issues": []
}
```

When the score is below the threshold (configurable, default 70%), the system:
- **Identifies gaps** — keywords in the JD but not in the resume
- **Searches the Candidate Profile** — does the candidate actually have this skill?
  - If yes → suggest adding it. The system can auto-include it in the next generation.
  - If no → flag as a "Skill Gap" (informational only — won't fabricate).
- **Re-orders content** — moves matching bullets higher
- **Rewrites the summary** — the LLM generates a new summary incorporating missing keywords
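The gap-analysis split above can be sketched as a couple of set operations, assuming JD keywords, resume keywords, and profile skills are normalized string sets:

```python
def gap_analysis(jd_keywords, resume_keywords, profile_skills):
    """Split JD keywords missing from the resume into two buckets."""
    missing = jd_keywords - resume_keywords
    # Candidate has the skill: safe to suggest adding / auto-include
    addable = missing & profile_skills
    # Candidate never claimed the skill: report only, never fabricate
    skill_gaps = missing - profile_skills
    return addable, skill_gaps

addable, gaps = gap_analysis(
    jd_keywords={"terraform", "ci/cd", "python"},
    resume_keywords={"python"},
    profile_skills={"python", "terraform"},
)
```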
> **Important:** The system never fabricates experience or skills. It can only use what exists in the Candidate Profile. If the profile doesn't mention Terraform, it won't add Terraform.
```mermaid
flowchart LR
    A[Resume Generator] -->|generated resume text| B[ATS Analyzer]
    C[Job Discovery] -->|job description text| B
    B -->|score + suggestions| D[Decision Engine]
    D -->|score >= threshold| E[Proceed to Apply]
    D -->|score < threshold| F[Re-generate with suggestions]
    F --> A
```
The analyze → generate → re-analyze loop runs for at most two iterations to avoid infinite loops. If the score is still below the threshold after two passes, the job is flagged for manual review.
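The bounded loop could look like the sketch below, where `generate` and `analyze` stand in for calls to the Resume Generator and ATS Analyzer:

```python
MAX_PASSES = 2

def generate_until_passing(generate, analyze, threshold=70):
    """Run the analyze/generate loop, capped at MAX_PASSES iterations."""
    suggestions = []
    report = None
    for _ in range(MAX_PASSES):
        resume = generate(suggestions)
        report = analyze(resume)
        if report["overall_score"] >= threshold:
            return resume, report, False   # passed: proceed to apply
        suggestions = report["suggestions"]
    return None, report, True              # still low: flag for manual review
```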