
User Intent Classifier

Ultra-simple intent classification using the GPT-3.5-turbo API. Classifies job-related messages into one of three intents: JOB_POST, INTERVIEW, or CANDIDATE_SEARCH.

🚀 95%+ Accuracy | <1ms Cached | 300ms Debounced | Simple API

Features

✅ 3 Intent Types

  • JOB_POST: Hiring, recruiting, posting job openings
  • INTERVIEW: Scheduling, conducting, assessing candidates
  • CANDIDATE_SEARCH: Finding, browsing, querying candidate databases

✅ Powered by GPT-3.5-turbo

  • 95%+ accuracy
  • Handles new patterns automatically
  • Zero maintenance required
  • GPT extracts ALL fields (no regex/rules)

✅ Performance Optimizations (see the sketch after this list)

  • In-memory caching: <1ms for repeated queries
  • Auto-debouncing: 300ms delay reduces API calls by 70%+
  • Smart concurrency: Max 3 concurrent requests
  • Cache stats: Monitor hit rates and performance
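
As a quick illustration of the caching behavior, here is a sketch that uses only the classify call and ClassificationResult fields documented later in this README; the timings are illustrative, not measured:

import 'package:user_intent_classifier/user_intent_classifier.dart';

void main() async {
  final classifier = IntentClassifier();
  const message = 'Hire a senior backend engineer in Berlin';

  // First call goes out to the GPT-3.5-turbo API.
  final first = await classifier.classify(message);
  print('First call:  ${first.responseTimeMs}ms (tier: ${first.tier})');

  // Identical message again: expected to be served from the in-memory cache.
  final second = await classifier.classify(message);
  print('Second call: ${second.responseTimeMs}ms');
}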

✅ Automatic Field Extraction (see the sketch after this list)

  • title: Job position/title
  • skills: Technical and soft skills
  • salary: Compensation information
  • location: Work location (city, state)
  • workplace_type: Remote, Hybrid, or Onsite
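
Extracted fields come back as a Map<String, dynamic>, and only the keys GPT actually found are present, so reading them defensively looks roughly like this (a sketch; the message and values are made up):

import 'package:user_intent_classifier/user_intent_classifier.dart';

void main() async {
  final classifier = IntentClassifier();
  final result = await classifier.classify(
    'Hiring a remote Data Engineer, \$140k, Python and Spark required',
  );

  // Each field may be absent, so read with null-aware casts and defaults.
  final title = result.fields['title'] as String?;
  final skills = (result.fields['skills'] as List?)?.cast<String>() ?? <String>[];
  final workplaceType = result.fields['workplace_type'] as String?;

  print('Title: ${title ?? "unknown"}');
  print('Skills: ${skills.join(", ")}');
  print('Workplace: ${workplaceType ?? "unspecified"}');
}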

Installation

dependencies:
  user_intent_classifier: ^2.0.0

Then run:

dart pub get

Configuration

1. Add Your OpenAI API Key

Before using the classifier, add your OpenAI API key to the configuration file:

File: lib/config/api_keys.dart

const String openaiApiKey = 'sk-proj-your-actual-api-key-here';

Get your API key from the OpenAI Platform (https://platform.openai.com).

Important: Never commit your real API key. The file is tracked with a placeholder value.
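
If you want to keep the real key out of version control entirely, one option (a sketch, not part of this package's documented setup) is to have lib/config/api_keys.dart read a compile-time define instead of a hard-coded literal:

// lib/config/api_keys.dart
// Pass the key at run time, e.g.:
//   dart run --define=OPENAI_API_KEY=sk-proj-...
const String openaiApiKey = String.fromEnvironment('OPENAI_API_KEY');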

2. Quick Start

import 'package:user_intent_classifier/user_intent_classifier.dart';

void main() async {
  // Initialize classifier (uses API key from config)
  final classifier = IntentClassifier();

  // Classify a message
  final result = await classifier.classify(
    'Hire Software Engineer in NYC with 5 years experience'
  );

  print('Intent: ${result.intent?.value}'); // JOB_POST
  print('Confidence: ${result.confidence}'); // 0.95
  print('Fields: ${result.fields}');
  // {title: Software Engineer, location: NYC, experience: 5 years}
}

API Reference

IntentClassifier

IntentClassifier()

Note: The classifier uses the OpenAI API key from lib/config/api_keys.dart.

Methods

classify(String message)

Classifies a single message.

Future<ClassificationResult> classify(String message)

classifyBatch(List<String> messages)

Batch classification for multiple messages.

Future<List<ClassificationResult>> classifyBatch(List<String> messages)
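
A short usage sketch for classifyBatch (assuming, as is typical, that results come back in the same order as the input messages):

import 'package:user_intent_classifier/user_intent_classifier.dart';

void main() async {
  final classifier = IntentClassifier();
  final results = await classifier.classifyBatch([
    'Post a job for a React Native developer in Austin',
    'Set up a technical interview with Sara on Friday',
    'Show me backend candidates who know Kubernetes',
  ]);

  for (final result in results) {
    print('${result.intent?.value}  confidence: ${result.confidence}');
  }
}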

ClassificationResult

class ClassificationResult {
  final Intent? intent;           // JOB_POST, INTERVIEW, CANDIDATE_SEARCH, or null
  final Map<String, dynamic> fields;  // Extracted fields
  final double confidence;        // 0.0 to 1.0
  final String tier;              // 'gpt' or 'failed'
  final int responseTimeMs;       // Processing time
}

Intent Types

enum Intent {
  jobPost,          // JOB_POST - hiring, recruiting
  interview,        // INTERVIEW - scheduling interviews
  candidateSearch,  // CANDIDATE_SEARCH - finding candidates
}
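
Because intent is nullable, code that branches on it should handle null alongside the three enum values; a minimal sketch:

import 'package:user_intent_classifier/user_intent_classifier.dart';

void handleResult(ClassificationResult result) {
  switch (result.intent) {
    case Intent.jobPost:
      print('Open a job-posting flow with ${result.fields}');
      break;
    case Intent.interview:
      print('Open the interview scheduler with ${result.fields}');
      break;
    case Intent.candidateSearch:
      print('Run a candidate search with ${result.fields}');
      break;
    case null:
      print('Classification failed (tier: ${result.tier})');
      break;
  }
}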

Examples

Job Posting Classification

final classifier = IntentClassifier();
final result = await classifier.classify(
  'Looking for Flutter developer in New York, salary \$120k, remote work'
);

print(result.intent);  // Intent.jobPost
print(result.fields);
// {
//   title: Flutter Developer,
//   location: New York,
//   salary: \$120k,
//   workplace_type: Remote,
//   skills: [Flutter]
// }

Interview Scheduling

final classifier = IntentClassifier();
final result = await classifier.classify(
  'Schedule an interview with John tomorrow at 3 PM'
);

print(result.intent);  // Intent.interview
print(result.confidence);  // 0.92

Candidate Search

final classifier = IntentClassifier();
final result = await classifier.classify(
  'Find senior Python developers with AWS experience'
);

print(result.intent);  // Intent.candidateSearch
print(result.fields);
// {title: Senior Python Developer, skills: [Python, AWS]}

Performance

Metric                       Value
Model                        GPT-3.5-turbo
Accuracy                     95%+
Response Time (Cache Hit)    <1ms ⚡
Response Time (First Call)   700-1500ms
Response Time (Debounced)    300ms + API time
Timeout                      3000ms (3 seconds)
Cost per Request             ~$0.00008
Monthly Cost (10k req/day)   ~$24

Migration from v1.x

Before (v1.x - Rules-based)

final classifier = IntentClassifier(); // Offline, free
final result = await classifier.classify(text);

After (v2.0 - GPT-based)

// Add your API key to lib/config/api_keys.dart first
final classifier = IntentClassifier();
final result = await classifier.classify(text);

Breaking Changes

  • OpenAI API key now required (configure in lib/config/api_keys.dart)
  • No offline mode (requires internet)
  • Response time 700-1500ms for uncached calls (was <35ms)
  • Small cost per request (was free)

Benefits

  • Much higher accuracy (95%+ vs 83.6%)
  • Zero maintenance
  • Handles new patterns automatically
  • Simpler codebase

Error Handling

If the API fails or times out (3-second timeout; see the Performance table), the classifier returns a null intent:

final classifier = IntentClassifier();
final result = await classifier.classify('test message');

if (result.intent == null) {
  print('Classification failed or timed out');
  print('Tier: ${result.tier}'); // 'failed'
}
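
If you want one automatic retry before surfacing the failure, a small wrapper works (this is a sketch, not a package feature):

import 'package:user_intent_classifier/user_intent_classifier.dart';

Future<ClassificationResult> classifyWithRetry(
  IntentClassifier classifier,
  String message,
) async {
  final first = await classifier.classify(message);
  if (first.intent != null) return first;

  // Back off briefly, then retry once before giving up.
  await Future.delayed(const Duration(milliseconds: 500));
  return classifier.classify(message);
}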

Cost Optimization

Automatic optimizations (built-in):

  • ✅ In-memory caching - 0 API calls for repeated queries (100-entry cache)
  • ✅ Auto-debouncing (300ms) - reduces API calls by 70%+ for rapid typing
  • ✅ Smart concurrency - max 3 concurrent requests, auto-cancels the oldest

Real-world example (user typing "software engineer"):

User types: "s" → "so" → "sof" → "soft" → "software" → "software engineer"
Without optimizations: 6 API calls = $0.00048
With optimizations: 1 API call = $0.00008 (83% cost reduction!)

Tips to reduce costs further (see the sketch after this list):

  • Debouncing is enabled by default (use useDebounce: false to disable)
  • Cache automatically stores last 100 queries
  • Use clearCache() if needed
  • Monitor cache stats with getCacheStats()
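
Putting these knobs together (a sketch: useDebounce, clearCache(), and getCacheStats() are named above, but their exact signatures and the shape of the stats value aren't shown in this README, so treat them here as assumptions):

import 'package:user_intent_classifier/user_intent_classifier.dart';

void main() async {
  final classifier = IntentClassifier();

  // Assumed: debouncing can be bypassed per call for latency-sensitive paths.
  final result = await classifier.classify(
    'Find mobile developers with Kotlin experience',
    useDebounce: false,
  );
  print(result.intent);

  // Assumed: stats expose hit/miss counts and cache size for monitoring.
  print(classifier.getCacheStats());

  // Drop all cached classifications, e.g. after changing prompts or models.
  classifier.clearCache();
}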

Estimated costs (at ~$0.00008 per request):

  • 1,000 requests: ~$0.08
  • 10,000 requests: ~$0.80
  • 100,000 requests: ~$8.00

Note: Smart concurrency caps in-flight requests at 3 and cancels the oldest pending request, so bursts of rapid requests stay stable and you don't pay for stale queries.

License

MIT License - see LICENSE file

Support

For issues and questions, please open an issue in the repository.
