Wip/ls/threading core with ai #305

Open: LSchaffner wants to merge 30 commits into master from wip/ls/threading_core_with_ai

LSchaffner (Contributor) commented Mar 10, 2026
Pull Request: Threading System & AI Request Integration

Threading

A dedicated unit now handles the distribution of parallel resources.

New Components

ThreadGroup – An arbitrarily extensible enum for grouping tasks. Groups can be selectively terminated via stop_group.
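The group-termination idea can be illustrated with a small self-contained sketch. The enum members and the handler internals below are hypothetical stand-ins, assuming only what is described above: each running task holds a `threading.Event`, and `stop_group` sets the events of every task in the given group.

```python
import threading
from enum import Enum, auto


# Hypothetical mirror of the ThreadGroup enum; the actual members
# are project-specific.
class ThreadGroup(Enum):
    VARIABLE_HIGHLIGHTING = auto()
    AI_REQUEST = auto()


# Sketch of selective termination: stop_group sets the stop Event of
# every registered task belonging to the given group.
class MiniThreadHandler:
    def __init__(self):
        self._stop_events = {}  # ThreadGroup -> list[threading.Event]

    def register(self, group, event):
        self._stop_events.setdefault(group, []).append(event)

    def stop_group(self, group):
        for event in self._stop_events.get(group, []):
            event.set()
```

Since every task function receives its stop `Event` as the last parameter, setting the event is enough for cooperative shutdown of the whole group.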

SchedulingClass – Defines a task's priority and the minimum ratio of free resources required before the task is executed. This prevents outer tasks from clogging up resources when tasks spawn new tasks themselves.
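A minimal sketch of what such a class could look like, assuming the description above (priority plus a minimum free-resource ratio); member names other than `SYSTEM_CALL` and all numeric values are illustrative, not taken from the repository:

```python
from enum import Enum


# Hypothetical sketch: each member carries a priority (lower = more
# urgent) and the minimum fraction of worker threads that must be idle
# before a task of this class may start.
class SchedulingClass(Enum):
    SYSTEM_CALL = (0, 0.0)  # highest priority, no free-ratio requirement
    BACKGROUND = (1, 0.5)   # illustrative: runs only while >= 50% idle

    def __init__(self, priority, min_free_ratio):
        self.priority = priority
        self.min_free_ratio = min_free_ratio


def may_start(scheduling_class, free_threads, max_threads):
    """A task may start once enough of the thread pool is idle."""
    return free_threads / max_threads >= scheduling_class.min_free_ratio
```

The free-ratio gate is what keeps outer tasks from exhausting the pool: a nested task with a high `min_free_ratio` simply waits until enough workers are idle.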

ThreadTask – The actual task submitted to the scheduler. The following parameters can be specified:

| Parameter | Description |
| --- | --- |
| `func` | The function to execute; must accept a `threading.Event` as its last parameter (used to signal stopping) |
| `scheduling_class` | Instance of `SchedulingClass` |
| `task_group` | `ThreadGroup` assignment |
| `semaphore` | Optional; set when a task needs a specific maximum-concurrency limit (e.g. AI requests to a single provider) |
| `callback` | Optional callback function, called with the function's result |
| `args` | Positional arguments for the function |
| `kwargs` | Keyword arguments for the function |

Example:

current_app.thread_handler.submit(
    ThreadTask(
        generate_all_highlighted_desc,
        SchedulingClass.SYSTEM_CALL,
        ThreadGroup.VARIABLE_HIGHLIGHTING,
        None,  # semaphore: no concurrency limit needed here
        None,  # callback
        (variable_list, None),
        {},
    )
)

TaskResult – Returned by submit(); tracks task completion and results through .done() and .result(), in the style of concurrent.futures.Future.
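A self-contained sketch of the assumed `TaskResult` shape (the real class is returned by `thread_handler.submit()`; the internals below are guesses built on a `threading.Event`):

```python
import threading


# Hypothetical Future-like TaskResult: done() polls, result() blocks.
class TaskResult:
    def __init__(self):
        self._done = threading.Event()
        self._result = None

    def _set_result(self, value):
        # Called by the scheduler's worker when the task finishes.
        self._result = value
        self._done.set()

    def done(self):
        return self._done.is_set()

    def result(self, timeout=None):
        """Block until the task finishes, then return its result."""
        if not self._done.wait(timeout):
            raise TimeoutError("task did not finish in time")
        return self._result
```

Callers can either poll `done()` from a loop or block on `result()` when the value is needed immediately.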

Scheduler

current_app.thread_handler runs a persistent background thread that handles scheduling. Tasks are started based on priority, free ratio, and other conditions, for example whether the task's semaphore is currently available.
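One pass of such a scheduling loop might look like the sketch below. This is an assumed reconstruction from the description above, not the repository's implementation; the `Sched` and `Task` dataclasses are stand-ins for `SchedulingClass` and `ThreadTask`.

```python
import threading
from dataclasses import dataclass
from typing import Optional


# Stand-ins for the real SchedulingClass / ThreadTask (assumed shapes).
@dataclass
class Sched:
    priority: int          # lower number = higher priority
    min_free_ratio: float  # minimum fraction of idle worker threads


@dataclass
class Task:
    scheduling_class: Sched
    semaphore: Optional[threading.Semaphore] = None


def pick_next_task(queue, free_threads, max_threads):
    """One scheduling pass: return the best runnable task, or None."""
    for task in sorted(queue, key=lambda t: t.scheduling_class.priority):
        # Free-ratio gate: keep headroom so nested tasks can still run.
        if free_threads / max_threads < task.scheduling_class.min_free_ratio:
            continue
        # Per-task semaphore caps concurrency (e.g. one AI provider).
        if task.semaphore is not None and not task.semaphore.acquire(blocking=False):
            continue
        queue.remove(task)
        return task
    return None
```

The non-blocking `acquire` matters: a task whose semaphore has no free slot is skipped this round rather than stalling the scheduler thread.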

Configuration

MAX_THREADS = 10

AI Requests

AI requests to various models and providers can now be made straightforwardly:

current_app.ai_request.ask_ai(
    "Hello", 
    return_methode,
    SchedulingClass.SYSTEM_CALL, 
    "ollama", 
    "llama3.1:8b", 
    "ollama_standard_api", 
    {"other_parameter_1": ..., "other_parameter_2": ..., }
)
| Parameter | Description |
| --- | --- |
| `query` | The prompt / request |
| `callback` | Function called with the response from the AI |
| `scheduling_class` | Optional (default: `SYSTEM_CALL`); passed to the threading handler |
| `provider` | Optional provider name; if `None`, the provider set in the config is used |
| `model_name` | Optional model name; if `None`, the model set in the config is used |
| `api_method_name` | Optional API method name; if `None`, the first method is used |
| `extra_params` | Additional model-specific parameters |

Adding a new model / provider / method

The model and provider must be entered in the “ai_config” file, using the same structure as the existing entries.
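The actual schema lives in the “ai_config” file itself; purely as a hypothetical illustration (every key name below is a guess, only the provider/model/method strings come from the example above), an entry could be shaped like:

```json
{
  "providers": {
    "ollama": {
      "api_methods": ["ollama_standard_api"],
      "models": ["llama3.1:8b"]
    }
  },
  "default_provider": "ollama",
  "default_model": "llama3.1:8b"
}
```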

To support a new model, a new api_request_method can be implemented for the respective API interface. The extra_params can be used flexibly as needed. The ai_api_methods_abstract_class must be fully implemented.
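The real `ai_api_methods_abstract_class` interface is defined in the repository and is not shown here; the pattern it describes can be sketched as follows, with a hypothetical method signature and a dummy concrete class (a real one would call the provider's HTTP API):

```python
from abc import ABC, abstractmethod


# Hypothetical stand-in for ai_api_methods_abstract_class; the method
# name and signature are illustrative, not the repository's.
class AiApiMethod(ABC):
    @abstractmethod
    def send_request(self, query: str, model_name: str, extra_params: dict) -> str:
        """Send the prompt to the provider and return the response text."""


class OllamaStandardApi(AiApiMethod):
    def send_request(self, query, model_name, extra_params):
        # Dummy implementation: echoes the inputs instead of calling
        # the provider, so the class structure can be shown end to end.
        options = extra_params or {}
        return f"[{model_name}] {query} (options={options})"
```

Because the base class is abstract, forgetting to implement a required method fails loudly at instantiation time, which is what "must be fully implemented" buys you.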
