## Problem

Provider calls in `runQuery` (`main.go`) use a background context with no deadline. A stuck upstream — a hung `ollama serve`, a slow/lost TCP connection to `api.anthropic.com` or `api.openai.com`, an LM Studio server mid-reload — will freeze the CLI until the user sends Ctrl-C. There's no user-visible progress, no error, no recovery.

This was surfaced by Copilot on PR #72: `TestProviderRespectsContextCancellation` verifies the Provider contract (implementations honor `ctx.Done()`), but nothing in the app actually imposes a deadline on live provider calls.
## Proposal

Wrap the provider call in `runQuery` with `context.WithTimeout`:

```go
queryCtx, cancel := context.WithTimeout(ctx, config.RequestTimeout)
defer cancel()
resp, err := provider.Query(queryCtx, systemPrompt, userQuery)
```
Design notes:

- Default: something generous but finite, e.g. 60s. Streaming responses from Anthropic/OpenAI for short prompts typically complete in a few seconds; local Ollama/LM Studio responses vary with model size.
- Configurable via:
  - Env var: `HOWTFDOI_REQUEST_TIMEOUT` (accepts Go duration strings like `30s`, `2m`)
  - Config file: `request_timeout: 60s`
- `0` or unset = use the default. A negative value could mean "no timeout" for users who genuinely want unbounded calls on slow local models.
- On deadline: return a friendly error (`Error: request timed out after 60s. If you're on a slow local model, set HOWTFDOI_REQUEST_TIMEOUT to a larger value.`) rather than bubbling up the raw `context.DeadlineExceeded`.
## Test plan

- Unit test in `main_test.go` that wires a `blockingMockProvider` through `runQuery` with a tiny timeout and asserts the friendly error message.
- Manual: `HOWTFDOI_REQUEST_TIMEOUT=100ms HOWTFDOI_AI_PROVIDER=ollama LMSTUDIO_BASE_URL=http://localhost:1/v1 ./howtfdoi list files` should return the timeout error promptly instead of hanging.
## Out of scope

- Per-provider timeouts (one value covers all providers; revisit if local models need materially different defaults).
- Retry logic / circuit breaker.