Version 0.9.9.5 adds support for new OpenAI, Anthropic, Gemini, Copilot and xAI models, the OpenAI Responses API, token usage display and better UI indicators, tool call hooks, inspection and interactive editing, a default preset and more.
Breaking changes
- gptel's default ChatGPT backend has been removed. `gptel-backend` and `gptel-model` now default to `nil`, and there are no registered backends out of the box. However, gptel remains usable without configuration: if `gptel-send` is called without a backend set, the ChatGPT backend is created on the fly and used.
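If you would rather not rely on the on-the-fly fallback, you can register a backend explicitly in your init file. A minimal sketch using `gptel-make-openai` (the model name below is illustrative, not a default):

```elisp
;; Register an OpenAI backend and make it the default.
;; The model name is only an example.
(setq gptel-backend (gptel-make-openai "ChatGPT"
                      :key #'gptel-api-key-from-auth-source
                      :stream t
                      :models '(gpt-5.5))
      gptel-model 'gpt-5.5)
```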
- `gptel-track-media` affects gptel's link handling in all Org and Markdown buffers, not just chat buffers that have `gptel-mode` turned on. When calling `gptel-send` with `gptel-track-media` turned on, and the buffer is in Org or Markdown mode, links to supported file types will be followed by gptel and included with the request. Previously this behavior applied only in dedicated chat buffers. (This is actually how gptel has worked since v0.9.9.3, but the change in behavior was undocumented.)
- `gptel-include-reasoning` now defaults to `ignore`, meaning that reasoning text in LLM responses will be included in buffers but ignored by `gptel-send` on subsequent conversation turns. The reason for this change is that including the reasoning text as the LLM's response on new conversation turns is not recommended by LLM APIs. Reasoning text can also fill up the context window.
- `gptel-make-tool` now sets the tool's `:include` slot by default. This means that unless `:include nil` is explicitly specified, gptel tools will default to including their results in the buffer when using `gptel-send`. This is recommended for coherent multi-turn conversations involving tool use, as the LLM uses tool results from past turns for context. (Tool result inclusion can be controlled globally (or buffer-locally) for all tools via `gptel-include-tool-results`, whose default value has not been altered.)
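To opt a tool out of the new default, pass `:include nil` explicitly when defining it. A sketch with a hypothetical tool:

```elisp
;; Hypothetical tool whose results are NOT inserted into the buffer.
(gptel-make-tool
 :name "current_time"
 :function (lambda () (format-time-string "%F %T"))
 :description "Return the current date and time."
 :args nil
 :include nil)  ; opt out of the new default
```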
- The models `gpt-41-copilot`, `gpt-5` and `claude-opus-41` have been removed from the default list of GitHub Copilot models. These models are no longer available in the GitHub Copilot API.
- The models `gpt-3.5-turbo` and `gpt-3.5-turbo-16k` have been removed from the default list of OpenAI models. These models are either deprecated or no longer available.
- Breaking change to the `gptel-backend` API: When the `header` or `url` fields of a backend are specified as functions, they now accept one argument: the request context plist (`info`). This is only relevant if you have defined custom header or url functions for your gptel backends. (This change is required for backends whose request header or URL should be modified depending on the state of the request.)
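A sketch of a custom `:header` function under the new calling convention. The backend name, host, and token-fetching helper here are hypothetical:

```elisp
;; The :header function now receives the request context plist (INFO).
(gptel-make-openai "my-provider"      ; hypothetical backend
  :host "api.example.com"
  :models '(some-model)
  :header (lambda (info)
            ;; A real function might inspect INFO to decide which
            ;; credentials to send for this particular request.
            `(("Authorization" . ,(format "Bearer %s"
                                          (my-fetch-token info))))))
```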
New models and backends
- Deepseek backend: Add support for `deepseek-v4-flash` and `deepseek-v4-pro`.
- Anthropic backend: Add support for `claude-opus-4.7`.
- xAI backend: Add support for `grok-4-1-fast-reasoning`, `grok-4-1-fast-non-reasoning`, `grok-4-fast-reasoning`, and `grok-4-fast-non-reasoning`.
- GitHub Copilot backend: Add support for `claude-opus-4.7`, `gpt-5.1-codex`, `gpt-5.1-codex-mini`, `claude-sonnet-4.6`, `gemini-3.1-pro-preview`, `gpt-5.3-codex`, `gpt-5.4`, `gpt-5.4-mini`, and `gpt-5.5`.
- Gemini backend: Add support for `gemini-3.1-flash-lite-preview`; add a deprecation notice for `gemini-3-pro-preview`.
- OpenAI backend: Add support for `gpt-5.5`, `gpt-5.5-pro`, `gpt-5.3-chat-latest`, `gpt-5.4`, `gpt-5.4-pro`, `gpt-5.4-mini`, `gpt-5.4-nano`, `gpt-5.2`, `gpt-5-mini`, `gpt-5-nano` and `o3-pro`.
New features and UI changes
- gptel now displays token usage in the header line when using `gptel-mode`. This includes the tokens sent, cached tokens sent (if any) and tokens received. The displayed totals are per-request or per-session/buffer, and you can switch between the two by clicking the display. Hovering the mouse over the display shows both in a tooltip.

  Displaying costs in currency instead of tokens is not yet supported, because many backends do not provide all the required information (tiered pricing schemes, cache creation details, etc.).
- New command `gptel-preset` to choose a gptel preset to apply, or to save gptel's current settings as a preset. `gptel-menu` now uses this command instead of a transient menu for applying presets.
- gptel now ships with a pre-defined preset named `gptel-default`. This preset "resets" most gptel settings to the default values that gptel ships with: no system prompt or tools, no additional context sources, no request temperature or max token limits, and more. It does not change the active `gptel-model` or `gptel-backend`. This has at least two uses:
  - By applying the preset interactively, you can quickly reset most of gptel's configuration back to the defaults (excepting the model and backend, which are `nil` by default).
  - When using `gptel-request` for scripting purposes, you can fully isolate the call from the effects of gptel's environment with `(gptel-with-preset 'gptel-default (gptel-request prompt ...))`.

  Presets can also inherit from `gptel-default` to build a bespoke gptel configuration. (See `gptel-make-preset` for details.)
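A preset that inherits from `gptel-default` might look like this sketch (the preset name, system message, and exact keys are illustrative; see `gptel-make-preset` for the documented interface):

```elisp
;; A bespoke preset that starts from clean defaults and adds to them.
(gptel-make-preset 'coding
  :description "Clean defaults plus a coding system prompt"
  :parents '(gptel-default)
  :system "You are a careful programmer. Reply with code only.")
```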
- The UI indicators in chat buffers that report status changes (such as `Waiting`, `Ready`, etc.) now show the names of tools being called. The mode-line indicator display (see `gptel-use-header-line`) has also been improved.
- `gptel-rewrite` now displays status updates above the region being rewritten, indicating the request state (`Waiting`, `Typing`, etc.) and any tool calls in progress. This provides visual feedback for rewrites in progress, feedback that was previously absent until the response arrived.
- When using `setopt` or the Customize interface, `gptel-backend` can now be specified as a list instead of an opaque object. See its documentation for details.
- When using `gptel-send`, tool calls that require confirmation can now be examined in full in a dedicated inspection buffer, where they are displayed as Elisp forms. The tool name and tool call arguments can also be modified in place. These modifications must be in place: deleting tool calls or adding new ones in the inspection buffer is not supported.
- New hooks `gptel-pre-tool-call-functions` and `gptel-post-tool-call-functions` run before and after each tool call, respectively. These hooks receive details of the (planned or finished) tool call and provide fine-grained control over it. They work with `gptel-send`, including when invoked from gptel's transient menu or from Elisp. `gptel-pre-tool-call-functions` can be used to modify tool call arguments, short-circuit the call and provide the results, block the tool call but continue the request with a message for the LLM, or stop the request entirely. `gptel-post-tool-call-functions` can be used to modify tool call results, block the tool call but continue the request, or stop the request entirely.
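As an illustrative sketch only (the hook's exact argument list is an assumption here; consult the docstring of `gptel-pre-tool-call-functions` for the real calling convention), a function that logs every impending tool call:

```elisp
;; Illustrative: log tool calls before they run.  The `&rest' arglist
;; is used because the precise arguments are documented in the hook's
;; docstring, not assumed here.
(add-hook 'gptel-pre-tool-call-functions
          (lambda (&rest tool-call-details)
            (message "gptel is about to call a tool: %S" tool-call-details)
            nil))
```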
- New variable `gptel-bedrock-aws-cli-command` to set the path to the AWS CLI command for the Bedrock backend. Defaults to "aws".
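For instance, to point gptel at a non-default AWS CLI install (the path below is an example):

```elisp
(setopt gptel-bedrock-aws-cli-command "/opt/homebrew/bin/aws")
```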
- gptel now supports the OpenAI Responses API. Practically speaking, you should notice no change in behavior, except that the newest OpenAI models are now supported and available in OpenAI backends. Thinking/reasoning in responses is also supported for OpenAI models now, but it is not yet configurable via `gptel-include-reasoning`.
Notable bug fixes
- Authentication keys are now supplied to Curl (when `gptel-use-curl` is set) via stdin instead of the command line. This is more secure, as the command line is usually readable system-wide.
- `gptel-backend` can now be set from Customize buffers, such as those produced by `M-x customize-group ⮐ gptel`. Previously `gptel-backend` was displayed in a read-only way, and could even break the display of the Customize buffer depending on its value.
- Breaking change to the `gptel-request` API: Tool call arguments are passed to `gptel-request` callbacks as a plist, not a list. The plist keys are the function argument names as specified in the tool definition. This does not affect `gptel-send` or (to my best knowledge) any of the packages using gptel.

  Example: The previous behavior was

  ```
  (funcall callback `(tool-call ,web-search ("emacs" 10) ,tool-cb))
  ```

  where `web-search` is a `gptel-tool`, and `callback` and `tool-cb` are the `gptel-request` callback and the tool callback, respectively. The new behavior is:

  ```
  (funcall callback `(tool-call ,web-search (:query "emacs" :count 10) ,tool-cb))
  ```

  where `:query` and `:count` correspond to the arguments `query` and `count` in the definition of `web-search`.

  Note that this is a bug fix: it's how the API was documented and supposed to work in the first place.
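In a custom `gptel-request` callback, the arguments can now be read by key with `plist-get`. A sketch for the `web-search` example above (the handler name is invented):

```elisp
;; ARGS is now a plist like (:query "emacs" :count 10).
(defun my-handle-tool-call (tool args)
  (let ((query (plist-get args :query))
        (count (plist-get args :count)))
    (message "Tool %s searching for %S (%s results)"
             (gptel-tool-name tool) query count)))
```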
What's Changed
- xAI: `grok-4{,-1}-fast-{,-non}-reasoning` by @endgame in #1269
- Fix header-line right-alignment for variable-pitch fonts by @kovan in #1268
- gptel-openai-extras: Correct context window limits on xAI models by @endgame in #1270
- gptel-gh: Update models list by @gessen in #1271
- gptel-gemini: add support for `gemini-3.1-flash-lite-preview`, deprecate `gemini-3-pro-preview` by @surenkov in #1278
- Add tool call editing and tool call hooks by @karthink in #1261
- feat: add customizable AWS CLI command path by @liangfu in #1251
- Fix to allow no-argument tools to work with Bedrock by @jhoodsmith in #1265
- gptel-openai: Add gpt-5.3-chat-latest and gpt-5.4 models by @berenddeboer in #1286
- #1289, Groq example config creates `invalid_request_error` by @andreas-roehler in #1290
- #1294, fix: Deprecated model `llama3-70b-8192` removed by @andreas-roehler in #1295
- Fix search-failed error when curl exits without completing transfer by @benthamite in #1293
- gptel-openai: Correct gpt-5.4 pricing and context window by @pabl0 in #1304
- gptel: Ensure safety of file local variables by @pabl0 in #1306
- gptel-org: Fix bug in link validation by @pabl0 in #1319
- gptel-gemini: Fix temperature parameter mapping (#1312) by @Jeremias-A-Queiroz in #1313
- Improve security of authentication with Curl by @pabl0 in #1308
- Give crowdsourced prompts some love by @pabl0 in #1307
- fix(transient): Resolve history symbol for read-number (#1314) by @Jeremias-A-Queiroz in #1315
- Add gpt-5.4-mini and gpt-5.4-nano by @djr7C4 in #1316
- Add Responses API with Copilot support by @karthink in #1292
- Add gpt-5.4-pro and o3-pro models by @djr7C4 in #1323
- update Github copilot endpoints docs by @CsBigDataHub in #1285
- Treat gpt-5.4-pro and o3-pro as reasoning models in gptel--request-data by @djr7C4 in #1327
- Fix nil token fields in `message_delta` crashing streaming tool calls by @parsnips in #1331
- gptel: allow anonymous plist parents in gptel--apply-preset by @kmontag in #1332
- Fix streaming responses by @Azkae in #1339
- Ensure that gptel--ediff-restore runs after ediff-cleanup-mess by @djr7C4 in #1354
- Fix the name of the gptel-preset transient in gptel-rewrite (fix #1350) by @djr7C4 in #1352
- ensure newline between text and tool by @Azkae in #1353
- Fix crash in gptel-mcp--activate-tools when requested servers return no tools by @benthamite in #1321
- gptel: fix gptel--preset-syms dropping parent-contributed symbols by @kmontag in #1355
- gptel-gh: Add claude-opus-4.7 to model list by @Jonghyun-Yun in #1361
- Add Opus 4.7 as an Anthropic model. by @djr7C4 in #1362
- gptel-request: Add -N flag to curl on all platforms to fix regression by @aagit in #1363
- Add gpt-5.5 and gpt-5.5-pro by @djr7C4 in #1376
- gptel-transient: Update the transient menu after applying prefixes by @djr7C4 in #1374
- gptel-preset: allow system directives to be nil by @Stebalien in #1369
- Add new DeepSeek backend configurations by @beacoder in #1377
- Update NEWS with new deepseek backend model support. by @beacoder in #1379
- gptel-openai-extras: update costs for deepseek-chat and deepseek-reasoner by @djr7C4 in #1381
- gh: add gpt-5.5 to Copilot models by @Jonghyun-Yun in #1385
- fix: Ensure utf-8 in tool call args by @FrauH0lle in #1392
New Contributors
- @endgame made their first contribution in #1269
- @kovan made their first contribution in #1268
- @gessen made their first contribution in #1271
- @liangfu made their first contribution in #1251
- @jhoodsmith made their first contribution in #1265
- @berenddeboer made their first contribution in #1286
- @andreas-roehler made their first contribution in #1290
- @Jeremias-A-Queiroz made their first contribution in #1313
- @djr7C4 made their first contribution in #1316
- @parsnips made their first contribution in #1331
- @Azkae made their first contribution in #1339
- @Jonghyun-Yun made their first contribution in #1361
- @beacoder made their first contribution in #1377
Full Changelog: v0.9.9.4...v0.9.9.5