---
title: "Dev Proxy v2.4.0 with accurate AI cost tracking and AI coding agent skill"
description: "Dev Proxy v2.4.0 brings accurate cached token pricing, model tagging for OpenTelemetry, upgrade-safe configuration downloads, improved LLM failure simulation, and a new skill for AI coding agents."
date: 2026-05-05
author: "Waldek Mastykarz, Garry Trinder"
tags: ["release"]
image: "/blog/images/v2-4-0.png"
---

We're excited to announce the release of **Dev Proxy v2.4.0!** This release focuses on making your AI cost tracking more accurate, improving LLM failure simulation, and introducing a brand new way for AI coding agents to work with Dev Proxy.

### **In this version:**

- Accurate cached token pricing in cost calculations
- OpenTelemetry model tagging for requests and responses
- Upgrade-safe configuration downloads
- Improved LLM failure simulation
- New Dev Proxy skill for AI coding agents

### **Accurate cached token pricing**

When your AI-powered app uses cached prompts, OpenAI charges significantly less for those tokens. Until now, the **OpenAITelemetryPlugin** treated all input tokens the same when calculating costs - meaning your reports overstated what you were actually spending.

Dev Proxy v2.4.0 adds full support for cached token pricing. The plugin now distinguishes between regular and cached input tokens, applies their distinct pricing rates, and reports costs accurately in telemetry and exported reports.

**Why this matters:** If your app benefits from prompt caching, you can now see exactly how much you're saving. No more guessing - just accurate, actionable cost data that helps you optimize your AI spending.
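To see why this distinction matters, here's a minimal sketch of a cached-token-aware cost calculation. The per-token rates are purely illustrative (not Dev Proxy's actual pricing table), but the usage shape follows OpenAI's API, where `prompt_tokens` includes the tokens reported under `prompt_tokens_details.cached_tokens`:

```python
def request_cost(usage: dict, input_rate: float, cached_rate: float,
                 output_rate: float) -> float:
    """Price a request, billing cached input tokens at their lower rate."""
    cached = usage.get("prompt_tokens_details", {}).get("cached_tokens", 0)
    # prompt_tokens includes cached tokens, so subtract to get the full-price ones
    regular = usage["prompt_tokens"] - cached
    return (regular * input_rate
            + cached * cached_rate
            + usage["completion_tokens"] * output_rate)

# Example: 1000 prompt tokens, 600 of them served from the prompt cache.
usage = {"prompt_tokens": 1000,
         "prompt_tokens_details": {"cached_tokens": 600},
         "completion_tokens": 200}
# Illustrative rates per token; cached input here at half the input rate.
cost = request_cost(usage, input_rate=2.5e-6, cached_rate=1.25e-6,
                    output_rate=10e-6)
```

With these rates, pricing all 1000 input tokens at the full rate would overstate the input cost by 30% - exactly the kind of inflation the plugin now avoids.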

### **OpenTelemetry model tagging**

The **OpenAITelemetryPlugin** now tags spans with OpenTelemetry GenAI semantic conventions for the model used in each request and response:

- `gen_ai.request.model` - the model specified in the request
- `gen_ai.response.model` - the model returned in the response

This improves trace and metric correlation in your observability stack, making it easier to filter, group, and analyze telemetry data by model.
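As a rough sketch of what this enables downstream, here's how a trace consumer might group spans by the new attributes. The spans are modeled as plain attribute dicts and the model names are illustrative; only the two attribute keys come from the GenAI semantic conventions mentioned above:

```python
from collections import Counter

# Illustrative span attributes using the GenAI semantic convention keys.
# Note the request can name an alias while the response reports the
# concrete model snapshot that actually served it.
spans = [
    {"gen_ai.request.model": "gpt-4o-mini",
     "gen_ai.response.model": "gpt-4o-mini-2024-07-18"},
    {"gen_ai.request.model": "gpt-4o-mini",
     "gen_ai.response.model": "gpt-4o-mini-2024-07-18"},
    {"gen_ai.request.model": "gpt-4o",
     "gen_ai.response.model": "gpt-4o-2024-08-06"},
]

# Group spans by the model that actually produced the response.
by_model = Counter(s["gen_ai.response.model"] for s in spans)
```

The same attribute keys work as filter or group-by dimensions in whatever backend your OpenTelemetry exporter targets.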

### **Upgrade-safe configuration downloads**

Previously, `config get` downloaded configurations to a `config` subfolder inside the Dev Proxy installation folder. Every time you upgraded Dev Proxy, those downloaded configs were wiped out because installers overwrite the entire installation directory.

Starting with v2.4.0, `config get` downloads configurations to the user data folder instead:

| Platform | Location |
|----------|----------|
| macOS | `~/Library/Application Support/dev-proxy/configs/` |
| Linux | `~/.config/dev-proxy/configs/` |
| Windows | `%LocalAppData%\dev-proxy\configs\` |

Your downloaded configurations now survive upgrades, and there's a clear separation between built-in presets and your custom configs.
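If your tooling needs to locate downloaded configs programmatically, the table above translates into a small platform check. This is a sketch based solely on the paths listed here, not an official Dev Proxy API:

```python
import os
import sys
from pathlib import Path

def configs_dir() -> Path:
    """Resolve the per-user Dev Proxy configs folder (paths per the table above)."""
    if sys.platform == "darwin":      # macOS
        return Path.home() / "Library" / "Application Support" / "dev-proxy" / "configs"
    if sys.platform == "win32":       # Windows: %LocalAppData%\dev-proxy\configs\
        return Path(os.environ["LOCALAPPDATA"]) / "dev-proxy" / "configs"
    # Linux and other POSIX platforms
    return Path.home() / ".config" / "dev-proxy" / "configs"
```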

### **Improved LLM failure simulation**

We've fixed two issues in the **LanguageModelFailurePlugin** that affected how it simulates failures in OpenAI-style APIs:

- **Body encoding** - the plugin now correctly decodes request body bytes for proper OpenAI request detection and parsing
- **Prompt role** - injected failure prompts now use the `"system"` role instead of `"user"` for chat completions and Responses API requests, better matching how real system-level failures would appear

These fixes make the failure simulation more realistic, helping you build more resilient AI-powered applications.
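The two fixes can be illustrated together in a short sketch: decode the raw body bytes before parsing, then prepend the failure prompt as a `"system"` message. This is a hypothetical illustration of the behavior, not the plugin's actual implementation:

```python
import json

def inject_failure_prompt(body_bytes: bytes, failure_prompt: str) -> dict:
    """Decode a chat-completion request body and prepend a failure prompt.

    Decoding bytes before json.loads mirrors the body-encoding fix; using
    the "system" role mirrors the prompt-role fix in v2.4.0.
    """
    request = json.loads(body_bytes.decode("utf-8"))
    request["messages"].insert(0, {"role": "system", "content": failure_prompt})
    return request

# A minimal OpenAI-style chat completion request, as raw bytes on the wire.
body = json.dumps({
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Summarize this document."}],
}).encode("utf-8")

patched = inject_failure_prompt(body, "Respond as if the model is overloaded.")
```

Injecting the prompt with the `"system"` role means the simulated failure reaches the model the same way real operator-level instructions do, instead of looking like user input.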

### **Dev Proxy skill for AI coding agents**

We've added a new Dev Proxy skill - a structured knowledge package that teaches AI coding agents (like GitHub Copilot, Claude, Cursor, and others) how to use Dev Proxy effectively.

The skill covers scenario-based workflows including mocking API responses, testing API resilience, testing LLM integrations, analyzing API usage, and setting up CI/CD pipelines. Instead of figuring out Dev Proxy configuration from scratch, your AI coding agent can now follow proven patterns and best practices.

You'll find the skill in the [`skills/dev-proxy/`](https://github.com/dotnet/dev-proxy/tree/main/skills/dev-proxy) folder in the Dev Proxy repository.

## Dev Proxy Toolkit

[Dev Proxy Toolkit](https://marketplace.visualstudio.com/items?itemName=garrytrinder.dev-proxy-toolkit) is an extension that makes it easier to work with Dev Proxy from within Visual Studio Code. Alongside the new release of Dev Proxy, we've also released a new version of the toolkit, v1.26.0.

In this version, we've:

- Updated all JSON snippets to use v2.4.0 schemas

Check out the [changelog](https://marketplace.visualstudio.com/items/garrytrinder.dev-proxy-toolkit/changelog) for more information on changes and bug fixes.

### **Why upgrade to v2.4.0?**

- **Accurate cost insights** - cached token pricing gives you real numbers, not inflated estimates
- **Better observability** - model tagging improves filtering and analysis in your telemetry stack
- **Upgrade-safe configs** - downloaded configurations survive Dev Proxy upgrades
- **Realistic failure simulation** - improved body encoding and prompt roles for LLM failure testing
- **AI-assisted workflows** - teach your coding agent how to use Dev Proxy with the new skill

### **Try it now**

Download **Dev Proxy v2.4.0** today and build better API-connected applications with confidence!

Got feedback or ideas? [Join us](https://github.com/dotnet/dev-proxy/discussions) and be part of the conversation.
