Commit 577c650

Author: Vadim Bakhrenkov

release: v0.2.2

Added:

- Latency Tracking: Track execution duration for each tool call
  - `@stat.track` decorator for automatic latency tracking (recommended)
  - `async with stat.tracking(name, type)` context manager alternative
  - `duration_ms` parameter in `record()` for manual timing
  - New latency columns: `total_duration_ms`, `min_duration_ms`, `max_duration_ms`
  - `latency_summary` in `get_stats()` response with total duration
  - Per-tool `avg_latency_ms`, `min_duration_ms`, `max_duration_ms` metrics

Changed:

- Database schema bumped to v3 with latency tracking columns
- `get_stats()` response now includes `latency_summary` object
- Each stat item now includes latency fields
- API Improvement: `@stat.track` decorator is now the recommended way to track calls
  - Eliminates the "first line" requirement
  - Automatic latency measurement
  - Never fails user code

Migration:

- Automatic database migration from v2 to v3
- Preserves all existing data
- New latency columns default to 0/NULL for existing records
1 parent 0f5ba73 commit 577c650
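The release notes promise that `@stat.track` measures latency automatically and "never fails user code". A minimal stdlib sketch of what such a wrapper pattern looks like, with a hypothetical `calls` list standing in for mcpstat's database (the names here are illustrative, not the library's internals):

```python
import asyncio
import time
from functools import wraps

# Illustrative stand-in for mcpstat's internal recording; not the real API.
calls: list[tuple[str, int]] = []

def track(func):
    """Time the handler and record duration; recording errors never propagate."""
    @wraps(func)
    async def wrapper(name: str, arguments: dict):
        start = time.perf_counter()
        try:
            return await func(name, arguments)  # handler errors still propagate
        finally:
            duration_ms = int((time.perf_counter() - start) * 1000)
            try:
                calls.append((name, duration_ms))
            except Exception:
                pass  # "never fails user code": swallow recording failures
    return wrapper

@track
async def handle_tool(name: str, arguments: dict):
    await asyncio.sleep(0.01)  # simulate tool work
    return {"ok": True}

result = asyncio.run(handle_tool("echo", {}))
```

The `try`/`finally` placement is what removes the old "first line" requirement: the call is recorded whether the handler returns or raises.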

17 files changed (+1431, −138 lines)

CHANGELOG.md

Lines changed: 30 additions & 1 deletion
```diff
@@ -7,6 +7,34 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.2.2] - 2026-02-16
+
+### Added
+
+- **Latency Tracking**: Track execution duration for each tool call
+  - `@stat.track` decorator for automatic latency tracking (recommended)
+  - `async with stat.tracking(name, type)` context manager alternative
+  - `duration_ms` parameter in `record()` for manual timing
+  - New latency columns: `total_duration_ms`, `min_duration_ms`, `max_duration_ms`
+  - `latency_summary` in `get_stats()` response with total duration
+  - Per-tool `avg_latency_ms`, `min_duration_ms`, `max_duration_ms` metrics
+
+### Changed
+
+- Database schema bumped to v3 with latency tracking columns
+- `get_stats()` response now includes `latency_summary` object
+- Each stat item now includes latency fields
+- **API Improvement**: `@stat.track` decorator is now the recommended way to track calls
+  - Eliminates the "first line" requirement
+  - Automatic latency measurement
+  - Never fails user code
+
+### Migration
+
+- Automatic database migration from v2 to v3
+- Preserves all existing data
+- New latency columns default to 0/NULL for existing records
+
 ## [0.2.1] - 2026-02-01
 
 ### Added
@@ -73,7 +101,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - Full type annotations with strict mypy compliance (`py.typed` marker included)
 - Comprehensive test suite
 
-[Unreleased]: https://github.com/tekkidev/mcpstat/compare/v0.2.1...HEAD
+[Unreleased]: https://github.com/tekkidev/mcpstat/compare/v0.2.2...HEAD
+[0.2.2]: https://github.com/tekkidev/mcpstat/compare/v0.2.1...v0.2.2
 [0.2.1]: https://github.com/tekkidev/mcpstat/compare/v0.1.2...v0.2.1
 [0.1.2]: https://github.com/tekkidev/mcpstat/compare/v0.1.1...v0.1.2
 [0.1.1]: https://github.com/tekkidev/mcpstat/compare/v0.1.0...v0.1.1
```

README.md

Lines changed: 7 additions & 2 deletions
````diff
@@ -5,6 +5,8 @@
 [![PyPI - Python Version](https://img.shields.io/badge/python-3.10+-blue.svg)](https://pypi.org/project/mcpstat/)
 [![GitHub Actions Workflow Status](https://img.shields.io/github/actions/workflow/status/tekkidev/mcpstat/tests.yaml)](https://github.com/tekkidev/mcpstat/actions/workflows/tests.yaml)
 [![PyPI - Downloads](https://img.shields.io/pypi/dm/mcpstat)](https://pypistats.org/packages/mcpstat)
+[![Checked with mypy](https://img.shields.io/badge/mypy-checked-blue)](http://mypy-lang.org/)
+[![Security: bandit](https://img.shields.io/badge/security-bandit-yellow.svg)](https://github.com/PyCQA/bandit)
 [![Codecov](https://codecov.io/gh/tekkidev/mcpstat/branch/main/graph/badge.svg)](https://codecov.io/gh/tekkidev/mcpstat)
 
 **Usage tracking and analytics for MCP servers.** Pure Python, zero required dependencies.
@@ -32,9 +34,10 @@ app = Server("my-server")
 stat = MCPStat("my-server")
 
 @app.call_tool()
+@stat.track  # ← One decorator does everything!
 async def handle_tool(name: str, arguments: dict):
-    await stat.record(name, "tool")  # ← Add as FIRST line
-    # ... your tool logic
+    # Your logic here - latency tracked automatically
+    return await my_logic(arguments)
 ```
 
 Then ask your AI assistant: *"Give me MCP usage stats"*
@@ -45,6 +48,7 @@ Then ask your AI assistant: *"Give me MCP usage stats"*
 - **Built-in MCP tools** - `get_tool_usage_stats`, `get_tool_catalog`
 - **Tag system** - Categorize and filter tools
 - **Token tracking** - Estimate or record actual token usage
+- **Latency tracking** - Measure execution duration, identify slow tools
 - **File logging** - Optional timestamped audit trail
 - **Async-first** - Thread-safe via `asyncio.Lock`
 
@@ -56,6 +60,7 @@ Then ask your AI assistant: *"Give me MCP usage stats"*
 - [Configuration](https://github.com/tekkidev/mcpstat/blob/main/docs/configuration.md) - Customize paths, logging, presets
 - [API Reference](https://github.com/tekkidev/mcpstat/blob/main/docs/api.md) - Complete method reference
 - [Token Tracking](https://github.com/tekkidev/mcpstat/blob/main/docs/token-tracking.md) - Cost analysis features
+- [Latency Tracking](https://github.com/tekkidev/mcpstat/blob/main/docs/latency-tracking.md) - Performance monitoring
 
 ## Examples
````

docs/api.md

Lines changed: 154 additions & 25 deletions
````diff
@@ -15,10 +15,12 @@ from mcpstat import MCPStat
 
 stat = MCPStat(
     server_name: str,
+    *,
     db_path: str | None = None,
     log_path: str | None = None,
-    log_enabled: bool = False,
+    log_enabled: bool | None = None,
     metadata_presets: dict[str, dict] | None = None,
+    cleanup_orphans: bool = True,
 )
 ```
@@ -29,14 +31,57 @@ stat = MCPStat(
 | `log_path` | `str` | `./mcp_stat.log` | File log path |
 | `log_enabled` | `bool` | `False` | Enable file logging |
 | `metadata_presets` | `dict` | `None` | Pre-defined metadata |
+| `cleanup_orphans` | `bool` | `True` | Remove metadata for unregistered tools on sync |
 
 ---
 
 ## Core Methods
 
+### @stat.track (Recommended)
+
+Decorator that automatically tracks tool calls with latency measurement.
+
+```python
+@app.call_tool()
+@stat.track  # ← One decorator does everything!
+async def handle_tool(name: str, arguments: dict):
+    return await my_logic(arguments)
+```
+
+**Features:**
+
+- Automatically measures execution time
+- Records call count
+- Tracks success/failure
+- Never fails user code (errors suppressed)
+- Works with exceptions (still records the call)
+
+**With explicit type:**
+
+```python
+@stat.track(primitive_type="prompt")
+async def handle_prompt(name: str, arguments: dict):
+    return await generate_prompt(arguments)
+```
+
+---
+
+### stat.tracking() Context Manager
+
+For cases where you need more control than a decorator:
+
+```python
+async def handle_tool(name: str, arguments: dict):
+    async with stat.tracking(name, "tool"):
+        result = await my_logic(arguments)
+        return result
+```
+
+---
+
 ### record()
 
-Record a tool, prompt, or resource invocation.
+Low-level method for manual recording. Use `@stat.track` instead when possible.
 
 ```python
 await stat.record(
@@ -48,6 +93,7 @@ await stat.record(
     response_chars: int | None = None,
     input_tokens: int | None = None,
     output_tokens: int | None = None,
+    duration_ms: int | None = None,
 )
 ```
@@ -60,18 +106,10 @@ await stat.record(
 | `response_chars` | `int` | Response size for token estimation |
 | `input_tokens` | `int` | Actual input token count |
 | `output_tokens` | `int` | Actual output token count |
+| `duration_ms` | `int` | Execution duration in milliseconds |
 
-> **Critical**: Always call `record()` as the **FIRST line** in your handlers to guarantee 100% tracking coverage.
-
-**Example:**
-
-```python
-@app.call_tool()
-async def handle_tool(name: str, arguments: dict):
-    await stat.record(name, "tool")  # FIRST LINE
-    result = await my_logic(arguments)
-    return result
-```
+!!! note "When to use record()"
+    Use `record()` directly only when you need to pass additional data like `response_chars` or `input_tokens`. For basic tracking with automatic latency, use `@stat.track` (decorator) or `stat.tracking()` (context manager) instead.
 
 ---
````
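The `stat.tracking()` context manager documented in this diff can be approximated with `contextlib.asynccontextmanager`. The sketch below only shows the timing shape, with a hypothetical `recorded` list in place of mcpstat's database; it is not the library's implementation:

```python
import asyncio
import time
from contextlib import asynccontextmanager

# Illustrative sink; the real stat.tracking() writes to mcpstat's database.
recorded: list[dict] = []

@asynccontextmanager
async def tracking(name: str, primitive_type: str):
    """Time the wrapped block and record the call on exit, even on error."""
    start = time.perf_counter()
    try:
        yield
    finally:
        recorded.append({
            "name": name,
            "type": primitive_type,
            "duration_ms": int((time.perf_counter() - start) * 1000),
        })

async def handle_tool(name: str, arguments: dict):
    async with tracking(name, "tool"):
        return {"echo": arguments}

out = asyncio.run(handle_tool("echo", {"x": 1}))
```

Because the recording happens in `finally`, a `return` (or an exception) inside the `async with` block still produces a recorded call, which is the extra control the docs attribute to the context-manager form.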

````diff
@@ -108,6 +146,10 @@ stats = await stat.get_stats(
         "total_estimated_tokens": int,
         "has_actual_tokens": bool,
     },
+    "latency_summary": {
+        "total_duration_ms": int,
+        "has_latency_data": bool,
+    },
     "stats": [
         {
             "name": str,
@@ -116,10 +158,16 @@ stats = await stat.get_stats(
             "last_accessed": str | None,
             "tags": list[str],
             "short_description": str | None,
+            "full_description": str | None,
             "total_input_tokens": int,
             "total_output_tokens": int,
+            "total_response_chars": int,
             "estimated_tokens": int,
             "avg_tokens_per_call": int,
+            "total_duration_ms": int,
+            "min_duration_ms": int | None,
+            "max_duration_ms": int | None,
+            "avg_latency_ms": int,
         }
     ]
 }
@@ -148,7 +196,8 @@ catalog = await stat.get_catalog(
 | `include_usage` | `bool` | Include call counts |
 | `limit` | `int` | Maximum results |
 
-> **AND Logic**: Tag filtering uses AND logic - tools must have **all** specified tags.
+!!! info "AND Logic"
+    Tag filtering uses AND logic - tools must have **all** specified tags.
 
 **Returns:**
 
@@ -161,12 +210,18 @@ catalog = await stat.get_catalog(
         "tags": list[str],
         "query": str | None,
     },
+    "include_usage": bool,
+    "limit": int | None,
+    "total_calls": int | None,  # None when include_usage=False
     "results": [
         {
             "name": str,
             "tags": list[str],
             "short_description": str | None,
-            "call_count": int,
+            "full_description": str | None,
+            "schema_version": int,
+            "updated_at": str,
+            "call_count": int | None,  # None when include_usage=False
             "last_accessed": str | None,
         }
     ]
````
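A consumer of the `get_stats()` shape above can rank tools by the new latency fields, e.g. to "identify slow tools" as the README puts it. The response below is fabricated sample data shaped like the documented schema:

```python
# Fabricated get_stats()-shaped response; field names follow the docs above.
stats_response = {
    "latency_summary": {"total_duration_ms": 850, "has_latency_data": True},
    "stats": [
        {"name": "search", "call_count": 4, "total_duration_ms": 800,
         "avg_latency_ms": 200, "min_duration_ms": 50, "max_duration_ms": 500},
        {"name": "echo", "call_count": 10, "total_duration_ms": 50,
         "avg_latency_ms": 5, "min_duration_ms": 1, "max_duration_ms": 9},
    ],
}

def slowest(stats: dict, top: int = 3) -> list[str]:
    """Names of the highest average-latency tools, slowest first."""
    ranked = sorted(stats["stats"], key=lambda s: s["avg_latency_ms"], reverse=True)
    return [s["name"] for s in ranked[:top]]
```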
````diff
@@ -227,33 +282,86 @@ await stat.sync_tools(tools)
 
 ### register_metadata()
 
-Manually register metadata for a tool.
+Manually register metadata for a primitive.
 
 ```python
 await stat.register_metadata(
     name: str,
-    tags: list[str] | None = None,
-    short_description: str | None = None,
+    *,
+    tags: list[str],
+    short_description: str,
     full_description: str | None = None,
 )
 ```
 
 ---
 
+### sync_prompts()
+
+Sync metadata from MCP Prompt objects.
+
+```python
+await stat.sync_prompts(prompts: list[Prompt])
+```
+
+---
+
+### sync_resources()
+
+Sync metadata from MCP Resource objects.
+
+```python
+await stat.sync_resources(resources: list[Resource])
+```
+
+---
+
+### add_preset()
+
+Add a metadata preset for future sync operations.
+
+```python
+stat.add_preset(
+    name: str,
+    *,
+    tags: list[str],
+    short: str,
+)
+```
+
+---
+
 ### get_by_type()
 
-Get call counts grouped by type.
+Get usage statistics grouped by MCP primitive type.
 
 ```python
-counts = await stat.get_by_type()
-# {"tool": 15, "prompt": 3, "resource": 2}
+data = await stat.get_by_type()
+```
+
+**Returns:**
+
+```python
+{
+    "by_type": {
+        "tool": [{"name": str, "type": str, "call_count": int, "last_accessed": str}],
+        "prompt": [...],
+        "resource": [...],
+    },
+    "summary": {
+        "tool": {"count": int, "total_calls": int},
+        ...
+    },
+    "total_calls": int,
+    "total_items": int,
+}
 ```
 
 ---
 
 ### close()
 
-Explicit cleanup (usually automatic).
+Release resources. Call during server shutdown for clean resource release.
 
 ```python
 stat.close()
````
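The `summary`, `total_calls`, and `total_items` fields in the `get_by_type()` return shape are straightforwardly derivable from the `by_type` lists. A sketch with made-up sample data (not output from the library):

```python
# Illustrative by_type data shaped like get_by_type()'s documented return value.
by_type = {
    "tool": [
        {"name": "search", "type": "tool", "call_count": 4, "last_accessed": "2026-02-16"},
        {"name": "echo", "type": "tool", "call_count": 10, "last_accessed": "2026-02-16"},
    ],
    "prompt": [
        {"name": "stats", "type": "prompt", "call_count": 3, "last_accessed": "2026-02-16"},
    ],
}

# Per-type rollup: how many primitives exist and how often they were called.
summary = {
    t: {"count": len(items), "total_calls": sum(i["call_count"] for i in items)}
    for t, items in by_type.items()
}
total_calls = sum(s["total_calls"] for s in summary.values())
total_items = sum(s["count"] for s in summary.values())
```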
````diff
@@ -272,7 +380,7 @@ from mcpstat import build_tool_definitions
 
 tools = build_tool_definitions(
     prefix: str = "get",
-    server_name: str | None = None,
+    server_name: str = "MCP server",
 )
 ```
 
@@ -315,7 +423,10 @@ Generate MCP prompt schema for stats prompt.
 ```python
 from mcpstat import build_prompt_definition
 
-prompt = build_prompt_definition(server_name="my-server")
+prompt = build_prompt_definition(
+    prompt_name: str,
+    server_name: str = "MCP server",
+)
 ```
 
 ---
@@ -327,7 +438,25 @@ Generate prompt content with current stats.
 ```python
 from mcpstat import generate_stats_prompt
 
-content = await generate_stats_prompt(stat, server_name="my-server")
+content = await generate_stats_prompt(
+    stat,
+    *,
+    period: str = "all time",
+    type_filter: str = "all",
+    include_recommendations: bool = True,
+)
+```
+
+---
+
+### handle_stats_prompt()
+
+Handle stats prompt request from MCP client.
+
+```python
+from mcpstat.prompts import handle_stats_prompt
+
+result = await handle_stats_prompt(stat, arguments={"period": "past week"})
 ```
 
 ---
````
