---
title: "Qdrant 1.14 - Reranking Support, More Resource Optimizations & Cost Tracking"
draft: false
short_description: ""
description: ""
preview_image: /blog/qdrant-1.14.x/social_preview.png
social_preview_image: /blog/qdrant-1.14.x/social_preview.png
date: 2025-03-25T00:00:00-08:00
author: David Myriel
featured: true
tags:
---

[**Qdrant 1.14.0 is out!**](https://github.com/qdrant/qdrant/releases/tag/v1.14.0) Let's look at the main features for this version:

**Score-Boosting Reranker:** Blend vector similarity with custom rules and context.<br/>
**Smarter Resource Utilization:** CPU and disk IO optimization for faster processing.<br/>
**Memory Optimization:** Reduced usage for large datasets with improved ID tracking.<br/>
**IO Measurements:** Detailed cost tracking for performance analysis.<br/>
**RocksDB to Gridstore:** Further reduced reliance on RocksDB in favor of our custom KV store.<br/>

## Score-Boosting Reranker
![reranking](/blog/qdrant-1.14.x/reranking.jpg)

When integrating vector search into specific applications, you might want to tweak the final result list using domain or business logic. For example, if you are building a **chatbot or search on website content**, you might want to rank results with `title` metadata higher than `body_text` in your results.

In **e-commerce** you may want to boost products from a specific manufacturer—perhaps because you have a promotion or need to clear inventory. With this update, you can easily influence ranking using metadata like `brand` or `stock_status`.

> The **Score-Boosting Reranker** allows you to combine vector-based similarity with **business or domain-specific logic** by applying a **rescoring step** on top of the standard semantic or distance-based ranking.

As you structure the query, you can define a `formula` that references both existing scores (like cosine similarities) and additional payload data (e.g., timestamps, location info, numeric attributes). Let's take a look at some examples:

### Idea 1: Prioritizing Website Content

Imagine you have vectors for **titles**, **paragraphs**, and **code snippet** sections of your documentation. You can create a `tag` payload field that indicates whether a point is a title, paragraph, or snippet. Then, to give more weight to titles and paragraphs, you might do something like:

```
score = score + (is_title * 0.5) + (is_paragraph * 0.25)
```

**Above is just sample logic - but here is the actual Qdrant API request:**

```bash
POST /collections/{collection_name}/points/query
{
  "prefetch": {
    "query": [0.2, 0.8, ...], // <-- dense vector for the query
    "limit": 50
  },
  "query": {
    "formula": {
      "sum": [
        "$score",
        {
          "mult": [
            0.5,
            { "key": "tag", "match": { "any": ["h1", "h2", "h3", "h4"] } }
          ]
        },
        {
          "mult": [
            0.25,
            { "key": "tag", "match": { "any": ["p", "li"] } }
          ]
        }
      ]
    }
  }
}
```
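To make the formula concrete, here is a small client-side Python sketch that applies the same boosting logic to a list of scored hits. The `boost_score` helper is purely illustrative (an assumption for this example, not part of the Qdrant API); the server performs this rescoring for you when you send the request above.

```python
def boost_score(score: float, tag: str) -> float:
    """Re-rank a semantic score using a `tag` payload field:
    score + 0.5 for titles, + 0.25 for paragraphs/list items."""
    bonus = 0.0
    if tag in {"h1", "h2", "h3", "h4"}:   # titles get the biggest boost
        bonus += 0.5
    elif tag in {"p", "li"}:              # paragraphs get a smaller boost
        bonus += 0.25
    return score + bonus

# A code snippet with the highest raw similarity no longer wins automatically:
hits = [(0.70, "p"), (0.65, "h2"), (0.72, "code")]
reranked = sorted(((boost_score(s, t), t) for s, t in hits), reverse=True)
```

After rescoring, the `h2` hit (0.65 + 0.5 = 1.15) outranks the code snippet (0.72), matching the intent of the formula.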

### Idea 2: Reranking Most Recent Results

One of the most important advancements is the ability to prioritize recency. In many scenarios, such as news or job listings, users want to see the most recent results first. Until now, this wasn’t possible without additional work: *you had to fetch all the data and manually filter for the latest entries yourself*.

Now, the similarity score **doesn’t have to rely solely on cosine distance**. It can also take into account how recent the data is, allowing for much more dynamic and context-aware ranking.

> With the Score-Boosting Reranker, simply add a `date` payload field and factor it into your formula so fresher data rises to the top.
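As an illustration of the idea, here is a minimal Python sketch of recency-aware scoring using a half-life based exponential decay. The function names and constants are assumptions for this example, not Qdrant's built-in decay functions; with Qdrant, you would express the equivalent logic in the query `formula` instead.

```python
from datetime import datetime, timezone

HALF_LIFE_DAYS = 7.0  # assumption: a week-old item counts half as much


def recency_boost(published: datetime, now: datetime) -> float:
    """Exponential decay: 1.0 for a brand-new item, 0.5 after one half-life."""
    age_days = (now - published).total_seconds() / 86400.0
    return 0.5 ** (age_days / HALF_LIFE_DAYS)


def final_score(similarity: float, published: datetime, now: datetime,
                weight: float = 0.3) -> float:
    """Blend semantic similarity with a weighted recency bonus."""
    return similarity + weight * recency_boost(published, now)
```

With this blend, a slightly less similar but fresh result can outrank a stale one, while `weight` controls how aggressively recency dominates.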

### Idea 3: Factor in Geographical Proximity

Let’s say you’re searching for a restaurant serving Currywurst. Sure, Berlin has some of the best, but you probably don’t want to spend two days traveling for a sausage covered in magical seasoning. The best match is the one that **balances semantic relevance with real-world geographical distance**. You want your users to see relevant and conveniently located options.

This feature introduces a multi-objective optimization: combining semantic similarity with geographical proximity. Suppose each point has a `geo.location` payload field (latitude, longitude). You can use a `gauss_decay` function to clamp the distance into a 0–1 range and add that to your similarity score:

```
score = $score + gauss_decay(distance)
```

**Example Query**:

```bash
POST /collections/{collection_name}/points/query
{
  "prefetch": {
    "query": [0.2, 0.8, ...],
    "limit": 50
  },
  "query": {
    "formula": {
      "sum": [
        "$score",
        {
          "gauss_decay": {
            "scale": 5000, // e.g. 5 km
            "x": {
              "geo_distance": {
                "origin": { "lat": 52.504043, "lon": 13.393236 }, // Berlin
                "to": "geo.location"
              }
            }
          }
        }
      ]
    },
    "defaults": {
      "geo.location": { "lat": 48.137154, "lon": 11.576124 } // Munich
    }
  }
}
```

You can tweak parameters like `target`, `scale`, and `midpoint` to shape how quickly the score decays over distance. This is extremely useful for local search scenarios, where location is a major factor but not the only one.
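As an illustration, here is a Python sketch of a Gaussian decay, assuming the common convention that the output is 1.0 at `target` and drops to `midpoint` once the input is `scale` away from it. This is a sketch of the general technique, not a copy of Qdrant's internal implementation, so consult the documentation for the exact definition and defaults.

```python
import math


def gauss_decay(x: float, target: float = 0.0, scale: float = 5000.0,
                midpoint: float = 0.5) -> float:
    """Gaussian decay clamped to (0, 1]: returns 1.0 when x == target
    and `midpoint` when |x - target| == scale."""
    lam = math.log(midpoint) / (scale ** 2)
    return math.exp(lam * (x - target) ** 2)
```

For example, with the defaults above, a restaurant 5 km away contributes 0.5 to the score, and one 10 km away contributes only about 0.06, so distant results need much stronger semantic similarity to compete.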

> This is a very powerful feature that allows for extensive customization. Read more about it in the [**Hybrid Queries Documentation**](/documentation/concepts/hybrid-queries/).

## Smarter Resource Utilization During Optimization

Qdrant now **saturates CPU and disk IO** more effectively in parallel when optimizing segments. This helps reduce the "sawtooth" usage pattern—where CPU or disk often sat idle while waiting on the other resource.

This leads to **faster optimizations**, which are especially noticeable on large machines handling big data movements.
It also gives you **predictable performance**, as there are fewer sudden spikes or slowdowns during indexing and merging operations.

**Figure 1:** Indexing 400 million vectors - CPU and disk usage profiles.
![indexation-improvement](/blog/qdrant-1.14.x/indexation.png)

**Observed Results:** The improvement is especially noticeable during large-scale indexing. In our experiment, **we indexed 400 million 512-dimensional vectors**. The previous version of Qdrant took around 40 hours on an 8-core machine, while the development version with this change completed the task in just 28 hours.

### Minor Fixes & Optimizations
![gridstore](/blog/qdrant-1.14.x/gridstore.jpg)

**Optimized Memory Usage in Immutable Segments**

We revamped how the ID tracker and related metadata structures store data in memory. This can result in a notable RAM reduction for very large datasets (hundreds of millions of vectors).

The result is **much lower overhead**: the memory savings let you store more vectors on the same hardware. Improved scalability is another major benefit. If your workload was near the RAM limit, this might let you push further **without using additional servers**.

**IO Measurements for Serverless Deployments**

Qdrant 1.14 introduces detailed tracking of **read/write costs** (CPU, disk, etc.) per operation. This is primarily intended for serverless billing, but also helps diagnose performance hotspots in dedicated setups.

> Now you can have **full cost visibility**, and you can understand exactly which queries or updates cause the most overhead.

This also makes for easier optimization - you can tweak indexes, partitioning, or formula queries to reduce resource usage based on concrete metrics.

**Ending our Reliance on RocksDB**

The **mutable ID tracker no longer relies on RocksDB**. It now uses a simple, purpose-built storage scheme: every change to a point mapping is appended to a file as an entry ("create mapping for point x", "delete mapping for point x"), and on load the mappings are reconstructed in memory by replaying those entries. The log cannot grow forever, because our optimizers pick up a segment whose ID tracker grows large and convert it into the immutable ID tracker, which benefits from further optimizations. This continues our journey toward minimal external dependencies, alongside our custom-built [**Gridstore**](/articles/gridstore-key-value-storage/) key-value store, and you can expect fewer random compactions and more predictable disk usage.

This reduces complexity, with fewer external data engines in your stack.
It also leads to better performance, by eliminating potential latency spikes from RocksDB's background operations.
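As an illustration of one way a RocksDB-free mutable ID tracker can work, here is a hypothetical Python sketch of an append-only change log: each mapping change is appended as an entry, and on load the in-memory mappings are reconstructed by replaying the log. This is a sketch of the technique only, not Qdrant's actual (Rust) implementation.

```python
import json
import os


class MutableIdTracker:
    """Append-only change log for point-ID mappings, replayed on load."""

    def __init__(self, log_path: str):
        self.log_path = log_path
        self.mappings: dict[str, int] = {}  # external point ID -> internal offset
        if os.path.exists(log_path):
            self._replay()

    def set(self, point_id: str, offset: int) -> None:
        # Persist the change first, then apply it in memory.
        self._append({"op": "set", "id": point_id, "offset": offset})
        self.mappings[point_id] = offset

    def delete(self, point_id: str) -> None:
        self._append({"op": "del", "id": point_id})
        self.mappings.pop(point_id, None)

    def _append(self, entry: dict) -> None:
        with open(self.log_path, "a") as f:
            f.write(json.dumps(entry) + "\n")

    def _replay(self) -> None:
        # Walk over all change entries and reconstruct the mappings.
        with open(self.log_path) as f:
            for line in f:
                entry = json.loads(line)
                if entry["op"] == "set":
                    self.mappings[entry["id"]] = entry["offset"]
                else:
                    self.mappings.pop(entry["id"], None)
```

In a real system the log would eventually be compacted; here that role is played by the optimizers, which convert a large mutable tracker into the immutable one.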

![gridstore](/blog/qdrant-1.14.x/gridstore.png)

*Read more about how we built [**Gridstore, our custom key-value store**](/articles/gridstore-key-value-storage/).*

## Upgrading to Version 1.14

With Qdrant 1.14, all client libraries remain fully compatible. If you do not need custom payload-based ranking, **your existing workflows remain unchanged**.

> **Upgrading from earlier versions is straightforward** — no major API or index-breaking changes.

In **Qdrant Cloud**, simply go to your **Cluster Details** screen and select **Version 1.14** from the dropdown. The upgrade may take a few moments.

![upgrade](/blog/qdrant-1.14.x/upgrade.png)

**Documentation:** For a full list of formula expressions, conditions, decay functions, and usage examples, see the official [**Qdrant documentation**](https://qdrant.tech/documentation) and the [**API reference**](https://api.qdrant.tech/). This includes detailed code snippets for popular languages and a variety of advanced reranking examples.

**We'd love to hear your feedback:** If you have questions or want to share your experience, join our [**Discord**](https://qdrant.to/join-slack) or open an issue on [**GitHub**](https://github.com/qdrant/qdrant/issues).
