---
title: "Qdrant 1.14 - Reranking Support, More Resource Optimizations & Cost Tracking"
draft: false
preview_image: /blog/qdrant-1.14.x/social_preview.png
social_preview_image: /blog/qdrant-1.14.x/social_preview.png
date: 2025-03-25 00:00:00 -0800
author: David Myriel
featured: true
---

Qdrant 1.14.0 is out! Let's look at the main features for this version:

- **Score-Boosting Reranker:** Blend vector similarity with custom rules and context.
- **Smarter Resource Utilization:** CPU and disk IO optimization for faster processing.
- **Memory Optimization:** Reduced usage for large datasets with improved ID tracking.
- **IO Measurements:** Detailed cost tracking for performance analysis.
- **RocksDB to Gridstore:** Additional reliance on our custom KV store.

## Score-Boosting Reranker


When integrating vector search into a specific application, you might want to tweak the final result list using domain or business logic. For example, if you are building a chatbot or searching website content, you might want to rank matches in `title` metadata higher than matches in `body_text`.

In e-commerce, you may want to boost products from a specific manufacturer, perhaps because you have a promotion or need to clear inventory. With this update, you can easily influence ranking using metadata like `brand` or `stock_status`.

The Score-Boosting Reranker allows you to combine vector-based similarity with business or domain-specific logic by applying a rescoring step on top of the standard semantic or distance-based ranking.

As you structure the query, you can define a formula that references both existing scores (like cosine similarities) and additional payload data (e.g., timestamps, location info, numeric attributes). Let's take a look at some examples:

### Idea 1: Prioritizing Website Content

Imagine you have vectors for the titles, paragraphs, and code snippet sections of your documentation. You can create a `tag` payload field that indicates whether a point is a title, paragraph, or snippet. Then, to give more weight to titles and paragraphs, you might do something like:

```text
score = score + (is_title * 0.5) + (is_paragraph * 0.25)
```

The above is just sample logic, but here is the actual Qdrant API request:

```http
POST /collections/{collection_name}/points/query
{
    "prefetch": {
        "query": [0.2, 0.8, ...],   // <-- dense vector for the query
        "limit": 50
    },
    "query": {
        "formula": {
            "sum": [
                "$score",
                {
                    "mult": [
                        0.5,
                        { "key": "tag", "match": { "any": ["h1", "h2", "h3", "h4"] } }
                    ]
                },
                {
                    "mult": [
                        0.25,
                        { "key": "tag", "match": { "any": ["p", "li"] } }
                    ]
                }
            ]
        }
    }
}
```
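To make the formula semantics concrete, here is a toy Python evaluator for the structure above. This is purely illustrative: Qdrant evaluates formulas server-side, and this sketch assumes a `match` condition contributes 1.0 when the point's payload matches and 0.0 otherwise.

```python
def evaluate(expr, score: float, payload: dict) -> float:
    """Toy evaluator for the formula query above (Qdrant does this server-side)."""
    if expr == "$score":
        return score                       # the prefetch similarity score
    if isinstance(expr, (int, float)):
        return float(expr)                 # numeric constant
    if "sum" in expr:
        return sum(evaluate(e, score, payload) for e in expr["sum"])
    if "mult" in expr:
        result = 1.0
        for e in expr["mult"]:
            result *= evaluate(e, score, payload)
        return result
    if "key" in expr:                      # match condition: 1.0 if true, else 0.0
        return 1.0 if payload.get(expr["key"]) in expr["match"]["any"] else 0.0
    raise ValueError(f"unknown expression: {expr}")

formula = {
    "sum": [
        "$score",
        {"mult": [0.5, {"key": "tag", "match": {"any": ["h1", "h2", "h3", "h4"]}}]},
        {"mult": [0.25, {"key": "tag", "match": {"any": ["p", "li"]}}]},
    ]
}

# A title with base score 0.7 now outranks a paragraph with base score 0.8:
print(evaluate(formula, 0.7, {"tag": "h1"}))  # 1.2
print(evaluate(formula, 0.8, {"tag": "p"}))   # 1.05
```

The key point: conditions act as 0/1 indicator values, so multiplying one by a constant yields a conditional boost that is simply added to the base score.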

### Idea 2: Reranking Most Recent Results

One of the most important advancements is the ability to prioritize recency. In many scenarios, such as news or job listings, users want to see the most recent results first. Until now, this wasn't possible without additional work: you had to fetch the data and manually sort or filter for the latest entries on the client side.

Now, the similarity score doesn’t have to rely solely on cosine distance. It can also take into account how recent the data is, allowing for much more dynamic and context-aware ranking.

With the Score-Boosting Reranker, simply add a date payload field and factor it into your formula so fresher data rises to the top.
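As an illustration of the idea, a recency boost might look like the following Python sketch, using an exponential decay on document age. The parametrization here (the boost falls to `midpoint` at an age of `scale_days`) is an assumption for this sketch; see the Qdrant documentation for the exact decay functions and parameters it exposes.

```python
import math

def recency_boost(age_days: float, scale_days: float = 7.0,
                  midpoint: float = 0.5) -> float:
    """Exponential decay: 1.0 for brand-new items, `midpoint` at `scale_days` old."""
    return math.exp(math.log(midpoint) * age_days / scale_days)

def rerank(score: float, age_days: float) -> float:
    """Add the recency boost to the base similarity score."""
    return score + recency_boost(age_days)

# A week-old item gets a 0.5 boost; a four-week-old item almost none:
print(round(recency_boost(7.0), 3))   # 0.5
print(round(recency_boost(28.0), 2))  # 0.06
```

Because the boost is additive, a slightly less similar but much fresher document can overtake an older, closer match, which is exactly the behavior you want for news or job listings.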

### Idea 3: Factor in Geographical Proximity

Let’s say you’re searching for a restaurant serving Currywurst. Sure, Berlin has some of the best, but you probably don’t want to spend two days traveling for a sausage covered in magical seasoning. The best match is the one that balances semantic relevance with real-world geographical distance. You want your users to see relevant and conveniently located options.

This feature introduces a multi-objective optimization: combining semantic similarity with geographical proximity. Suppose each point has a `geo.location` payload field (latitude, longitude). You can use a `gauss_decay` function to map the distance into a 0–1 range and add that to your similarity score:

```text
score = $score + gauss_decay(distance)
```

Example Query:

```http
POST /collections/{collection_name}/points/query
{
    "prefetch": {
        "query": [0.2, 0.8, ...],
        "limit": 50
    },
    "query": {
        "formula": {
            "sum": [
                "$score",
                {
                    "gauss_decay": {
                        "scale": 5000,               // e.g. 5 km
                        "x": {
                            "geo_distance": {
                                "origin": { "lat": 52.504043, "lon": 13.393236 }, // Berlin
                                "to": "geo.location"
                            }
                        }
                    }
                }
            ]
        },
        "defaults": {
            "geo.location": { "lat": 48.137154, "lon": 11.576124 } // Munich
        }
    }
}
```

You can tweak parameters like `target`, `scale`, and `midpoint` to shape how quickly the score decays over distance. This is extremely useful for local search scenarios, where location is a major factor but not the only factor.
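To build intuition for those parameters, here is a minimal Python sketch of a Gaussian decay. It assumes the common parametrization where the value is 1.0 at `target` and falls to `midpoint` at a distance of `scale`; refer to the Qdrant documentation for the exact server-side definition.

```python
import math

def gauss_decay(x: float, target: float = 0.0, scale: float = 5000.0,
                midpoint: float = 0.5) -> float:
    """Gaussian decay: 1.0 at `target`, falling to `midpoint` at `target + scale`."""
    return math.exp(math.log(midpoint) * ((x - target) / scale) ** 2)

# With scale = 5 km: a restaurant 1 km away keeps most of its boost,
# while one 20 km away contributes almost nothing.
print(round(gauss_decay(1_000), 3))   # 0.973
print(round(gauss_decay(20_000), 3))  # 0.0
```

Raising `scale` widens the plateau around `target`, while lowering `midpoint` makes the boost fall off more sharply with distance.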

This is a very powerful feature that allows for extensive customization. Read more about it in the Hybrid Queries documentation.

## Smarter Resource Utilization During Optimization

Qdrant now saturates CPU and disk IO more effectively in parallel when optimizing segments. This helps reduce the "sawtooth" usage pattern—where CPU or disk often sat idle while waiting on the other resource.

This leads to faster optimizations, which are especially noticeable on large machines handling big data movement. It also gives you more predictable performance, with fewer sudden spikes or slowdowns during indexing and merging operations.

Figure 1: Indexing 400 million vectors - CPU and disk usage profiles.

Observed Results: The improvement is especially noticeable during large-scale indexing. In our experiment, we indexed 400 million 512-dimensional vectors. The previous version of Qdrant took around 40 hours on an 8-core machine, while the development version with this change completed the task in just 28 hours.

## Minor Fixes & Optimizations


### Optimized Memory Usage in Immutable Segments

We revamped how the ID tracker and related metadata structures store data in memory. This can result in a notable RAM reduction for very large datasets (hundreds of millions of vectors).

The lower overhead means the memory savings let you store more vectors on the same hardware. Improved scalability is another major benefit: if your workload was near the RAM limit, this change might let you push further without adding servers.

### IO Measurements for Serverless Deployments

Qdrant 1.14 introduces detailed tracking of read/write costs (CPU, disk, etc.) per operation. This is primarily intended for serverless billing, but also helps diagnose performance hotspots in dedicated setups.

You now have full cost visibility: you can see exactly which queries or updates cause the most overhead.

This also makes optimization easier - you can tune indexes, partitioning, or formula queries to reduce resource usage based on concrete metrics.

### Ending our Reliance on RocksDB

The mutable ID tracker no longer relies on RocksDB. This continues our journey toward minimal external dependencies. With our custom-built Gridstore, you can expect fewer random compactions and more predictable disk usage.

This reduces complexity, leaving fewer external storage engines in your stack, and improves performance by eliminating potential latency spikes from RocksDB's background operations.


Read more about how we built Gridstore, our custom key-value store.

## Upgrading to Version 1.14

With Qdrant 1.14, all client libraries remain fully compatible. If you do not need custom payload-based ranking, your existing workflows remain unchanged.

Upgrading from earlier versions is straightforward — no major API or index-breaking changes.

In Qdrant Cloud, simply go to your Cluster Details screen and select Version 1.14 from the dropdown. The upgrade may take a few moments.


Documentation: For a full list of formula expressions, conditions, decay functions, and usage examples, see the official Qdrant documentation and the API reference. This includes detailed code snippets for popular languages and a variety of advanced reranking examples.

We'd love to hear your feedback: If you have questions or want to share your experience, join our Discord or open an issue on GitHub.