---
layout: post
title: Z Score Normalization Technique for Hybrid Search
authors:
- kazabdu
- gaievski
date: 2025-03-31
has_science_table: true
categories:
- technical-posts
meta_keywords: z score normalization, OpenSearch 3.0-beta, neural search plugin, hybrid search, relevance ranking, search normalization, k-nn search, L2 normalization, how reciprocal rank fusion works
meta_description: Learn about z score normalization using the Neural Search plugin in OpenSearch 3.0-beta. Discover how this new approach to hybrid search merges results from multiple query sources for improved relevance.
---

In the world of search engines and machine learning, data normalization plays a crucial role in ensuring fair and accurate comparisons between different features or scores.

Hybrid queries prepare their final results using one of two approaches: score-based normalization or rank-based combination. Among score-based techniques, min-max normalization does not handle outliers well. (Outliers are data points that differ significantly from the other observations in a dataset; in normalization techniques like min-max scaling and z-score, or standard score, normalization, they can substantially affect the results.) In this blog post, we introduce another normalization technique, z-score, which was added in the OpenSearch 3.0.0-beta release. Let's dive into what z-score normalization is, why it's important, and how it's used in OpenSearch.

## What is Z-Score Normalization?

Z-score normalization, also known as standardization, is a method of scaling data using the mean and standard deviation. The formula for calculating the z-score is:

Z = (X - μ) / σ

Where:

* X is the original value
* μ is the mean of the population
* σ is the standard deviation of the population
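
To make the formula concrete, here is a minimal sketch (plain Python, not part of OpenSearch) that applies it to a list of raw scores:

```python
# Z = (X - mean) / std, applied to every score in a result list.
def z_score_normalize(scores):
    mean = sum(scores) / len(scores)
    # Population standard deviation, matching the formula above
    std = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5
    return [(s - mean) / std for s in scores]

normalized = z_score_normalize([1.0, 2.0, 3.0, 4.0, 5.0])
# The mean score maps to 0; scores one standard deviation away map to ±1.
```

After normalization, the scores have mean 0 and standard deviation 1, which is what makes scores produced by different query types comparable to one another.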

## When to use Z Score?

Considering your index's structure can help you decide which technique to choose, since each has its own advantages. If your documents are more similar to one another and the top-k results of a typical query return documents that are very similar and clustered together within the index, as shown in the following graph, min-max may be the better option.

![Image for min max distribution](/assets/media/blog-images/2025-03-31-zscore-hybrid-search/blogpost1.jpg){: .img-fluid}

However, z-score is better suited if the results are more evenly distributed and have some characteristics of a normal distribution, as shown in the following example.

![Image for zscore distribution](/assets/media/blog-images/2025-03-31-zscore-hybrid-search/blogpost2.jpg){: .img-fluid}

The basic flow for choosing between min-max and z-score is shown below:

![Image for flow](/assets/media/blog-images/2025-03-31-zscore-hybrid-search/blogpost3.png){: .img-fluid}

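The decision ultimately depends on your data, but as a rough illustration (this heuristic is our own, not part of the flow diagram or of OpenSearch itself), you could inspect the spread of a sample of top-k scores:

```python
import statistics

# Illustrative heuristic only: tightly clustered scores (low relative
# spread) hint at min-max; wider, more spread-out scores hint at z-score.
def suggest_technique(top_k_scores):
    mean = statistics.mean(top_k_scores)
    spread = statistics.pstdev(top_k_scores)
    if mean == 0 or spread == 0:
        return "min_max"  # degenerate case: identical or zero scores
    coefficient_of_variation = spread / abs(mean)
    return "min_max" if coefficient_of_variation < 0.1 else "z_score"
```

The 0.1 threshold is arbitrary; in practice you would validate either choice against relevance judgments for your own queries.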
### How to use Z Score?

To use z-score, create a search pipeline and specify `z_score` as the normalization `technique`:

```json
PUT /_search/pipeline/z_score-pipeline
{
  "description": "Zscore processor for hybrid search",
  "phase_results_processors": [
    {
      "normalization-processor": {
        "normalization": {
          "technique": "z_score"
        },
        "combination": {
          "technique": "arithmetic_mean"
        }
      }
    }
  ]
}
```

Next, create a hybrid query and apply the pipeline to it:

```json
POST my_index/_search?search_pipeline=z_score-pipeline
{
  "query": {
    "hybrid": {
      "queries": [
        {}, // First query
        {}, // Second query
        ... // Other queries
      ]
    }
  }
}
```

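Conceptually, the pipeline normalizes each sub-query's scores independently and then averages them per document. The following sketch (illustrative Python, not the plugin's internal code; treating a document absent from a sub-query as contributing 0 is a simplifying assumption) shows the idea:

```python
def z_score(scores):
    # scores: {doc_id: raw_score} from a single sub-query
    mean = sum(scores.values()) / len(scores)
    std = (sum((s - mean) ** 2 for s in scores.values()) / len(scores)) ** 0.5
    std = std or 1.0  # guard against identical scores
    return {doc: (s - mean) / std for doc, s in scores.items()}

def combine(per_query_scores):
    # Normalize each sub-query, then take the arithmetic mean per document
    normalized = [z_score(q) for q in per_query_scores]
    docs = set().union(*(q.keys() for q in normalized))
    return {
        doc: sum(q.get(doc, 0.0) for q in normalized) / len(normalized)
        for doc in docs
    }

lexical = {"doc1": 12.0, "doc2": 7.5, "doc3": 3.1}    # e.g. BM25 scores
semantic = {"doc1": 0.91, "doc2": 0.88, "doc4": 0.70}  # e.g. k-NN similarities
final = combine([lexical, semantic])  # higher combined score ranks first
```

Note how a document that ranks well under both sub-queries (`doc1` above) ends up with the highest combined score, even though the raw lexical and semantic scores are on completely different scales.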
## Benchmarking Z Score performance

Benchmark experiments were conducted using an OpenSearch cluster consisting of a single r6g.8xlarge instance as the coordinator node, along with three r6g.8xlarge instances as data nodes. To assess z-score's performance comprehensively, we measured search quality and latency across four distinct datasets. For information about the datasets used, see [Datasets](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/).

Search relevance was quantified using the industry-standard Normalized Discounted Cumulative Gain at rank 10 (NDCG@10), and system performance was tracked using search latency measurements. This setup provided a strong foundation for evaluating both search quality and operational efficiency.

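As a refresher, NDCG@10 compares the graded relevance of the top 10 returned results against the ideal ordering of those results. The following is a sketch of the standard textbook formulation (not the benchmark code used in these experiments):

```python
import math

def dcg(relevances):
    # Gains discounted by log2 of the 1-based rank plus 1
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg_at_10(ranked_relevances):
    top = ranked_relevances[:10]
    ideal = sorted(ranked_relevances, reverse=True)[:10]
    ideal_dcg = dcg(ideal)
    return dcg(top) / ideal_dcg if ideal_dcg > 0 else 0.0

# Relevance grades of returned documents, in the order the engine ranked them
score = ndcg_at_10([3, 2, 3, 0, 1, 2])  # 1.0 would be a perfect ordering
```

A score of 1.0 means the engine returned documents in the ideal relevance order; misplacing highly relevant documents lower in the ranking pulls the score below 1.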
### NDCG@10

|Dataset |Hybrid (min-max) |Hybrid (z-score) |Percent difference |
|--- |--- |--- |--- |
|scidocs |0.1591 |0.1633 |+2.45% |
|fiqa |0.2747 |0.2768 |+0.77% |
|nq |0.3665 |0.374 |+2.05% |
|arguana |0.4507 |0.467 | |
| | |**Average** |+1.765% |

### Search latency

The following table presents search latency measurements in milliseconds at different percentiles (p50, p90, and p99) for both the min-max and z-score hybrid approaches. The *Percent difference* columns show the relative performance impact between these methods.

<table> <tr> <th></th> <th colspan="3"><b>p50</b></th> <th colspan="3"><b>p90</b></th> <th colspan="3"><b>p99</b></th> </tr> <tr> <td></td> <td><b>Hybrid (min max)</b></td> <td><b>Hybrid (z score)</b></td> <td><b>Percent difference</b></td> <td><b>Hybrid (min max)</b></td> <td><b>Hybrid (z score)</b></td> <td><b>Percent difference</b></td> <td><b>Hybrid (min max)</b></td> <td><b>Hybrid (z score)</b></td> <td><b>Percent difference</b></td> </tr> <tr> <td>scidocs</td> <td>76.25</td> <td>77.5</td> <td>1.64%</td> <td>99</td> <td>100.5</td> <td>1.52%</td> <td>129.54</td> <td>133.04</td> <td>2.70%</td> </tr> <tr> <td>fiqa</td> <td>80</td> <td>81</td> <td>1.25%</td> <td>104.5</td> <td>105</td> <td>0.48%</td> <td>123.236</td> <td>124</td> <td>0.62%</td> </tr> <tr> <td>nq</td> <td>117</td> <td>117</td> <td>0%</td> <td>140</td> <td>140</td> <td>0%</td> <td>166.74</td> <td>165.24</td> <td>-0.90%</td> </tr> <tr> <td>arguana</td> <td>349</td> <td>349</td> <td>0%</td> <td>382</td> <td>382</td> <td>0%</td> <td>417.975</td> <td>418.475</td> <td>0.12%</td> </tr> <tr> <td></td> <td></td> <td><b>Average:</b></td> <td>0.72%</td> <td></td> <td><b>Average:</b></td> <td>0.50%</td> <td></td> <td><b>Average:</b></td> <td>0.64%</td> </tr> </table>

### Conclusions

Our benchmark experiments highlight the following advantages and trade-offs of z-score normalization compared to min-max normalization in hybrid search approaches:

**Search quality (measured using NDCG@10 across four datasets)**:

* Z-score normalization shows a modest improvement in search quality, with an average increase of 1.765% in NDCG@10 scores.
* This suggests that z-score normalization may provide slightly better relevance in search results compared to min-max normalization.

**Latency impact**:

* Z-score normalization shows a small increase in latency across different percentiles, as shown in the following table:

|Latency percentile |Percent difference |
|--- |--- |
|p50 |0.72% |
|p90 |0.50% |
|p99 |0.64% |

* The positive percentages indicate that z-score normalization has slightly higher latency than min-max normalization, but the differences are minimal (less than 1% on average).

**Trade-offs**:

* There's a slight trade-off between search quality and latency. Z-score normalization offers a small improvement in search relevance (a 1.765% increase in NDCG@10) at the cost of a marginal increase in latency (0.50% to 0.72% across different percentiles).

**Consistency**:

* The impact of z-score normalization varies across datasets, with some showing more significant improvements in search quality than others.
* The latency impact is relatively consistent across datasets, with most showing small increases or no change.

**Overall assessment**:

* Z-score normalization provides a modest improvement in search quality with a negligible impact on latency.
* The choice between z-score and min-max normalization may depend on specific use cases, with z-score potentially being preferred when even small improvements in search relevance are valuable and the slight latency increase is acceptable.

These findings suggest that z-score normalization could be a viable alternative to min-max normalization in hybrid search approaches, particularly in scenarios where optimizing search relevance is a priority and the system can tolerate minimal latency increases.

## What’s next?

We are expanding OpenSearch’s hybrid search capabilities beyond z-score by planning the following improvements to our normalization framework:

**Custom normalization functions**: These enable you to define your own normalization logic and allow fine-tuning of search result rankings. For more information, see [this issue](https://github.com/opensearch-project/neural-search/issues/994).

These improvements will provide more control over search result ranking while ensuring reliable and consistent hybrid search outcomes. Stay tuned for more information!