21 changes: 16 additions & 5 deletions lib/utils.js
@@ -116,7 +116,10 @@ var merge = function merge(target, source, options) {
     }

     if (isArray(target) && isArray(source)) {
-        source.forEach(function (item, i) {
+        var sourceOwnProperties = Object.getOwnPropertyNames(source);
Owner:

we can't rely on Object.getOwnPropertyNames existing. additionally, source is an array here, so why wouldn't we want to just iterate from 0 to source.length?

Author (@krzysdz), Jan 27, 2026:

I did not notice that qs supports Node.js versions older than 0.10, sorry.

why wouldn't we want to just iterate from 0 to source.length?

As I wrote in the description, when parsing indexed arrays merge() is called for each element, with target being the "accumulated" array and source a sparse array with only one element. .forEach is internally for (let k = 0; k < arr.length; k++) { if (arr.hasOwn(k)) { cb(arr[k], k, arr); } } (ArrayForEachLoopContinuation in V8). With large sparse arrays this takes some time, and combined with repeatedly calling merge() it is O(n^2).

This is the less practical part of the PR, because for values of parameterLimit and arrayLimit up to 1000 (arrayLimit is smaller by default), the current code is still faster.
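The cost difference described above can be sketched in isolation. This is an illustrative example, not qs code; the variable names are made up:

```javascript
// A sparse array: length is 100000, but only one own index exists.
var sparse = [];
sparse[99999] = 'x';

// .forEach scans every index from 0 to length - 1, skipping holes,
// so the callback runs once but the engine still walks 100000 slots:
var visited = 0;
sparse.forEach(function () { visited += 1; });

// Iterating own property names touches only indices that actually
// exist; the trailing 'length' property is skipped via the `- 1`:
var keys = Object.getOwnPropertyNames(sparse); // ['99999', 'length']
var collected = [];
for (var j = 0; j < keys.length - 1; j++) {
    collected.push(sparse[keys[j]]);
}
```

Both loops visit the single element, but the second one does so in O(own properties) rather than O(length), which is what makes the repeated-merge() case quadratic with .forEach.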

+        for (var j = 0; j < sourceOwnProperties.length - 1; j++) {
+            var i = sourceOwnProperties[j];
+            var item = source[i];
             if (has.call(target, i)) {
                 var targetItem = target[i];
                 if (targetItem && typeof targetItem === 'object' && item && typeof item === 'object') {
@@ -127,7 +130,7 @@ var merge = function merge(target, source, options) {
             } else {
                 target[i] = item;
             }
-        });
+        }
         return target;
     }

@@ -287,9 +290,17 @@ var combine = function combine(a, b, arrayLimit, plainObjects) {
         return a;
     }

-    var result = [].concat(a, b);
-    if (result.length > arrayLimit) {
-        return markOverflow(arrayToObject(result, { plainObjects: plainObjects }), result.length - 1);
+    var length;
+    var result;
+    if (Array.isArray(a)) {
+        length = Array.isArray(b) ? a.push.apply(a, b) : a.push(b);
+        result = a;
+    } else {
+        result = [].concat(a, b);
+        length = result.length;
+    }
+    if (length > arrayLimit) {
+        return markOverflow(arrayToObject(result, { plainObjects: plainObjects }), length - 1);
Comment on lines +293 to +303
Owner:
changing from "only relying on .concat" to "relying on push and apply and concat" isn't an improvement, unfortunately. also, we can't rely on .apply being present.

Author:
I agree that the code isn't the most beautiful with 3 different cases of combining arrays, but it is much faster. I can't think of a faster way to combine 2 arrays without relying on Node.js 0.10+ features :(
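The trade-off being argued over can be shown in a few lines. An illustrative comparison only, not the actual qs code:

```javascript
var a = [1];
var b = [2, 3];

// [].concat allocates a fresh array and copies every element of
// both inputs; a is left untouched:
var copied = [].concat(a, b);

// push.apply appends b's elements into a in place and returns the
// new length, avoiding the copy of a's existing elements:
var length = a.push.apply(a, b); // a is now [1, 2, 3]
```

The concat path stays allocation-per-call, which is what becomes expensive when combine() is invoked repeatedly with a growing `a`; the push path mutates `a`, which is exactly the behavior change the tests below had to account for.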

     }
     return result;
 };
8 changes: 4 additions & 4 deletions test/utils.js
@@ -143,9 +143,9 @@ test('combine()', function (t) {
         var b = [2];
         var combined = utils.combine(a, b);

-        st.deepEqual(a, [1], 'a is not mutated');
+        st.deepEqual(a, [1, 2], 'a is mutated');
Owner:
if tests are changed, it's a breaking change. tests should not be changed.

Author:
a already is mutated in some cases and there is a test which checks for this behaviour:

qs/test/utils.js, lines 235 to 236 in 6bdfaf5:

    var combined = utils.combine(overflow, 'c', 10, false);
    s2t.equal(combined, overflow, 'returns the same object (mutated)');

I believe that in #185 (comment) you were willing to accept mutation if you saw real performance improvements, and there certainly is one for 100 elements (over 2x for the whole qs.parse()). For 20 elements it is only about 4%, but I have trouble measuring it precisely (I can try improving the measurements if you want).

I'll admit that there may be some way to break this with a custom decoder that returns an array with an overridden .push(). Array.prototype.push.call(a, b) and Array.prototype.push.apply(a, b) might be better in this case.
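That edge case can be sketched directly. An illustrative example only; `weird` stands in for whatever a hostile custom decoder might return:

```javascript
// An array whose own .push shadows the prototype method:
var weird = [1];
weird.push = function () { return 'hijacked'; };

// Calling through the instance hits the override; nothing is appended:
var viaOwn = weird.push(2);

// Going through Array.prototype sidesteps the own property, so the
// real push runs, appends the element, and returns the new length:
var viaProto = Array.prototype.push.call(weird, 2);
```

This is why `Array.prototype.push.call`/`.apply` is the safer spelling when the receiver's provenance isn't controlled.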

         st.deepEqual(b, [2], 'b is not mutated');
-        st.notEqual(a, combined, 'a !== combined');
+        st.deepEqual(a, combined, 'a === combined');
         st.notEqual(b, combined, 'b !== combined');
         st.deepEqual(combined, [1, 2], 'combined is a + b');

@@ -167,9 +167,9 @@ test('combine()', function (t) {
         st.deepEqual([1, 2], combinedAnB, 'first argument is array-wrapped when not an array');

         var combinedABn = utils.combine(a, bN);
-        st.deepEqual(a, [aN], 'a is not mutated');
+        st.deepEqual(a, [aN, bN], 'a is mutated');
         st.notEqual(aN, combinedABn, 'a + bN !== aN');
-        st.notEqual(a, combinedABn, 'a + bN !== a');
+        st.deepEqual(a, combinedABn, 'a + bN === a');
         st.notEqual(bN, combinedABn, 'a + bN !== bN');
         st.notEqual(b, combinedABn, 'a + bN !== b');
         st.deepEqual([1, 2], combinedABn, 'second argument is array-wrapped when not an array');