
Commit 19448fe

Fix: Handle unhashable items in remove_duplicates
The previous optimization broke when the function received unhashable items such as lists or dicts, raising TypeError. This commit restores backward compatibility by checking whether each item is hashable:

- Hashable items use a set for O(1) lookup (fast path)
- Unhashable items fall back to a list membership check (preserves the original behavior)

This keeps the O(n) optimization for the common case while remaining correct for all input types.

Co-Authored-By: Keon <[email protected]>
1 parent 1bd6670 commit 19448fe
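The fix hinges on `collections.abc.Hashable`. A quick sketch of how that check classifies common types, including one caveat worth knowing: a tuple is always reported as `Hashable` even when its elements are not, so `hash()` can still raise at runtime (the sample values below are illustrative, not from the commit):

```python
from collections.abc import Hashable

print(isinstance(1, Hashable))         # True
print(isinstance("hey", Hashable))     # True
print(isinstance([1, 2], Hashable))    # False: list sets __hash__ = None
print(isinstance({}, Hashable))        # False: dict sets __hash__ = None

# Caveat: the ABC check only looks for a __hash__ method, so a tuple
# containing a list passes the check, yet hash((1, [2])) raises TypeError.
print(isinstance((1, [2]), Hashable))  # True
```

For the flat lists and dicts this function is meant to handle, the check is sufficient; nested-unhashable tuples would still slip into the set-based fast path and raise.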

File tree

1 file changed: +9 −4 lines changed


algorithms/arrays/remove_duplicates.py

Lines changed: 9 additions & 4 deletions
```diff
@@ -7,17 +7,22 @@
 Input: [1, 1 ,1 ,2 ,2 ,3 ,4 ,4 ,"hey", "hey", "hello", True, True]
 Output: [1, 2, 3, 4, 'hey', 'hello']
 
-Time Complexity: O(n) where n is the length of the input array
+Time Complexity: O(n) for hashable items, O(n²) worst case for unhashable items
 Space Complexity: O(n) for the seen set and result array
 """
+from collections.abc import Hashable
 
 def remove_duplicates(array):
     seen = set()
     new_array = []
 
     for item in array:
-        if item not in seen:
-            seen.add(item)
-            new_array.append(item)
+        if isinstance(item, Hashable):
+            if item not in seen:
+                seen.add(item)
+                new_array.append(item)
+        else:
+            if item not in new_array:
+                new_array.append(item)
 
     return new_array
```
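Reassembled from the patch, the updated function runs as follows; the mixed hashable/unhashable input is an illustrative example, not from the commit:

```python
from collections.abc import Hashable

def remove_duplicates(array):
    seen = set()
    new_array = []

    for item in array:
        if isinstance(item, Hashable):
            # Fast path: O(1) membership test against the seen set.
            if item not in seen:
                seen.add(item)
                new_array.append(item)
        else:
            # Fallback for lists, dicts, etc.: O(n) scan of the result,
            # matching the pre-optimization behavior.
            if item not in new_array:
                new_array.append(item)

    return new_array

print(remove_duplicates([1, 1, 2, [3, 4], [3, 4], {"a": 1}]))
# → [1, 2, [3, 4], {'a': 1}]
```

Note that `True` still deduplicates against `1` on the fast path, since `hash(True) == hash(1)` and `True == 1`, which is why the docstring's expected output contains no `True`.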
