Releases: pola-rs/polars
Python Polars 1.0.0-alpha.1
💥 Breaking changes
- Consistently convert to given time zone in Series constructor (#16828)
- Update `reshape` to return Array types instead of List types (#16825)
- Default to raising on out-of-bounds indices in all `get`/`gather` operations (#16841)
- Native `selector` XOR set operation, guarantee consistent selector column-order (#16833)
- Set `infer_schema_length` as keyword-only argument in `str.json_decode` (#16835)
- Update `set_sorted` to only accept a single column (#16800)
- Update `group_by` iteration and `partition_by` to always return tuple keys (#16793)
- Default to `coalesce=False` in left outer join (#16769)
- Remove `pyxlsb` engine from `read_excel` (#16784)
- Remove deprecated parameters in `Series.cut`/`qcut` and update struct field names (#16741)
- Expedited removal of certain deprecated functionality (#16754)
- Remove deprecated `top_k` parameters `nulls_last`, `maintain_order`, and `multithreaded` (#16599)
- Update some error types to more appropriate variants (#15030)
- Scheduled removal of deprecated functionality (#16715)
- Enforce deprecation of `offset` arg in `truncate` and `round` (#16655)
- Change default `offset` in `group_by_dynamic` from 'negative `every`' to 'zero' (#16658)
- Constrain access to globals from `DataFrame.sql` in favor of top-level `pl.sql` (#16598)
- Read 2D NumPy arrays as multidimensional `Array` instead of `List` (#16710)
- Update `clip` to no longer propagate nulls in the given bounds (#14413)
- Change `str.to_datetime` to default to microsecond precision for format specifiers `"%f"` and `"%.f"` (#13597)
- Update resulting column names in `pivot` when pivoting by multiple values (#16439)
- Preserve nulls in `ewm_mean`, `ewm_std`, and `ewm_var` (#15503)
- Restrict casting for temporal data types (#14142)
- Support Decimal types by default when converting from Arrow (#15324)
- Remove serde functionality from `pl.read_json` and `DataFrame.write_json` (#16550)
- Update function signature of `nth` to allow positional input of indices, remove `columns` parameter (#16510)
- Rename struct fields of `rle` output to `len`/`value` and update data type of `len` field (#15249)
- Remove class variables from some DataTypes (#16524)
- Add `check_names` parameter to `Series.equals` and default to `False` (#16610)
⚠️ Deprecations
- Deprecate `LazyFrame.with_context` (#16860)
- Rename parameter `descending` to `reverse` in `top_k` methods (#16817)
- Rename `str.concat` to `str.join` (#16790)
- Deprecate `arctan2d` (#16786)
🚀 Performance improvements
- Optimize string/binary sort (#16871)
- Use `split_at` in `split` (#16865)
- Use `split_at` instead of double slice in chunk splits (#16856)
- Don't rechunk in `align_` if arrays are aligned (#16850)
- Don't create small chunks in parallel collect (#16845)
- Add dedicated no-null branch in `arg_sort` (#16808)
- Speed up `dt.offset_by` 2x for constant durations (#16728)
- Toggle coalesce if non-coalesced key isn't projected (#16677)
- Make `dt.truncate` 1.5x faster when `every` is just a single duration (and not an expression) (#16666)
- Always prune unused columns in semi/anti join (#16665)
✨ Enhancements
- Consistently convert to given time zone in Series constructor (#16828)
- Improve `read_csv` SQL table reading function defaults (better handle dates) (#16866)
- Support SQL `VALUES` clause and inline renaming of columns in CTE & derived table definitions (#16851)
- Support Python `Enum` values in `lit` (#16858)
- Convert to given time zone in `.str.to_datetime` when values are offset-aware (#16742)
- Update `reshape` to return Array types instead of List types (#16825)
- Default to raising for out-of-bounds indices on all `get`/`gather` operations (#16841)
- Support SQL "SELECT" with no tables, optimise registration of globals (#16836)
- Native `selector` XOR set operation, guarantee consistent selector column-order (#16833)
- Extend recognised `EXTRACT` and `DATE_PART` SQL part abbreviations (#16767)
- Improve error message when raising integers to negative integers, improve docs (#16827)
- Return datetime for mean/median of Date column (#16795)
- Only accept a single column in `set_sorted` (#16800)
- Expose overflowing cast (#16805)
- Update group-by iteration to always return tuple keys (#16793)
- Support array arithmetic for equally sized shapes (#16791)
- Default to `coalesce=False` in left outer join (#16769)
- More removal of deprecated functionality (#16779)
- Removal of `read_database_uri` passthrough from `read_database` (#16783)
- Remove `pyxlsb` engine from `read_excel` (#16784)
- Add `check_order` parameter to `assert_series_equal` (#16778)
- Enforce deprecation of keyword arguments as positional (#16755)
- Support cloud storage in `scan_csv` (#16674)
- Streamline SQL `INTERVAL` handling and improve related error messages, update `sqlparser-rs` lib (#16744)
- Support use of ordinal values in SQL `ORDER BY` clause (#16745)
- Support executing polars SQL against `pandas` and `pyarrow` objects (#16746)
- Remove deprecated parameters in `Series.cut`/`qcut` (#16741)
- Expedited removal of certain deprecated functionality (#16754)
- Remove deprecated functionality from rolling methods (#16750)
- Update `date_range` to no longer produce datetime ranges (#16734)
- Mark `min_periods` as keyword-only for `rolling` methods (#16738)
- Remove deprecated `top_k` parameters (#16599)
- Support order-by in window functions (#16743)
- Add SQL support for `NULLS FIRST`/`LAST` ordering (#16711)
- Update some error types to more appropriate variants (#15030)
- Initial SQL support for `INTERVAL` strings (#16732)
- More scheduled removal of deprecated functionality (#16724)
- Scheduled removal of deprecated functionality (#16715)
- Enforce deprecation of `offset` arg in `truncate` and `round` (#16655)
- Change default of `offset` in `group_by_dynamic` from "negative `every`" to "zero" (#16658)
- Constrain access to globals from `df.sql` in favour of top-level `pl.sql` (#16598)
- Read 2D numpy arrays as `Array[dt, shape]` instead of `List[dt]` (#16710)
- Activate decimal by default (#16709)
- Do not propagate nulls in `clip` bounds (#14413)
- Change `.str.to_datetime` to default to microsecond precision for format specifiers `"%f"` and `"%.f"` (#13597)
- Remove redundant column name when pivoting by multiple values (#16439)
- Preserve nulls in `ewm_mean`, `ewm_std`, and `ewm_var` (#15503)
- Restrict casting for temporal data types (#14142)
- Add many more auto-inferable datetime formats for `str.to_datetime` (#16634)
- Support decimals by default when converting from Arrow (#15324)
- Remove serde functionality from `pl.read_json` and `DataFrame.write_json` (#16550)
- Update function signature of `nth` to allow positional input of indices, remove `columns` parameter (#16510)
- Rename struct fields of `rle` output to `len`/`value` and update data type of `len` field (#15249)
- Remove default class variable values on DataTypes (#16524)
- Add `check_names` parameter to `Series.equals` and default to `False` (#16610)
- Dedicated `SQLInterface` and `SQLSyntax` errors (#16635)
- Add `DIV` function support to the SQL interface (#16678)
- Support non-coalescing streaming left join (#16672)
- Allow wildcard and exclude before struct expansions (#16671)
🐞 Bug fixes
- Fix `should_rechunk` check (#16852)
- Ensure `read_excel` and `read_ods` return identical frames across all engines when given empty spreadsheet tables (#16802)
- Consistent behaviour when `infer_schema_length=0` for `read_excel` (#16840)
- Standardised additional SQL interface errors (#16829)
- Ensure that split ChunkedArray also flattens chunks (#16837)
- Reduce needless panics in comparisons (#16831)
- Reset if next caller clones inner series (#16812)
- Raise on non-positive json schema inference (#16770)
- Rewrite implementation of `top_k`/`bottom_k` and fix a variety of bugs (#16804)
- Fix comparison of UInt64 with zero (#16799)
- Fix incorrect parquet statistics written for UInt64 values > Int64::MAX (#16766)
- Fix boolean distinct (#16765)
- `DATE_PART` SQL syntax/parsing, improve some error messages (#16761)
- Include `pl.` qualifier for inner dtypes in `to_init_repr` (#16235)
- Column selection wasn't applied when reading CSV with no rows (#16739)
- Panic on empty df / null List(Categorical) (#16730)
- Only flush if operator can flush in streaming outer join (#16723)
- Raise unsupported cat array (#16717)
- Assert SQLInterfaceError is raised (#16713)
- Restrict casting for temporal data types (#14142)
- Handle nested categoricals in `assert_series_equal` when `categorical_as_str=True` (#16700)
- Improve `read_database` check for SQLAlchemy async Session objects (#16680)
- Reduce scope of multi-threaded numpy conversion (#16686)
- Full null on dyn int (#16679)
- Fix filter shape on empty null (#16670)
📖 Documentation
- Update version switcher for 1.0.0 prereleases (#16847)
- Update link from Python API reference to user guide (#16849)
- Update docstring/test/etc usage of `select` and `with_columns` to idiomatic form (#16801)
- Update versioning docs for 1.0.0 (#16757)
- Add docstring example for `DataFrame.limit` (#16753)
- Fix incorrectly stated value of `include_nulls` in `DataFrame.update` docstring (#16701)
- Update deprecation docs in the user guide (#14315)
- Add example for index count in `DataFrame.rolling` (#16600)
- Improve docstring of `Expr`/`Series.map_elements` (#16079)
- Add missing `polars.sql` docs entry and small docstring update (#16656)
🛠️ Other improvements
- Remove inner `Arc` from `FileCacheEntry` (#16870)
- Do not update stable API reference on prerelease (#16846)
- Update links to API references (#16843)
- Prepare update of API reference URLs (#16816)
- Rename `allow_overflow` to `wrap_numerical` (#16807)
- Set `infer_schema_length` as keyword-only for `str.json_decode` (#168...
Python Polars 0.20.31
Important
The decision to change the default coalesce behavior of left join has been reversed.
You can ignore the associated deprecation warning.
⚠️ Deprecations
- Rename `dtypes` parameter to `schema_overrides` for `read_csv`/`scan_csv`/`read_csv_batched` (#16628)
- Deprecate `nulls_last`/`maintain_order`/`multithreaded` parameters for `top_k` methods (#16597)
- Rename `SQLContext` "eager_execution" param to "eager" (#16595)
- Rename `Series.equals` parameter `strict` to `check_dtypes` and rename assertion utils parameter `check_dtype` to `check_dtypes` (#16573)
- Add `DataFrame.serialize`/`deserialize` (#16545)
- Deprecate `str.explode` in favor of `str.split("").explode()` (#16508)
- Deprecate default coalesce behavior of left join (#16532) - !! Reversed in 1.0.0 - see message above !!
🚀 Performance improvements
- make truncate 4x faster in simple cases (#16615)
- Cache arena's (and conversion) in SQL context (#16566)
- Partial schema cache. (#16549)
✨ Enhancements
- Support per-column `nulls_last` on sort operations (#16639)
- Initial support for SQL `ARRAY` literals and the `UNNEST` table function (#16330)
- Don't allow `struct.with_fields` in grouping (#16629)
- Improve support for user-defined functions that return scalars (#16556)
- Add SQL support for `TRY_CAST` function (#16589)
- Add top-level `pl.sql` function (#16528)
- Expose temporal function expression ops to expr ir (#16546)
- Add `DataFrame.serialize`/`deserialize` (#16545)
- Check if `by` column is sorted, rather than just checking sorted flag, in `group_by_dynamic`, `upsample`, and `rolling` (#16494)
🐞 Bug fixes
- Potentially deal with empty range (#16650)
- Use of SQL `ORDER BY` should not cause reordering of `SELECT` cols (#16579)
- Ensure df in empty parquet (#16621)
- Fix Array constructor when inner type is another Array (#16622)
- Fix parsing of `shape` in `Array` constructor and deprecate `width` parameter (#16567)
- Crash using empty `Series` in `LazyFrame.select()` (#16592)
- Improve support for user-defined functions that return scalars (#16556)
- Resolve multiple SQL `JOIN` issues (#16507)
- Project last column if count query (#16569)
- Properly split struct columns (#16563)
- Ensure strict chunking in chunked partitioned group by (#16561)
- Error selecting columns after non-coalesced join (multiple join keys) (#16559)
- Don't panic on hashing nested list types (#16555)
- Crash selecting columns after non-coalesced join (#16541)
- Fix group gather of single literal (#16539)
- Throw an invalid operation exception on performing a `sum` over a `list` of `str`s (#16521)
- Fix `DataFrame.__getitem__` for empty list input - `df[[]]` (#16520)
- Fix issue in `DataFrame.__getitem__` with 2 column inputs (#16517)
📖 Documentation
- Overview of available SQL functions (#16268)
- Update filter description to clarify that null evaluations are removed (#16632)
- Include warning in docstrings that accessing `LazyFrame` properties may be expensive (#16618)
- Add a few `versionadded` tags, and add `is_column_selection` to the Expr meta docs (#16590)
- Fix bullet points not rendering correctly in `DataFrame.join` docstring (#16576)
- Remove erroneous `implode` reference from the user guide section on window functions (#16544)
📦 Build system
- Run `cargo update` (#16574)
🛠️ Other improvements
- Add test for 16642 (#16646)
- Remove duplicate tag in CODEOWNERS (#16625)
- Update dprint hook versions and enable JSON linting (#16611)
- Fewer `typing.no_type_check` (#16497)
Thank you to all our contributors for making this release possible!
@MarcoGorelli, @alexander-beedie, @coastalwhite, @hattajr, @itamarst, @mcrumiller, @nameexhaustion, @r-brink, @ritchie46, @stinodego, @twoertwein and @wence-
Python Polars 0.20.30
⚠️ Deprecations
- Add `Series`/`Expr.has_nulls` and deprecate `Series.has_validity` (#16488)
- Deprecate `tree_format` parameter for `LazyFrame.explain` in favor of `format` (#16486)
✨ Enhancements
- Minor `DataFrame.__getitem__` improvements (#16495)
- Add `is_column_selection()` to expression meta, enhance `expand_selector` (#16479)
- Add `Series`/`Expr.has_nulls` and deprecate `Series.has_validity` (#16488)
- NDarray/Tensor support (#16466)
🐞 Bug fixes
- Fix df.chunked for struct (#16504)
- Mix of column and field expansion (#16502)
- Fix `split_chunks` for nested dtypes (#16493)
- Fix handling NaT values when creating Series from NumPy ndarray (#16490)
- Fix boolean trap issue in `top_k`/`bottom_k` (#16489)
- Handle struct.fields as special case of alias (#16484)
- Correct schema for list.sum (#16483)
- Allow search_sorted directly on multiple chunks, and fix behavior around nulls (#16447)
- Fix use of `COUNT(*)` in SQL `GROUP BY` operations (#16465)
- Respect `nan_to_null` when using multi-thread in `pl.from_pandas` (#16459)
- `write_delta()` apparently does support Categorical columns (#16454)
📖 Documentation
- Update the Overview section of the contributing guide (#15674)
- Use `pl.field` inside `with_fields` examples (#16451)
- Change ordering of values in example for `cum_max` (#16456)
🛠️ Other improvements
- Refactor `Series`/`DataFrame.__getitem__` logic (#16482)
Thank you to all our contributors for making this release possible!
@BGR360, @alexander-beedie, @cmdlineluser, @coastalwhite, @itamarst, @marenwestermann, @mdavis-xyz, @messense, @orlp, @ritchie46 and @stinodego
Python Polars 0.20.29
⚠️ Deprecations
- Deprecate `how="outer"` join type in favour of `how="full"` (left/right are *also* outer joins) (#16417)
🚀 Performance improvements
- Fix pathological small chunk parquet writing (#16433)
✨ Enhancements
- Support zero-copy conversion for temporal types in `DataFrame.to_numpy` (#16429)
- Allow designation of a custom name for the `value_counts` "count" column (#16434)
- Default `rechunk=False` for `read_parquet` (#16427)
- Add "ignore_spaces" to `alpha` and `alphanumeric` selectors, add "ascii_only" to `digit` (#16362)
- Update `__array__` method for Series and DataFrame to support `copy` parameter (#16401)
🐞 Bug fixes
- add cluster_with_columns optimization toggle in python (#16446)
- Fix struct 'with_fields' schema for update dtypes (#16428)
- Fix error reading lists of CSV files that contain comments (#16426)
- make read_parquet() respect rechunk flag when using pyarrow (#16418)
- Improve `read_excel` dtype inference of "calamine" int/float results that include NaN (#16400)
- Update `apply` call in `str_duration_util` (#16412)
Thank you to all our contributors for making this release possible!
@KDruzhkin, @alexander-beedie, @ankane, @cmdlineluser, @coastalwhite, @itamarst, @nameexhaustion, @ritchie46 and @stinodego
Python Polars 0.20.28
⚠️ Deprecations
- Deprecate `use_pyarrow` parameter for `to_numpy` methods (#16391)
✨ Enhancements
- Add `field` expression as selector within a struct scope (#16402)
- Field expansion renaming (#16397)
- Respect index order in `DataFrame.to_numpy` also for non-numeric frames (#16390)
- Add `Expr.interpolate_by` (#16313)
- Implement Struct support for `Series.to_numpy` (#16383)
🐞 Bug fixes
- Fix struct arithmetic schema (#16396)
- Handle non-Sequence iterables in filter (#16254)
- Fix don't panic on chunked to_numpy conversion (#16393)
- Don't check nulls before conversion (#16392)
- Add support for generalized ufunc with different size input and output (#16336)
- Improve cursor close behaviour with respect to Oracle "thick mode" connections (#16380)
- Fix `DataFrame.to_numpy` for Array/Struct types (#16386)
- Handle ambiguous/nonexistent datetimes in Series construction (#16342)
- Fix `DataFrame.to_numpy` for Struct columns when `structured=True` (#16358)
- Use strings to expose `ClosedInterval` in expr IR (#16369)
📖 Documentation
- Expand docstrings for `to_numpy` methods (#16394)
- Add a note about index access on struct.field (#16389)
Thank you to all our contributors for making this release possible!
@MarcoGorelli, @alexander-beedie, @coastalwhite, @dangotbanned, @itamarst, @ritchie46, @stinodego and @wence-
Rust Polars 0.40.0
💥 Breaking changes
- Remove incremental read based batched CSV reader (#16259)
- Separate `rolling_*_by` from `rolling_*(..., by=...)` in Rust (#16102)
- Move CSV read options from `CsvReader` to `CsvReadOptions` (#16126)
- Rename all 'Chunk's to RecordBatch (#16063)
- Prepare for join coalescing argument (#15418)
- Rename `CsvParserOptions` to `CsvReaderOptions`, use in `CsvReader` (#15919)
- Add context trace to `LazyFrame` conversion errors (#15761)
- Move schema resolving of file scan to IR phase (#15739)
- Move schema resolving to IR phase (#15714)
- Rename LogicalPlan and builders to reflect their uses better (#15712)
🚀 Performance improvements
- Use branchless uleb128 decoding for parquet (#16352)
- Reduce error bubbling in parquet hybrid_rle (#16348)
- use is_sorted in ewm_mean_by, deprecate check_sorted (#16335)
- Optimize `is_sorted` for numeric data (#16333)
- Do not use pyo3-built (#16309)
- Faster bitpacking for Parquet writer (#16278)
- Avoid importing `ctypes.util` in CPU check script if possible (#16307)
- Don't rechunk when converting DataFrame to numpy/ndarray (#16288)
- use zeroed vec in ewm_mean_by for sorted fastpath (#16265)
- use zeroable_vec in ewm_mean_by (#16166)
- Improve cost of chunk_idx compute (#16154)
- Don't rechunk by default in `concat` (#16128)
- Ensure rechunk is parallel (#16127)
- Don't traverse deep datasets that we repr as union in CSE (#16096)
- Ensure better chunk sizes (#16071)
- Don't rechunk in parallel collection (#15907)
- Improve non-trivial list aggregations (#15888)
- Ensure we hit specialized gather for binary/strings (#15886)
- Limit the cache size for `to_datetime` (#15826)
- Skip initial null items and don't recompute `slope` in `interpolate` (#15819)
- Fix quadratic in binview growable same source (#15734)
✨ Enhancements
- Raise when joining on the same keys twice (#16329)
- Don't require data to be sorted by `by` column in `rolling_*_by` operations (#16249)
- Add struct.field expansion (regex, wildcard, columns) (#16320)
- Faster bitpacking for Parquet writer (#16278)
- Add `struct.with_fields` (#16305)
- Handle implicit SQL string → temporal conversion in the `BETWEEN` clause (#16279)
- Add new index/range based selector `cs.by_index`, allow multiple indices for `nth` (#16217)
- Show warning if expressions are very deep (#16233)
- Native CSV file list reading (#16180)
- Register memory mapped files and raise when written to (#16208)
- Raise when encountering invalid supertype in functions during conversion (#16182)
- Add SQL support for `GROUP BY ALL` syntax and fix several issues with aliased group keys (#16179)
- Allow implicit string → temporal conversion in SQL comparisons (#15958)
- Separate `rolling_*_by` from `rolling_*(..., by=...)` in Rust (#16102)
- Add run-length encoding to Parquet writer (#16125)
- Add date pattern `dd.mm.YYYY` (#16045)
- Add RLE to `RLE_DICTIONARY` encoder (#15959)
- Support non-coalescing joins in default engine (#16036)
- Move diagonal & horizontal concat schema resolving to IR phase (#16034)
- raise more informative error messages in rolling_* aggregations instead of panicking (#15979)
- Convert concat during IR conversion (#16016)
- Improve dynamic supertypes (#16009)
- Additional `uint` datatype support for the SQL interface (#15993)
- Support Decimal read from IPC (#15965)
- Add typed collection from par iterators (#15961)
- Add `by` argument for `Expr.top_k` and `Expr.bottom_k` (#15468)
- Add option to disable globbing in csv (#15930)
- Add option to disable globbing in parquet (#15928)
- Rename `CsvParserOptions` to `CsvReaderOptions`, use in `CsvReader` (#15919)
- Expressify `dt.round` (#15861)
- Improve error messages in context stack (#15881)
- Add dynamic literals to ensure schema correctness (#15832)
- `dt.truncate` supports broadcasting lhs (#15768)
- Expressify `str.json_path_match` (#15764)
- Support decimal float parsing in CSV (#15774)
- Add context trace to `LazyFrame` conversion errors (#15761)
🐞 Bug fixes
- correct AExpr.to_field for bitwise and logical and/or (#16360)
- cargo clippy for uleb128 safety comment (#16368)
- Infer CSV schema as supertype of all files (#16349)
- Address overflow combining u64 hashes in Debug builds (#16323)
- Don't exclude explicitly named columns in group-by context's expr expansion (#16318)
- Harden `Series.reshape` against invalid parameters (#16281)
- Fix list.sum dtype for boolean (#16290)
- Don't stackoverflow on all/any horizontal (#16287)
- compilation error when both lazy and ipc features are enabled (#16284)
- `rolling_*_by` was throwing incorrect error when dataframe was sorted by a column that contained multiple chunks (#16247)
- Clippy Error for CPUID (#16241)
- Reading CSV with low_memory gave no data (#16231)
- Empty unique (#16214)
- Fix empty drop nulls (#16213)
- Fix get expression group-by state (#16189)
- Fix rolling empty group OOB (#16186)
- offset=-0i was being treated differently to offset=0i in rolling (#16184)
- Fix panic on empty frame joins (#16181)
- Fix streaming glob slice (#16174)
- Fix CSV skip_rows_after_header for streaming (#16176)
- Flush parquet at end of batches tick (#16073)
- Check CSE name aliases for collisions. (#16149)
- Don't override CSV reader encoding with lossy UTF-8 (#16151)
- Add missing allow macros for windows (#16130)
- Ensure hex and bitstring literals work inside SQL `IN` clauses (#16101)
- Revert "Add RLE to `RLE_DICTIONARY` encoder" (#16113)
- Respect user passed 'reader_schema' in 'scan_csv' (#16080)
- Lazy csv + projection; respect null values arg (#16077)
- Materialize dtypes when converting to arrow (#16074)
- Fix casting decimal to decimal for high precision (#16049)
- Fix printing max scale decimals (#16048)
- Decimal supertype for dyn int (#16046)
- Do not set sorted flag on lexical sorting (#16032)
- properly handle nulls in DictionaryArray::iter_typed (#16013)
- Fix CSE case where upper plan has no projection (#16011)
- Crash/incorrect group_by/n_unique on categoricals created by (q)cut (#16006)
- Ternary supertype dynamics (#15995)
- Treat splitting by empty string as iterating over chars (#15922)
- Fix PartialEq for DataType::Unknown (#15992)
- Do not reverse null indices in descending arg_sort (#15974)
- Finish adding `typed_lit` to help schema determination in SQL "extract" func (#15955)
- Do not panic when comparing against categorical with incompatible dtype (#15857)
- Join validation for multiple keys (#15947)
- Set default limit for String column display to 30 and fix edge cases (#15934)
- typo in add_half_life takes ln(negative) (#15932)
- Remove ffspec from parquet reader (#15927)
- avoid WRITE+EXEC for CPUID check (#15912)
- fix inconsistent decimal formatting (#15457)
- Preserve NULLs for `is_not_nan` (#15889)
- Double projection check should only take the upstream projections into account (#15901)
- Ensure we don't create invalid frames when combining unit lit + … (#15903)
- Clear cached rename schema (#15902)
- Fix OOB in struct lit/agg aggregation (#15891)
- create (q)cut labels in fixed order (#15843)
- Tag `shrink_dtype` as non-streaming (#15828)
- Drop-nulls edge case; remove drop-nulls special case (#15815)
- ewm_mean_by was skipping initial nulls when it was already sorted by "by" column (#15812)
- Consult cgroups to determine free memory (#15798)
- Raise if index count like 2i is used when performing rolling, group_by_dynamic, upsample, or other temporal operations (#15751)
- Don't deduplicate sort that has slice pushdown (#15784)
- Fix incorrect `is_between` pushdown to `scan_pyarrow_dataset` (#15769)
- Handle null index correctly for list take (#15737)
- Preserve lexical ordering on concat (#15753)
- Remove incorrect unsafe pointer cast for int -> enum (#15740)
- pass series name to apply for cut/qcut (#15715)
- count of null column shouldn't panic in agg context (#15710)
📖 Documentation
- Clarify arrow usage (#16152)
- Solve inconsistency between code and comment (#16135)
- add filter docstring examples to date and datetime (#15996)
- update the link to R API docs (#15973)
- Fix a typo in categorical section of the user guide (#15777)
- Fix incorrect column name in `LazyFrame.sort` doc example (#15658)
📦 Build system
- Update Rust nightly toolchain version (#16222)
- Don't import jemalloc (#15942)
- Use default allocator for lts-cpu (#15941)
- replace all macos-latest referrals with macos-13 (#15926)
- pin mimalloc and macos-13 (#15925)
- use jemalloc in lts-cpu (#15913)
🛠️ Other improvements
- simplify interpolate code, add test for rolling_*_by with nulls (#16334)
- Move expression expansion to conversion module (#16331)
- Add `polars-expr` README (#16316)
- Move physical expressions to new crate (#16306)
- Use `cls` (not `self`) in classmethods (#16303)
- Conditionally print the CSEs (#16292)
- Rename `ChunkedArray.chunk_id` to `chunk_lengths` (#16273)
- Use Scalar instead of Series in some aggregations (#16277)
- Use `CsvReadOptions` in `LazyCsvReader` (#16283)
- Do not hardcode bash path in Makefile (#16263)
- Add IR::Reduce (not yet implemented) (#16216)
- Remove incremental read based batched CSV reader (#16259)
- move all describe, describe_tree and dot-viz code to IR instead of DslPlan (#16237)
- move describe to IR instead of DSL (#16191)
- Use `Duration.is_zero` instead of comparing `Duration.duration_ns` to 0 (#16195)
- Remove unused code (#16175)
- Don't override CSV reader encoding with lossy UTF-8 (#16151)
- Move CSV read options from `CsvReader` to `CsvReadOptions` (#16126)
- Bump `sccache` action (#16088)
- Fix failures in test coverage workflow (#16083)
- Rename all 'Chunk's to RecordBatch (#16063)
- Use UnionArgs for DSL side (#16017)
- Add some comments (#16008)
- prepare for join coalescing argument (#15418)
- Pin c...
Python Polars 0.20.27
Warning
This release was yanked. Please use the 0.20.28 release instead.
⚠️ Deprecations
- Change parameter `chunked` to `allow_chunks` in parametric testing strategies (#16264)
🚀 Performance improvements
- Use branchless uleb128 decoding for parquet (#16352)
- Reduce error bubbling in parquet hybrid_rle (#16348)
- use is_sorted in ewm_mean_by, deprecate check_sorted (#16335)
- Optimize `is_sorted` for numeric data (#16333)
- Do not use pyo3-built (#16309)
- Faster bitpacking for Parquet writer (#16278)
- Improve `Series.to_numpy` performance for chunked Series that would otherwise be zero-copy (#16301)
- Further optimise initial `polars` import (#16308)
- Avoid importing `ctypes.util` in CPU check script if possible (#16307)
- Don't rechunk when converting DataFrame to numpy/ndarray (#16288)
- use zeroed vec in ewm_mean_by for sorted fastpath (#16265)
✨ Enhancements
- expose BooleanFunction in expr IR (#16355)
- Allow `read_excel` to handle bytes/BytesIO directly when using the "calamine" (fastexcel) engine (#16344)
- Raise when joining on the same keys twice (#16329)
- Don't require data to be sorted by `by` column in `rolling_*_by` operations (#16249)
- Support List types in `Series.to_numpy` (#16315)
- Add `to_jax` methods to support Jax Array export from `DataFrame` and `Series` (#16294)
- Enable generating data with time zones in parametric testing (#16298)
- Add struct.field expansion (regex, wildcard, columns) (#16320)
- Add new `alpha`, `alphanumeric` and `digit` selectors (#16310)
- Faster bitpacking for Parquet writer (#16278)
- Add `require_all` parameter to the `by_name` column selector (#15028)
- Start updating `BytecodeParser` for Python 3.13 (#16304)
- Add `struct.with_fields` (#16305)
- Handle implicit SQL string → temporal conversion in the `BETWEEN` clause (#16279)
- Expose string expression nodes to python (#16221)
- Add new index/range based selector `cs.by_index`, allow multiple indices for `nth` (#16217)
- Show warning if expressions are very deep (#16233)
- Fix some issues in parametric testing with nested dtypes (#16211)
🐞 Bug fixes
- pick a consistent order for the sort options in PyIR (#16350)
- Infer CSV schema as supertype of all files (#16349)
- Fix issue in parametric testing where `excluded_dtypes` list would grow indefinitely (#16340)
- Address overflow combining u64 hashes in Debug builds (#16323)
- Don't exclude explicitly named columns in group-by context's expr expansion (#16318)
- Improve `map_elements` typing (#16257)
- Harden `Series.reshape` against invalid parameters (#16281)
- Fix list.sum dtype for boolean (#16290)
- Don't stackoverflow on all/any horizontal (#16287)
- Fix `Series.to_numpy` for Array types with nulls and nested Arrays (#16230)
- `rolling_*_by` was throwing incorrect error when dataframe was sorted by a column that contained multiple chunks (#16247)
- Don't allow passing missing data to generalized ufuncs (#16198)
- Address overly-permissive `expand_selectors` function, minor fixes (#16250)
- Add missing support for parsing instantiated Object dtypes `Object()` (#16260)
- Reading CSV with low_memory gave no data (#16231)
- Add missing `read_database` overload (#16229)
- Fix a rounding error in parametric test datetimes generation (#16228)
- Fix some issues in parametric testing with nested dtypes (#16211)
📖 Documentation
- Add missing word in `join` docstring (#16299)
- Document that month_start/month_end preserve the current time (#16293)
- Add example for separator parameter in pivot (#15957)
🛠️ Other improvements
- Move `DataFrame.to_numpy` implementation to Rust side (#16354)
- Organize PyO3 NumPy code into `interop::numpy` module (#16346)
- Simplify interpolate code, add test for rolling_*_by with nulls (#16334)
- Very minor refactor of `DataFrame.to_numpy` code (#16325)
- `InterchangeDataFrame.version` should be a `ClassVar` (not a `property`) (#16312)
- Add `polars-expr` README (#16316)
- Raise import timing test threshold (#16302)
- Use `cls` (not `self`) in classmethods (#16303)
- Use Scalar instead of Series in some aggregations (#16277)
- Do not hardcode bash path in Makefile (#16263)
- Add IR::Reduce (not yet implemented) (#16216)
- Move all describe, describe_tree and dot-viz code to IR instead of DslPlan (#16237)
Thank you to all our contributors for making this release possible!
@MarcoGorelli, @NickCondron, @ShivMunagala, @alexander-beedie, @brandon-b-miller, @coastalwhite, @datenzauberai, @itamarst, @jsarbach, @max-muoto, @nameexhaustion, @orlp, @r-brink, @ritchie46, @stinodego, @thalassemia, @twoertwein and @wence-
Python Polars 0.20.26
⚠️ Deprecations
- Deprecate `allow_infinities` and `null_probability` args to parametric test strategies (#16183)
🚀 Performance improvements
- Avoid needless copy when converting chunked Series to NumPy (#16178)
- use zeroable_vec in ewm_mean_by (#16166)
- Improve cost of chunk_idx compute (#16154)
- Don't rechunk by default in `concat` (#16128)
- Ensure rechunk is parallel (#16127)
✨ Enhancements
- Clarify
to_torch"features" and "label" parameter behaviour when return type is not "dataset" (#16218) - Native CSV file list reading (#16180)
- Register memory mapped files and raise when written to (#16208)
- Implement support for Struct types in parametric tests (#16197)
- Enable Null datatype and null values by default in parametric testing (#16192)
- Support
Enumtypes in parametric testing (#16188) - Raise when encountering invalid supertype in functions during conversion (#16182)
- Add SQL support for `GROUP BY ALL` syntax and fix several issues with aliased group keys (#16179)
- Overhaul parametric test implementations and update Hypothesis to latest version (#16062)
- Avoid an extra copy when converting Boolean Series to writable NumPy array (#16164)
- Allow implicit string → temporal conversion in SQL comparisons (#15958)
- Add run-length encoding to Parquet writer (#16125)
- Support passing instantiated adbc/alchemy connection objects to `write_database` (#16099)
- Add top-level `nth(n)` method, to go with existing `first` and `last` (#16112)
🐞 Bug fixes
- Empty unique (#16214)
- Fix empty drop nulls (#16213)
- Fix get expression group-by state (#16189)
- Fix rolling empty group OOB (#16186)
- offset=-0i was being treated differently to offset=0i in rolling (#16184)
- Fix panic on empty frame joins (#16181)
- Fix streaming glob slice (#16174)
- Fix CSV skip_rows_after_header for streaming (#16176)
- Flush parquet at end of batches tick (#16073)
- Check CSE name aliases for collisions. (#16149)
- Don't override CSV reader encoding with lossy UTF-8 (#16151)
- Respect `dtype` and `strict` in `pl.Series` constructor for pyarrow arrays, numpy arrays, and pyarrow-backed pandas (#15962)
- Ensure hex and bitstring literals work inside SQL `IN` clauses (#16101)
📖 Documentation
- Add examples for multiple `Series` functions (#16172)
- Add deprecated messages to `cumfold` and `cumreduce` (#16173)
- `StringNameSpace.replace_all` docstring (#16169)
- Explain how Polars floor division differs from Python (#16054)
- Clarify arrow usage (#16152)
- Add example to polars.Series.flags (#16123)
🛠️ Other improvements
- Switch over some of the custom Python date/time conversions to native PyO3 conversions (#16203)
- move describe to IR instead of DSL (#16191)
- More PyO3 0.21 Bound<> APIs, and finally disable gil-refs backwards compat feature on pyo3 crate (#16143)
- Move hypothesis tests into unit test module (#16185)
- Remove unused code (#16175)
- Update plugin example to PyO3 0.21 (#16157)
- Return correct temporal type from Rust in `to_numpy` (#14353)
- Create NumPy view for Datetime/Duration Series on the Rust side (#16148)
- Don't override CSV reader encoding with lossy UTF-8 (#16151)
- Continue converting to PyO3 0.21 Bound<> APIs (#16081)
Thank you to all our contributors for making this release possible!
@MarcoGorelli, @YichiZhang0613, @alexander-beedie, @bertiewooster, @coastalwhite, @dangotbanned, @itamarst, @janpipek, @jrycw, @luke396, @nameexhaustion, @pydanny, @ritchie46, @stinodego, @thalassemia and @tharunsuresh-code
Python Polars 0.20.25
🐞 Bug fixes
- Revert "Add RLE to `RLE_DICTIONARY` encoder"
- Improve error handling of `ParameterCollisionError` in `read_excel` (#16100)
Thank you to all our contributors for making this release possible!
@nameexhaustion, @ritchie46 and @wsyxbcl
Python Polars 0.20.24
Warning
This release was yanked. Please use the 0.20.25 release instead.
🏆 Highlights
- Support `pytorch` Tensor and Dataset export with new `to_torch` DataFrame/Series method (#15931)
🚀 Performance improvements
- Don't traverse deep datasets that we repr as union in CSE (#16096)
- Ensure better chunk sizes (#16071)
✨ Enhancements
- split out `rolling_*(..., by='foo')` into `rolling_*_by('foo', ...)` (#16059)
- add date pattern `dd.mm.YYYY` (#16045)
- split `Expr.top_k` and `Expr.top_k_by` into separate functions (#16041)
- Support non-coalescing joins in default engine (#16036)
- Support `pytorch` Tensor and Dataset export with new `to_torch` DataFrame/Series method (#15931)
- Minor DB type inference updates (#16030)
- Move diagonal & horizontal concat schema resolving to IR phase (#16034)
- raise more informative error messages in rolling_* aggregations instead of panicking (#15979)
- Convert concat during IR conversion (#16016)
- Improve dynamic supertypes (#16009)
- Additional `uint` datatype support for the SQL interface (#15993)
- Add post-optimization callback (#15972)
- Support Decimal read from IPC (#15965)
- Expose plan and expression nodes through `NodeTraverser` to Python (#15776)
- Add typed collection from par iterators (#15961)
- Add `by` argument for `Expr.top_k` and `Expr.bottom_k` (#15468)
🐞 Bug fixes
- Respect user passed 'reader_schema' in 'scan_csv' (#16080)
- Lazy csv + projection; respect null values arg (#16077)
- Materialize dtypes when converting to arrow (#16074)
- Fix casting decimal to decimal for high precision (#16049)
- Fix Series constructor failure for Array types for large integers (#16050)
- Fix printing max scale decimals (#16048)
- Decimal supertype for dyn int (#16046)
- Correctly handle large timedelta objects in Series constructor (#16043)
- Do not close connection just because we're not returning Arrow data in batches (#16031)
- properly handle nulls in DictionaryArray::iter_typed (#16013)
- Fix CSE case where upper plan has no projection (#16011)
- Crash/incorrect group_by/n_unique on categoricals created by (q)cut (#16006)
- converting from numpy datetime64 and overriding dtype with a different resolution was returning incorrect results (#15994)
- Ternary supertype dynamics (#15995)
- Fix PartialEq for DataType::Unknown (#15992)
- Finish adding `typed_lit` to help schema determination in SQL "extract" func (#15955)
- Fix `dtype` parameter in `pandas_to_pyseries` function (#15948)
- do not panic when comparing against categorical with incompatible dtype (#15857)
- Join validation for multiple keys (#15947)
- Add missing "truncate_ragged_lines" parameter to `read_csv_batched` (#15944)
📖 Documentation
- Ensure consistent docstring warning in `fill_nan` methods (pointing out that `nan` isn't `null`) (#16061)
- add filter docstring examples to date and datetime (#15996)
- Fix docstring mistake for polars.concat_str (#15937)
- Update reference to `apply` (#15982)
- Remove unwanted linebreaks from docstrings (#16002)
- correct default in rolling_* function examples (#16000)
- Improve user-guide doc of UDF (#15923)
- update the link to R API docs (#15973)
🛠️ Other improvements
- Bump `sccache` action (#16088)
- Fix failures in test coverage workflow (#16083)
- Update benchmarks/coverage jobs with "requirements-ci" (#16072)
- Add TypeGuard to `is_polars_dtype` util (#16065)
- Clean up hypothesis decimal strategy (#16056)
- split Expr.top_k and Expr.top_k_by into separate functions (#16041)
- Use UnionArgs for DSL side (#16017)
- Add some comments (#16008)
- Improve hypothesis strategy for decimals (#16001)
- Set up TPC-H benchmark tests (#15908)
- Even more Pyo3 0.21 Bound<> APIs (#15914)
- Fix failing test (#15936)
Thank you to all our contributors for making this release possible!
@CanglongCl, @JulianCologne, @KDruzhkin, @MarcoGorelli, @alexander-beedie, @avimallu, @bertiewooster, @c-peters, @dependabot, @dependabot[bot], @eitsupi, @haocheng6, @itamarst, @luke396, @marenwestermann, @nameexhaustion, @orlp, @ritchie46, @stinodego, @thalassemia, @wence- and @wsyxbcl