Releases: snowflakedb/spark-snowflake
v2.5.7
- Upgrade Snowflake JDBC to 3.11.1.
- Remove a confusing warning message when writing to Snowflake on Amazon S3.
- Log or validate the row count when reading from Snowflake.
- Remove the CREATE TABLE privilege requirement when saving data into an existing table with APPEND mode.
- Support Case Insensitive Column Mapping for Spark Streaming.
- Pretty-format logging.
- Refactor code formatting to comply with the Scala code style.
v2.5.6
- Upgrade Snowflake JDBC to 3.11.0.
- Improve read performance from Snowflake.
- Implement a retry mechanism when downloading data from cloud storage.
v2.5.2
- Upgrade Snowflake JDBC to 3.9.0.
- Fix a pushdown issue when keep_column_case is on.
v2.5.1
- Upgrade Snowflake Ingest SDK to 0.9.6.
- Fix a staging table name error when the table name includes quotes.
- Fix a keep_column_case issue.
v2.5.0
- Support only Snowflake JDBC 3.8.4+.
- Add the getLastSelect function.
- Fix a pushdown failure in large SQL queries.
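The getLastSelect function above can be used to inspect the query the connector actually sent to Snowflake, e.g. to check pushdown behavior. A minimal sketch, assuming the connector's Utils object and a pre-built sfOptions map of connection parameters (sfURL, sfUser, etc.); not runnable without a Spark session and Snowflake credentials:

```scala
import net.snowflake.spark.snowflake.Utils

// Read from Snowflake through the connector.
val df = spark.read
  .format("snowflake")
  .options(sfOptions)             // assumed connection-option map
  .option("dbtable", "MY_TABLE")  // hypothetical table name
  .load()

// Trigger an action so a query is actually issued.
df.filter("ID > 100").count()

// Print the last SELECT statement the connector generated,
// to verify that the filter was pushed down into the SQL.
println(Utils.getLastSelect)
```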
v2.4.14
Bug fixes
- Upgrade JDBC to v3.8.0 to fix an OCSP issue.
- Fix an issue where changes might not be committed to the Snowflake table in Append mode.
New Features
- Support the TIME data type; TIME is automatically converted to StringType in Spark.
- Support Scala 2.12
- The connector can now be loaded with the short name "snowflake", for example: df.write.format("snowflake")
- New parameter: column_mapping. This parameter has two options: order (default) and name. When using the name method, the connector automatically maps Spark DataFrame columns to Snowflake table columns by column name (case insensitive).
- New parameter: column_mismatch_behavior. This parameter has two options: error (default) and ignore. In ignore mode, column mapping removes all unmatched Spark DataFrame columns and fills all unmatched Snowflake table columns with NULL.
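The write-time parameters above can be combined in a single save. A minimal sketch, assuming df is an existing DataFrame and sfOptions is a map of the usual connection parameters (sfURL, sfUser, sfDatabase, ...); the option names come from the release note, the table name is hypothetical:

```scala
df.write
  .format("snowflake")                          // short name introduced in this release
  .options(sfOptions)                           // assumed connection-option map
  .option("dbtable", "TARGET_TABLE")            // hypothetical target table
  .option("column_mapping", "name")             // match columns by name, not position
  .option("column_mismatch_behavior", "ignore") // drop unmatched DataFrame columns;
                                                // unmatched table columns become NULL
  .mode("append")
  .save()
```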
v2.4.12
- Bug fixed: Spark query pushdown failed in the count() method.
- SBT updated to 1.2.8.
v2.4.11
Add the keep_column_name parameter. The default value of this parameter is off.
When this parameter is set to on:
- The Spark connector will not automatically capitalize all letters in column names when creating a Snowflake table from Spark.
- The Spark connector will not quote column names that contain any characters other than letters, underscores, and numbers.
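The parameter above can be set like any other connector option. A minimal sketch using the parameter name and values as given in this release note; df and sfOptions are assumed to exist, and the table name is hypothetical:

```scala
df.write
  .format("snowflake")
  .options(sfOptions)                      // assumed connection-option map
  .option("dbtable", "CaseSensitiveTable") // hypothetical table name
  .option("keep_column_name", "on")        // default is "off"; preserves the
                                           // DataFrame's column-name casing
  .mode("overwrite")
  .save()
```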
v2.4.10
Support Spark 2.4