
Releases: snowflakedb/spark-snowflake

v2.5.7

15 Jan 22:50
  • Upgrade Snowflake JDBC to 3.11.1.
  • Remove a confusing warning message when writing to Snowflake on Amazon S3.
  • Log or validate the row count when reading from Snowflake.
  • Remove the CREATE TABLE privilege requirement when saving data into an existing table with APPEND mode.
  • Support case-insensitive column mapping for Spark Streaming.
  • Pretty-format logging.
  • Refactor code formatting to comply with the Scala code style.

v2.5.6

15 Jan 22:50
  • Upgrade Snowflake JDBC to 3.11.0.
  • Improve read performance from Snowflake.
  • Implement a retry mechanism for downloading data from cloud storage.

v.2.5.2

23 Aug 17:55
  • Upgrade Snowflake JDBC to 3.9.0.
  • Fix a pushdown issue when keep_column_case is on.

v.2.5.1

23 Aug 17:53
  • Upgrade Snowflake Ingest SDK to 0.9.6.
  • Fix a staging table name error when the table name includes quotes.
  • Fix a keep_column_case issue.

v.2.5.0

25 Jun 22:58
  • Support only Snowflake JDBC 3.8.4 and later.
  • Add the getLastSelect function.
  • Fix a pushdown failure in large SQL queries.
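The getLastSelect helper can be used to inspect the query the connector actually sent to Snowflake, which is handy for verifying pushdown. A minimal sketch, assuming a live Snowflake account; the connection values are placeholders:

```scala
import net.snowflake.spark.snowflake.Utils
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("pushdown-check").getOrCreate()

// Placeholder connection settings; fill in your own account details.
val sfOptions = Map(
  "sfURL"      -> "<account>.snowflakecomputing.com",
  "sfUser"     -> "<user>",
  "sfPassword" -> "<password>",
  "sfDatabase" -> "<database>",
  "sfSchema"   -> "<schema>",
  "dbtable"    -> "<table>"
)

val df = spark.read.format("snowflake").options(sfOptions).load()
df.filter("id > 100").count() // triggers a query against Snowflake

// Print the last SELECT statement the connector issued, to confirm
// that the filter and count were pushed down.
println(Utils.getLastSelect)
```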

v.2.4.14

04 Jun 17:09

Bug Fixes

  • Upgrade JDBC to v3.8.0 to fix an OCSP issue.
  • Fix an issue where changes might not be committed to the Snowflake table in Append mode.

New Features

  • Support the TIME data type; TIME is automatically converted to StringType in Spark.
  • Support Scala 2.12.
  • Users can now launch the connector with the string "snowflake", for example spark.write.format("snowflake").
  • New parameter: column_mapping. This parameter has two options: order (default) and name. With name, the connector automatically maps Spark DataFrame columns to Snowflake table columns by column name (case-insensitive).
  • New parameter: column_mismatch_behavior. This parameter has two options: error (default) and ignore. In ignore mode, column mapping removes all unmatched Spark DataFrame columns and fills all unmatched Snowflake table columns with null.
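The "snowflake" format string and the two new column mapping parameters can be combined in a write. A sketch under the assumption of an existing DataFrame and a connection-options map (both placeholders here):

```scala
import org.apache.spark.sql.{DataFrame, SaveMode}

// df: an existing DataFrame; sfOptions: your Snowflake connection settings
// (sfURL, sfUser, sfPassword, sfDatabase, sfSchema, dbtable).
def appendByName(df: DataFrame, sfOptions: Map[String, String]): Unit = {
  df.write
    .format("snowflake")                          // short format string introduced in this release
    .options(sfOptions)
    .option("column_mapping", "name")             // match columns by name, case-insensitively
    .option("column_mismatch_behavior", "ignore") // drop unmatched DataFrame columns and
                                                  // fill unmatched table columns with null
    .mode(SaveMode.Append)
    .save()
}
```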

v.2.4.12

18 Mar 22:54
  • Bug fixed:
    • Spark query pushdown failed in the count() method.
  • SBT updated to 1.2.8.

v.2.4.11

03 Dec 23:52

Add the keep_column_name parameter. The default value of this parameter is off.
When this parameter is set to on:

  • The Spark connector will not automatically capitalize all letters in the column names when creating a Snowflake table from Spark.
  • The Spark connector will not quote the column names if they contain any characters other than letters, underscores, and numbers.
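A write that preserves column-name casing might look like the following sketch. It uses the parameter name as given in these notes (later releases refer to the related behavior as keep_column_case); connection settings are placeholders:

```scala
import org.apache.spark.sql.{DataFrame, SaveMode}

// df: an existing DataFrame; sfOptions: your Snowflake connection settings.
def writeKeepingColumnNames(df: DataFrame, sfOptions: Map[String, String]): Unit = {
  df.write
    .format("snowflake")
    .options(sfOptions)
    .option("keep_column_name", "on") // preserve column-name casing as described above
    .mode(SaveMode.Overwrite)
    .save()
}
```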

v.2.4.10

20 Nov 20:10

Support Spark 2.4

v.2.4.9

24 Oct 00:12
  • Fixed a dependency conflict issue.
  • Support key pair authentication.
    • New parameter: pem_private_key.
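With key pair authentication, the new pem_private_key parameter replaces the password option. A minimal sketch; all connection values are placeholders, and the key is assumed to be the PEM body of a private key supplied via the environment:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("keypair-auth").getOrCreate()

// Assumption: the PEM-encoded private key is provided in an environment
// variable rather than hard-coded in the job.
val privateKeyPem: String = sys.env("SNOWFLAKE_PRIVATE_KEY")

// Placeholder connection settings; note there is no sfPassword entry.
val sfOptions = Map(
  "sfURL"           -> "<account>.snowflakecomputing.com",
  "sfUser"          -> "<user>",
  "sfDatabase"      -> "<database>",
  "sfSchema"        -> "<schema>",
  "dbtable"         -> "<table>",
  "pem_private_key" -> privateKeyPem // key pair authentication
)

val df = spark.read.format("snowflake").options(sfOptions).load()
```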