Spark Connector 2.10.1
This release fixes the following critical issues:
- Removed unnecessary dependencies on libraries to avoid the security vulnerabilities CVE-2020-8908 and CVE-2018-10237.
- Added support for using the JDBC data type TIMESTAMP_WITH_TIMEZONE when reading data from Snowflake. (See the first sketch after this list.)
- Changed the logic for checking whether a table exists before saving a DataFrame to Snowflake:
  - The connector now reuses the existing connection (rather than creating a new connection) to avoid potential problems with token expiration.
  - If the table name is not fully qualified (i.e., it does not include the schema name), the connector now checks for the table under the schema specified by sfSchema, rather than the schema currently in use in the session.

    Note: If you need to save a DataFrame to a table in a schema other than sfSchema, specify the schema as part of the fully qualified name of the table, rather than executing USE SCHEMA to change the current schema. (See the second sketch after this list.)
- Improved performance by avoiding unnecessary parse_json() calls in the COPY INTO TABLE command when writing a DataFrame with ArrayType, MapType, or StructType columns to Snowflake. (See the third sketch after this list.)
- Added the getLastSelectQueryId and getLastCopyLoadQueryId methods to the Utils class. These methods return the query IDs of the last query that read data from Snowflake and of the last COPY INTO TABLE statement that was executed, respectively. (See the fourth sketch after this list.)
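
For the TIMESTAMP_WITH_TIMEZONE change, here is a minimal read sketch. The account, credentials, warehouse, and the EVENTS table with its TIMESTAMP_TZ column CREATED_AT are all hypothetical placeholders, not part of this release:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("tz-read-example").getOrCreate()

// Connection options; all names here are hypothetical, and the password is
// assumed to be available in the SNOWFLAKE_PASSWORD environment variable.
val sfOptions = Map(
  "sfURL"       -> "myaccount.snowflakecomputing.com",
  "sfUser"      -> "jdoe",
  "sfPassword"  -> sys.env("SNOWFLAKE_PASSWORD"),
  "sfDatabase"  -> "MY_DB",
  "sfSchema"    -> "PUBLIC",
  "sfWarehouse" -> "MY_WH"
)

// EVENTS is assumed to contain a TIMESTAMP_TZ column named CREATED_AT; with this
// release, such values are read through the JDBC TIMESTAMP_WITH_TIMEZONE type.
val eventsDf = spark.read
  .format("net.snowflake.spark.snowflake")
  .options(sfOptions)
  .option("dbtable", "EVENTS")
  .load()

eventsDf.select("CREATED_AT").show(5, truncate = false)
```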
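For the note about schemas, this sketch (reusing the hypothetical sfOptions and eventsDf from the previous example, with a hypothetical target table) writes to a schema other than sfSchema by fully qualifying the table name instead of running USE SCHEMA:

```scala
// sfSchema is "PUBLIC" in sfOptions, but the fully qualified name below directs
// the write to OTHER_SCHEMA without changing the session's current schema.
eventsDf.write
  .format("net.snowflake.spark.snowflake")
  .options(sfOptions)
  .option("dbtable", "MY_DB.OTHER_SCHEMA.TARGET_TABLE")
  .mode("append")
  .save()
```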
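To illustrate the kind of write that benefits from the parse_json() change, here is a sketch of saving a DataFrame with an ArrayType column. The table name and data are hypothetical; spark and sfOptions come from the first sketch:

```scala
import spark.implicits._

// A DataFrame with an ArrayType column; the COPY INTO TABLE command generated
// for writes like this no longer wraps values in unnecessary parse_json() calls.
val taggedDf = Seq(
  (1, Seq("a", "b")),
  (2, Seq("c"))
).toDF("ID", "TAGS")

taggedDf.write
  .format("net.snowflake.spark.snowflake")
  .options(sfOptions)
  .option("dbtable", "TAGGED_ITEMS") // hypothetical table with an ARRAY column
  .mode("append")
  .save()
```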
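Finally, a sketch of the new Utils methods, called after the reads and writes above have run; the returned IDs depend on the queries actually executed:

```scala
import net.snowflake.spark.snowflake.Utils

// Query ID of the last query that read data from Snowflake.
val selectQueryId = Utils.getLastSelectQueryId

// Query ID of the last COPY INTO TABLE statement executed by a write.
val copyQueryId = Utils.getLastCopyLoadQueryId

println(s"last SELECT query ID: $selectQueryId")
println(s"last COPY INTO TABLE query ID: $copyQueryId")
```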