
4.39 Release Notes - November 16-23, 2020

Last Modified Date: November 26, 2020

This article provides a brief overview of the new features, enhancements, and other important changes introduced in this scheduled release of Snowflake. More information about these changes is provided in the Snowflake documentation. If you have additional questions, please contact Snowflake Support.



Important: The changes described in this article may take 24 hours or more (after the completion of the release) to become available.

In addition, the release may include fixes or changes that require the web interface to be refreshed. As a general practice, we recommend refreshing the web interface after each Snowflake release.


Ecosystem Deprecations

Snowflake Clients: Deprecated Support for CentOS 6, RedHat Enterprise Linux 6, Microsoft Windows 7, and Ubuntu 14.04

As previously announced, with this release, support for the following operating system versions is now deprecated for all Snowflake clients:

Operating System                | Deprecated Version | Date Support Was Dropped | Notes
CentOS                          | 6                  | November 2020            |
Microsoft Windows               | 7                  | November 2020            | Microsoft has already dropped support for Windows 7.
Red Hat Enterprise Linux (RHEL) | 6                  | November 2020            |
Ubuntu                          | 14.04              | November 2020            | Ubuntu has already dropped support for Ubuntu 14.04.


This applies to all Snowflake clients:

  • SnowSQL

  • Snowflake Connector for Python

  • Snowflake Connector for Spark

  • Snowflake Connector for Kafka

  • Node.js Driver

  • Go Snowflake Driver

  • .NET Driver

  • JDBC Driver

  • ODBC Driver

Newer versions of the operating systems are still supported.

General SQL Behavior Changes

JSON Data: Changes to the Handling of Numeric Values with Trailing Zeros

When Snowflake needs to convert a numeric value in JSON to a numeric data type (for example, when you cast the value as a NUMBER or when Snowflake materializes the subcolumns), Snowflake automatically detects the scale of the value to determine whether it should be converted to a fixed-point or floating-point number.
As announced previously, in this release, when automatically detecting the scale of a value in JSON, Snowflake now ignores any trailing zeros in the value.
For example, suppose that a JSON object contains the following:

{"A":-0.030017977442983734000}

Snowflake interprets the numeric value in the following way:

Previously:

Snowflake interprets the value as -0.030017977442983734000 with a scale of 21.

Currently:

Snowflake interprets the value as -0.030017977442983734 with a scale of 18.

 

This change can reduce rounding errors for values that are converted to DOUBLE. If the value without the trailing zeros can be represented as a fixed-point number, Snowflake treats the value as a fixed-point number.

For example, suppose that a value with trailing zeros has more than 37 digits after the decimal point:

{"A":-0.03001797744298373400000000000000000000000000000000000000000000000000000000000000}

The way in which Snowflake handles this value has changed:

Previously:

Because the scale exceeds 37, Snowflake interprets the value as a DOUBLE.

Currently:

When determining the scale, Snowflake ignores the trailing zeros, resulting in a scale of 18.
Because the scale does not exceed the maximum for a fixed-point number, Snowflake interprets the value as the fixed-point NUMBER  -0.030017977442983734.
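
You can observe the detected type directly with the TYPEOF function; a minimal sketch:

select typeof(parse_json('-0.030017977442983734000')) as with_trailing_zeros,
       typeof(parse_json('-0.030017977442983734')) as without_trailing_zeros;

With this change, both columns report DECIMAL (a fixed-point number), because the trailing zeros are ignored when the scale is detected.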

 

This change can also affect hashes generated from values with trailing zeros. For example, the following statement produces hashes for the same numeric value expressed with differing numbers of trailing zeros:

select
  hash(parse_json('-0.030017977442983734')),
  hash(parse_json('-0.0300179774429837340000000000000000000000000000000000000000000000000')),
  hash(parse_json('-0.0300179774429837340'));

The hashes generated in this example have changed:

Previously:

The generated hash for each value is different.

Currently:

The generated hash for each value is the same.

 

VARIANT Data: QUOTED_IDENTIFIERS_IGNORE_CASE Parameter No Longer Applies to Dot Notation Paths

As announced previously, in this release, the QUOTED_IDENTIFIERS_IGNORE_CASE parameter no longer applies to VARIANT dot notation paths.

Previously:

When QUOTED_IDENTIFIERS_IGNORE_CASE is set to TRUE, double-quoted element names are case-insensitive. All letters are treated as uppercase.
For example, this path:

 v:"Company Division":Name 

resolves to:

v:"COMPANY DIVISION":Name

Currently:

When QUOTED_IDENTIFIERS_IGNORE_CASE is set to TRUE, double-quoted element names are case-sensitive.

For example, this path:

v:"Company Division":Name 

resolves to:

v:"Company Division":Name

 

Unlike SQL identifiers, names of elements in dot notation paths are always case-sensitive, even when the names are not enclosed in double quotes. The QUOTED_IDENTIFIERS_IGNORE_CASE parameter is intended to apply to SQL identifiers, not to element names.

For example, suppose that there are two element names that differ in case (AAA and aAa):

create or replace table t(v variant) as
  select parse_json(column1) from values
  ('{ "AAA": "AAA element", "aAa": "aAa element"}');

Previously, when QUOTED_IDENTIFIERS_IGNORE_CASE was TRUE, a path with double quotes around the element name (v:"aAa") resolved to the element with the uppercase name (AAA). A path without double quotes (v:aAa) resolved to the element name with the same case (aAa). For example:

alter session set QUOTED_IDENTIFIERS_IGNORE_CASE=true;
select v:aAa from t;
"aAa element"

select v:"aAa" from t;
"AAA element"

With this change, the path with double quotes resolves to the element name with the same case, regardless of the value of QUOTED_IDENTIFIERS_IGNORE_CASE:

alter session set QUOTED_IDENTIFIERS_IGNORE_CASE=true;

select v:aAa from t;
"aAa element"

select v:"aAa" from t;
"aAa element"

alter session set QUOTED_IDENTIFIERS_IGNORE_CASE=false;

select v:aAa from t;
"aAa element"

select v:"aAa" from t;
"aAa element"

SQL Function & Command Behavior Changes

SYSTEM$SET_RETURN_VALUE Function: Failure When Input Expression Exceeds 10K Bytes

As announced previously, with this release, the behavior when the input expression for the SYSTEM$SET_RETURN_VALUE function exceeds 10K bytes has changed as follows:

Previously:

The statement returned a FAILED message but succeeded anyway.

Currently:

The statement fails and returns the following user error:

Return value specified is too large. It must be 10,000 bytes or smaller.
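
For illustration, a hypothetical sketch (SYSTEM$SET_RETURN_VALUE is valid only inside a stored procedure invoked by a task) that uses REPEAT to build an oversized value:

-- With this release, a return value larger than 10,000 bytes fails with the error above.
call system$set_return_value(repeat('x', 10001));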

 

SYSTEM$USER_TASK_CANCEL_ONGOING_EXECUTIONS Function: Requires OWNERSHIP or OPERATE Privilege on Task

As announced previously, with this release, the ability to call the SYSTEM$USER_TASK_CANCEL_ONGOING_EXECUTIONS function for a given task is limited to the task owner (i.e. the role that has the OWNERSHIP privilege on the task) or any role that has the OPERATE privilege on the task. Previously, a role with any privilege on the task, including MONITOR, could call the function.
This change limits the ability to call the function to roles with the same privileges required to suspend or resume a task (using ALTER TASK … SUSPEND or ALTER TASK … RESUME, respectively).
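
For example, a minimal sketch with hypothetical task and role names (mytask, task_operator):

-- Grant a privilege that now permits canceling ongoing executions:
grant operate on task mytask to role task_operator;

-- A role with OWNERSHIP or OPERATE on the task can call the function:
select system$user_task_cancel_ongoing_executions('mytask');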

MD5_NUMBER Function — Obsoleted

As announced previously, with this release, the MD5_NUMBER function has been obsoleted.

Previously:

Calling the MD5_NUMBER function returns the 128-bit MD5 message digest interpreted as a signed 128-bit big endian number.

Currently:

Calling the MD5_NUMBER function causes the SQL compilation error "Unknown function MD5_NUMBER."

 

If you need to compute MD5 hashes, consider using MD5_BINARY or MD5 / MD5_HEX.
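
For example, a minimal sketch of the alternatives (MD5_HEX is an alias of MD5):

select md5('Snowflake') as hex_digest,
       md5_binary('Snowflake') as binary_digest;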

Note: MD5_NUMBER was deprecated in March 2019 because the function returns a 128-bit integer. Some 128-bit integer values have a decimal representation that exceeds 38 digits, which is more than the number of digits Snowflake supports for integers, so those values cannot be represented in Snowflake's type system.

SHOW FUNCTIONS Command: New Column

As previously announced, with this release, the following new column is added to the output of the SHOW FUNCTIONS command:

Column Name | Data Type | Description
LANGUAGE    | VARCHAR   | For built-in functions, this column shows "SQL". For user-defined functions, this column shows the language in which the function was written, for example "JAVASCRIPT" or "SQL". For external functions, this column shows "EXTERNAL".
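
For example (the LIKE pattern here is only an illustration):

show functions like 'PARSE_JSON';

For this built-in function, the LANGUAGE column shows "SQL".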

 

Account Usage & Information Schema Behavior Changes

PIPES View: New Column

As announced previously, with this release, the following column has been added to the PIPES view in Account Usage:

Column Name | Data Type | Description
PATTERN     | VARCHAR   | Value of the PATTERN copy option, if any, in the COPY INTO <table> statement in the pipe definition.

 

To help limit the impact of this change, the column was added as the last column in the output.
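
For example, a minimal sketch that queries the new column (assumes access to the shared SNOWFLAKE database):

select pipe_name, pattern
  from snowflake.account_usage.pipes
  where pattern is not null;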

Ecosystem Behavior Changes

SHOW FILE FORMATS Command: Displays FORMAT_OPTIONS Column in Output in All Snowflake Clients

As announced previously, with this release, the FORMAT_OPTIONS column is returned in the SHOW FILE FORMATS command output regardless of the Snowflake client used to execute the command. Previously, the column was only returned when the command was executed in the Snowflake web interface.
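
For example, running the following command from SnowSQL (or any other client) now includes the FORMAT_OPTIONS column in the output:

show file formats in account;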

Snowflake Connector for Python: Minimum Supported Python Version Changed to 3.6

As announced previously, with this release, the minimum version of Python supported by the Snowflake Connector for Python has changed as follows:

Previously:

The minimum supported version of Python was 3.5.

Currently:

The minimum supported version of Python is 3.6. 
The connector also supports Python 3.7 and 3.8.

 

This change is consistent with Python's own support lifecycle: official support for Python 3.5 ended in September 2020.

Note that the Python connector now supports Python 3.8 on Microsoft Windows. (Previously, Python 3.8 was supported on Linux and macOS, but not on Microsoft Windows.)

Snowflake JDBC Driver, Snowflake ODBC Driver: Change to Type Names Returned for GEOGRAPHY Columns

ODBC and JDBC provide functions and methods that get the database type of a column:

  • You can call the SQLColAttribute ODBC function and pass in the SQL_DESC_TYPE_NAME field to get the type name.

  • For the ResultSetMetaData JDBC class, you can call the getColumnTypeName method to get the type name.

As announced previously, in this release, these functions and methods have changed to return the GEOGRAPHY type for GEOGRAPHY columns:

Previously:

  • ODBC: SQL_DESC_TYPE_NAME returns STRUCT, VARCHAR, or BINARY for a GEOGRAPHY column, depending on the value of the GEOGRAPHY_OUTPUT_FORMAT parameter.

  • JDBC: getColumnTypeName returns “VARCHAR” or “BINARY” for a GEOGRAPHY column, depending on the value of the GEOGRAPHY_OUTPUT_FORMAT parameter.

Currently:

  • ODBC: SQL_DESC_TYPE_NAME returns GEOGRAPHY for a GEOGRAPHY column.

  • JDBC: getColumnTypeName returns "GEOGRAPHY" for a GEOGRAPHY column.

 

Note:

  • This change does not affect other ODBC metadata fields, such as SQL_DESC_CONCISE_TYPE.

  • This change does not affect other JDBC methods, such as the getColumnType and getColumnClassName methods in the ResultSetMetaData class.

Data Loading / Unloading Behavior Changes

Parquet LIST Items No Longer Enclosed in Objects

As announced previously, with this release, the handling of LIST values in Parquet data loaded into Snowflake tables has changed as follows:

Previously:

A LIST value was composed of a set of objects nested in an array and enclosed in an outer object.

This behavior did not comply with the Apache Parquet specification.

Currently:

A LIST value is structured as a flat array of values. The data type of each value (e.g. integer, string) is set correspondingly.

With this change, the behavior now complies with the specification.


For example, suppose a list column in a Parquet file contains the value LIST [1, 2, 3].
When the value is loaded into a VARIANT column in Snowflake, the previous structure was as follows:

"simple_list": {
  "list": [
    {
      "item": 1
    },
    {
      "item": 2
    },
    {
      "item": 3
    }
  ]
}

With this release, the loaded VARIANT value would be as follows:

"simple_list": [
  1,
  2,
  3
]
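
To observe the new structure, a minimal sketch with hypothetical stage, file, and table names (@parquet_stage, data.parquet, parquet_demo):

create or replace table parquet_demo (v variant);

copy into parquet_demo
  from @parquet_stage/data.parquet
  file_format = (type = parquet);

-- The loaded value is now a flat array, e.g. [1, 2, 3]:
select v:simple_list from parquet_demo;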

Nested Lists in Parquet LIST Values Stored in Single Row

A LIST value in Parquet data can store multiple levels of nested lists. As announced previously, with this release, the handling of nested lists loaded into Snowflake tables has changed as follows:

Previously:

Nested lists in a LIST value were loaded into separate rows.

This behavior did not comply with the Apache Parquet specification.

Currently:

The entire LIST value, including all levels of nested lists, is loaded into a single row.

With this change, the behavior now complies with the specification.

 

For example, suppose that a list column in a Parquet file contains the value LIST [[1, 2], [3, 4]].
When the value was loaded into a VARIANT column in Snowflake, the previous structure was as follows:

"multi_level_list": {
    "list": [
        {                    
        "item": {
            "list": [
                {
                "item": 1
                },
                {
                "item": 2
                }
            ]                
            }
        }
    ]
}
"multi_level_list": {
    "list": [
        {
        "item": {
            "list": [        
                {
                "item": 3
                },
                {
                "item": 4
                }
            ]                
            }
        }
    ]
}


Note that the data was loaded into 2 rows.

With this release, the loaded VARIANT value would be as follows:

"multi_level_list": {
    "list": [
    {                    
        "item": {
            "list": [        
                {
                "item": 1
                },
                {
                "item": 2
                }
            ]                
        },
    },
    {
        "item": {
            "list": [        
                {
                "item": 3
                },
                {
                "item": 4
                }
            ]                
        }
    }
}


Note that the data is loaded into a single row.

File Formats: NULL_IF Format Option Ignores Escape Character

The NULL_IF file format option defines a list of strings to convert to or from SQL NULL when loading or unloading data. COPY INTO <location> statements convert SQL NULL in any source tables to the first string in the NULL_IF list.

As announced previously, with this release, when TYPE = CSV in the file format definition, the NULL_IF behavior has changed as follows:

Previously:

If the NULL_IF replacement string included the escape character, the character was escaped twice; that is, each escape character in the NULL_IF replacement string was doubled in the output.

As a result, for example, when NULL_IF was set to \N (with any number of backslashes) and the escape character file format option was set to the backslash character (\\), it was not possible to output \N to an unloaded file. The number of backslashes in the output file was always an even number.

The escape character is specified in either the ESCAPE_UNENCLOSED_FIELD or ESCAPE file format option.

Currently:

The escape character, if any, is not applied to the NULL_IF replacement string.

 

The following examples show how the NULL_IF string is applied when unloading SQL NULL to data files in stages. In each example, the escape character is \ (backslash). A series of COPY INTO <location> statements unloads rows from the mytable table to the mystage stage.
In the following example, the NULL_IF file format option is set to \\N:

COPY INTO @mystage
  FROM mytable 
  FILE_FORMAT = (
    TYPE = CSV NULL_IF =  ('\\N') 
    ESCAPE_UNENCLOSED_FIELD = '\\');

Previous string written to unloaded files: \\N
Current string written to unloaded files: \N 

In the following example, the NULL_IF file format option is set to \\\N:

COPY INTO @mystage
  FROM mytable 
  FILE_FORMAT = (
    TYPE = CSV NULL_IF =  ('\\\N') 
    ESCAPE_UNENCLOSED_FIELD = '\\');

Previous string written to unloaded files: \\N
Current string written to unloaded files: \N 

In the following example, the NULL_IF file format option is set to \\\\N:

COPY INTO @mystage
  FROM mytable
  FILE_FORMAT = (
    TYPE = CSV NULL_IF = ('\\\\N')
    ESCAPE_UNENCLOSED_FIELD = '\\');

Previous string written to unloaded files: \\\\N
Current string written to unloaded files: \\N