github.com/snowflakedb/gosnowflake@v1.9.0/doc.go (about)

     1  /*
     2  Package gosnowflake is a pure Go Snowflake driver for the database/sql package.
     3  
     4  Clients can use the database/sql package directly. For example:
     5  
     6  	import (
     7  		"database/sql"
     8  
     9  		_ "github.com/snowflakedb/gosnowflake"
    10  
    11  		"log"
    12  	)
    13  
    14  	func main() {
    15  		db, err := sql.Open("snowflake", "user:password@my_organization-my_account/mydb")
    16  		if err != nil {
    17  			log.Fatal(err)
    18  		}
    19  		defer db.Close()
    20  		...
    21  	}
    22  
    23  # Connection String
    24  
    25  Use the Open() function to create a database handle with connection parameters:
    26  
    27  	db, err := sql.Open("snowflake", "<connection string>")
    28  
    29  The Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats):
    30  
    31    - username[:password]@<account_identifier>/dbname/schemaname[?param1=value&...&paramN=valueN]
    32    - username[:password]@<account_identifier>/dbname[?param1=value&...&paramN=valueN]
    33    - username[:password]@hostname:port/dbname/schemaname?account=<account_identifier>[&param1=value&...&paramN=valueN]
    34  
where all parameter values must be escaped. Alternatively, use Config and DSN to construct a DSN string.
    36  
    37  For information about account identifiers, see the Snowflake documentation
    38  (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html).
    39  
    40  The following example opens a database handle with the Snowflake account
    41  named "my_account" under the organization named "my_organization",
    42  where the username is "jsmith", password is "mypassword", database is "mydb",
    43  schema is "testschema", and warehouse is "mywh":
    44  
    45  	db, err := sql.Open("snowflake", "jsmith:mypassword@my_organization-my_account/mydb/testschema?warehouse=mywh")
    46  
    47  # Connection Parameters
    48  
    49  The connection string (DSN) can contain both connection parameters (described below) and session parameters
    50  (https://docs.snowflake.com/en/sql-reference/parameters.html).
    51  
    52  The following connection parameters are supported:
    53  
    54    - account <string>: Specifies your Snowflake account, where "<string>" is the account
    55      identifier assigned to your account by Snowflake.
    56      For information about account identifiers, see the Snowflake documentation
    57      (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html).
    58  
    59      If you are using a global URL, then append the connection group and ".global"
    60      (e.g. "<account_identifier>-<connection_group>.global"). The account identifier and the
    61      connection group are separated by a dash ("-"), as shown above.
    62  
    63      This parameter is optional if your account identifier is specified after the "@" character
    64      in the connection string.
    65  
    66    - region <string>: DEPRECATED. You may specify a region, such as
    67      "eu-central-1", with this parameter. However, since this parameter
    68      is deprecated, it is best to specify the region as part of the
    69      account parameter. For details, see the description of the account
    70      parameter.
    71  
    72    - database: Specifies the database to use by default in the client session
    73      (can be changed after login).
    74  
    75    - schema: Specifies the database schema to use by default in the client
    76      session (can be changed after login).
    77  
    78    - warehouse: Specifies the virtual warehouse to use by default for queries,
    79      loading, etc. in the client session (can be changed after login).
    80  
    81    - role: Specifies the role to use by default for accessing Snowflake
    82      objects in the client session (can be changed after login).
    83  
    84    - passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login.
    85  
    86    - passcodeInPassword: false by default. Set to true if the MFA passcode is embedded
    87      in the login password. Appends the MFA passcode to the end of the password.
    88  
  - loginTimeout: Specifies the timeout, in seconds, for login. The default
    is 60 seconds. The login request gives up once this timeout elapses.

  - requestTimeout: Specifies the timeout, in seconds, for a query to complete.
    0 (zero) specifies that the driver should wait indefinitely. The default is 0 seconds.
    The query request gives up once this timeout elapses.
    96  
  - authenticator: Specifies the authenticator to use for authenticating user credentials:

      - To use the internal Snowflake authenticator, specify snowflake (Default).

      - To authenticate through Okta, specify https://<okta_account_name>.okta.com (the URL prefix for Okta).

      - To authenticate using your IDP via a browser, specify externalbrowser.

      - To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below).
   106  
   107    - application: Identifies your application to Snowflake Support.
   108  
   109    - insecureMode: false by default. Set to true to bypass the Online
   110      Certificate Status Protocol (OCSP) certificate revocation check.
   111      IMPORTANT: Change the default value for testing or emergency situations only.
   112  
   113    - token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator.
   114  
  - client_session_keep_alive: Set to true to run a heartbeat in the background every hour to keep
    the connection alive, so that the connection session never expires. Use this option with care,
    as it keeps the session open for as long as the process is alive.
   118  
  - ocspFailOpen: true by default. Set to false to make the OCSP check run in fail-closed mode.
   120  
  - validateDefaultParameters: true by default. Set to false to disable the existence and privilege
    checks for the Database, Schema, Warehouse and Role when setting up the connection.
   123  
   124    - tracing: Specifies the logging level to be used. Set to error by default.
   125      Valid values are trace, debug, info, print, warning, error, fatal, panic.
   126  
  - disableQueryContextCache: disables both parsing of the query context returned from the server
    and resending it to the server. Default value is false.

  - clientConfigFile: specifies the location of the client configuration JSON file,
    in which you can configure the Easy Logging feature.
   132  
   133  All other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html).
   134  For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding:
   135  
   136  	...&TIMESTAMP_OUTPUT_FORMAT=MM-DD-YYYY...
   137  
   138  A complete connection string looks similar to the following:
   139  
	my_user_name:my_password@ac123456/my_database/my_schema?warehouse=inventory_warehouse&role=my_user_role&DATE_OUTPUT_FORMAT=YYYY-MM-DD
	                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	                                                                 connection                connection                 session
	                                                                  parameter                parameter                 parameter
   144  
   145  Session-level parameters can also be set by using the SQL command "ALTER SESSION"
   146  (https://docs.snowflake.com/en/sql-reference/sql/alter-session.html).
   147  
Alternatively, use the OpenWithConfig() function to create a database handle with the specified Config.
   149  
   150  # Proxy
   151  
   152  The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting.
   153  
   154  NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy.
   155  
   156  NO_PROXY does not support wildcards. Each value specified should be one of the following:
   157  
   158    - The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com".
   159  
   160    - An IP address, for example "192.196.1.15".
   161  
   162  If more than one value is specified, values should be separated by commas, for example:
   163  
   164  	no_proxy=localhost,.my_company.com,xy12345.snowflakecomputing.com,192.168.1.15,192.168.1.16
   165  
   166  # Logging
   167  
By default, the driver's built-in logger exposes logrus's FieldLogger and defaults to the INFO level.
Users can use SetLogger in driver.go to set a customized logger for the gosnowflake package.

To enable debug logging for the driver, call SetLogLevel("debug") on the SFLogger interface,
as shown in the demo code at cmd/logger.go. To redirect the logs, use the SFLogger.SetOutput method.
   173  
   174  # Query tag
   175  
   176  A custom query tag can be set in the context. Each query run with this context
   177  will include the custom query tag as metadata that will appear in the Query Tag
   178  column in the Query History log. For example:
   179  
   180  	queryTag := "my custom query tag"
   181  	ctxWithQueryTag := WithQueryTag(ctx, queryTag)
   182  	rows, err := db.QueryContext(ctxWithQueryTag, query)
   183  
   184  # Query request ID
   185  
   186  A specific query request ID can be set in the context and will be passed through
   187  in place of the default randomized request ID. For example:
   188  
   189  	requestID := ParseUUID("6ba7b812-9dad-11d1-80b4-00c04fd430c8")
   190  	ctxWithID := WithRequestID(ctx, requestID)
   191  	rows, err := db.QueryContext(ctxWithID, query)
   192  
   193  # Last query ID
   194  
If you need the query ID for your query, you have to use the raw connection.
   196  
   197  For queries:
```

	err := conn.Raw(func(x any) error {
		stmt, err := x.(driver.ConnPrepareContext).PrepareContext(ctx, "SELECT 1")
		if err != nil {
			return err
		}
		rows, err := stmt.(driver.StmtQueryContext).QueryContext(ctx, nil)
		if err != nil {
			return err
		}
		rows.(SnowflakeRows).GetQueryID()
		stmt.(SnowflakeStmt).GetQueryID()
		return nil
	})

```
   209  
   210  For execs:
```

	err := conn.Raw(func(x any) error {
		stmt, err := x.(driver.ConnPrepareContext).PrepareContext(ctx, "INSERT INTO TestStatementQueryIdForExecs VALUES (1)")
		if err != nil {
			return err
		}
		result, err := stmt.(driver.StmtExecContext).ExecContext(ctx, nil)
		if err != nil {
			return err
		}
		result.(SnowflakeResult).GetQueryID()
		stmt.(SnowflakeStmt).GetQueryID()
		return nil
	})

```
   222  
   223  # Fetch Results by Query ID
   224  
   225  The result of your query can be retrieved by setting the query ID in the WithFetchResultByID context.
   226  ```
   227  
	// Get the query ID using the raw connection as mentioned above:
	var queryID string
	err := conn.Raw(func(x any) error {
		rows1, err := x.(driver.QueryerContext).QueryContext(ctx, "SELECT 1", nil)
		if err != nil {
			return err
		}
		queryID = rows1.(sf.SnowflakeRows).GetQueryID()
		return nil
	})

	// Update the Context object to specify the query ID
	fetchResultByIDCtx := sf.WithFetchResultByID(ctx, queryID)

	// Execute an empty string query
	rows2, err := db.QueryContext(fetchResultByIDCtx, "")

	// Retrieve the results as usual
	for rows2.Next() {
		err = rows2.Scan(...)
		...
	}
   246  
   247  ```
   248  
   249  # Canceling Query by CtrlC
   250  
As of version 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a
query/command by Ctrl+C, add an os.Interrupt trap in the context passed to methods that accept a context parameter
(e.g. QueryContext, ExecContext).
   254  
   255  	// handle interrupt signal
   256  	ctx, cancel := context.WithCancel(context.Background())
   257  	c := make(chan os.Signal, 1)
   258  	signal.Notify(c, os.Interrupt)
   259  	defer func() {
   260  		signal.Stop(c)
   261  	}()
   262  	go func() {
   263  		select {
   264  		case <-c:
   265  			cancel()
   266  		case <-ctx.Done():
   267  		}
   268  	}()
   269  	... (connection)
   270  	// execute a query
   271  	rows, err := db.QueryContext(ctx, query)
   272  	... (Ctrl+C to cancel the query)
   273  
   274  See cmd/selectmany.go for the full example.
   275  
   276  # Supported Data Types
   277  
   278  The Go Snowflake Driver now supports the Arrow data format for data transfers
   279  between Snowflake and the Golang client. The Arrow data format avoids extra
   280  conversions between binary and textual representations of the data. The Arrow
   281  data format can improve performance and reduce memory consumption in clients.
   282  
   283  Snowflake continues to support the JSON data format.
   284  
   285  The data format is controlled by the session-level parameter
   286  GO_QUERY_RESULT_FORMAT. To use JSON format, execute:
   287  
   288  	ALTER SESSION SET GO_QUERY_RESULT_FORMAT = 'JSON';
   289  
   290  The valid values for the parameter are:
   291  
   292    - ARROW (default)
   293    - JSON
   294  
   295  If the user attempts to set the parameter to an invalid value, an error is
   296  returned.
   297  
   298  The parameter name and the parameter value are case-insensitive.
   299  
   300  This parameter can be set only at the session level.
   301  
   302  Usage notes:
   303  
   304    - The Arrow data format reduces rounding errors in floating point numbers. You might see slightly
   305      different values for floating point numbers when using Arrow format than when using JSON format.
   306      In order to take advantage of the increased precision, you must pass in the context.Context object
   307      provided by the WithHigherPrecision function when querying.
   308  
  - Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed
    in. Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural,
    expected data type as well.
   312  
   313    - For some numeric data types, the driver can retrieve larger values when using the Arrow format than
   314      when using the JSON format. For example, using Arrow format allows the full range of SQL NUMERIC(38,0)
   315      values to be retrieved, while using JSON format allows only values in the range supported by the
   316      Golang int64 data type.
   317  
   318      Users should ensure that Golang variables are declared using the appropriate data type for the full
   319      range of values contained in the column. For an example, see below.
   320  
   321  When using the Arrow format, the driver supports more Golang data types and
   322  more ways to convert SQL values to those Golang data types. The table below
   323  lists the supported Snowflake SQL data types and the corresponding Golang
   324  data types. The columns are:
   325  
   326   1. The SQL data type.
   327  
   328   2. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from
   329      Arrow data format via an interface{}.
   330  
   331   3. The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data
   332      from Arrow data format directly.
   333  
   334   4. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from
   335      JSON data format via an interface{}. (All returned values are strings.)
   336  
   337   5. The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from
   338      JSON data format directly.
   339  
   340      Go Data Types for Scan()
   341      ===================================================================================================================
   342      |                    ARROW                    |                    JSON
   343      ===================================================================================================================
    SQL Data Type          | Default Go Data Type   | Supported Go Data  | Default Go Data Type   | Supported Go Data
                           | for Scan() interface{} | Types for Scan()   | for Scan() interface{} | Types for Scan()
   346      ===================================================================================================================
   347      BOOLEAN              | bool                                        | string                 | bool
   348      -------------------------------------------------------------------------------------------------------------------
   349      VARCHAR              | string                                      | string
   350      -------------------------------------------------------------------------------------------------------------------
   351      DOUBLE               | float32, float64                  [1] , [2] | string                 | float32, float64
   352      -------------------------------------------------------------------------------------------------------------------
   353      INTEGER that         | int, int8, int16, int32, int64              | string                 | int, int8, int16,
   354      fits in int64        |                                   [1] , [2] |                        | int32, int64
   355      -------------------------------------------------------------------------------------------------------------------
   356      INTEGER that doesn't | int, int8, int16, int32, int64,  *big.Int   | string                 | error
   357      fit in int64         |                       [1] , [2] , [3] , [4] |
   358      -------------------------------------------------------------------------------------------------------------------
   359      NUMBER(P, S)         | float32, float64,  *big.Float               | string                 | float32, float64
   360      where S > 0          |                       [1] , [2] , [3] , [5] |
   361      -------------------------------------------------------------------------------------------------------------------
   362      DATE                 | time.Time                                   | string                 | time.Time
   363      -------------------------------------------------------------------------------------------------------------------
   364      TIME                 | time.Time                                   | string                 | time.Time
   365      -------------------------------------------------------------------------------------------------------------------
   366      TIMESTAMP_LTZ        | time.Time                                   | string                 | time.Time
   367      -------------------------------------------------------------------------------------------------------------------
   368      TIMESTAMP_NTZ        | time.Time                                   | string                 | time.Time
   369      -------------------------------------------------------------------------------------------------------------------
   370      TIMESTAMP_TZ         | time.Time                                   | string                 | time.Time
   371      -------------------------------------------------------------------------------------------------------------------
   372      BINARY               | []byte                                      | string                 | []byte
   373      -------------------------------------------------------------------------------------------------------------------
   374      ARRAY                | string                                      | string
   375      -------------------------------------------------------------------------------------------------------------------
   376      OBJECT               | string                                      | string
   377      -------------------------------------------------------------------------------------------------------------------
   378      VARIANT              | string                                      | string
   379  
   380      [1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan()
   381      method can lose low bits (lose precision), lose high bits (completely change the value), or result in error.
   382  
   383      [2] Attempting to convert from a higher precision data type to a lower precision data type via interface{}
   384      causes an error.
   385  
   386      [3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context
   387      returned by WithHigherPrecision().
   388  
   389      [4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to
   390      those data types by using .Int64()/.String()/.Uint64() methods. For an example, see below.
   391  
   392      [5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to
   393      those data types by using .Float32()/.String()/.Float64() methods. For an example, see below.
   394  
   395  Note: SQL NULL values are converted to Golang nil values, and vice-versa.
   396  
   397  The following example shows how to retrieve very large values using the math/big
   398  package. This example retrieves a large INTEGER value to an interface and then
   399  extracts a big.Int value from that interface. If the value fits into an int64,
   400  then the code also copies the value to a variable of type int64. Note that a
   401  context that enables higher precision must be passed in with the query.
   402  
   403  	import "context"
   404  	import "math/big"
   405  
   406  	...
   407  
   408  	var my_interface interface{}
   409  	var my_big_int_pointer *big.Int
   410  	var my_int64 int64
   411  	var rows snowflakeRows
   412  
   413  	...
	rows, err = db.QueryContext(WithHigherPrecision(context.Background()), <query>)
	rows.Scan(&my_interface)
	my_big_int_pointer, ok := my_interface.(*big.Int)
	if ok && my_big_int_pointer.IsInt64() {
   418  	    my_int64 = my_big_int_pointer.Int64()
   419  	}
   420  
If the value returned in "rows" is known to be a big.Int, then you can use the following instead of scanning into an interface
and then converting to a big.Int:
   423  
   424  	rows.Scan(&my_big_int_pointer)
   425  
If the value returned in "rows" contains a big.Int, then each of the following fails:
   427  
   428  	rows.Scan(&my_int64)
   429  
   430  	my_int64, _ = my_interface.(int64)
   431  
   432  Similar code and rules also apply to big.Float values.
   433  
   434  If you are not sure what data type will be returned, you can use code similar to the following to check the data type
   435  of the returned value:
   436  
   437  	// Create variables into which you can scan the returned values.
   438  	var i64 int64
   439  	var bigIntPtr *big.Int
   440  
   441  	for rows.Next() {
   442  	    // Get the data type info.
   443  	    column_types, err := rows.ColumnTypes()
   444  	    if err != nil {
   445  	        log.Fatalf("ERROR: ColumnTypes() failed. err: %v", err)
   446  	    }
   447  	    // The data type of the zeroeth column in the row.
   448  	    column_type := column_types[0].ScanType()
   449  	    // Choose the appropriate variable based on the data type.
   450  	    switch column_type {
   451  	        case reflect.TypeOf(i64):
   452  	            err = rows.Scan(&i64)
   453  	            fmt.Println("INFO: retrieved int64 value:")
   454  	            fmt.Println(i64)
   455  	        case reflect.TypeOf(bigIntPtr):
   456  	            err = rows.Scan(&bigIntPtr)
   457  	            fmt.Println("INFO: retrieved bigIntPtr value:")
   458  	            fmt.Println(bigIntPtr)
   459  	    }
   460  	}
   461  
   462  ## Arrow batches
   463  
You can retrieve data in a columnar format similar to the format the server returns.
You must use the `WithArrowBatches` context, similar to the following:

	var rows driver.Rows
	err = conn.Raw(func(x interface{}) error {
		rows, err = x.(driver.QueryerContext).QueryContext(ctx, query, nil)
		return err
	})

	...

	batches, err := rows.(sf.SnowflakeRows).GetArrowBatches()

	... // use Arrow records
   478  
   479  Limitations:
   480  
1. For some queries Snowflake may decide to return data in JSON format (examples: `SHOW PARAMETERS` or `ls @stage`). Such results cannot be fetched as Arrow batches.
2. Snowflake supports a wider timestamp range than the Arrow timestamp type can represent. Because of that, special treatment is needed (see below).
   483  
   484  ### Handling timestamps in Arrow batches
   485  
Snowflake returns timestamps natively (from backend to driver) in multiple formats.
The Arrow timestamp is an 8-byte data type, which is insufficient to handle the larger date and time ranges used by Snowflake.
Also, Snowflake supports 0-9 digit precision for fractional seconds, while Arrow supports only 3 (millisecond), 6 (microsecond), and 9 (nanosecond) digit precision.
Consequently, Snowflake uses a custom timestamp format in Arrow, which differs depending on the timestamp type and precision.
   490  
If you want to use timestamps in Arrow batches, you have two options:

 1. The Go driver can reduce the timestamp struct to a plain Arrow Timestamp if you set `WithArrowBatchesTimestampOption` to nanosecond, microsecond, millisecond or second.
    With nanosecond, some timestamp values might not fit into an Arrow timestamp, e.g. after the year 2262 or before 1677.
 2. You can use native Snowflake values. In that case you will receive complex structs as described above. To transform Snowflake values into the Golang time.Time struct you can use `ArrowSnowflakeTimestampToTime`.
    To enable this feature, you must use the `WithArrowBatchesTimestampOption` context with the value set to `UseOriginalTimestamp`.
   497  
   498  ### Invalid UTF-8 characters in Arrow batches
   499  Snowflake previously allowed users to upload data with invalid UTF-8 characters. Consequently, Arrow records containing string columns in Snowflake could include these invalid UTF-8 characters.
   500  However, according to the Arrow specifications (https://arrow.apache.org/docs/cpp/api/datatype.html
   501  and https://github.com/apache/arrow/blob/a03d957b5b8d0425f9d5b6c98b6ee1efa56a1248/go/arrow/datatype.go#L73-L74),
   502  Arrow string columns should only contain UTF-8 characters.
   503  
To address this issue and prevent potential downstream disruptions, the context option `enableArrowBatchesUtf8Validation` is introduced.
   505  When enabled, this feature iterates through all values in string columns, identifying and replacing any invalid characters with `�`.
   506  This ensures that Arrow records conform to the UTF-8 standards, preventing validation failures in downstream services like the Rust Arrow library that impose strict validation checks.
   507  
   508  ### WithHigherPrecision in Arrow batches
   509  To preserve BigDecimal values within Arrow batches, use `WithHigherPrecision`.
   510  This offers two main benefits: it helps avoid precision loss and defers the conversion to upstream services.
   511  Alternatively, without this setting, all non-zero scale numbers will be converted to float64, potentially resulting in loss of precision.
   512  Zero-scale numbers (DECIMAL256, DECIMAL128) will be converted to int64, which could lead to overflow.
   513  
   514  # Binding Parameters
   515  
   516  Binding allows a SQL statement to use a value that is stored in a Golang variable.
   517  
   518  Without binding, a SQL statement specifies values by specifying literals inside the statement.
For example, the following statement uses the literal value "42" in an UPDATE statement:
   520  
   521  	_, err = db.Exec("UPDATE table1 SET integer_column = 42 WHERE ID = 1000")
   522  
   523  With binding, you can execute a SQL statement that uses a value that is inside a variable. For example:
   524  
   525  	var my_integer_variable int = 42
   526  	_, err = db.Exec("UPDATE table1 SET integer_column = ? WHERE ID = 1000", my_integer_variable)
   527  
The "?" placeholder in the statement specifies that the SQL statement uses the value from a variable.
   529  
   530  Binding data that involves time zones can require special handling. For details, see the section
   531  titled "Timestamps with Time Zones".
   532  
Version 1.6.23 (and later) of the driver takes advantage of sql.Null types, which enable the proper handling of null parameters inside function calls, i.e.:
   534  
   535  	rows, err := db.Query("SELECT * FROM TABLE(SOMEFUNCTION(?))", sql.NullBool{})
   536  
Timestamp nullability requires wrapping the sql.NullTime type, because Snowflake provides several date and time types
that are all mapped to the single Go time.Time type:
   539  
   540  	rows, err := db.Query("SELECT * FROM TABLE(SOMEFUNCTION(?))", sf.TypedNullTime{sql.NullTime{}, sf.TimestampLTZType})
   541  
   542  # Binding Parameters to Array Variables
   543  
   544  Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL
   545  INSERT statement. You can use this technique to insert multiple rows in a single batch.
   546  
   547  As an example, the following code inserts rows into a table that contains integer, float, boolean, and string columns. The example
   548  binds arrays to the parameters in the INSERT statement.
   549  
   550  	// Create a table containing an integer, float, boolean, and string column.
   551  	_, err = db.Exec("create or replace table my_table(c1 int, c2 float, c3 boolean, c4 string)")
   552  	...
   553  	// Define the arrays containing the data to insert.
   554  	intArray := []int{1, 2, 3}
   555  	fltArray := []float64{0.1, 2.34, 5.678}
   556  	boolArray := []bool{true, false, true}
   557  	strArray := []string{"test1", "test2", "test3"}
   558  	...
	// Wrap the arrays in the Array() function and insert the data into the table.
   560  	_, err = db.Exec("insert into my_table values (?, ?, ?, ?)", Array(&intArray), Array(&fltArray), Array(&boolArray), Array(&strArray))
   561  
If the array contains SQL NULL values, use a slice of type []interface{}, which allows Golang nil values.
This feature is available in version 1.6.12 (and later) of the driver. For example,
   564  
	// Define the arrays containing the data to insert.
	strArray := make([]interface{}, 3)
	strArray[0] = "test1"
	strArray[1] = "test2"
	strArray[2] = nil // This line is optional as nil is the default value.
	...
	// Create a table and insert the data from the array as shown above.
	_, err = db.Exec("create or replace table my_table(c1 string)")
	_, err = db.Exec("insert into my_table values (?)", Array(&strArray))
	...
	// Use sql.NullString to fetch the string column that contains NULL values.
	var s sql.NullString
	rows, _ := db.Query("select * from my_table")
	for rows.Next() {
		err := rows.Scan(&s)
		if err != nil {
			log.Fatalf("Failed to scan. err: %v", err)
		}
		if s.Valid {
			fmt.Println("Retrieved value:", s.String)
		} else {
			fmt.Println("Retrieved value: NULL")
		}
	}
   589  
For a slice []interface{} containing time.Time values, a binding parameter flag must be passed to the Array() function along with the array variable.
This feature is available in version 1.6.13 (and later) of the driver. For example:
   592  
   593  	_, err = db.Exec("create or replace table my_table(c1 timestamp_ntz, c2 timestamp_ltz)")
   594  	_, err = db.Exec("insert into my_table values (?,?)", Array(&ntzArray, sf.TimestampNTZType), Array(&ltzArray, sf.TimestampLTZType))
   595  
   596  Note: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see
   597  Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html).
   598  
   599  # Batch Inserts and Binding Parameters
   600  
   601  When you use array binding to insert a large number of values, the driver can
   602  improve performance by streaming the data (without creating files on the local
   603  machine) to a temporary stage for ingestion. The driver automatically does this
   604  when the number of values exceeds a threshold (no changes are needed to user code).
   605  
   606  In order for the driver to send the data to a temporary stage, the user must have the following privilege on the schema:
   607  
   608  	CREATE STAGE
   609  
   610  If the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database.
   611  
   612  In addition, the current database and schema for the session must be set. If these are not set,
   613  the CREATE TEMPORARY STAGE command executed by the driver can fail with the following error:
   614  
   615  	CREATE TEMPORARY STAGE SYSTEM$BIND file_format=(type=csv field_optionally_enclosed_by='"')
   616  	Cannot perform CREATE STAGE. This session does not have a current schema. Call 'USE SCHEMA', or use a qualified name.
   617  
   618  For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command),
   619  see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html).
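
One way to avoid the failure above is to include the database and schema in the DSN so the session always has a current schema. The sketch below composes such a DSN with only the standard library; the account, credential, and object names are placeholders:

```go
package main

import "fmt"

// buildDSN composes a Snowflake DSN that includes the database and schema,
// so the session has a current schema and the driver's CREATE TEMPORARY
// STAGE command can succeed. All argument values used below are examples.
func buildDSN(user, password, account, db, schema, warehouse string) string {
	return fmt.Sprintf("%s:%s@%s/%s/%s?warehouse=%s",
		user, password, account, db, schema, warehouse)
}

func main() {
	dsn := buildDSN("jsmith", "mypassword", "my_organization-my_account",
		"mydb", "myschema", "mywh")
	fmt.Println(dsn)
	// db, err := sql.Open("snowflake", dsn) // requires the driver import
}
```

Alternatively, execute USE DATABASE and USE SCHEMA statements on the connection before the batch insert.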
   620  
   621  # Binding a Parameter to a Time Type
   622  
   623  Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable.
   624  However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data
   625  type to associate with the binding parameter. For example:
   626  
	dbt.mustExec("CREATE OR REPLACE TABLE tztest (id int, ntz timestamp_ntz, ltz timestamp_ltz)")
	// ...
	stmt, err := dbt.db.Prepare("INSERT INTO tztest(id,ntz,ltz) VALUES(1, ?, ?)")
	// ...
	tmValue := time.Now()
	// ... Is tmValue a TIMESTAMP_NTZ or TIMESTAMP_LTZ?
	_, err = stmt.Exec(tmValue, tmValue)
   634  
   635  To resolve this issue, a binding parameter flag is introduced that associates
   636  any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ
   637  or BINARY data type. The above example could be rewritten as follows:
   638  
   639  	import (
   640  		sf "github.com/snowflakedb/gosnowflake"
   641  	)
	dbt.mustExec("CREATE OR REPLACE TABLE tztest (id int, ntz timestamp_ntz, ltz timestamp_ltz)")
	// ...
	stmt, err := dbt.db.Prepare("INSERT INTO tztest(id,ntz,ltz) VALUES(1, ?, ?)")
	// ...
	tmValue := time.Now()
   647  	// ...
   648  	_, err = stmt.Exec(sf.DataTypeTimestampNtz, tmValue, sf.DataTypeTimestampLtz, tmValue)
   649  
   650  # Timestamps with Time Zones
   651  
   652  The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the
   653  offset-based Location types, which represent a collection of time offsets in
   654  use in a geographical area, such as CET (Central European Time) or UTC
   655  (Coordinated Universal Time). The offset-based Location data is generated and
   656  cached when a Go Snowflake Driver application starts, and if the given offset
   657  is not in the cache, it is generated dynamically.
   658  
   659  Currently, Snowflake does not support the name-based Location types (e.g. "America/Los_Angeles").
   660  
   661  For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location.
   662  
   663  # Binary Data
   664  
   665  Internally, this feature leverages the []byte data type. As a result, BINARY
   666  data cannot be bound without the binding parameter flag. In the following
   667  example, sf is an alias for the gosnowflake package:
   668  
   669  	var b = []byte{0x01, 0x02, 0x03}
   670  	_, err = stmt.Exec(sf.DataTypeBinary, b)
   671  
# Maximum Number of Result Set Chunk Downloaders
   673  
The driver downloads large result sets directly from cloud storage; this shifts workload from the Snowflake database to the
clients for scalability. The download is performed asynchronously by goroutines named "Chunk Downloader", so the driver can
fetch the next chunk of the result set while the application consumes the current one.

The application may change the number of chunk downloader workers if required. Note that this does not by itself reduce
the memory footprint; for that, consider the Custom JSON Decoder option.
   681  
   682  	import (
   683  		sf "github.com/snowflakedb/gosnowflake"
   684  	)
   685  	sf.MaxChunkDownloadWorkers = 2
   686  
# Custom JSON Decoder for Parsing Result Set (Experimental)
   688  
   689  The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows.
   690  
   691  	import (
   692  		sf "github.com/snowflakedb/gosnowflake"
   693  	)
   694  	sf.CustomJSONDecoderEnabled = true
   695  	...
   696  
This option can reduce the memory footprint to a half or even a quarter, but it can significantly degrade
performance depending on the environment. Test cases run on a Travis Ubuntu box showed a five times smaller memory
footprint but four times slower execution. Be cautious when using this option.
   700  
   701  # JWT authentication
   702  
   703  The Go Snowflake Driver supports JWT (JSON Web Token) authentication.
   704  
To enable this feature, construct the DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>",
or use a Config structure specifying:
   707  
   708  	config := &Config{
   709  		...
   710  		Authenticator: AuthTypeJwt,
   711  		PrivateKey:   "<your_private_key_struct in *rsa.PrivateKey type>",
   712  	}
   713  
The <your_private_key> should be a base64 URL-encoded PKCS8 RSA private key string. One way to encode a byte slice in
base64 URL format is with the base64.URLEncoding.EncodeToString() function.
   716  
   717  On the server side, you can alter the public key with the SQL command:
   718  
   719  	ALTER USER <your_user_name> SET RSA_PUBLIC_KEY='<your_public_key>';
   720  
The <your_public_key> should be a base64 Standard-encoded PKIX public key string. One way to encode a byte slice in base64
Standard format is with the base64.StdEncoding.EncodeToString() function.
   723  
To generate a valid key pair, you can execute the following commands in the shell:

	# Generate a 2048-bit PKCS8-encoded RSA private key.
	openssl genpkey -algorithm RSA \
		-pkeyopt rsa_keygen_bits:2048 \
		-pkeyopt rsa_keygen_pubexp:65537 | \
		openssl pkcs8 -topk8 -outform der > rsa-2048-private-key.p8

	# Extract the 2048-bit PKIX-encoded RSA public key from the private key.
	openssl pkey -pubout -inform der -outform der \
		-in rsa-2048-private-key.p8 \
		-out rsa-2048-public-key.spki
   736  
Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys.
For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on disk and
decrypt the key in your application using a library you trust.
   740  
JWT tokens are recreated on each retry and are valid (per the `exp` claim) for `jwtTimeout` seconds.
Each retry's timeout is configured by `jwtClientTimeout`.
Retries are limited by the total time of `loginTimeout`.
   744  
   745  # External browser authentication
   746  
The driver supports authentication via an external browser.

When a connection is created, the driver opens a browser window and asks the user to sign in.

To enable this feature, construct the DSN with the field "authenticator=EXTERNALBROWSER", or use a Config structure with
the following Authenticator specified:
   753  
   754  	config := &Config{
   755  		...
   756  		Authenticator: AuthTypeExternalBrowser,
   757  	}
   758  
External browser authentication implements a timeout mechanism. This prevents the driver from hanging indefinitely when
the browser window is closed or unresponsive.

The timeout defaults to 120s and can be changed by setting the DSN field "externalBrowserTimeout=240" (time in seconds)
or by using a Config structure with ExternalBrowserTimeout specified:
   764  
   765  	config := &Config{
   766  		ExternalBrowserTimeout: 240 * time.Second, // Requires time.Duration
   767  	}
   768  
   769  # Executing Multiple Statements in One Call
   770  
   771  This feature is available in version 1.3.8 or later of the driver.
   772  
   773  By default, Snowflake returns an error for queries issued with multiple statements.
   774  This restriction helps protect against SQL Injection attacks (https://en.wikipedia.org/wiki/SQL_injection).
   775  
The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a
single Golang function call. However, this opens up the possibility of SQL injection, so it should be used carefully.
The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to
inject a statement by appending one. More details are below.
   780  
   781  The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call:
   782  
   783    - db.QueryContext(): This function is used to execute queries, such as SELECT statements, that return a result set.
   784    - db.ExecContext(): This function is used to execute statements that don't return a result set (i.e. most DML and DDL statements).
   785  
   786  To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons,
   787  in the order in which the statements should be executed.
   788  
   789  To protect against SQL Injection attacks while using the multi-statement feature, pass a Context that specifies
   790  the number of statements in the string. For example:
   791  
   792  	import (
   793  		"context"
   794  		"database/sql"
   795  	)
   796  
   797  	var multi_statement_query = "SELECT c1 FROM t1; SELECT c2 FROM t2"
   798  	var number_of_statements = 2
	blank_context := context.Background()
   800  	multi_statement_context, _ := WithMultiStatement(blank_context, number_of_statements)
   801  	rows, err := db.QueryContext(multi_statement_context, multi_statement_query)
   802  
   803  When multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After
   804  you process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet().
   805  
   806  The following pseudo-code shows how to process multiple result sets:
   807  
   808  	Execute the statement and get the result set(s):
   809  
   810  		rows, err := db.QueryContext(ctx, multiStmtQuery)
   811  
   812  	Retrieve the rows in the first query's result set:
   813  
   814  		while rows.Next() {
   815  			err = rows.Scan(&variable_1)
   816  			if err != nil {
   817  				t.Errorf("failed to scan: %#v", err)
   818  			}
   819  			...
   820  		}
   821  
   822  	Retrieve the remaining result sets and the rows in them:
   823  
   824  		while rows.NextResultSet()  {
   825  
   826  			while rows.Next() {
   827  				...
   828  			}
   829  
   830  		}
   831  
   832  The function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each
   833  individual statement. For example, if your multi-statement query executed two UPDATE statements, each of which
   834  updated 10 rows, then the result returned would be 20. Individual row counts for individual statements are not
   835  available.
   836  
   837  The following code shows how to retrieve the result of a multi-statement query executed through db.ExecContext():
   838  
   839  	Execute the SQL statements:
   840  
   841  	    res, err := db.ExecContext(ctx, multiStmtQuery)
   842  
   843  	Get the summed result and store it in the variable named count:
   844  
   845  	    count, err := res.RowsAffected()
   846  
   847  Note: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors.
   848  For example, suppose you expected the return value to be 20 because you expected each UPDATE statement to
   849  update 10 rows. If one UPDATE statement updated 15 rows and the other UPDATE statement updated only 5
   850  rows, the total would still be 20. You would see no indication that the UPDATES had not functioned as
   851  expected.
   852  
   853  The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it
   854  still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query
   855  statements) is impractical.
   856  
   857  The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function
   858  returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the
   859  result set contains a single row that contains a single column; the value is the number of rows changed by the
   860  statement.
   861  
   862  If you want to execute a mix of query and non-query statements (e.g. a mix of SELECT and DML statements) in a
   863  multi-statement query, use QueryContext(). You can retrieve the result sets for the queries,
   864  and you can retrieve or ignore the row counts for the non-query statements.
   865  
   866  Note: PUT statements are not supported for multi-statement queries.
   867  
If a SQL statement passed to ExecContext() or QueryContext() fails to compile or execute, that statement is
aborted, and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected.
   870  
   871  For example, if the statements below are run as one multi-statement query, the multi-statement query fails on the
   872  third statement, and an exception is thrown.
   873  
   874  	CREATE OR REPLACE TABLE test(n int);
   875  	INSERT INTO TEST VALUES (1), (2);
   876  	INSERT INTO TEST VALUES ('not_an_integer');  -- execution fails here
   877  	INSERT INTO TEST VALUES (3);
   878  
   879  If you then query the contents of the table named "test", the values 1 and 2 would be present.
   880  
When using the QueryContext() and ExecContext() functions, Go code can check for errors in the usual way. For
example:
   883  
   884  	rows, err := db.QueryContext(ctx, multiStmtQuery)
   885  	if err != nil {
		log.Fatalf("failed to query multiple statements: %v", err)
   887  	}
   888  
   889  Preparing statements and using bind variables are also not supported for multi-statement queries.
   890  
   891  # Asynchronous Queries
   892  
   893  The Go Snowflake Driver supports asynchronous execution of SQL statements.
   894  Asynchronous execution allows you to start executing a statement and then
   895  retrieve the result later without being blocked while waiting. While waiting
   896  for the result of a SQL statement, you can perform other tasks, including
   897  executing other SQL statements.
   898  
   899  Most of the steps to execute an asynchronous query are the same as the
   900  steps to execute a synchronous query. However, there is an additional step,
   901  which is that you must call the WithAsyncMode() function to update
   902  your Context object to specify that asynchronous mode is enabled.
   903  
   904  In the code below, the call to "WithAsyncMode()" is specific
   905  to asynchronous mode. The rest of the code is compatible with both
   906  asynchronous mode and synchronous mode.
   907  
   908  	...
   909  
   910  	// Update your Context object to specify asynchronous mode:
   911  	ctx := WithAsyncMode(context.Background())
   912  
   913  	// Execute your query as usual by calling:
   914  	rows, _ := db.QueryContext(ctx, query_string)
   915  
   916  	// Retrieve the results as usual by calling:
   917  	for rows.Next()  {
   918  		err := rows.Scan(...)
   919  		...
   920  	}
   921  
   922  The function db.QueryContext() returns an object of type snowflakeRows
   923  regardless of whether the query is synchronous or asynchronous. However:
   924  
   925    - If the query is synchronous, then db.QueryContext() does not return until
   926      the query has finished and the result set has been loaded into the
   927      snowflakeRows object.
   928    - If the query is asynchronous, then db.QueryContext() returns a
   929      potentially incomplete snowflakeRows object that is filled in later
   930      in the background.
   931  
   932  The call to the Next() function of snowflakeRows is always synchronous (i.e. blocking).
   933  If the query has not yet completed and the snowflakeRows object (named "rows" in this
   934  example) has not been filled in yet, then rows.Next() waits until the result set has been filled in.
   935  
   936  More generally, calls to any Golang SQL API function implemented in snowflakeRows or
   937  snowflakeResult are blocking calls, and wait if results are not yet available.
   938  (Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(),
   939  snowflakeRows.columnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().)
   940  
Because the example code above executes only one query and no other activity, there is
no significant difference between asynchronous and synchronous behavior.
   943  The differences become significant if, for example, you want to perform some other
   944  activity after the query starts and before it completes. The example code below starts
a query, which runs in the background, and then retrieves the results later.
   946  
   947  This example uses small SELECT statements that do not retrieve enough data to require
   948  asynchronous handling. However, the technique works for larger data sets, and for
   949  situations where the programmer might want to do other work after starting the queries
   950  and before retrieving the results. For a more elaborative example please see cmd/async/async.go
   951  
	package main

	import (
		"context"
		"database/sql"
		"log"

		sf "github.com/snowflakedb/gosnowflake"
	)

	...

	func DemonstrateAsyncMode(db *sql.DB) {
		// Enable asynchronous mode
		ctx := sf.WithAsyncMode(context.Background())

		// Run the query with the asynchronous context
		rows, err := db.QueryContext(ctx, "select 1")
		if err != nil {
			log.Fatal(err)
		}
		defer rows.Close()

		// Do other work here while the query computes in the background.
		...

		// Get the data when you are ready to handle it. rows.Next()
		// blocks until the result set is available.
		var val int
		for rows.Next() {
			if err := rows.Scan(&val); err != nil {
				log.Fatal(err)
			}
		}

		...
	}
   988  
   989  # Support For PUT and GET
   990  
   991  The Go Snowflake Driver supports the PUT and GET commands.
   992  
   993  The PUT command copies a file from a local computer (the computer where the
   994  Golang client is running) to a stage on the cloud platform. The GET command
   995  copies data files from a stage on the cloud platform to a local computer.
   996  
   997  See the following for information on the syntax and supported parameters:
   998  
   999    - PUT: https://docs.snowflake.com/en/sql-reference/sql/put.html
  1000    - GET: https://docs.snowflake.com/en/sql-reference/sql/get.html
  1001  
# Using PUT
  1003  
  1004  The following example shows how to run a PUT command by passing a string to the
  1005  db.Query() function:
  1006  
  1007  	db.Query("PUT file://<local_file> <stage_identifier> <optional_parameters>")
  1008  
  1009  "<local_file>" should include the file path as well as the name. Snowflake recommends
  1010  using an absolute path rather than a relative path. For example:
  1011  
  1012  	db.Query("PUT file:///tmp/my_data_file @~ auto_compress=false overwrite=false")
  1013  
Different client platforms (e.g. Linux, Windows) have different path name
  1015  conventions. Ensure that you specify path names appropriately. This is
  1016  particularly important on Windows, which uses the backslash character as
  1017  both an escape character and as a separator in path names.
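
One way to handle this is to escape backslashes before interpolating the path into the PUT statement. The helper below is an illustrative sketch; the parameter values are examples:

```go
package main

import (
	"fmt"
	"strings"
)

// buildPutSQL quotes the local path and doubles backslashes, which matters
// on Windows where "\" is both the path separator and an escape character.
func buildPutSQL(localPath, stage string) string {
	escaped := strings.ReplaceAll(localPath, `\`, `\\`)
	return fmt.Sprintf("PUT 'file://%s' %s auto_compress=false overwrite=false",
		escaped, stage)
}

func main() {
	fmt.Println(buildPutSQL(`C:\tmp\my_data_file`, "@~"))
	// The resulting string could then be passed to db.Query().
}
```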
  1018  
  1019  To send information from a stream (rather than a file) use code similar to the code below.
  1020  (The ReplaceAll() function is needed on Windows to handle backslashes in the path to the file.)
  1021  
  1022  	fileStream, _ := os.Open(fname)
  1023  	defer func() {
  1024  		if fileStream != nil {
  1025  			fileStream.Close()
  1026  		}
  1027  	} ()
  1028  
  1029  	sql := "put 'file://%v' @%%%v auto_compress=true parallel=30"
  1030  	sqlText := fmt.Sprintf(sql,
  1031  		strings.ReplaceAll(fname, "\\", "\\\\"),
  1032  		tableName)
  1033  	dbt.mustExecContext(WithFileStream(context.Background(), fileStream),
  1034  		sqlText)
  1035  
  1036  Note: PUT statements are not supported for multi-statement queries.
  1037  
# Using GET
  1039  
  1040  The following example shows how to run a GET command by passing a string to the
  1041  db.Query() function:
  1042  
  1043  	db.Query("GET <internal_stage_identifier> file://<local_file> <optional_parameters>")
  1044  
  1045  "<local_file>" should include the file path as well as the name. Snowflake recommends using
  1046  an absolute path rather than a relative path. For example:
  1047  
  1048  	db.Query("GET @~ file:///tmp/my_data_file auto_compress=false overwrite=false")
  1049  
# Specifying temporary directory for encryption and compression
  1051  
PUT and GET operations require compression and/or encryption, which is performed in the OS temporary directory.
If you cannot use the default temporary directory for your OS, or you want to specify a different one, use the "tmpDirPath" DSN parameter.
Remember to URL-encode slashes.
Example:
  1056  
  1057  	u:p@a.r.c.snowflakecomputing.com/db/s?account=a.r.c&tmpDirPath=%2Fother%2Ftmp
  1058  
# Using custom configuration for PUT/GET
  1060  
If you want to override some default configuration options, you can use the `WithFileTransferOptions` context function.
There are multiple configuration parameters, including options for progress bars and compression.
  1063  */
  1064  package gosnowflake