Prepare release 1.8.0 (#229)
* Prepare release 1.8.0

Signed-off-by: Levko Kravets <[email protected]>

* Fix `npm audit` issues

Signed-off-by: Levko Kravets <[email protected]>

* Minor fixes to changelog

Signed-off-by: Levko Kravets <[email protected]>

---------

Signed-off-by: Levko Kravets <[email protected]>
kravets-levko authored Feb 8, 2024
1 parent 957791b commit bf3c9ee
Showing 3 changed files with 50 additions and 9 deletions.
41 changes: 41 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,46 @@
# Release History

## 1.8.0

### Highlights

- Retry failed CloudFetch requests (databricks/databricks-sql-nodejs#211)
- Fixed compatibility issues with Node@14 (databricks/databricks-sql-nodejs#219)
- Support Databricks OAuth on Azure (databricks/databricks-sql-nodejs#223)
- Support Databricks OAuth on GCP (databricks/databricks-sql-nodejs#224)
- Support LZ4 compression for Arrow and CloudFetch results (databricks/databricks-sql-nodejs#216)
- Fix OAuth M2M flow on Azure (databricks/databricks-sql-nodejs#228)

### OAuth on Azure

Some Azure instances now support the Databricks native OAuth flow (in addition to AAD OAuth). For backward
compatibility, the library will continue to use the AAD OAuth flow by default. To use Databricks native OAuth,
pass `useDatabricksOAuthInAzure: true` to `client.connect()`:

```ts
client.connect({
  // other options - host, port, etc.
  authType: 'databricks-oauth',
  useDatabricksOAuthInAzure: true,
  // other OAuth options if needed
});
```

Also, we fixed an issue with AAD OAuth where wrong scopes were passed for the M2M flow.
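
For reference, here is a minimal M2M (client credentials) sketch, assuming the `oauthClientId` /
`oauthClientSecret` connect options; the host, path, and credential values below are placeholders:

```ts
import { DBSQLClient } from '@databricks/sql';

const client = new DBSQLClient();

await client.connect({
  host: '****.azuredatabricks.net', // placeholder workspace host
  path: '/sql/1.0/warehouses/****', // placeholder HTTP path
  authType: 'databricks-oauth',
  oauthClientId: '<service principal application id>',
  oauthClientSecret: '<service principal secret>',
});
```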

### OAuth on GCP

We enabled OAuth support on GCP instances. Since it uses the Databricks native OAuth flow,
all the options are the same as for OAuth on AWS instances.
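
For illustration, a connection sketch mirroring the Azure example above (host and path are placeholders):

```ts
client.connect({
  host: '****.gcp.databricks.com', // placeholder GCP workspace host
  path: '/sql/1.0/warehouses/****', // placeholder HTTP path
  authType: 'databricks-oauth',
});
```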

### CloudFetch improvements

The library now automatically attempts to retry failed CloudFetch requests. Currently, the retry strategy
is quite basic, but it is going to be improved in the future.

Also, we implemented support for LZ4-compressed results (Arrow- and CloudFetch-based). It is enabled by default,
and compression will be used if the server supports it.
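
A short usage sketch, assuming the `useCloudFetch` option of `executeStatement`; since LZ4 decompression
is enabled by default, no extra options are needed for it:

```ts
const session = await client.openSession();
const operation = await session.executeStatement('SELECT * FROM large_table', {
  useCloudFetch: true, // fetch result chunks via cloud storage; failed requests are retried
});
const rows = await operation.fetchAll();
await operation.close();
await session.close();
```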

## 1.7.1

- Fix "Premature close" error which happened due to socket limit when intensively using library
16 changes: 8 additions & 8 deletions package-lock.json


2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "@databricks/sql",
"version": "1.7.1",
"version": "1.8.0",
"description": "Driver for connection to Databricks SQL via Thrift API.",
"main": "dist/index.js",
"types": "dist/index.d.ts",
