Nested filtering
add script to run arbitrary SQL

remove unused env vars

use the linux binary directly

remove checking out ndc-spec

install libssl3

generate the correct nested fields filtering predicate

generate the WHERE predicate for nested fields

get the recursion correct [WIP]

get the binary comparison operator for a Column

pass around the rootContainerAlias

rename visitNestedField1 to visitNestedField

refactor visitNestedField

change the file structure

fix the syntax of the query

remove redundant function

minor no-op refactors

fix tests

update README according to the latest template (#37)

Fix NDC tests (#40)

* fix NDC tests

* checkout v0.1.5

* update typescript ndc version

* npm audit fix

add sql generation tests

Add script to run ndc tests (#41)

* add an npm script to run ndc-tests

* generate configuration before starting the server

* test out github workflow

* try with ubuntu-latest

* add debug statement

* debug

* remove the logging into GHCR

Seed data into test container (#42)

* fix test

* setup data only when --setup-data option is provided

* set up the azure cosmos emulator

* fix syntax errors

* use the emulator values

* setup emulator data

* fix issues

* use await

* disable tls

* fix typo

* fix syntax

* add indexing bit while creating the container

* fail the process when encountering any error

figure out where the bug is

one more attempt at debugging

add debug statement

revert the cli changes

fix the imports

fix import 1

add nested array object field filtering test case

modify the failing tests according to the new data

add nested filtering tests
codingkarthik committed Sep 12, 2024
1 parent eb6bead commit f0e1d60
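
Taken together, these commits teach the SQL generator to translate NDC nested-field predicates into Cosmos DB SQL. As a rough sketch of the two query shapes involved (the helper and the field names below are hypothetical, loosely modeled on the Nobel-laureates test data, and are not the connector's actual code):

```typescript
// Hypothetical helper, for illustration only.
// Builds a Cosmos DB SQL comparison predicate for a nested field path,
// prefixed with the root container alias (cf. "pass around the rootContainerAlias").
function nestedFieldPredicate(
  rootAlias: string,
  fieldPath: string[],
  operator: string,
  value: string
): string {
  return `${[rootAlias, ...fieldPath].join(".")} ${operator} ${value}`;
}

// Filtering on a field of a nested object becomes a dotted-path predicate:
//   SELECT * FROM c WHERE c.address.city = "Stockholm"
const objectFilter = nestedFieldPredicate("c", ["address", "city"], "=", '"Stockholm"');

// Filtering on objects inside a nested array becomes an EXISTS subquery over
// the array, which is what the "exists" capability in the snapshot below advertises:
//   SELECT * FROM c WHERE EXISTS(SELECT VALUE p FROM p IN c.prizes WHERE p.category = "physics")
const arrayFilter = `EXISTS(SELECT VALUE p FROM p IN c.prizes WHERE ${nestedFieldPredicate(
  "p",
  ["category"],
  "=",
  '"physics"'
)})`;

console.log(objectFilter);
console.log(arrayFilter);
```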
Showing 50 changed files with 11,988 additions and 4,973 deletions.
72 changes: 37 additions & 35 deletions .github/workflows/ci.yml
```diff
@@ -6,56 +6,58 @@ on:
 jobs:
   unit_tests:
     name: Run NDC tests
-    runs-on: ubuntu-20.04
+    runs-on: ubuntu-latest
     steps:
       - name: Checkout (GitHub)
         uses: actions/checkout@v3
         with:
           path: cosmos
 
-      - name: Log in to GitHub Container Registry 📦
-        uses: docker/login-action@v2
-        with:
-          registry: ghcr.io
-          username: ${{ github.actor }}
-          password: ${{ secrets.GITHUB_TOKEN }}
+      - name: Set up Docker
+        uses: docker-practice/actions-setup-docker@master
 
+      - name: Pull and run Azure Cosmos DB Emulator
+        run: |
+          docker pull mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
+          docker run -d --name=cosmos-emulator -p 8081:8081 -p 10251:10251 -p 10252:10252 -p 10253:10253 -p 10254:10254 \
+            -e AZURE_COSMOS_EMULATOR_PARTITION_COUNT=10 \
+            -e AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=false \
+            mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
+
+      - name: Wait for Cosmos DB Emulator to be ready
+        run: |
+          timeout 300 bash -c 'until curl -ks https://localhost:8081/_explorer/emulator.pem > /dev/null; do sleep 5; done'
+          echo "Cosmos DB Emulator is ready"
+
+      - name: Download Cosmos DB Emulator certificate
+        run: |
+          curl -k https://localhost:8081/_explorer/emulator.pem > emulatorcert.crt
+          sudo cp emulatorcert.crt /usr/local/share/ca-certificates/
+          sudo update-ca-certificates
+
       - name: Build connector
         run: |
           cd cosmos
           npm install
           npm run build
 
-      - name: Generate the connector configuration
-        env:
-          AZURE_COSMOS_KEY: ${{ secrets.AZURE_COSMOS_KEY }}
-          AZURE_COSMOS_ENDPOINT: ${{ secrets.AZURE_COSMOS_ENDPOINT }}
-          AZURE_COSMOS_DB_NAME: ${{ secrets.AZURE_COSMOS_DB_NAME }}
+      - name: Download NDC Test Binary
         run: |
           cd cosmos
-          chmod +x ./dist/cli/index.js
-          ./dist/cli/index.js update
+          curl -L https://github.com/hasura/ndc-spec/releases/download/v0.1.6/ndc-test-x86_64-unknown-linux-gnu -o ndc-test
+          chmod +x ndc-test
+          ./ndc-test --version # Optional: Verify the binary works
+          sudo mv ndc-test /usr/local/bin/
 
-      - name: Start connector
+      - name: Verify the ndc-test binary is accessible
+        run: |
+          ndc-test -V
+
+      - name: Run tests
         env:
-          AZURE_COSMOS_KEY: ${{ secrets.AZURE_COSMOS_KEY }}
-          AZURE_COSMOS_ENDPOINT: ${{ secrets.AZURE_COSMOS_ENDPOINT }}
-          AZURE_COSMOS_DB_NAME: ${{ secrets.AZURE_COSMOS_DB_NAME }}
+          AZURE_COSMOS_KEY: C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
+          AZURE_COSMOS_ENDPOINT: https://localhost:8081
+          AZURE_COSMOS_DB_NAME: TestNobelLaureates
         run: |
           cd cosmos
-          export AZURE_COSMOS_KEY=Bh3EVxRH6BsUnger4tfXkKAvUenZhVosnvNpk185PyYZ9wd4qZO1kf7Y6hvERc7EUUJUE9j8RvDNACDbsgKqLg==
-          export AZURE_COSMOS_ENDPOINT=https://test-azure-cosmos-one.documents.azure.com:443/
-          export AZURE_COSMOS_DB_NAME=azure-cosmos-one
-          export HASURA_CONFIGURATION_DIRECTORY="."
-          npm run start serve -- --configuration . &
-
-      - name: Checkout ndc-spec
-        uses: actions/checkout@v3
-        with:
-          repository: hasura/ndc-spec
-          path: ndc-spec
-
-      - name: Run ndc-test
-        working-directory: ndc-spec
-        # temporary-solution: the --no-validate-responses flag is used to avoid the errors from the changes in ndc-spec in [PR:141](https://github.com/hasura/ndc-spec/pull/141)
-        run: cargo run --bin ndc-test -- replay --endpoint http://0.0.0.0:8080 --snapshots-dir ../cosmos/ndc-test-snapshots --no-validate-responses
+          npm run ndc-test -- --setup-emulator-data
```
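
Two notes on the new workflow: the hard-coded `AZURE_COSMOS_KEY` is not a leaked secret (it is the Cosmos DB emulator's fixed, publicly documented authentication key), and the certificate steps exist because the emulator serves a self-signed TLS certificate. A minimal sketch of talking to the emulator from Node, assuming `@azure/cosmos` is installed and the emulator is running locally (the TLS override mirrors the "disable tls" commit above; the container name is hypothetical):

```typescript
import { CosmosClient } from "@azure/cosmos";

// The emulator serves a self-signed certificate. Either trust emulatorcert.crt
// as the workflow does, or skip verification for local development only.
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";

const client = new CosmosClient({
  endpoint: "https://localhost:8081",
  // Fixed, publicly documented emulator key; safe to commit.
  key: "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==",
});

async function main(): Promise<void> {
  const { database } = await client.databases.createIfNotExists({ id: "TestNobelLaureates" });
  // "laureates" is a hypothetical container name for the seeded test data.
  const { container } = await database.containers.createIfNotExists({ id: "laureates" });
  console.log(`Connected to ${database.id}/${container.id}`);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```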
137 changes: 46 additions & 91 deletions README.md
````diff
@@ -9,137 +9,92 @@ With this connector, Hasura allows you to instantly create a real-time GraphQL A
 
 This connector is built using the [TypeScript Data Connector SDK](https://github.com/hasura/ndc-sdk-typescript) and implements the [Data Connector Spec](https://github.com/hasura/ndc-spec).
 
-- [Connector information in the Hasura Hub](https://hasura.io/connectors/azure-cosmos)
+- [See the listing in the Hasura Hub](https://hasura.io/connectors/azure-cosmos)
 - [Hasura V3 Documentation](https://hasura.io/docs/3.0)
 
 ## Features
 
 Below, you'll find a matrix of all supported features for the Azure Cosmos DB for NoSQL connector:
 
 | Feature                         | Supported | Notes |
-| ------------------------------- | --------- | ----- |
-| Native Queries + Logical Models |           |       |
-| Simple Object Query             |           |       |
-| Filter / Search                 |           |       |
-| Simple Aggregation              |           |       |
-| Sort                            |           |       |
-| Paginate                        |           |       |
-| Nested Objects                  |           |       |
-| Nested Arrays                   |           |       |
-| Nested Filtering                |           |       |
-| Nested Sorting                  |           |       |
-| Nested Relationships            |           |       |
+|---------------------------------|-----------|-------|
+| Native Queries + Logical Models |           |       |
+| Simple Object Query             |           |       |
+| Filter / Search                 |           |       |
+| Simple Aggregation              |           |       |
+| Sort                            |           |       |
+| Paginate                        |           |       |
+| Nested Objects                  |           |       |
+| Nested Arrays                   |           |       |
+| Nested Filtering                |           |       |
+| Nested Sorting                  |           |       |
+| Nested Relationships            |           |       |
 
 ## Before you get Started
 
 1. Create a [Hasura Cloud account](https://console.hasura.io)
-2. Install the [CLI](https://hasura.io/docs/3.0/cli/installation/)
-3. Install the [Hasura VS Code extension](https://marketplace.visualstudio.com/items?itemName=HasuraHQ.hasura)
-4. [Create a supergraph](https://hasura.io/docs/3.0/getting-started/init-supergraph)
-5. [Create a subgraph](https://hasura.io/docs/3.0/getting-started/init-subgraph)
+2. Please ensure you have the [DDN CLI](https://hasura.io/docs/3.0/cli/installation) and [Docker](https://docs.docker.com/engine/install/) installed
+3. [Create a supergraph](https://hasura.io/docs/3.0/getting-started/init-supergraph)
+4. [Create a subgraph](https://hasura.io/docs/3.0/getting-started/init-subgraph)
 
 ## Using the connector
 
-To use the Azure Cosmos DB for NoSQL connector, follow these steps in a Hasura project:
-(Note: for more information on the following steps, please refer to the Postgres connector documentation [here](https://hasura.io/docs/3.0/getting-started/connect-to-data/connect-a-source))
+The steps below explain how to initialize and configure a connector for local development. You can learn how to deploy a
+connector — after it's been configured — [here](https://hasura.io/docs/3.0/getting-started/deployment/deploy-a-connector).
+
+## Using the Azure Cosmos DB for NoSQL connector
 
-### 1. Init the connector
-(Note: here and following we are naming the subgraph "my_subgraph" and the connector "my_azure_cosmos")
+### Step 1: Authenticate your CLI session
 
-```bash
-ddn connector init my_azure_cosmos --subgraph my_subgraph/subgraph.yaml --hub-connector hasura/azure-cosmos --configure-port 8081 --add-to-compose-file compose.yaml
-```
-
-### 2. Add your Azure Cosmos DB for NoSQL credentials
-
-Add your credentials to `my_subgraph/connector/my_azure_cosmos/.env.local`:
-
-```env title="my_subgraph/connector/my_azure_cosmos/.env.local"
-OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://local.hasura.dev:4317
-OTEL_SERVICE_NAME=my_subgraph_my_azure_cosmos
-AZURE_COSMOS_DB_NAME=<YOUR_AZURE_DB_NAME>
-AZURE_COSMOS_ENDPOINT=<YOUR_AZURE_COSMOS_ENDPOINT>
-AZURE_COSMOS_KEY=<YOUR_AZURE_COSMOS_KEY>
-AZURE_COSMOS_NO_OF_ROWS_TO_FETCH=<NO-OF-ROWS-TO-FETCH>
-```
+```bash
+ddn auth login
+```
 
-Note: `AZURE_COSMOS_NO_OF_ROWS_TO_FETCH` is an optional field, with 100 rows to be fetched by default.
-
-### 3. Introspect your Database
+### Step 2: Configure the connector
 
-From the root of your project run:
+Once you have an initialized supergraph and subgraph, run the initialization command in interactive mode while
+providing a name for the connector in the prompt:
 
-```bash title="From the root of your project run:"
-ddn connector introspect --connector my_subgraph/connector/my_azure_cosmos/connector.local.yaml
-```
+```bash
+ddn connector init <connector-name> -i
+```
 
-If you look at the `config.json` for your connector, you'll see metadata describing your Azure Cosmos DB for NoSQL mappings.
+#### Step 2.1: Choose `hasura/azure-cosmos` from the list
 
-### 4. Restart the services
+#### Step 2.2: Choose a port for the connector
 
-Let's restart the docker compose services. Run the following from the root of your project:
+The CLI will ask for a specific port to run the connector on. Choose a port that is not already in use or use the
+default suggested port.
 
-```bash title="From the root of your project run:"
-HASURA_DDN_PAT=$(ddn auth print-pat) docker compose up --build --watch
-```
-
-The schema of the database can be viewed at http://localhost:8081/schema.
+#### Step 2.3: Provide the env vars for the connector
 
-### 5. Create the Hasura metadata
-
-In a new terminal tab from your project's root directory run:
+| Name                             | Description                                                                   | Required | Default |
+|----------------------------------|-------------------------------------------------------------------------------|----------|---------|
+| AZURE_COSMOS_KEY                 | Primary/secondary key associated with the Azure Cosmos DB for NoSQL account  | Yes      | N/A     |
+| AZURE_COSMOS_ENDPOINT            | Endpoint of the Azure Cosmos DB for NoSQL account                             | Yes      | N/A     |
+| AZURE_COSMOS_DB_NAME             | Name of the database                                                          | Yes      | N/A     |
+| AZURE_COSMOS_NO_OF_ROWS_TO_FETCH | Maximum number of rows to fetch per container to infer the schema (optional) | No       | 100     |
 
-```bash title="Run the following from the root of your project:"
-ddn connector-link add my_azure_cosmos --subgraph my_subgraph/subgraph.yaml --configure-host http://local.hasura.dev:8081 --target-env-file my_subgraph/.env.my_subgraph.local
-```
-
-The above step will add the following env vars to the `.env.my_subgraph.local` file.
-
-```env title="my_subgraph/.env.my_subgraph.local"
-MY_SUBGRAPH_MY_AZURE_COSMOS_READ_URL=http://local.hasura.dev:8081
-MY_SUBGRAPH_MY_AZURE_COSMOS_WRITE_URL=http://local.hasura.dev:8081
-```
-
-The generated file has two environment variables — one for reads and one for writes.
-Each key is prefixed by the subgraph name, an underscore, and the name of the
-connector.
+### Step 3: Introspect the connector
 
-### 6. Update the new DataConnectorLink object
-
-Finally, now that our `DataConnectorLink` has the correct environment variables configured for the Azure Cosmos DB for NoSQL connector,
-we can run the update command to have the CLI look at the configuration JSON and transform it to reflect our database's
-schema in `hml` format. From your project's root directory, run:
-
-```bash title="From the root of your project, run:"
-ddn connector-link update my_azure_cosmos --subgraph my_subgraph/subgraph.yaml --env-file my_subgraph/.env.my_subgraph.local
-```
+```bash
+ddn connector introspect <connector-name>
+```
 
-After this command runs, you can open your `my_subgraph/metadata/my_azure_cosmos.hml` file and see your metadata completely
-scaffolded out for you 🎉
+This will generate a `configuration.json` file containing the schema of your Azure Cosmos DB for NoSQL database.
 
-### 7. Import _all_ your indices
-
-You can do this with just one command. From your project's root directory, run:
+### Step 4: Add your resources
 
-```bash title="From the root of your project, run:"
-ddn connector-link update my_azure_cosmos --subgraph my_subgraph/subgraph.yaml --env-file my_subgraph/.env.my_subgraph.local --add-all-resources
-```
+```bash
+ddn connector-link add-resources <connector-name>
+```
 
-### 8. Create a supergraph build
-
-Pass the `local` subcommand along with specifying the output directory as `./engine` in the root of the project. This
-directory is used by the docker-compose file to serve the engine locally. From your project's root directory, run:
-
-```bash title="From the root of your project, run:"
-ddn supergraph build local --output-dir engine --subgraph-env-file my_subgraph:my_subgraph/.env.my_subgraph.local
-```
-
-You can now navigate to
-[`https://console.hasura.io/local/graphql?url=http://localhost:3000`](https://console.hasura.io/local/graphql?url=http://localhost:3000)
-and interact with your API using the Hasura Console.
+This command will track all the containers in your Azure Cosmos DB for NoSQL database as [Models](https://hasura.io/docs/3.0/supergraph-modeling/models).
 
 ## Contributing
 
 We're happy to receive any contributions from the community. Please refer to our [development guide](./docs/development.md).
````
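
For example, Step 2.3's prompts can be answered with the emulator values the CI workflow above uses (substitute your own endpoint, key, and database name for a real account; the row-fetch limit shown is simply the documented default):

```env
AZURE_COSMOS_ENDPOINT=https://localhost:8081
AZURE_COSMOS_KEY=C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
AZURE_COSMOS_DB_NAME=TestNobelLaureates
AZURE_COSMOS_NO_OF_ROWS_TO_FETCH=100
```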
9 changes: 6 additions & 3 deletions ndc-test-snapshots/capabilities
```diff
@@ -1,7 +1,10 @@
 {
-  "version": "0.1.0",
+  "version": "0.1.5",
   "capabilities": {
-    "query": {},
+    "query": {
+      "nested_fields": {},
+      "exists": {}
+    },
     "mutation": {}
   }
 }
```
