Support table sharing when using a catalog account #905
Commits on Sep 15, 2023
Add Additional Error Messages for KMS Key lookup on imported dataset (data-dot-all#748)
### Feature or Bugfix
- Feature Enhancement
### Detail
- Add additional error messages for the KMS key lookup performed when importing a new dataset:
  - one error message indicating whether the KMS key alias exists
  - one error message indicating whether the PivotRole has permission to describe the KMS key
### Relates
- data-dot-all#712
### Security
Please answer the questions below briefly where applicable, or write `N/A`. Based on [OWASP 10](https://owasp.org/Top10/en/).
- Does this PR introduce or modify any input fields or queries - this includes fetching data from storage outside the application (e.g. a database, an S3 bucket)?
  - Is the input sanitized?
  - What precautions are you taking before deserializing the data you consume?
  - Is injection prevented by parametrizing queries?
  - Have you ensured no `eval` or similar functions are used?
- Does this PR introduce any functionality or component that requires authorization?
  - How have you ensured it respects the existing AuthN/AuthZ mechanisms?
  - Are you logging failed auth attempts?
- Are you using or adding any cryptographic features?
  - Do you use standard, proven implementations?
  - Are the used keys controlled by the customer? Where are they stored?
- Are you introducing any new policies/roles/users?
  - Have you used the least-privilege principle? How?

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
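The two checks described above could look roughly like the boto3 sketch below; the function name, exception types, and message wording are illustrative assumptions, not the actual data.all implementation.

```python
# Hypothetical sketch of the two KMS lookup checks; names and messages are illustrative.
import boto3
from botocore.exceptions import ClientError


def check_imported_kms_key(account_id: str, region: str, key_alias: str) -> None:
    kms = boto3.client("kms", region_name=region)

    # Check 1: does the KMS key alias exist in the target account/region?
    paginator = kms.get_paginator("list_aliases")
    alias_names = {a["AliasName"] for page in paginator.paginate() for a in page.get("Aliases", [])}
    if f"alias/{key_alias}" not in alias_names:
        raise ValueError(
            f"KMS key alias 'alias/{key_alias}' was not found in account {account_id} ({region})"
        )

    # Check 2: is the PivotRole allowed to describe the key?
    try:
        kms.describe_key(KeyId=f"alias/{key_alias}")
    except ClientError as error:
        raise PermissionError(
            f"PivotRole cannot describe KMS key 'alias/{key_alias}': {error}"
        ) from error
```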
Commit: ad4ab1f
Commits on Sep 19, 2023
Get Latest in main to v2m1m0 (data-dot-all#771)
### Feature or Bugfix
- NA
### Detail
- Get latest code in `main` to `v2m1m0` branch to keep it in sync
### Relates
- NA
### Security
NA

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dlpzx <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: jaidisido <[email protected]>
Co-authored-by: dlpzx <[email protected]>
Co-authored-by: mourya-33 <[email protected]>
Commit: dbbef3c
Commits on Sep 26, 2023
Handle Environment Import of IAM service roles (data-dot-all#749)
### Feature or Bugfix
- Enhancement / Bugfix
### Detail
- When creating an environment and specifying a default environment IAM role, we assume it has the structure `arn:aws:iam::ACCOUNT:role/NAME_SPECIFIED`.
- This does not work when the role ARN contains a service path, as with SSO: `arn:aws:iam::ACCOUNT:role/sso/NAME_SPECIFIED`.
- This causes issues when importing an IAM role for an invited team in an environment and/or with dataset sharing.
- This PR takes the full IAM role ARN when importing the IAM role in order to correctly determine the role name.
### Relates
- data-dot-all#695
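A minimal sketch of the role-name extraction described above, using a plain string split; the helper name is hypothetical and not the actual data.all code.

```python
# Hypothetical helper: derive the role name from a full IAM role ARN, including ARNs
# that carry a service path such as /sso/ or /service-role/.
def role_name_from_arn(role_arn: str) -> str:
    resource = role_arn.split(":", 5)[5]          # e.g. "role/sso/NAME_SPECIFIED"
    if not resource.startswith("role/"):
        raise ValueError(f"Not an IAM role ARN: {role_arn}")
    return resource.split("/")[-1]                # the last path segment is the role name


assert role_name_from_arn("arn:aws:iam::111122223333:role/dataall-env-role") == "dataall-env-role"
assert role_name_from_arn("arn:aws:iam::111122223333:role/sso/AWSReservedSSO_Admin") == "AWSReservedSSO_Admin"
```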
Commit: d096160
Commits on Oct 5, 2023
Build Compliant Names for Opensearch Resources (data-dot-all#750)
### Feature or Bugfix
- Enhancement / Bugfix
### Detail
- Ensure the names passed for the OpenSearch Domain and the OpenSearch Serverless Collection, Access Policy, Security Policy, and VPC Endpoint all follow the naming conventions required by the service:
  - the name must start with a lowercase letter
  - it must be between 3 and 28 characters long
  - valid characters are a-z (lowercase only), 0-9, and - (hyphen)
### Relates
- data-dot-all#540

Co-authored-by: dlpzx <[email protected]>
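The constraints above can be enforced with a small sanitizer along these lines; this is an illustrative sketch, not the project's actual helper.

```python
import re


# Illustrative sketch: force a raw name into the OpenSearch constraints listed above
# (lowercase letters, digits, hyphens; starts with a letter; 3-28 characters).
def build_compliant_opensearch_name(raw_name: str, max_length: int = 28) -> str:
    name = re.sub(r"[^a-z0-9-]", "-", raw_name.lower())  # keep only valid characters
    name = name.lstrip("-0123456789")                     # must start with a lowercase letter
    name = name[:max_length].rstrip("-")                  # enforce the length bound
    if len(name) < 3:
        raise ValueError(f"Cannot derive a compliant OpenSearch name from '{raw_name}'")
    return name


print(build_compliant_opensearch_name("MyTeam_Env Collection 2024"))  # -> "myteam-env-collection-2024"
```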
Commit: a53434f
Commits on Oct 10, 2023
Commit: 16c7026
Update Lambda runtime (data-dot-all#782)
### Feature or Bugfix
- Update
### Relates
See data-dot-all#655:
> On Nov 27, 2023 the Lambda runtimes node14 and Python 3.7 will be deprecated!

Checked all Lambdas that explicitly set the runtime engine: only the Cognito HTTP-header redirection Lambda used node14. All Lambdas use python3.8 and node16 or node18. For CDK dependencies: upgraded to the newest `aws-cdk-lib` `v2.99.0` just in case Python 3.7 is hardcoded somewhere inside 2.78.0 (it shouldn't be).
### Testing
- [x] uploaded the changes to my isengard account
- [x] deployment is green
- [x] could access the app page, userguide page, and the userguide from the app page
### Security
`N/A` - upgraded to a newer version of Node.js
Commit: c61ba15
Commits on Oct 12, 2023
Feat: limit pivot role S3 permissions (data-dot-all#780)
### Feature or Bugfix
- Feature
### Detail
The guiding principles are:
1. The dataset IAM role is the role that accesses data.
2. The pivot role is the role used by the central account to perform SDK calls in the environment account.

In this PR we:
- Replace the pivot role with the dataset role in the dataset Lake Formation registration.
- Use the pivot role to trigger the upload-files and create-folder features, but use the dataset IAM role to perform the putObject operations, which removes the need for read and `putObject` permissions for the pivot role.
- Redefine the pivot role CDK stack to manage S3 bucket policies only for the dataset S3 buckets that have been created or imported in the environment.
- Implement IAM policy utils to handle the new dynamic policies. We need to verify that the created policy statements do not exceed the maximum policy size. In addition, we replace the previous "divide in chunks of 10 statements" approach with a function that divides statements into chunks based on their size. This optimizes the policy size, which reduces the number of managed policies attached to the pivot role, and it can be re-used wherever other policies need to be split into chunks.
- We did not implement a forced update of environments (pivot role nested stack) when new datasets are added, because it is already forced in `backend/dataall/modules/datasets/services/dataset_service.py`.
### Backwards compatibility testing
Pre-update setup:
- 1 environment (auto-created pivot role)
- 2 datasets in the environment, 1 created and 1 imported, with tables and folders
- Run profiling jobs on tables

Update with the branch changes:
- [X] CICD pipeline runs successfully
- [X] Clicking update environment successfully updates the pivot role policy with imported datasets in the policy, reducing the number of policies
- [X] Clicking update datasets moves the Lake Formation registration to the dataset role
- [X] Upload files works
- [X] Create folder works
- [X] Crawler and profiling jobs work
### Relates
- data-dot-all#580
### Security
- Are you introducing any new policies/roles/users? `Yes`
- Have you used the least-privilege principle? How? `In this PR we restrict the permissions of the pivot role, a super role that handles SDK calls in the environment accounts. Instead of granting permissions to all S3 buckets, we restrict it to data.all handled S3 buckets only.`
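The size-based statement splitting described in the Detail section could be sketched as follows; the constant and function name are assumptions (IAM managed policies cap out at 6,144 characters), not the actual data.all utility.

```python
import json

MAX_POLICY_CHARS = 6144  # managed policy character limit


# Hypothetical sketch: group policy statements greedily so that no chunk would exceed
# the managed policy size limit, instead of the previous fixed "10 statements per chunk".
def split_statements_by_size(statements: list[dict], max_chars: int = MAX_POLICY_CHARS) -> list[list[dict]]:
    chunks: list[list[dict]] = []
    current: list[dict] = []
    current_size = 0
    for statement in statements:
        size = len(json.dumps(statement))
        if current and current_size + size > max_chars:
            chunks.append(current)
            current, current_size = [], 0
        current.append(statement)
        current_size += size
    if current:
        chunks.append(current)
    return chunks
```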
Commit: f84250e
Fix: ensure valid environments for share request and other objects creation (data-dot-all#781)
### Feature or Bugfix
- Feature
- Bugfix
### Detail
The different alternatives considered are discussed in data-dot-all#556.

This PR introduces a new query `listValidEnvironments` that replaces the query `listEnvironments` for certain operations:
- `listEnvironments` - lists all environments independently of their CloudFormation stack status, with a lot of additional details
- `listValidEnvironments` - lists only environments whose CloudFormation stack is stable and successful, and retrieves only basic info about the environment

Operations such as opening a share request or creating a Dataset/Notebook/etc. require the selection of an environment. The environment options are now retrieved from `listValidEnvironments`, ensuring that only valid environments are selectable. Moreover, this query is lighter and does not need to fetch as many fields as the original `listEnvironments`, improving the efficiency of the code.
### Relates
- data-dot-all#556
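A simplified sketch of the filtering idea; the field names and the set of "valid" stack statuses are assumptions, not the actual GraphQL resolver.

```python
# Hypothetical sketch: keep only environments whose CloudFormation stack is in a stable,
# successful state, and return just the lightweight fields the selection dropdowns need.
VALID_STACK_STATUSES = {"CREATE_COMPLETE", "UPDATE_COMPLETE"}


def list_valid_environments(environments: list[dict]) -> list[dict]:
    return [
        {"environmentUri": env["environmentUri"], "label": env["label"], "region": env["region"]}
        for env in environments
        if env.get("stackStatus") in VALID_STACK_STATUSES
    ]
```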
Commit: 7d9122d
Commits on Oct 13, 2023
Adding configurable session timeout to IDP (data-dot-all#786)
### Feature or Bugfix
- Feature
### Detail
Allows the user to configure a session timeout. Today data.all sets the refresh token validity to 30 days by default; with this change it becomes configurable.
### Relates
- data-dot-all#421

Co-authored-by: Manjula <[email protected]>
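In CDK terms, making the refresh-token lifetime configurable could look like the sketch below (aws-cdk-lib v2, `aws_cognito`); the `session_timeout_days` config key and helper name are assumptions, not the actual data.all stack code.

```python
from aws_cdk import Duration, aws_cognito as cognito


# Hypothetical sketch: read the timeout from configuration instead of hardcoding 30 days.
def add_user_pool_client(scope, user_pool: cognito.IUserPool, custom_auth_config: dict) -> cognito.UserPoolClient:
    refresh_days = int(custom_auth_config.get("session_timeout_days", 30))
    return cognito.UserPoolClient(
        scope,
        "UserPoolClient",
        user_pool=user_pool,
        generate_secret=False,
        refresh_token_validity=Duration.days(refresh_days),
    )
```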
Commit: 1801cf1
Commits on Oct 16, 2023
Fix: shell true semgrep (data-dot-all#760)
### Feature or Bugfix
- Feature
- Bugfix
### Detail
As explained in the [semgrep docs](https://semgrep.dev/docs/cheat-sheets/python-command-injection/#1b-shelltrue):

> "Functions from the subprocess module have the shell argument for specifying if the command should be executed through the shell. Using shell=True is dangerous because it propagates current shell settings and variables. This means that variables, glob patterns, and other special shell features in the command string are processed before the command is run, making it much easier for a malicious actor to execute commands. The subprocess module allows you to start new processes, connect to their input/output/error pipes, and obtain their return codes. Methods such as Popen, run, call, check_call, check_output are intended for running commands provided as an argument ('args'). Allowing user input in a command that is passed as an argument to one of these methods can create an opportunity for a command injection vulnerability."

In our case the risk is not exposed, as no user input is passed directly into the subprocess commands. Nevertheless, we should strive for the highest security standards, and this PR replaces all the `shell=True` executions in the data.all code. In this PR:
- where possible we have set `shell=False`
- in cases where the command was too complex, a `CommandSanitizer` ensures that the input arguments are strings matching the regex `[a-zA-Z0-9-_]`

Testing:
- [X] local testing - deployment of any stack (`backend/dataall/base/cdkproxy/cdk_cli_wrapper.py`)
- [X] local testing - deployment of cdk pipeline stack (`backend/dataall/modules/datapipelines/cdk/datapipelines_cdk_pipeline.py`)
- [X] local testing - deployment of codepipeline pipeline stack (`backend/dataall/modules/datapipelines/cdk/datapipelines_pipeline.py`)
- [ ] AWS testing - deployment of data.all
- [ ] AWS testing - deployment of any stack (`backend/dataall/base/cdkproxy/cdk_cli_wrapper.py`)
- [ ] AWS testing - deployment of cdk pipeline stack (`backend/dataall/modules/datapipelines/cdk/datapipelines_cdk_pipeline.py`)
- [ ] AWS testing - deployment of codepipeline pipeline stack (`backend/dataall/modules/datapipelines/cdk/datapipelines_pipeline.py`)
### Relates
- data-dot-all#738
### Security
- Is the input sanitized? ---> 🆗 This is exactly what this PR is trying to do
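The sanitizer-plus-`shell=False` pattern described above might look roughly like this; the class here is a simplified assumption about `CommandSanitizer`, not the real implementation.

```python
import re
import subprocess


# Hypothetical, simplified stand-in for the CommandSanitizer mentioned above.
class CommandSanitizer:
    PATTERN = re.compile(r"^[a-zA-Z0-9_-]+$")

    def __init__(self, args: list[str]):
        for arg in args:
            if not isinstance(arg, str) or not self.PATTERN.match(arg):
                raise ValueError(f"Unsafe argument rejected: {arg!r}")
        self.arguments = args


def deploy_stack(stage: str, envname: str) -> None:
    # Validate inputs first, then pass the command as a list with shell=False so the
    # shell never interprets globs, variables, or command separators.
    stage, envname = CommandSanitizer([stage, envname]).arguments
    subprocess.run(
        ["cdk", "deploy", "--context", f"stage={stage}", "--context", f"envname={envname}"],
        shell=False,
        check=True,
    )
```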
Commit: 599fc1a
Fix: allow to submit a share when you are both an approver and a requester (data-dot-all#793)
### Feature or Bugfix
- Bugfix
### Detail
- Allow submitting a share request when you are both an approver and a requester.
### Security
**DOES NOT APPLY**

Co-authored-by: Zilvinas Saltys <[email protected]>
Commit: b356bf2
feat: redirect upon creating a share request (data-dot-all#799)
### Feature or Bugfix
- Feature
### Detail
Adds a redirect to the share UI once a share object is created. Additionally, updates the breadcrumb message to more clearly indicate that a "Draft share request is created" rather than suggesting that the share has already been sent to the data owners' team.
### Relates
N/A
### Security
N/A

Co-authored-by: Zilvinas Saltys <[email protected]>
Commit: 793a078
Commits on Oct 18, 2023
Fix: condition when there are no public subnets (data-dot-all#794)
### Feature or Bugfix
Fix data-dot-all#792: condition when there are no public subnets.
Commit: f448613
feat: removing unused variable (data-dot-all#815)
### Feature or Bugfix
- Feature
### Detail
- Remove an unused variable in the local GraphQL server that pointed to a fixed AWS region.
### Relates
N/A
### Security
N/A

Co-authored-by: Zilvinas Saltys <[email protected]>
Commit: 66b9a08
feat: Handle Pre-filtering of tables (data-dot-all#811)
### Feature or Bugfix
- Feature
### Detail
- For a dataset to make sense, all the tables within it should have their location pointing to the same place as the dataset S3 bucket. However, a database can contain tables that do not point to the same bucket, which is perfectly legal in Lake Formation. Therefore we propose that data.all automatically lists only tables that have the same S3 bucket location as the dataset. This solves a problem for Yahoo, where we want to import a database that contains many tables with different buckets.
- Additionally, the Catalog UI should also list only the pre-filtered tables.
### Testing
- Tested this in a local environment. I was able to create and share datasets even after the pre-filtering process takes place.
- Will send a separate PR for unit testing.

Co-authored-by: Anushka Singh <[email protected]>
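The pre-filtering rule could be expressed roughly as below; the helper name and the Glue table shape (boto3 `get_tables` output) are assumptions, not the actual data.all code.

```python
# Hypothetical sketch: keep only Glue tables whose storage location sits inside the
# dataset's S3 bucket, dropping tables that legitimately point elsewhere in Lake Formation.
def filter_tables_for_dataset(glue_tables: list[dict], dataset_bucket: str) -> list[dict]:
    bucket_uri = f"s3://{dataset_bucket}"
    return [
        table
        for table in glue_tables
        if table.get("StorageDescriptor", {}).get("Location", "").rstrip("/") == bucket_uri
        or table.get("StorageDescriptor", {}).get("Location", "").startswith(f"{bucket_uri}/")
    ]
```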
Commit: c833c26
Fix Check other share exists before clean up (data-dot-all#769)
### Feature or Bugfix
- Bugfix
### Detail
- Fix the method that detects whether other share objects exist on the environment before cleaning up environment-level shared resources (i.e. the RAM invitation and PivotRole permissions).
- Originally, if TeamA in EnvA had 2 shares approved and succeeded on DatasetB and TeamA rejected 1 of the pre-existing shares, the method `other_approved_share_object_exists` returned `False` and deleted permissions that the other existing share still needed.
- This also disabled the other existing share's ability to revoke the still-existing share, since the pivotRole no longer had permissions.
- Also fixes the removal of the data.all QuickSight group permissions when there are still existing shares to EnvA.
### Security
NA

Co-authored-by: dlpzx <[email protected]>
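A simplified in-memory sketch of the corrected check; the real code queries RDS via SQLAlchemy, and the field and state names here are assumptions.

```python
# Hypothetical sketch: before cleaning up environment-level shared resources, verify that
# no OTHER share for the same dataset, environment, and team is still approved/succeeded.
ACTIVE_SHARE_STATES = {"Approved", "Succeeded"}


def other_approved_share_object_exists(shares: list[dict], dataset_uri: str, environment_uri: str,
                                       group_uri: str, current_share_uri: str) -> bool:
    return any(
        share["datasetUri"] == dataset_uri
        and share["environmentUri"] == environment_uri
        and share["groupUri"] == group_uri
        and share["shareUri"] != current_share_uri
        and share["status"] in ACTIVE_SHARE_STATES
        for share in shares
    )
```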
Commit: 6cc564e
Commits on Oct 20, 2023
Email Notification on Share Workflow - Issue - 734 (data-dot-all#818)
### Feature or Bugfix
- Feature
### Detail
Whenever a share request is created and transitions between states (approved, revoked, etc.), a notification is created. This notification is displayed on the bell icon in the UI. We want a similar notification to be sent to the dataset owner, requester, etc. via email. Please see GitHub issue data-dot-all#734 for more details.
### Relates
- data-dot-all#734
### Security
- Does this PR introduce or modify any input fields or queries - this includes fetching data from storage outside the application (e.g. a database, an S3 bucket)? No
- Does this PR introduce any functionality or component that requires authorization? No
- Are you using or adding any cryptographic features? No
- Are you introducing any new policies/roles/users? Yes
- Have you used the least-privilege principle? How? --> **Permission granted for ses:SendEmail to the Lambda on the SES identity and configuration set resources. Also created KMS and SNS resources for the SES setup to handle email bounces. Used least privilege and restricted access on both wherever required.**

Co-authored-by: trajopadhye <[email protected]>
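Sending the email itself could be as simple as the boto3 sketch below; the sender, configuration set, and helper name are illustrative parameters, not data.all's actual notification task.

```python
import boto3


# Hypothetical sketch of the share-workflow email notification.
def send_share_notification_email(recipients: list[str], subject: str, body_html: str,
                                  sender: str, configuration_set: str) -> None:
    ses = boto3.client("ses")
    ses.send_email(
        Source=sender,
        Destination={"ToAddresses": recipients},
        Message={
            "Subject": {"Data": subject},
            "Body": {"Html": {"Data": body_html}},
        },
        ConfigurationSetName=configuration_set,
    )
```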
Commit: 8b7b82e
Commits on Oct 25, 2023
feat: adding frontend and backend feature flags (data-dot-all#817)
### Feature or Bugfix
- Feature
### Detail
- Adds frontend support for all feature flags defined in config.json with a new util method `isFeatureEnabled`.
- Adds a new flag **preview_data** in the datasets module to control whether previewing data is allowed.
- Adds a new flag **glue_crawler** in the datasets module to control whether running the Glue crawler is allowed.
- Updates environment features to be hidden or visible based on whether the module is active, and adds a new util `isAnyFeatureModuleEnabled` to check whether to render the entire feature box.
### Relates
N/A
### Security
Not relevant

Co-authored-by: Zilvinas Saltys <[email protected]>
Commit: 48c32e5
Commits on Oct 26, 2023
Feat: Refactor notifications from core to modules (data-dot-all#822)
### Feature or Bugfix
- Refactoring
### Detail
As a rule of thumb, we encourage customization of `modules`, while changes in `core` should be avoided when possible. `notifications` was a component initially in core which is only used by `dataset_sharing`. To facilitate customization of the `notifications` module and also to clearly see its dependencies, we have:
- Moved the `notifications` code from core to modules, as it is a reusable component that is not needed by any core component.
- Moved dataset_sharing references inside the dataset_sharing module and left `notifications` independent from any other module (done mostly in data-dot-all#734, so credits to @TejasRGitHub).
- Added depends_on in the dataset_sharing module to load notifications if the data_sharing module is imported.
- Modified the frontend navigation bar to make it conditional on the notifications module.
- Added a migration script to modify the notification type column.
- Fixed tests from data-dot-all#734; some references in the payload of the notification tasks were wrong.
- Small fixes to the SES stack: added the account in the KMS policy and email_id as input.
### [WIP] Testing
Local testing
- [ ] loading of notifications with datasets enabled
- [ ] ...

AWS testing
- [ ] CICD pipeline succeeds
### Other remarks
Not for this PR, but as a general note, we should clean up deprecated ECS tasks.
### Relates
- data-dot-all#785
- data-dot-all#734
### Security
`N/A` just refactoring
Commit: 6d727e9
Commits on Oct 27, 2023
Merge branch 'main' into v2m1m0
# Conflicts:
#   deploy/stacks/backend_stack.py
#   deploy/stacks/backend_stage.py
#   deploy/stacks/lambda_api.py
#   deploy/stacks/pipeline.py
#   template_cdk.json
Commit: 8ad760b
Feat: pivot role limit kms (data-dot-all#830)
### Feature or Bugfix
- Feature
### Detail
- Read KMS keys with an alias prefixed by the environment resource prefix.
- Read KMS keys imported in imported datasets.
- Restrict the pivot role policies to the KMS keys created by data.all and those imported in the imported datasets.
- Move the KMS client from data_sharing to base, as it is used in environments and datasets.
### Relates
- data-dot-all#580
### Security
This PR restricts the IAM policies of the pivot role, following the least-privilege permissions principle.
Commit: 3f100b4
Make hosted_zone_id optional, code update (data-dot-all#812)
### Feature or Bugfix
- Bugfix
### Detail
- Make `hosted_zone_id` optional, code update
### Relates
- data-dot-all#797
### Security
N/A
### Description
Make `hosted_zone_id` optional and provide `HostedZoneId` and `DNSName` in the CloudFormation stack output, so users can create their own [Route53 AliasTarget](https://docs.aws.amazon.com/Route53/latest/APIReference/API_AliasTarget.html). The following validation checks in `ecs_patterns.ApplicationLoadBalancedFargateService` were considered:
* `frontend_alternate_domain` and `userguide_alternate_domain` have to be `None` when the `hosted_zone` is `None`, see the checks in [multiple-target-groups-service-base.ts#L463](https://github.com/aws/aws-cdk/blob/c445b8cc6e20d17e4a536f17262646b291a0fe36/packages/aws-cdk-lib/aws-ecs-patterns/lib/base/network-multiple-target-groups-service-base.ts#L463), or else a `A Route53 hosted domain zone name is required to configure the specified domain name` error is raised
* for an HTTPS ALB listener, only the `certificate` is ultimately required, and not the `domainName` or `domainZone`, as per the evaluation logic in [application-load-balanced-service-base.ts#L509](https://github.com/aws/aws-cdk/blob/c445b8cc6e20d17e4a536f17262646b291a0fe36/packages/aws-cdk-lib/aws-ecs-patterns/lib/base/application-load-balanced-service-base.ts#L509)
Commit: fb7b61b
Commits on Oct 30, 2023
Clean-up for v2.1 (data-dot-all#843)
### Feature or Bugfix
- Bugfix
### Detail
- Clean up prints and show a better exception message when custom_domain is not provided for SES.
### Relates
- v2.1.0
Commit: b51da2c
Commits on Oct 31, 2023
fix: adding missing pivot role permission to get key policy (data-dot-all#845)
### Feature or Bugfix
- Bugfix
### Detail
- The getKeyPolicy permission is required by the share manager. I'm not sure if it is required in 2.0, but it is definitely required for S3 bucket policy sharing. I suspect it should be needed for the OS version too, as for access points to work the pivotRole needs to update the KMS key policy. Without this permission the share manager fails to get the policy and fails. The only workaround is to manually add the pivotRole to the KMS key policy.
### Relates
N/A
### Security
This change expands the pivot role with a new permission to get a key policy. This still follows the least-privilege principle.

Co-authored-by: Zilvinas Saltys <[email protected]>
Commit: b54860d
Commits on Nov 6, 2023
Env features dynamic enable (data-dot-all#856)
### Feature or Bugfix
- Bugfix
### Detail
- If I disable a module (i.e. dashboards), the `modulesEnabled` environment parameter will still be set to `true` because the default value is `true`.
- This PR sets the default value of the environment feature to the true/false value of `isModuleEnabled()`, so:
  - if a module is disabled, the initial value is `false` and is not editable on the frontend EnvironmentCreateForm
  - if a module is enabled, the initial value is `true` and is editable on the frontend EnvironmentCreateForm
Commit: ce4e760
Fix SES Stack Permissions and Email Sender Id (data-dot-all#854)
### Feature or Bugfix
- Bugfix
### Detail
- Add the `ses.amazonaws.com` service principal to the SNS topic to handle bounced emails.
- Define `email_sender` as `email_notification_sender_id` + `custom_domain.hosted_zone` to avoid unverified email identity errors.
Commit: 3c9c82e
Add GenerateEmbedUrlForRegisteredUser for QS Reader sessions (data-dot-all#853)
### Feature or Bugfix
- Bugfix
### Detail
- A QuickSight permission for QS Dashboard Reader sessions is missing in data.all.
- It exists in PivotRole.yaml but not for the auto-created CDK pivot role (pivotRole-cdk).
### Security
N/A
Commit: 3277ab9
Fix: ssm prefix custom cdk policy (data-dot-all#857)
### Feature or Bugfix
- Bugfix
### Detail
As part of the environment stack we deploy some SSM parameters prefixed with the `dataall` prefix in all cases. The custom DataallCustomCDKPolicy provided for use in the `cdk bootstrap` command restricts SSM permissions to resource-prefixed parameters. As a result, when using an environment with a resource prefix different from the default `dataall`, the stack fails to create and to delete because it cannot create or delete those SSM parameters.
- The first commit adds the generic `dataall` SSM permissions to the custom policy. It opens up the permissions slightly.
- The second and third commits rename the SSM parameters to use the resource prefix.

Each commit would solve the issue on its own, so we don't really need both. However, there are arguments to keep both: having generic permissions to dataall SSM parameters is restrictive and might be useful for other cases when the toolkit is used, and it is good to keep all created resources prefixed with the same prefix so that users can easily track which resources belong to a data.all environment. The only issue is that for number 2 we need users to update the environment stacks before creating more datasets (add to release notes).
### Relates
- v2.1 release
Commit: c8228f4
Commits on Nov 7, 2023
Use op.execute to alter column type in migration script (data-dot-all#859)
### Feature or Bugfix
- Bugfix
### Detail
In theory `op.alter_column` can modify column data types, but in our validation testing Alembic runs into issues when those data types are more complex, like the user-defined data type that we are trying to modify in the latest migration script. The migration seems to succeed, but the data type does not change from the Enum type. The problem is that if customers introduce new notification types in the future, they will receive failures when writing data to RDS. After digging a bit, I found that other projects have faced similar issues and the way to work around it is to use SQL statements directly to modify the data type. The migration script has been tested in AWS. I set a limit of 100 characters, which is more than double the length of the longest notification type at the moment.
### Relates
- v2.1 release
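The workaround boils down to issuing the DDL directly from the migration; a hedged sketch follows, where the table and column names are assumptions based on the notifications refactoring rather than the exact data.all script.

```python
from alembic import op


def upgrade():
    # op.alter_column struggled to convert the user-defined Enum type, so run the DDL
    # directly; VARCHAR(100) is more than double the longest notification type today.
    op.execute("ALTER TABLE notification ALTER COLUMN type TYPE VARCHAR(100)")
```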
Commit: c5cc72d
Fix Get Dataset Table Overview for Non-Dataset Owners (data-dot-all#858)
### Feature or Bugfix
- Bugfix
### Detail
- If a user who is not the Dataset Owner, Env Admin, or Dataset Steward tries to view a dataset table from the Catalog, they cannot view any of the tabs for `Preview`, `Overview`, `Columns`, or `Metrics` and receive an error: `An error occurred (UnauthorizedOperation) when calling GET_DATASET_TABLE operation`.
- This PR:
  - Removes `GlueTableProperties` from being returned in the `getDatasetTable` query, as it is not required for any dataset table operations in data.all's UI.
    - Resolves: requester user can see the Overview tab.
  - Adds a check on the dataset confidentiality tag to determine whether to show data in the `Columns` tab and removes the previous checks.
    - Resolves: requester user can see the Columns tab (for Unclassified datasets).
  - Removes resource permission checks on the `Metrics` tab queries to only check the dataset confidentiality tag.
    - Resolves: requester user can see the Metrics tab (for Unclassified datasets).
  - Fixes a bug in updating the column description.
    - Resolves: Dataset Owner being able to update the column description.
Commit: 27a863b
Added github workflows on release branches and fix migration workflow issues (data-dot-all#860)
### Feature or Bugfix
- Feature
- Bugfix
### Detail
- Add all applicable GitHub workflows to PRs pointing at `v2m*` branches.
- Fix the semgrep finding raised by the GitHub workflows for the notification-type migration script --> added `nosemgrep`, as no user input is passed to the SQL query and only code administrators will have access to the query.
- Fix migration validation: this one is tricky, as it succeeds when run locally and on a real pipeline. It turns out that the issue was not in the migration script itself but in the way we dropped and updated tables in the validation migration stage. For dropping tables, we were using a different schema than the one used in upgrade database. This PR removes the schema_name variable and uses the envname as schema in all cases. One final note: this issue might be related to data-dot-all#788. Screenshots of the resulting local schema for the notification table after running `make drop-tables` and `make upgrade-db` were attached to the PR.
### Relates
- V2.1 release
Commit: 473a1b6
Commits on Dec 7, 2023
Commit: d85b416