- Audit all final file ownership on target DSE nodes (compare against ctool systems).
The universe component needs to be added to the Ubuntu repos:
sudo nano /etc/apt/sources.list
then add universe at the end of each line, like this:
deb http://archive.ubuntu.com/ubuntu bionic main universe
deb http://archive.ubuntu.com/ubuntu bionic-security main universe
deb http://archive.ubuntu.com/ubuntu bionic-updates main universe
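After saving, run sudo apt-get update so the universe packages become visible to apt.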
Need to validate the generated root -> intermediate -> node certificate order with openssl verify; the order appears to be incorrect, and the current workaround is to force truststore/keystore acceptance.
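A quick way to check the generated chain is openssl verify; a minimal sketch of such a check as an Ansible task, assuming placeholder certificate file names (rootca.crt, intermediate.crt, node.crt):

# Hypothetical chain check -- certificate file names are placeholders.
- name: Verify the node certificate chains to the root via the intermediate
  command: >
    openssl verify -CAfile rootca.crt
    -untrusted intermediate.crt node.crt
  register: chain_check
  changed_when: false
  failed_when: "'OK' not in chain_check.stdout"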
Need to test against Windows AD with DSE mixed LDAP/Internal mode.
https://academy.datastax.com/content/datastax-enterprise-46-ldap-support
- See new params in group_vars/all: [heap_xms] and [heap_xmx] - always set them both to the same value to avoid runtime memory allocation issues.
Systems with no load will need 4GB heaps, development and staging should have 8GB heaps, and load-testing/production systems should have 20GB heaps.
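For example, a load-testing/production node could be configured like this in group_vars/all (values illustrative only):

# group_vars/all -- illustrative values only
heap_xms: 20G    # initial JVM heap
heap_xmx: 20G    # maximum JVM heap; keep identical to heap_xms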
AWS DataStax internal security group: all nodes need to be allowed to communicate on all ports because AlwaysOnSQL uses random ports to connect to other analytics nodes. Without this, the AlwaysOnSQL service only starts up on the local node.
ingress {
  from_port       = 0
  protocol        = "-1"
  to_port         = 0
  security_groups = ["${aws_security_group.sg_internal_only.id}"]
}
And remove from all bash references e.g. runansi_extended.sh etc.
vars: /ansible/group_vars/all
On Ansible startup via runansi_extended.sh and runansi_add_node.sh, if data already exists in the configured DSE data directory the entire bash script exits. This protects against overwriting an existing running cluster when creating a new cluster via runansi_extended.sh, and against overwriting an existing running node in the case of runansi_add_node.sh.
Called by playbooks: dse_install.yml, add_node_install.yml and opsc_install.yml
role: dse_test_for_data_directory
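A minimal sketch of what this guard amounts to, assuming a dse_data_directory variable (names are illustrative, not the role's exact tasks):

# Illustrative only -- the real role may name things differently.
- name: Check whether the configured DSE data directory already holds data
  find:
    paths: "{{ dse_data_directory }}"
    file_type: any
  register: existing_data

- name: Abort to avoid overwriting an existing cluster or node
  fail:
    msg: "Data found in {{ dse_data_directory }} -- refusing to continue."
  when: existing_data.matched | int > 0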
-> playbook: dse_security.yml
This is the default certificate generation process.
This method generates a self-signed root certificate and then uses that root certificate to sign certificates for each node; each node has a CN that matches its resolvable FQDN, e.g. machine1.mysite.net, machine2.mysite.net
role: security_create_root_certificate
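The general shape of this process, expressed as hedged Ansible tasks (the openssl invocations, file names and the dse inventory group are illustrative assumptions, not the role's actual implementation):

# Illustrative only.
- name: Generate a self-signed root certificate and key
  command: >
    openssl req -new -x509 -nodes -days 365
    -keyout rootca.key -out rootca.crt -subj "/CN=rootca"

- name: Create a signing request per node, CN = resolvable FQDN
  command: >
    openssl req -new -nodes
    -keyout {{ item }}.key -out {{ item }}.csr -subj "/CN={{ item }}"
  loop: "{{ groups['dse'] | default([]) }}"

- name: Sign each node certificate with the root certificate
  command: >
    openssl x509 -req -days 365 -in {{ item }}.csr
    -CA rootca.crt -CAkey rootca.key -CAcreateserial -out {{ item }}.crt
  loop: "{{ groups['dse'] | default([]) }}"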
This method takes a CA-signed WILDCARD certificate and treats it as a root certificate, using it to sign individual certificates for each node; each node has a CN that matches its resolvable FQDN, e.g. machine1.mysite.net, machine2.mysite.net
CA-signed certificates (1x supplied for each node, e.g. ip-10-0-0-1.mysite.net, ip-10-0-0-2.mysite.net) - ON HOLD
No requirement as yet for this feature.
This method takes a pre-ordered CA supplied certificate for each node.
role: security_create_truststores
role: security_create_keystores
role: security_distribute_truststores
role: security_distribute_keystores
role: security_client_to_node
role: security_node_to_node
role: security_client_to_node
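As a rough illustration of what the truststore create/distribute steps above boil down to (aliases, paths and passwords are assumptions, not the roles' actual values):

# Illustrative only.
- name: Import the root certificate into a per-node truststore
  command: >
    keytool -importcert -noprompt -alias rootca -file rootca.crt
    -keystore truststore.node
    -storepass "{{ truststore_password | default('changeit') }}"

- name: Distribute the truststore to each DSE node
  copy:
    src: truststore.node
    dest: /etc/dse/conf/truststore.node
    owner: cassandra
    group: cassandra
    mode: "0640"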
-> playbook: dse_authentication.yml
role: security_auth_activate_internal
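Activating internal authentication essentially means pointing cassandra.yaml at the DSE authenticator and enabling the internal scheme in dse.yaml; a hedged sketch of the relevant settings (not necessarily the role's exact template values):

# cassandra.yaml (illustrative)
authenticator: com.datastax.bdp.cassandra.auth.DseAuthenticator

# dse.yaml (illustrative)
authentication_options:
  enabled: true
  default_scheme: internal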
-> playbook: dse_authorisation_roles.yml
Also used by opsc_authorisation_roles.py
role: /ansible/roles/security_cassandra_change_superuser
Currently commented out; working on SSL usage of library/cassandra_roles.py
Used by dse_authorisation_roles.yml
role: /ansible/roles/security_cassandra_change_superuser
role: /ansible/roles/security_keyspaces_configure
- TODO ❌
-> playbook: opsc_security.yml
role: security_opsc_create_keystores
role: security_opsc_create_truststores
role: security_opsc_distribute_truststores
role: security_opsc_configure
role: security_opsc_cluster_configure
Enabling SSL/TLS for OpsCenter and Agent communication - Package Installs
role: security_opsc_configure
Enabling SSL/TLS for OpsCenter and Agent communication - Package Installs
role: security_opc_agent_fetch_keystore
role: security_opc_agent_distribute_keystore
role: security_opc_agent_activate_ssl
Connect to DSE with client-to-node encryption in OpsCenter and the DataStax Agents
Various roles including:
- security_create_keystores
- security_create_truststores
- security_opsc_create_keystores
- security_opsc_create_truststores
- security_opsc_cluster_configure
playbook: opsc_authentication.yml
role: /ansible/roles/security_opsc_configure
role: /ansible/roles/security_auth_activate_internal
-> playbook: opsc_authorisation_roles.yml
role: /ansible/roles/security_opsc_change_admin
role: /ansible/roles/security_change_superuser
-> playbook: spark_security.yml
The Spark web UI by default uses client-to-cluster encryption settings to enable SSL security in the web interface.
No transport phase.
Encryption between the Spark driver and DSE is configured by enabling client encryption in cassandra.yaml
role: security_client_to_node
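A minimal sketch of client encryption in cassandra.yaml, assuming illustrative keystore/truststore paths and passwords:

# cassandra.yaml (illustrative paths and passwords)
client_encryption_options:
  enabled: true
  optional: false
  keystore: /etc/dse/conf/keystore.node
  keystore_password: changeit
  require_client_auth: true
  truststore: /etc/dse/conf/truststore.node
  truststore_password: changeit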
Encryption between Spark nodes, including between the Spark master and worker, is configured by enabling Spark security in dse.yaml.
role: security_spark_configure
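A rough sketch of the dse.yaml side; the option names below come from DSE 5.x documentation and should be verified against the DSE version in use:

# dse.yaml (illustrative; verify option names for your DSE version)
spark_security_enabled: true
spark_security_encryption_enabled: true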
Encryption between the Spark driver and executors in client applications is configured by enabling Spark security in the application configuration properties, or by default in /etc/dse/spark/spark-defaults.conf
role: security_spark_configure
Client-to-node encryption protects data in flight for the Spark Executor to DSE database connections by establishing a secure channel between the client and the coordinator node.
role: security_client_to_node
Uses the same keystore and truststore as client->node and node->node encryption.
role: security_spark_alwaysonsql_configure
-> playbook: spark_authentication
role: security_spark_auth_activate
Using authentication with AlwaysOn SQL
Shares common role with AlwaysOnSQL transport encryption:
role: security_spark_alwaysonsql_configure
-> playbook: spark_authorisation_roles.yml
NOTE:
- This playbook is here as a convenience; currently empty, it could be used to automate user/role creation.
- This role is currently commented out in the runansi_extended.sh script
Create a Spark role and user? Limit spark jobs by user?
role: spark_dsefs_configure
spark-env.sh: SPARK_WORKER_OPTS (spark.worker.cleanup.* settings) to clear out the work directory
role: spark_worker_cleanup_configure
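A hedged sketch of how such cleanup settings might be injected into spark-env.sh (the path, interval and TTL values are illustrative; the real role may use a template instead):

# Illustrative only.
- name: Enable Spark worker work-directory cleanup via SPARK_WORKER_OPTS
  lineinfile:
    path: /etc/dse/spark/spark-env.sh
    line: >-
      export SPARK_WORKER_OPTS="$SPARK_WORKER_OPTS
      -Dspark.worker.cleanup.enabled=true
      -Dspark.worker.cleanup.interval=1800
      -Dspark.worker.cleanup.appDataTtl=604800"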
role: spark_worker_log_rolling_configure
role: spark_alwaysonsql_configure
Spark disk encryption of driver temp files and shuffle files on disk (only available from DSE 6.0 onwards) - TODO ❌
role: security_spark_auth_activate/templates/spark_defaults.conf
-> - COMPLETE ✔️
Enable SSL client-to-node encryption on the DSE Graph node by setting the client_encryption_options.
role: security_client_to_node
-> - COMPLETE ✔️
Allow only authenticated users to access DSE Graph data by enabling DSE Unified Authentication on the transactional database.
role: security_auth_activate_internal
-> playbook: graph_authorisation_roles.yml
Limit access to graph data by defining roles for DSE Graph keyspaces and tables, see Managing access to DSE Graph keyspaces.
NOTE:
- This playbook is here as a convenience; currently empty, it could be used to automate user/role creation.
- This role is currently commented out in the runansi_extended.sh script
Providing credentials for DSE Graph
-> - COMPLETE ✔️
Encrypt connections between HTTP clients and the CQL shell using SSL, with client-to-node encryption on the DSE Search node.
role: security_client_to_node
-> - COMPLETE ✔️
Perform index management tasks with the CQL shell using DSE Unified Authentication.
role: security_auth_activate_internal
-> playbook: search_authorisation_roles.yml
Use role-based access control (RBAC) for authenticated users to provide search index related permissions.
NOTE:
- This playbook is here as a convenience; currently empty, it could be used to automate user/role creation.
- This role is currently commented out in the runansi_extended.sh script
-> playbook: jmx_security.yml - TODO ❌
Setting up SSL for nodetool, dsetool, and dse advrep
Securing jConsole SSL
-> playbook: jmx_authentication.yml
Enable JMX Authentication Support Link
JMX authentication is set up in TerraDSE to pass through to DSE Unified Authentication
Managing JMX Access Control to MBeans
role: /ansible/roles/security_jmx_auth_activate
Passes -> JMX Authentication -> DSE Unified Authentication
A username/password pair is required once JMX Authentication is activated:
- Needs to be optional with a default of false; hardened environments would not accept deployment of credentials on nodes.
- Needs a DSE Unified Authentication account/password stored in clear text in ~/.cqlshrc (/home/ec2-user/.cqlshrc) on each node.
- Can't be a DSE admin account; needs to be a read-only account?
A better approach than CQLSH/cqlshrc, DSE/.dserc, Spark shell etc. is to use DataStax Studio for all CQL/Gremlin/SparkSQL
- COMPLETE ✔️
Read the following link if using mixed authentication (e.g. internal AND LDAP/AD authentication):
"Prevent unintentional role assignment when a group name or user name is found in multiple schemes. When a role has execute permission on a scheme, the role can only be applied to users that authenticated against that scheme."
Binding a role to an authentication scheme
Enabling data auditing in DataStax Enterprise
Formats of DataStax Enterprise logs
default location for audit log: /var/log/cassandra/audit/audit.log
default location for logback.xml: /etc/dse/cassandra/logback.xml
In logback.xml we can control log rotation.
role: security_audit_logging_configure - COMPLETE ✔️
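Enabling the audit log itself is a dse.yaml setting; a hedged sketch (logger name per DataStax docs, other options omitted):

# dse.yaml (illustrative)
audit_logging_options:
  enabled: true
  logger: SLF4JAuditWriter   # writes to the logback-managed audit.log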
Works for the initial creation of datacenters; probably doesn't work for adding a new node to a datacenter that was added later, as it has no concept of node type for a dse_name_2 datacenter name. Needs the following block:
solr_enabled=0
spark_enabled=0
graph_enabled=1
auto_bootstrap=0
- COMPLETE ✔️
This new DC could have various reasons for existing:
- A duplicate DC type, i.e. a second Spark DC in a cluster with an existing Spark DC (both in the same DC)
- A new DC type, i.e. a new Graph DC in a cluster with C* and Spark DC's only
- A backup DC i.e. a new Graph DC in AZ2 in a cluster with an existing Graph DC in AZ1
- A geographically separate replicated edge DC
For STATIC INVENTORY (hosts file):
[add_datacenter]
xxx private_ip=xxx private_dns=ip-10-200-175-160.datastax.lan seed=true dc=dse_graph_2 dc_type=dse_graph rack=RAC1 vnode=1 initial_token=
xxx private_ip=xxx private_dns=ip-10-200-175-164.datastax.lan seed=true dc=dse_graph_2 dc_type=dse_graph rack=RAC1 vnode=1 initial_token=
xxx private_ip=xxx private_dns=ip-10-200-175-163.datastax.lan seed=false dc=dse_graph_2 dc_type=dse_graph rack=RAC1 vnode=1 initial_token=
[add_datacenter:vars]
solr_enabled=0
spark_enabled=0
graph_enabled=1
auto_bootstrap=0
For DYNAMIC INVENTORY the above configuration will be generated from differences between tfstate_current and tf_state_latest.
Replace a node in a DC. - ON HOLD
NOTE:
- Schema automation needs to run prior to backup automation.
-> playbook: opsc_backups_configure.yml
role: /ansible/roles/opsc_backups_configure
-> playbook: opsc_services_configure.yml
role: /ansible/roles/opsc_services_configure
Recreate file-based handler for reload of sysctl
role: dse_osparam_change
role: dse_osparam_ssd_change