Security and Authz
The Cryostat 3.0 application as a whole consists of:
- Cryostat Deployment
  - Service + Route → Auth Proxy container
  - Cryostat Pod
    - Auth Proxy container instance
    - Cryostat container instance
    - cryostat-db container instance
      - PersistentVolumeClaim for Postgres Database data
    - cryostat-storage container instance
      - PersistentVolumeClaim for SeaweedFS data
    - Grafana container instance
    - jfr-datasource container instance
- (optional) Cryostat Report Generator Deployment
  - Service (no Route) → Pods
  - Cryostat Report Generator Pod(s)
    - cryostat-reports container instance
- Operator Pod
  - cryostat-operator container instance, containing various controllers
The Routes are configured with TLS re-encryption, so all connections from outside the cluster use HTTPS/WSS with the OpenShift cluster's TLS certificate presented externally. Internally, Service connections between Cryostat components use HTTPS with cert-manager (described in more detail below) to ensure that connections are private even within the cluster namespace. Each Auth Proxy container is either an oauth2-proxy configured with htpasswd Basic authentication, or an openshift-oauth-proxy delegating to the cluster's internal authentication/authorization server, optionally combined with htpasswd authentication.
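As a rough illustration of the Route configuration, the Operator might construct something along these lines (a minimal sketch using the OpenShift route/v1 API; the Service name and certificate wiring are assumptions, not the Operator's actual code):

```go
package main

import (
	routev1 "github.com/openshift/api/route/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// newReencryptRoute sketches a Route whose edge TLS is terminated with the
// cluster's certificate and re-encrypted toward the auth proxy's Service,
// whose serving cert is issued by the cert-manager CA described below.
func newReencryptRoute(name, namespace, destinationCA string) *routev1.Route {
	return &routev1.Route{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: namespace,
		},
		Spec: routev1.RouteSpec{
			To: routev1.RouteTargetReference{
				Kind: "Service",
				Name: name, // assumed: the Service fronting the Auth Proxy container
			},
			Port: &routev1.RoutePort{
				TargetPort: intstr.FromString("https"),
			},
			TLS: &routev1.TLSConfig{
				Termination: routev1.TLSTerminationReencrypt,
				// CA that signed the auth proxy's serving certificate
				DestinationCACertificate: destinationCA,
			},
		},
	}
}
```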
In this scenario, the Cryostat Operator is installed into its own namespace. It runs there separately with its privileged serviceaccount. Cryostat CR objects are created to request that the Operator create Cryostat instances. The CR has a field listing the namespaces that the associated Cryostat instance should be deployed across. When the Cryostat instances are created, they are supplied with an environment variable informing them which namespaces should be monitored. These Cryostat instances are deployed into their own separate install namespaces as well and run with their own lower-privileged serviceaccounts. Using these privileges they perform an Endpoints query to discover target applications across each of the listed namespaces. Cryostat will only automatically discover those target applications (potentially including itself) that are located within these namespaces: it queries the k8s/OpenShift API server for Endpoints objects within each namespace, then filters them for ports with either the name jfr-jmx or the number 9091, as sketched below. Other applications, within these namespaces or otherwise, may be registered via the Custom Targets API or the Discovery Plugin API (ex. using the Cryostat Agent), but Cryostat will not be aware that these applications may be in other namespaces.
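The query-and-filter logic can be pictured with the following client-go sketch (Cryostat itself is a Java application, so this Go version only illustrates the technique, not its real implementation):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// discoverJMXPorts lists Endpoints objects in each monitored namespace and
// keeps only the ports that look like JMX: named "jfr-jmx" or numbered 9091.
func discoverJMXPorts(ctx context.Context, client kubernetes.Interface, namespaces []string) ([]corev1.EndpointPort, error) {
	var jmxPorts []corev1.EndpointPort
	for _, ns := range namespaces {
		endpoints, err := client.CoreV1().Endpoints(ns).List(ctx, metav1.ListOptions{})
		if err != nil {
			return nil, err
		}
		for _, ep := range endpoints.Items {
			for _, subset := range ep.Subsets {
				for _, port := range subset.Ports {
					if port.Name == "jfr-jmx" || port.Port == 9091 {
						jmxPorts = append(jmxPorts, port)
					}
				}
			}
		}
	}
	return jmxPorts, nil
}
```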
With this setup, the target applications are not able to assume the privileges associated with the serviceaccounts of the Cryostat Operator or of each Cryostat instance. Each Cryostat instance can discover and become aware of target JVM applications across any of the namespaces that this particular instance is monitoring. The separated namespaces also ease administration and access management: cluster administrators can assign roles to users that allow them to work on projects within namespaces, and assign other roles to other users that allow them to access Cryostat instances that may have visibility into those namespaces.
Cryostat traditionally connects to other JVM applications within its cluster using remote JMX, over cluster-internal URLs so that no traffic leaves the cluster. Cryostat supports connecting to target JVMs with JMX auth credentials enabled ("Basic"-style authentication). When a connection attempt to a target fails due to a SecurityException, Cryostat responds to the requesting client with an HTTP 427 status code and the header X-JMX-Authenticate: Basic. The client is expected to create a Stored Credential object via the Cryostat API before retrying the request, which results in the required target credentials being stored in an encrypted database table. When deployed in OpenShift the requests are already encrypted using OpenShift TLS re-encryption as mentioned above, so the credentials are never transmitted in cleartext. The table is encrypted with a passphrase either provided by the user at deployment time, or generated by the Operator if none is specified. It is also possible to configure Cryostat to trust SSL certificates used by target JVMs by adding the certificate to a Secret and linking that to the Cryostat CR, which adds the certificate to the SSL trust store used by Cryostat. The Operator also uses cert-manager to generate a self-signed CA and provides Cryostat's auth proxy with certificates as a mounted volume.
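From the client's side, the challenge-and-store flow might look like this sketch (the /api/v2.2/credentials path and form field names reflect my reading of the Cryostat HTTP API and should be treated as assumptions; check the API docs for the release in use):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// storeCredentialAndRetry handles Cryostat's 427 challenge by creating a
// Stored Credential for the target, then retrying the original request.
// It assumes the original request has no body (e.g. a GET).
func storeCredentialAndRetry(client *http.Client, cryostatURL, token, matchExpr, user, pass string, original *http.Request) (*http.Response, error) {
	resp, err := client.Do(original)
	if err != nil {
		return nil, err
	}
	// 427 with X-JMX-Authenticate: Basic signals that the target JVM requires
	// JMX credentials that Cryostat does not yet hold
	if resp.StatusCode != 427 || resp.Header.Get("X-JMX-Authenticate") != "Basic" {
		return resp, nil
	}
	resp.Body.Close()

	form := url.Values{
		"matchExpression": {matchExpr}, // e.g. an expression matching the target's connectUrl
		"username":        {user},
		"password":        {pass},
	}
	req, err := http.NewRequest(http.MethodPost,
		fmt.Sprintf("%s/api/v2.2/credentials", cryostatURL),
		strings.NewReader(form.Encode()))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.Header.Set("Authorization", "Bearer "+token)
	credResp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	credResp.Body.Close()

	return client.Do(original.Clone(original.Context()))
}
```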
In more recent releases, JVM applications may optionally be instrumented with the Cryostat Agent, which uses the local JDK Instrumentation API to hook into the target application. The Cryostat Agent then exposes a JDK HTTP(S) webserver, generates credentials to secure it, and looks up its supplied configuration to locate the Cryostat server instance it should register with. Once registered, the Agent creates a Stored Credential object on the server corresponding to itself, then clears its generated password from memory, retaining only the hash. From this point on, the Agent and Cryostat server communicate with each other bidirectionally using Basic authentication, with TLS on each webserver if configured.
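The Agent's credential lifecycle boils down to "generate, register, keep only the hash". A hypothetical sketch of that pattern (the Agent itself is written in Java; the storeOnServer callback here stands in for the Stored Credential API call):

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
)

// agentCredential is what the Agent retains after registration: only a
// digest of the generated password, never the plaintext.
type agentCredential struct {
	user         string
	passwordHash [sha256.Size]byte
}

// register generates a one-time password, hands it to the Cryostat server
// as a Stored Credential, and keeps only its hash locally for verifying
// the server's subsequent Basic auth requests.
func register(user string, storeOnServer func(user, pass string) error) (*agentCredential, error) {
	raw := make([]byte, 24)
	if _, err := rand.Read(raw); err != nil {
		return nil, err
	}
	pass := base64.RawURLEncoding.EncodeToString(raw)
	if err := storeOnServer(user, pass); err != nil {
		return nil, err
	}
	// the plaintext password goes out of scope here; only the hash remains
	return &agentCredential{
		user:         user,
		passwordHash: sha256.Sum256([]byte(pass)),
	}, nil
}
```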
Cryostat and the associated Operator will only monitor the OpenShift namespace(s) that they are deployed within (see Scenarios above), and can only initiate connections to target JVMs within these namespaces; this is enforced by OpenShift's networking setup. This way, end-user administrators and developers can be certain which of their JVMs are visible to Cryostat, and thus which JVMs' data they should be mindful of.
Once Cryostat has established a JMX or HTTP(S) connection to a target application, its primary purpose is to enable JFR recordings on the target JVM and expose them to the end user. These recordings can be transferred from the target JVM back to Cryostat over the JMX/HTTP(S) connection. Cryostat does this for four purposes:
- to generate Automated Rules Reports of the JFR contents, served to clients over HTTPS. These may be generated by the Cryostat container itself or by cryostat-reports sidecar container(s), depending on the configuration.
- to stream JFR file contents into the cryostat-storage container "archives", which saves them in an OpenShift PersistentVolumeClaim
- to stream a snapshot of the JFR contents over HTTPS in response to a requesting client's GET request
- to upload a snapshot of the JFR contents using HTTPS POST to the jfr-datasource

("archived" JFR copies can also be streamed back out to clients over HTTPS, or POSTed to jfr-datasource, and Automated Rules Reports can also be made of them)
Here, "the client" may refer to an end user's browser when using Cryostat's web interface, or may be the end user using a direct HTTP(S) client (ex. HTTPie or curl
), or may be an OpenShift Operator controller acting as an automated client. All of these cases are handled identically by Cryostat.
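A minimal Go sketch of such a direct client (the URL path shown is illustrative of the API shape, not an authoritative endpoint reference; consult the Cryostat HTTP API docs for the release in use):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// downloadRecording streams a JFR recording from Cryostat over HTTPS using a
// Bearer token, authenticating the same way a browser or the Operator would.
// The API path below is an assumption for illustration.
func downloadRecording(cryostatURL, token, targetID, recording string, out io.Writer) error {
	endpoint := fmt.Sprintf("%s/api/v1/targets/%s/recordings/%s",
		cryostatURL, url.PathEscape(targetID), url.PathEscape(recording))
	req, err := http.NewRequest(http.MethodGet, endpoint, nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	// stream the JFR bytes to the caller without buffering the whole file
	_, err = io.Copy(out, resp.Body)
	return err
}
```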
jfr-datasource receives file uploads by POST request from the Cryostat container. Cryostat and jfr-datasource run together within the same Pod and use the local loopback network interface, so the file contents do not travel across the network outside of the Pod. These files are held in transient storage by the jfr-datasource container, and the parsed JFR data contents are held in-memory and made available for querying by the Grafana dashboard container, which also runs within the same Pod and communicates over the local loopback network interface.
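The hand-off to jfr-datasource amounts to a multipart file upload over loopback, roughly like this Go sketch (the /upload endpoint, "file" form field, and port 8080 follow jfr-datasource's documented curl usage, but should be verified against the deployed version):

```go
package main

import (
	"bytes"
	"io"
	"mime/multipart"
	"net/http"
	"os"
)

// uploadToDatasource POSTs a JFR file to jfr-datasource over the Pod-local
// loopback interface, so the file contents never leave the Pod.
func uploadToDatasource(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	// build the multipart form body with the JFR file under the "file" field
	var body bytes.Buffer
	w := multipart.NewWriter(&body)
	part, err := w.CreateFormFile("file", path)
	if err != nil {
		return err
	}
	if _, err := io.Copy(part, f); err != nil {
		return err
	}
	w.Close()

	resp, err := http.Post("http://localhost:8080/upload", w.FormDataContentType(), &body)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = io.Copy(io.Discard, resp.Body)
	return err
}
```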
When deployed in OpenShift, the Cryostat container instance detects this scenario and expects clients to provide a Bearer token on all Command Channel (WSS) connections, as well as on any HTTPS API connections that can provide information about target applications within the cluster (authorization checks are skipped only for requests for things like web-client assets). These tokens are the ones provided by OpenShift OAuth itself, ie. the user's account for that OpenShift instance/cluster. On each HTTPS request, Cryostat receives the token and sends its own request to the internal OpenShift OAuth server to validate it. If OpenShift OAuth validates the token, the request is accepted. If OpenShift OAuth does not validate the token, or the user does not provide one, then the request is rejected with a 401. Likewise, for each new WSS WebSocket connection, Cryostat expects the client to provide a token as part of the WebSocket SubProtocol header. This token is then passed to the OpenShift OAuth server in the same way as previously described. If the token validation fails, the server replies with an appropriate closure status code and message after the client sends its first message frame.
The specific method for verifying an OAuth token is to take the client's provided token and construct a new OpenShift client instance using it, so that Cryostat performs a request while masquerading as the actual client. Currently, the request performed is an attempt to list the Routes within the Namespace. This is likely to change in the future to a more robust criterion.
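Expressed with client-go, that check amounts to roughly the following (Cryostat itself is a Java application, so this Go sketch only illustrates the technique):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"

	routeclient "github.com/openshift/client-go/route/clientset/versioned"
)

// validateToken masquerades as the requesting client: it constructs a new
// client using the client's own Bearer token and attempts to list Routes in
// the namespace. If the API server rejects the call, the token is rejected.
func validateToken(ctx context.Context, apiServerURL, token, namespace string) bool {
	cfg := &rest.Config{
		Host:        apiServerURL,
		BearerToken: token,
		// TLS configuration (cluster CA bundle) omitted for brevity
	}
	client, err := routeclient.NewForConfig(cfg)
	if err != nil {
		return false
	}
	_, err = client.RouteV1().Routes(namespace).List(ctx, metav1.ListOptions{})
	return err == nil
}
```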
TODO describe non-OpenShift cases
The Operator configures the Grafana container to use the default admin username, but the default password is overridden. The Operator generates a random password as below (at the time of writing):
```go
package resources

import (
	"math/rand"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	// assumed import path for the operator's CR API package at the time
	rhjmcv1alpha1 "github.com/rh-jmc-team/container-jfr-operator/pkg/apis/rhjmc/v1alpha1"
)

func NewGrafanaSecretForCR(cr *rhjmcv1alpha1.Cryostat) *corev1.Secret {
	return &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      cr.Name + "-grafana-basic",
			Namespace: cr.Namespace,
		},
		StringData: map[string]string{
			"GF_SECURITY_ADMIN_USER":     "admin",
			"GF_SECURITY_ADMIN_PASSWORD": GenPasswd(20),
		},
	}
}

func GenPasswd(length int) string {
	// math/rand seeded with the current time, drawing from a 64-symbol alphabet
	rand.Seed(time.Now().UnixNano())
	chars := "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_"
	b := make([]byte, length)
	for i := range b {
		b[i] = chars[rand.Intn(len(chars))]
	}
	return string(b)
}
```
(ie: [a-zA-Z0-9\-_]{20})
This generated password is stored in a Kubernetes Secret, which is then "mounted" into the Grafana container as environment variables at startup time. This Secret is also re-read by another controller within the Operator at a later time, after Grafana container startup, so that the Operator can perform API requests to the Grafana container to configure it with a default dashboard and to add the jfr-datasource datasource definition/URL.
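In the container spec, this "mounting" amounts to an envFrom reference to the Secret, roughly as follows (a sketch; the actual Operator code may differ):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
)

// grafanaContainer sketches how the generated Secret is surfaced to Grafana
// as environment variables; secretName would be cr.Name + "-grafana-basic"
// from the snippet above.
func grafanaContainer(secretName string) corev1.Container {
	return corev1.Container{
		Name:  "grafana",
		Image: "docker.io/grafana/grafana", // illustrative image reference
		EnvFrom: []corev1.EnvFromSource{{
			SecretRef: &corev1.SecretEnvSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
			},
		}},
	}
}
```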
Operator connections to its "child" Cryostat instance are made solely via HTTPS, exactly as outlined above for client connections in general. The Operator passes its ServiceAccount API token to Cryostat via Bearer authentication. Cryostat then uses this token to masquerade as the ServiceAccount and verify its permissions within the cluster and namespace.
Once the Operator has obtained information about the target JVM(s) within the namespace, this information is copied into Custom Resources owned by the Operator. This information includes only basic details such as the names, durations, and states of any recordings active in the target JVM(s), as well as the URL to download each recording. This URL is a direct link to the Cryostat Route at a path that allows the recording to be downloaded by the client. This path is also secured using HTTPS Bearer token authentication, so the end-user client must supply their own account's token in order to retrieve the recording file. Any information contained within the Custom Resources is secured using OpenShift RBAC policy, similar to other built-in Resource types. It is important to note that the Operator never receives the JFR file itself, in whole or in part, so the only information available via its API is information obtained via the Cryostat HTTP API, which a user would also be able to view using the web-client or direct HTTP queries.