3. Sizing your Deployment
As a microservice-based application, Turbonomic can leverage the advantages of containers to scale a single instance from small to very large IT estates. The amount of resources required to run Turbonomic depends on the number of entities being managed. Entities include VMs, containers/pods, ESX hosts, datastores, and cloud accounts. You can start your environment at any level and scale up or down. The following table provides a guideline for the cluster resources required to run Turbonomic, based on the expected number of entities managed in a single instance:
| Entity scale | Minimum node size for node group | Number of nodes | Approx. total memory consumed (GB) | Node storage (GiB) | PV storage (GiB) - details here |
|---|---|---|---|---|---|
| Standard | 2 CPUs x 32 GB memory | 3+ | 32-64 (average) | 10 | 132 (without DB) |
| Large (80K plus) | 4 CPUs x 32 GB memory | 4+ (fewer when using 64 GB nodes) | 64-128 (average) | 10 | 132 (without DB) |
| X-Large (200K plus) | Contact Turbonomic | | | | |
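As one illustration of the "Large (80K plus)" row, the sketch below shows how a dedicated node group might be defined with eksctl for an EKS cluster. The cluster name, region, node group name, and instance type are assumptions for illustration only, not Turbonomic requirements; other Kubernetes distributions would use their own node pool configuration.

```yaml
# Hypothetical eksctl ClusterConfig fragment sized for the "Large (80K plus)" row.
# Names, region, and instance type are illustrative assumptions.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: turbo-cluster          # assumed cluster name
  region: us-east-1            # assumed region
managedNodeGroups:
  - name: turbonomic-nodes     # assumed node group name
    instanceType: r5.xlarge    # 4 vCPU x 32 GiB, matching the minimum node size above
    desiredCapacity: 4         # "4+" nodes per the table
    minSize: 4
    maxSize: 6
    # Node root volume size is left at the provider default; the table above lists
    # 10 GiB of node storage as the amount Turbonomic itself needs.
    spot: false                # do not run Turbonomic on spot instances (see the note below)
```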
NOTE
- Storage: Turbonomic creates Persistent Volumes using dynamic storage provisioning. See the PV requirements here.
- The total amount of memory and CPU required also depends on the number of different target types, since each target type runs in its own pod.
- The configuration custom resource YAML is supplied by Turbonomic.
- Turbonomic supports namespaces with resource quotas. Contact Turbonomic for a recommended quota and the custom resource YAML to use (see the quota sketch after this list).
- DO NOT USE SPOT INSTANCES in the cloud cluster you deploy Turbonomic into. Spot instances can be reclaimed at any time, which would take Turbonomic down with them.
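If you deploy Turbonomic into a namespace governed by a resource quota, a quota object along the lines of the sketch below could be applied. The namespace name and all figures are placeholders only; request the recommended values from Turbonomic.

```yaml
# Hypothetical ResourceQuota for the namespace running Turbonomic.
# All values are placeholders; contact Turbonomic for the recommended quota.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: turbonomic-quota          # assumed name
  namespace: turbonomic           # assumed namespace
spec:
  hard:
    requests.cpu: "8"             # placeholder
    requests.memory: 64Gi         # placeholder; compare the memory column in the sizing table
    limits.cpu: "16"              # placeholder
    limits.memory: 128Gi          # placeholder
    persistentvolumeclaims: "20"  # placeholder; actual count depends on the PV requirements
```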
Next Step: You are ready to deploy. Proceed to Deployment Steps.