The Aggregation Service sizing guidance provides an overview of the supported scale and job latency benchmarks. Ad techs can use it to estimate the instance size to choose for their workload and the job latencies to expect. Today, the service runs each job on a single cloud instance, so latency and supported scale are limited by the availability of appropriately sized cloud instances.
The Aggregation Service team is exploring the possibility of scaling the service horizontally, i.e., using multiple cloud instances to process a single aggregation job in parallel. Horizontal scaling has two potential benefits: (1) reducing job processing latency by parallelizing work across instances, and (2) enabling larger jobs that cannot fit on a single machine due to memory limitations (example). We would like early input and feedback from ad techs to incorporate into our plans.
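To make the idea concrete, here is a minimal, purely illustrative sketch of how a single job could be split across workers and the partial results merged afterwards. The types and helpers (`Contribution`, `partition`, `merge`) are hypothetical and this is not the service's actual design or API; it only shows the general split-aggregate-merge shape that horizontal scaling implies.

```java
// Illustrative sketch only - not the Aggregation Service design or API.
// Idea: partition a job's reports across N workers, aggregate each
// partition independently, then merge the partial histograms. In the
// real service, noise would be added once to the merged result.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HorizontalScalingSketch {

  // Hypothetical per-report contribution: a bucket key and a value.
  record Contribution(String bucketKey, long value) {}

  // Split the reports into roughly equal partitions, one per worker.
  static List<List<Contribution>> partition(List<Contribution> reports, int workers) {
    List<List<Contribution>> parts = new ArrayList<>();
    for (int i = 0; i < workers; i++) parts.add(new ArrayList<>());
    for (int i = 0; i < reports.size(); i++) parts.get(i % workers).add(reports.get(i));
    return parts;
  }

  // Each worker aggregates its own partition into a partial histogram.
  static Map<String, Long> aggregatePartition(List<Contribution> part) {
    Map<String, Long> partial = new HashMap<>();
    for (Contribution c : part) partial.merge(c.bucketKey(), c.value(), Long::sum);
    return partial;
  }

  // Partial histograms are combined into a single result.
  static Map<String, Long> merge(List<Map<String, Long>> partials) {
    Map<String, Long> merged = new HashMap<>();
    for (Map<String, Long> p : partials) p.forEach((k, v) -> merged.merge(k, v, Long::sum));
    return merged;
  }

  public static void main(String[] args) {
    List<Contribution> reports = List.of(
        new Contribution("bucket-1", 3),
        new Contribution("bucket-2", 5),
        new Contribution("bucket-1", 2));
    List<Map<String, Long>> partials = new ArrayList<>();
    for (List<Contribution> part : partition(reports, 2)) {
      partials.add(aggregatePartition(part)); // would run on separate instances
    }
    System.out.println(merge(partials)); // {bucket-1=5, bucket-2=5}
  }
}
```

The open design questions (how reports and the output domain are partitioned, and where the merge happens) are exactly the areas where feedback on your workloads is most useful.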
Some of the areas where we would appreciate your feedback are listed below, but we welcome any and all feedback on this topic.
- Processing scale - do your aggregation batches fit within the report and domain scale covered in the sizing guidance memory guide? (The top row specifies the number of domain keys and the leftmost column the number of reports in a batch. Note that the table does not specify scale limits, only the sizes we have benchmarked.) If your batches are larger than those in the guide, what batch sizes do you expect to process with the Aggregation Service?
- Latency - which of your use cases are latency-sensitive and would benefit from reduced job latency?
We appreciate your time and feedback. Your input is valuable and will help us improve the Aggregation Service.