A Solid System Design For Website Requests
Without the cloud, it is important to understand how the servers communicate with each other. Developers tend to overlook the hardware involved because they have grown dependent on cloud servers.
A server can play different roles on the Internet. Content Delivery Networks (CDNs), Load Balancers, Application Servers, Caching Servers, and Database Servers are used to make pages load efficiently. Each of these servers may or may not exist, depending on how complex the system design needs to be to serve data to the users.
Using a CDN is a way of delegating caching. The business can build its own CDN or hire a third-party company to serve the data.
If the cached website is not served through Akamai or another Content Delivery Network, building a CDN in-house is as complex as building an application server, because the CDN has to work out the user's time zone and nearest location before it can deliver the correct data. Some CDNs, in order to serve dynamic content from cache, vary the static data they return based on the cookies or sessions the browser uses to store state.
A load balancer is entirely different from a DNS server. The DNS server only stores a list of the load balancers' IP addresses. A load balancer's job is to point to the application servers and decide where to place each request; most load balancers come pre-programmed for this, so all you have to do is configure them. Depending on the system design, load balancers distribute requests across servers using Random, Round Robin, Fastest Response, Least Connections, Observed, or Predictive methods. [https://devcentral.f5.com/articles/intro-to-load-balancing-for-developers-ndash-the-algorithms]
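To make these methods concrete, here is a minimal sketch (hypothetical `Server` and `LoadBalancer` classes, not tied to any particular product) of how Random, Round Robin, and Least Connections selection could work:

```python
import itertools
import random

class Server:
    def __init__(self, address):
        self.address = address
        self.active_connections = 0   # tracked by the balancer

class LoadBalancer:
    """Minimal sketch of three common selection algorithms."""
    def __init__(self, servers):
        self.servers = servers
        self._round_robin = itertools.cycle(servers)

    def pick_random(self):
        # Random: any server may receive the request
        return random.choice(self.servers)

    def pick_round_robin(self):
        # Round Robin: cycle through the servers in order
        return next(self._round_robin)

    def pick_least_connections(self):
        # Least Connections: choose the server handling the fewest requests
        return min(self.servers, key=lambda s: s.active_connections)

balancer = LoadBalancer([Server("10.0.0.1"), Server("10.0.0.2"), Server("10.0.0.3")])
target = balancer.pick_round_robin()
print(target.address)
```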
Cache servers and application servers live on the same machine in most system designs. On websites with the heaviest traffic, caching servers and application servers are kept separate.
Some examples of caching software for your application server are Redis, Memcache, and Membase. Cache servers use hash tables: key/value pairs of queries and their results. If the Memcache storage is full, it evicts the least recently used key/value pair.
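As a toy illustration of the hash-table idea (not how Memcache is implemented internally), a key/value cache with least-recently-used eviction might look like this:

```python
from collections import OrderedDict

class LRUCache:
    """Toy key/value cache that evicts the least recently used entry when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # hash table that remembers access order

    def get(self, key):
        if key not in self.store:
            return None                  # cache miss: caller falls back to the database
        self.store.move_to_end(key)      # mark as most recently used
        return self.store[key]

    def set(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry

cache = LRUCache(capacity=2)
cache.set("SELECT name FROM users WHERE id=1", "Alice")
cache.set("SELECT name FROM users WHERE id=2", "Bob")
cache.get("SELECT name FROM users WHERE id=1")            # touch: now most recently used
cache.set("SELECT name FROM users WHERE id=3", "Carol")   # evicts the id=2 entry
```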
If the cache and application servers are separate, the caching software is installed on the cache server's solid-state or hard disk drive, while the code for storing, getting, and updating data on the caching server comes from the application servers. Since the cached data itself is held in memory, it is important for caching servers to have a large amount of RAM.
If the cache and the application run on the same server, which you can also configure with Google App Engine (GAE), the cache can additionally persist data to disk if the caching software uses NoSQL.
GAE Datastore can also act as the cache server by using Memcache. Alternatively, the ndb library in Google App Engine Python lets programmers use GAE Datastore with built-in caching, so it can serve as the backend in place of SQL databases.
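A minimal sketch of that approach, using the classic App Engine Python ndb API (the `Article` model is hypothetical): ndb transparently caches entities in Memcache and an in-context cache, so repeated gets by key are usually served from cache instead of the Datastore.

```python
from google.appengine.ext import ndb

class Article(ndb.Model):          # hypothetical model, for illustration only
    title = ndb.StringProperty()
    body = ndb.TextProperty()

def save_article(title, body):
    article = Article(title=title, body=body)
    return article.put()           # writes to the Datastore and updates ndb's caches

def load_article(key):
    return key.get()               # checks the in-context cache and Memcache before the Datastore
```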
On a single dedicated server with no distribution, and if you are certain the database will never need to scale, you can skip caching software like Memcache. Instead, you can use the programming language's built-in functions that store key/value pairs in RAM. Keep in mind, however, that the database configuration may already do this to pull data quickly.
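For example, Python's standard library ships an in-process memoization helper, so a small single-server site could cache lookups in RAM without any external caching software (the `fetch_user_from_db` function below is a hypothetical stand-in for a real query):

```python
from functools import lru_cache

def fetch_user_from_db(user_id):
    # Hypothetical stand-in for a real database query.
    return {"id": user_id, "name": "user%d" % user_id}

@lru_cache(maxsize=1024)           # keeps up to 1024 key/value pairs in the process's RAM
def fetch_user(user_id):
    return fetch_user_from_db(user_id)   # executed only on a cache miss

fetch_user(1)   # miss: queries the "database" and stores the result
fetch_user(1)   # hit: served straight from RAM
```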
The GAE application makes use of the caching software before it queries the database or performs a calculation. The application should write to the database while updating the cache in RAM, either asynchronously or synchronously. The cache server administrator can flush the entire cache or selectively flush a specific query's key/value pair from RAM.
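A sketch of that read and write flow, assuming the classic GAE Python Memcache API (`query_database` and `update_database` are hypothetical stand-ins for real database calls):

```python
from google.appengine.api import memcache

def query_database(user_id):
    # Hypothetical stand-in for a real Datastore or SQL query.
    return {"id": user_id}

def update_database(user_id, data):
    # Hypothetical stand-in for a real database write.
    pass

def get_user(user_id):
    key = "user:%d" % user_id
    data = memcache.get(key)            # check the cache in RAM first
    if data is None:
        data = query_database(user_id)  # cache miss: fall back to the database
        memcache.set(key, data)         # store the result for the next request
    return data

def save_user(user_id, data):
    update_database(user_id, data)            # write to the database...
    memcache.set("user:%d" % user_id, data)   # ...and synchronously update the cache

# Admins can flush selectively or completely:
#   memcache.delete("user:42")   removes one key/value pair
#   memcache.flush_all()         clears the entire cache
```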
The most important piece of hardware in a cache server is the RAM, which holds the data for quick access, whether the cache and application servers are separate or on the same machine.
Aside from protecting the written data, hashing the cached value is important because it creates a unique identifier for the cached key/value pair when multiple writes happen at the same time.
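One possible way to derive such an identifier (a sketch, assuming SHA-256 is suitable) is to hash the query text to form the cache key and hash the stored value to get a fingerprint that can be compared when concurrent writes are suspected:

```python
import hashlib

def cache_key(query):
    # Fixed-length, collision-resistant identifier derived from the raw query text.
    return "q:" + hashlib.sha256(query.encode("utf-8")).hexdigest()

def value_fingerprint(value):
    # Hash of the stored value; comparing fingerprints shows whether a
    # concurrent write has already replaced the entry.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

key = cache_key("SELECT name FROM users WHERE id=1")
print(key, value_fingerprint("Alice"))
```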
The application server hosts the system you coded for your main application. The application can be a Software as a Service (SaaS) product, a blog, an eCommerce site, or any website serving high traffic. Most processing is done on the application server, so it is critical to keep these servers from being overloaded, which requires a solid system architecture. The code running on the application server should be written with the cache servers in mind.
The application server should be programmed to fetch data from the database servers as rarely as possible. Instead, it should serve data from the cache servers, first checking whether the key/value pair for the query is available in the cache.
Database servers can run MySQL, Oracle SQL, MS SQL, or PostgreSQL. The database servers should only be hit to store data when an insert or update happens, and should act as the fallback when requested data is not available in the caching servers. During the first launch, before the cache warms up, the database servers may be hit frequently.
Note that each database server's configuration also has settings that control how much RAM it uses, which acts as a smaller cache system of its own.
Requests should rarely hit the application servers and the database servers if the CDN and cache servers are configured efficiently. If the database servers start receiving more requests and look overloaded, increasing the capacity of the cache servers first is usually the more efficient move. After that, it is still safe to increase database capacity as well, in case the cache servers go down.