diff --git a/src/content/en/2019/caching.md b/src/content/en/2019/caching.md
index 48aec09e15a..bbb6762f4eb 100644
--- a/src/content/en/2019/caching.md
+++ b/src/content/en/2019/caching.md
@@ -36,7 +36,7 @@ Web architectures typically involve [multiple tiers of caching](https://blog.yoa
* An end user's browser
* A service worker cache in the user's browser
-* A shared gateway
+* A shared gateway
* CDNs, which offer the ability to cache at the edge, close to end users
* A caching proxy in front of the application, to reduce the backend workload
* The application and database layers
@@ -86,7 +86,7 @@ The example below contains an excerpt of a request/response header from HTTP Arc
< ETag: "1566748830.0-3052-3932359948"
```
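As a rough illustration of how a client can reuse these headers, the sketch below revalidates a cached copy with a conditional request. The host, path, and `ETag` value are taken from the excerpt above; whether the live server answers `304` or `200` depends on the current state of the resource, so treat this only as an example of the mechanism.

```python
import http.client

# Revalidate a cached copy by echoing its ETag back in If-None-Match.
conn = http.client.HTTPSConnection("httparchive.org")
conn.request(
    "GET",
    "/static/js/main.js",
    headers={"If-None-Match": '"1566748830.0-3052-3932359948"'},
)
response = conn.getresponse()

# 304 Not Modified means the cached entry can be reused as-is;
# 200 means a fresh body (and usually a new ETag) was returned.
print(response.status, response.getheader("ETag"))
```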
-The tool [RedBot.org](https://redbot.org/) allows you to input a URL and see a detailed explanation of how the response would be cached based on these headers. For example, [a test for the URL above](https://redbot.org/?uri=https%3A%2F%2Fhttparchive.org%2Fstatic%2Fjs%2Fmain.js) would output the following:
+The tool [RedBot.org](https://redbot.org/) allows you to input a URL and see a detailed explanation of how the response would be cached based on these headers. For example, [a test for the URL above](https://redbot.org/?uri=https%3A%2F%2Fhttparchive.org%2Fstatic%2Fjs%2Fmain.js) would output the following:
{{ figure_markup(
image="ch16_fig1_redbot_example.jpg",
@@ -98,7 +98,7 @@ The tool [RedBot.org](https://redbot.org/) allows you to input a URL and see a d
)
}}
-If no caching headers are present in a response, then the [client is permitted to heuristically cache the response](https://paulcalvano.com/index.php/2018/03/14/http-heuristic-caching-missing-cache-control-and-expires-headers-explained/). Most clients implement a variation of the RFC's suggested heuristic, which is 10% of the time since `Last-Modified`. However, some may cache the response indefinitely. So, it is important to set specific caching rules to ensure that you are in control of the cacheability.
+If no caching headers are present in a response, then the [client is permitted to heuristically cache the response](https://paulcalvano.com/index.php/2018/03/14/http-heuristic-caching-missing-cache-control-and-expires-headers-explained/). Most clients implement a variation of the RFC's suggested heuristic, which is 10% of the time since `Last-Modified`. However, some may cache the response indefinitely. So, it is important to set specific caching rules to ensure that you are in control of the cacheability.
72% of responses are served with a `Cache-Control` header, and 56% of responses are served with an `Expires` header. However, 27% of responses did not use either header, and therefore are subject to heuristic caching. This is consistent across both desktop and mobile sites.
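As a rough sketch of the heuristic described above (not a normative implementation, and real clients cap the result differently), the snippet below estimates a freshness lifetime as 10% of the time elapsed since `Last-Modified`, using the `Last-Modified` value from the header excerpt shown earlier.

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def heuristic_ttl_seconds(last_modified_header, now=None):
    """Estimate a freshness lifetime as 10% of the resource's age,
    the heuristic suggested by RFC 7234 when no explicit caching headers are sent."""
    now = now or datetime.now(timezone.utc)
    last_modified = parsedate_to_datetime(last_modified_header)
    age = (now - last_modified).total_seconds()
    return max(age, 0) * 0.10

# Roughly 10% of the time since this Last-Modified date, in seconds.
print(heuristic_ttl_seconds("Sun, 25 Aug 2019 16:00:30 GMT"))
```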
@@ -113,7 +113,7 @@ If no caching headers are present in a response, then the [client is permitted t
## What type of content are we caching?
-A cacheable resource is stored by the client for a period of time and available for reuse on a subsequent request. Across all HTTP requests, 80% of responses are considered cacheable, meaning that a cache is permitted to store them. Out of these,
+A cacheable resource is stored by the client for a period of time and available for reuse on a subsequent request. Across all HTTP requests, 80% of responses are considered cacheable, meaning that a cache is permitted to store them. Out of these,
* 6% of requests have a time to live (TTL) of 0 seconds, which immediately invalidates a cached entry.
* 27% are cached heuristically because of a missing `Cache-Control` header.
@@ -235,7 +235,7 @@ The table below details the cache TTL values for desktop requests by type. Most
While most of the median TTLs are high, the lower percentiles highlight some of the missed caching opportunities. For example, the median TTL for images is 28 hours, however the 25th percentile is just one-two hours and the 10th percentile indicates that 10% of cacheable image content is cached for less than one hour.
-By exploring the cacheability by content type in more detail in Figure 16.5 below, we can see that approximately half of all HTML responses are considered non-cacheable. Additionally, 16% of images and scripts are non-cacheable.
+By exploring the cacheability by content type in more detail in Figure 16.5 below, we can see that approximately half of all HTML responses are considered non-cacheable. Additionally, 16% of images and scripts are non-cacheable.
{{ figure_markup(
image="fig5.png",
@@ -269,7 +269,7 @@ HTTP/1.1 introduced the `Cache-Control` header, and most modern clients support
* `must-revalidate` tells the client a cached entry must be validated with a conditional request prior to its use.
* `private` indicates a response should only be cached by a browser, and not by an intermediary that would serve multiple clients.
-53% of HTTP responses include a `Cache-Control` header with the `max-age` directive, and 54% include the Expires header. However, only 41% of these responses use both headers, which means that 13% of responses are caching solely based on the older `Expires` header.
+53% of HTTP responses include a `Cache-Control` header with the `max-age` directive, and 54% include the Expires header. However, only 41% of these responses use both headers, which means that 13% of responses are caching solely based on the older `Expires` header.
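To make the precedence concrete, here is a minimal, hedged sketch of how a client might derive a TTL from these headers, preferring `max-age` over the older `Expires` when both are present; the function name is illustrative and real caches parse these headers far more carefully. The example values match the header excerpt earlier in the chapter.

```python
import re
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def response_ttl_seconds(headers):
    """Derive a TTL from Cache-Control max-age, falling back to Expires.

    `headers` is a dict of lowercase response header names to values."""
    cache_control = headers.get("cache-control", "")
    match = re.search(r"max-age=(\d+)", cache_control)
    if match:
        return int(match.group(1))  # max-age wins when both headers are present
    if "expires" in headers:
        expires = parsedate_to_datetime(headers["expires"])
        return (expires - datetime.now(timezone.utc)).total_seconds()
    return None  # no explicit TTL: the response may be cached heuristically

print(response_ttl_seconds({
    "cache-control": "public, max-age=43200",
    "expires": "Mon, 14 Oct 2019 07:36:57 GMT",
}))  # 43200, even though Expires is also present
```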
{{ figure_markup(
image="fig7.png",
@@ -342,7 +342,7 @@ The HTTP/1.1 [specification](https://tools.ietf.org/html/rfc7234#section-5.2.1)
Cache-Control
directives.") }}
A word of caution when interpreting these charts: it is important to focus on orders of magnitude when comparing vendors as there are many factors that impact the actual TLS negotiation performance. These tests were completed from a single datacenter under controlled conditions and do not reflect the variability of the internet and user experiences.
@@ -649,7 +649,7 @@ In contrast, the median TLS negotiation for the majority of CDN providers is bet
-It is important to emphasize that Chrome used in the Web Almanac will bias to the latest TLS versions and ciphers offered by the host. Also, these web pages were crawled in July 2019 and reflect the adoption of websites that have enabled the newer versions.
+It is important to emphasize that Chrome used in the Web Almanac will bias to the latest TLS versions and ciphers offered by the host. Also, these web pages were crawled in July 2019 and reflect the adoption of websites that have enabled the newer versions.
{{ figure_markup(
image="fig18.png",
@@ -1071,7 +1071,7 @@ More discussion of TLS versions and ciphers can be found in the [Security](./sec

## HTTP/2 adoption

-Along with RTT management and improving TLS performance, CDNs also enable new standards like HTTP/2 and IPv6. While most CDNs offer support for HTTP/2 and many have signaled early support of the still-under-standards-development HTTP/3, adoption still depends on website owners to enable these new features. Despite the change-management overhead, the majority of the HTML served from CDNs has HTTP/2 enabled.
+Along with RTT management and improving TLS performance, CDNs also enable new standards like HTTP/2 and IPv6. While most CDNs offer support for HTTP/2 and many have signaled early support of the still-under-standards-development HTTP/3, adoption still depends on website owners to enable these new features. Despite the change-management overhead, the majority of the HTML served from CDNs has HTTP/2 enabled.

CDNs have over 70% adoption of HTTP/2, compared to the nearly 27% of origin pages. Similarly, sub-domain and third-party resources on CDNs see an even higher adoption of HTTP/2 at 90% or higher while third-party resources served from origin infrastructure only has 31% adoption. The performance gains and other features of HTTP/2 are further covered in the [HTTP/2](./http2) chapter.
@@ -1491,7 +1491,7 @@ CDNs have over 70% adoption of HTTP/2, compared to the nearly 27% of origin page

A website can control the caching behavior of browsers and CDNs with the use of different HTTP headers. The most common is the `Cache-Control` header which specifically determines how long something can be cached before returning to the origin to ensure it is up-to-date.

-Another useful tool is the use of the `Vary` HTTP header. This header instructs both CDNs and browsers how to fragment a cache. The `Vary` header allows an origin to indicate that there are multiple representations of a resource, and the CDN should cache each variation separately. The most common example is [compression](./compression). Declaring a resource as `Vary: Accept-Encoding` allows the CDN to cache the same content, but in different forms like uncompressed, with gzip, or Brotli. Some CDNs even do this compression on the fly so as to keep only one copy available. This `Vary` header likewise also instructs the browser how to cache the content and when to request new content.
+Another useful tool is the use of the `Vary` HTTP header. This header instructs both CDNs and browsers how to fragment a cache. The `Vary` header allows an origin to indicate that there are multiple representations of a resource, and the CDN should cache each variation separately. The most common example is [compression](./compression). Declaring a resource as `Vary: Accept-Encoding` allows the CDN to cache the same content, but in different forms like uncompressed, with Gzip, or Brotli. Some CDNs even do this compression on the fly so as to keep only one copy available. This `Vary` header likewise also instructs the browser how to cache the content and when to request new content.

{{ figure_markup(
image="use_of_vary_on_cdn.png",
@@ -1507,7 +1507,7 @@ While the main use of `Vary` is to coordinate `Content-Encoding`, there are othe

For HTML pages, the most common use of `Vary` is to signal that the content will change based on the `User-Agent`. This is short-hand to indicate that the website will return different content for desktops, phones, tablets, and link-unfurling engines (like Slack, iMessage, and Whatsapp). The use of `Vary: User-Agent` is also a vestige of the early mobile era, where content was split between "mDot" servers and "regular" servers in the back-end. While the adoption for responsive web has gained wide popularity, this `Vary` form remains.

-In a similar way, `Vary: Cookie` usually indicates that content that will change based on the logged-in state of the user or other personalization.
+In a similar way, `Vary: Cookie` usually indicates that content that will change based on the logged-in state of the user or other personalization.

{{ figure_markup(
image="use_of_vary.png",
@@ -1519,7 +1519,7 @@ In a similar way, `Vary: Cookie` usually indicates that content that will change

Resources, in contrast, don't use `Vary: Cookie` as much as the HTML resources. Instead these resources are more likely to adapt based on the `Accept`, `Origin`, or `Referer`. Most media, for example, will use `Vary: Accept` to indicate that an image could be a JPEG, WebP, JPEG 2000, or JPEG XR depending on the browser's offered `Accept` header. In a similar way, third-party shared resources signal that an XHR API will differ depending on which website it is embedded. This way, a call to an ad server API will return different content depending on the parent website that called the API.

-The `Vary` header also contains evidence of CDN chains. These can be seen in `Vary` headers such as `Accept-Encoding, Accept-Encoding` or even `Accept-Encoding, Accept-Encoding, Accept-Encoding`. Further analysis of these chains and `Via` header entries might reveal interesting data, for example how many sites are proxying third-party tags.
+The `Vary` header also contains evidence of CDN chains. These can be seen in `Vary` headers such as `Accept-Encoding, Accept-Encoding` or even `Accept-Encoding, Accept-Encoding, Accept-Encoding`. Further analysis of these chains and `Via` header entries might reveal interesting data, for example how many sites are proxying third-party tags.

Many of the uses of the `Vary` are extraneous. With most browsers adopting double-key caching, the use of `Vary: Origin` is redundant. As is `Vary: Range` or `Vary: Host` or `Vary: *`. The wild and variable use of `Vary` is demonstrable proof that the internet is weird.
diff --git a/src/content/en/2019/compression.md b/src/content/en/2019/compression.md
index c05df2bdaa7..7faee3de7d5 100644
--- a/src/content/en/2019/compression.md
+++ b/src/content/en/2019/compression.md
@@ -15,7 +15,7 @@ featured_quote: HTTP compression is a technique that allows you to encode inform
featured_stat_1: 38%
featured_stat_label_1: HTTP responses using text-based compression
featured_stat_2: 80%
-featured_stat_label_2: Use of gzip compression
+featured_stat_label_2: Use of Gzip compression
featured_stat_3: 56%
featured_stat_label_3: HTML responses not using compression
---
@@ -24,19 +24,19 @@ featured_stat_label_3: HTML responses not using compression

HTTP compression is a technique that allows you to encode information using fewer bits than the original representation. When used for delivering web content, it enables web servers to reduce the amount of data transmitted to clients. This increases the efficiency of the client's available bandwidth, reduces [page weight](./page-weight), and improves [web performance](./performance).
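As a minimal, hedged sketch of the negotiation described below under "How HTTP compression works" (not how any particular server implements it), a server might pick the strongest encoding the client advertises and label the response accordingly; the function names here are illustrative only.

```python
import gzip

def negotiate_encoding(accept_encoding):
    """Pick a Content-Encoding from the client's Accept-Encoding header."""
    offered = {token.split(";")[0].strip() for token in accept_encoding.split(",")}
    for encoding in ("br", "gzip"):  # prefer Brotli, then Gzip
        if encoding in offered:
            return encoding
    return "identity"

def compress_body(body, encoding):
    if encoding == "gzip":
        return gzip.compress(body)
    if encoding == "br":
        # Brotli lives in a third-party package (e.g. `brotli`); assumed available here.
        import brotli
        return brotli.compress(body)
    return body

# With "br" advertised (and the brotli package installed), Brotli would win instead.
chosen = negotiate_encoding("gzip, deflate")
body = b"<html>" + b"hello web almanac " * 500 + b"</html>"
print(chosen, len(body), len(compress_body(body, chosen)))
```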
-Compression algorithms are often categorized as lossy or lossless:
+Compression algorithms are often categorized as lossy or lossless:

* When a lossy compression algorithm is used, the process is irreversible, and the original file cannot be restored via decompression. This is commonly used to compress media resources, such as image and video content where losing some data will not materially affect the resource.
-* Lossless compression is a completely reversible process, and is commonly used to compress text based resources, such as [HTML](./markup), [JavaScript](./javascript), [CSS](./css), etc.
+* Lossless compression is a completely reversible process, and is commonly used to compress text based resources, such as [HTML](./markup), [JavaScript](./javascript), [CSS](./css), etc.

In this chapter, we are going to explore how text-based content is compressed on the web. Analysis of non-text-based content forms part of the [Media](./media) chapter.

## How HTTP compression works

-When a client makes an HTTP request, it often includes an [`Accept-Encoding`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Encoding) header to advertise the compression algorithms it is capable of decoding. The server can then select from one of the advertised encodings it supports and serve a compressed response. The compressed response would include a [`Content-Encoding`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding) header so that the client is aware of which compression was used. Additionally, a [`Content-Type`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type) header is often used to indicate the [MIME type](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types) of the resource being served.
+When a client makes an HTTP request, it often includes an [`Accept-Encoding`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Encoding) header to advertise the compression algorithms it is capable of decoding. The server can then select from one of the advertised encodings it supports and serve a compressed response. The compressed response would include a [`Content-Encoding`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding) header so that the client is aware of which compression was used. Additionally, a [`Content-Type`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type) header is often used to indicate the [MIME type](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types) of the resource being served.

-In the example below, the client advertised support for gzip, brotli, and deflate compression. The server decided to return a gzip compressed response containing a `text/html` document.
+In the example below, the client advertised support for Gzip, Brotli, and Deflate compression. The server decided to return a Gzip compressed response containing a `text/html` document.
```
@@ -44,7 +44,7 @@ In the example below, the client advertised support for gzip, brotli, and deflat
> Host: httparchive.org
> Accept-Encoding: gzip, deflate, br

-< HTTP/1.1 200
+< HTTP/1.1 200
< Content-type: text/html; charset=utf-8
< Content-encoding: gzip
```

@@ -55,11 +55,11 @@ The HTTP Archive contains measurements for 5.3 million web sites, and each site

## Compression algorithms

-IANA maintains a [list of valid HTTP content encodings](https://www.iana.org/assignments/http-parameters/http-parameters.xml#content-coding) that can be used with the `Accept-Encoding` and `Content-Encoding` headers. These include gzip, deflate, br (brotli), as well as a few others. Brief descriptions of these algorithms are given below:
+IANA maintains a [list of valid HTTP content encodings](https://www.iana.org/assignments/http-parameters/http-parameters.xml#content-coding) that can be used with the `Accept-Encoding` and `Content-Encoding` headers. These include `gzip`, `deflate`, `br` (Brotli), as well as a few others. Brief descriptions of these algorithms are given below:

-* [Gzip](https://tools.ietf.org/html/rfc1952) uses the [LZ77](https://en.wikipedia.org/wiki/LZ77_and_LZ78#LZ77) and [Huffman coding](https://en.wikipedia.org/wiki/Huffman_coding) compression techniques, and is older than the web itself. It was originally developed for the UNIX gzip program in 1992. An implementation for web delivery has existed since HTTP/1.1, and most web browsers and clients support it.
-* [Deflate](https://tools.ietf.org/html/rfc1951) uses the same algorithm as gzip, just with a different container. Its use was not widely adopted for the web because of [compatibility issues](https://en.wikipedia.org/wiki/HTTP_compression#Problems_preventing_the_use_of_HTTP_compression) with some servers and browsers.
-* [Brotli](https://tools.ietf.org/html/rfc7932) is a newer compression algorithm that was [invented by Google](https://github.com/google/brotli). It uses the combination of a modern variant of the LZ77 algorithm, Huffman coding, and second order context modeling. Compression via brotli is more computationally expensive compared to gzip, but the algorithm is able to reduce files by [15-25%](https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf) more than gzip compression. Brotli was first used for compressing web content in 2015 and is [supported by all modern web browsers](https://caniuse.com/#feat=brotli).
+* [Gzip](https://tools.ietf.org/html/rfc1952) uses the [LZ77](https://en.wikipedia.org/wiki/LZ77_and_LZ78#LZ77) and [Huffman coding](https://en.wikipedia.org/wiki/Huffman_coding) compression techniques, and is older than the web itself. It was originally developed for the UNIX `gzip` program in 1992. An implementation for web delivery has existed since HTTP/1.1, and most web browsers and clients support it.
+* [Deflate](https://tools.ietf.org/html/rfc1951) uses the same algorithm as Gzip, just with a different container. Its use was not widely adopted for the web because of [compatibility issues](https://en.wikipedia.org/wiki/HTTP_compression#Problems_preventing_the_use_of_HTTP_compression) with some servers and browsers.
+* [Brotli](https://tools.ietf.org/html/rfc7932) is a newer compression algorithm that was [invented by Google](https://github.com/google/brotli). It uses the combination of a modern variant of the LZ77 algorithm, Huffman coding, and second order context modeling.
Compression via Brotli is more computationally expensive compared to Gzip, but the algorithm is able to reduce files by [15-25%](https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf) more than Gzip compression. Brotli was first used for compressing web content in 2015 and is [supported by all modern web browsers](https://caniuse.com/#feat=brotli). Approximately 38% of HTTP responses are delivered with text-based compression. This may seem like a surprising statistic, but keep in mind that it is based on all HTTP requests in the dataset. Some content, such as images, will not benefit from these compression algorithms. The table below summarizes the percentage of requests served with each content encoding. @@ -88,21 +88,21 @@ Approximately 38% of HTTP responses are delivered with text-based compression. Tgzip
br
deflate
identity
x-gzip
compress
x-compress
58.26% | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
gzip | +gzip |
29.33% | 30.20% | 30.87% | 31.22% | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
br | +br |
4.41% | 10.49% | 4.56% | 10.49% | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
deflate | +deflate |
0.02% | 0.01% | 0.02% | @@ -363,8 +363,8 @@ Lighthouse also indicates how many bytes could be saved by enabling text-based c ## Conclusion -HTTP compression is a widely used and highly valuable feature for reducing the size of web content. Both gzip and brotli compression are the dominant algorithms used, and the amount of compressed content varies by content type. Tools like Lighthouse can help uncover opportunities to compress content. +HTTP compression is a widely used and highly valuable feature for reducing the size of web content. Both Gzip and Brotli compression are the dominant algorithms used, and the amount of compressed content varies by content type. Tools like Lighthouse can help uncover opportunities to compress content. While many sites are making good use of HTTP compression, there is still room for improvement, particularly for the `text/html` format that the web is built upon! Similarly, lesser-understood text formats like `font/ttf`, `application/json`, `text/xml`, `text/plain`, `image/svg+xml`, and `image/x-icon` may take extra configuration that many websites miss. -At a minimum, websites should use gzip compression for all text-based resources, since it is widely supported, easily implemented, and has a low processing overhead. Additional savings can be found with brotli compression, although compression levels should be chosen carefully based on whether a resource can be precompressed. +At a minimum, websites should use Gzip compression for all text-based resources, since it is widely supported, easily implemented, and has a low processing overhead. Additional savings can be found with Brotli compression, although compression levels should be chosen carefully based on whether a resource can be precompressed. diff --git a/src/content/en/2019/http2.md b/src/content/en/2019/http2.md index e32043991b1..e136bb9ca58 100644 --- a/src/content/en/2019/http2.md +++ b/src/content/en/2019/http2.md @@ -34,7 +34,7 @@ The protocol seemed simple, but it also came with limitations. Because HTTP was That in itself brings its own issues as TCP connections take time and resources to set up and get to full efficiency, especially when using HTTPS, which requires additional steps to set up the encryption. HTTP/1.1 improved this somewhat, allowing reuse of TCP connections for subsequent requests, but still did not solve the parallelization issue. -Despite HTTP being text-based, the reality is that it was rarely used to transport text, at least in its raw format. While it was true that HTTP headers were still text, the payloads themselves often were not. Text files like [HTML](./markup), [JS](./javascript), and [CSS](./css) are usually [compressed](./compression) for transport into a binary format using gzip, brotli, or similar. Non-text files like [images and videos](./media) are served in their own formats. The whole HTTP message is then often wrapped in HTTPS to encrypt the messages for [security](./security) reasons. +Despite HTTP being text-based, the reality is that it was rarely used to transport text, at least in its raw format. While it was true that HTTP headers were still text, the payloads themselves often were not. Text files like [HTML](./markup), [JS](./javascript), and [CSS](./css) are usually [compressed](./compression) for transport into a binary format using Gzip, Brotli, or similar. Non-text files like [images and videos](./media) are served in their own formats. 
The whole HTTP message is then often wrapped in HTTPS to encrypt the messages for [security](./security) reasons.

So, the web had basically moved on from text-based transport a long time ago, but HTTP had not. One reason for this stagnation was because it was so difficult to introduce any breaking changes to such a ubiquitous protocol like HTTP (previous efforts had tried and failed). Many routers, firewalls, and other middleboxes understood HTTP and would react badly to major changes to it. Upgrading them all to support a new version was simply not possible.
diff --git a/src/content/en/2019/javascript.md b/src/content/en/2019/javascript.md
index a9f53c6554a..693ccb1ab9b 100644
--- a/src/content/en/2019/javascript.md
+++ b/src/content/en/2019/javascript.md
@@ -146,8 +146,8 @@ In the context of browser-server interactions, resource compression refers to co

There are multiple text-compression algorithms, but only two are mostly used for the compression (and decompression) of HTTP network requests:

-- [Gzip](https://www.gzip.org/) (gzip): The most widely used compression format for server and client interactions
-- [Brotli](https://github.com/google/brotli) (br): A newer compression algorithm aiming to further improve compression ratios. [90% of browsers](https://caniuse.com/#feat=brotli) support Brotli encoding.
+- [Gzip](https://www.gzip.org/) (`gzip`): The most widely used compression format for server and client interactions
+- [Brotli](https://github.com/google/brotli) (`br`): A newer compression algorithm aiming to further improve compression ratios. [90% of browsers](https://caniuse.com/#feat=brotli) support Brotli encoding.

Compressed scripts will always need to be uncompressed by the browser once transferred. This means its content remains the same and execution times are not optimized whatsoever. Resource compression, however, will always improve download times which also is one of the most expensive stages of JavaScript processing. Ensuring JavaScript files are compressed correctly can be one of the most significant factors in improving site performance.
@@ -155,8 +155,8 @@ How many sites are compressing their JavaScript resources?

{{ figure_markup(
image="fig10.png",
- caption="Percentage of sites compressing JavaScript resources with gzip or brotli.",
- description="Bar chart showing 67%/65% of JavaScript resources are compressed with gzip on desktop and mobile respectively, and 15%/14% are compressed using Brotli.",
+ caption="Percentage of sites compressing JavaScript resources with Gzip or Brotli.",
+ description="Bar chart showing 67%/65% of JavaScript resources are compressed with Gzip on desktop and mobile respectively, and 15%/14% are compressed using Brotli.",
chart_url="https://docs.google.com/spreadsheets/d/e/2PACX-1vTpzDb9HGbdVvin6YPTOmw11qBVGGysltxmH545fUfnqIThAq878F_b-KxUo65IuXaeFVSnlmJ5K1Dm/pubchart?oid=241928028&format=interactive"
) }}
diff --git a/src/content/en/2020/caching.md b/src/content/en/2020/caching.md
index 454165fe311..3a9e9616fdf 100644
--- a/src/content/en/2020/caching.md
+++ b/src/content/en/2020/caching.md
@@ -503,7 +503,7 @@ One large source of invalid `Expires` headers is from assets served from a popul

## The `Vary` header

-We have discussed how a caching entity can determine whether a response object is cacheable, and for how long it can be cached. However, one of the most important steps the caching entity must take is determining if the resource being requested is already in its cache. While this may seem simple, many times the URL alone is not enough to determine this. For example, requests with the same URL could vary in what compression they used (gzip, brotli, etc.) or could be returned in different encodings (XML, JSON etc.).
+We have discussed how a caching entity can determine whether a response object is cacheable, and for how long it can be cached. However, one of the most important steps the caching entity must take is determining if the resource being requested is already in its cache. While this may seem simple, many times the URL alone is not enough to determine this. For example, requests with the same URL could vary in what compression they used (Gzip, Brotli, etc.) or could be returned in different encodings (XML, JSON etc.).

To solve this problem, when a caching entity caches an object, it gives the object a unique identifier (a cache key). When it needs to determine whether the object is in its cache, it checks for the existence of the object using the cache key as a lookup. By default, this cache key is simply the URL used to retrieve the object, but servers can tell the caching entity to include other 'attributes' of the response (such as compression method) in the cache key, by including the Vary response header, to ensure that the correct object is subsequently retrieved from cache - the `Vary` header identifies 'variants' of the object, based on factors other than the URL.
diff --git a/src/content/es/2019/javascript.md b/src/content/es/2019/javascript.md
index 07040f4d296..4897f095905 100644
--- a/src/content/es/2019/javascript.md
+++ b/src/content/es/2019/javascript.md
@@ -146,8 +146,8 @@ En el contexto de las interacciones navegador-servidor, la compresión de recurs

Existen varios algoritmos de compresión de texto, pero solo dos se utilizan principalmente para la compresión (y descompresión) de solicitudes de red HTTP:

-- [Gzip](https://www.gzip.org/) (gzip): El formato de compresión más utilizado para las interacciones de servidor y cliente.
-- [Brotli](https://github.com/google/brotli) (br): Un algoritmo de compresión más nuevo que apunta a mejorar aún más las relaciones de compresión. [90% de los navegadores](https://caniuse.com/#feat=brotli) soportan la codificación Brotli.
+- [Gzip](https://www.gzip.org/) (`gzip`): El formato de compresión más utilizado para las interacciones de servidor y cliente.
+- [Brotli](https://github.com/google/brotli) (`br`): Un algoritmo de compresión más nuevo que apunta a mejorar aún más las relaciones de compresión. [90% de los navegadores](https://caniuse.com/#feat=brotli) soportan la codificación Brotli.

Los scripts comprimidos siempre deberán ser descomprimidos por el navegador una vez transferidos. Esto significa que su contenido sigue siendo el mismo y los tiempos de ejecución no están optimizados en absoluto. Sin embargo, la compresión de recursos siempre mejorará los tiempos de descarga, que también es una de las etapas más caras del procesamiento de JavaScript. Asegurarse de que los archivos JavaScript se comprimen correctamente puede ser uno de los factores más importantes para mejorar el rendimiento del sitio.
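Tying back to the `Vary` and cache-key discussion in the caching chapters above, the hedged sketch below shows one simple way a cache could fold varied request headers into its cache key; the function name and the example URL are placeholders, and real CDNs normalize and bound these keys in more sophisticated ways.

```python
def cache_key(url, vary_header, request_headers):
    """Build a cache key from the URL plus the request headers named by Vary."""
    varied = []
    if vary_header:
        for name in vary_header.split(","):
            name = name.strip().lower()
            varied.append((name, request_headers.get(name, "")))
    return (url, tuple(sorted(varied)))

# Two requests for the same URL with different Accept-Encoding values map to
# different cache entries once the response carries Vary: Accept-Encoding.
print(cache_key("https://example.com/app.js", "Accept-Encoding",
                {"accept-encoding": "gzip"}))
print(cache_key("https://example.com/app.js", "Accept-Encoding",
                {"accept-encoding": "br"}))
```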
@@ -155,8 +155,8 @@ Los scripts comprimidos siempre deberán ser descomprimidos por el navegador una {{ figure_markup( image="fig10.png", - caption="Porcentaje de sitios que comprimen recursos de JavaScript con gzip o brotli.", - description="Gráfico de barras que muestra el 67% / 65% de los recursos de JavaScript se comprime con gzip en computadoras de escritorio y dispositivos móviles respectivamente, y el 15% / 14% se comprime con Brotli.", + caption="Porcentaje de sitios que comprimen recursos de JavaScript con Gzip o Brotli.", + description="Gráfico de barras que muestra el 67% / 65% de los recursos de JavaScript se comprime con Gzip en computadoras de escritorio y dispositivos móviles respectivamente, y el 15% / 14% se comprime con Brotli.", chart_url="https://docs.google.com/spreadsheets/d/e/2PACX-1vTpzDb9HGbdVvin6YPTOmw11qBVGGysltxmH545fUfnqIThAq878F_b-KxUo65IuXaeFVSnlmJ5K1Dm/pubchart?oid=241928028&format=interactive" ) }} diff --git a/src/content/fr/2019/caching.md b/src/content/fr/2019/caching.md index 2a7e95571ad..bc56431393e 100644 --- a/src/content/fr/2019/caching.md +++ b/src/content/fr/2019/caching.md @@ -542,7 +542,7 @@ La plus grande source d'en-têtes `Expires` invalides provient de ressources ser ## En-tête `Vary` -L'une des étapes les plus importantes de la mise en cache est de déterminer si la ressource demandée est mise en cache ou non. Bien que cela puisse paraître simple, il arrive souvent que l'URL seule ne suffise pas à le déterminer. Par exemple, les requêtes ayant la même URL peuvent varier en fonction de la [compression](./compression) utilisée (gzip, brotli, etc.) ou être modifiées et adaptées aux visiteurs mobiles. +L'une des étapes les plus importantes de la mise en cache est de déterminer si la ressource demandée est mise en cache ou non. Bien que cela puisse paraître simple, il arrive souvent que l'URL seule ne suffise pas à le déterminer. Par exemple, les requêtes ayant la même URL peuvent varier en fonction de la [compression](./compression) utilisée (Gzip, Brotli, etc.) ou être modifiées et adaptées aux visiteurs mobiles. Pour résoudre ce problème, les clients donnent à chaque ressource mise en cache un identifiant unique (une clé de cache). Par défaut, cette clé de cache est simplement l'URL de la ressource, mais les développeurs et développeuses peuvent ajouter d'autres éléments (comme la méthode de compression) en utilisant l'en-tête `Vary`. diff --git a/src/content/fr/2019/compression.md b/src/content/fr/2019/compression.md index 8137434dd52..bb9413bd625 100644 --- a/src/content/fr/2019/compression.md +++ b/src/content/fr/2019/compression.md @@ -15,7 +15,7 @@ featured_quote: La compression HTTP est une technique qui permet de coder des in featured_stat_1: 38 % featured_stat_label_1: Réponses HTTP avec compression de texte featured_stat_2: 80 % -featured_stat_label_2: Utilisent la compression gzip +featured_stat_label_2: Utilisent la compression Gzip featured_stat_3: 56 % featured_stat_label_3: Réponses HTML n'utilisant pas de compression --- @@ -36,7 +36,7 @@ Dans ce chapitre, nous allons analyser comment le contenu textuel est compressé Lorsqu’un client effectue une requête HTTP, celle-ci comprend souvent un en-tête [`Accept-Encoding`](https://developer.mozilla.org/fr/docs/Web/HTTP/Headers/Accept-Encoding) pour communiquer les algorithmes qu’il est capable de décoder. Le serveur peut alors choisir parmi eux un encodage qu’il prend en charge et servir la réponse compressée. 
La réponse compressée comprendra un en-tête [`Content-Encoding`](https://developer.mozilla.org/fr/docs/Web/HTTP/Headers/Content-Encoding) afin que le client sache quelle compression a été utilisée. En outre, l’en-tête [`Content-Type`](https://developer.mozilla.org/fr/docs/Web/HTTP/Headers/Content-Type) est souvent utilisé pour indiquer le [type MIME](https://developer.mozilla.org/fr/docs/Web/HTTP/Basics_of_HTTP/MIME_types) de la ressource servie. -Dans l’exemple ci-dessous, le client indique supporter la compression gzip, brotli et deflate. Le serveur a décidé de renvoyer une réponse compressée avec gzip contenant un document `text/html`. +Dans l’exemple ci-dessous, le client indique supporter la compression Gzip, Brotli et deflate. Le serveur a décidé de renvoyer une réponse compressée avec Gzip contenant un document `text/html`. ``` @@ -55,11 +55,11 @@ HTTP Archive contient des mesures pour 5,3 millions de sites web, et chaque site ## Algorithmes de compression -L’IANA tient à jour une [liste des encodages de contenu HTTP valide](https://www.iana.org/assignments/http-parameters/http-parameters.xml#content-coding) qui peuvent être utilisés avec les en-têtes « Accept-Encoding » et « Content-Encoding ». On y retrouve notamment gzip, deflate, br (brotli), ainsi que de quelques autres. De brèves descriptions de ces algorithmes sont données ci-dessous : +L’IANA tient à jour une [liste des encodages de contenu HTTP valide](https://www.iana.org/assignments/http-parameters/http-parameters.xml#content-coding) qui peuvent être utilisés avec les en-têtes « Accept-Encoding » et « Content-Encoding ». On y retrouve notamment `gzip`, `deflate`, `br` (Brotli), ainsi que de quelques autres. De brèves descriptions de ces algorithmes sont données ci-dessous : -* [Gzip](https://tools.ietf.org/html/rfc1952) utilise les techniques de compression [LZ77](https://fr.wikipedia.org/wiki/LZ77_et_LZ78#LZ77) et [le codage de Huffman](https://fr.wikipedia.org/wiki/Codage_de_Huffman) qui sont plus ancienes que le web lui-même. Elles ont été développés à l’origine pour le programme gzip d’UNIX en 1992. Une implémentation pour la diffusion sur le web existe depuis HTTP/1.1, et la plupart des navigateurs et clients web la prennent en charge. -* [Deflate](https://tools.ietf.org/html/rfc1951) utilise le même algorithme que gzip, mais avec un conteneur différent. Son utilisation n’a pas été largement adoptée sur le web pour des [raisons de compatibilité](https://en.wikipedia.org/wiki/HTTP_compression#Problems_preventing_the_use_of_HTTP_compression) avec d’autres serveurs et navigateurs. -* [Brotli](https://tools.ietf.org/html/rfc7932) est un algorithme de compression plus récent qui a été [inventé par Google](https://github.com/google/brotli). Il utilise la combinaison d’une variante moderne de l’algorithme LZ77, le codage de Huffman et la modélisation du contexte du second ordre. La compression via brotli est plus coûteuse en termes de calcul par rapport à gzip, mais l’algorithme est capable de réduire les fichiers de [15-25 %](https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf) de plus que la compression gzip. Brotli a été utilisé pour la première fois pour la compression de contenu web en 2015 et est [supporté par tous les navigateurs web modernes](https://caniuse.com/#feat=brotli). 
+* [Gzip](https://tools.ietf.org/html/rfc1952) utilise les techniques de compression [LZ77](https://fr.wikipedia.org/wiki/LZ77_et_LZ78#LZ77) et [le codage de Huffman](https://fr.wikipedia.org/wiki/Codage_de_Huffman) qui sont plus ancienes que le web lui-même. Elles ont été développés à l’origine pour le programme `gzip` d’UNIX en 1992. Une implémentation pour la diffusion sur le web existe depuis HTTP/1.1, et la plupart des navigateurs et clients web la prennent en charge. +* [Deflate](https://tools.ietf.org/html/rfc1951) utilise le même algorithme que Gzip, mais avec un conteneur différent. Son utilisation n’a pas été largement adoptée sur le web pour des [raisons de compatibilité](https://en.wikipedia.org/wiki/HTTP_compression#Problems_preventing_the_use_of_HTTP_compression) avec d’autres serveurs et navigateurs. +* [Brotli](https://tools.ietf.org/html/rfc7932) est un algorithme de compression plus récent qui a été [inventé par Google](https://github.com/google/brotli). Il utilise la combinaison d’une variante moderne de l’algorithme LZ77, le codage de Huffman et la modélisation du contexte du second ordre. La compression via Brotli est plus coûteuse en termes de calcul par rapport à Gzip, mais l’algorithme est capable de réduire les fichiers de [15-25 %](https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf) de plus que la compression Gzip. Brotli a été utilisé pour la première fois pour la compression de contenu web en 2015 et est [supporté par tous les navigateurs web modernes](https://caniuse.com/#feat=brotli). Environ 38 % des réponses HTTP sont fournies avec de la compression de texte. Cette statistique peut sembler surprenante, mais n’oubliez pas qu’elle est basée sur toutes les requêtes HTTP de l’ensemble de données. Certains contenus, tels que les images, ne bénéficieront pas de ces algorithmes de compression. Le tableau ci-dessous résume le pourcentage de requêtes servies pour chaque type de compression. @@ -88,21 +88,21 @@ Environ 38 % des réponses HTTP sont fournies avec de la compression de tex285,158,644 | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
gzip | +gzip |
29,66 % | 30,95 % | 122,789,094 | 143,549,122 | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
br | +br |
7.43 % | 7.55 % | 30,750,681 | 35,012,368 | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
deflate | +deflate |
0,02 % | 0,02 % | 68,802 | @@ -116,28 +116,28 @@ Environ 38 % des réponses HTTP sont fournies avec de la compression de tex68,352 | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
identité | +identity |
0,000709 % | 0,000563 % | 2,935 | 2,611 | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
x-gzip | +x-gzip |
0,000193 % | 0,000179 % | 800 | 829 | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
compress | +compress |
0,000008 % | 0,000007 % | 33 | 32 | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
x-compress | +x-compress |
0,000002 % | 0,000006 % | 8 | @@ -148,7 +148,7 @@ Environ 38 % des réponses HTTP sont fournies avec de la compression de tex
58.26 % | ||||||||||||||||||||||
gzip | +gzip |
29.33 % | 30.20 % | 30.87 % | 31.22 % | |||||||||||||||||
br | +br |
4.41 % | 10.49 % | 4.56 % | 10.49 % | |||||||||||||||||
deflate | +deflate |
0.02 % | 0.01 % | 0.02 % | @@ -363,8 +363,8 @@ Lighthouse indique également combien d’octets pourraient être économisés e ## Conclusion -La compression HTTP est une fonctionnalité très utilisée et très précieuse pour réduire la taille des contenus web. La compression gzip et brotli sont les deux algorithmes les plus utilisés, et la quantité de contenu compressé varie selon le type de contenu. Des outils comme Lighthouse peuvent aider à découvrir des solutions pour comprimer le contenu. +La compression HTTP est une fonctionnalité très utilisée et très précieuse pour réduire la taille des contenus web. La compression Gzip et Brotli sont les deux algorithmes les plus utilisés, et la quantité de contenu compressé varie selon le type de contenu. Des outils comme Lighthouse peuvent aider à découvrir des solutions pour comprimer le contenu. Bien que de nombreux sites fassent bon usage de la compression HTTP, il y a encore des possibilités d’amélioration, en particulier pour le format `text/html` sur lequel le web est construit ! De même, les formats de texte moins bien compris comme `font/ttf`, `application/json`, `text/xml`, `text/plain`, `image/svg+xml`, et `image/x-icon` peuvent nécessiter une configuration supplémentaire qui manque à de nombreux sites web. -Au minimum, les sites web devraient utiliser la compression gzip pour toutes les ressources textuelles, car elle est largement prise en charge, facile à mettre en œuvre et a un faible coût de traitement. Des économies supplémentaires peuvent être réalisées grâce à la compression brotli, bien que les niveaux de compression doivent être choisis avec soin en fonction de la possibilité de précompression d’une ressource. +Au minimum, les sites web devraient utiliser la compression Gzip pour toutes les ressources textuelles, car elle est largement prise en charge, facile à mettre en œuvre et a un faible coût de traitement. Des économies supplémentaires peuvent être réalisées grâce à la compression Brotli, bien que les niveaux de compression doivent être choisis avec soin en fonction de la possibilité de précompression d’une ressource. diff --git a/src/content/fr/2019/javascript.md b/src/content/fr/2019/javascript.md index 2cc3fa16f78..1a5dd1e589f 100644 --- a/src/content/fr/2019/javascript.md +++ b/src/content/fr/2019/javascript.md @@ -146,8 +146,8 @@ Dans le contexte des interactions entre navigateur et serveur, la compression de Il existe de nombreux algorithmes de compression de texte, mais seuls deux sont principalement utilisés pour la compression (et la décompression) des requêtes sur le réseau HTTP : -- [Gzip](https://www.gzip.org/) (gzip) : le format de compression le plus utilisé pour les interactions entre serveurs et clients ; -- [Brotli](https://github.com/google/brotli) (br) : un algorithme de compression plus récent visant à améliorer encore les taux de compression. [90 % des navigateurs](https://caniuse.com/#feat=brotli) supportent la compression Brotli. +- [Gzip](https://www.gzip.org/) (`gzip`) : le format de compression le plus utilisé pour les interactions entre serveurs et clients ; +- [Brotli](https://github.com/google/brotli) (`br`) : un algorithme de compression plus récent visant à améliorer encore les taux de compression. [90 % des navigateurs](https://caniuse.com/#feat=brotli) supportent la compression Brotli. Les scripts compressés devront toujours être décompressés par le navigateur une fois transférés. Cela signifie que son contenu reste le même et que les temps d’exécution ne sont pas du tout optimisés. 
Cependant, la compression des ressources améliorera toujours leur temps de téléchargement, qui est également l’une des étapes les plus coûteuses du traitement JavaScript. S’assurer que les fichiers JavaScript sont correctement compressés peut constituer un des principaux facteurs d’amélioration des performances pour un site web. @@ -155,8 +155,8 @@ Combien de sites compressent leurs ressources JavaScript ? {{ figure_markup( image="fig10.png", - caption="Pourcentage de sites compressant des ressources JavaScript avec gzip ou brotli.", - description="Diagramme à barres montrant que 67 % / 65 % des ressources JavaScript sont compressées avec gzip sur les ordinateurs de bureau et les mobiles respectivement, et 15 % / 14 % sont compressées en utilisant Brotli.", + caption="Pourcentage de sites compressant des ressources JavaScript avec Gzip ou Brotli.", + description="Diagramme à barres montrant que 67 % / 65 % des ressources JavaScript sont compressées avec Gzip sur les ordinateurs de bureau et les mobiles respectivement, et 15 % / 14 % sont compressées en utilisant Brotli.", chart_url="https://docs.google.com/spreadsheets/d/e/2PACX-1vTpzDb9HGbdVvin6YPTOmw11qBVGGysltxmH545fUfnqIThAq878F_b-KxUo65IuXaeFVSnlmJ5K1Dm/pubchart?oid=241928028&format=interactive" ) }} diff --git a/src/content/ja/2019/caching.md b/src/content/ja/2019/caching.md index a211bd065ec..abd1605ffbd 100644 --- a/src/content/ja/2019/caching.md +++ b/src/content/ja/2019/caching.md @@ -413,7 +413,7 @@ HTTP/1.1[仕様](https://tools.ietf.org/html/rfc7234#section-5.2.1)には、`Cac < Last-Modified: Sun, 25 Aug 2019 16:00:30 GMT < Cache-Control: public, max-age=43200 < Expires: Mon, 14 Oct 2019 07:36:57 GMT -< ETag: "1566748830.0-3052-3932359948" +< ETag: "1566748830.0-3052-3932359948" ``` 全体的に、Webで提供されるリソースの59%のキャッシュTTLは、コンテンツの年齢に比べて短すぎます。さらに、TTLと経過時間のデルタの中央値は25日です。 @@ -542,7 +542,7 @@ HTTP/1.1[仕様](https://tools.ietf.org/html/rfc7234#section-5.2.1)には、`Cac ## ヘッダーを変更 -キャッシングで最も重要な手順の1つは、要求されているリソースがキャッシュされているかどうかを判断することです。これは単純に見えるかもしれませんが、多くの場合、URLだけではこれを判断するには不十分です。たとえば同じURLのリクエストは、使用する[圧縮](./compression)(gzip、brotliなど)が異なる場合や、モバイルの訪問者に合わせて変更および調整できます。 +キャッシングで最も重要な手順の1つは、要求されているリソースがキャッシュされているかどうかを判断することです。これは単純に見えるかもしれませんが、多くの場合、URLだけではこれを判断するには不十分です。たとえば同じURLのリクエストは、使用する[圧縮](./compression)(Gzip、Brotliなど)が異なる場合や、モバイルの訪問者に合わせて変更および調整できます。 この問題を解決するために、クライアントはキャッシュされた各リソースに一意の識別子(キャッシュキー)を与えます。デフォルトでは、このキャッシュキーは単にリソースのURLですが、開発者はVaryヘッダーを使用して他の要素(圧縮方法など)を追加できます。 diff --git a/src/content/ja/2019/cdn.md b/src/content/ja/2019/cdn.md index b65f98cbd2d..e8a753e8949 100644 --- a/src/content/ja/2019/cdn.md +++ b/src/content/ja/2019/cdn.md @@ -1057,7 +1057,7 @@ TLSおよびRTTのパフォーマンスにCDNを使用することに加えて 一般にCDNの使用は、TLS1.0のような非常に古くて侵害されたTLSバージョンの使用率が高いoriginホストサービスと比較して、強力な暗号およびTLSバージョンの迅速な採用と高い相関があります。 -285,158,644 | |||||||||||||||||
gzip | +gzip |
29.66% | 30.95% | 122,789,094 | 143,549,122 | |||||||||||||||||
br | +br |
7.43% | 7.55% | 30,750,681 | 35,012,368 | |||||||||||||||||
deflate | +deflate |
0.02% | 0.02% | 68,802 | @@ -116,28 +116,28 @@ HTTPレスポンスの約38%はテキストベースの圧縮で配信され68,352 | |||||||||||||||||
identity | +identity |
0.000709% | 0.000563% | 2,935 | 2,611 | |||||||||||||||||
x-gzip | +x-gzip |
0.000193% | 0.000179% | 800 | 829 | |||||||||||||||||
compress | +compress |
0.000008% | 0.000007% | 33 | 32 | |||||||||||||||||
x-compress | +x-compress |
0.000002% | 0.000006% | 8 | @@ -148,7 +148,7 @@ HTTPレスポンスの約38%はテキストベースの圧縮で配信され
58.26% | |||||
gzip | +gzip |
29.33% | 30.20% | 30.87% | 31.22% |
br | +br |
4.41% | 10.49% | 4.56% | 10.49% |
deflate | +deflate |
0.02% | 0.01% | 0.02% | @@ -363,8 +363,8 @@ Lighthouseは、テキストベースの圧縮を有効にすることで、保 ## 結論 -HTTP圧縮は、Webコンテンツのサイズを削減するために広く使用されている非常に貴重な機能です。 gzipとbrotliの両方の圧縮が使用される主要なアルゴリズムであり、圧縮されたコンテンツの量はコンテンツの種類によって異なります。 Lighthouseなどのツールは、コンテンツを圧縮する機会を発見するのに役立ちます。 +HTTP圧縮は、Webコンテンツのサイズを削減するために広く使用されている非常に貴重な機能です。 GzipとBrotliの両方の圧縮が使用される主要なアルゴリズムであり、圧縮されたコンテンツの量はコンテンツの種類によって異なります。 Lighthouseなどのツールは、コンテンツを圧縮する機会を発見するのに役立ちます。 多くのサイトがHTTP圧縮をうまく利用していますが、特にWebが構築されている`text/html`形式については、まだ改善の余地があります! 同様に、`font/ttf`、`application/json`、`text/xml`、`text/plain`、`image/svg+xml`、`image/x-icon`のようなあまり理解されていないテキスト形式は、多くのWebサイトで見落とされる余分な構成を取る場合があります。 -Webサイトは広くサポートされており、簡単に実装で処理のオーバーヘッドが低いため、少なくともすべてのテキストベースのリソースにgzip圧縮を使用する必要があります。 brotli圧縮を使用するとさらに節約できますが、リソースを事前に圧縮できるかどうかに基づいて圧縮レベルを慎重に選択する必要があります。 +Webサイトは広くサポートされており、簡単に実装で処理のオーバーヘッドが低いため、少なくともすべてのテキストベースのリソースにGzip圧縮を使用する必要があります。 Brotli圧縮を使用するとさらに節約できますが、リソースを事前に圧縮できるかどうかに基づいて圧縮レベルを慎重に選択する必要があります。 diff --git a/src/content/ja/2019/http2.md b/src/content/ja/2019/http2.md index 0d3c78f96d6..73bcafa4cb7 100644 --- a/src/content/ja/2019/http2.md +++ b/src/content/ja/2019/http2.md @@ -34,7 +34,7 @@ HTTP/2は、ほぼ20年ぶりになるWebのメイン送信プロトコルの初 特に暗号化を設定するための追加の手順を必要とするHTTPSを使用する場合、TCP接続は設定と完全な効率を得るのに時間とリソースを要するため、それ自体に問題が生じます。 HTTP/1.1はこれを幾分改善し、後続のリクエストでTCP接続を再利用できるようにしましたが、それでも並列化の問題は解決しませんでした。 -HTTPはテキストベースですが、実際、少なくとも生の形式でテキストを転送するために使用されることはほとんどありませんでした。 HTTPヘッダーがテキストのままであることは事実でしたが、ペイロード自体しばしばそうではありませんでした。 [HTML](./markup)、[JS](./javascript)、[CSS](./css)などのテキストファイルは通常、gzip、brotliなどを使用してバイナリ形式に転送するため[圧縮](./compression)されます。[画像や動画](./media) などの非テキストファイルは、独自の形式で提供されます。その後、[セキュリティ](./security)上の理由からメッセージ全体を暗号化するために、HTTPメッセージ全体がHTTPSでラップされることがよくあります。 +HTTPはテキストベースですが、実際、少なくとも生の形式でテキストを転送するために使用されることはほとんどありませんでした。 HTTPヘッダーがテキストのままであることは事実でしたが、ペイロード自体しばしばそうではありませんでした。 [HTML](./markup)、[JS](./javascript)、[CSS](./css)などのテキストファイルは通常、Gzip、Brotliなどを使用してバイナリ形式に転送するため[圧縮](./compression)されます。[画像や動画](./media) などの非テキストファイルは、独自の形式で提供されます。その後、[セキュリティ](./security)上の理由からメッセージ全体を暗号化するために、HTTPメッセージ全体がHTTPSでラップされることがよくあります。 そのため、Webは基本的に長い間テキストベースの転送から移行していましたが、HTTPは違いました。この停滞の1つの理由は、HTTPのようなユビキタスプロトコルに重大な変更を導入することが非常に困難だったためです(以前努力しましたが、失敗しました)。多くのルーター、ファイアウォール、およびその他のミドルボックスはHTTPを理解しており、HTTPへの大きな変更に対して過剰に反応します。それらをすべてアップグレードして新しいバージョンをサポートすることは、単に不可能でした。 diff --git a/src/content/ja/2019/javascript.md b/src/content/ja/2019/javascript.md index d8d46b92a64..5290073ba70 100644 --- a/src/content/ja/2019/javascript.md +++ b/src/content/ja/2019/javascript.md @@ -146,8 +146,8 @@ Webページで使用されているJavaScriptの量を分析しようとする テキスト圧縮アルゴリズムは複数ありますが、HTTPネットワークリクエストの圧縮(および解凍)に使われることが多いのはこの2つだけです。 -- [Gzip](https://www.gzip.org/) (gzip): サーバーとクライアントの相互作用のために最も広く使われている圧縮フォーマット。 -- [Brotli](https://github.com/google/brotli) (br): 圧縮率のさらなる向上を目指した新しい圧縮アルゴリズム。[90%のブラウザ](https://caniuse.com/#feat=brotli)がBrotliエンコーディングをサポートしています。 +- [Gzip](https://www.gzip.org/) (`gzip`): サーバーとクライアントの相互作用のために最も広く使われている圧縮フォーマット。 +- [Brotli](https://github.com/google/brotli) (`br`): 圧縮率のさらなる向上を目指した新しい圧縮アルゴリズム。[90%のブラウザ](https://caniuse.com/#feat=brotli)がBrotliエンコーディングをサポートしています。 圧縮されたスクリプトは、一度転送されるとブラウザによって常に解凍される必要があります。これは、コンテンツの内容が変わらないことを意味し、実行時間が最適化されないことを意味します。しかし、リソース圧縮は常にダウンロード時間を改善しますが、これはJavaScriptの処理で最もコストのかかる段階の1つでもあります。JavaScriptファイルが正しく圧縮されていることを確認することは、サイトのパフォーマンスを向上させるための最も重要な要因の1つとなります。 @@ -155,8 +155,8 @@ JavaScriptのリソースを圧縮しているサイトはどれくらいある {{ figure_markup( image="fig10.png", - caption="JavaScript リソースをgzipまたはbrotliで圧縮しているサイトの割合。", - 
description="バーチャートを見ると、デスクトップとモバイルでそれぞれJavaScriptリソースの67%/65%がgzipで圧縮されており、15%/14%がBrotliで圧縮されていることがわかります。", + caption="JavaScript リソースをGzipまたはBrotliで圧縮しているサイトの割合。", + description="バーチャートを見ると、デスクトップとモバイルでそれぞれJavaScriptリソースの67%/65%がGzipで圧縮されており、15%/14%がBrotliで圧縮されていることがわかります。", chart_url="https://docs.google.com/spreadsheets/d/e/2PACX-1vTpzDb9HGbdVvin6YPTOmw11qBVGGysltxmH545fUfnqIThAq878F_b-KxUo65IuXaeFVSnlmJ5K1Dm/pubchart?oid=241928028&format=interactive" ) }} diff --git a/src/content/pt/2019/http2.md b/src/content/pt/2019/http2.md index d48f7478717..59ae0e1fcbc 100644 --- a/src/content/pt/2019/http2.md +++ b/src/content/pt/2019/http2.md @@ -34,7 +34,7 @@ O protocolo parecia simples, mas também apresentava limitações. Como o HTTP e Isso por si só já trás seus próprios problemas considerando que as conexões TCP demandam tempo e recursos para serem estabelecidas e obter eficiência total, especialmente ao usar HTTPS, que requer etapas adicionais para configurar a criptografia. O HTTP/1.1 melhorou isso em alguma medida, permitindo a reutilização de conexões TCP para requisições subsequentes, mas ainda não resolveu a dificuldade em paralelização. -Apesar do HTTP ser baseado em texto, a realidade é que ele raramente era usado para transportar texto, ao menos em seu formato puro. Embora fosse verdade que os cabeçalhos ainda eram texto, os payloads em si frequentemente não eram. Arquivos de texto como [HTML](./markup), [JS](./javascript) e [CSS](./css) costumam ser [compactados](./compression) para transporte em formato binário usando gzip, brotli ou similar. Arquivos não textuais como [imagens e vídeos](./media) são distribuidos em seus próprio formatos. A mensagem HTTP completa é então costumeiramente encapsulada em HTTPS para criptografar as mensagens por razões de [segurança](./security). +Apesar do HTTP ser baseado em texto, a realidade é que ele raramente era usado para transportar texto, ao menos em seu formato puro. Embora fosse verdade que os cabeçalhos ainda eram texto, os payloads em si frequentemente não eram. Arquivos de texto como [HTML](./markup), [JS](./javascript) e [CSS](./css) costumam ser [compactados](./compression) para transporte em formato binário usando Gzip, Brotli ou similar. Arquivos não textuais como [imagens e vídeos](./media) são distribuidos em seus próprio formatos. A mensagem HTTP completa é então costumeiramente encapsulada em HTTPS para criptografar as mensagens por razões de [segurança](./security). Portanto, a web tinha basicamente movido de um transporte baseado em texto há muito tempo, mas o HTTP não. Uma razão para essa estagnação foi porque era muito difícil introduzir qualquer alteração significativa em um protocolo tão onipresente como o HTTP (esforços anteriores haviam tentado e falhado). Muitos roteadores, firewalls e outros dispositivos de rede entendiam o HTTP e reagiriam mal a grandes mudanças maiores. Atualizar todos eles para suportar uma nova versão simplesmente não era impossível. 
diff --git a/src/content/pt/2019/javascript.md b/src/content/pt/2019/javascript.md index dbca2a135af..467b9d89e65 100644 --- a/src/content/pt/2019/javascript.md +++ b/src/content/pt/2019/javascript.md @@ -146,8 +146,8 @@ No contexto das interações navegador-servidor, a compactação de recursos se Existem vários algoritmos de compactação de texto, mas apenas dois são usados principalmente para compactação (e descompressão) de solicitações de rede HTTP: -- [Gzip](https://www.gzip.org/) (gzip): O formato de compactação mais amplamente usado para interações de servidor e cliente. -- [Brotli](https://github.com/google/brotli) (br): Um algoritmo de compressão mais recente que visa melhorar ainda mais as taxas de compressão. [90% dos navegadores](https://caniuse.com/#feat=brotli) eles suportam a codificação Brotli. +- [Gzip](https://www.gzip.org/) (`gzip`): O formato de compactação mais amplamente usado para interações de servidor e cliente. +- [Brotli](https://github.com/google/brotli) (`br`): Um algoritmo de compressão mais recente que visa melhorar ainda mais as taxas de compressão. [90% dos navegadores](https://caniuse.com/#feat=brotli) eles suportam a codificação Brotli. Scripts compactados devem sempre ser descompactados pelo navegador depois de transferidos. Isso significa que seu conteúdo permanece o mesmo e os tempos de execução não são otimizados de forma alguma. No entanto, a compactação de recursos sempre melhorará os tempos de download, que também é um dos estágios mais caros do processamento de JavaScript. Garantir que os arquivos JavaScript sejam compactados corretamente pode ser um dos fatores mais importantes para melhorar o desempenho do site. @@ -155,8 +155,8 @@ Quantos sites estão compactando seus recursos JavaScript? {{ figure_markup( image="/static/images/2019/javascript/fig10.png", - caption="Porcentagem de sites que compactam recursos JavaScript com gzip ou brotli.", - description="Gráfico de barras mostrando 67% / 65% dos recursos JavaScript são compactados com gzip em desktops e dispositivos móveis, respectivamente, e 15% / 14% são compactados com Brotli.", + caption="Porcentagem de sites que compactam recursos JavaScript com Gzip ou Brotli.", + description="Gráfico de barras mostrando 67% / 65% dos recursos JavaScript são compactados com Gzip em desktops e dispositivos móveis, respectivamente, e 15% / 14% são compactados com Brotli.", chart_url="https://docs.google.com/spreadsheets/d/e/2PACX-1vTpzDb9HGbdVvin6YPTOmw11qBVGGysltxmH545fUfnqIThAq878F_b-KxUo65IuXaeFVSnlmJ5K1Dm/pubchart?oid=241928028&format=interactive" ) }}