In 2015, the Internet Engineering Task Force (IETF) released HTTP/2, the second major version of HTTP, the protocol that underpins most of the web. It was derived from the earlier experimental SPDY protocol.
SPDY is a deprecated open-specification networking protocol, developed primarily at Google, for transporting web content. SPDY manipulated HTTP traffic with the particular goals of reducing web page load latency and improving web security.
- Request multiplexing
HTTP/2 can send multiple requests for data in parallel over a single TCP connection.
This eliminates extra round trips (RTT), making your website load faster without any further optimization, and makes workarounds such as domain sharding unnecessary.
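To make the idea concrete, here is a toy sketch (not a real HTTP/2 implementation) of how frames from several streams can be interleaved on one connection. The stream IDs and frame labels are illustrative only:

```python
from itertools import zip_longest

# Toy model: each request is a stream with its own ID; the connection
# interleaves frames from all streams instead of serializing requests.
streams = {
    1: ["HEADERS", "DATA(1/2)", "DATA(2/2)"],
    3: ["HEADERS", "DATA(1/1)"],
    5: ["HEADERS", "DATA(1/3)", "DATA(2/3)", "DATA(3/3)"],
}

# Round-robin interleaving: frames from different streams share the wire,
# so one slow response does not block the others (no head-of-line blocking
# at the HTTP layer).
wire = []
for frames in zip_longest(*streams.values()):
    for stream_id, frame in zip(streams.keys(), frames):
        if frame is not None:
            wire.append((stream_id, frame))

print(wire[:3])  # frames from streams 1, 3 and 5 alternate on one connection
```

In HTTP/1.1 the three exchanges above would need either three connections or strict one-after-another ordering; here they share a single connection.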
- Header compression
HTTP/2 compresses the large number of redundant header frames. It uses the HPACK specification as a simple and secure approach to header compression: both client and server maintain an indexed table of headers used in previous requests, so a repeated header can be sent as a short reference instead of the full text.
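The dynamic-table idea behind HPACK can be sketched as follows. This is a simplified illustration of the indexing concept only, not the real HPACK wire format (which also uses a static table and Huffman coding):

```python
# Toy sketch of HPACK's dynamic-table idea: both sides keep an indexed
# table of headers seen earlier, so a repeated header is transmitted as
# a small index instead of the full name/value pair.
class HeaderTable:
    def __init__(self):
        self.table = []  # index -> (name, value), kept in sync on both ends

    def encode(self, headers):
        out = []
        for pair in headers:
            if pair in self.table:
                out.append(("index", self.table.index(pair)))  # tiny reference
            else:
                self.table.append(pair)
                out.append(("literal", pair))  # full header, added to table
        return out

enc = HeaderTable()
first = enc.encode([(":method", "GET"), ("user-agent", "demo/1.0")])
second = enc.encode([(":method", "GET"), ("user-agent", "demo/1.0")])
print(first)   # literals on the first request
print(second)  # small indices on the repeated request
```

Since headers like cookies and user-agent strings repeat on nearly every request, this indexing is where most of the savings come from.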
- Binary protocol
HTTP/1.x processed text commands to complete request-response cycles. HTTP/2 uses binary framing (1s and 0s) to execute the same tasks. This eases complications with framing and simplifies the implementation of commands that were previously intermixed confusingly because they contained text and optional spaces.
Browsers that implement HTTP/2 convert the same text commands into binary before transmitting them over the network.
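As a concrete example of that binary framing, every HTTP/2 frame begins with a fixed 9-octet header defined in RFC 7540: a 24-bit payload length, an 8-bit type, an 8-bit flags field, and a 31-bit stream identifier (the high bit is reserved). A minimal encoder:

```python
import struct

# HTTP/2 frame-type and flag constants from RFC 7540.
HEADERS_FRAME = 0x1
END_HEADERS = 0x4

def frame_header(length, ftype, flags, stream_id):
    """Build the fixed 9-byte HTTP/2 frame header (RFC 7540 section 4.1)."""
    # 24-bit length (take the low 3 bytes of a big-endian 32-bit int),
    # then 1-byte type, 1-byte flags, and a 31-bit stream identifier.
    return struct.pack(">I", length)[1:] + struct.pack(
        ">BBI", ftype, flags, stream_id & 0x7FFFFFFF
    )

hdr = frame_header(16, HEADERS_FRAME, END_HEADERS, 1)
print(len(hdr), hdr.hex())  # always exactly 9 bytes
```

Because every frame starts with this fixed-size header, a parser always knows exactly where the payload begins and ends, with no ambiguity from whitespace or line endings.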
- HTTP/2 Server Push
This capability allows the server to send additional cacheable resources that the client has not requested but is anticipated to need. For example, if the client requests resource X and the server knows that resource Y is referenced by the requested file, it can choose to push Y along with X instead of waiting for a separate client request.
Benefits:
- The client saves pushed resources in the cache.
- The client can reuse these cached resources across different pages.
- The server can multiplex pushed resources along with the originally requested information within the same TCP connection.
- The server can prioritize pushed resources — a key performance differentiator of HTTP/2 over HTTP/1.x.
- The client can decline pushed resources to keep its cache effective, or disable Server Push entirely.
- The client can also limit the number of pushed streams multiplexed concurrently.
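The push decision described above can be sketched in a few lines. This is a toy model, not a real HTTP/2 server; the paths and the `PUSH_MAP` lookup are hypothetical stand-ins for whatever logic a server uses to associate a page with its sub-resources:

```python
# Toy sketch of the server-push decision: the server knows which
# sub-resources a page references and offers them with the response;
# the client declines any it already holds in its cache.
PUSH_MAP = {"/index.html": ["/style.css", "/app.js"]}  # hypothetical resources

def handle_request(path, client_cache):
    candidates = PUSH_MAP.get(path, [])
    pushed = [r for r in candidates if r not in client_cache]
    declined = [r for r in candidates if r in client_cache]
    return path, pushed, declined

# First visit: nothing cached, so both sub-resources are pushed.
print(handle_request("/index.html", client_cache=set()))
# Repeat visit: the client declines the resource it already cached.
print(handle_request("/index.html", client_cache={"/style.css"}))
```

On the first visit the client receives X, Y and Z in one exchange; on a repeat visit it refuses pushes for anything still in its cache, which is exactly the "client can decline" behavior listed above.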