Saturday, 28 March 2020

HTTP/3: Good or bad

HTTP/3 (QUIC):
Before discussing HTTP/3, it helps to understand the problem with HTTP/2. One of the most notable issues is that the single TCP connection of HTTP/2 becomes a bottleneck in a poor-quality network: as network quality degrades and packets are dropped, the single connection slows the whole exchange down, and no further data can be delivered while retransmission is in progress.
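To make that head-of-line blocking concrete, here is a toy Python model (illustrative only, not real protocol code) of four multiplexed packets sharing one connection, with one packet lost: over TCP everything behind the loss stalls, while QUIC only stalls the affected stream.

```python
# Toy model of HTTP/2-over-TCP head-of-line blocking: all streams share one
# ordered TCP byte stream, so one lost packet stalls delivery of every stream
# until it is retransmitted.

packets = [
    ("stream-1", "pkt-1", True),    # (stream, packet, arrived?)
    ("stream-3", "pkt-2", False),   # lost, awaiting retransmission
    ("stream-5", "pkt-3", True),
    ("stream-1", "pkt-4", True),
]

# Over TCP, bytes are delivered strictly in order: nothing behind the lost
# packet reaches the application, even data belonging to unrelated streams.
delivered_tcp = []
for stream, pkt, arrived in packets:
    if not arrived:
        break
    delivered_tcp.append((stream, pkt))

# Over QUIC, loss only stalls the stream it belongs to; the others proceed.
delivered_quic = [(s, p) for s, p, arrived in packets if arrived]

print("TCP delivers :", delivered_tcp)    # only ('stream-1', 'pkt-1')
print("QUIC delivers:", delivered_quic)   # pkt-1, pkt-3 and pkt-4
```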
To solve the underlying issues of HTTP/2, HTTP/3 is based on QUIC. QUIC, originally an acronym for "Quick UDP Internet Connections," was built by Google to address many of the issues intrinsic to the current network protocol stack. It is low-latency by design. The protocol has also been designed to be secure from the start, because there is no cleartext version of it.


QUIC keeps the best features of HTTP/2, such as multiplexing, streams, server push, and header compression, now done with QPACK (a new version of HPACK).
The HPACK algorithm depends on ordered delivery of streams, so it could not be reused for HTTP/3 without modification, since QUIC offers streams that can be delivered out of order. QPACK can be seen as the QUIC-adapted version of HPACK.
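The following toy decoder (a sketch only, not the real HPACK or QPACK wire format) shows the ordering problem: a later header block that references a dynamic-table entry inserted by an earlier block fails if the blocks are decoded out of order, which is exactly what QUIC's independent streams allow.

```python
# Minimal sketch of why HPACK assumes in-order delivery: later header blocks
# reference dynamic-table entries inserted while decoding earlier blocks.

class TinyHpackDecoder:
    def __init__(self):
        self.dynamic_table = []          # newest entry first, like HPACK

    def decode(self, block):
        headers = []
        for instr in block:
            if instr[0] == "insert":     # ("insert", name, value)
                _, name, value = instr
                self.dynamic_table.insert(0, (name, value))
                headers.append((name, value))
            else:                        # ("indexed", index into dynamic table)
                _, index = instr
                headers.append(self.dynamic_table[index])
        return headers

block_a = [("insert", "x-request-id", "abc123")]
block_b = [("indexed", 0)]               # refers to the entry block_a inserted

decoder = TinyHpackDecoder()
print(decoder.decode(block_a))           # in order: insert succeeds
print(decoder.decode(block_b))           # reference resolves correctly

decoder = TinyHpackDecoder()
try:
    decoder.decode(block_b)              # out of order: the reference dangles
except IndexError:
    print("decoding block_b first fails: referenced entry not yet inserted")
```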
QUIC uses UDP as its underlying protocol. UDP is lightweight and connectionless: there is no ordering of messages, no connection tracking, and no error recovery when a packet is lost; it is a small transport layer designed on top of IP. TCP is heavier-weight: it requires a three-packet handshake (SYN, SYN-ACK, ACK), roughly one round trip, to set up a connection before any user data can be sent, and layering TLS on top adds one or two further round trips depending on the TLS version. TCP also handles reliability and congestion control itself. QUIC implements its own reliability and congestion control on top of UDP and folds the cryptographic handshake into the transport handshake, so a new connection needs only a single round trip (1-RTT) and a resumed connection can send data with no waiting at all (0-RTT).
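A quick back-of-the-envelope comparison, assuming a 100 ms round trip and the commonly cited handshake costs (TCP: 1 RTT; TLS 1.2: +2 RTT; TLS 1.3: +1 RTT; QUIC: 1 RTT for a new connection, 0 RTT when resumed), shows how much of the difference is simply waiting for handshakes:

```python
# Back-of-the-envelope handshake latency comparison (illustrative only).
RTT_MS = 100   # assumed network round-trip time

handshake_rtts = {
    "TCP + TLS 1.2": 1 + 2,              # TCP handshake, then two TLS round trips
    "TCP + TLS 1.3": 1 + 1,              # TCP handshake, then one TLS round trip
    "QUIC (new connection, 1-RTT)": 1,   # transport + crypto handshakes combined
    "QUIC (resumed, 0-RTT)": 0,          # data rides in the very first flight
}

for name, rtts in handshake_rtts.items():
    print(f"{name}: {rtts} RTT -> {rtts * RTT_MS} ms before the request can be sent")
```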




Additionally, the fact that QUIC is designed to be implemented in user space is notable. Unlike protocols built into the OS kernel or firmware, QUIC implementations can iterate quickly and effectively without having to deal with the entrenchment of each protocol version.



Why QUIC

QUIC is a name, not an acronym. It is pronounced exactly like the English word "quick".
QUIC is in many ways a new reliable and secure transport protocol that is suitable for a protocol like HTTP and that addresses some of the known shortcomings of running HTTP/2 over TCP and TLS. It is the logical next step in the evolution of web transport.
QUIC is not limited to transporting HTTP, though. The desire to deliver the web, and data in general, faster to end users is probably the biggest motivation that initially triggered the creation of this new transport protocol.



HTTP/3 Prioritization

One of the HTTP/3 stream frames is called PRIORITY. It is used to set priority and dependency on a stream in a way similar to how it works in HTTP/2.
The frame can set a specific stream to depend on another specific stream and it can set a "weight" on a given stream.
A dependent stream should only be allocated resources if either all of the streams that it depends on are closed or it is not possible to make progress on them.
A stream weight is a value between 1 and 256, and streams with the same parent should be allocated resources proportionally based on their weights.
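As a rough sketch of what weight-proportional allocation means in practice (illustrative Python, not any particular HTTP/3 implementation), dividing available bandwidth among sibling streams by their weights looks like this:

```python
# Weight-proportional sharing among sibling streams, as described above:
# streams with the same parent get resources in proportion to their weights.

def allocate(bandwidth, weights):
    """Split `bandwidth` across streams proportionally to their weights (1-256)."""
    total = sum(weights.values())
    return {stream: bandwidth * w / total for stream, w in weights.items()}

# Example: three sibling streams with weights 256, 128 and 32.
shares = allocate(1000, {"stream-4": 256, "stream-8": 128, "stream-12": 32})
print(shares)   # stream-4 gets ~615 units, stream-8 ~308, stream-12 ~77
```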
Spinning a bit
Both endpoints, the client and the server, maintain a spin value, 0 or 1, for each QUIC connection, and each sets the spin bit on the packets it sends for that connection to that value.
Both sides then send packets with the spin bit set to the same value for as long as one round trip lasts, and then toggle it. The effect is a pulse of ones and zeroes in that bit field that observers can monitor.
This measurement only works when the sender is neither application-limited nor flow-control-limited, and packet reordering in the network can also make the data noisy.
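A toy simulation (assuming a constant RTT, no loss, and no reordering) shows the idea: the sender flips the spin bit once per round trip, and a passive observer timing the gaps between bit transitions recovers roughly one RTT.

```python
# Toy spin-bit simulation: one bit flip per round trip lets an on-path
# observer estimate the RTT purely from packet timestamps.

RTT_MS = 100           # assumed, constant round-trip time
PACKET_EVERY_MS = 10   # sender emits a packet every 10 ms

def packet_stream(total_ms):
    """Yield (send_time_ms, spin_bit); the sender flips the bit once per RTT."""
    spin = 0
    for t in range(0, total_ms, PACKET_EVERY_MS):
        if t > 0 and t % RTT_MS == 0:
            spin ^= 1                      # toggle once per round trip
        yield t, spin

def estimate_rtt(stream):
    """A passive observer times the gaps between spin-bit transitions."""
    edges, last = [], None
    for t, bit in stream:
        if last is not None and bit != last:
            edges.append(t)
        last = bit
    gaps = [b - a for a, b in zip(edges, edges[1:])]
    return sum(gaps) / len(gaps) if gaps else None

print(f"observer's RTT estimate: {estimate_rtt(packet_stream(2000))} ms")
```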


To reduce the time required to establish a new connection, a client that has previously connected to a server may cache certain parameters from that connection and subsequently set up a 0-RTT connection with the server. This allows the client to send data immediately, without waiting for a handshake to complete.
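A conceptual sketch of that flow, using an entirely hypothetical client API (the names below are made up and do not correspond to any real QUIC library): the first visit does a full handshake and caches the resumption state, and the second visit sends the request in its very first flight.

```python
# Hypothetical pseudo-client, for illustration of 0-RTT resumption only.

cached_sessions = {}   # server name -> resumption state from an earlier visit

def connect_and_send(server, request):
    session = cached_sessions.get(server)
    if session is None:
        # First visit: full 1-RTT handshake; remember the server's ticket.
        cached_sessions[server] = {"ticket": "opaque-resumption-ticket"}
        print(f"{server}: 1-RTT handshake completed, then '{request}' sent")
    else:
        # Repeat visit: the cached ticket lets the request ride in the first flight.
        print(f"{server}: 0-RTT resume, '{request}' sent without waiting")

connect_and_send("example.com", "GET /")   # full handshake
connect_and_send("example.com", "GET /")   # 0-RTT resumption
```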




Src:
