The Hypertext Transfer Protocol (HTTP) is the underlying request-response protocol used by the World Wide Web. The first documented version of HTTP, HTTP/0.9, appeared in 1991. Then, 1996 saw the introduction of HTTP/1.0, quickly followed by HTTP/1.1 in January 1997. Further improvements and updates were released in 1999; this is the version of HTTP most commonly used today.
A big difference between HTTP/1.0 and HTTP/1.1 is that the latter can reuse a single connection to download multiple pieces of page content, making load times much quicker because a new connection does not have to be established for each page resource. The need for ever-quicker content delivery in today's connected, bandwidth-intensive, mobile world, though, means HTTP/1.1 is no longer deemed fast or efficient enough.
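The benefit of connection reuse can be illustrated with a deliberately simplified round-trip model (an assumption for illustration only: one handshake round trip per new TCP connection, and one request/response round trip per resource, ignoring TLS and parallel connections):

```python
def round_trips(resources, persistent):
    """Count network round trips needed to fetch a page's assets
    under a simplified model of HTTP/1.0 vs HTTP/1.1."""
    if persistent:
        # HTTP/1.1 keep-alive: one handshake, then one
        # request/response round trip per resource.
        return 1 + resources
    # HTTP/1.0: a fresh handshake before every single resource.
    return 2 * resources

# A page with 20 assets (stylesheets, scripts, images):
print(round_trips(20, persistent=False))  # 40 round trips without reuse
print(round_trips(20, persistent=True))   # 21 round trips with keep-alive
```

Even in this crude model, connection reuse roughly halves the round trips for a resource-heavy page, which is why HTTP/1.1 was such a marked improvement.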
The Internet Engineering Task Force, which is responsible for developing and promoting voluntary Internet standards, is close to finalizing HTTP/2 as a formal Internet specification. HTTP/2 is primarily focused on improving the time it takes to render a page: it allows servers to send all the different elements of a requested webpage at once, eliminating the serial sets of messages that still have to be sent back and forth with HTTP/1.1. It also allows the server and the browser to compress HTTP header content, reducing the volume of data that needs to be sent and the number of network round trips required to render a page.
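To see why compressing headers pays off, consider how repetitive a typical request's headers are. The sketch below (hypothetical header values; note HTTP/2 uses its own HPACK scheme rather than a general-purpose compressor like zlib) simply demonstrates how much redundancy there is to remove:

```python
import zlib

# A typical, highly repetitive HTTP/1.1 request (hypothetical values).
headers = (
    "GET /style.css HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101\r\n"
    "Accept: text/css,*/*;q=0.1\r\n"
    "Accept-Encoding: gzip, deflate\r\n"
    "Cookie: session=abc123; theme=dark\r\n"
    "\r\n"
).encode()

compressed = zlib.compress(headers)
# The compressed form is substantially smaller than the original.
print(len(headers), len(compressed))
```

Browsers resend near-identical headers with every request for every page asset, so shrinking them reduces total bytes on the wire for each of those requests.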
HTTP/2 is based largely on the SPDY protocol developed by Google, which can reduce the time it takes to deliver a webpage by 50% or more. High-volume sites such as Google, Facebook and Twitter already use SPDY, and Google's ads are also served from SPDY-enabled servers, but the protocol is currently used by only 1% of all websites, according to W3Techs. The Chrome, Safari, Firefox, Opera, Amazon Silk and Internet Explorer browsers already support SPDY.
HTTP/2 is not a ground-up rewrite of the protocol, so it supports the same semantics as HTTP/1.1. This means the code of enterprise Web applications won't need updating to benefit from the new protocol; only client and server software will. Note, however, that servers will field many more requests, as clients can send them more quickly, so caching and load-balancing services may need upgrading to commit more resources to each connection.
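"Same semantics" means a request still carries the same method, host, path and scheme; HTTP/2 just encodes the old request line as pseudo-header fields. The toy function below (a sketch, not a real HTTP/2 encoder) shows the mapping:

```python
def to_http2_headers(method, path, host, scheme="https"):
    """Express an HTTP/1.1 request line and Host header as the
    equivalent HTTP/2 pseudo-header fields. The information carried
    is identical; only the encoding on the wire changes."""
    return [
        (":method", method),      # was the verb in "GET /path HTTP/1.1"
        (":scheme", scheme),      # was implied by the connection type
        (":authority", host),     # was the Host: header
        (":path", path),          # was the path in the request line
    ]

print(to_http2_headers("GET", "/index.html", "www.example.com"))
```

Because nothing in that mapping touches application-level meaning, a Web application that handles methods, paths and headers keeps working unchanged.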
HTTP/2 provides an effective compression algorithm that is tailored to HTTP and avoids many of the security issues of using general-purpose compression algorithms over TLS connections. Some concerns have been raised about a possible distributed denial-of-service attack vector: attackers could find ways to abuse the new method for handling header content if browser and software vendors fail to interpret and implement the protocol correctly. There are, however, implementation risks with any new protocol. Web security gateways may need their rules and filters updated to handle the larger amount of data that will be within the headers received when users on the internal network request content from a website.
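That HTTP-tailored compressor is HPACK, whose core idea is replacing whole, well-known header name/value pairs with small table indices instead of running a general-purpose compressor over the byte stream. The minimal sketch below borrows a few entries (with their real indices) from HPACK's static table, but the lookup logic is a simplification, not a real HPACK codec:

```python
# A handful of entries from HPACK's static table (RFC 7541, Appendix A);
# real HPACK also maintains a dynamic table and Huffman-codes literals.
STATIC_TABLE = {
    (":method", "GET"): 2,
    (":method", "POST"): 3,
    (":path", "/"): 4,
    (":scheme", "https"): 7,
    ("accept-encoding", "gzip, deflate"): 16,
}

def encode(headers):
    """Replace known header pairs with a small index; pass
    unknown pairs through as literals."""
    out = []
    for pair in headers:
        idx = STATIC_TABLE.get(pair)
        out.append(("indexed", idx) if idx else ("literal", pair))
    return out

print(encode([(":method", "GET"),
              (":scheme", "https"),
              ("host", "example.com")]))
```

Because each side only ever emits indices into tables of whole header fields, an attacker cannot probe a secret byte-by-byte the way the CRIME attack did against DEFLATE-compressed headers over TLS.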
Organizations that run highly visible websites should start trialing Google's SPDY module for Apache (mod_spdy) so that they can assess the likely effects of HTTP/2 on their own infrastructure once it's officially formalized later this month. As with any new technology or protocol, IT teams should follow the relevant security forums to stay abreast of any developments, as well as pick up tips on how others are integrating it into their environments.