Google Proposes To Speed Up The Internet With SPDY And TCP Changes

Posted on Jan 25 2012 - 11:16am by Editorial Staff

Google, the search engine giant known for its continuing drive to deliver content faster and more effectively, has proposed a number of changes to the Transmission Control Protocol (TCP). The company is also working to build SPDY, its Web-accelerating technology, into the Internet at large.

Google engineer Yuchung Cheng has listed several proposals for speeding up TCP, the protocol famously created in 1974 by Vint Cerf and Bob Kahn. It has served us well since then, but Google is suggesting a number of improvements, including reducing the number of round trips needed to deliver a Web transfer, cutting the initial timeout from three seconds to one, and implementing TCP Fast Open and Proportional Rate Reduction.

Proposals include:

1. Increase the TCP initial congestion window to 10 (IW10). The amount of data sent at the beginning of a TCP connection is currently 3 packets, implying 3 round trips (RTTs) to deliver even a tiny 15 KB response. Google's experiments indicate that IW10 reduces the network latency of Web transfers by over 10% (a short sketch after this list walks through the round-trip arithmetic).

2. Reduce the initial timeout from 3 seconds to 1 second. An initial retransmission timeout of 3 seconds was appropriate a couple of decades ago, but today’s Internet requires a much smaller value (see the timeout sketch after this list). Google's rationale for this change is documented in detail in its proposal.

3. Use TCP Fast Open (TFO). For 33% of all HTTP requests, the browser must first spend one RTT establishing a TCP connection with the remote peer. Because most HTTP responses fit within the initial congestion window of 10 packets, that handshake round trip effectively doubles the response time. TFO removes this overhead by carrying the HTTP request in the initial TCP SYN packet. Google has demonstrated TFO reducing page load time by 10% on average, and by over 40% in many situations. Its research paper and Internet-Draft address concerns such as dropped packets and DoS attacks when using TFO (a minimal socket-level sketch appears after this list).

4. Use Proportional Rate Reduction for TCP (PRR). Packet losses indicate that the network is congested or otherwise disordered. PRR, a new loss recovery algorithm, retransmits smoothly to recover losses during network congestion, and it recovers faster than the current mechanism because it adjusts the transmission rate according to the degree of losses (a simplified sketch of the per-ACK logic follows this list). PRR is now part of the Linux kernel and is in the process of becoming part of the TCP standard.
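To make the round-trip arithmetic in item 1 concrete, here is a small illustrative Python sketch (not Google's code) that counts how many round trips idealized slow start needs to deliver a response with an initial congestion window of 3 versus 10 segments. The 1,460-byte segment size is an assumed typical MSS.

```python
import math

MSS = 1460  # assumed maximum segment size in bytes (typical for Ethernet paths)

def round_trips(response_bytes, initcwnd):
    """Count RTTs needed to deliver a response under idealized slow start.

    The sender may transmit `cwnd` segments per round trip and doubles
    cwnd each round (no losses, no receive-window limits).
    """
    segments = math.ceil(response_bytes / MSS)
    cwnd, sent, rtts = initcwnd, 0, 0
    while sent < segments:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

for iw in (3, 10):
    print(f"initcwnd={iw:2d}: {round_trips(15 * 1024, iw)} RTTs for a 15 KB response")
# initcwnd= 3: 3 RTTs for a 15 KB response
# initcwnd=10: 2 RTTs for a 15 KB response
```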
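The benefit of the smaller initial timeout in item 2 is easiest to see when the very first packet of a connection is lost. The sketch below is our own illustration, assuming the standard doubling (exponential backoff) of the retransmission timer after each loss.

```python
def stall_time(initial_rto, losses):
    """Total time spent waiting before a packet finally gets through,
    assuming the retransmission timer doubles after every loss."""
    return sum(initial_rto * 2 ** i for i in range(losses))

for rto in (3.0, 1.0):
    print(f"initial timeout {rto:.0f}s: one lost SYN costs {stall_time(rto, 1):.0f}s, "
          f"two lost SYNs cost {stall_time(rto, 2):.0f}s")
# initial timeout 3s: one lost SYN costs 3s, two lost SYNs cost 9s
# initial timeout 1s: one lost SYN costs 1s, two lost SYNs cost 3s
```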
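On Linux kernels that support TCP Fast Open, item 3 maps onto the sockets API: a server opts in with the TCP_FASTOPEN socket option, and a client passes its request to sendto() with the MSG_FASTOPEN flag so the data rides in the SYN. The following minimal Python sketch assumes such a kernel; the numeric fallbacks for the two constants are the usual Linux values, used only when the local Python build does not export them.

```python
import socket

# Fall back to the customary Linux values if this Python build does not
# export the constants (an assumption about a Linux environment with TFO).
MSG_FASTOPEN = getattr(socket, "MSG_FASTOPEN", 0x20000000)
TCP_FASTOPEN = getattr(socket, "TCP_FASTOPEN", 23)

def tfo_server(port=8080, qlen=5):
    """Listen for connections with a Fast Open queue of `qlen` pending SYNs."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.setsockopt(socket.IPPROTO_TCP, TCP_FASTOPEN, qlen)
    srv.bind(("", port))
    srv.listen(16)
    return srv

def tfo_request(host, port, request: bytes) -> bytes:
    """Send `request` in the SYN itself, avoiding the extra handshake RTT."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # sendto() with MSG_FASTOPEN both connects and sends data in one step.
    sock.sendto(request, MSG_FASTOPEN, (host, port))
    return sock.recv(65536)
```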
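Finally, the pacing idea behind item 4 can be sketched in a few lines. The Python class below is a simplified, packet-counted illustration of the per-ACK PRR bookkeeping; the names are ours, and the real Linux implementation tracks considerably more state.

```python
import math

class PRR:
    """Simplified Proportional Rate Reduction state, counted in packets."""

    def __init__(self, ssthresh, flight_size):
        self.ssthresh = ssthresh        # target congestion window after recovery
        self.recover_fs = flight_size   # packets in flight when loss was detected
        self.prr_delivered = 0          # packets delivered to the receiver during recovery
        self.prr_out = 0                # packets (re)transmitted during recovery

    def on_ack(self, delivered, pipe):
        """Return how many packets may be sent in response to this ACK.

        `delivered` is the number of packets newly acknowledged;
        `pipe` is the current estimate of packets still in flight.
        """
        self.prr_delivered += delivered
        if pipe > self.ssthresh:
            # Proportional phase: shrink sending in step with deliveries.
            sndcnt = math.ceil(self.prr_delivered * self.ssthresh
                               / self.recover_fs) - self.prr_out
        else:
            # Slow-start-like phase: grow back toward ssthresh, but no faster
            # than slightly above the rate at which data is being delivered.
            limit = max(self.prr_delivered - self.prr_out, delivered) + 1
            sndcnt = min(self.ssthresh - pipe, limit)
        sndcnt = max(sndcnt, 0)
        self.prr_out += sndcnt  # assume the sender uses its full allowance
        return sndcnt
```

The effect of the proportional branch is that during heavy congestion the amount sent stays in step with the amount delivered, so the congestion window glides down to ssthresh instead of collapsing and slowly rebuilding.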

Combined, these changes could significantly reduce latency while, just as importantly, remaining backwards compatible with current TCP implementations.
