The test is a lie. While I don't doubt the improvements in HTTP/2, this test uses "Connection: close" on the HTTP/1.1 side, which means each tile needs a fresh TCP connect and TLS handshake. This is not representative of the real world.
In HTTP/2 the "Connection: close" header is meaningless, and all the tiles come over the same connection.
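To make the cost concrete, here's a minimal sketch using a throwaway local HTTP/1.1 server (not the CDN77 host, and plain TCP only, so it only counts connects, not the TLS handshakes that make the real demo even worse): with "Connection: close" every tile fetch needs a new connection, while keep-alive reuses one.

```python
# Sketch: count TCP connects for N tile fetches with and without
# "Connection: close", against a local toy HTTP/1.1 server.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class TileHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive is the default in 1.1
    def do_GET(self):
        body = b"tile"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), TileHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def fetch_tiles(n, close):
    """Fetch n tiles; return how many TCP connects were needed."""
    connects, conn = 0, None
    for _ in range(n):
        if conn is None:
            conn = http.client.HTTPConnection("127.0.0.1", port)
            connects += 1
        hdrs = {"Connection": "close"} if close else {}
        conn.request("GET", "/tile.png", headers=hdrs)
        resp = conn.getresponse()
        resp.read()
        if close or resp.will_close:  # connection can't be reused
            conn.close()
            conn = None
    return connects

with_close = fetch_tiles(10, close=True)       # one connect per tile
with_keepalive = fetch_tiles(10, close=False)  # one connect total
print(with_close, with_keepalive)
server.shutdown()
```

Add one TLS round trip or two per connect on top of that and you get the gap the wget timings below show.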
the other server is 2x faster even without http/2
My wget implementation does not support HTTP/2.
HTTP Server:
$ time wget https://1153288396.rsc.cdn77.org/http2/tiles_final/tile_18.png
[...]
real 1.038 user 0.038 sys 0.007 pcpu 5.37
HTTP2 Server:
$ time wget https://1906714720.rsc.cdn77.org/http2/tiles_final/tile_18.png
[...]
real 0.539 user 0.045 sys 0.009 pcpu 10.01
Of course that's just latency... but this is hardly a scientific demonstration. We should also consider that this is cherry-picking the worst trait of HTTP/1.1: that it's latency sensitive.
A demo with a real webpage of large assets would be a better example.
This is a copy of Akamai's HTTP/2 demo: http://http2.akamai.com
This test only demonstrates that CDN77 can't serve HTTP/1.1 properly.
This demo could possibly be even faster if it used HTTP/2.0 Server Push.
Btw, note that if you're looking into supporting HTTP/2.0 on your own then with nginx there's still some waiting left: https://www.nginx.com/blog/early-alpha-patch-http2/ And there's no plan to support server push with the first production release. So NGINX users will have to keep using SPDY.
AFAIK the latest plan with SPDY is to remove it from Chrome browser in early 2016 so nginx has to make sure to deliver before that...
Ran this a couple of times in Firefox 40/Linux x86_64, HTTP/1.1 was always faster by 10-20% (~1s vs. ~1.15s).
Chrome 43 on Linux from Germany here.
HTTP/2 routinely outperforms HTTP/1.1 by several seconds for me. HTTP/1.1 is somewhat stable at 7-8 seconds, and HTTP/2 varies from 4 to 11 seconds (though generally closer to 11 seconds than to 4).
The Akamai demo works fine: https://http2.akamai.com/demo (though HTTP/2 is only ahead by 20% or so)
It appears they've turned on keep-alive in the HTTP/1.1 test now. HTTP/1.1 timings improved by a lot... still obviously slower than HTTP/2.
6.41s HTTP/1.1 vs 2.51s HTTP/2 on FF42. Very nice! (Although when HTTP2 is going the FPS drops quite a bit.)
Can someone explain what exactly HTTP2 is doing differently to achieve such an improvement?
12.75s vs 1.40s. This is quite impressive - looking forward to a faster Web, slowly migrating to HTTP/2.
Any clue whether Amazon's CDN service offers / will offer HTTP/2 support too?
Recently .NET 4.6 has allowed Windows Server to run some HTTP/2 for the Edge browser; it greatly improved the load speed and WebSocket calls of our app.
My results show 1.3s for HTTP/1.1 and 3.0 seconds for HTTP/2 using Chrome on OS X. So, this demo wasn't very impressive for me.
Ignoring HTTP/2, I'm finding it very interesting that on my 11" MacBookAir6,1 running OS X 10.9.5, Safari 7.0.6 is much faster than Chrome 44.0.2403.155 at the HTTP/1.1 test. Safari performs the test in almost exactly 3.00 seconds, while Chrome never comes in under 3.15 and often takes as high as 3.45.
Did anyone else notice the JS in the iframe footer? I'm just curious why it's obfuscated and what its purpose is (see source of https://1153288396.rsc.cdn77.org/http2/http1.html)
Hm, HTTP/1.1 at 15.5s, HTTP/2 at 23.72s
Yeah, I "can see the difference clearly", but I don't think it is the kind of difference they expected or intended.
Edit: Firefox 40 on Windows 7 at work. Will try at home as well.
Oddly enough, the Akamai demo someone else posted gives me 18.47s for HTTP/1.1 and 2.24s for HTTP/2.
The test server does not actually make sure the h2 test is using h2. If you are using a client that does not have h2 support, then you are just hitting the fallback code on the server and testing h1 against h1. An iPhone is a good example :) (but it may be using SPDY instead... lots of variables)
The speed test links at the bottom for single files don't make any sense. A single file download wouldn't benefit from the upgraded protocol, and from very rough testing on my 100 Mbit line it seems like the HTTP/1 links are artificially slowed down.
Cheaper and faster than AWS Cloudfront with free custom SSL. So what is the catch?
One issue is that our data is on S3, and I believe any outgoing S3 traffic to this CDN would be slow and cost money, whereas S3 to CloudFront is likely prioritized and free.
It is interesting that Safari & Firefox beat Chrome in the HTTP/1.1 test for me. However, the HTTP/2 test is then twice as fast as Safari. Maybe we can stop smashing all those JavaScript files together.
Here's a fun overview video explanation of HTTP/2 from the other day..
Perfect. http://puu.sh/jIiG4/731c98e894.jpg
For me (on FF) HTTP/1.1 was faster in around 5 of 7 attempts. I'm on a corporate network, so I'm not sure whether that's affecting it.
No "Connection: close" anymore.
How about a demo that doesn't require javascript to be enabled to work? Or is javascript a hard requirement for HTTP/2?
I ran the demo several times and I got a 1 second difference. I guess there is a place where even 1 second is important.
Awesome! Now web sites can pack 6 times more ads and other cruft onto each page.
12.50 -> 1.41
Chome 44, Win 7
With this, JS bundling is a thing of the past, I think.
Lol ran slower than http1 on my iPhone :/
HTTP/2 is consistently slower for me...
Not sure what is going on, but here were my results.
HTTP1 - 3.13s
HTTP2 - 0.54s
HTTP1 - 10.88s
HTTP2 - 1.65s
Chrome on Windows 8.1.
How can I enable HTTP/2 on Apache?
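A hedged sketch of one way to do it, assuming Apache 2.4.17+ built with the (at the time experimental) mod_http2 module; check the mod_http2 docs for your actual build and module path:

```apache
# Load the HTTP/2 module (path varies by distribution).
LoadModule http2_module modules/mod_http2.so

# Advertise h2 over TLS, falling back to HTTP/1.1, per server or vhost.
Protocols h2 http/1.1
```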
Now make those images webp.
This HTTP/2.0 test was faster for me by only 0.01 seconds.
The 14MB HTTP/1 demo ran significantly faster than the HTTP/2 demo... re-ran many times with the same result.
Is it a real-world demo though?
Similar to many of the other demos of HTTP/2 (Gopher Tile, Akamai), it's written in a way that presents HTTP/1.x in the worst light, and it manages to screw things up even more.
HTTP/1.1 is really latency prone, so when a demo uses lots of small requests that don't fill up the congestion window you run into a couple of problems:
1. The browser can only use a limited number of connections to a host, so once these are in use the other requests queue up waiting for a connection to become free.
2. Even when one becomes free, we've got the request/response latency before the browser sees an image's bytes.
3. If the response doesn't fill the congestion window, i.e. it's a small file, then there's spare capacity that's not being used, i.e. packets we could have sent in that round trip but didn't.
4. In this demo the server sends Connection: close, forcing the browser to open a new TCP connection and negotiate TLS for each of the tiles, so the congestion window won't grow either.
Yes, HTTP/2 is faster, because it can send multiple requests at the same time to overcome latency, the server can fill the congestion window, and the window will grow.
But are our web pages built of tiny image tiles, or of a greater variety of image and resource sizes?
EDIT: They've now enabled keep-alive, which makes the HTTP/1.1 test much faster than it was.
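Those four problems can be put into a rough back-of-envelope latency model. All the numbers here (RTT, tile count, per-host connection limit, handshake costs) are illustrative assumptions, not measurements from this demo:

```python
# Back-of-envelope model of many tiny requests under different setups.
# Every constant is an assumption for illustration only.
RTT = 0.05    # assumed round-trip time in seconds
TILES = 180   # assumed number of small tile requests
CONNS = 6     # typical per-host browser connection limit

# With "Connection: close": every tile pays TCP setup (~1 RTT) +
# TLS handshake (~2 RTTs) + request/response (1 RTT), spread across
# the available connections.
h1_close = (TILES / CONNS) * 4 * RTT

# With keep-alive: pay setup once per connection slot, then one
# request/response round trip per tile per slot.
h1_keepalive = 3 * RTT + (TILES / CONNS) * RTT

# HTTP/2: one connection, requests multiplexed, so roughly setup plus
# a few round trips while the congestion window grows.
h2 = 3 * RTT + 2 * RTT

print(f"close: {h1_close:.2f}s  keep-alive: {h1_keepalive:.2f}s  h2: {h2:.2f}s")
```

Crude as it is, the model matches the rough shape of the numbers reported in this thread: closing connections is several times worse than keep-alive, and multiplexing wins again on top of that.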