The modern web on a slow connection (2017)
We'll soon come full circle, when this generation of programmers realizes they can render HTML templates server-side.
Finally, an influential developer who cares about the other 99%.
I've lived in dozens of places. I've lived in urban areas, suburban areas, rural areas. I've even lived on a boat. With the exception of wealthy areas, reasonable internet is a constant struggle.
It's also never really about the speed, in my experience. The biggest issues are always packet loss, intermittent outages, latency, and jitter. High-speed internet doesn't count for much, aside from checking off a box, if it goes out for a couple of hours every few hours, or has 10% packet loss. You'd be surprised how common stuff like that is. Try visiting the domicile of someone who isn't rich and running mtr.
Another thing I've noticed is that ISPs seem to intentionally add things like jitter to latency-sensitive, low-bandwidth connections like SSH, because they want people to buy business class. So, in many ways, 56k was better than modern high-speed internet. Yes, connections had slow throughput, but even at 300 baud the latency was low and the reliability was good enough that you could count on it when connecting to a central computer and running vi. Bill Joy actually wrote vi over that kind of connection, which deeply influenced its design.
The problem with comparing the Internet now vs 25 years ago is that back then you didn't live on the Internet all your waking hours. You jumped on, got what you needed, and got off again; otherwise you'd be paying a high hourly premium. And to top it off, you'd power off your computer and cover it and the monitor with a dust cover.
Now, with phones and always-on connections, it's not even comparable. In the early 1990s, and even the late 1990s, I spent more time using my computer itself (programming, graphics, learning about the physical thing in front of me) than on the Internet.
The performance numbers are a really helpful illustration of the problem with a lot of sites. A detail it misses, and one I see people constantly forget, is that any individual user might be on one of those crappy connections at multiple points during the day, or even move through all of them.
It doesn't matter if your home broadband is awesome if you're currently at the store trying to look something up on a bloated page. It's little consolation that the store built an SPA with fancy-looking fades when navigating between links if it won't even load right inside the store's own brick-and-mortar location.
Far too many web devs are making terrible assumptions about not just bandwidth but latency. Making a hundred small connections might not require a lot of total bandwidth, but when each connection takes a tenth to a quarter of a second just to get established, there's a lot of time the user is blocked, unable to do anything. Asynchronous loading just means "I won't explicitly block for this load"; it doesn't mean that dozens of in-flight requests won't cause de facto blocking.
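A quick back-of-envelope sketch of that de facto blocking (the 100-request count, 200 ms per-connection setup cost, and six-connections-per-host limit below are illustrative assumptions, not figures from this thread):

    // Rough model: requests proceed in waves bounded by the per-host
    // connection limit, and every fresh connection pays its setup cost
    // (DNS + TCP + TLS round trips) before any content arrives.
    package main

    import "fmt"

    func main() {
        const (
            requests      = 100 // assumed number of small resources on the page
            setupPerConn  = 0.2 // assumed seconds of setup per connection on a high-latency link
            maxConcurrent = 6   // typical per-host connection limit in browsers
        )
        waves := (requests + maxConcurrent - 1) / maxConcurrent
        fmt.Printf("~%.1fs spent on connection setup alone\n", float64(waves)*setupPerConn)
    }

That's over three seconds of waiting before a single payload byte matters, no matter how "asynchronous" the loading code is.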
I'm using the web not because I love huge hero images or fade effects. I'm using the web to get something accomplished like buy something or look up information. Using my time, bandwidth, and battery poorly by forcing me to accommodate your bloated web page makes me not want to give you any money.
One of the problems is that a lot of devs have very good connections at home. I've got 600 Mbit/s symmetric (fiber optic). Spain, France, now Belgium... It's fiber nearly everywhere. Heck, Andorra has 100% fiber coverage. Japan: my brother has 2 Gbit/s at home.
My home connection is smoking some of my dedicated servers: the cheap ones are still on 100 Mbit/s in the datacenter, and they're totally the bottleneck. That's how fast home connections are, for some.
I used to browse on a 28.8 modem, then 33.6, then ISDN, then ADSL.
The problem is: people who get fiber are probably not going back. We're going towards faster and faster connections.
It's easy, when you've been on fiber for years and years, to forget what it was like. To me that's at least part of the problem.
Joel is not loading his own site on a 28.8 modem. It's the unevenly distributed future.
Kind of a tangent but (for reasons) I have a workstation on which I didn't want to install a full-fledged browser so I'm using Dillo, https://www.dillo.org/
> Dillo is a multi-platform graphical web browser known for its speed and small footprint.
> Dillo is written in C and C++.
> Dillo is based on FLTK, the Fast Light Toolkit (statically-linked by default!).
Dillo doesn't have a JS engine. (This is a "pro" in my opinion.)
Using it, the web divides into three equivalence classes:
1) Works. (Defined as: the site loads, the content is accessible, and it looks more or less like the author intended.) As a rule, such sites load lightning fast.
2) Broken but the content can still be read. Usually the site layout is messed up.
3) Broken completely. Typically a blank page, or garbage without visible content. (There is a new failure mode: sites that won't reply to browsers without Server Name Indication (SNI, https://www.cloudflare.com/learning/ssl/what-is-sni/). Dillo doesn't (yet) support SNI, so those sites are "broken" too; there's a short sketch of what SNI does at the end of this comment. Typically these are Cloudflare-protected sites that give me a 403, but e.g. PyPI recently adopted SNI and went from category 1 to 3.)
(HN is in category 2 FWIW: you can read the content, but the "bells and whistles", like login and voting, don't work.)
I don't really have much to add, just that A) enough of the web works that I find Dillo useful for my purposes, B) the web that does work with Dillo is much less annoying than the "modern" web, and C) it kinda sucks IMO that most of the web is junk from the POV of the Dillo user agent.
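To make the SNI failure mode above concrete, here's a minimal sketch (in Go rather than Dillo's C/C++, with example.com as a placeholder host) of the difference between a client that names the host in its TLS ClientHello and one that doesn't:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net"
    )

    func main() {
        host := "example.com" // placeholder; think of a vhost behind a shared Cloudflare IP

        // With SNI: the hostname rides inside the ClientHello, so the server
        // can present the matching certificate for that vhost.
        withSNI, err := tls.Dial("tcp", host+":443", &tls.Config{ServerName: host})
        if err != nil {
            fmt.Println("with SNI failed:", err)
        } else {
            fmt.Println("with SNI:", withSNI.ConnectionState().PeerCertificates[0].Subject)
            withSNI.Close()
        }

        // Without SNI (roughly what a pre-SNI client like Dillo sends): leave
        // ServerName empty so no SNI extension goes out at all.
        raw, err := net.Dial("tcp", host+":443")
        if err != nil {
            fmt.Println("tcp dial failed:", err)
            return
        }
        noSNI := tls.Client(raw, &tls.Config{InsecureSkipVerify: true})
        if err := noSNI.Handshake(); err != nil {
            fmt.Println("without SNI failed:", err)
            return
        }
        fmt.Println("without SNI:", noSNI.ConnectionState().PeerCertificates[0].Subject)
        noSNI.Close()
    }

On a shared IP, the SNI-less handshake typically gets a default certificate or is refused outright, which is why those sites end up in category 3.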
From 2010 to 2016, I lived in a rural area where we had one option: satellite internet from HughesNet. Prior to that, the only option was dialup, and since it was such a remote area, the telephone company had grandfathered a few numbers from outside the service area as free to call, so residents could reach the nearest ISP without extra toll charges.
So we went from paying $9.95/month for average 56k service to $80/month for a service that was worse than that.
To add insult to injury, a local broadband provider kept sticking their signs at our driveway next to our mailbox, and we would call to try and get service, but we were apparently 200 feet past the limit of their service area. People who lived at the mouth of our driveway had service, our neighbors had service, but we were too far out they said.
I repeat: as late as 2016 I WOULD HAVE KILLED TO BE ABLE TO JUST USE FREAKING DIALUP!
My mom's cellular data plan (used for rural internet access through a cellular/wifi router) has a 128kbps fallback if you use up your main data allotment.
128kbps isn't so bad, is it? More than 3x the speed she used to get with a dialup modem.
But no. We ran it into the fallback zone to try it out, and half the sites (e.g. the full Gmail UI or Facebook) wouldn't even load: the page would time out before it was functional. (At 128 kbps, roughly 16 kB/s, a multi-megabyte page takes minutes just to transfer.)
The 128kbps fallback is meant as a lifeline, for email and instant messaging. And that's really all it's good for any more.
> If you think browsing on a 56k connection is bad, try a 16k connection from Ethiopia!
No need to travel that far. During my time in my previous apartment I had two options for connecting:
-The landlord's mother's old router behind a thick wall (400kbps at best, 300-400ms latency with 10-30% packet loss).
-A "soapbar" modem we got along with my SO's mobile plan. 14GB a month, slowing down to a trickle of 16-32kbps when used up.
Things that worked on such a connection:
-Google
-Google Meet
-Facebook Messenger
-Hacker News
The rest of the web would often break or not load at all.
Nevermind connection speed, the modern web is unbearable without ad blockers.
The modern web loads the whole website on your first visit, aka the SPA.
Add React at 150kb, Bootstrap at 150kb, and all their plugins, and it's multi-megabyte.
I was thinking about converting my old server-rendered website into a modern web app. Still wondering if it's worth it.
> For another level of ironic, consider that while I think of a 50kB table as bloat, this page is 12kB when gzipped, even with all of the bloat. Google's AMP currently has > 100kB of blocking javascript that has to load before the page loads! There's no reason for me to use AMP pages because AMP is slower than my current setup of pure HTML with a few lines of embedded CSS and the occasional image, but, as a result, I'm penalized by Google (relative to AMP pages) for not "accelerating" (decelerating) my page with AMP.
This is cute. Somehow it's like the Compton limit, where at a certain scale you just can't make measurements accurate enough, because the very act of measurement interferes with the system.
What do people think is the best approach to incentivise lean web design? The bloat of the modern web is absolutely ridiculous, but it seems to be the inevitable result of piling abstraction on top of abstraction, so that you end up with a multi-megabyte news article due to keeping up with the latest fad framework.
>You can see that for a very small site that doesn’t load many blocking resources, HTTPS is noticeably slower than HTTP, especially on slow connections.
Yet another reason not to get rid of HTTP in the HTTP+HTTPS world just because it's hip and what the big companies are doing.
A realisation I had a few years ago was that differential accessibility to websites is quite likely a market segmentation technique, whether used intentionally or otherwise.
A website that only works on recent kit in high-bandwidth locales with low ping latencies and little packet loss ... acts much the same as a posh high-street address does in dissuading the people one would prefer not to have to deal with or address.
(There may be a similar logic to truly atrocious web design, intended to dissuade (literal) tyre kickers, as with LINGsCars: https://ello.co/dredmorbius/post/7TOJTiDEF_L4r_sdBRINGw)
Though this explains the behaviour, it doesn't excuse it. And might well form the basis of discrimination or accessibility lawsuits.
Interesting that it mentions the book "High Performance Browser Networking". My feeling after reading it was that latency is all that matters, not page size.
Is there any cellular phone service that throttles to a usable speed (say at least 1.5 Mbps)? I was looking forward to the spread of 4G because I naively thought throttled speeds would also increase to "3G equivalent" instead of "2G equivalent".
I use Mint Mobile and last month I hit the 4G data cap of my "unlimited" plan (30 GB). I tried to buy more data, but it can't be done on the "unlimited" plan. I could not buy more data for my "unlimited" data plan after my "unlimited" data plan ran out of data. Other plans let you purchase additional data at a steep price; only the "unlimited" plan doesn't.
This article gave me a random push to nuke the size of my site - just brought it down from ~50 kB on text pages all the way to ~1.5 kB now, and from 14 files to 1 HTML file.
Part of the problem is that writing HTML is lost knowledge. Extinct.
Before you think out loud about how simple HTML is and how you still remember writing it a million years ago, ask yourself this: do you write it now, and would you pay to hire someone to do that?
It's not a job or skill you can be hired for. Instead you have to use React if you want employment. So that's an immediate 50x swell right out of the gate, and we haven't even gotten to poor coding practices.
I was expecting an empty rant, but this was even-handed and made some good points.
From a capitalist devil's advocate point of view: does it really matter that your website works poorly for 90% of web users, when those 90% either don't have any disposable income (there's got to be a correlation) or are too distant geographically and culturally to ever bring you business? "Commercial quality" web development (i.e. not looking worse than the competition) has become unbelievably complicated and expensive, so in reality probably only individual enthusiasts and NGOs should care about the long tail. Local small businesses will just use some local site builder or social media, so even they don't matter.
For many advertising-based websites, the value of the user is often proportional to their bandwidth.
I wonder if the experience improves by using proxy browsers such as Opera (Lite/Mini).
> I probably sound like that dude who complains that his word processor, which used to take 1MB of RAM, takes 1GB of RAM
Well, 1MB is way too small, but 1GB is way too large. Conserving resources is important.
YES!
I've started using Tailwind and Alpine (with no other frontend framework). A basic page with a couple of images, a nice UI, animations, etc only takes up ~250kb gzipped.
It loads quickly on practically any connection, is well-optimized, and generally just works really well. The users love it too.
Coupled with a fast templating system and webserver (server-side in Go is my personal favorite but there are plenty of others), it isn't hard to get a modern featureset and UI with under 300kb.
I hope the push for a faster internet continues. Load-all-the-things is great until you're on a bad connection.
Sourcehut comes to mind as a great model/example, using a simple JS-less UI and Go server-side rendering.
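For anyone curious what that looks like in practice, here is a minimal sketch of standard-library Go server-side rendering (the route and page content are invented for illustration, not taken from Sourcehut or the comment above):

    package main

    import (
        "html/template"
        "log"
        "net/http"
    )

    // One small template, parsed once at startup; the whole page is a few kB of markup.
    var page = template.Must(template.New("page").Parse(`<!DOCTYPE html>
    <html><head><title>{{.Title}}</title></head>
    <body><h1>{{.Title}}</h1><p>{{.Body}}</p></body></html>`))

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Render the full HTML on the server; the client downloads markup,
            // not a framework bundle.
            data := struct{ Title, Body string }{"Hello", "Rendered server-side."}
            if err := page.Execute(w, data); err != nil {
                http.Error(w, "render error", http.StatusInternalServerError)
            }
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

No client-side framework, no hydration step: the browser just gets HTML, which is why this style holds up so well on bad connections.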