There could be a companion article: "Will Consuming Only Real HTML Content Make A Website Faster? Let's Experiment!"
I have run this "experiment" myself for many years now by (a) controlling DNS so that only the domain in the "address bar" URL is resolved^1 and (b) making HTTP requests using a TCP client and/or an unpopular nongraphical web browser that only processes HTML and does not auto-load resources. No images, JS, CSS, etc.
The answer to the question is yes. This "makes a website faster", or, more specifically, as someone else in the thread has stated, it does not make the website slow. It does not accommodate the practices of "web developers" that slow a website down.
But most importantly, IMO, it makes "website speed", not to mention appearance, more consistent across websites. Good luck achieving any semblance of that with a popular graphical web browser.
Most web pages submitted to HN can be read this way. I find it easier to consume information when it follows a consistent style of presentation and without the distractions enabled by "modern" web browsers.
1. This is the only URL the www user is informed about. In the short history of the www so far, auto-loading from other domains, whether through HTML, Javascript or otherwise, has unfortunately been abused to the point where allowing it produces more risk for the www "user" than convenience for the www "developer". Sadly, instead of deprecating the "web development" practices that have been abused and make websites slow, the new HTTP standards proposed by an advertising company and supported by CDNs cater to this practice of "composite" websites composed of resources from various third parties. It stands to reason that advertisers and therefore "tech" companies and their providers, e.g., CDNs, stand to benefit more from "composite" websites than www users do. IMHO the easiest way to "make websites faster" is to stop enabling "web developers" to do the things that make them slow.
> stop enabling "web developers" to do the things that make them slow
It's been said countless times; but Slack, Discord, Youtube, Google Meets, Figma, Google Maps, Google Docs, Github, Excalidraw, Penpot, Diagrams.net etc. etc. are all web sites. They are a different class of websites than Hacker News or Project Gutenberg, being more apps than documents, but they are an important class, too; and they were all made possible because web developers were enabled, through various browser apis, to build them; and I am grateful for that. Wishing that web developers were not enabled to build complex things on the web is inconceivable to me. It's the culture and the education around web development that should change; not the enablement of web developers.
> IMHO the easiest way to "make websites faster" is to stop enabling "web developers" to do the things that make them slow.
Yeah, as one of those developers who jumped on processing everything on the client/browser side, and supported all the browsers getting fancy JS features when Ajax was first popularised by Gmail, I have come to regret my decision. Especially with all the standards proposed by advertising companies, users/consumers don't really benefit from the usage of the web.
I see it as a rare case of "hate the players not the game"
I've come to rely a lot on GitLab and GitHub's web interfaces through the years for diffing and quickly navigating a specific commit, or a project I don't want to check out. All the improvements coming from more JS and Ajax have been a boon to me. Sure, I could do everything locally, but it's just so much more convenient.
Same for Gmail, as you mention it: I wouldn't go back to the previous webmail interfaces short of getting paid a living salary just for that. Same for banking websites, which have come such a long way.
The technology and trend are a net plus; advertising companies coming to ruin whatever they can is par for the course. I mean, looking at newspapers, TV, Google Search, YouTube, Instagram, App Store search, etc., making anything it touches worse is in the ad business' DNA.
For sure, I agree that the Ajax UI/UX offered by Gmail is a game changer. And all those features that make it convenient for users can be accomplished without compromising security/privacy. What I don't like are companies proposing standards disguised as convenient features, when in reality they're just a racket to mine data from users.
For reading HTML, I use various versions of links 1.x and links 2.x, patched to meet personal preferences. Of course links can make HTTP requests, as well as display HTML, but I use a variety of programs to make HTTP requests. When I use links to make HTTP requests, it is always through a loopback-bound proxy.
For commercial use of the www, of course, I am forced to use a popular graphical web browser like everyone else. But I make minimal commercial use of the www. Most www use for me is academic or recreational.
I had campus security called on me because I was using an acoustic coupler on a payphone to a free dial-up provider to browse the internet via links. Luckily the security guard understood what I was talking about and was chill, but he couldn't understand why I wasn't just using the campus WiFi.
I remember when single-page applications were all the rage. I was highly skeptical that they could beat just loading HTML, given that the performance benefits were all predicated upon amortizing the initial load cost over many page requests. It's a very risky bet given that a lot of sites don't have a lot of repeat traffic to begin with, unless you just so happen to be an application in the guise of a website.
I just tested my own site, which I built using Gatsby — a JS framework — and https://astro.build, whose entire schtick is that they deliver as little JS to the page as possible.
(Because I’m thinking of rebuilding my site using Astro. But that’s not relevant here.)
In the default test, my page loaded in 1.6s and Astro in 1.9s. In the ‘not bad’ ratings below the main figures, my site fared better.
Now that my page is loaded, Gatsby does some neat pre-loading on hover of links. So clicking around my site is literally instantaneous. The same is not true of Astro, where every click is a classic HTTP request.
I am not judging Astro. That’s not the point of this post. I’m no Gatsby fanboy, I think it’s horribly over-complicated. I’m just saying. It’s complicated.
SSR doesn't mean no XHR, it usually just means that change in route means round trip to the server. If you're doing SSR only for the initial page load then it's typically called hydration.
The amount of JS to download and parse is only one component that makes SPAs slow. SPAs often have to do another request to fetch json. Depending on how far away you are from the web server, and some users are literally on the other end of the world, this latency can be really large. You don't just want to minimize the javascript you also want to minimize the round trips. But using a small js framework is a good start I think.
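The round-trip arithmetic behind this point can be sketched with a toy model. The 300 ms RTT and the three-request SPA sequence are illustrative assumptions, not measurements of any real site:

```javascript
// Toy model: time spent on *sequential* network round trips, where
// each dependent request must wait for the previous one to finish.
function sequentialLoadMs(rttMs, roundTrips) {
  return rttMs * roundTrips;
}

// Server-rendered page: one round trip delivers usable HTML.
const ssr = sequentialLoadMs(300, 1); // 300 ms on a 300 ms RTT link

// SPA cold load, worst case: HTML shell -> JS bundle -> JSON fetch,
// each gated on the previous response.
const spa = sequentialLoadMs(300, 3); // 900 ms before any content

console.log({ ssr, spa, penaltyMs: spa - ssr });
```

The model ignores bandwidth and JS parse time entirely, which is the point: latency alone makes each extra sequential round trip expensive for far-away users.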
Woah! I was gonna play with Astro for a new site test, but I wanted the prefetch fast-nav that I’ve seen in NextJS and Gatsby. This looks like exactly what I want to make Astro do the trick! Thanks for sharing
Also, over-engineering web sites this way creates a lot of job security. Front-end development could probably warrant university courses all on its own.
A point of comparison should be to git.kernel.org, which loads and renders instantly (at least compared to all these other sites), contains a massive amount of actual content per page, is highly cacheable on the server, and uses exactly zero javascript while remaining usable (for its use case at least, which is all links and little form interaction (only the search box)).
How far we've come that near-real-time multidimensional dynamic views onto multi-GB data stores are considered 'a pure document' by someone. Why, just because it uses links and URLs instead of buttons and fetch?
No, you've assumed the question. "Applications are much harder to cache and start up fast" because you're defining "applications" to intentionally exclude architectures which are cachable and fast to start.
If I shipped a git repo viewer as an executable, nobody would be asking why it's not rather a txt/doc/PDF. It's unquestionably an application, not a document.
It takes about 4s to load and does 2 or 3 jarring text reflows, I guess as fonts load or something.
It's also unreadable. Incredibly tiny text that would exhaust my eyes to parse within a few minutes.
And besides that... It's just text. Displaying text is easy. I'm sure this would be close to instant whatever framework you used, as long as it was coded well.
>a team may deem the initial performance impact of JS-dependence a worthy compromise for benefits they get in other areas, such as personalized content, server costs and simplicity, and even performance in long-lived sessions
notice how all these "benefits" only benefit the developer at the expense of the user or have nothing to do with the problem at hand. "personalized content"? really?
Pre-client-side YouTube, Twitter, FB, and Reddit were all superior feats of engineering to their modern JS-heavy counterparts.
It will certainly make the website more accessible to more people and reduce the load on their computers. This is required for government/public services. Look at how nice the UK NHS sites are. But for-profit corporations are free to, and seem to always, go the javascript application route because it is cheaper and easier to find teams to build them.
I'm sad we are downgrading the quality of the workforce just to increase the size of the pool. I try not to have a gatekeeper mindset, but the new army of JS warriors coming into the industry is pushing the ecosystem off its engineering basis.
The NHS has very different goals. Those are reflected in the standards they set for themselves. They must serve everyone, unlike big businesses who consider the edges of the bell curve too expensive to cater to.
These types of tests tend to be unfair to actual web apps, since they only really account for first-time-use.
Twitter is slow in this experiment because it has to load a bunch of JavaScript up front. But that's not the case in practical use! Twitter uses service workers and HTTP cache headers (e.g. `expires`) to make sure that most non-first-time-users aren't actually loading most things every time. Client-side rendering isn't the thing that's slow here, it's mostly the re-downloading of the rendering code every time when that's not realistic.
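The `expires`-style caching mentioned here can be sketched as a small helper. The one-year lifetime and the idea of applying it to a fingerprinted bundle are illustrative assumptions, not what Twitter actually ships:

```javascript
// Sketch: headers a server might attach to a content-hashed JS bundle
// so returning visitors never re-download it until the filename changes.
function longLivedCacheHeaders(maxAgeSeconds) {
  return {
    // `immutable` tells the browser not to revalidate within max-age.
    "Cache-Control": `public, max-age=${maxAgeSeconds}, immutable`,
    // Legacy `Expires` header for older caches.
    "Expires": new Date(Date.now() + maxAgeSeconds * 1000).toUTCString(),
  };
}

const headers = longLivedCacheHeaders(31536000); // one year
console.log(headers);
```

This only helps on repeat visits, of course, which is exactly why first-time-use tests make client-rendered sites look worse than their steady state.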
Note that WebPageTest supports repeat tests for exactly this reason - that closes the browser and restarts it without clearing the cache. Here’s what that looks like - it helps the first render a lot (0.5 vs 1s) but the largest paint is still 7s: better than 11 but still pretty slow.
One big thing to remember is that browser caching only works if you aren’t shipping updates frequently (bundlers have been an anti-pattern for many sites for the last few years) and aren’t storing too much. On mobile in particular, a lot of sites load enough junk that they fall out of the cache. I use Twitter only via their web app and the page load time on a fast iPhone is still like 10 seconds or worse, when a well-optimized HTML page can be in the hundreds of milliseconds.
Browser caching is nicer with service workers - you can use the cached version and load/install the updated version in the background, so that the user can get the updated version on their next refresh.
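That pattern is often called stale-while-revalidate. A minimal sketch of the logic follows; in a real service worker, `cacheMatch`/`cachePut` would be the Cache API and `fetchFn` would be `fetch()`, but they're injected as plain callbacks here so the flow is visible on its own (error handling elided):

```javascript
// Serve the cached copy immediately if present, while refreshing the
// cache in the background so the *next* load gets the new version.
async function staleWhileRevalidate(request, { cacheMatch, cachePut, fetchFn }) {
  const cached = await cacheMatch(request);
  // Kick off the network update regardless of a cache hit.
  const network = fetchFn(request).then(async (response) => {
    await cachePut(request, response);
    return response;
  });
  // Cache hit: respond instantly. Miss: wait for the network.
  return cached !== undefined ? cached : network;
}
```

The trade-off the replies point out is real: the user sees the stale version until their next refresh, which is where "why didn’t reload fix it?" confusion comes from.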
That’s a nice improvement but it still hits the cache size issues and it creates some hard problems you have to get right (“why didn’t reload fix it?”).
Not everyone has the resources to support that level of investment and there’s a pitfall in the middle where you get the costs but don’t see the desired benefits.
Client-side rendering absolutely does make Twitter, in particular, slow. They used to do server side rendering for the initial load not that long ago, and it was so much faster than it is now. Night-and-day difference. I guess they just stopped caring. Someone probably got a promotion for launching the new thing.
Any time I have to use the Twitter site instead of Twitterrific on both desktop and mobile, the difference is huge. The site is a never-ending barrage of loading spinners.
For what it's worth, Twitter has gone back and forth on this. They did serve useful HTML up-front in the recent past. It was faster for initial delivery and I'd imagine helped the odd person who was sent a link to a tweet and hadn't loaded the site in a while.
Other sites in the article fit a traditional website model where folks commonly land on pages from search results, but they're still very JS dependent for content. CNN and FedEx etc
I've been using Nitter rather than Twitter for a while now, at least for reading tweets. A submit feature and SSE or a websocket listening for new posts would not kill Nitter's performance if it intended to implement them, but the difference between the two is night and day.
Twitter is the slowest website I come back to after Reddit. It's a hot mess and the fully loaded web app isn't any faster. Yes, the megabytes of content being loaded for rendering 280 characters aren't indicative of web app performance, but clicking through to the main app is still slow and unreliable. My old phone used to get hot from the Twitter web app for no real good reason. There's less than a kilobyte of content in a tweet yet I have constant issues with getting all images to load when they come into view or scrolling back up without having the content jump around because some recommendations block decided it exists again now.
Getting faster devices solved Twitter's problems for me for a while but it's still ridiculous if you open Nitter and compare the performance. It's a clear example of what's wrong with modern web development because the "traditional" design of Nitter does everything I want it to do and 90% of what most people want it to do (needs tweets, DMs to become interactive) at a fraction of the load time.
I'd been thinking that 5G is a thing only because the IOT is a thing. It had nothing to do with phones, but the build-out is funded by phones. So when it all settles down, phones will be as slow as they were in the 3G era, at best, what with so much stuff clamoring for data.
I've seen this with my telco's 5G: full bars and it is still almost unusable. Replace people with IoT devices pinging and sending tiny messages constantly and it's gonna be rough :)
Shouldn't we have more devices and more connection types to have a more controlled experiment?
It's always 4G, mobile Chrome and I assume the same device.
Very likely same carrier at the same place, so roughly same connection conditions in terms of latency DL/UL bandwidth and jitter. Also always the same device with same CPU/GPU. Perhaps a flagship new shiny phone with a superfast SoC which gives a headstart to faster JS execution? Or perhaps a very spotty barely 1-bar 4G connection. (Just assumptions, maybe both are false, but you get the idea)
I'm a big fan of client-side generation using JS too, but I don't think this experiment covers many practical scenarios.
If we see more connection types and more variety of devices with different CPUs then it'd be more convincing.
Measuring the speed of a page rendered on an iPhone 14 on an mmWave 5G connection a foot from an antenna is not a worthwhile test. If it takes 5 seconds for Twitter to load a tweet (which it does on my iPhone 12 Pro on WiFi) is that somehow better? A tweet, famously limited to 140 characters takes 5 seconds to load?
A news article or tweet takes way too long to load on my phone, it's just ludicrous that on a mid-range phone and connection it would take 45 seconds! A copy of Frankenstein[0] (~78k words) weighs in at 463KB. A random CNN article or tweet is not a damn copy of Frankenstein. There's no reason either should take more than a second to load and render.
An HTML document with a bare minimum CSS to not be ugly has enough information to render and be useful to a user. It can do that with a single request to a server. At minimum the same page rendered with JavaScript needs two connections to a server. It's also got a higher minimum threshold for displaying something to the user because the JavaScript needs to be downloaded, parsed, interpreted/JIT, then requests for useful resources made. All to do things a browser will already do for free.
There's full JavaScript applications that can't be built with just HTML and CSS. Of course they need to load and run the JavaScript. A tweet or news article are not applications. They do not need to load the equivalent of copies of Doom to display a dozen paragraphs of text or just 140 characters of text. The modern web's obsession with JavaScript everywhere is asinine.
4G on a Moto is basically the worst case scenario but also how half the world interacts with the internet at large. If you're going to pick one scenario that describes a lot of users, they're pretty much dead on.
I think 4G doesn't tell the whole story, especially in several businesses that target users in specific conditions (e.g. tourism, where your users have poor unstable 4G) or specific markets with poor average mobile connections.
Some of the tests in the article are run on Desktop Chrome using a "cable" connection speed instead of 4G, which looks to have about a 6x faster round trip time than their 4G does. Those results are a little less impactful but still significant (many seconds faster still in some metrics).
More testing environments would make the results more or less significant, as you'd expect.
In ideal browsing conditions, the impact will be more minor, and in the spotty barely 1-bar connection you mention, the difference would be much more dramatic than the 4G examples in the post.
I wonder if one of the reasons we've seen a big push towards systems like this is because it does make the website faster, but in a way that's one step removed than we often consider it, or by a slightly different metric.
What if instead of client side load time, what's also being looked at is load time per finite server side compute resource? By dumbing down the server side to graphql JSON delivery + static JS, maybe that allows them to serve that specific need faster per 10k servers or something, and having to do the full page composition under heavy load server side just doesn't scale as well?
I mean, yes. That's what I was getting at. It's cheaper to offload your compute to your users. It's probably much cheaper to offload a massive amount of compute onto billions of users.
They'll keep doing it until they're penalized in some fashion for doing so, if it's cheaper.
There needs to be a website hall of shame that serves particularly slow websites rendered into PNGs/WEBP/WEBMs (+JS code to make them interactive/clickable) if those would load faster and use less data volume than the real thing.
I don't think it necessarily is. You do get better LCP and FCP but other metrics suffer (time to interactive, TTFB and several other metrics are primary examples).
It's a compromise, and hydration is a huge performance hit. (I work on performance of a SSR ecommerce)
"Time to interactive" and "time to first byte" are pointless numbers if the purpose of your site is to display content (Reddit, Twitter, pretty much everything else). If I (resentfully) click on a Reddit link on a SERP, I'm going there to read the content, not to flip open menus or whatever.
"Time to human satisfaction" should be a number that front-end developers measure and aim to improve. Just rendering the content server-side and showing it to the user first, then adding on the bells and whistles after that, is how you do that.
> "Time to human satisfaction" should be a number that front-end developers measure and aim to improve. Just rendering the content server-side and showing it to the user first, then adding on the bells and whistles after that, is how you do that.
Not necessarily. If you "load" the page but it doesn't do what it should when I click on it, that can be much more frustrating to the human than taking a little longer to load but being fully functional when you do. The assumption that anything that isn't HTML is "bells and whistles" is pretty dubious (as is the converse assumption that everything in the HTML is valuable).
If the purpose of your site is to show content to a human, then anything on the page that isn't the content the human wants to see is bells and whistles. I will die on this hill.
> If the purpose of your site is to show content to a human, then anything on the page that isn't the content the human wants to see is bells and whistles.
Sure, but a) the purpose is rarely just to show content, one of the great strengths of the web is interactivity. b) often a lot of what's in the HTML (and especially the CSS) isn't the content the human wants to see.
> "Time to interactive" and "time to first byte" are pointless numbers if the purpose of your site
They matter for SEO.
> "Time to human satisfaction" should be a number that front-end developers measure and aim to improve.
Satisfaction varies. TTI is a relevant metric. E.g. On our ecommerce it takes less than 10 seconds to load the entire page (sub 5 for most pages), but then the user can literally do nothing till all hydration has happened and executed which is simply not a great experience.
Our users, if using the website through slower connections/devices are looking at 40 seconds + delays before they can do anything meaningful, that's unacceptable.
> E.g. On our ecommerce it takes less than 10 seconds to load the entire page (sub 5 for most pages), but then the user can literally do nothing till all hydration has happened and executed which is simply not a great experience.
I can't tell if you're gloating about this or agreeing that these numbers are unacceptable.
Let's say it's a product page. You should be able to have the product title, description, and images (in the sense of <img> tags) load with the page instantly. If you need to do dynamic stuff like have a T-shirt size and color picker which changes what options are available based on stock or something like that, that functionality can be added to the page after the initial load, but nothing that doesn't have to be dynamic should be dynamic.
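The "only the dynamic part is dynamic" split can be sketched like this: the title, description, and images ship in the server-rendered HTML, and a small script only computes things like option availability afterwards. The stock table and field names below are made up for illustration:

```javascript
// Hypothetical stock data the picker script would fetch *after* the
// server-rendered page is already readable.
const stock = [
  { size: "M", color: "red",  qty: 3 },
  { size: "M", color: "blue", qty: 0 },
  { size: "L", color: "red",  qty: 5 },
];

// Pure helper: which colors can actually be ordered for a given size.
// This is the only genuinely dynamic piece of the page.
function availableColors(stock, size) {
  return stock
    .filter((item) => item.size === size && item.qty > 0)
    .map((item) => item.color);
}

console.log(availableColors(stock, "M"));
```

Everything else on the page works without this script ever loading, which is the progressive-enhancement point being made above.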
> other metrics suffer (time to interactive, TTFB and several other metrics are primary examples).
This is only true for a fraction of sites - the ones I work on are content heavy and use progressive enhancement so TTFB is how long it takes a CDN cache hit to transfer, and since you’re using so much less JavaScript in the critical path they’re less affected by network latency and device performance. Obviously that hits a different trade off than an app which has to have dynamic functionality to meaningfully load and lots of user-specific features are going to lower cache effectiveness.
As always, you get what you measure: pick metrics which matter for your business and get as much as possible get your data from real users so you don’t waste time chasing something people don’t care about. It’s too easy to focus on, say, making your search page as fast as you can when you should be asking whether you have what they’re looking for and surfacing it for them.
The large failing with this test is that it assumes the time to get the relevant page data from the database and render it to HTML is 0. If Twitter had your feed ready to go from its cache this might be accurate, but realistically I would give the server a few seconds to do its work since the site is so personalized.
As the article says, the first example in the post does include the time that it takes to go out and fetch the static HTML that swaps in, and it added about a second to the server response of the experiment run that doesn't show up in the control run. For a big distributed site, a second may be more time than it would really take to put together a dynamic response.
Even with that 1-second additional delay included though, the improvement in that first test is still large (over 8 seconds faster in those tests). If the experiment took a few seconds longer on the server, it still would be 5 or 6 seconds faster to render content than the control.
I agree that server rendered would be faster; it's just that the article presented the absolute best-case scenario for the server. That said, there could be other tradeoffs at play. Maybe loading the JavaScript up front and requesting a small amount of JSON per page means that loading the initial page and then scrolling 10 pages down is faster than the server rendering out each page and appending it to the end.
This page is serving me a recursive stream of Captchas, as pudgetsystems reviews the security of my connection; not a great look for the topic alluded to in the headline.
In my experience, it's not even the framework that's the problem. Client-side rendered React is plenty fast for example. Not as fast as server-rendered (or even better, static) HTML, but fast enough (measured in a few hundreds of ms) that you won't notice the difference. It's generally things like loading lots of 3rd party scripts, or not paying to how many network roundtrips are required on the critical loading path which make it slow.
HN, which is about as fast as sites get in my experience takes between 600ms and 800ms to fully load with cache disabled. Around 350ms to load just HTML and CSS.
IMO a few hundred milliseconds for something that is actually webapp-like rather than just an information website is quite reasonable.
Notably, the examples in the OP article are slower by many thousands of ms, not just hundreds.
Either way, the post isn't saying you have to abandon app-like experiences. It's only about improving initial HTML delivery regardless of what you do after that.
You have to be careful with the lazy loading, sometimes it ends up being just more round trips. And then there are those annoying sites that refuse to load anything that is off screen, and you end up having to wait over and over again.
Those are the cursed sites! Not only do you have to wait for content you want, but you always get content you DON'T want just when you're about to finish your reading. Then - BAM! - the page starts lagging, twitching, spitting out another few megabytes' worth of article you didn't ask for.
While this is technically true, it's always been technically true, even when those in the religion of the SPA claimed it was clearly faster to use CSR over SSR.
This is just realigning what many of us already knew, SSR is faster.
If I may, the argument of "well... sure, but CSR can be plenty fast!" is redrawing the line after someone stepped over it.
Really it is a lot more about how much stuff you load than how you do it. CSR can at times provide savings, for instance by doing fake page changes and not having to load menus and stuff when the content changes. But of course you have to keep the JS light in order to not eat the savings.
Newer frameworks like Svelte or SolidJS are a lot less bloated on the client. Though we're still far from successfully minimizing the amount of network roundtrips involved in a SPA update, so there's plenty of room for improvement still.
I like what Ruby on Rails 7 has done with Hotwire and Stimulus. All html is rendered server side. If you need client side interactions, you can mount lightweight components using plain JavaScript.
Actually no, the point is if you don't put in anything that potentially makes the website slow you can be reasonably certain that it isn't. Obviously you still want to test the website, but it is not like this path is particularly test demanding.
Automated performance regression testing in browsers isn't trivial. Hell automated correctness testing in browsers isn't trivial. It's just not a well-trodden path -- automating browsers to test anything at all is messy and brittle.
The `***-dom.js` is 130.5 kB minified. Imagine the parse and evaluate time the browser takes before doing anything else. Oh, also, that's a 0-LoC app — just the lib.
The commenter you're replying to would likely agree - the comment's point isn't about the value of speed, it's that by default pages are fast, and it takes deliberate (but ubiquitous) deviations from the rendered HTML to slow it down.
It's really funny because I asked on Stack Exchange why websites are slower than apps, my question got removed for being opinion based, and I got an answer about hydration.
In my view, the dom should be made obsolete, and there should be tighter restrictions, by making things immutable, or just completely redesigning how the dom works.
You cannot obsolete web browser features. That is simply not going to happen because of the long tail of devices. I'm not sure how you'd want to redesign the DOM, but you can't do that in a backwards-incompatible way either.
You could probably speed up pages significantly by introducing deferred rendering to the DOM, but: backwards incompatible change.
(That was my answer about hydration that you didn't like)
The protocol and the document object model essentially assumed a static world. It's amazing what we've cobbled on top of an essentially stateless mode of encoding/structuring and accessing documents.
What this doesn't account for is html rendering time on the server.
The reason websites use local javascript to render html is so they don't have to do it on their server while you have to wait for the result. This way you have the perception of a page load while the html renders. It's actually a better experience for the user.
This entire analysis assumes that the server renders the html instantly. Unless it is static content that is highly cacheable, chances are the render time on your machine isn't much slower than the server, but the website can use a lot less compute resource to make the webpage for you since your computer is doing part of the work.
Also, chances are they have to transmit less data to you, which cuts down on network latency as well.
Citation needed. Whether the server is serving HTML or JSON, it still needs to serialise the data, so I don't think serialising JSON is going to be significantly faster than serialising HTML. Plus, the client then needs to deserialise that JSON before it can render HTML, so all of that (de)serialisation work doesn't even need to happen if the server renders HTML. Not to mention the work on the client to parse and evaluate the JS, which needs to happen before it can even start rendering HTML.
As for data across the wire, GZIP is a thing so again I would want to see real world performance numbers to back your claims.
While it does leave out server rendering time, that could not possibly account for several seconds of difference. TTFB could maybe be increased by 300-500ms… 1/10th of what gains are happening here
I'm not OP, but I don't think the argument is even about rendering. The whole point of SPA, if I understand correctly, is to send the javascript/data and then dynamically create the web page client side (with minimal updates when data changes). Javascript is fast, but a server can dynamically generate HTML in any language. Most server side languages and/or frameworks will be written in faster languages than Javascript. Additionally, you don't need to wait for the browser to deserialize the JS before it can start generating the HTML.
So the benefits of server side generation is no need to deserialize the code that generates the page and locality to the data. I'm guessing these two things are the biggest contributors to the speed ups.
I guess "server side rendering" is a bit of a misnomer, since you're not getting a rendered image, but rather a functional web page. It's possible I've misunderstood the whole SPA vs server side rendering arguments as well and my entire argument is invalid ¯\_(ツ)_/¯
In the Stack Overflow survey it looks like the top 8 frameworks are all JS, C#, and Java. Flask, Django, Laravel and RoR start coming in after all these frameworks. So it looks like most server side apps are either JS or something faster like C# or Java.
Also, this is if you check professionals. It's pretty similar for all respondents, though Flask and Django rise above Spring and ASP.NET, while ASP.NET Core is still above those.
But most desktops, notebooks, phones and tablets don't have 3 GHz processors with 50 MB of L2 cache, coupled with GBs/TBs of RAM and storage that is local (compared to the client requesting the page).