READ THE TEA LEAVES: SOFTWARE AND OTHER DARK ARTS, BY NOLAN LAWSON




27 Jun


SPAS: THEORY VERSUS PRACTICE

Posted by Nolan Lawson in performance, Web. Tagged: spas. 10 Comments

I’ve been thinking a lot recently about Single-Page Apps (SPAs) and Multi-Page
Apps (MPAs). I’ve been thinking about how MPAs have improved over the years, and
where SPAs still have an edge. I’ve been thinking about how complexity creeps
into software, and why a developer may choose a more complex but powerful
technology at the expense of a simpler but less capable technology.

I think this core dilemma – complexity vs simplicity, capability vs
maintainability – is at the heart of a lot of the debates about web app
architecture. Unfortunately, these debates are so often tied up in other factors
(a kind of web dev culture war, Twitter-stoked conflicts, maybe even a
generational gap) that it can be hard to see clearly what the debate is even
about.

At the risk of grossly oversimplifying things, I propose that the core of the
debate can be summed up by these truisms:

 1. The best SPA is better than the best MPA.
 2. The average SPA is worse than the average MPA.

The first statement should be clear to most seasoned web developers. Show me an
MPA, and I can show you how to make it better with JavaScript. Added too much
JavaScript? I can show you some clever ways to minimize, defer, and multi-thread
that JavaScript. Ran into some bugs, because now you’ve deviated from the
browser’s built-in behavior? There are always ways to fix it! You’ve got
JavaScript.

Whereas with an MPA, you are delegating some responsibility to the browser. Want
to animate navigations between pages? You can’t (yet). Want to avoid the flash
of white? You can’t, until Chrome fixes it (and it’s not perfect yet). Want to
avoid re-rendering the whole page, when there’s only a small subset that
actually needs to change? You can’t; it’s a “full page refresh.”

My second truism may be more controversial than the first. But I think time and
experience have shown that, whatever the promises of SPAs, the reality has been
less convincing. It’s not hard to find examples of poorly-built SPAs that score
badly on a variety of metrics (performance, accessibility, reliability), and
which could have been built better and more cheaply as a bog-standard MPA.


EXAMPLE: SUBSEQUENT NAVIGATIONS

To illustrate, let’s consider one of the main value propositions of an SPA:
making subsequent navigations faster.

Rich Harris recently offered an example of using the SvelteKit website (SPA)
compared to the Astro website (MPA), showing that page navigations on the Svelte
site were faster.

Now, to be clear, this is a bit of an unfair comparison: the Svelte site is
preloading content when you hover over links, so there’s no network call by the
time you click. (Nice optimization!) Whereas the Astro site is not using a
Service Worker or other offlining – if you throttle to 3G, it’s even slower
relative to the Svelte site.



But I totally believe Rich is right! Even with a Service Worker, Astro would
have a hard time beating SvelteKit. The amount of DOM being updated here is
small and static, and doing the minimal updates in JavaScript should be faster
than asking the browser to re-render the full HTML. It’s hard to beat
element.innerHTML = '...'.
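
To make that concrete, here is a minimal sketch (my own, not SvelteKit's actual
implementation) of the two tricks at play: preload a page's HTML when a link is
hovered, then swap only the content region on click. The #content selector and
the single-container page structure are assumptions for the example.

// Hypothetical content region that changes between pages
const container = document.querySelector('#content');
const cache = new Map();

function preload(url) {
  // Kick off (or reuse) a fetch for the next page's HTML
  if (!cache.has(url)) {
    cache.set(url, fetch(url).then((res) => res.text()));
  }
  return cache.get(url);
}

document.addEventListener('mouseover', (event) => {
  const link = event.target.closest('a[href^="/"]');
  if (link) {
    preload(link.href);
  }
});

document.addEventListener('click', async (event) => {
  const link = event.target.closest('a[href^="/"]');
  if (!link) return;
  event.preventDefault();
  const html = await preload(link.href);
  const nextDoc = new DOMParser().parseFromString(html, 'text/html');
  // "It's hard to beat element.innerHTML = '...'"
  container.innerHTML = nextDoc.querySelector('#content').innerHTML;
  history.pushState({}, '', link.href);
});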

However, in many ways this site represents the ideal conditions for an SPA
navigation: it’s small, it’s lightweight, it’s built by the kind of experts who
build their own JavaScript framework, and those experts are also keen to get
performance right – since this website is, in part, a showcase for the framework
they’re offering. What about real-world websites that aren’t built by JavaScript
framework authors?

Anthony Ricaud recently gave a talk (in French – apologies to non-Francophones)
where he analyzed the performance of real-world SPAs. In the talk, he asks: What
if these sites used standard MPA navigations?

To answer this, he built a proxy that strips the site of its first-party
JavaScript (leaving the kinds of ads and trackers that, sadly, many teams are
not allowed to forgo), as well as another version of the proxy that doesn’t
strip any JavaScript. Then, he scripted WebPageTest to click an internal link,
measuring the load times for both versions (on throttled 4G).
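
As a rough illustration of the proxy idea (my own sketch, not Anthony Ricaud's
actual tooling), you could serve a site through a small Node server that drops
inline scripts and first-party (root-relative) script tags before the HTML
reaches the browser. The TARGET origin and the port are placeholders.

const http = require('http');
const https = require('https');

const TARGET = 'https://www.example.com'; // hypothetical site under test

http.createServer((req, res) => {
  https.get(TARGET + req.url, (upstream) => {
    const chunks = [];
    upstream.on('data', (chunk) => chunks.push(chunk));
    upstream.on('end', () => {
      const type = upstream.headers['content-type'] || '';
      let body = Buffer.concat(chunks);
      if (type.includes('text/html')) {
        body = body.toString('utf8')
          // drop inline <script> blocks...
          .replace(/<script\b(?![^>]*\bsrc=)[^>]*>[\s\S]*?<\/script>/gi, '')
          // ...and scripts loaded from root-relative (first-party) URLs
          .replace(/<script\b[^>]*\bsrc=["']\/[^"'\/][^"']*["'][^>]*>\s*<\/script>/gi, '');
      }
      res.writeHead(upstream.statusCode || 200, { 'content-type': type });
      res.end(body);
    });
  }).on('error', () => {
    res.writeHead(502);
    res.end('Upstream error');
  });
}).listen(8080);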

So which was faster? Well, out of the three sites he tested, on both mobile
(Moto G4) and desktop, the MPA was either just as fast or faster, every time. In
some cases, the WebPageTest filmstrips even showed that the MPA version was
faster by several seconds. (Note again: these are subsequent navigations.)

On top of that, the MPA sites gave immediate feedback to the user when clicking
– showing a loading indicator in the browser chrome. Whereas some of the SPAs
didn’t even manage to show a “skeleton” screen before the MPA had already
finished loading.

Screenshot from Anthony Ricaud’s talk. The SPA version is on top (5.5s), and the
MPA version is on bottom (2.5s).

Now, I don’t think this experiment is perfect. As Anthony admits, removing
inline <script>s removes some third-party JavaScript as well (the kind that
injects itself into the DOM). Also, removing first-party JavaScript removes some
non-SPA-related JavaScript that you’d need to make the site interactive, and
removing any render-blocking inline <script>s would inherently improve the
visual completeness time.

Even with a perfect experiment, there are a lot of variables that could change
the outcome for other sites:

 * How fast is the SSR?
 * Is the HTML streamed?
 * How much of the DOM needs to be updated?
 * Is a network request required at all?
 * What JavaScript framework is being used?
 * How fast is the client CPU?
 * Etc.

Still, it’s pretty gobsmacking that JavaScript was slowing these sites down,
even in the one case (subsequent navigations) where JavaScript should be making
things faster.


EXHAUSTED DEVELOPERS AND CLEVER DEVELOPERS

Now, let’s return to my truisms from the start of the post:

 1. The best SPA is better than the best MPA.
 2. The average SPA is worse than the average MPA.

The cause of so much debate, I think, is that two groups of developers may look
at this situation, agree on the facts on the ground, but come to two different
conclusions:

> “The average SPA sucks? Well okay, I should stop building SPAs then. Problem
> solved.” – Exhausted developer

 

> “The average SPA sucks? That’s just because people haven’t tried hard enough!
> I can think of 10 ways to fix it.” – Clever developer

Let’s call these two archetypes the exhausted developer and the clever
developer.

The exhausted developer has had enough with managing the complexity of “modern”
web sites and web applications. Too many build tools, too many code paths, too
much to think about and maintain. They have JavaScript fatigue. Throw it all
away and simplify!

The clever developer is similarly frustrated by the state of modern web
development. But they also deeply understand how the web works. So when a tool
breaks or a framework does something in a sub-optimal way, it irks them, because
they can think of a better way. Why can’t a framework or a tool fix this
problem? So they set out to find a new tool, or to build it themselves.

The thing is, I think both of these perspectives are right. Clever developers
can always improve upon the status quo. Exhausted developers can always save
time and effort by simplifying. And one group can even help the other: for
instance, maybe Parcel is approachable for those exhausted by Webpack, but a
clever developer had to go and build Parcel first.


CONCLUSION

The disparity between the best and the average SPA has been around since the
birth of SPAs. In the mid-2000s, people wanted to build SPAs because they saw
how amazing GMail was. What they didn’t consider is that Google had a crack team
of experts monitoring every possible problem with SPAs, right down to esoteric
topics like memory leaks. (Do you have a team like that?)

Ever since then, JavaScript framework and tooling authors have been trying to
democratize SPA tooling, bringing us the kinds of optimizations previously only
available to the Googles and the Facebooks of the world. Their intentions have
been admirable (I would put my own work on that pile), but I think it’s fair to
say the results have been mixed.

An expert developer can stand up on a conference stage and show off the amazing
scores for their site (perfect performance! perfect accessibility! perfect
SEO!), and then an excited conference-goer returns to their team, convinces them
to use the same tooling, and two years later they’ve built a monstrosity. When
this happens enough times, the same conference-goer may start to distrust the
next dazzling demo they see.

And yet… the web dev community marches forward. Today I can grab any number of
“starter” app toolkits and build something that comes out-of-the-box with
code-splitting, Service Workers, tree-shaking, a thousand different little
micro-optimizations that I don’t even have to know the names of, because someone
else has already thought of it and gift-wrapped it for me. That is a miracle,
and we should be grateful for it.

Given enough innovation in this space, it is possible that, someday, the average
SPA could be pretty great. If it came batteries-included with proper scroll,
focus, and screen reader announcements, tooling to identify performance problems
(including memory leaks), progressive DOM rendering (e.g. Jake Archibald’s
hack), and a bunch of other optimizations, it’s possible that developers would
fall into the “pit of success” and consistently make SPAs that outclass the
equivalent MPA. I remain skeptical that we’ll get there, and even the best SPA
would still have problems (complexity, performance on slow clients, etc.), but I
can’t fault people for trying.

At the same time, browsers never stop taking the lessons from userland and
upstreaming them into the browser itself, giving us more lines of code we can
potentially delete. This is why it’s important to periodically re-evaluate the
assumptions baked into our tooling.

Today, I think the core dilemma between SPAs and MPAs remains unresolved, and
will maybe never be resolved. Both SPAs and MPAs have their strengths and
weaknesses, and the right tool for the job will vary with the size and skills of
the team and the product they’re trying to build. It will also vary over time,
as browsers evolve. The important thing, I think, is to remain open-minded,
skeptical, and analytical, and to accept that everything in software development
has tradeoffs, and none of those tradeoffs are set in stone.

22 Jun


STYLE SCOPING VERSUS SHADOW DOM: WHICH IS FASTEST?

Posted by Nolan Lawson in performance, Web. Tagged: shadow dom. Leave a Comment

Last year, I asked the question: Does shadow DOM improve style performance? I
didn’t give a clear answer, so perhaps it’s no surprise that some folks weren’t
sure what conclusion to draw.

In this post, I’d like to present a new benchmark that hopefully provides a more
solid answer.

TL;DR: My new benchmark largely confirmed my previous research, and shadow DOM
comes out as the most consistently performant option. Class-based style scoping
slightly beats shadow DOM in some scenarios, but in others it’s much less
performant. Firefox, thanks to its multi-threaded style engine, is much faster
than Chrome or Safari.


SHADOW DOM AND STYLE PERFORMANCE

To recap: shadow DOM has some theoretical benefits to style calculation, because
it allows the browser to work with a smaller DOM size and smaller CSS rule set.
Rather than needing to compare every CSS rule against every DOM node on the
page, the browser can work with smaller “sub-DOMs” when calculating style.

However, browsers have a lot of clever optimizations in this area, and userland
“style scoping” solutions have emerged (e.g. Vue, Svelte, and CSS Modules) that
effectively hook into these optimizations. The way they typically do this is by
adding a class or an attribute to the CSS selector: e.g. * { color: red }
becomes *.xxx { color: red }, where xxx is a randomly-generated token unique to
each component.
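
As a toy example of what that transform looks like (real implementations such
as Vue's and Svelte's use a proper CSS parser rather than string slicing like
this), here is the general idea:

// Append a per-component token class to each selector in a rule,
// so the rule only matches that component's elements.
function scopeRule(rule, token) {
  const braceIndex = rule.indexOf('{');
  const selectors = rule.slice(0, braceIndex);
  const body = rule.slice(braceIndex);
  const scoped = selectors
    .split(',')
    .map((selector) => selector.trim() + '.' + token)
    .join(', ');
  return scoped + ' ' + body;
}

console.log(scopeRule('* { color: red }', 'xxx'));
// "*.xxx { color: red }"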

After crunching the numbers, my post showed that class-based style scoping was
actually the overall winner. But shadow DOM wasn’t far behind, and it was the
more consistently fast option.

These nuances led to a somewhat mixed reaction. For instance, here’s one common
response I saw (paraphrasing):

> The fastest option overall is class-based scoped styles, ala Svelte or CSS
> Modules. So shadow DOM isn’t really that great.

But looking at the same data, you could reach another, totally reasonable,
conclusion:

> With shadow DOM, the performance stays constant instead of scaling with the
> size of the DOM or the complexity of the CSS. Shadow DOM allows you to use
> whatever CSS selectors you want and not worry about performance.

Part of it may have been people reading into the data what they wanted to
believe. If you already dislike shadow DOM (or web components in general), then
you can read my post and conclude, “Wow, shadow DOM is even more useless than I
thought.” Or if you’re a web components fan, then you can read my post and
think, “Neat, shadow DOM can improve performance too!” Data is in the eye of the
beholder.

To drive this point home, here’s the same data from my post, but presented in a
slightly different way:




This is 1,000 components, 10 rules per component.

Selector performance (ms)             Chrome   Firefox   Safari
Class selectors                         58.5        22       56
Attribute selectors                    597.1       143      710
Class selectors – shadow DOM            70.6        30       61
Attribute selectors – shadow DOM        71.1        30       81

As you can see, the case you really want to avoid is the second one – bare
attribute selectors. Inside of the shadow DOM, though, they’re fine. Class
selectors do beat shadow DOM overall, but only by a rounding error.

My post also showed that more complex selectors are consistently fast inside of
the shadow DOM, even if they’re much slower at the global level. This is exactly
what you would expect, given how shadow DOM works – the real surprise is just
that shadow DOM doesn’t handily win every category.


RE-BENCHMARKING

It didn’t sit well with me that my post didn’t draw a firm conclusion one way or
the other. So I decided to benchmark it again.

This time, I tried to write a benchmark to simulate a more representative web
app. Rather than focusing on individual selectors (ID, class, attribute, etc.),
I tried to compare a userland “scoped styles” implementation against shadow DOM.

My new benchmark generates a DOM tree based on the following inputs:

 * Number of “components” (web components are not used, since this benchmark is
   about shadow DOM exclusively)
 * Elements per component (with a random DOM structure, with some nesting)
 * CSS rules per component (randomly generated, with a mix of tag, class,
   attribute, :not(), and :nth-child() selectors, and some descendant and
   compound selectors)
 * Classes per component
 * Attributes per component

To find a good representative for “scoped styles,” I chose Vue 3’s
implementation. My previous post showed that Vue’s implementation is not as fast
as that of Svelte or CSS Modules, since it uses attributes instead of classes,
but I found Vue’s code to be easier to integrate. To make things a bit fairer, I
added the option to use classes rather than attributes.

One subtlety of Vue’s style scoping is that it does not scope ancestor
selectors. For instance:

/* Input */
div div {}
 
/* Output - Vue */
div div[data-v-xxx] {}
 
/* Output - Svelte */
div.svelte-xxx div.svelte-xxx {}

(Here is a demo in Vue and a demo in Svelte.)

Technically, Svelte’s implementation is more optimal, not only because it uses
classes rather than attributes, but because it can rely on the Bloom filter
optimization for ancestor lookups (e.g. :not(div) div → .svelte-xxx:not(div)
div.svelte-xxx, with .svelte-xxx in the ancestor). However, I kept the Vue
implementation because 1) this analysis is relevant to Vue users at least, and
2) I didn’t want to test every possible permutation of “scoped styles.” Adding
the “class” optimization is enough for this blog post – perhaps the “ancestor”
optimization can come in future work.

Note: In benchmark after benchmark, I’ve seen that class selectors are typically
faster than attribute selectors – sometimes by a lot, sometimes by a little.
From the web developer’s perspective, it may not be obvious why. Part of it is
just browser vendor priorities: for instance, WebKit invented the Bloom filter
optimization in 2011, but originally it only applied to tags, classes, and IDs.
They expanded it to attributes in 2018, and Chrome and Firefox followed suit in
2021 when I filed these bugs on them. Perhaps something about attributes also
makes them intrinsically harder to optimize than classes, but I’m not a browser
developer, so I won’t speculate.


METHODOLOGY

I ran this benchmark on a 2021 MacBook Pro (M1), running macOS Monterey 12.4.
The M1 is perhaps not ideal for this, since it’s a very fast computer, but I
used it because it’s the device I had, and it can run all three of Chrome,
Firefox, and Safari. This way, I can get comparable numbers on the same
hardware.

In the test, I used the following parameters:

Parameter                 Value
Number of components      1000
Elements per component    10
CSS rules per component   10
Classes per element       2
Attributes per element    2

I chose these values to try to generate a reasonable “real-world” app, while
also making the app large enough and interesting enough that we’d actually get
some useful data out of the benchmark. My target is less of a “static blog” and
more of a “heavyweight SPA.”

There are certainly more inputs I could have added to the benchmark: for
instance, DOM depth. As configured, the benchmark generates a DOM with a maximum
depth of 29 (measured using this snippet). Incidentally, this is a decent
approximation of a real-world app – YouTube measures 28, Reddit 29, and
Wikipedia 17. But you could certainly imagine more heavyweight sites with deeper
DOM structures, which would tend to spend more time in descendant selectors
(outside of shadow DOM, of course – descendant selectors cannot cross shadow
boundaries).
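
(If you're curious how to measure this yourself, one simple approach is a
recursive walk over element children; this is my own sketch of the general
idea, not necessarily the exact snippet linked above.)

// Depth of the deepest element, counting <html> as depth 1
function maxDepth(node, depth = 1) {
  let deepest = depth;
  for (const child of node.children) {
    deepest = Math.max(deepest, maxDepth(child, depth + 1));
  }
  return deepest;
}

console.log(maxDepth(document.documentElement));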

For each measurement, I took the median of 5 runs. I didn’t bother to refresh
the page between each run, because it didn’t seem to make a big difference. (The
relevant DOM was being blown away every time.) I also didn’t randomize the
stylesheets, because the browsers didn’t seem to be doing any caching that would
require randomization. (Browsers have a cache for stylesheet parsing, as I
discussed in this post, but not for style calculation, insofar as it matters for
this benchmark anyway.)

Update: I realized this comment was a bit blasé, so I re-ran the benchmark with
a fresh browser session between each sample, just to make sure the browser cache
wasn’t affecting the numbers. You can find those numbers at the end of the post.
(Spoiler: no big change.)

Although the benchmark has some randomness, I used random-seedable with a
consistent seed to ensure reproducible results. (Not that the randomness was
enough to really change the numbers much, but I’m a stickler for details.)
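
(If you've never used a seeded PRNG in JavaScript: the idea is simply that the
same seed always produces the same sequence. random-seedable is the library the
benchmark uses; purely as an illustration of the concept, a tiny generator like
mulberry32 looks like this.)

function mulberry32(seed) {
  return function () {
    let t = (seed += 0x6D2B79F5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const random = mulberry32(42); // same seed in, same sequence out
console.log(random(), random()); // reproducible across runs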

The benchmark uses a requestPostAnimationFrame polyfill to measure
style/layout/paint performance (see this post for details). To focus on style
performance only, a DOM structure with only absolute positioning is used, which
minimizes the time spent in layout and paint.
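
For reference, requestPostAnimationFrame can be approximated by scheduling a
task from inside a requestAnimationFrame callback, so it runs right after the
browser has done style, layout, and paint for that frame. This is a sketch of
that technique, not necessarily the exact polyfill the benchmark uses; the
class name in the usage example is made up.

function requestPostAnimationFrame(callback) {
  requestAnimationFrame(() => {
    // A message task queued during the rendering steps runs after paint
    const channel = new MessageChannel();
    channel.port1.onmessage = () => callback();
    channel.port2.postMessage(undefined);
  });
}

// Usage: mutate the DOM, then measure the resulting style/layout/paint work
const el = document.createElement('div');
el.className = 'benchmark-component'; // hypothetical class
document.body.appendChild(el);

const start = performance.now();
requestPostAnimationFrame(() => {
  console.log('Style/layout/paint took', performance.now() - start, 'ms');
});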

And just to prove that the benchmark is actually measuring what I think it’s
measuring, here’s a screenshot of the Chrome DevTools Performance tab:



Note that the measured time (“total”) is mostly taken up by “Recalculate Style.”


RESULTS

When discussing the results, it’s much simpler to go browser-by-browser, because
each one has different quirks.

One of the things I like about analyzing style performance is that I see massive
differences between browsers. It’s one of those areas of browser performance
that seems really unsettled, with lots of work left to do.

That is… unless you’re Firefox. I’m going to start off with Firefox, because
it’s the biggest outlier out of the three major browser engines.


FIREFOX

Firefox’s Stylo engine is fast. Like, really fast. Like, so fast that, if every
browser were like Firefox, there would be little point in discussing style
performance, because it would be a bit like arguing over the fastest kind of
for-loop in JavaScript. (I.e., interesting minutiae, but irrelevant except for
the most extreme cases.)

In almost every style calculation benchmark I’ve seen over the past five years,
Firefox smokes every other browser engine to the point where it’s really in a
class of its own. Whereas other browsers may take over 1,000ms in a given
scenario, Firefox will take ~100ms for the same scenario on the same hardware.

So keep in mind that, with Firefox, we’re going to be talking about really small
numbers. And the differences between them are going to be even smaller. But here
they are:




Scenario               Firefox 101
Scoping – classes      30
Scoping – attributes   38
Shadow DOM             26
Unscoped               114

Note that, in this benchmark, the first three bars are measuring roughly the
same thing – you end up with the same DOM with the same styles. The fourth case
is a bit different – all the styles are purely global, with no scoping via
classes or attributes. It’s mostly there as a comparison point.

My takeaway from the Firefox data is that scoping with either classes,
attributes, or shadow DOM is fine – they’re all pretty fast. And as I mentioned,
Firefox is quite fast overall. As we move on to other browsers, you’ll see how
the performance numbers get much more varied.


CHROME

The first thing you should notice about Chrome’s data is how much higher the
y-axis is compared to Firefox. With Firefox, we were talking about ~100ms at the
worst, whereas now with Chrome, we’re talking about an order of magnitude
higher: ~1,000ms. (Don’t feel bad for Chrome – the Safari numbers will look
pretty similar.)




Scenario               Chrome 102
Scoping – classes      357
Scoping – attributes   614
Shadow DOM             49
Unscoped               1022

Initially, the Chrome data tells a pretty simple story: shadow DOM is clearly
the fastest, followed by style scoping with classes, followed by style scoping
with attributes, followed by unscoped CSS. So the message is simple: use Shadow
DOM, but if not, then use classes instead of attributes for scoping.

I noticed something interesting with Chrome, though: the performance numbers are
vastly different for these two cases:

 * 1,000 components: insert 1,000 different <style>s into the <head>
 * 1,000 components: concatenate those styles into one big <style>
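
In code, the difference between the two approaches is small (a minimal sketch;
the styles array stands in for the per-component CSS the benchmark generates):

const styles = ['.c0 { color: red }', '.c1 { color: blue }' /* ...one string per component */];

// Approach 1: one <style> per component
function insertSeparate() {
  for (const css of styles) {
    const styleTag = document.createElement('style');
    styleTag.textContent = css;
    document.head.appendChild(styleTag);
  }
}

// Approach 2: concatenate everything into a single <style>
function insertConcatenated() {
  const styleTag = document.createElement('style');
  styleTag.textContent = styles.join('\n');
  document.head.appendChild(styleTag);
}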

As it turns out, this simple optimization greatly improves the Chrome numbers:




Scenario     Chrome 102 – separate styles   Chrome 102 – concatenated
Classes      357                            48
Attributes   614                            43

When I first saw these numbers, I was confused. I could understand this
optimization in terms of reducing the cost of DOM insertions. But we’re talking
about style calculation – not DOM API performance. In theory, it shouldn’t
matter whether there are 1,000 stylesheets or one big stylesheet. And indeed,
Firefox and Safari show no difference between the two:




Scenario     Firefox 101 – separate styles   Firefox 101 – concatenated
Classes      30                              29
Attributes   38                              38




Scenario     Safari 15.5 – separate styles   Safari 15.5 – concatenated
Classes      75                              73
Attributes   812                             820

This behavior was curious enough that I filed a bug on Chromium. According to
the Chromium engineer who responded (thank you!), this is because of a design
decision to trade off some initial performance in favor of incremental
performance when stylesheets are modified or added. (My benchmark is a bit
unfair to Chrome, since it only measures the initial calculation. A good idea
for a future benchmark!)

This is actually a pretty interesting data point for JavaScript framework and
bundler authors. It seems that, for Chromium anyway, the ideal technique is to
concatenate stylesheets similarly to how JavaScript bundlers do code-splitting –
i.e. trying to concatenate as much as possible, while still splitting in some
cases to optimize for caching across routes. (Or you could go full inline and
just put one big <style> on every page.) Keep in mind, though, that this is a
peculiarity of Chromium’s current implementation, and it could go away at any
moment if Chromium decides to change it.

In terms of the benchmark, though, it’s not clear to me what to do with this
data. You might imagine that it’s a simple optimization for a JavaScript
framework (or meta-framework) to just concatenate all the styles together, but
it’s not always so straightforward. When a component is mounted, it may call
getComputedStyle() on its own DOM nodes, so batching up all the style insertions
until after a microtask is not really feasible. Some meta-frameworks (such as
Nuxt and SvelteKit) leverage a bundler to concatenate the styles and insert them
before the component is mounted, but it feels a bit unfair to depend on that for
the benchmark.

To me, this is one of the core advantages of shadow DOM – you don’t have to
worry if your bundler is configured correctly or if your JavaScript framework
uses the right kind of style scoping. Shadow DOM is just performant, all of the
time, full stop. That said, here is the Chrome comparison data with the
concatenation optimization applied:




Scenario               Chrome 102 (with concatenation optimization)
Scoping – classes      48
Scoping – attributes   43
Shadow DOM             49
Unscoped               1022

The first three are close enough that I think it’s fair to say that all of the
three scoping methods (class, attribute, and shadow DOM) are fast enough.

Note: You may wonder if Constructable Stylesheets would have an impact here. I
tried a modified version of the benchmark that uses these, and didn’t observe
any difference – Chrome showed the same behavior for concatenation vs splitting.
This makes sense, as none of the styles are duplicated, which is the main use
case Constructable Stylesheets are designed for. I have found elsewhere, though,
that Constructable Stylesheets are more performant than <style> tags in terms of
DOM API performance, if not style calculation performance (e.g. see here, here,
and here).
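
For those unfamiliar with the API, constructed stylesheets look roughly like
this (a minimal sketch, not the benchmark's code); the main draw is that the
same parsed sheet object can be shared across the document and any number of
shadow roots:

const sheet = new CSSStyleSheet();
sheet.replaceSync('.foo { color: red }');

// Adopt it at the document level...
document.adoptedStyleSheets = [...document.adoptedStyleSheets, sheet];

// ...and share the very same sheet with a shadow root, without re-parsing
const host = document.createElement('div');
const shadowRoot = host.attachShadow({ mode: 'open' });
shadowRoot.adoptedStyleSheets = [sheet];
document.body.appendChild(host);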


SAFARI

In our final tour of browsers, we arrive at Safari:




Scenario               Safari 15.5
Scoping – classes      75
Scoping – attributes   812
Shadow DOM             94
Unscoped               840

To me, the Safari data is the easiest to reason about. Class scoping is fast,
shadow DOM is fast, and unscoped CSS is slow. The one surprise is just how slow
attribute selectors are compared to class selectors. Maybe WebKit has some more
optimizations to do in this space – compared to Chrome and Firefox, attributes
are just a much bigger performance cliff relative to classes.

This is another good example of why class scoping is superior to attribute
scoping. It’s faster in all the engines, but the difference is especially stark
in Safari. (Or you could use shadow DOM and not worry about it at all.)


CONCLUSION

Performance shouldn’t be the main reason you choose a technology like scoped
styles or shadow DOM. You should choose it because it fits well with your
development paradigm, it works with your framework of choice, etc. Style
performance usually isn’t the biggest bottleneck in a web application, although
if you have a lot of CSS or a large DOM size, then you may be surprised by the
amount of “Recalculate Style” costs in your next performance trace.

One can also hope that someday browsers will advance enough that style
calculation becomes less of a concern. As I mentioned before, Stylo exists, it’s
very good, and other browsers are free to borrow its ideas for their own
engines. If every browser were as fast as Firefox, I wouldn’t have a lot of
material for this blog post.

This is the same data presented in this post, but on a single chart. Just notice
how much Firefox stands out from the other browsers.


Scenario                              Chrome 102   Firefox 101   Safari 15.5
Scoping – classes                     357          30            75
Scoping – attributes                  614          38            812
Shadow DOM                            49           26            94
Unscoped                              1022         114           840
Scoping – classes – concatenated      48           29            73
Scoping – attributes – concatenated   43           38            820

For those who dislike shadow DOM, there is also a burgeoning proposal in the CSS
Working Group for style scoping. If this proposal were adopted, it could provide
a less intrusive browser-native scoping mechanism than shadow DOM, similar to
the abandoned <style scoped> proposal. I’m not a browser developer, but based on
my reading of the spec, I don’t see why it couldn’t offer the same performance
benefits we see with shadow DOM.

In any case, I hope this blog post was interesting, and helped shine light on an
odd and somewhat under-explored space in web performance. Here is the benchmark
source code and a live demo in case you’d like to poke around.

Thanks to Alex Russell and Thomas Steiner for feedback on a draft of this blog
post.


AFTERWORD – MORE DATA

Updated June 23, 2022

After writing this post, I realized I should take my own advice and automate the
benchmark so that I could have more confidence in the numbers (and make it
easier for others to reproduce).

So, using Tachometer, I re-ran the benchmark, taking the median of 25 samples,
where each sample uses a fresh browser session. Here are the results:




Scenario                              Chrome 102   Firefox 101   Safari 15.5
Scoping – classes                     277.1        45            80
Scoping – attributes                  418.8        54            802
Shadow DOM                            56.8         67            82
Unscoped                              820.4        190           857
Scoping – classes – concatenated      44.3         42            80
Scoping – attributes – concatenated   44.5         51            802
Unscoped – concatenated               251.3        167           865

As you can see, the overall conclusion of my blog post doesn’t change, although
the numbers have shifted slightly in absolute terms.

I also added “Unscoped – concatenated” as a category, because I realized that
the “Unscoped” scenario would benefit from the concatenation optimization as
well (in Chrome, at least). It’s interesting to see how much of the perf win is
coming from concatenation, and how much is coming from scoping.

If you’d like to see the raw numbers from this benchmark, you can download them
here.


SECOND AFTERWORD – EVEN MORE DATA

Updated June 25, 2022

You may wonder how much Firefox’s Stylo engine is benefiting from the 10 cores
in that 2021 MacBook Pro. So I unearthed my old 2014 Mac Mini, which has only 2
cores but (surprisingly) can still run macOS Monterey. Here are the results:




Scenario                              Chrome 102   Firefox 101   Safari 15.5
Scoping – classes                     717.4        107           187
Scoping – attributes                  1069.5       162           2853
Shadow DOM                            227.7        117           233
Unscoped                              2674.5       452           3132
Scoping – classes – concatenated      189.3        104           188
Scoping – attributes – concatenated   191.9        159           2826
Unscoped – concatenated               865.8        422           3148

(Again, this is the median of 25 samples. Raw data.)

Amazingly, Firefox seems to be doing even better here relative to the other
browsers. For “Unscoped,” it’s 14.4% of the Safari number (vs 22.2% on the
MacBook), and 16.9% of the Chrome number (vs 23.2% on the MacBook). Whatever
Stylo is doing, it’s certainly impressive.

14 Jun


DIALOGS AND SHADOW DOM: CAN WE MAKE IT ACCESSIBLE?

Posted by Nolan Lawson in accessibility, Web. Tagged: shadow dom. 2 Comments

Last year, I wrote about managing focus in the shadow DOM, and in particular
about modal dialogs. Since the <dialog> element has now shipped in all browsers,
and the inert attribute is starting to land too, I figured it would be a good
time to take another look at getting dialogs to play nicely with shadow DOM.

This post is going to get pretty technical, especially when it comes to the
nitty-gritty details of accessibility and web standards. If you’re into that,
then buckle up! The ride may be a bit bumpy.


QUICK RECAP

Shadow DOM is weird. On paper, it doesn’t actually change what you can do in the
DOM – with open mode, at least, you can access any element on the page that you
want. In practice, though, shadow DOM upends a lot of web developer expectations
about how the DOM works, and makes things much harder.

I credit Brian Kardell for this description of open shadow DOM, which is maybe
the most perfect distillation of how it actually works.

Note: Shadow DOM has two modes: open and closed. Closed mode is a lot more
restrictive, but it’s less common – the majority of web component frameworks use
open by default (e.g. Angular, Fast, Lit, LWC, Remount, Stencil, Svelte, Vue).
Somewhat surprisingly, though, open mode is only 3 times as popular as closed
mode, according to Chrome Platform Status (9.9% vs 3.5%).

For accessibility reasons, modal dialogs need to implement a focus trap.
However, the DOM doesn’t have an API for “give me all the elements on the page
that the user can Tab through.” So web developers came up with creative
solutions, most of which amount to:

dialog.querySelectorAll('button, input, a[href], ...')

Unfortunately this is the exact thing that doesn’t work in the shadow DOM.
querySelectorAll only grabs elements in the current document or shadow root; it
doesn’t deeply traverse.

Like a lot of things with shadow DOM, there is a workaround, but it requires
some gymnastics. These gymnastics are hard, and have a complexity and (probably)
performance cost. So a lot of off-the-shelf modal dialogs don’t handle shadow
DOM properly (e.g. a11y-dialog does not).
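
To give a flavor of those gymnastics, here is a simplified sketch (my own, not
a production-ready focus trap): recursively walk the DOM, descending into any
open shadow roots, and collect candidate tabbable elements. Real
implementations also need to handle <slot> order, visibility, tabindex sorting,
and more, and this still cannot see into closed or user-agent shadow roots.

const TABBABLE = 'button, input, select, textarea, a[href], [tabindex]:not([tabindex="-1"])';

function getTabbableElements(root) {
  const results = [];
  for (const el of root.querySelectorAll('*')) {
    if (el.matches(TABBABLE)) {
      results.push(el);
    }
    if (el.shadowRoot) { // only open shadow roots expose .shadowRoot
      results.push(...getTabbableElements(el.shadowRoot));
    }
  }
  return results;
}

// Usage: getTabbableElements(dialogElement)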

Note: My goal here isn’t to criticize a11y-dialog. I think it’s one of the best
dialog implementations out there. So if even a11y-dialog doesn’t support shadow
DOM, you can imagine a lot of other dialog implementations probably don’t,
either.


A CONSTRUCTIVE DIALOG

“But what about <dialog>?”, you might ask. “The dang thing is called <dialog>;
can’t we just use that?”

If you had asked me a few years ago, I would have pointed you to Scott O’Hara’s
extensive blog post on the subject, and said that <dialog> had too many
accessibility gotchas to be a practical solution.

If you asked me today, I would again point you to the same blog post. But this
time, there is a very helpful 2022 update, where Scott basically says that
<dialog> has come a long way, so maybe it’s time to give it a second chance.
(For instance, the issue with returning focus to the previously-focused element
is now fixed, and the need for a polyfill is much reduced.)

Note: One potential issue with <dialog>, mentioned in Rob Levin’s recent post on
the topic, is that clicking outside of the dialog should close it. This has been
proposed for the <dialog> element, but the WAI ARIA Authoring Practices Guide
doesn’t actually stipulate this, so it seems like optional behavior to me.

To be clear: <dialog> still doesn’t give you 100% of what you’d need to
implement a dialog (e.g. you’d need to lock the background scroll), and there
are still some lingering discussions about how to handle initial focus. For that
reason, Scott still recommends just using a battle-tested library like
a11y-dialog.

As always, though, shadow DOM makes things more complicated. And in this case,
<dialog> actually has some compelling superpowers:

 1. It automatically limits focus to the dialog, with correct Tab order, even in
    shadow DOM.
 2. It works with closed shadow roots as well, which is impossible in userland
    solutions.
 3. It also works with user-agent shadow roots. (E.g. you can Tab through the
    buttons in a <video controls> or <audio controls>.) This is also impossible
    in userland, since these elements function effectively like closed shadow
    roots.
 4. It correctly returns focus to the previously-focused element, even if that
    element is in a closed shadow root. (This is possible in userland, but you’d
    need an API contract with the closed-shadow component.)
 5. The Esc key correctly closes the modal, even if the focus is in a user-agent
    shadow root (e.g. the pause button is focused when you press Esc). This is
    also not possible in userland.

Here is a demo:



Note: Eagle-eyed readers may wonder: what if the first tabbable element in the
dialog is in a shadow root? Does it correctly get focus? The short answer is:
yes in Chrome, no in Firefox or Safari (demo). Let’s hope those browsers fix it
soon.

So should everybody just switch over to <dialog>? Not so fast: it actually
doesn’t perfectly handle focus, per the WAI ARIA Authoring Practices Guide
(APG), because it allows focus to escape to the browser chrome. Here’s what I
mean:

 * You reach the last tabbable element in the dialog and press Tab.
   * Correct: focus moves to the first tabbable element in the dialog.
   * Incorrect (<dialog>): focus goes to the URL bar or somewhere else in the
     browser chrome.
 * You reach the first tabbable element in the dialog and press Shift+Tab.
   * Correct: focus moves to the last tabbable element in the dialog.
   * Incorrect (<dialog>): focus goes to the URL bar or somewhere else in the
     browser chrome.

This may seem like a really subtle difference, but the consensus of
accessibility experts seems to be that the WAI ARIA APG is correct, and <dialog>
is wrong.

Note: I say “consensus,” but… there isn’t perfect consensus. You can read this
comment from James Teh or Scott O’Hara’s aforementioned post (“This is good
behavior, not a bug”) for dissenting opinions. In any case, the “leaky” focus
trap conflicts with the WAI ARIA APG and the way userland dialogs have
traditionally worked.

So we’ve reached (yet another!) tough decision with <dialog>. Do we accept
<dialog>, because at least it gets shadow DOM right, even though it gets some
other stuff wrong? Do we try to build our own thing? Do we quit web development
entirely and go live the bucolic life of a potato farmer?


INERT MATTER

While I was puzzling over this recently, it occurred to me that inert may be a
step forward to solving this problem. For those unfamiliar, inert is an
attribute that can be used to mark sections of the DOM as “inert,” i.e.
untabbable and invisible to screen readers:

<main inert></main>
<div role="dialog"></div>
<footer inert></footer>

In this way, you could mark everything except the dialog as inert, and focus
would be trapped inside the dialog.
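
A minimal sketch of that idea (my own, and deliberately simplified; it only
handles top-level siblings of the dialog):

function openModal(dialog) {
  // Make everything else at the top level inert while the dialog is open
  const others = [...document.body.children].filter(
    (el) => el !== dialog && !el.contains(dialog)
  );
  for (const el of others) {
    el.setAttribute('inert', '');
  }
  dialog.hidden = false;
  return function closeModal() {
    for (const el of others) {
      el.removeAttribute('inert');
    }
    dialog.hidden = true;
  };
}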

Here is a demo:



As it turns out, this works perfectly for tabbing through elements in the shadow
DOM, just like <dialog>! Unfortunately, it has exactly the same problem with
focus escaping to the browser chrome. This is no accident: the behavior of
<dialog> is defined in terms of inert.

Can we still solve this, though? Unfortunately, I’m not sure it’s possible. I
tried a few different techniques, such as listening for Tab events and checking
if the activeElement has moved outside of the modal, but the problem is that you
still, at some point, need to figure out what the “first” and “last” tabbable
elements in the dialog are. To do this, you need to traverse the DOM, which
means (at the very least) traversing open shadow roots, which doesn’t work for
closed or user-agent shadow roots. And furthermore, it involves a lot of extra
work for the web developer, who has probably lost focus at this point and is
daydreaming about that nice, quiet potato farm.

Note: inert also, sadly, does not help with the Esc key in user-agent shadow
roots, or returning focus to closed shadow roots when the dialog is closed, or
setting initial focus on an element in a closed shadow root. These are
<dialog>-only superpowers. Not that you needed any extra convincing.


CONCLUSION

Until the spec and browser issues have been ironed out (e.g. browsers change
their behavior so that focus doesn’t escape to the browser chrome, or they give
us some entirely different “focus trap” primitive), I can see two reasonable
options:

 1. Use something like a11y-dialog, and don’t use shadow DOM or user-agent
    shadow components like <video controls> or <audio controls>. (Or do some
    nasty hacks to make it partially work.)
 2. Use shadow DOM, but don’t bother solving the “focus escapes to the browser
    chrome” problem. Use <dialog> (or a library built on top of it) and leave it
    at that.

For my readers who were hoping that I’d drop some triumphant “just npm install
nolans-cool-dialog and it will work,” I’m sorry to disappoint you. Browsers are
still rough around the edges in this area, and there aren’t a lot of great
options. Maybe there is some mad-science way to actually solve this, but even
that would likely involve a lot of complexity, so it wouldn’t be ideal.

Alternatively, maybe some of you are thinking that I’m focusing too much on
closed and user-agent shadow roots. As long as you’re only using open shadow DOM
(which, recall, is like the sign that says “I’m a sign, not a cop”), you can do
whatever you want. So there’s no problem, right?

Personally, though, I like using <video controls> and <audio controls> (why ship
a bunch of JavaScript to do something the browser already does?). And
furthermore, I find it odd that if you put a <video controls> inside a <dialog>,
you end up with something that’s impossible to make accessible per the WAI ARIA
APG. (Is it too much to ask for a little internal consistency in the web
platform?)

In any case, I hope this blog post was helpful for others tinkering around with
the same problems. I’ll keep an eye on the browsers and standards space, and
update this post if anything promising emerges.

9 Jun


THE COLLAPSE OF COMPLEX SOFTWARE

Posted by Nolan Lawson in software engineering. Tagged: complexity. 35 Comments

In 1988, the anthropologist Joseph Tainter published a book called The Collapse
of Complex Societies. In it, he described the rise and fall of great
civilizations such as the Romans, the Mayans, and the Chacoans. His goal was to
answer a question that had vexed thinkers over the centuries: why did such
mighty societies collapse?

In his analysis, Tainter found the primary enemy of these societies to be
complexity. As civilizations grow, they add more and more complexity: more
hierarchies, more bureaucracies, deeper intertwinings of social structures.
Early on, this makes sense: each new level of complexity brings rewards, in
terms of increased economic output, tax revenue, etc. But at a certain point,
the law of diminishing returns sets in, and each new level of complexity brings
fewer and fewer net benefits, dwindling down to zero and beyond.

But since complexity has worked so well for so long, societies are unable to
adapt. Even when each new layer of complexity starts to bring zero or even
negative returns on investment, people continue trying to do what worked in the
past. At some point, the morass they’ve built becomes so dysfunctional and
unwieldy that the only solution is collapse: i.e., a rapid decrease in
complexity, usually by abolishing the old system and starting from scratch.

What I find fascinating about this (besides the obvious implications for modern
civilization) is that Tainter could have been writing about software.

Anyone who’s worked in the tech industry for long enough, especially at larger
organizations, has seen it before. A legacy system exists: it’s big, it’s
complex, and no one fully understands how it works. Architects are brought in to
“fix” the system. They might wheel out a big whiteboard showing a lot of boxes
and arrows pointing at other boxes, and inevitably, their solution is… to add
more boxes and arrows. Nobody can subtract from the system; everyone just adds.

“EKS is being deprecated at the end of the month for Omega Star, but Omega Star
still doesn’t support ISO timestamps.” We’ve all been there. (Via Krazam)

This might go on for several years. At some point, though, an organizational
shakeup probably occurs – a merger, a reorg, the polite release of some senior
executive to go focus on their painting hobby for a while. A new band of
architects is brought in, and their solution to the “big diagram of boxes and
arrows” problem is much simpler: draw a big red X through the whole thing. The
old system is sunset or deprecated, the haggard veterans who worked on it either
leave or are reshuffled to other projects, and a fresh-faced team is brought in
to, blessedly, design a new system from scratch.

As disappointing as it may be for those of us who might aspire to write the kind
of software that is timeless and enduring, you have to admit that this system
works. For all its wastefulness, inefficiency, and pure mendacity (“The old code
works fine!” “No wait, the old code is terrible!”), this is the model that has
sustained a lot of software companies over the past few decades.

Will this cycle go on forever, though? I’m not so sure. Right now, the software
industry has been in a nearly two-decade economic boom (with some fits and
starts), but the one sure thing in economics is that booms eventually turn to
busts. During the boom, software companies can keep hiring new headcount to
manage their existing software (i.e. more engineers to understand more boxes and
arrows), but if their labor force is forced to contract, then that same system
may become unmaintainable. A rapid and permanent reduction in complexity may be
the only long-term solution.

One thing working in complexity’s favor, though, is that engineers like
complexity. Admit it: as much as we complain about other people’s complexity, we
love our own. We love sitting around and dreaming up new architectural diagrams
that can comfortably sit inside our own heads – it’s only when these diagrams
leave our heads, take shape in the real world, and outgrow the size of any one
person’s head that the problems begin.

It takes a lot of discipline to resist complexity, to say “no” to new boxes and
arrows. To say, “No, we won’t solve that problem, because that will just
introduce 10 new problems that we haven’t imagined yet.” Or to say, “Let’s go
with a much simpler design, even if it seems amateurish, because at least we can
understand it.” Or to just say, “Let’s do less instead of more.”

Simplicity of design sounds great in theory, but it might not win you many
plaudits from your peers. A complex design means more teams to manage more parts
of the system, more for the engineers to do, more meetings and planning
sessions, maybe some more patents to file. A simple design might make it seem
like you’re not really doing your job. “That’s it? We’re done? We can clock
out?” And when promotion season comes around, it might be easier to make a case
for yourself with a dazzling new design than a boring, well-understood solution.

Ultimately, I think whether software follows the boom-and-bust model, or a more
sustainable model, will depend on the economic pressures of the organization
that is producing the software. A software company that values growth at all
cost, like the Romans eagerly gobbling up more and more of Gaul, will likely
fall into the “add-complexity-and-collapse” cycle. A software company with more
modest aims, that has a stable customer base and doesn’t change much over time
(does such a thing exist?) will be more like the humble tribe that follows the
yearly migration of the antelope and focuses on sustainable, tried-and-true
techniques. (Whether such companies will end up like the hapless Gauls, overrun
by Caesar and his armies, is another question.)

Personally, I try to maintain a good sense of humor about this situation, and to
avoid giving in to cynicism or despair. Software is fun to write, but it’s also
very impermanent in the current industry. If the code you wrote 10 years ago is
still in use, then you have a lot to crow about. If not, then hey, at least
you’re in good company with the rest of us, who probably make up the majority of
software developers. Just keep doing the best you can, and try to have a healthy
degree of skepticism when some wild-eyed architect wheels out a big diagram with
a lot of boxes and arrows.

29 May


STATE IS HARD: WHY SPAS WILL PERSIST

Posted by Nolan Lawson in Web. Tagged: spas. Leave a Comment

When I write about web development, sometimes it feels like the parable of the
blind men and the elephant. I’m out here eagerly describing the trunk, someone
else protests that no, it’s a tail, and meanwhile the person riding on its back
is wondering what all the commotion is down there.

We’re all building so many different types of products using web technology –
e-commerce sites, productivity apps, blogs, streaming sites, video games, hybrid
mobile apps, dashboards on actual spaceships – that it gets difficult to even
have a shared vocabulary to describe what we’re doing. And each sub-discipline
of web development is so deep that it’s easy to get tunnel-visioned and forget
that other people are working with different tools and constraints.

This is what I like about blogging, though: it can help solve the problem of
“feeling out the elephant.” I can offer my own perspective, even if flawed, and
summon the human hive-mind to help describe the rest of the beast.

My last two posts have been a somewhat clumsy fumbling toward a new definition
of SPAs (Single-Page Apps) and MPAs (Multi-Page Apps), and why you’d choose one
versus the other when building a website. As it turns out, there is probably
enough here to fill a book, but my goal is just to bring my own point of view
(and bias) to the table and let others fill in the gaps with their comments and
feedback.

I have a few main biases on this topic:

 1. I usually prize performance over ergonomics. I’ll go for the more performant
    solution, even if it’s awkward or unintuitive.
 2. I like understanding how browsers work, and relying on the “browser-y” way
    of doing things rather than inventing my own prosthetic solution.
 3. I don’t pay nearly enough attention to what’s happening in “user land” – I
    like to stay “close to the metal” and see the world from the browser’s
    perspective. Show me your compiled code, not your source code!

In thinking about this topic and reading what others have written on it, one
thing that struck me is that a big attraction for SPAs is the same thing that
can cause so many problems: state. People who like SPAs often celebrate the fact
that an SPA maintains state between navigations. For instance:

 1. You have a search input. You type into it, click somewhere else to navigate,
    and the next page still has the text in the input.
 2. You have a scrollable sidebar. You scroll halfway down, click on something,
    and the next page still has the sidebar at the last scroll position.
 3. You have a list of expandable cards. You expand one of them, click somewhere
    else, and the next page still has the one card expanded.

Note that these kinds of examples are particularly important for so-called
“nested routes”, especially in complex desktop UIs. Think of sidebars, headers,
and footers that maintain their state while the rest of the UI changes. I find
it interesting that this is much less of an issue in mobile UIs, where it’s more
common to change (nearly) the whole viewport on navigation.

Managing state is one of the hardest things about writing software. And in many
ways, this aspect of state management is a great boon to SPAs. In particular,
you don’t have to think about persisting state between navigations; it just
happens automatically. In an MPA, you would have to serialize this state into
some persistent format (LocalStorage, IndexedDB, etc.) when the page unloads,
and then rehydrate on page load.
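
For example, persisting a single search input across MPA navigations might look
something like this (a sketch; the #search selector is hypothetical):

window.addEventListener('pagehide', () => {
  // Save the UI state before the page goes away
  const searchInput = document.querySelector('#search');
  sessionStorage.setItem('searchText', searchInput.value);
});

window.addEventListener('DOMContentLoaded', () => {
  // Rehydrate it on the next page load
  const saved = sessionStorage.getItem('searchText');
  if (saved !== null) {
    document.querySelector('#search').value = saved;
  }
});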

On the other hand, the fact that the state never gets blown away is exactly what
leads to memory leaks – a problem endemic to SPAs that I’ve already documented
ad nauseam. Plus, the further that the state can veer from a known good initial
value, the more likely you are to run into bugs, which is why a misbehaving SPA
often just needs a good refresh.

Interestingly, though, it’s not always the case that an MPA navigation lands on
a fresh state. As mentioned in a previous post, the back-forward cache (now
implemented in all browsers) makes this discussion more nuanced.


CACHE CONTENTS

A quick refresher: in modern browsers, the back-forward cache (or BF cache for
short) keeps a cache of the previous and next page when navigating between pages
on the same origin. This vastly reduces load times when navigating back and
forth through standard MPA pages.

But how exactly does this cache work? Even an MPA page can be very dynamic. What
if the page has been dynamically modified, or the DOM state has changed, or the
JavaScript state has changed? What does the browser actually cache?

To test this out, I wrote a simple test page. On this page, you can set state in
a variety of ways: DOM state, JavaScript heap state, scroll state. Then you can
click a link to another page, press the back button, and see what the browser
remembers.
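
(Incidentally, if you want to check whether a given back navigation was served
from the BF cache, you can listen for the pageshow event; this is a general
sketch, not the test page's code.)

window.addEventListener('pageshow', (event) => {
  if (event.persisted) {
    console.log('Restored from the back-forward cache; state is intact');
  } else {
    console.log('Normal load; state starts fresh');
  }
});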

As it turns out, the browser remembers a lot. I tested this in various browsers
(Chrome/Firefox/Safari on desktop, Chrome/Firefox on Android, Safari on iOS),
and saw the same result in all of them: the full page state is maintained after
pressing the back button. Here is a video demonstration:



Note that the scroll positions on both the main document and the subscroller are
preserved. More impressively, JavaScript state that isn’t even represented in
the DOM (here, the number of times a button was clicked) is also preserved.

Now, to be clear: this doesn’t solve the problem of maintaining state in normal
forward navigations. Everything I said above about MPAs needing to serialize
their state would apply to any navigation that isn’t cached. Also, this behavior
may vary subtly between browsers, and their heuristics might not work for your
website. But it is impressive that the browser gives you so much out-of-the-box.


CONCLUSION

There are dozens of reasons to reach for an SPA technology, MPA technology, or
some blend of the two. Everything depends on the needs and constraints of what
you’re trying to build.

In these past few posts, I’ve tried to shed light on some interesting changes to
MPAs that have happened under our very feet, while we might not have noticed.
These changes are important, and may shift the calculus when trying to decide
between an SPA or MPA architecture. To be fair, though, SPAs haven’t stopped
moving either: experimental browser APIs like the Navigation API are even trying
to solve longstanding problems of focus and scroll management. And of course,
frameworks are still innovating on both SPAs and MPAs.

The fact that SPAs neatly simplify so many aspects of application development –
keeping state in one place, on the main thread, persistent across navigations –
is one of their greatest strengths as well as a predictable wellspring of
problems. Performance and accessibility wonks can continue harping on the
problems of SPAs, but at the end of the day, if developers find it easier to
code an SPA than the equivalent MPA, then SPAs will continue to be built. Making
MPAs more capable is only one way of solving the problem: approaching things
from the other end – such as improved tooling, guidance, and education for SPA
developers – can also work toward the same end goal.

As tempting as it may be to pronounce one set of tools as dead and another as
ascendant, it’s important to remain humble and remember that everyone is working
under a different set of constraints, and we all have a different take on web
development. For that reason, I’ve come around to the conclusion that SPAs are
not going anywhere anytime soon, and will probably remain a compelling
development paradigm for as long as the web is around. Some developers will
choose one perspective, some will choose another, and the big, beautiful
elephant will continue lumbering forward.

25 May


MORE THOUGHTS ON SPAS

Posted by Nolan Lawson in Web. Tagged: spas. 15 Comments

My last post (“The balance has shifted away from SPAs”) attracted a fair amount
of controversy, so I’d like to do a follow-up post with some clarifying points.

First off, a definition. In some circles, “SPA” has colloquially come to mean
“website with tons of JavaScript,” which brings its own set of detractors, such
as folks who just don’t like JavaScript very much. This is not at all what I
mean by “SPA.” To me, an SPA is simply a “Single-Page App,” i.e. a website with
a client-side router, where every navigation stays on the same HTML page rather
than loading a new one. That’s it.

It has nothing to do with the programming model, or whether it “feels” like
you’re coding a Single-Page App. By my definition, Turbolinks is an SPA
framework, even if, as a framework user, you never have to dirty your hands
touching any JavaScript. If it has a client-side router, it’s an SPA.

Second, the point of my post wasn’t to bury SPAs and dance on their grave. I
think SPAs are great, I’ve worked on many of them, and I think they have a
bright future ahead of them. My main point was: if the only reason you’re using
an SPA is because “it makes navigations faster,” then maybe it’s time to
re-evaluate that.

Jake Archibald already showed way back in 2016 that SPA navigations are not
faster when the page is loading lots of HTML, because the browser's streaming
HTML parser can paint above-the-fold content before the SPA has even finished
downloading the full-fat JSON (or HTML) and manually injecting it into the DOM.
(Unless you’re doing some nasty hacks, which you probably aren’t.) In his
example, GitHub would be better off just doing a classic server round-trip to
fetch new HTML than a fancy Turbolinks SPA navigation.

That said, my post did generate some thoughtful comments and feedback, and it
got me thinking about whether there are other reasons for SPAs’ recent decline
in popularity, and why SPAs could still remain an attractive choice in the
future for certain types of websites.


CORE WEB VITALS

In 2020, Google announced that the Core Web Vitals would become a factor in
search page rankings. I think it’s fair to say that this sent shockwaves through
the industry, and caused folks who hadn’t previously taken performance very
seriously to start paying close attention to their site speed scores.

It’s important to notice that the Core Web Vitals are very focused on page load.
LCP (Largest Contentful Paint) and FID (First Input Delay) both apply only to
the user experience during the initial navigation. (CLS, or Cumulative Layout
Shift, applies to the initial navigation and beyond; see note below.) This makes
sense for Google: they don’t really care how fast your site is after that
initial page load; they mostly just care about the experience of clicking a link
in Google and loading the subsequent page.

Regardless of whether these metrics are an accurate proxy for the user
experience, they are heavily biased against SPAs. The whole value proposition of
SPAs (from a performance perspective at least) is that you pay a large upfront
cost in exchange for faster subsequent interactions (that’s the theory anyway).
With these metrics, Google is penalizing SPAs if they render client-side (LCP),
load a lot of JavaScript (FID), or render content progressively on the client
side (CLS).

A classic MPA (Multi-Page App) with a dead-simple HTML file and no JavaScript
will score very highly on Core Web Vitals. Miško Hevery, the creator of Qwik,
has explicitly mentioned Core Web Vitals as an influence on how he designed his
framework. Especially for websites that are very sensitive to SEO scores, such
as e-commerce sites, the Core Web Vitals are pushing developers away from SPAs.

Update: This post originally stated that CLS applies only to the initial
navigation; it turns out that it applies to the full page lifespan. (The
heuristics are pretty complex; you can read about them here.) I think my point
still stands, though, that an MPA with no JavaScript (and no unsized images or
iframes, poorly sized fonts, or other mistakes) should easily get a great CLS
score.


CODE CACHING

This was something I forgot to mention in my post, probably because it happened
long enough ago that it couldn’t possibly have had an impact on the recent
uptick in MPA interest. But it’s worth calling out.

When you navigate between pages in an MPA, the browser is smart enough not to
parse and compile the same JavaScript over and over again. Chrome does it,
Firefox does it, Safari does it. All modern browsers have some variation on
this. (Legacy Edge and IE, may they rest in peace, did not have this.)
Incidentally, this optimization also exists for stylesheet parsing (WebKit bug
from 2012, Firefox bug, demo).

So if you have the same shared JavaScript and CSS on multiple MPA pages, it’s
not a big deal in terms of subsequent navigations. At worst, you’re asking the
browser to re-parse and re-render your HTML, re-run style and layout calculation
(which would happen in an SPA anyway, although to a lesser degree thanks to
techniques like invalidation sets), and re-run JavaScript execution. (In a
well-built MPA, though, you should not have much JavaScript on each page.)

Throw in paint holding and the back-forward cache (as discussed in my previous
post), as well as the streaming HTML mentioned above, and you can see why the
value proposition of “SPA navigations are fast” is not so true anymore. (Maybe
it’s true in certain cases, e.g. where the DOM being updated is very small. But
is it so much faster that it’s worth the added complexity of a client-side
router?)

Update: It occurred to me that a good use case for this kind of SPA navigation
is a settings page, dashboard, or some other complex UI with nested routes – in
that case, the updated DOM might be very small indeed. There’s a good
illustration of this in the Next.js Layouts RFC. As with everything in software,
it’s all about tradeoffs.


SERVICE WORKER AND OFFLINE MPAS

One interesting response to my post was, “I like SPAs because they preserve
privacy, and keep all the user data client-side. My site can just be static
files.” This is a great point, and it’s actually one of the reasons I wrote my
Mastodon client, Pinafore, as an SPA.

But as I mentioned in my post, there’s nothing inherent about the SPA
architecture that makes it the only option for handling user data purely on the
client side. You could make a fully offline-powered MPA that relies on the
Service Worker to handle all the rendering. (Here is an example implementation I
found.)
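
To sketch out what that might look like (this is not how any particular framework does it – the routes and markup here are hypothetical), the Service Worker intercepts navigations and builds the HTML itself:

// sw.js – a hypothetical Service-Worker-rendered MPA.
// Navigations never need a client-side router or a server round-trip.

const renderPage = (title, body) =>
  `<!doctype html><html><head><title>${title}</title></head>` +
  `<body><h1>${title}</h1>${body}</body></html>`;

self.addEventListener('fetch', (event) => {
  if (event.request.mode !== 'navigate') {
    return; // let non-navigation requests fall through to the network/cache
  }
  // Routing happens here, in the Service Worker, instead of on the main thread.
  const { pathname } = new URL(event.request.url);
  const body = pathname === '/about'
    ? '<p>About this app</p>'
    : '<p>Welcome home</p>';
  event.respondWith(
    new Response(renderPage(pathname, body), {
      headers: { 'Content-Type': 'text/html' },
    })
  );
});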

I admit though, that this was one of the weaker arguments in my post, because as
far as I can tell… nobody is actually doing this. Most frameworks I’m aware of
that generate a Service Worker also generate a client-side router. The Service
Worker is an enhancement, but it’s not the main character in the story. (If you
know a counter-example, though, then please let me know!)

I think this is actually a very under-explored space in web development. I was
pitching this Service-Worker-first architecture back in 2016. I’m still hopeful
that some framework will start exploring this idea eventually – the recent focus
on frameworks supporting server-side JavaScript environments beyond Node (such
as Cloudflare Workers) should in theory make this easier, because the Service
Worker is a similarly-constrained JavaScript environment. If a framework can
render from inside a Cloudflare Worker, then why not a Service Worker?

This architecture would have a lot of upsides:

 1. No client-side router, so no need to implement focus management, scroll
    restoration, etc.
 2. You’d also still get the benefits of paint holding and the back-forward
    cache.
 3. If you open multiple browser tabs pointing to the same origin, each page
    will avoid the full-SPA JavaScript load, since the main app logic has
    already been bootstrapped in the Service Worker. (One Service Worker serves
    multiple tabs for the same origin.)
 4. The Service Worker can use ReadableStreams to get the benefits of the
    browser's progressive HTML parser, as described above (see the sketch
    after this list).
 5. Memory leaks? I’ve harped on this a lot in the past, and admittedly, this
    wouldn’t fully solve the problem. You’d probably just move the leaks into
    the Service Worker. But a Service Worker has a fire-and-forget model, so the
    browser could easily terminate it and restart it if it uses up too much
    memory, and the user might never notice.
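
Here's a rough sketch of what point 4 could look like: the Service Worker streams the HTML shell immediately, then appends the body as it becomes available, so the browser's progressive parser can start painting right away. (fetchBodyHtml is a hypothetical async function – imagine it reading from IndexedDB.)

self.addEventListener('fetch', (event) => {
  if (event.request.mode !== 'navigate') return;

  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      // Flush the shell right away so the browser can start parsing/painting.
      controller.enqueue(encoder.encode(
        '<!doctype html><html><head><title>Streamed page</title></head><body>'
      ));
      // Then stream in the slower, dynamic part of the page.
      controller.enqueue(encoder.encode(await fetchBodyHtml(event.request.url)));
      controller.enqueue(encoder.encode('</body></html>'));
      controller.close();
    },
  });

  event.respondWith(
    new Response(stream, { headers: { 'Content-Type': 'text/html' } })
  );
});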

This architecture does have some downsides, though:

 1. State is spread out between the Service Worker and the main thread, with
    asynchronous postMessage required for communication.
 2. You’d be limited to using IndexedDB and caches to store persistent state,
    since you’d need something accessible to the Service Worker – no more
    synchronous LocalStorage.
 3. In general, the simplified app development model of an SPA (all state is
    stored in one place, on the main thread, available synchronously) would be
    thrown out the window.
 4. No framework that I’m aware of is doing this.

I still think the performance and simplicity upsides of this model are worth at
least prototyping, but again, it remains to be seen if the DX (Developer
Experience) is seamless enough to make it viable in practice.


THE VIRTUES OF SPAS

So given everything I’ve said about SPAs – paint holding, the back-forward
cache, Core Web Vitals – why might you still want to build an SPA in 2022? Well,
to give a somewhat hand-wavy answer, I think there are a lot of cases where an
SPA is a good choice:

 1. You’re building an app where the holotype matches the right use case for an
    SPA – e.g. only one browser tab is ever open at a time, page loads are
    infrequent, content is very dynamic, etc.
 2. Core Web Vitals and SEO are not a big concern for you, e.g. because your app
    is behind a login gate.
 3. There’s a feature you need that’s only available in SPAs (e.g. an
    omnipresent video player, as mentioned in the previous post).
 4. Your team is already productive building an SPA, because that’s what your
    favorite framework supports.
 5. You just like SPAs! That’s fine! I’m not going to take them away from you, I
    promise.

That said, my goal with the previous post was to start a conversation
challenging some of the assumptions that folks have about SPAs. (E.g. “SPA
navigations are always faster.”) Oftentimes in the tech industry we do things
just because “that’s how things have always been done,” and we don’t stop to
consider if the conditions that drove our previous decisions have changed.

The only constant in software is change. Browsers have changed a lot over the
years, but in many ways our habits as web developers have not really adjusted to
fit the new reality. There’s a lot of prototyping and research yet to be done,
and the one thing I’m sure of is that the best web apps in 10 years will look a
lot different from the best web apps built today.

Next post: State is hard: why SPAs will persist

21 May


THE BALANCE HAS SHIFTED AWAY FROM SPAS

Posted by Nolan Lawson in Web. Tagged: spas. 18 Comments

There’s a feeling in the air. A zeitgeist. SPAs are no longer the cool kids they
once were 10 years ago.

Hip new frameworks like Astro, Qwik, and Elder.js are touting their MPA
capabilities with “0kB JavaScript by default.” Blog posts are making the rounds
listing all the challenges with SPAs: history, focus management, scroll
restoration, Cmd/Ctrl-click, memory leaks, etc. Gleeful potshots are being taken
at SPAs.

I think what’s less discussed, though, is how the context has changed in recent
years to give MPAs more of an upper hand against SPAs. In particular:

 1. Chrome implemented paint holding – no more “flash of white” when navigating
    between MPA pages. (Safari already did this.)
 2. Chrome implemented back-forward caching – now all major browsers have this
    optimization, which makes navigating back and forth in an MPA almost
    instant.
 3. Service Workers – once experimental, now effectively 100% available for
    those of us targeting modern browsers – allow for offline navigation without
    needing to implement a client-side router (and all the complexity therein).
 4. Shared Element Transitions, if accepted and implemented across browsers,
    would also give us a way to animate between MPA navigations – something
    previously only possible (although difficult) with SPAs.

This is not to say that SPAs don’t have their place. Rich Harris has a great
talk on “transitional apps,” which outlines some reasons you may still want to
go with an SPA. For instance, you might want an omnipresent element that
survives page navigations, such as an audio/video player or a chat widget. Or
you may have an infinite-loading list that, on pressing the back button, returns
to the previous position in the list.

Even teams that are not explicitly using these features may still choose to go
with an SPA, just because of the “unknown” factor. “What if we want to implement
navigation animations some day?” “What if we want to add an omnipresent video
player?” “What if there’s some customization we want that’s not supported by
existing browser APIs?” Choosing an MPA is a big architectural decision that may
effectively cut off the future possibility of taking control of the page in
cases where the browser APIs are not quite up to snuff. At the end of the day,
an SPA gives you full control, and many teams are hesitant to give that up.

That said, we’ve seen a similar scenario play out before. For a long time,
jQuery provided APIs that the browser didn’t, and teams that wanted to sleep
soundly at night chose jQuery. Eventually browsers caught up, giving us APIs
like querySelector and fetch, and jQuery started to seem like unnecessary
baggage.

I suspect a similar story may play out with SPAs. To illustrate, let’s consider
Rich’s examples of things you’d “need” an SPA for:

 * Omnipresent chat widget: use Shared Element Transitions to keep the widget
   painted during MPA navigations.
 * Infinite list that restores scroll position on back button: use
   content-visibility and maybe store the state in the Service Worker if
   necessary.
 * Omnipresent audio/video player that keeps playing during navigations: not
   possible today in an MPA, but who knows? Maybe the Picture-in-Picture API
   will support this someday.

To be clear, though, I don’t think SPAs are going to go away entirely. I’m not
sure how you could reasonably implement something like Photoshop or Figma as an
MPA. But if new browser APIs and features keep landing that slowly chip away at
SPAs’ advantages, then more and more teams in the future will probably choose to
build MPAs.

Personally I think it’s exciting that we have so many options available to us
(and they’re all so much better than they were 10 years ago!). I hope folks keep
an open mind, and keep pushing both SPAs and MPAs (and “transitional apps,” or
whatever we’re going to call the next thing) to be better in the future.

Follow-up: More thoughts on SPAs

8 Apr


THE STRUGGLE OF USING NATIVE EMOJI ON THE WEB

Posted by Nolan Lawson in Web. 19 Comments

Emoji are a standard overseen by the Unicode Consortium. The web is a standard
governed by bodies such as the W3C, WHATWG, and TC39. Both emoji and the web are
ubiquitous.

So you might be forgiven for thinking that, in 2022, it’s possible to plop an
emoji on a web page and have it “just work”:

🪷

If you see a lotus flower above, then congratulations! You’re on a browser or
operating system that supports Emoji 14.0, released in September 2021. If not,
you might see something that looks like the scoreboard on an old '80s arcade
game:

Another apt description would be “robot barf.”

Let’s try another one. What does this emoji look like to you?

😵‍💫

If you see a face with spiral eyes, then wonderful! Your browser can render
Emoji 13.1, released in September 2020. If not, you might see a puzzling
combination of face with crossed-out eyes and a shooting (“dizzy”) star:



It’s a fun bit of cartoon iconography to know that this combination means “dizzy
face,” but for most folks, it doesn’t really evoke the same meaning. It’s not
much better than the robot barf.


EMOJI AND BROWSER SUPPORT

If you’re like me, you’re a minimalist when it comes to web development. If I
don’t have to rebuild something from scratch, then I’ll avoid doing so. I try to
“use the platform” as much as possible and lean on existing web standards and
browser capabilities.

When it comes to emoji, there are a lot of potential upsides to using the
platform. You don’t need to bring your own heavy emoji font, or use a
spritesheet, or do any manual DOM processing to replace text with <img>s. But
sadly, if you try to avoid these heavy-handed techniques and just, you know, use
emoji on the web, you’ll quickly run into the kinds of problems I describe
above.

The first major problem is that, although emoji are released by the Unicode
Consortium at a yearly cadence, OSes don’t always update in a timely manner to
add the latest-and-greatest characters. And the browser, in most cases, is
beholden to the OS to render whatever emoji fonts are provided by the underlying
system (e.g. Apple Color Emoji on iOS, Microsoft Segoe Color Emoji on Windows,
etc.).

In the case of major releases (such as Emoji 14.0), a missing character means
the “robot barf” shown above. In the case of minor releases (such as Emoji
13.1), it can mean that the emoji renders as a bizarre “double” emoji – some of
my favorites include “man with floating wig of red hair” () for “man with red
hair” () and “bear with snowflake” () for “polar bear” ().

If I’m trying to convince you that native emoji are worth investing in for your
website, I’ve probably lost half my audience at this point. Most chat and social
media app developers would prefer to have a consistent experience across all
browsers and devices – not a broken experience for some users. And even if the
latest emoji were perfectly supported across devices, these developers may still
prefer a uniform look-and-feel, which is why vendors like Twitter, Facebook, and
WhatsApp actually design their own emoji fonts.


DETECTING BROKEN EMOJI

Let’s say, though, that you’re comfortable with emoji looking different on
different platforms. After all – maybe Apple users would prefer to see Apple
emoji, and Windows users would prefer to see Windows emoji. And in any case,
you’d rather not reinvent what the OS already provides. What do you have to do
in this case?

Well, first you need a way to detect broken emoji. This is actually much harder
than it sounds, and basically boils down to rendering the emoji to a <canvas>,
testing that it has an actual color, and also testing that it doesn’t render as
two separate characters. (is-emoji-supported is a decent JavaScript library that
does this.)
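
For the curious, here's roughly how that canvas technique works – a simplified sketch, not a battle-tested implementation (the thresholds and the reference emoji are arbitrary choices of mine):

// Rough sketch of canvas-based emoji support detection.
function supportsEmoji(emoji) {
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = 25;
  const ctx = canvas.getContext('2d');
  ctx.font = '20px sans-serif';
  ctx.textBaseline = 'top';

  // If the emoji falls back to two separate glyphs (e.g. "face with
  // crossed-out eyes" + "dizzy star"), it will be roughly twice as wide
  // as a single known-good emoji.
  const singleWidth = ctx.measureText('\u{1F600}').width; // grinning face
  if (ctx.measureText(emoji).width >= singleWidth * 1.8) {
    return false;
  }

  // Check that it renders in color rather than as black-and-white tofu.
  ctx.fillText(emoji, 0, 0);
  const { data } = ctx.getImageData(0, 0, 25, 25);
  for (let i = 0; i < data.length; i += 4) {
    const [r, g, b, a] = [data[i], data[i + 1], data[i + 2], data[i + 3]];
    if (a > 0 && (r !== g || g !== b)) {
      return true; // found a colored pixel
    }
  }
  return false;
}

// e.g. supportsEmoji('\u{1FAB7}') // lotus flower, Emoji 14.0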

This solution has a few downsides. First off, you now need to run JavaScript
before rendering any text – with all the problems therein for SSR, performance,
etc. Second, it doesn’t actually solve the problem – it just tells you that
there is a problem. And it might not even work – I’ve seen this technique fail
in cross-origin iframes in Firefox, presumably because the <canvas> triggered
the browser’s fingerprinting detection.

But again, let’s just say that you’re comfortable with all this. You detect
broken emoji and perhaps replace them with text saying “emoji not supported.” Or
maybe you want a more graceful degradation, so you include half a megabyte of
JSON data describing every emoji ever created, so that you can actually show
some text to describe the emoji. (Of course, that file is only going to get
bigger, and you’ll need to update it every year.)

I know what you’re thinking: “I just wanted to show an emoji on my web page. Why
do I have to know everything about emoji?” But just wait: it gets worse.


BLACK-AND-WHITE OLDER EMOJI

Okay, so now you’re successfully detecting whether an emoji is supported, so you
can hide or replace those newfangled emoji that are causing problems. But would
it occur to you that the oldest emoji might be problematic too?

☺️

This is the classic smiling face emoji. But depending on your browser, instead
of the more familiar full-color version, you might see a simple black-and-white
smiley. In case you don’t see it, here is a comparison, and here’s how it looks
in Chrome on Windows:



You’ll also see this same problem for some other older emoji, such as red heart
() and heart suit (♥️), which both render as black hearts rather than red ones.

So how can we render these venerable emoji in glorious Technicolor? Well, after
a lot of trial-and-error, I’ve landed on this CSS:

div {
  font-family: "Twemoji Mozilla",
               "Apple Color Emoji",
               "Segoe UI Emoji",
               "Segoe UI Symbol",
               "Noto Color Emoji",
               "EmojiOne Color",
               "Android Emoji",
               sans-serif;
}

Basically, what we have to do is point the font-family at a known list of
built-in emoji fonts on various operating systems. This is similar to the
“system font” trick.

If you’re wondering what “Twemoji Mozilla” is, well, it turns out that Firefox
is a bit odd in that it actually bundles its own version of Twitter’s Twemoji
font on Windows and Linux. This will be important later, but let’s set it aside
for now.


WHAT IS AN EMOJI, ANYWAY?

At this point, you may be getting pretty tired of this blog post. “Nolan,” you
might say, “why don’t you just tell me what to do? Just give me a snippet I can
slap onto my website to fix all these dang emoji problems!” Well I wish it were
as simple as just chucking a CSS font-family onto your body and calling it a
day. But if you try that naïve approach, you’ll start to see some bizarre
characters:



As it turns out, characters like the asterisk (*), octothorpe (#), trademark
(™), and even the numbers 0-9 are technically emoji. And depending on your
browser and OS, the system emoji font will either not render them at all, or it
might render them as the somewhat-cartoony versions you see above.

Maybe to some folks it’s acceptable for these characters to be rendered as
emoji, but I would wager that the average person doesn’t consider these numbers
and symbols to be “emoji.” And it would look odd to treat them like that.

So all right, some “emoji” are not really emoji. This means we need to ensure
that some characters (like the smiley face) render using the system emoji font,
whereas other kinda-sorta emoji characters (like * and #) don’t. Potentially you
could use a JavaScript tool like emoji-regex or a CSS tool like
emoji-unicode-range to manage this, but in my experience, neither one handles
all the various edge cases (nor have I found an off-the-shelf solution that
does). And either way, it’s starting to feel pretty far from “use the platform.”
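
If you want a rough idea of what such a filter looks like, here's a sketch using the Extended_Pictographic Unicode property (which notably excludes the digits, * and #). It's illustrative only – it doesn't handle flags built from regional indicators, keycap sequences, or every ZWJ edge case, which is exactly why libraries like emoji-regex exist:

// Wrap "real" emoji in a span that gets the emoji font stack, while leaving
// *, #, 0-9 and other text-default characters alone. Illustrative sketch.
const EMOJI_SEQUENCE =
  /\p{Extended_Pictographic}(?:[\u{FE0F}\u{1F3FB}-\u{1F3FF}]|\u200D\p{Extended_Pictographic})*/gu;

function wrapEmoji(text) {
  return text.replace(EMOJI_SEQUENCE, (match) =>
    `<span class="emoji">${match}</span>` // .emoji applies the font stack above
  );
}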


WINDOWS WOES

I could stop right here, and hopefully I’ve made the point that using native
emoji on the web is a painful experience. But I can’t help mentioning one more
problem: flag emoji on Windows.

As it turns out, Microsoft’s emoji font does not have country flags on either
Windows 10 or Windows 11. So instead of the US flag emoji, you’ll just see the
characters “US” (and the equivalent country codes for other flags). Microsoft
might have a good geopolitical reason to do this (although they’d have to
explain why no other emoji vendor follows suit), but in any case, it makes it
hard to talk about sports matches or national independence days.

Flag emoji in Chrome on Windows. You can have the pirate flag, you can have the
race car flag, but you can’t root for Argentina vs Brazil in a soccer match.

Interestingly, this problem is actually solvable in Firefox, since they ship
their own “Mozilla Twemoji” font (which, furthermore, tends to stay more
up-to-date than the built-in Microsoft font). But the most popular browser
engine on Windows, Chromium, does not ship their own emoji font and doesn’t plan
to. There’s actually a neat tool called country-flag-emoji-polyfill that can
detect the broken flag support and patch in a minimal Twemoji font to fix it,
but again, it’s a shame that web developers have to jump through so many hoops
to get this working.

(At this point, I should mention that the Unicode Consortium themselves have
come out against flag emoji and won’t be minting any more. I can understand the
sentiment behind this – a standards consortium doesn't want to be in the
business of adjudicating geopolitical boundaries. But in my opinion, the cat's already out
of the bag. And it seems bizarre that Wales and Scotland get their own flag, but
no other countries, states, provinces, municipalities, duchies, earldoms, or
holy empires ever will. It seems guaranteed to lead to an explosion of
non-standard vendor-specific flags, which is already happening according to
Emojipedia.)


CONCLUSION

I could go on. I really could. I could talk about the sad state of browser
support for color fonts, or how to avoid mismatched emoji fonts in Firefox, or
subtle issues with measuring emoji width on Windows, or how you need to install
a separate package for emoji to work at all in Chrome on Linux.

But in the end, my message is a simple one: I, as a web developer, would like to
use emoji on my web sites. And for a variety of reasons, I cannot.

I built an emoji picker called emoji-picker-element. This is what it would look
like if I didn’t bend over backwards to fix emoji problems.

At a time when web browsers have gained a staggering array of new capabilities –
including Bluetooth, USB, and access to the filesystem – it’s still a struggle
to render a smiley face. It feels a bit odd to argue in 2022 that “the web
should have emoji support,” and yet here I stand, cap in hand, making my case.

You might wonder why browsers have been so slow to fix this problem. I suspect
part of it is that there are ready workarounds, such as twemoji, which parses
the DOM to look for emoji sequences and replaces them with <img>s. The fact that
this technique isn’t great for performance (downloading extra images, processing
the DOM and mutating it, needing to run JavaScript at all) might seem
unimportant when you consider the benefits (a unified look-and-feel across
devices, up-to-date emoji support).

Part of me also wonders if this is one of those cases where the needs of larger
entities have eclipsed the needs of smaller “mom-and-pop” web shops. A
well-funded tech company building a social media app with a massive user base
has the resources to handle these emoji problems – heck, they might even design
their own emoji font! Whereas your average small-time blogger, agency, or studio
would probably prefer for emoji to “just work” without a lot of heavy lifting.
But for whatever reason, their voices are not being heard.

What do I wish browsers would do? I don’t have much of a grand solution in mind,
but I would settle for browsers following the Firefox model and bundling their
own emoji font. If the OS can’t keep its emoji up-to-date, or if it doesn’t want
to support certain characters (like country flags), then the browser should fill
that gap. It’s not a huge technical hurdle to bundle a font, and it would help
spare web developers a lot of the headaches I listed above.

Another nice feature would be some sensible way to render what are colloquially
known as “emoji” as emoji. So for instance, the “smiley face” should be rendered
as emoji, but the numbers 0-9 and symbols like * and # should not. If backwards
compatibility is a concern, then maybe we need a new CSS property along the
lines of text-rendering: optimizeLegibility – something like emoji-rendering:
optimizeForCommonEmoji would be nice.

In any case, even if this blog post has only served to dissuade you from ever
trying to use native emoji on the web, I hope that I’ve at least done a decent
job of summarizing the current problems and making the case for browsers to help
solve it. Maybe someday, when browsers everywhere can render a smiley face, I
can write something other than :-) to show my approval.

Update: At some point, WordPress started automatically converting emoji in this
blog post to <img>s. I’ve replaced some of the examples with CodePens to make it
clearer what’s going on. Of course, the fact that WordPress feels compelled to
use <img>s instead of native emoji kind of proves my point.

2 Feb


FIVE YEARS OF QUITTING TWITTER

Posted by Nolan Lawson in social media. Tagged: social media. 9 Comments

It’s been almost five years since I deleted my Twitter account. I didn’t just
delete the app or deactivate – I deleted my whole account and my entire tweet
history, lighting a match and burning the bridge behind me.

I don’t want to pretend to be some kind of seer, but since then, divesting
yourself from social media has become a somewhat fashionable lifestyle choice.
For a certain type of person, it’s the kind of pro-mental health, self-care kind
of thing you might do along with going vegan or taking up Vipassana meditation.
(To make it clear that I’m not above such intellectual trendiness, I’ve tried
all those things too.)

In this post, I want to talk honestly about the good and the bad that comes with
deleting your Twitter account, from the perspective of a tech guy who’s plugged
in to several different software communities (open source, web development,
Node.js, etc.).


THE GOOD

Let's start off with the good stuff. Twitter is no longer the first thing I
check in the morning or the last thing I look at before going to sleep. In fact, I
instituted a personal rule to charge my phone outside of the bedroom altogether
so that I’m not tempted to read it in bed. (I don’t always hold fast to this.)

I have my RSS feed, I have Hacker News, and I have various news outlets (Ars
Technica, Wired, etc.), so there’s plenty for me to read on the internet. But
unlike Twitter, I actually run out of stuff to read and eventually get bored
with my phone. I consider this a plus, even if it ends up driving me towards
other screens – video games in particular. But even if my lofty goal is to spend
more time reading books or riding my bike, I still consider time spent with my
Switch or doing crossword puzzles to be time better spent than flicking through
social media.

I also disabled all notifications on my phone except for IM and email, which
helps reduce the neediness of my little pocket Tamagotchi. IM notifications are
invaluable for keeping up with family and friends, but my email notifications
are still sometimes a source of stress, so I try to unsubscribe as much as
possible from any newsletters, automated updates, and other bullshit. If my
email is going to buzz in my pocket and show me a notification, I want it to be
something important.

I still have a Mastodon account, and I still host a Mastodon server at
toot.cafe, but I’m not very active anymore. I mostly treat it as a write-only
medium. My reasons for this are various, but basically I’ve become less of a
booster of Mastodon (and the fediverse in general) over time. It’s a neat idea,
and it still works pretty well for the cohort of hardcore techies and
tech-adjacent folks who seem to be there, but I just don’t find it super
interesting any more. Sometimes I think of Mastodon as my Twitter nicotine patch
– it sorta feels like Twitter, it scratches the same itch, but it’s just not
nearly as compelling.


THE BAD

If you’re the kind of techie who uses social media to connect with your peers
and build your personal brand – the kind of person who speaks at conferences,
writes blog posts, talks on podcasts, etc. – then quitting Twitter is a terrible
idea. My blog posts get less traffic than they used to, I don’t get invited to
as many conferences anymore, and even when I do give the odd podcast interview,
there’s always an awkward moment when they ask for my Twitter handle, or which
social media account they should direct traffic and followers to. (I dunno,
GitHub? I think I have a Reddit account?)

My main public outlet these days is my blog, and from looking at the WordPress
stats, my overall traffic has taken a hit since I quit Twitter. I’ve kind of
ceased to exist for a certain segment of my (former) audience, and for the rest,
I only exist when someone takes pity on me and links to my blog from Twitter,
Reddit, Hacker News, or a big site like CSS Tricks. (I don’t abstain from Reddit
or Hacker News, but I’m also not super active there.) It feels kind of weird to
have quit Twitter, and yet to relish the traffic spike from a well-timed Twitter
mention.

For those people who are re-sharing my content on social media, I suspect most
of them found it from their RSS feed. So RSS definitely still seems alive and
well, even if it’s just a small upstream tributary for the roaring downstream
river of Twitter, Reddit, etc.

Another odd downside of deleting your Twitter account is that, after a cool-down
period, someone can grab your Twitter handle. I didn’t realize this was a thing,
so someone has squatted on my old Twitter name, presumably because they hope to
re-sell it later, or maybe because they want the SEO juice? I have no idea. I
would be mad about it, but the fact that this account exists (and my old
mentions on Twitter still link to it!) makes Twitter a slightly shittier place,
so in my own petty way, I’m kind of glad it exists.


THE MIXED BAG

Some things that I miss from Twitter are both good and bad. Twitter is a
sprawling global conversation, and a lot of the important debates in web
development (client-rendering vs server-rendering, web components vs frameworks,
etc.) were born and thrive there. I miss out on a lot of those debates, and many
of them could serve as good fodder for a thoughtful blog post or open-source
project, so I regret not having the creative spark that comes from those
conversations.

The problem is that a lot of these debates are, in my opinion, either trivial or
manufactured. Twitter (like all social media) is an outrage machine, designed to
goose engagement using whatever means the algorithm finds through blind
optimization. I fully believe that phenomena like “the great divide” in web
development wouldn’t exist without Twitter, and to the extent that it does exist
in the “real world,” it’s only because it was hatched on Twitter before
infecting the rest of us. Social media engagement thrives when it finds a wedge
to drive between two parts of a community, where it can cause incendiary content
to cross-pollinate from one camp to another, creating an endless cycle of
irritation, condemnation, dunking, and flaming.

Occasionally in my RSS feed I’ll read a post that starts off by saying, “There’s
been a huge debate about…” or “There’s been a recent controversy over…” and then
eventually I realize the whole post is about some Twitter beef. I don’t miss
being on the front lines of these kind of battles, but I do think some of these
debates are worth having, so I have mixed feelings about it.


CONCLUSION

I don’t plan on coming back to Twitter. Mostly because I just don’t need it
anymore – I’m not super active at conferences or meetups, I don’t have a
workshop or service I need to sell, and so there’s little professional reason
for me to be there. I like posting on my blog, but I can only hope that my
content gets attention in direct correlation to the value that people derive
from it. If I write a good blog post, people will read it. I try to focus on
that and that alone.

Honestly, even that lifestyle – writing blog posts, watching it occasionally
blow up on Hacker News and Reddit, reading occasionally scathing comments – is
hard enough on my mental health. Whenever I write a blog post these days, I have
a period of anxiety and dread where I worry about the potential backlash. I
mitigate that a bit by carefully editing my posts to remove anything that could
be misconstrued, and to occasionally have some trusted friends review a draft
(thank you all!), but frankly it’s a bit sad that I even do this, because my
writing has gotten decidedly more boring over the years.

Sometimes I go back and read my blog posts from 2014 and marvel at how
freewheeling, irreverent, and downright joyful my writing was. I don’t really
write like that anymore, because social media (and the internet in general) have
conditioned me to constantly fret over negative attention. So I act as my own PR
firm, carefully focus-testing and bowdlerizing my prose until it’s as dry as a
slice of burnt toast. Sometimes I can escape from this trap a little bit (like
I’m trying to do right now), but overall I worry that my writing has gotten
worse, not better, over time. (Another worry!)

So given my inherent worry-prone nature about posting content on the internet,
Twitter is probably just not right for me. The high I would get from seeing a
tweet go viral and getting adulation from my peers just doesn’t outweigh the
anxiety, the sleeplessness, or the careful tiptoeing and sanitization of my
thoughts that come with heavy social media use. I’m already bad enough with that
as it is, just with my blog; coming back to Twitter would dial that up to 11.

So I deleted my Twitter account, and I plan to keep it that way. Should you do
the same? Well, I dunno. If you need it for your livelihood, then decidedly not.
You should probably just see Twitter as a necessary evil and try to insulate
yourself from the bad parts while profiting from the good parts. If you’re a
casual user, then maybe you’ve already figured out a healthy way to live with
Twitter (curating your feed, turning off the algorithmic timeline, whatever),
and if so – good for you! For me, I have too much stubbornness and too little
faith in my own ability to manage my social media addiction to want to give
Twitter a second try.

5 Jan


MEMORY LEAKS: THE FORGOTTEN SIDE OF WEB PERFORMANCE

Posted by Nolan Lawson in performance, Web. Tagged: performance. 16 Comments

I’ve researched and learned enough about client-side memory leaks to know that
most web developers aren’t worrying about them too much. If a web app leaks 5 MB
on every interaction, but it still works and nobody notices, then does it
matter? (Kinda sounds like a “tree in the forest” koan, but bear with me.)

Even those who have poked around in the browser DevTools to dabble in the arcane
art of memory leak detection have probably found the experience… daunting. The
effort-to-payoff ratio is disappointingly high, especially compared to the
hundreds of other things that are important in web development, like security
and accessibility.

So is it really worth the effort? Do memory leaks actually matter?

I would argue that they do matter, if only because the lack of care (as shown by
public-facing SPAs leaking up to 186 MB per interaction) is a sign of the
immaturity of our field, and an opportunity for growth. Similarly, five years
ago, there was much less concern among SPA authors for accessibility, security,
runtime performance, or even ensuring that the back button maintained scroll
position (or that the back button worked at all!). Today, I see a lot more
discussion of these topics among SPA developers, and that’s a great sign that
our field is starting to take our craft more seriously.

So why should you, and why shouldn’t you, care about memory leaks? Obviously I’m
biased because I have an axe to grind (and a tool I wrote, fuite), but let me
try to give an even-handed take.


MEMORY LEAKS AND SOFTWARE ENGINEERING

In terms of actual impact on the business of web development, memory leaks are a
funny thing. If you speed up your website by 2 seconds, everyone agrees that
that’s a good thing with a visible user impact. If you reduce your website’s
memory leak by 2 MB, can we still agree it was worth it? Maybe not.

Here are some of the unique characteristics of memory leaks that I’ve observed,
in terms of how they actually fit into the web development process. Memory leaks
are:

 1. Low-impact until critical
 2. Hard to diagnose
 3. Trivial to fix once diagnosed


LOW-IMPACT…

Most web apps can leak memory and no one will ever notice. Not the user, not the
website author – nobody. There are a few reasons for this.

First off, browsers are well aware that the web is a leaky mess and are already
ruthless about killing background tabs that consume too much memory. (My former
colleague on the Microsoft Edge performance team, Todd Reifsteck, told me way
back in 2016 that “the web leaks like a sieve.”) A lot of users are tab hoarders
(essentially using tabs as bookmarks), and there’s a tacit understanding between
browser and user that you can’t really have 100 tabs open at once (in the sense
that the tab is actively running and instantly available). So you click on a tab
that’s a few weeks old, boom, there’s a flash of white while the page loads, and
nobody seems to mind much.

Second off, even for long-lived SPAs that the user may habitually check in on
(think: GMail, Evernote, Discord), there are plenty of opportunities for a page
refresh. The browser needs to update. The user doesn’t trust that the data is
fresh and hits F5. Something goes wrong because programmers are terrible at
managing state, and users are well aware that the old
turn-it-off-and-back-on-again solves most problems. All of this means that even
a multi-MB leak can go undetected, since a refresh will almost always occur
before an Out Of Memory crash.

Chrome’s Out Of Memory error page. If you see this, something has gone very
wrong.

Third, it’s a tragedy-of-the-commons situation, and people tend to blame the
browser. Chrome is a memory hog. Firefox gobbles up RAM. Safari is eating all my
memory. For reasons I can’t quite explain, people with 100+ open tabs are quick
to blame the messenger. Maybe this goes back to the first point: tab hoarders
expect the browser to automatically transition tabs from “thing I’m actively
using” to “background thing that is basically a bookmark,” seamlessly and
without a hitch. Browsers have different heuristics about this, some heuristics
are better than others, and so in that sense, maybe it is the browser’s “fault”
for failing to adapt to the user’s tab-hoarding behavior. In any case, the
website author tends to escape the blame, especially if their site is just 1 out
of 100 naughty tabs that are all leaking memory. (Although this may change as
more browsers call out tabs individually in Task Manager, e.g. Edge and Safari.)


…UNTIL CRITICAL

What’s interesting, though, is that every so often a memory leak will get so bad
that people actually start to notice. Maybe someone opens up Task Manager and
wonders why a note-taking app is consuming more RAM than DOTA. Maybe the website
slows to a crawl after a few hours of usage. Maybe the users are on a device
with low available memory (and of course the developers, with their 32GB
workstations, never noticed).

Here’s what often happens in this case: a ticket lands on some web developer’s
desk that says “Memory usage is too high, fix it.” The developer thinks to
themselves, “I’ve never given much thought to memory usage, well let’s take a
stab at this.” At some point they probably open up DevTools, click “Memory,”
click “Take snapshot,” and… it’s a mess. Because it turns out that the SPA
leaks, has always leaked, and in fact has multiple leaks that have accumulated
over time. The developer assumes this is some kind of sudden-onset disease, when
in fact it’s a pre-existing condition that has gradually escalated to stage-4.

The funny thing is that the source of the leak – the event listener, the
subscriber, whatever – might not even be the proximate cause of the recent
crisis. It might have been there all along, and was originally a tiny 1 MB leak
nobody noticed, until suddenly someone attached a much bigger object to the
existing leak, and now it’s a 100 MB leak that no one can ignore.

Unfortunately to get there, you’re going to have to hack your way through the
jungle of the half-dozen other leaks that you ignored up to this point. (We
fixed the leak! Oh wait, no we didn’t. We fixed the other leak! Oh wait, there’s
still one more…) But that’s how it goes when you ignore a chronic but steadily
worsening illness until the moment it becomes a crisis.


HARD TO DIAGNOSE

This brings us to the second point: memory leaks are hard to diagnose. I’ve
already written a lot about this, and I won’t rehash old content. Suffice it to
say, the tooling is not really up to the task (despite some nice recent
innovations), even if you’re a veteran with years of web development experience.
Some gotchas that tripped me up include the fact that you have to ignore
WeakMaps and circular references, and that the DevTools console itself can leak
memory.

Oh and also, browsers themselves can have memory leaks! For instance, see these
ResizeObserver/IntersectionObserver leaks in Chromium, Firefox, and Safari
(fixed in all but Firefox), or this Chromium leak in lazy-loading images (not
fixed), or this discussion of a leak in Safari. Of course, the tooling will not
help you distinguish between browser leaks and web page leaks, so you just kinda
have to know this stuff. In short: good luck!

Even with the tool that I’ve written, fuite, I won’t claim that we’ve reached a
golden age of memory leak debugging. My tool is better than what’s out there,
but that’s not saying much. It can catch the dumb stuff, such as leaking event
listeners and DOM nodes, and for the more complex stuff like leaking collections
(Arrays, Maps, etc.), it can at least point you in the right direction. But it’s
still up to the web developer to decide which leaks are worth chasing (some are
trivial, others are massive), and to track them down.

I still believe that the browser DevTools (or perhaps professional testing
tools, such as Cypress or Sentry), should be the ones to handle this kind of
thing. The browser especially is in a much better position to figure out why
memory is leaking, and to point the web developer towards solutions. fuite is
the best I could do with userland tooling (such as Puppeteer), but overall I’d
still say we’re in the Stone Age, not the Space Age. (Maybe fuite pushed us to
the Bronze Age, if I’m being generous to myself.)


TRIVIAL TO FIX ONCE DIAGNOSED

Here’s the really surprising thing about memory leaks, though, and perhaps the
reason I find them so addictive and keep coming back to them: once you figure
out where the leak is coming from, they’re usually trivial to fix. For instance:

 * You called addEventListener but forgot to call removeEventListener.
 * You called setInterval, but forgot to call clearInterval when the component
   unloaded.
 * You added a DOM node, but forgot to remove it when the page transitions away.
 * Etc.
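
To illustrate just how lopsided that ratio is, here's a hypothetical sketch of the classic addEventListener leak – the component and its lifecycle methods are made up, but the pattern is universal:

// A component that leaks: each instance adds a window listener that keeps
// the whole component (and everything it references) alive forever.
class Sidebar {
  constructor() {
    this.onResize = () => this.relayout();
    window.addEventListener('resize', this.onResize);
  }

  relayout() { /* ... */ }

  destroy() {
    // The one-line fix: without this, every mount/unmount cycle leaks.
    window.removeEventListener('resize', this.onResize);
  }
}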

You might have a multi-MB leak, and the fix is one line of code. That’s a
massive bang-for-the-buck! That is, if you discount the days of work it might
have taken to find that line of code.

This is where I would like to go with fuite. It would be amazing if you could
just point a tool at your website and have it tell you exactly which line of
code caused a leak. (It’d be even better if it could open a pull request to fix
the leak, but hey, let’s not get ahead of ourselves.)

I’ve taken some baby steps in this direction by adding stacktraces for leaking
collections. So for instance, if you have an Array that is growing by 1 on every
user interaction, fuite can tell you which line of code actually called
Array.push(). This is a huge improvement over v1.0 of fuite (which just told you
the Array was leaking, but not why), and although there are edge cases where it
doesn’t work, I’m pretty proud of this feature. My goal is to expand this to
other leaks (event listeners, DOM nodes, etc.), although since this is just a
tool I’m building in my spare time, we’ll see if I get to it.
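
For reference, the kind of leak that this feature flags looks something like the following – a hypothetical example of a collection that grows by one entry on every interaction:

// A module-level array that grows on every click and is never trimmed.
// fuite would report the growing Array and point at the push() call site.
const clickLog = [];
document.addEventListener('click', (event) => {
  clickLog.push({ target: event.target, time: Date.now() });
});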

fuite showing stacktraces for leaking collections.

After releasing this tool, I also learned that Facebook has built a similar tool
and is planning to open-source it soon. That’s great! I’m excited to see how it
works, and I’m hoping that having more tools in this space will help us move
past the Stone Age of memory leak debugging.


CONCLUSION

So to bring it back around: should you care about memory leaks? Well, if your
boss is yelling at you because customers are complaining about Out Of Memory
crashes, then yeah, you absolutely should. Are you leaking 5 MB, and nobody has
complained yet? Well, maybe an ounce of prevention is worth a pound of cure in
this case. If you start fixing your memory leaks now, it might avoid that crisis
in the future when 5 MB suddenly grows to 50 MB.

Alternatively, are you leaking a measly ~1 kB because your routing library is
appending some metadata to an Array? Well, maybe you can let that one slide.
(fuite will still report this leak, but I would argue that it’s not worth
fixing.)

On the other hand, all of these leaks are important in some sense, because even
thinking about them shows a dedication to craftsmanship that is (in my opinion)
too often lacking in web development. People write a web app, they throw
something buggy over the wall, and then they rewrite their frontend four years
later after users are complaining too much. I see this all the time when I
observe how my wife uses her computer – she’s constantly telling me that some
app gets slower or buggier the longer she uses it, until she gives up and
refreshes. Whenever I help her with her computer troubles, I feel like I have to
make excuses for my entire industry, for why we feel it’s acceptable to waste
our users’ time with shoddy, half-baked software.

Maybe I’m just a dreamer and an idealist, but I really enjoy putting that final
polish on something and feeling proud of what I’ve created. I notice, too, when
the software I use has that extra touch of love and care – and it gives me more
confidence in the product and the team behind it. When I press the back button
and it doesn’t work, I lose a bit of trust. When I press Esc on a modal and it
doesn’t close, I lose a bit of trust. And if an app keeps slowing down until I’m
forced to refresh, or if I notice the memory steadily creeping up, I lose a bit
of trust. I would like to think that fixing memory leaks is part of that extra
polish that won’t necessarily win you a lot of accolades, but your users will
subtly notice, and it will build their confidence in your software.

Thanks to Jake Archibald and Todd Reifsteck for feedback on a draft of this
post.



