
READ THE TEA LEAVES – SOFTWARE AND OTHER DARK ARTS, BY NOLAN LAWSON




9 Jun


THE COLLAPSE OF COMPLEX SOFTWARE

Posted by Nolan Lawson in software engineering. Tagged: complexity. 4 Comments

In 1988, the anthropologist Joseph Tainter published a book called The Collapse
of Complex Societies. In it, he described the rise and fall of great
civilizations such as the Romans, the Mayans, and the Chacoans. His goal was to
answer a question that had vexed thinkers over the centuries: why did such
mighty societies collapse?

In his analysis, Tainter found the primary enemy of these societies to be
complexity. As civilizations grow, they add more and more complexity: more
hierarchies, more bureaucracies, deeper intertwinings of social structures.
Early on, this makes sense: each new level of complexity brings rewards, in
terms of increased economic output, tax revenue, etc. But at a certain point,
the law of diminishing returns sets in, and each new level of complexity brings
fewer and fewer net benefits, dwindling down to zero and beyond.

But since complexity has worked so well for so long, societies are unable to
adapt. Even when each new layer of complexity starts to bring zero or even
negative returns on investment, people continue trying to do what worked in the
past. At some point, the morass they’ve built becomes so dysfunctional and
unwieldy that the only solution is collapse: i.e., a rapid decrease in
complexity, usually by abolishing the old system and starting from scratch.

What I find fascinating about this (besides the obvious implications for modern
civilization) is that Tainter could have been writing about software.

Anyone who’s worked in the tech industry for long enough, especially at larger
organizations, has seen it before. A legacy system exists: it’s big, it’s
complex, and no one fully understands how it works. Architects are brought in to
“fix” the system. They might wheel out a big whiteboard showing a lot of boxes
and arrows pointing at other boxes, and inevitably, their solution is… to add
more boxes and arrows. Nobody can subtract from the system; everyone just adds.

“EKS is being deprecated at the end of the month for Omega Star, but Omega Star
still doesn’t support ISO timestamps.” We’ve all been there. (Via Krazam)

This might go on for several years. At some point, though, an organizational
shakeup probably occurs – a merger, a reorg, the polite release of some senior
executive to go focus on their painting hobby for a while. A new band of
architects is brought in, and their solution to the “big diagram of boxes and
arrows” problem is much simpler: draw a big red X through the whole thing. The
old system is sunset or deprecated, the haggard veterans who worked on it either
leave or are reshuffled to other projects, and a fresh-faced team is brought in
to, blessedly, design a new system from scratch.

As disappointing as it may be for those of us who might aspire to write the kind
of software that is timeless and enduring, you have to admit that this system
works. For all its wastefulness, inefficiency, and pure mendacity (“The old code
works fine!” “No wait, the old code is terrible!”), this is the model that has
sustained a lot of software companies over the past few decades.

Will this cycle go on forever, though? I’m not so sure. Right now, the software
industry has been in a nearly two-decade economic boom (with some fits and
starts), but the one sure thing in economics is that booms eventually turn to
busts. During the boom, software companies can keep hiring new headcount to
manage their existing software (i.e. more engineers to understand more boxes and
arrows), but if their workforce is forced to contract, then that same system
may become unmaintainable. A rapid and permanent reduction in complexity may be
the only long-term solution.

One thing working in complexity’s favor, though, is that engineers like
complexity. Admit it: as much as we complain about other people’s complexity, we
love our own. We love sitting around and dreaming up new architectural diagrams
that can comfortably sit inside our own heads – it’s only when these diagrams
leave our heads, take shape in the real world, and outgrow the size of any one
person’s head that the problems begin.

It takes a lot of discipline to resist complexity, to say “no” to new boxes and
arrows. To say, “No, we won’t solve that problem, because that will just
introduce 10 new problems that we haven’t imagined yet.” Or to say, “Let’s go
with a much simpler design, even if it seems amateurish, because at least we can
understand it.” Or to just say, “Let’s do less instead of more.”

Simplicity of design sounds great in theory, but it might not win you many
plaudits from your peers. A complex design means more teams to manage more parts
of the system, more for the engineers to do, more meetings and planning
sessions, maybe some more patents to file. A simple design might make it seem
like you’re not really doing your job. “That’s it? We’re done? We can clock
out?” And when promotion season comes around, it might be easier to make a case
for yourself with a dazzling new design than a boring, well-understood solution.

Ultimately, I think whether software follows the boom-and-bust model, or a more
sustainable model, will depend on the economic pressures of the organization
that is producing the software. A software company that values growth at all
cost, like the Romans eagerly gobbling up more and more of Gaul, will likely
fall into the “add-complexity-and-collapse” cycle. A software company with more
modest aims, that has a stable customer base and doesn’t change much over time
(does such a thing exist?) will be more like the humble tribe that follows the
yearly migration of the antelope and focuses on sustainable, tried-and-true
techniques. (Whether such companies will end up like the hapless Gauls, overrun
by Caesar and his armies, is another question.)

Personally, I try to maintain a good sense of humor about this situation, and to
avoid giving in to cynicism or despair. Software is fun to write, but it’s also
very impermanent in the current industry. If the code you wrote 10 years ago is
still in use, then you have a lot to crow about. If not, then hey, at least
you’re in good company with the rest of us, who probably make up the majority of
software developers. Just keep doing the best you can, and try to have a healthy
degree of skepticism when some wild-eyed architect wheels out a big diagram with
a lot of boxes and arrows.

29 May


STATE IS HARD: WHY SPAS WILL PERSIST

Posted by Nolan Lawson in Web. Tagged: spas. Leave a Comment

When I write about web development, sometimes it feels like the parable of the
blind men and the elephant. I’m out here eagerly describing the trunk, someone
else protests that no, it’s a tail, and meanwhile the person riding on its back
is wondering what all the commotion is down there.

We’re all building so many different types of products using web technology –
e-commerce sites, productivity apps, blogs, streaming sites, video games, hybrid
mobile apps, dashboards on actual spaceships – that it gets difficult to even
have a shared vocabulary to describe what we’re doing. And each sub-discipline
of web development is so deep that it’s easy to get tunnel-visioned and forget
that other people are working with different tools and constraints.

This is what I like about blogging, though: it can help solve the problem of
“feeling out the elephant.” I can offer my own perspective, even if flawed, and
summon the human hive-mind to help describe the rest of the beast.

My last two posts have been a somewhat clumsy fumbling toward a new definition
of SPAs (Single-Page Apps) and MPAs (Multi-Page Apps), and why you’d choose one
versus the other when building a website. As it turns out, there is probably
enough here to fill a book, but my goal is just to bring my own point of view
(and bias) to the table and let others fill in the gaps with their comments and
feedback.

I have a few main biases on this topic:

 1. I usually prize performance over ergonomics. I’ll go for the more performant
    solution, even if it’s awkward or unintuitive.
 2. I like understanding how browsers work, and relying on the “browser-y” way
    of doing things rather than inventing my own prosthetic solution.
 3. I don’t pay nearly enough attention to what’s happening in “user land” – I
    like to stay “close to the metal” and see the world from the browser’s
    perspective. Show me your compiled code, not your source code!

In thinking about this topic and reading what others have written on it, one
thing that struck me is that a big attraction for SPAs is the same thing that
can cause so many problems: state. People who like SPAs often celebrate the fact
that an SPA maintains state between navigations. For instance:

 1. You have a search input. You type into it, click somewhere else to navigate,
    and the next page still has the text in the input.
 2. You have a scrollable sidebar. You scroll halfway down, click on something,
    and the next page still has the sidebar at the last scroll position.
 3. You have a list of expandable cards. You expand one of them, click somewhere
    else, and the next page still has the one card expanded.

Note that these kinds of examples are particularly important for so-called
“nested routes”, especially in complex desktop UIs. Think of sidebars, headers,
and footers that maintain their state while the rest of the UI changes. I find
it interesting that this is much less of an issue in mobile UIs, where it’s more
common to change (nearly) the whole viewport on navigation.

Managing state is one of the hardest things about writing software. And in many
ways, this aspect of state management is a great boon to SPAs. In particular,
you don’t have to think about persisting state between navigations; it just
happens automatically. In an MPA, you would have to serialize this state into
some persistent format (LocalStorage, IndexedDB, etc.) when the page unloads,
and then rehydrate on page load.
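
To make this concrete, here is a minimal sketch of the kind of manual
save-and-restore an MPA would need for just the search-input example above (my
own illustration, assuming a #search input and sessionStorage; a real app might
reach for IndexedDB instead):

// Save the input's value when the page is being unloaded...
const input = document.querySelector('#search');
window.addEventListener('pagehide', () => {
  sessionStorage.setItem('searchText', input.value);
});

// ...and rehydrate it on the next page load.
const saved = sessionStorage.getItem('searchText');
if (saved !== null) {
  input.value = saved;
}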

On the other hand, the fact that the state never gets blown away is exactly what
leads to memory leaks – a problem endemic to SPAs that I’ve already documented
ad nauseam. Plus, the further that the state can veer from a known good initial
value, the more likely you are to run into bugs, which is why a misbehaving SPA
often just needs a good refresh.

Interestingly, though, it’s not always the case that an MPA navigation lands on
a fresh state. As mentioned in a previous post, the back-forward cache (now
implemented in all browsers) makes this discussion more nuanced.


CACHE CONTENTS

A quick refresher: in modern browsers, the back-forward cache (or BF cache for
short) keeps a cache of the previous and next page when navigating between pages
on the same origin. This vastly reduces load times when navigating back and
forth through standard MPA pages.

But how exactly does this cache work? Even an MPA page can be very dynamic. What
if the page has been dynamically modified, or the DOM state has changed, or the
JavaScript state has changed? What does the browser actually cache?

To test this out, I wrote a simple test page. On this page, you can set state in
a variety of ways: DOM state, JavaScript heap state, scroll state. Then you can
click a link to another page, press the back button, and see what the browser
remembers.
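
The heart of the test is heap-only JavaScript state, along the lines of this
simplified sketch (not the actual test page; it assumes a button with the id
“clicker” and an ordinary link to a second page):

// State that lives only on the JavaScript heap – never written to the DOM,
// LocalStorage, or anywhere else. Click a few times, follow a normal link
// away, press the back button, and click again: if the count picks up where
// it left off, the page was restored from the BF cache rather than reloaded.
let clicks = 0;
document.querySelector('#clicker').addEventListener('click', () => {
  clicks++;
  console.log(`Clicked ${clicks} times`);
});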

As it turns out, the browser remembers a lot. I tested this in various browsers
(Chrome/Firefox/Safari on desktop, Chrome/Firefox on Android, Safari on iOS),
and saw the same result in all of them: the full page state is maintained after
pressing the back button. Here is a video demonstration:

BF Cache demonstration

Note that the scroll positions on both the main document and the subscroller are
preserved. More impressively, JavaScript state that isn’t even represented in
the DOM (here, the number of times a button was clicked) is also preserved.

Now, to be clear: this doesn’t solve the problem of maintaining state in normal
forward navigations. Everything I said above about MPAs needing to serialize
their state would apply to any navigation that isn’t cached. Also, this behavior
may vary subtly between browsers, and their heuristics might not work for your
website. But it is impressive that the browser gives you so much out-of-the-box.


CONCLUSION

There are dozens of reasons to reach for an SPA technology, MPA technology, or
some blend of the two. Everything depends on the needs and constraints of what
you’re trying to build.

In these past few posts, I’ve tried to shed light on some interesting changes to
MPAs that have happened under our very feet, while we might not have noticed.
These changes are important, and may shift the calculus when trying to decide
between an SPA or MPA architecture. To be fair, though, SPAs haven’t stopped
moving either: experimental browser APIs like the Navigation API are even trying
to solve longstanding problems of focus and scroll management. And of course,
frameworks are still innovating on both SPAs and MPAs.

The fact that SPAs neatly simplify so many aspects of application development –
keeping state in one place, on the main thread, persistent across navigations –
is one of their greatest strengths as well as a predictable wellspring of
problems. Performance and accessibility wonks can continue harping on the
problems of SPAs, but at the end of the day, if developers find it easier to
code an SPA than the equivalent MPA, then SPAs will continue to be built. Making
MPAs more capable is only one way of solving the problem: approaching things
from the other end – such as improved tooling, guidance, and education for SPA
developers – can also work toward the same end goal.

As tempting as it may be to pronounce one set of tools as dead and another as
ascendant, it’s important to remain humble and remember that everyone is working
under a different set of constraints, and we all have a different take on web
development. For that reason, I’ve come around to the conclusion that SPAs are
not going anywhere anytime soon, and will probably remain a compelling
development paradigm for as long as the web is around. Some developers will
choose one perspective, some will choose another, and the big, beautiful
elephant will continue lumbering forward.

25 May


MORE THOUGHTS ON SPAS

Posted by Nolan Lawson in Web. Tagged: spas. 9 Comments

My last post (“The balance has shifted away from SPAs”) attracted a fair amount
of controversy, so I’d like to do a follow-up post with some clarifying points.

First off, a definition. In some circles, “SPA” has colloquially come to mean
“website with tons of JavaScript,” which brings its own set of detractors, such
as folks who just don’t like JavaScript very much. This is not at all what I
mean by “SPA.” To me, an SPA is simply a “Single-Page App,” i.e. a website with
a client-side router, where every navigation stays on the same HTML page rather
than loading a new one. That’s it.

It has nothing to do with the programming model, or whether it “feels” like
you’re coding a Single-Page App. By my definition, Turbolinks is an SPA
framework, even if, as a framework user, you never have to dirty your hands
touching any JavaScript. If it has a client-side router, it’s an SPA.

Second, the point of my post wasn’t to bury SPAs and dance on their grave. I
think SPAs are great, I’ve worked on many of them, and I think they have a
bright future ahead of them. My main point was: if the only reason you’re using
an SPA is because “it makes navigations faster,” then maybe it’s time to
re-evaluate that.

Jake Archibald already showed way back in 2016 that SPA navigations are not
faster when the page is loading lots of HTML, because the browser’s streaming
HTML parser can paint above-the-fold content before the SPA can even finish
downloading the full-fat JSON (or HTML) and manually injecting it into the DOM.
(Unless you’re doing some nasty hacks, which you probably aren’t.) In his
example, GitHub would be better off just doing a classic server round-trip to
fetch new HTML than a fancy Turbolinks SPA navigation.

That said, my post did generate some thoughtful comments and feedback, and it
got me thinking about whether there are other reasons for SPAs’ recent decline
in popularity, and why SPAs could still remain an attractive choice in the
future for certain types of websites.


CORE WEB VITALS

In 2020, Google announced that the Core Web Vitals would become a factor in
search page rankings. I think it’s fair to say that this sent shockwaves through
the industry, and caused folks who hadn’t previously taken performance very
seriously to start paying close attention to their site speed scores.

It’s important to notice that the Core Web Vitals are very focused on page load.
LCP (Largest Contentful Paint) and FID (First Input Delay) both apply only to
the user experience during the initial navigation. (CLS, or Cumulative Layout
Shift, applies to the initial navigation and beyond; see note below.) This makes
sense for Google: they don’t really care how fast your site is after that
initial page load; they mostly just care about the experience of clicking a link
in Google and loading the subsequent page.

Regardless of whether these metrics are an accurate proxy for the user
experience, they are heavily biased against SPAs. The whole value proposition of
SPAs (from a performance perspective at least) is that you pay a large upfront
cost in exchange for faster subsequent interactions (that’s the theory anyway).
With these metrics, Google is penalizing SPAs if they render client-side (LCP),
load a lot of JavaScript (FID), or render content progressively on the client
side (CLS).

A classic MPA (Multi-Page App) with a dead-simple HTML file and no JavaScript
will score very highly on Core Web Vitals. Miško Hevery, the creator of Qwik,
has explicitly mentioned Core Web Vitals as an influence on how he designed his
framework. Especially for websites that are very sensitive to SEO scores, such
as e-commerce sites, the Core Web Vitals are pushing developers away from SPAs.

Update: This post originally stated that CLS applies only to the initial
navigation; it turns out that it applies to the full page lifespan. (The
heuristics are pretty complex; you can read about them here.) I think my point
still stands, though, that an MPA with no JavaScript (and no unsized images or
iframes, poorly sized fonts, or other mistakes) should easily get a great CLS
score.


CODE CACHING

This was something I forgot to mention in my post, probably because it happened
long enough ago that it couldn’t possibly have had an impact on the recent
uptick in MPA interest. But it’s worth calling out.

When you navigate between pages in an MPA, the browser is smart enough not to
parse and compile the same JavaScript over and over again. Chrome does it,
Firefox does it, Safari does it. All modern browsers have some variation on
this. (Legacy Edge and IE, may they rest in peace, did not have this.)
Incidentally, this optimization also exists for stylesheet parsing (WebKit bug
from 2012, Firefox bug, demo).

So if you have the same shared JavaScript and CSS on multiple MPA pages, it’s
not a big deal in terms of subsequent navigations. At worst, you’re asking the
browser to re-parse and re-render your HTML, re-run style and layout calculation
(which would happen in an SPA anyway, although to a lesser degree thanks to
techniques like invalidation sets), and re-run JavaScript execution. (In a
well-built MPA, though, you should not have much JavaScript on each page.)

Throw in paint holding and the back-forward cache (as discussed in my previous
post), as well as the streaming HTML mentioned above, and you can see why the
value proposition of “SPA navigations are fast” is not so true anymore. (Maybe
it’s true in certain cases, e.g. where the DOM being updated is very small. But
is it so much faster that it’s worth the added complexity of a client-side
router?)

Update: It occurred to me that a good use case for this kind of SPA navigation
is a settings page, dashboard, or some other complex UI with nested routes – in
that case, the updated DOM might be very small indeed. There’s a good
illustration of this in the Next.js Layouts RFC. As with everything in software,
it’s all about tradeoffs.


SERVICE WORKER AND OFFLINE MPAS

One interesting response to my post was, “I like SPAs because they preserve
privacy, and keep all the user data client-side. My site can just be static
files.” This is a great point, and it’s actually one of the reasons I wrote my
Mastodon client, Pinafore, as an SPA.

But as I mentioned in my post, there’s nothing inherent about the SPA
architecture that makes it the only option for handling user data purely on the
client side. You could make a fully offline-powered MPA that relies on the
Service Worker to handle all the rendering. (Here is an example implementation I
found.)
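
The core of the idea is small enough to sketch (a hypothetical sw.js of my own,
with a made-up renderPage() standing in for wherever your app logic lives):

// sw.js – every same-origin navigation gets plain HTML rendered inside the
// Service Worker, so each page itself is just a classic MPA document.
self.addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    const url = new URL(event.request.url);
    event.respondWith(
      renderPage(url.pathname).then(
        (html) => new Response(html, { headers: { 'Content-Type': 'text/html' } })
      )
    );
  }
});

// Hypothetical renderer – in practice, this is where a framework would run.
async function renderPage(pathname) {
  return `<!DOCTYPE html><title>${pathname}</title><h1>Rendered in the Service Worker</h1>`;
}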

I admit though, that this was one of the weaker arguments in my post, because as
far as I can tell… nobody is actually doing this. Most frameworks I’m aware of
that generate a Service Worker also generate a client-side router. The Service
Worker is an enhancement, but it’s not the main character in the story. (If you
know a counter-example, though, then please let me know!)

I think this is actually a very under-explored space in web development. I was
pitching this Service-Worker-first architecture back in 2016. I’m still hopeful
that some framework will start exploring this idea eventually – the recent focus
on frameworks supporting server-side JavaScript environments beyond Node (such
as Cloudflare Workers) should in theory make this easier, because the Service
Worker is a similarly-constrained JavaScript environment. If a framework can
render from inside a Cloudflare Worker, then why not a Service Worker?

This architecture would have a lot of upsides:

 1. No client-side router, so no need to implement focus management, scroll
    restoration, etc.
 2. You’d also still get the benefits of paint holding and the back-forward
    cache.
 3. If you open multiple browser tabs pointing to the same origin, each page
    will avoid the full-SPA JavaScript load, since the main app logic has
    already been bootstrapped in the Service Worker. (One Service Worker serves
    multiple tabs for the same origin.)
 4. The Service Worker can use ReadableStreams to get the benefits of the
    browser’s progressive HTML parser, as described above.
 5. Memory leaks? I’ve harped on this a lot in the past, and admittedly, this
    wouldn’t fully solve the problem. You’d probably just move the leaks into
    the Service Worker. But a Service Worker has a fire-and-forget model, so the
    browser could easily terminate it and restart it if it uses up too much
    memory, and the user might never notice.

This architecture does have some downsides, though:

 1. State is spread out between the Service Worker and the main thread, with
    asynchronous postMessage required for communication.
 2. You’d be limited to using IndexedDB and caches to store persistent state,
    since you’d need something accessible to the Service Worker – no more
    synchronous LocalStorage.
 3. In general, the simplified app development model of an SPA (all state is
    stored in one place, on the main thread, available synchronously) would be
    thrown out the window.
 4. No framework that I’m aware of is doing this.

I still think the performance and simplicity upsides of this model are worth at
least prototyping, but again, it remains to be seen if the DX (Developer
Experience) is seamless enough to make it viable in practice.


THE VIRTUES OF SPAS

So given everything I’ve said about SPAs – paint holding, the back-forward
cache, Core Web Vitals – why might you still want to build an SPA in 2022? Well,
to give a somewhat hand-wavy answer, I think there are a lot of cases where an
SPA is a good choice:

 1. You’re building an app where the holotype matches the right use case for an
    SPA – e.g. only one browser tab is ever open at a time, page loads are
    infrequent, content is very dynamic, etc.
 2. Core Web Vitals and SEO are not a big concern for you, e.g. because your app
    is behind a login gate.
 3. There’s a feature you need that’s only available in SPAs (e.g. an
    omnipresent video player, as mentioned in the previous post).
 4. Your team is already productive building an SPA, because that’s what your
    favorite framework supports.
 5. You just like SPAs! That’s fine! I’m not going to take them away from you, I
    promise.

That said, my goal with the previous post was to start a conversation
challenging some of the assumptions that folks have about SPAs. (E.g. “SPA
navigations are always faster.”) Oftentimes in the tech industry we do things
just because “that’s how things have always been done,” and we don’t stop to
consider if the conditions that drove our previous decisions have changed.

The only constant in software is change. Browsers have changed a lot over the
years, but in many ways our habits as web developers have not really adjusted to
fit the new reality. There’s a lot of prototyping and research yet to be done,
and the one thing I’m sure of is that the best web apps in 10 years will look a
lot different from the best web apps built today.

Next post: State is hard: why SPAs will persist

21 May


THE BALANCE HAS SHIFTED AWAY FROM SPAS

Posted by Nolan Lawson in Web. Tagged: spas. 17 Comments

There’s a feeling in the air. A zeitgeist. SPAs are no longer the cool kids they
once were 10 years ago.

Hip new frameworks like Astro, Qwik, and Elder.js are touting their MPA
capabilities with “0kB JavaScript by default.” Blog posts are making the rounds
listing all the challenges with SPAs: history, focus management, scroll
restoration, Cmd/Ctrl-click, memory leaks, etc. Gleeful potshots are being taken
against SPAs.

I think what’s less discussed, though, is how the context has changed in recent
years to give MPAs more of an upper hand against SPAs. In particular:

 1. Chrome implemented paint holding – no more “flash of white” when navigating
    between MPA pages. (Safari already did this.)
 2. Chrome implemented back-forward caching – now all major browsers have this
    optimization, which makes navigating back and forth in an MPA almost
    instant.
 3. Service Workers – once experimental, now effectively 100% available for
    those of us targeting modern browsers – allow for offline navigation without
    needing to implement a client-side router (and all the complexity therein).
 4. Shared Element Transitions, if accepted and implemented across browsers,
    would also give us a way to animate between MPA navigations – something
    previously only possible (although difficult) with SPAs.

This is not to say that SPAs don’t have their place. Rich Harris has a great
talk on “transitional apps,” which outlines some reasons you may still want to
go with an SPA. For instance, you might want an omnipresent element that
survives page navigations, such as an audio/video player or a chat widget. Or
you may have an infinite-loading list that, on pressing the back button, returns
to the previous position in the list.

Even teams that are not explicitly using these features may still choose to go
with an SPA, just because of the “unknown” factor. “What if we want to implement
navigation animations some day?” “What if we want to add an omnipresent video
player?” “What if there’s some customization we want that’s not supported by
existing browser APIs?” Choosing an MPA is a big architectural decision that may
effectively cut off the future possibility of taking control of the page in
cases where the browser APIs are not quite up to snuff. At the end of the day,
an SPA gives you full control, and many teams are hesitant to give that up.

That said, we’ve seen a similar scenario play out before. For a long time,
jQuery provided APIs that the browser didn’t, and teams that wanted to sleep
soundly at night chose jQuery. Eventually browsers caught up, giving us APIs
like querySelector and fetch, and jQuery started to seem like unnecessary
baggage.

I suspect a similar story may play out with SPAs. To illustrate, let’s consider
Rich’s examples of things you’d “need” an SPA for:

 * Omnipresent chat widget: use Shared Element Transitions to keep the widget
   painted during MPA navigations.
 * Infinite list that restores scroll position on back button: use
   content-visibility and maybe store the state in the Service Worker if
   necessary.
 * Omnipresent audio/video player that keeps playing during navigations: not
   possible today in an MPA, but who knows? Maybe the Picture-in-Picture API
   will support this someday.

To be clear, though, I don’t think SPAs are going to go away entirely. I’m not
sure how you could reasonably implement something like Photoshop or Figma as an
MPA. But if new browser APIs and features keep landing that slowly chip away at
SPAs’ advantages, then more and more teams in the future will probably choose to
build MPAs.

Personally I think it’s exciting that we have so many options available to us
(and they’re all so much better than they were 10 years ago!). I hope folks keep
an open mind, and keep pushing both SPAs and MPAs (and “transitional apps,” or
whatever we’re going to call the next thing) to be better in the future.

Follow-up: More thoughts on SPAs

8 Apr


THE STRUGGLE OF USING NATIVE EMOJI ON THE WEB

Posted by Nolan Lawson in Web. 19 Comments

Emoji are a standard overseen by the Unicode Consortium. The web is a standard
governed by bodies such as the W3C, WHATWG, and TC39. Both emoji and the web are
ubiquitous.

So you might be forgiven for thinking that, in 2022, it’s possible to plop an
emoji on a web page and have it “just work”:



If you see a lotus flower above, then congratulations! You’re on a browser or
operating system that supports Emoji 14.0, released in September 2021. If not,
you might see something that looks like the scoreboard on an old 80’s arcade
game:

Another apt description would be “robot barf.”

Let’s try another one. What does this emoji look like to you?



If you see a face with spiral eyes, then wonderful! Your browser can render
Emoji 13.1, released in September 2020. If not, you might see a puzzling
combination of face with crossed-out eyes and a shooting (“dizzy”) star:



It’s a fun bit of cartoon iconography to know that this combination means “dizzy
face,” but for most folks, it doesn’t really evoke the same meaning. It’s not
much better than the robot barf.


EMOJI AND BROWSER SUPPORT

If you’re like me, you’re a minimalist when it comes to web development. If I
don’t have to rebuild something from scratch, then I’ll avoid doing so. I try to
“use the platform” as much as possible and lean on existing web standards and
browser capabilities.

When it comes to emoji, there are a lot of potential upsides to using the
platform. You don’t need to bring your own heavy emoji font, or use a
spritesheet, or do any manual DOM processing to replace text with <img>s. But
sadly, if you try to avoid these heavy-handed techniques and just, you know, use
emoji on the web, you’ll quickly run into the kinds of problems I describe
above.

The first major problem is that, although emoji are released by the Unicode
Consortium at a yearly cadence, OSes don’t always update in a timely manner to
add the latest-and-greatest characters. And the browser, in most cases, is
beholden to the OS to render whatever emoji fonts are provided by the underlying
system (e.g. Apple Color Emoji on iOS, Microsoft Segoe Color Emoji on Windows,
etc.).

In the case of major releases (such as Emoji 14.0), a missing character means
the “robot barf” shown above. In the case of minor releases (such as Emoji
13.1), it can mean that the emoji renders as a bizarre “double” emoji – some of
my favorites include “man with floating wig of red hair” () for “man with red
hair” () and “bear with snowflake” () for “polar bear” ().

If I’m trying to convince you that native emoji are worth investing in for your
website, I’ve probably lost half my audience at this point. Most chat and social
media app developers would prefer to have a consistent experience across all
browsers and devices – not a broken experience for some users. And even if the
latest emoji were perfectly supported across devices, these developers may still
prefer a uniform look-and-feel, which is why vendors like Twitter, Facebook, and
WhatsApp actually design their own emoji fonts.


DETECTING BROKEN EMOJI

Let’s say, though, that you’re comfortable with emoji looking different on
different platforms. After all – maybe Apple users would prefer to see Apple
emoji, and Windows users would prefer to see Windows emoji. And in any case,
you’d rather not reinvent what the OS already provides. What do you have to do
in this case?

Well, first you need a way to detect broken emoji. This is actually much harder
than it sounds, and basically boils down to rendering the emoji to a <canvas>,
testing that it has an actual color, and also testing that it doesn’t render as
two separate characters. (is-emoji-supported is a decent JavaScript library that
does this.)
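
Here is roughly what the canvas technique looks like, reduced to its essentials
(a sketch of the general approach, not the is-emoji-supported source):

// Draw the emoji to a tiny canvas and look for any colored (non-grayscale)
// pixel. A fuller check would also measure the text width to make sure the
// emoji doesn't render as two separate glyphs.
function supportsEmoji(emoji) {
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = 20;
  const ctx = canvas.getContext('2d');
  ctx.textBaseline = 'top';
  ctx.font = '16px sans-serif';
  ctx.fillText(emoji, 0, 0);
  const { data } = ctx.getImageData(0, 0, 20, 20);
  for (let i = 0; i < data.length; i += 4) {
    const [r, g, b, a] = [data[i], data[i + 1], data[i + 2], data[i + 3]];
    if (a > 0 && (r !== g || g !== b)) {
      return true; // found a colored pixel – the emoji font rendered it
    }
  }
  return false;
}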

This solution has a few downsides. First off, you now need to run JavaScript
before rendering any text – with all the problems therein for SSR, performance,
etc. Second, it doesn’t actually solve the problem – it just tells you that
there is a problem. And it might not even work – I’ve seen this technique fail
in cross-origin iframes in Firefox, presumably because the <canvas> triggered
the browser’s fingerprinting detection.

But again, let’s just say that you’re comfortable with all this. You detect
broken emoji and perhaps replace them with text saying “emoji not supported.” Or
maybe you want a more graceful degradation, so you include half a megabyte of
JSON data describing every emoji ever created, so that you can actually show
some text to describe the emoji. (Of course, that file is only going to get
bigger, and you’ll need to update it every year.)

I know what you’re thinking: “I just wanted to show an emoji on my web page. Why
do I have to know everything about emoji?” But just wait: it gets worse.


BLACK-AND-WHITE OLDER EMOJI

Okay, so now you’re successfully detecting whether an emoji is supported, so you
can hide or replace those newfangled emoji that are causing problems. But would
it occur to you that the oldest emoji might be problematic too?



This is the classic smiling face emoji. But depending on your browser, instead
of the more familiar full-color version, you might see a simple black-and-white
smiley. In case you don’t see it, here is a comparison, and here’s how it looks
in Chrome on Windows:



You’ll also see this same problem for some other older emoji, such as red heart
() and heart suit (♥️), which both render as black hearts rather than red ones.

So how can we render these venerable emoji in glorious Technicolor? Well, after
a lot of trial-and-error, I’ve landed on this CSS:

div {
  font-family: "Twemoji Mozilla",
               "Apple Color Emoji",
               "Segoe UI Emoji",
               "Segoe UI Symbol",
               "Noto Color Emoji",
               "EmojiOne Color",
               "Android Emoji",
               sans-serif;
}

Basically, what we have to do is point the font-family at a known list of
built-in emoji fonts on various operating systems. This is similar to the
“system font” trick.

If you’re wondering what “Twemoji Mozilla” is, well, it turns out that Firefox
is a bit odd in that it actually bundles its own version of Twitter’s Twemoji
font on Windows and Linux. This will be important later, but let’s set it aside
for now.


WHAT IS AN EMOJI, ANYWAY?

At this point, you may be getting pretty tired of this blog post. “Nolan,” you
might say, “why don’t you just tell me what to do? Just give me a snippet I can
slap onto my website to fix all these dang emoji problems!” Well I wish it were
as simple as just chucking a CSS font-family onto your body and calling it a
day. But if you try that naïve approach, you’ll start to see some bizarre
characters:



As it turns out, characters like the asterisk (*), octothorpe (#), trademark
(™), and even the numbers 0-9 are technically emoji. And depending on your
browser and OS, the system emoji font will either not render them at all, or it
might render them as the somewhat-cartoony versions you see above.

Maybe to some folks it’s acceptable for these characters to be rendered as
emoji, but I would wager that the average person doesn’t consider these numbers
and symbols to be “emoji.” And it would look odd to treat them like that.

So all right, some “emoji” are not really emoji. This means we need to ensure
that some characters (like the smiley face) render using the system emoji font,
whereas other kinda-sorta emoji characters (like * and #) don’t. Potentially you
could use a JavaScript tool like emoji-regex or a CSS tool like
emoji-unicode-range to manage this, but in my experience, neither one handles
all the various edge cases (nor have I found an off-the-shelf solution that
does). And either way, it’s starting to feel pretty far from “use the platform.”


WINDOWS WOES

I could stop right here, and hopefully I’ve made the point that using native
emoji on the web is a painful experience. But I can’t help mentioning one more
problem: flag emoji on Windows.

As it turns out, Microsoft’s emoji font does not have country flags on either
Windows 10 or Windows 11. So instead of the US flag emoji, you’ll just see the
characters “US” (and the equivalent country codes for other flags). Microsoft
might have a good geopolitical reason to do this (although they’d have to
explain why no other emoji vendor follows suit), but in any case, it makes it
hard to talk about sports matches or national independence days.

Flag emoji in Chrome on Windows. You can have the pirate flag, you can have the
race car flag, but you can’t root for Argentina vs Brazil in a soccer match.

Interestingly, this problem is actually solvable in Firefox, since they ship
their own “Mozilla Twemoji” font (which, furthermore, tends to stay more
up-to-date than the built-in Microsoft font). But the most popular browser
engine on Windows, Chromium, does not ship their own emoji font and doesn’t plan
to. There’s actually a neat tool called country-flag-emoji-polyfill that can
detect the broken flag support and patch in a minimal Twemoji font to fix it,
but again, it’s a shame that web developers have to jump through so many hoops
to get this working.

(At this point, I should mention that the Unicode Consortium themselves have
come out against flag emoji and won’t be minting any more. I can understand the
sentiment behind this – a font consortium doesn’t want to be in the business of
adjudicating geopolitical boundaries. But in my opinion, the cat’s already out
of the bag. And it seems bizarre that Wales and Scotland get their own flag, but
no other countries, states, provinces, municipalities, duchies, earldoms, or
holy empires ever will. It seems guaranteed to lead to an explosion of
non-standard vendor-specific flags, which is already happening according to
Emojipedia.)


CONCLUSION

I could go on. I really could. I could talk about the sad state of browser
support for color fonts, or how to avoid mismatched emoji fonts in Firefox, or
subtle issues with measuring emoji width on Windows, or how you need to install
a separate package for emoji to work at all in Chrome on Linux.

But in the end, my message is a simple one: I, as a web developer, would like to
use emoji on my web sites. And for a variety of reasons, I cannot.

I built an emoji picker called emoji-picker-element. This is what it would look
like if I didn’t bend over backwards to fix emoji problems.

At a time when web browsers have gained a staggering array of new capabilities –
including Bluetooth, USB, and access to the filesystem – it’s still a struggle
to render a smiley face. It feels a bit odd to argue in 2022 that “the web
should have emoji support,” and yet here I stand, cap in hand, making my case.

You might wonder why browsers have been so slow to fix this problem. I suspect
part of it is that there are ready workarounds, such as twemoji, which parses
the DOM to look for emoji sequences and replaces them with <img>s. The fact that
this technique isn’t great for performance (downloading extra images, processing
the DOM and mutating it, needing to run JavaScript at all) might seem
unimportant when you consider the benefits (a unified look-and-feel across
devices, up-to-date emoji support).
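
For reference, that workaround is about as simple as it gets from the
developer’s point of view, which helps explain its popularity:

// Walk the DOM and replace any emoji characters with <img> tags pointing at
// Twitter's emoji artwork. It needs to be re-run after any DOM update.
import twemoji from 'twemoji';

twemoji.parse(document.body);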

Part of me also wonders if this is one of those cases where the needs of larger
entities have eclipsed the needs of smaller “mom-and-pop” web shops. A
well-funded tech company building a social media app with a massive user base
has the resources to handle these emoji problems – heck, they might even design
their own emoji font! Whereas your average small-time blogger, agency, or studio
would probably prefer for emoji to “just work” without a lot of heavy lifting.
But for whatever reason, their voices are not being heard.

What do I wish browsers would do? I don’t have much of a grand solution in mind,
but I would settle for browsers following the Firefox model and bundling their
own emoji font. If the OS can’t keep its emoji up-to-date, or if it doesn’t want
to support certain characters (like country flags), then the browser should fill
that gap. It’s not a huge technical hurdle to bundle a font, and it would help
spare web developers a lot of the headaches I listed above.

Another nice feature would be some sensible way to render what are colloquially
known as “emoji” as emoji. So for instance, the “smiley face” should be rendered
as emoji, but the numbers 0-9 and symbols like * and # should not. If backwards
compatibility is a concern, then maybe we need a new CSS property along the
lines of text-rendering: optimizeForLegibility – something like emoji-rendering:
optimizeForCommonEmoji would be nice.

In any case, even if this blog post has only served to dissuade you from ever
trying to use native emoji on the web, I hope that I’ve at least done a decent
job of summarizing the current problems and making the case for browsers to help
solve it. Maybe someday, when browsers everywhere can render a smiley face, I
can write something other than :-) to show my approval.

Update: At some point, WordPress started automatically converting emoji in this
blog post to <img>s. I’ve replaced some of the examples with CodePens to make it
clearer what’s going on. Of course, the fact that WordPress feels compelled to
use <img>s instead of native emoji kind of proves my point.

2 Feb


FIVE YEARS OF QUITTING TWITTER

Posted by Nolan Lawson in social media. Tagged: social media. 9 Comments

It’s been almost five years since I deleted my Twitter account. I didn’t just
delete the app or deactivate – I deleted my whole account and my entire tweet
history, lighting a match and burning the bridge behind me.

I don’t want to pretend to be some kind of seer, but since then, divesting
yourself from social media has become a somewhat fashionable lifestyle choice.
For a certain type of person, it’s the kind of pro-mental health, self-care kind
of thing you might do along with going vegan or taking up Vipassana meditation.
(To make it clear that I’m not above such intellectual trendiness, I’ve tried
all those things too.)

In this post, I want to talk honestly about the good and the bad that comes with
deleting your Twitter account, from the perspective of a tech guy who’s plugged
in to several different software communities (open source, web development,
Node.js, etc.).


THE GOOD

Let’s start off with the good stuff. Twitter is no longer what I check first
thing in the morning and the last thing before I go to sleep. In fact, I
instituted a personal rule to charge my phone outside of the bedroom altogether
so that I’m not tempted to read it in bed. (I don’t always hold fast to this.)

I have my RSS feed, I have Hacker News, and I have various news outlets (Ars
Technica, Wired, etc.), so there’s plenty for me to read on the internet. But
unlike Twitter, I actually run out of stuff to read and eventually get bored
with my phone. I consider this a plus, even if it ends up driving me towards
other screens – video games in particular. But even if my lofty goal is to spend
more time reading books or riding my bike, I still consider time spent with my
Switch or doing crossword puzzles to be time better spent than flicking through
social media.

I also disabled all notifications on my phone except for IM and email, which
helps reduce the neediness of my little pocket Tamagotchi. IM notifications are
invaluable for keeping up with family and friends, but my email notifications
are still sometimes a source of stress, so I try to unsubscribe as much as
possible from any newsletters, automated updates, and other bullshit. If my
email is going to buzz in my pocket and show me a notification, I want it to be
something important.

I still have a Mastodon account, and I still host a Mastodon server at
toot.cafe, but I’m not very active anymore. I mostly treat it as a write-only
medium. My reasons for this are various, but basically I’ve become less of a
booster of Mastodon (and the fediverse in general) over time. It’s a neat idea,
and it still works pretty well for the cohort of hardcore techies and
tech-adjacent folks who seem to be there, but I just don’t find it super
interesting any more. Sometimes I think of Mastodon as my Twitter nicotine patch
– it sorta feels like Twitter, it scratches the same itch, but it’s just not
nearly as compelling.


THE BAD

If you’re the kind of techie who uses social media to connect with your peers
and build your personal brand – the kind of person who speaks at conferences,
writes blog posts, talks on podcasts, etc. – then quitting Twitter is a terrible
idea. My blog posts get less traffic than they used to, I don’t get invited to
as many conferences anymore, and even when I do give the odd podcast interview,
there’s always an awkward moment when they ask for my Twitter handle, or which
social media account they should direct traffic and followers to. (I dunno,
GitHub? I think I have a Reddit account?)

My main public outlet these days is my blog, and from looking at the WordPress
stats, my overall traffic has taken a hit since I quit Twitter. I’ve kind of
ceased to exist for a certain segment of my (former) audience, and for the rest,
I only exist when someone takes pity on me and links to my blog from Twitter,
Reddit, Hacker News, or a big site like CSS Tricks. (I don’t abstain from Reddit
or Hacker News, but I’m also not super active there.) It feels kind of weird to
have quit Twitter, and yet to relish the traffic spike from a well-timed Twitter
mention.

For those people who are re-sharing my content on social media, I suspect most
of them found it from their RSS feed. So RSS definitely still seems alive and
well, even if it’s just a small upstream tributary for the roaring downstream
river of Twitter, Reddit, etc.

Another odd downside of deleting your Twitter account is that, after a cool-down
period, someone can grab your Twitter handle. I didn’t realize this was a thing,
so someone has squatted on my old Twitter name, presumably because they hope to
re-sell it later, or maybe because they want the SEO juice? I have no idea. I
would be mad about it, but the fact that this account exists (and my old
mentions on Twitter still link to it!) makes Twitter a slightly shittier place,
so in my own petty way, I’m kind of glad it exists.


THE MIXED BAG

Some things that I miss from Twitter are both good and bad. Twitter is a
sprawling global conversation, and a lot of the important debates in web
development (client-rendering vs server-rendering, web components vs frameworks,
etc.) were born and thrive there. I miss out on a lot of those debates, and many
of them could serve as good fodder for a thoughtful blog post or open-source
project, so I regret not having the creative spark that comes from those
conversations.

The problem is that a lot of these debates are, in my opinion, either trivial or
manufactured. Twitter (like all social media) is an outrage machine, designed to
goose engagement using whatever means the algorithm finds through blind
optimization. I fully believe that phenomena like “the great divide” in web
development wouldn’t exist without Twitter, and to the extent that it does exist
in the “real world,” it’s only because it was hatched on Twitter before
infecting the rest of us. Social media engagement thrives when it finds a wedge
to drive between two parts of a community, where it can cause incendiary content
to cross-pollinate from one camp to another, creating an endless cycle of
irritation, condemnation, dunking, and flaming.

Occasionally in my RSS feed I’ll read a post that starts off by saying, “There’s
been a huge debate about…” or “There’s been a recent controversy over…” and then
eventually I realize the whole post is about some Twitter beef. I don’t miss
being on the front lines of these kind of battles, but I do think some of these
debates are worth having, so I have mixed feelings about it.


CONCLUSION

I don’t plan on coming back to Twitter. Mostly because I just don’t need it
anymore – I’m not super active at conferences or meetups, I don’t have a
workshop or service I need to sell, and so there’s little professional reason
for me to be there. I like posting on my blog, but I can only hope that my
content gets attention in direct correlation to the value that people derive
from it. If I write a good blog post, people will read it. I try to focus on
that and that alone.

Honestly, even that lifestyle – writing blog posts, watching it occasionally
blow up on Hacker News and Reddit, reading occasionally scathing comments – is
hard enough on my mental health. Whenever I write a blog post these days, I have
a period of anxiety and dread where I worry about the potential backlash. I
mitigate that a bit by carefully editing my posts to remove anything that could
be misconstrued, and to occasionally have some trusted friends review a draft
(thank you all!), but frankly it’s a bit sad that I even do this, because my
writing has gotten decidedly more boring over the years.

Sometimes I go back and read my blog posts from 2014 and marvel at how
freewheeling, irreverent, and downright joyful my writing was. I don’t really
write like that anymore, because social media (and the internet in general) have
conditioned me to constantly fret over negative attention. So I act as my own PR
firm, carefully focus-testing and bowdlerizing my prose until it’s as dry as a
slice of burnt toast. Sometimes I can escape from this trap a little bit (like
I’m trying to do right now), but overall I worry that my writing has gotten
worse, not better, over time. (Another worry!)

So given my inherent worry-prone nature about posting content on the internet,
Twitter is probably just not right for me. The high I would get from seeing a
tweet go viral and getting adulation from my peers just doesn’t outweigh the
anxiety, the sleeplessness, or the careful tiptoeing and sanitization of my
thoughts that come with heavy social media use. I’m already bad enough with that
as it is, just with my blog; coming back to Twitter would dial that up to 11.

So I deleted my Twitter account, and I plan to keep it that way. Should you do
the same? Well, I dunno. If you need it for your livelihood, then decidedly not.
You should probably just see Twitter as a necessary evil and try to insulate
yourself from the bad parts while profiting from the good parts. If you’re a
casual user, then maybe you’ve already figured out a healthy way to live with
Twitter (curating your feed, turning off the algorithmic timeline, whatever),
and if so – good for you! For me, I have too much stubbornness and too little
faith in my own ability to manage my social media addiction to want to give
Twitter a second try.

5 Jan


MEMORY LEAKS: THE FORGOTTEN SIDE OF WEB PERFORMANCE

Posted by Nolan Lawson in performance, Web. Tagged: performance. 16 Comments

I’ve researched and learned enough about client-side memory leaks to know that
most web developers aren’t worrying about them too much. If a web app leaks 5 MB
on every interaction, but it still works and nobody notices, then does it
matter? (Kinda sounds like a “tree in the forest” koan, but bear with me.)
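
To be clear about what “leaks 5 MB on every interaction” means in practice,
here is a typical culprit, reduced to a sketch (my own illustration, not taken
from any particular framework):

// A component subscribes to a long-lived object (window) when it mounts, but
// forgets to unsubscribe when it unmounts. Every mount/unmount cycle pins
// another copy of the component – and everything it references – in memory,
// because window still holds the listener.
class SidebarComponent {
  constructor() {
    this.cache = new Array(100000).fill('some big data');
    this.onResize = () => this.render();
    window.addEventListener('resize', this.onResize);
  }
  render() { /* re-render on resize */ }
  unmount() {
    // Missing: window.removeEventListener('resize', this.onResize);
  }
}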

Even those who have poked around in the browser DevTools to dabble in the arcane
art of memory leak detection have probably found the experience… daunting. The
effort-to-payoff ratio is disappointingly high, especially compared to the
hundreds of other things that are important in web development, like security
and accessibility.

So is it really worth the effort? Do memory leaks actually matter?

I would argue that they do matter, if only because the lack of care (as shown by
public-facing SPAs leaking up to 186 MB per interaction) is a sign of the
immaturity of our field, and an opportunity for growth. Similarly, five years
ago, there was much less concern among SPA authors for accessibility, security,
runtime performance, or even ensuring that the back button maintained scroll
position (or that the back button worked at all!). Today, I see a lot more
discussion of these topics among SPA developers, and that’s a great sign that
our field is starting to take our craft more seriously.

So why should you, and why shouldn’t you, care about memory leaks? Obviously I’m
biased because I have an axe to grind (and a tool I wrote, fuite), but let me
try to give an even-handed take.


MEMORY LEAKS AND SOFTWARE ENGINEERING

In terms of actual impact on the business of web development, memory leaks are a
funny thing. If you speed up your website by 2 seconds, everyone agrees that
that’s a good thing with a visible user impact. If you reduce your website’s
memory leak by 2 MB, can we still agree it was worth it? Maybe not.

Here are some of the unique characteristics of memory leaks that I’ve observed,
in terms of how they actually fit into the web development process. Memory leaks
are:

 1. Low-impact until critical
 2. Hard to diagnose
 3. Trivial to fix once diagnosed


LOW-IMPACT…

Most web apps can leak memory and no one will ever notice. Not the user, not the
website author – nobody. There are a few reasons for this.

First off, browsers are well aware that the web is a leaky mess and are already
ruthless about killing background tabs that consume too much memory. (My former
colleague on the Microsoft Edge performance team, Todd Reifsteck, told me way
back in 2016 that “the web leaks like a sieve.”) A lot of users are tab hoarders
(essentially using tabs as bookmarks), and there’s a tacit understanding between
browser and user that you can’t really have 100 tabs open at once (in the sense
that the tab is actively running and instantly available). So you click on a tab
that’s a few weeks old, boom, there’s a flash of white while the page loads, and
nobody seems to mind much.

Second off, even for long-lived SPAs that the user may habitually check in on
(think: GMail, Evernote, Discord), there are plenty of opportunities for a page
refresh. The browser needs to update. The user doesn’t trust that the data is
fresh and hits F5. Something goes wrong because programmers are terrible at
managing state, and users are well aware that the old
turn-it-off-and-back-on-again solves most problems. All of this means that even
a multi-MB leak can go undetected, since a refresh will almost always occur
before an Out Of Memory crash.

Chrome’s Out Of Memory error page. If you see this, something has gone very
wrong.

Third, it’s a tragedy-of-the-commons situation, and people tend to blame the
browser. Chrome is a memory hog. Firefox gobbles up RAM. Safari is eating all my
memory. For reasons I can’t quite explain, people with 100+ open tabs are quick
to blame the messenger. Maybe this goes back to the first point: tab hoarders
expect the browser to automatically transition tabs from “thing I’m actively
using” to “background thing that is basically a bookmark,” seamlessly and
without a hitch. Browsers have different heuristics about this, some heuristics
are better than others, and so in that sense, maybe it is the browser’s “fault”
for failing to adapt to the user’s tab-hoarding behavior. In any case, the
website author tends to escape the blame, especially if their site is just 1 out
of 100 naughty tabs that are all leaking memory. (Although this may change as
more browsers call out tabs individually in Task Manager, e.g. Edge and Safari.)


…UNTIL CRITICAL

What’s interesting, though, is that every so often a memory leak will get so bad
that people actually start to notice. Maybe someone opens up Task Manager and
wonders why a note-taking app is consuming more RAM than DOTA. Maybe the website
slows to a crawl after a few hours of usage. Maybe the users are on a device
with low available memory (and of course the developers, with their 32GB
workstations, never noticed).

Here’s what often happens in this case: a ticket lands on some web developer’s
desk that says “Memory usage is too high, fix it.” The developer thinks to
themselves, “I’ve never given much thought to memory usage, well let’s take a
stab at this.” At some point they probably open up DevTools, click “Memory,”
click “Take snapshot,” and… it’s a mess. Because it turns out that the SPA
leaks, has always leaked, and in fact has multiple leaks that have accumulated
over time. The developer assumes this is some kind of sudden-onset disease, when
in fact it’s a pre-existing condition that has gradually escalated to stage-4.

The funny thing is that the source of the leak – the event listener, the
subscriber, whatever – might not even be the proximate cause of the recent
crisis. It might have been there all along, and was originally a tiny 1 MB leak
nobody noticed, until suddenly someone attached a much bigger object to the
existing leak, and now it’s a 100 MB leak that no one can ignore.

Unfortunately, to get there, you’re going to have to hack your way through the
jungle of the half-dozen other leaks that you ignored up to this point. (We
fixed the leak! Oh wait, no we didn’t. We fixed the other leak! Oh wait, there’s
still one more…) But that’s how it goes when you ignore a chronic but steadily
worsening illness until the moment it becomes a crisis.


HARD TO DIAGNOSE

This brings us to the second point: memory leaks are hard to diagnose. I’ve
already written a lot about this, and I won’t rehash old content. Suffice it to
say, the tooling is not really up to the task (despite some nice recent
innovations), even if you’re a veteran with years of web development experience.
Some gotchas that tripped me up include the fact that you have to ignore
WeakMaps and circular references, and that the DevTools console itself can leak
memory.
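
To illustrate the WeakMap part of that (my own example, not something from the
DevTools docs): a WeakMap keyed by DOM nodes holds its keys weakly, so it can
look suspicious in a heap snapshot even though it never prevents those nodes
from being garbage-collected.

// Associating metadata with DOM nodes via a WeakMap.
const tooltipText = new WeakMap();

function registerTooltip(node, text) {
  // Once `node` is removed from the DOM and nothing else references it,
  // both the key and the value become collectable. Entries like this show
  // up when you're poking around a snapshot, but they're not a leak.
  tooltipText.set(node, text);
}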

Oh and also, browsers themselves can have memory leaks! For instance, see these
ResizeObserver/IntersectionObserver leaks in Chromium, Firefox, and Safari
(fixed in all but Firefox), or this Chromium leak in lazy-loading images (not
fixed), or this discussion of a leak in Safari. Of course, the tooling will not
help you distinguish between browser leaks and web page leaks, so you just kinda
have to know this stuff. In short: good luck!

Even with the tool that I’ve written, fuite, I won’t claim that we’ve reached a
golden age of memory leak debugging. My tool is better than what’s out there,
but that’s not saying much. It can catch the dumb stuff, such as leaking event
listeners and DOM nodes, and for the more complex stuff like leaking collections
(Arrays, Maps, etc.), it can at least point you in the right direction. But it’s
still up to the web developer to decide which leaks are worth chasing (some are
trivial, others are massive), and to track them down.

I still believe that the browser DevTools (or perhaps professional testing
tools, such as Cypress or Sentry) should be the ones to handle this kind of
thing. The browser especially is in a much better position to figure out why
memory is leaking, and to point the web developer towards solutions. fuite is
the best I could do with userland tooling (such as Puppeteer), but overall I’d
still say we’re in the Stone Age, not the Space Age. (Maybe fuite pushed us to
the Bronze Age, if I’m being generous to myself.)


TRIVIAL TO FIX ONCE DIAGNOSED

Here’s the really surprising thing about memory leaks, though, and perhaps the
reason I find them so addictive and keep coming back to them: once you figure
out where the leak is coming from, they’re usually trivial to fix. For instance:

 * You called addEventListener but forgot to call removeEventListener.
 * You called setInterval, but forgot to call clearInterval when the component
   unloaded.
 * You added a DOM node, but forgot to remove it when the page transitions away.
 * Etc.

You might have a multi-MB leak, and the fix is one line of code. That’s a
massive bang-for-the-buck! That is, if you discount the days of work it might
have taken to find that line of code.
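
To make that concrete, here’s a minimal sketch of the first case and its
one-line fix. (This is a made-up component, not tied to any particular
framework.)

// A hypothetical component that listens for window resizes.
class Sidebar {
  mount() {
    // Bind once and keep a reference, so the exact same function
    // can later be passed to removeEventListener.
    this.onResize = () => this.relayout();
    window.addEventListener('resize', this.onResize);
  }

  unmount() {
    // The one-line fix: without this, every mounted-and-unmounted
    // Sidebar stays reachable from the window, along with anything
    // it references.
    window.removeEventListener('resize', this.onResize);
  }

  relayout() { /* recalculate layout */ }
}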

This is where I would like to go with fuite. It would be amazing if you could
just point a tool at your website and have it tell you exactly which line of
code caused a leak. (It’d be even better if it could open a pull request to fix
the leak, but hey, let’s not get ahead of ourselves.)

I’ve taken some baby steps in this direction by adding stacktraces for leaking
collections. So for instance, if you have an Array that is growing by 1 on every
user interaction, fuite can tell you which line of code actually called
Array.push(). This is a huge improvement over v1.0 of fuite (which just told you
the Array was leaking, but not why), and although there are edge cases where it
doesn’t work, I’m pretty proud of this feature. My goal is to expand this to
other leaks (event listeners, DOM nodes, etc.), although since this is just a
tool I’m building in my spare time, we’ll see if I get to it.

fuite showing stacktraces for leaking collections.
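
As an illustration of the kind of code that feature points at (a made-up event
bus, not taken from any real app): a module-level Array that grows by exactly
one entry per interaction, which the stacktrace can trace back to the push()
call.

// A hypothetical app-wide event bus.
const subscribers = [];

export function subscribe(callback) {
  // This is the line a stacktrace would point at: if a component
  // subscribes on every mount but never unsubscribes, the Array
  // grows by exactly 1 per interaction.
  subscribers.push(callback);
  // The fix is usually just remembering to call the returned cleanup.
  return () => {
    const index = subscribers.indexOf(callback);
    if (index !== -1) subscribers.splice(index, 1);
  };
}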

After releasing this tool, I also learned that Facebook has built a similar tool
and is planning to open-source it soon. That’s great! I’m excited to see how it
works, and I’m hoping that having more tools in this space will help us move
past the Stone Age of memory leak debugging.


CONCLUSION

So to bring it back around: should you care about memory leaks? Well, if your
boss is yelling at you because customers are complaining about Out Of Memory
crashes, then yeah, you absolutely should. Are you leaking 5 MB, and nobody has
complained yet? Well, maybe an ounce of prevention is worth a pound of cure in
this case. If you start fixing your memory leaks now, it might avoid that crisis
in the future when 5 MB suddenly grows to 50 MB.

Alternatively, are you leaking a measly ~1 kB because your routing library is
appending some metadata to an Array? Well, maybe you can let that one slide.
(fuite will still report this leak, but I would argue that it’s not worth
fixing.)

On the other hand, all of these leaks are important in some sense, because even
thinking about them shows a dedication to craftsmanship that is (in my opinion)
too often lacking in web development. People write a web app, they throw
something buggy over the wall, and then they rewrite their frontend four years
later after users are complaining too much. I see this all the time when I
observe how my wife uses her computer – she’s constantly telling me that some
app gets slower or buggier the longer she uses it, until she gives up and
refreshes. Whenever I help her with her computer troubles, I feel like I have to
make excuses for my entire industry, for why we feel it’s acceptable to waste
our users’ time with shoddy, half-baked software.

Maybe I’m just a dreamer and an idealist, but I really enjoy putting that final
polish on something and feeling proud of what I’ve created. I notice, too, when
the software I use has that extra touch of love and care – and it gives me more
confidence in the product and the team behind it. When I press the back button
and it doesn’t work, I lose a bit of trust. When I press Esc on a modal and it
doesn’t close, I lose a bit of trust. And if an app keeps slowing down until I’m
forced to refresh, or if I notice the memory steadily creeping up, I lose a bit
of trust. I would like to think that fixing memory leaks is part of that extra
polish that won’t necessarily win you a lot of accolades, but your users will
subtly notice, and it will build their confidence in your software.

Thanks to Jake Archibald and Todd Reifsteck for feedback on a draft of this
post.

31 Dec


2021 BOOK REVIEW

Posted by Nolan Lawson in Books. Leave a Comment

I’ve been doing end-of-year book reviews for almost 5 years now. At this
point I have to ask myself: why am I still doing this?

To encourage myself to read more? To show off? To convince myself that this blog
is about more than just tech stuff? There may be some truth to all those, but I
think my main goal is just to recommend some good books to others. I don’t use
GoodReads (although I link to it, as it seems nice), so this is my forum where I
highlight books I’ve enjoyed, in the hope that others might find something
interesting to read in the new year.

So without further ado, on with the book reviews!


QUICK LINKS


FICTION

 * The Name of the Wind and The Wise Man’s Fear by Patrick Rothfuss
 * Dragonflight by Anne McCaffrey
 * Kindred by Octavia Butler
 * 2034 by Elliot Ackerman and James Stavridis
 * The Ministry for the Future by Kim Stanley Robinson
 * Premier Sang by Amélie Nothomb


NON-FICTION

 * Why We Love Dogs, Eat Pigs, and Wear Cows by Melanie Joy
 * Dialogues on Ethical Vegetarianism by Michael Huemer
 * How Not to Die by Michael Greger
 * Hate, Inc. by Matt Taibbi
 * Against the Grain by James C. Scott
 * A Brief History of Everyone Who Ever Lived by Adam Rutherford
 * The End of the End of History by Alex Hochuli, George Hoare, and Philip
   Cunliffe


FICTION

Like last year, I’ve been reading a lot of fantasy novels. My methodology is
crude: I just googled “best fantasy novels” and started from there. In the past,
I was never much of a wizards-and-pegasuses kind of reader (I always preferred
sci-fi and dystopias), so I’m trying to make up for lost time.


THE NAME OF THE WIND AND THE WISE MAN’S FEAR BY PATRICK ROTHFUSS

I struggled to like the first book. My main beef was that 1) it’s a bit too
predictable and groan-inducing with how the main character is a preternaturally
gifted Mary Sue who just inevitably excels at everything, and 2) after a great
“street urchin” backstory, the action really grinds to a halt when the character
arrives at university and mostly mopes after his would-be girlfriend.

The second book, however, redeems the first one in my eyes. It makes up for some
of the dull campiness of the first book with a never-ending series of inventive
subplots. Just as soon as you’re bored with one setting or cast of characters,
it dramatically switches to another. It almost feels like a collection of
vignettes.

I’m eagerly awaiting the third book, which (like The Winds of Winter by George
R. R. Martin), seems perpetually delayed.


DRAGONFLIGHT BY ANNE MCCAFFREY

This is another book that I struggled to like. The premise is so good (dragons!
extra-planetary colonization! a perennial existential threat!), and I have
friends who rave about the Dragonriders of Pern series. But to be honest, I just
found it to be a bit of a slog. I felt like the author was taking too much time
to set up names, places, history, concepts – almost like she started writing an
encyclopedic Silmarillion rather than an accessible Hobbit. By the time I had
gotten the lingo down and could keep the characters’ names straight, the story
was over.

I’ve picked up the next couple books in the series, and I’m going to give them a
shot, but I don’t have high hopes.


KINDRED BY OCTAVIA BUTLER

What I love about this book is that the author takes a completely ridiculous
premise and treats it with utmost seriousness, and by the end you’re so invested
in the story that it doesn’t matter that the paranormal elements are never
explained. Gives you a good sense of what it would feel like to live in a
society where daily barbarism is completely normalized.

Is it sci-fi? Is it fantasy? Hard to categorize, but I would lean towards
sci-fi, if we define sci-fi as “putting human beings in otherworldly situations
to see how they tick.” In any case, a great read.


2034 BY ELLIOT ACKERMAN AND JAMES STAVRIDIS

A chilling and all-too-plausible near-future sci-fi. I appreciate the attention
to detail that comes from having a subject matter expert (in military matters)
as a co-author. Hopefully it will turn out to be a cautionary tale rather than a
prescient prediction.


THE MINISTRY FOR THE FUTURE BY KIM STANLEY ROBINSON

A book that starts out with a bang and gradually limps towards an ending. I had
to put it down ~80% of the way through because it got into preachy, starry-eyed
utopia territory. Maybe I’m just a cynic, but the more pessimistic predictions
in the book seem way more believable to me.


PREMIER SANG BY AMÉLIE NOTHOMB

Amélie Nothomb is one of my favorite Francophone authors, and not just because
my French is terrible and her writing is simple enough that I can understand it
without constantly switching to a dictionary. I picked up this book at random
while on vacation in France and gobbled it up on the plane ride back.

The story starts out with an incredible hook – a firing squad! – and from there
gives a richly detailed (and ultimately personal) character study. The scenes
from the protagonist’s childhood, where he’s alternately coddled and neglected
(but craves the latter!), are especially poignant.

Sorry for recommending a non-English book, but hopefully it will be translated
soon!


NON-FICTION


WHY WE LOVE DOGS, EAT PIGS, AND WEAR COWS BY MELANIE JOY

I’ve spent probably the past 15 years of my life struggling with a basic
question: what to eat? I’ve gone through carnism, pescetarianism, vegetarianism,
veganism, and right back around several times. These days I’m probably
best-described as flexitarian (i.e. I avoid meat, but I won’t turn down a turkey
dinner at Thanksgiving).

If you’re not already interested in vegetarianism or veganism, this book will
not convince you of anything. For myself, I found it pretty depressing, because
the situation feels kind of hopeless to me. The sheer scale of animal suffering
in factory farms makes it a good candidate for one of, if not the, most
consequential ethical questions of our day, and yet the average person couldn’t
care less, and is irritated to even consider it. Exploring this question will
make you the most unpopular person at a dinner party, and probably cause a lot
of stress and annoyance for your friends and relatives if they feel obliged to
accommodate your dietary choices.

So why do I read this stuff? Well I guess, like a good car crash, I just can’t
look away. If I’m going to be an ethical monster, I would at least like to be
cognizant of it when I put a forkful of egg or cheese (or rarely, meat) into my
mouth. And I’d like to have a ready-made answer if someone asks why I always
order the tofu. And I’d like to steel my resolve as I continually search for
good beans-and-rice and tempeh recipes that can compete with my fond memories of
a juicy Reuben sandwich. (This stir-fry recipe is quite good.) My inner
monologue on food is complicated, I don’t have it all figured out, but I’m
trying to wrestle with the tough questions.


DIALOGUES ON ETHICAL VEGETARIANISM BY MICHAEL HUEMER

Another pro-vegan book that will depress you if you’re already converted, and
probably convince you of nothing if you’re not. For myself, I found it
interesting because it fairly neatly demolishes all of the plausible excuses for
eating meat or animal products. (Yes, that one, and that one, and that other one
you just thought of.) This is a good book for the open-minded person who really
wants to engage with the best arguments for veganism, not just the straw-man.

I’ll also say that, for a philosophy book, this is eminently readable. I really
enjoy the short, brisk pace and the “Socratic dialogue” style rather than a
long-winded essay format.


HOW NOT TO DIE BY MICHAEL GREGER

As you might have noticed, I kind of went on a tear this year reading vegan
literature. I really wanted to confront my meat-eating (and egg-eating, and
dairy-eating) head on, so I tried to read all the “greatest hits” of vegan
literature.

This book has a lot of sensible advice (eat more whole grains, eat more nuts and
berries), although I get the impression that the author is pretty dogmatic in
promoting a pure-vegan lifestyle. Based on reviews I’ve read of the book, he
tends to ignore any research that advocates for moderate consumption of eggs,
cheese, and fish, even though those are (as far as I can tell) pretty good
ingredients in a healthy diet.

On the other hand, I do appreciate his no-nonsense, uncompromising position on
certain health questions. (Salt? Nope, just avoid it. Oil? Nope, just fry
everything with water or vinegar! Exercise? 30 minutes every day!) I prefer the
“give it to me straight, doc” approach, rather than a resigned shrug and “Well,
if you’re going to drink beer and eat potato chips, at least do it in
moderation.” Although I think his advice is much too extreme for the average
person to actually adhere to.


HATE, INC. BY MATT TAIBBI

One of the best political non-fiction books I’ve read. For a few years, I’ve had
the gnawing feeling that something in the media (including social media) felt
“off,” but I couldn’t quite put my finger on it. This book does a good job of
explaining why our media feels so hyper-partisan, and therefore less
trustworthy.


AGAINST THE GRAIN BY JAMES C. SCOTT

Elaborates on one of the minor points you may recall from Sapiens by Yuval Noah
Harari about how the agricultural revolution was probably kind of a bum deal for
humanity. Also has some interesting commentary on the origins of viruses from
livestock, and how they probably wreaked havoc on early civilizations. (This book
was written pre-Covid, by the way!)


A BRIEF HISTORY OF EVERYONE WHO EVER LIVED BY ADAM RUTHERFORD

A fun and intriguing read. Makes you realize how silly and petty (and
temporary!) most of our human squabbles over race and ethnicity are. Also gives
a great explanation of why “I descended from Charlemagne” is not such a
remarkable statement.


THE END OF THE END OF HISTORY BY ALEX HOCHULI, GEORGE HOARE, AND PHILIP CUNLIFFE

A good, heterodox leftist perspective on the whole “what the heck is up with
liberal democracy?” genre. A good pairing with The New Class War by Michael
Lind.

17 Dec


INTRODUCING FUITE: A TOOL FOR FINDING MEMORY LEAKS IN WEB APPS

Posted by Nolan Lawson in performance, Web. Tagged: memory, performance. 22
Comments

Debugging memory leaks in web apps is hard. The tooling exists, but it’s
complicated, cumbersome, and often doesn’t answer the simple question: Why is my
app leaking memory?

Because of this, I’d wager that most web developers are not actively monitoring
for memory leaks. And of course, if you’re not testing something, it’s easy for
bugs to slip through.


When I first started looking into memory leaks, I assumed it was a rare thing.
How could JavaScript – a language with an automatic garbage collector – be a big
source of memory leaks? But the more I learned, the more I suspected that memory
leaks were actually quite common in Single Page Apps (SPAs) – it’s just that
nobody is testing for it!

Since most web developers aren’t fiddling with the Chrome memory tools for the
fun of it, they probably won’t notice a leak until the browser tab crashes with
an Out Of Memory error, or the page slows down, or someone happens to open up
the Task Manager and notice that a website is using many megabytes (or even
gigabytes!) of memory. But at that point, it’s gotten bad enough that there may
be multiple leaks on the same page.

I’ve written about memory leaks in the past, but my advice basically boils down
to: “Use the Chrome DevTools, follow these dozen tedious steps, and then maybe
you can figure out why your page is leaking.” This is not a great developer
experience, and I’m sure many readers just shook their heads in despair and
moved on. It would be much better if a tool could find memory leaks
automatically.

That’s why I wrote fuite (French for “leak”). fuite is a CLI tool that you can
point at any URL, and it will analyze the page for memory leaks:

npx fuite https://example.com

That’s it! By default, it assumes that the site is a client-rendered SPA, and it
will crawl the page for internal links (such as /about or /contact). Then, for
each link, it runs the following steps:

 1. Click the link
 2. Press the browser back button
 3. Repeat to see if memory grows

If fuite finds any leaks, it will show which objects are suspected of causing
the leak:

Test         : Go to /foo and back
Memory change: +10 MB
Leak detected: Yes
 
Leaking objects:
 
| Object            | # added | Retained size increase |
| ----------------- | ------- | ---------------------- |
| HTMLIFrameElement | 1       | +10 MB                 |
 
Leaking event listeners:
 
| Event        | # added | Nodes  |
| ------------ | ------- | ------ |
| beforeunload | 2       | Window |
 
Leaking DOM nodes:
 
DOM size grew by 6 node(s) 

To do this, fuite uses the basic strategy outlined in my blog post. It will
launch Chrome, run some scenario n times (7 by default), and see if any objects
are leaking a multiple of n times (7, 14, 21, etc.).

fuite will also analyze any Arrays, Objects, Maps, Sets, event listeners, and
the overall DOM to see if any of those are leaking. For instance, if an Array
grows by exactly 7 after 7 iterations, then it’s probably leaking.
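
As a simplified sketch of that heuristic (my own illustration, not fuite’s
actual implementation): take a count before and after the repeated scenario,
and flag anything whose growth is a positive multiple of the number of
iterations.

// Simplified version of the "grew by a multiple of n" check.
// The counts could be Array lengths, Map sizes, listener counts, etc.
function probablyLeaking(countBefore, countAfter, numIterations = 7) {
  const growth = countAfter - countBefore;
  return growth > 0 && growth % numIterations === 0;
}

probablyLeaking(10, 17); // true  - grew by exactly 7 over 7 iterations
probablyLeaking(10, 13); // false - growth isn't a multiple of 7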


TESTING REAL-WORLD WEBSITES

Somewhat surprisingly, the “basic” scenario of clicking internal links and
pressing the back button is enough to find memory leaks in many SPAs. I tested
fuite against the home pages for 10 popular frontend frameworks, and found leaks
in all of them:

| Site    | Leak detected | Internal links | Average growth | Max growth |
| ------- | ------------- | -------------- | -------------- | ---------- |
| Site 1  | yes           | 8              | 27.2 kB        | 43 kB      |
| Site 2  | yes           | 10             | 50.4 kB        | 78.9 kB    |
| Site 3  | yes           | 27             | 98.8 kB        | 135 kB     |
| Site 4  | yes           | 8              | 180 kB         | 212 kB     |
| Site 5  | yes           | 13             | 266 kB         | 1.07 MB    |
| Site 6  | yes           | 8              | 638 kB         | 1.15 MB    |
| Site 7  | yes           | 7              | 1.37 MB        | 2.25 MB    |
| Site 8  | yes           | 15             | 3.49 MB        | 4.28 MB    |
| Site 9  | yes           | 43             | 5.57 MB        | 7.37 MB    |
| Site 10 | yes           | 16             | 14.9 MB        | 186 MB     |

In this case, “internal links” refers to the number of internal links tested,
“average growth” refers to the average memory growth for every link (i.e.
clicking it and then pressing the back button), and “max growth” refers to
whichever internal link was leaking the most. Note that these numbers don’t
include one-time setup costs, as fuite does one preflight iteration before the
normal 7 iterations.

To confirm these results yourself, you can use the Chrome DevTools Memory tab.
Here is a screenshot of the worst-performing site from my set, where I click a
link, press the back button, take a heap snapshot, and repeat:

On this particular site, memory grows by about 6 MB every time you click a link
and go back.

To avoid naming and shaming, I haven’t listed the actual websites. The point is
just to show a representative sample of some popular SPAs – the authors of those
websites are free to run fuite themselves and track down these leaks. (Please
do!)


CAVEATS

Note, though, that not every leak in an SPA is an egregious problem that needs
to be addressed. SPAs need to, for example, maintain the focus and scroll state
to properly support accessibility, which means that there may be some small
metadata that is stored for every page navigation. fuite will dutifully report
such leaks (because they are leaks), but it’s up to the developer to decide if a
tiny leak is worth chasing or not.
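
For example (a hypothetical router, not any specific framework), this sort of
bookkeeping grows by one tiny entry per navigation. It is technically a leak,
and fuite will report it, but it’s probably not worth losing sleep over.

// A hypothetical SPA router restoring scroll position on "back".
const scrollPositions = new Map();

function beforeNavigate(historyKey) {
  // One small object per navigation: reported as a growing Map,
  // but each entry is only a few dozen bytes.
  scrollPositions.set(historyKey, { x: window.scrollX, y: window.scrollY });
}

function onPopState(historyKey) {
  const position = scrollPositions.get(historyKey);
  if (position) {
    window.scrollTo(position.x, position.y);
  }
}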

Some memory growth may also be due to browser-internal changes (such as JITing),
which the web page can’t really control. So the memory growth numbers are an
imperfect measure of what you stand to gain by fixing leaks – it could very well
be that a few kBs of growth are unavoidable. (Although fuite tries to ignore
browser-internal growth, and will only say “leaks detected” if there is
actionable advice for the web developer.)

In rare cases, some memory growth may also be due to outright browser bugs.
While analyzing the sites above, I actually found one (Site #4) that seems to be
suffering from this Chrome bug due to <img loading="lazy"> not being unloaded.
Unfortunately it’d be hard for fuite to detect browser bugs, so if you’re
mystified by a leak, it’s good to cross-check against other browsers!

Also note that it’s almost impossible for a Multi-Page App (MPA) to leak,
because the browser clears memory on every page navigation. (Assuming no browser
bugs, of course.) During my testing, I found two frontend frameworks whose home
pages were MPAs, and unsurprisingly, fuite couldn’t find any leaks in them.
These were excluded from the results above.

Memory leaks are more of a concern for SPAs, where memory isn’t cleared
automatically on each navigation. fuite is primarily designed for SPAs, although
you can run it on MPAs too.

fuite currently only measures the JavaScript heap memory in the main frame of
the page, so cross-origin iframes, Web Workers, and Service Workers are not
measured. Something like performance.measureUserAgentSpecificMemory() would be
more accurate, but it’s only available in cross-origin isolated contexts, so
it’s not practical for a general-purpose tool right now.
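
For reference, here’s roughly what using that API looks like (a sketch based on
the spec; it only works in cross-origin isolated pages, and as far as I know
it’s currently Chromium-only).

// Requires the page to be cross-origin isolated (COOP + COEP headers).
async function logMemory() {
  if (!window.crossOriginIsolated) {
    console.log('Not cross-origin isolated; API unavailable');
    return;
  }
  const result = await performance.measureUserAgentSpecificMemory();
  console.log(`Total bytes: ${result.bytes}`);
  // result.breakdown attributes memory to frames, workers, etc.
}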


OTHER MEMORY LEAK SCENARIOS

The “crawl for internal links” scenario is just the default one – you can also
build your own. fuite is built on top of Puppeteer, so for whatever scenario you
want to test, you essentially just need to write a Puppeteer script to tell the
browser what to do. Some common scenarios you might test are:

 * Open a modal dialog and then close it
 * Hover over an element to show a tooltip, then mouse away to dismiss it
 * Scroll through an infinite-loading list, then navigate away and back
 * Etc.

In each of these scenarios, you would expect memory to be the same before and
after. But of course, it’s not always so simple with web apps! You may be
surprised how many of your dialogs and tooltips are harboring memory leaks.
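
For instance, a custom scenario for the modal dialog case might look roughly
like this. (A sketch only: I’m assuming a scenario module that exports an async
iteration function driving a Puppeteer page, and the selectors are invented;
check the fuite README for the exact file format.)

// modal-scenario.mjs - hypothetical selectors, for illustration only.
export async function iteration(page) {
  // Open the modal...
  await page.click('#open-settings-modal');
  await page.waitForSelector('.settings-modal', { visible: true });
  // ...then close it. Memory should end up back where it started.
  await page.click('.settings-modal .close-button');
  await page.waitForSelector('.settings-modal', { hidden: true });
}

You’d then point fuite at the site with something like npx fuite --scenario
./modal-scenario.mjs https://example.com (again, see the README for the exact
invocation).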

To analyze leaks, fuite captures heap snapshot files, which you can load in the
Chrome DevTools to inspect. It also has a --debug mode that you can use for more
fine-grained analysis: stepping through the test as it’s running, debugging the
browser in real-time, analyzing the leaking objects, etc.

Under the hood, fuite is a fairly basic tool, and I won’t claim that it can do
100% of the work of fixing memory leaks. There is still the human component of
figuring out why your objects were allocated and retained, and then finding a
reasonable fix. But my goal is to automate ~95% of the work, so that it actually
becomes achievable to fix memory leaks in web apps.

You can find fuite on GitHub. Happy leak hunting!

Update: I made a video tutorial showing how to debug memory leaks with fuite.

5 Dec


ONE WEIRD TRICK TO IMPROVE YOUR WEBSITE’S PERFORMANCE

Posted by Nolan Lawson in performance, Web. Tagged: performance. 5 Comments

Every so often, I come across a web performance post from what I like to call
the “one weird trick” genre. It goes something like this:

“I improved my page load time by 50% by adding one line of CSS!”

or

“It’s 2x faster to use this JavaScript API than this other one!”

The thing is, I love a good performance post. I love when someone finds some odd
little unexplored corner of browser performance and shines a light on it. It
might actually provide some good data that can influence framework authors,
library developers, and even browser vendors to improve their performance.

But more often than not, the “one weird trick” genre drives me nuts, because of
what’s not included in the post:

 * Did you test on multiple browsers?
 * Did you profile to try to understand why something is slower or faster?
 * Did you publish your benchmark so that others can verify your results?

That’s why I wrote “How to write about web performance”, where I tried to
summarize everything that I think makes for a great web perf post. But of
course, not everyone reads my blog religiously (how dare they?), so the “one
weird trick” genre continues unabated.

Look, I get it. Writing about performance is hard. And we’re not all experts.
I’ve made the same mistakes myself, in posts like “High performance web worker
messages” (2016) – where I found the “one weird trick” that it’s faster to
stringify an object before sending it to a web worker. Of course this makes
little sense (the browser should be able to serialize the object faster than you
can do it yourself), and Surma has demonstrated that there’s no need to do this
stringify dance in modern versions of Chrome. (As I’ve said before: if you’re
not wrong about web perf today, you’ll be wrong tomorrow when browsers change!)

That said, I do occasionally find a post that really exemplifies what’s great
about the web perf genre. For instance, this post by Eoin Hennessy about
improving Webpack performance really ticks all the boxes. The author wasn’t
satisfied with finding “one weird trick” – they had to understand why the trick
worked. So they actually went to the trouble of building Node from source (!) to
find the true root cause, and they even submitted a patch to Webpack to fix it.

A post like this, like a good mystery novel, has everything that makes for a
satisfying story: the problem, the search, the resolution, the ending. Unlike
the “one weird trick” posts, this one doesn’t leave me craving more. Instead, it
leaves me feeling like I truly learned something about how browser engines work.

So if you’ve found “one weird trick,” that’s great! There might actually be
something really interesting there. But unless you do the extra research, it’s
hard to say more than just “Well, this technique worked for me, on my website,
in Chrome, in this scenario…” (etc.). If you want to extrapolate from your
results to something more widely-applicable, you have to put in the work.

So here are some things you can do. Test in multiple browsers. File a browser
bug if one is slower than the others. Ask around if you know any web perf
experts or folks who work at browser vendors. Take a performance profile. And if
you put in just a bit of extra effort, you might find more than “one weird
trick” – you might find a valuable learning opportunity for web developers,
browser vendors, or anyone interested in how the web works.
