ROBEN KLEENE



TUESDAY, DEC 26, 2023


INTRODUCING `REP` & `REN`: A NEW APPROACH TO COMMAND-LINE FIND & REPLACE, AND
RENAMING

This post is about two new command-line utilities: rep and ren. Both are
available on GitHub.


HOW TO USE REP

 1. Perform a search with a grep tool like ripgrep.

 2. Pipe the results of the search to rep, providing the search and replace
    terms as arguments.

 3. If the diff looks good, pass the -w flag to rep to write the changes to
    the files.
    
    
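Putting the steps together, a session might look something like the following sketch (Foo, Bar, and the matching files are hypothetical; rep's interface is as described in this post):

```shell
# 1. Search with ripgrep; -n includes line numbers in the output
rg -n 'Foo'

# 2. Pipe the matches to rep with search and replace terms;
#    by default rep prints a diff preview instead of changing anything
rg -n 'Foo' | rep Foo Bar

# 3. If the diff looks good, add -w to write the changes to the files
rg -n 'Foo' | rep -w Foo Bar
```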


REP & REN

rep and ren are two new tools for performing find and replace in files, and
renaming files, respectively.

They share a similar design in two ways:

 1. Standard input determines what should be changed. rep takes grep-formatted
    text via standard input to determine which lines to change, and ren takes a
    single file per line to determine which files to rename.
 2. A preview of the resulting diff is printed to standard output by default.
    Both rep and ren print a diff of the changes that would result from
    performing the find and replace, or the rename. To actually write the
    changes to disk, both have a -w / --write flag.


ORIGIN STORY

Search is what pulled my editing towards the command line1. It began with ack.
ack is like grep, but it automatically searches recursively and ignores version
control directories (e.g., .git/). Just enter ack foo at your shell prompt and
see all the results in your project. Today, I use ripgrep, an iteration on the
same idea with a beautifully organized set of command-line flags.

If you do a lot of searching from the command line, eventually you also want to
do some editing there. After you’ve performed a search, and you have a list of
matches you want to change, what now? For a long time, my answer was to load the
matches into the vim quickfix list and use vim’s built-in support for operating
on grep matches to perform the replacement.

Loading matches into an editor and performing the replacement there is a great
approach, and it’s ideal for more complex edits2. But there are times when I’d
prefer to skip the editor altogether. For one, if it’s a quick edit, it’s
slower: I’d rather just type out my replacement on the command line without
opening my editor3. Then there’s the matter of repeatability: I often do a
sequence of several edits, then realize I made a mistake somewhere in the
sequence, and have to revert the replacement (e.g., with a git checkout .) and
start over. If instead I do the edits from the command line, I can just find
the replacement in my shell history, fix it there, and quickly re-run the steps
in order.


WHY NOT SED?

sed is the definitive search and replace utility for the Unix command line; the
problem is, it doesn’t combine well with ripgrep. Here’s the command from a
popular Stack Overflow answer about how to do a recursive find and replace with
sed:

find /home/www \( -type d -name .git -prune \) -o -type f -print0 \
    | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'


You could adapt this to use rg instead of find by using the -l flag, which only
lists files with matches:

rg "subdomainA\.example\.com" -l -0 \
    | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'


The problem with this approach is that sed is then performing the search again
from scratch on each file, and since sed and rg have different regular
expression engines, the matches could be different (or the search could fail
with a syntax error)4.
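For comparison, the same replacement can be expressed with rep. Because rep operates on the lines rg already matched, the search is only performed once, and only by rg’s regex engine (a sketch, using rep’s flags as described in this post):

```shell
# rg finds the matching lines; rep previews the replacement as a diff
rg -n 'subdomainA\.example\.com' \
    | rep 'subdomainA\.example\.com' 'subdomainB.example.com'

# once the diff looks correct, add -w to write the changes
rg -n 'subdomainA\.example\.com' \
    | rep -w 'subdomainA\.example\.com' 'subdomainB.example.com'
```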

Only performing the replacement on matching lines also makes the replacement
easier to write, because the replacement can leverage the filtering that’s
already been performed by the search. For example, suppose you were doing a
find and replace on function names containing Foo. To find the matching
functions, you might include some of the syntax for the function signature
(e.g., rg -n 'function .*Foo.*\(' to match all the functions containing Foo).
Once you have the matching lines, you can just pipe them to rep Foo Bar
(omitting the function signature syntax) to perform the replacement. With sed,
the replacement would also need to include the function signature, because sed
is going to search every line in the file.
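As commands, that function-renaming example might look like the following sketch (the function names are hypothetical):

```shell
# filter to function definitions containing Foo, using the signature syntax
rg -n 'function .*Foo.*\('

# the replacement terms can omit the signature syntax, because rep only
# edits the lines that rg matched
rg -n 'function .*Foo.*\(' | rep Foo Bar
```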

These workflow improvements arise because the semantic meaning of the sed
command is to perform a find and replace on the matching files, whereas the rep
command means: perform the replacement on the matching lines.

Another, equally problematic, issue with the sed approach is that it’s hard to
tell exactly what the result of your find and replace will be. In other words,
it’s difficult to preview what sed will do. The sed interface is built around
standard streams: it takes input either through standard input or file
parameters, and then either writes the output directly to standard output or
edits the files in place with a flag. This means that to preview the
replacement, you’ll just see the raw text of the entire contents of every file
that your command will perform a replacement in, which isn’t practical to parse
to determine whether the replacement is correct.


REP & REN

rep was written to solve these two problems. Semantically, rep performs the
replacement on matching lines, and the replacement is previewed in the form that
we’re used to reviewing to determine whether a change is correct: A diff.

ren takes a similar approach but for file renaming. The output of find (or fd)
can be piped to ren, so a command looks like this: find *foo* | ren foo bar. ren
also displays diff output by default, and writes the changes with -w.


ACKNOWLEDGEMENTS

 * The idea for rep was inspired by wgrep for Emacs, which allows editing of
   grep results in a buffer, and writing those changes to the source files.
 * The structure of the source code, and much of the functionality, was
   borrowed from sd; rep and ren both started as forks of sd.
 * The code for specifying a custom pager for both rep and ren came from the
   source code for delta.

--------------------------------------------------------------------------------

 1. Search is all about refinement and the shell excels at refinement. Too many
    results? Hit the up arrow and make your search more specific. Still too
    many? Maybe you can just cd to a subfolder. Want to search across many
    projects? Just cd up a folder. ↩︎

 2. Combining :cdo and :norm is a powerful way to perform vim normal mode
    commands on each matching line. ↩︎

 3. The approach of loading files into a text editor and then using editor
    functionality to perform the find and replace is slow both because of the
    number of steps and because of how editors implement this functionality.
    For example, :cdo does a bunch of additional work around loading a buffer
    and setting it up for further editing that’s superfluous to our goal of
    quickly performing a find and replace. Most engineers don’t share my
    sensitivity to latency, but for me, who finds great beauty in efficiency,
    the slowness of using :cdo to perform a find and replace, due to all that
    superfluous work, is repugnant. ↩︎

 4. Using rg with sd instead of sed should resolve most discrepancies between
    regular expression engines, since they both use the same Rust engine. ↩︎

--------------------------------------------------------------------------------


WEDNESDAY, OCT 18, 2023


ADDING CUSTOM `MAN` PAGES FROM MARKDOWN NOTES

I recently found out it’s surprisingly easy to add your own man pages from
Markdown notes.

Just add the path to the MANPATH environment variable:

export MANPATH=$MANPATH:$HOME/.man


Then generate your man pages using pandoc:

pandoc --standalone --to man --from markdown "my-note.md" --output "$HOME/.man/man9/my-note.9"


The 9 is the man page section; I chose 9 for my own notes because there weren’t
many existing man pages in that section. This means I can tab complete from
only my notes by typing, for example, man 9 my-⇥.
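To regenerate man pages for a whole folder of notes at once, a small loop works (a sketch; the ~/notes location is a hypothetical place to keep Markdown notes):

```shell
# convert every Markdown note into a section 9 man page
mkdir -p "$HOME/.man/man9"
for note in "$HOME"/notes/*.md; do
  name="$(basename "$note" .md)"
  pandoc --standalone --to man --from markdown "$note" \
      --output "$HOME/.man/man9/$name.9"
done
```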

--------------------------------------------------------------------------------


MONDAY, JUN 19, 2023


THE FIVE-YEAR RULE OF SOFTWARE TRANSITIONS



With software, I’m always trying to pick winners. I mainly care about the big
apps1: the Photoshops, the Excels, the NLEs, DAWs, and IDEs. The software
people spend their whole day in, that can take a lifetime to learn, and that a
career can be built on. I’m interested in picking winners for these apps
because they’re powerful and they’re hard to learn. So if I’m going to learn
one, I want to be sure I pick the right one.

I say this upfront because it means when I’m talking about software transitions,
I’m mainly talking about that kind of software, industry-leading creative
software, and not, say, the next big social networking platform. It also means
I’m mainly talking about desktop software, because this kind of software doesn’t
have any traction on mobile.

I’ve been watching this kind of software for a long time, looking for trends,
ideally based on any market share numbers I find. Over time I’ve noticed
something interesting: Transitions in this kind of software almost always happen
in the same way. In particular they happen quickly. And once they get going,
they always seem to take roughly the same amount of time. I call this the
“Five-Year Rule”. The rule is simple: Either a new piece of software will become
the market leader in about five years, or it never will.

In this piece we’ll look at five transitions closely. It’s notable that for the
kind of software I’m interested in, these are the only transitions I’m aware
of. Five isn’t very many for the ~35-year history of creative software. For
each transition, I’ve listed the years I consider the transition to have taken
place over. These vary in confidence level; in particular, the farther back I
go, the less data I tend to have, so choosing the years involves more
guesswork.


THE TRANSITIONS

 1. PageMaker to QuarkXPress: 1987–1993 (6 Years)
 2. QuarkXPress to InDesign: 1999–2005 (6 Years)
 3. Photoshop to Sketch: 2010–2015 (5 Years)
 4. Sketch to Figma: 2015–2020 (5 Years)
 5. The Rise of Visual Studio Code: 2015–2018 (3 Years)


APPENDIX TRANSITIONS

There’s an appendix section at the end where we look at a few more transitions
that I also found interesting, but that don’t fit the mold of professional
creative software that we’re looking at in the main transitions.

 1. The PC Revolution: 1989–1994 (5 Years)
 2. The Rise of Google Chrome: 2015–2018 (3 Years)
 3. Subversion to Git: 2005–2010 (5 Years)


DO SOFTWARE TRANSITIONS EVEN ACTUALLY HAPPEN?

The answer here is of course yes; many of us who follow software are fresh off
the transition from Photoshop to Figma (for user-interface design). But things
aren’t as straightforward with this transition as they seem. For example, today
Photoshop is still likely more popular overall than Figma, with ~30 million
Adobe Creative Cloud subscribers versus Figma’s ~4 million users. It’s hard to
wrap your head around the supposed loser in a transition still being more
popular than the winner.

I started thinking about this question, of whether software transitions ever
really happen, when I noticed how common it is for the most popular application
in a category to still be the very first application ever released in that
category, or to have become the market leader so long ago that it might as well
have been first. The Adobe Creative Cloud is a hotbed of the former: After
Effects (1993, Mac), Illustrator (1987, Mac), Photoshop (1990, Mac), Premiere
(1991, Mac), and Lightroom (2007, Mac/Windows) are all market leaders that were
also first in their category. Microsoft Excel (1987, Mac) and Word (1983,
MS-DOS) are examples of the latter, applications that weren’t first but became
market leaders so long ago they might as well have been (PowerPoint [1987, Mac]
is another example of the former).

Software of course has a reputation of being fast-moving, so I’m surprised at
how little things have changed. The obvious explanation is that it’s hard to get
people to switch from software that’s hard to learn (because they’ve already
invested so much time and energy into learning the software they’re currently
using). But I don’t find this explanation fully satisfying, since when
transitions do happen, they happen very quickly, which seems to indicate that
when something truly better comes along there isn’t any hesitancy about jumping
ship.


METHODOLOGY

We’re going to look at some major transitions that happened in major creative
software. The process of looking at these transitions is not scientific; “the
Five-Year Rule” is really just a loose rule of thumb. It’s an observation that
transitions always seem to happen over similar time frames, but everything
about this evaluation process is fuzzy. For example, when do you mark the start
date of a transition? I usually use the first release date of the software, but
sometimes that doesn’t make sense. Take, for example, the current rise of
DaVinci Resolve: Resolve was originally released in 2004, but for most of its
lifetime it was a more specialized tool focused on color grading (and only had
100 users in 2009). Later Resolve was acquired by Blackmagic Design, who both
reduced the price and added functionality to make it function as a standalone
NLE (i.e., comparable to other NLEs like Adobe Premiere and Final Cut Pro) with
version 11 in 2014. In this case, 2014 makes more sense as the start date for
the transition in NLEs, and using that date it roughly follows the five-year
rule:

> The software had a user base of more than 2 million using the free version
> alone as of January 2019.[90] This is a comparable user base to Apple’s Final
> Cut Pro X, which also had 2 million users as of April 2017.

Then there’s the question of determining when a transition has occurred. To do
this, I relied on market share numbers when available (looking for the date when
an up-and-coming application overtakes the dominant player in popularity)
usually from informal surveys conducted online. When no data is available, I
resorted to anecdotal accounts I could find online. This is of course inherently
flawed, but it seems like enough to make the case for a rough rule of thumb.


THE TRANSITIONS




TRANSITIONS FROM THE DESIGN WORLD

All the best software transitions are from the design world. This is because
design as an industry consolidates around a single application for each
category (for example, Figma for user-interface design, and InDesign for print
design). I’m not sure why this is, but I think a contributing factor is that
design is unique relative to most other creative fields in that the designer’s
output generally is not the final product, e.g., a design in Figma needs to be
implemented separately in software. Whereas, say, when editing a video, the
exported video is the final product.

PAGEMAKER TO QUARKXPRESS

QuarkXPress goes from its introduction in 1987 to 95% market share during the
1990s.



PageMaker 7.0 running on Mac OS 9

Aldus PageMaker was the first widely-used desktop publishing app. Three events
happened in quick succession which ushered in the desktop publishing revolution:

 1. 1984: The debut of the Apple Macintosh
 2. 1985: The debut of the Apple LaserWriter
 3. 1985: The release of Aldus PageMaker

Soon after, QuarkXPress (1987, Mac) was released and began its ascent. There’s
not much information available about this transition, but QuarkXPress version
3, released in 1990, appears to be the turning point. By 1994, when Adobe
purchased Aldus, QuarkXPress was considered the dominant application by a wide
margin.

The transition appears to roughly follow the five-year rule: QuarkXPress had
95% market share in the 1990s, which makes it likely that by 1992 it had
already surpassed PageMaker, the pattern the five-year rule predicts.

With that said, this isn’t a great example of a transition because it happened
so soon after the invention of desktop publishing, which means PageMaker hadn’t
had enough time to become firmly entrenched yet. Transitions are most
interesting when they overcome the inertia an application has when it truly
owns a category. This transition was included anyway because it helps set the
stage for the next couple of transitions, which are also in the desktop
publishing industry.

QUARKXPRESS TO INDESIGN

QuarkXPress loses its dominant market position to Adobe InDesign over the
course of about six years.



QuarkXPress

It’s hard to overstate how dominant QuarkXPress’s position was as the industry
leader for desktop publishing software in the 1990s. For example, in 1998 Quark
made an offer to buy Adobe.

But we don’t talk much about QuarkXPress today, its fall being so great that
it’s drifted into irrelevance. I’ve always considered this the canonical
software transition, because it went from being so dominant to so rarely used.
It happened long enough ago that it’s a story woven into the fabric of
computing history.

How did InDesign beat QuarkXPress? It starts with our old friend PageMaker, and
its parent company Aldus. Adobe purchased Aldus2 in 1994, with the intent of
taking on QuarkXPress. InDesign was based on the source code of a successor to
PageMaker that Aldus had begun developing in-house, and was first released in
1999:

> [Adobe] continued to develop a new desktop publishing application. Aldus had
> begun developing a successor to PageMaker, which was code-named “Shuksan”.
> Later, Adobe code-named the project “K2”, and Adobe released InDesign 1.0 in
> 1999.

At the time, Quark had a reputation of having become complacent, making
user-hostile decisions and just assuming their customers would go along with
them. Dean Allen describes customers’ animosity towards Quark:

> Pagemaker [sic], a crash-prone beast with a counterintuitive interface and
> slow as molasses in winter, was eventually bought by Adobe, whereupon everyone
> stopped using it and its unofficial name (‘Pagefucker’) took on common usage.
> This sent Quark flinging toward its destiny: to become a hostile monopoly
> spinning around in circles of pointless development, embracing dead-end
> technologies only to abandon customers once profitability proved unlikely,
> hobbling their own products with draconian antipiracy measures, signing
> unbendable licensing agreements in blood with newspaper chains, joining up
> with enemy Adobe to squish Quickdraw GX (one of many promising standards that
> actually showed a glimpse, at great development cost, of how sophisticated
> graphic design on computers could be before withering and dying once Quark
> said no thanks), and of course pissing off customers who paid a fortune for
> the privilege. And still it made horribly, horribly typeset pages.

Then there’s the bit where Quark bet against Mac OS X. Dave Girard for Ars
Technica has an in-depth piece on the decline of QuarkXPress that contains a
choice quote from Quark CEO Fred Ebrahimi:

> Quark repeatedly failed to make OS X-native versions of XPress—spanning
> versions 4.1, 5, and 6—but the company still asked for plenty of loot for the
> upgrades. With user frustration high with 2002’s Quark 5, CEO Fred Ebrahimi
> salted the wounds by taunting users to switch to Windows if they didn’t like
> it, saying, “The Macintosh platform is shrinking.” Ebrahimi suggested that
> anyone dissatisfied with Quark’s Mac commitment should “switch to something
> else.”

In 2003, after a few years of development of InDesign, John Gruber at Daring
Fireball posted about InDesign vs. QuarkXPress:

> Competition was restored when Adobe launched InDesign, which offers vastly
> superior typographic features than does QuarkXPress. But QuarkXPress still
> dominates the industry, even though InDesign:
> 
>  * has been out for several years;
>  * is widely-hailed as a superior product;
>  * costs less;
>  * reads QuarkXPress documents; and
>  * comes from a company people actually like

Daring Fireball continued to post about QuarkXPress and InDesign, and the
chronology of the subsequent posts traces a nice little history of InDesign
overtaking QuarkXPress:

 * 2003: “InDesign hasn’t taken more of the market away from Quark”
   
   > I’m not quite sure why this is, that “InDesign hasn’t taken more of the
   > market away from Quark”. My best guess is that when Quark knocked off
   > PageMaker, the industry was still nascent; but during the 90’s it matured
   > and atrophied around Quark. InDesign may not be too little, but it might
   > have been too late.

 * 2005: “InDesign is clearly winning the horse race”:
   
   > Another interesting market share story is the relative position of Quark
   > and InDesign. InDesign is clearly winning the horse race, with sales up
   > 37%, while Quark book sales declined 35%.

 * 2006: He doesn’t know “a single designer who hasn’t switched to InDesign”:
   
   > I personally don’t know a single designer who hasn’t switched to InDesign.

InDesign was released in 1999, so 1999–2006 is seven years, which is close
enough for the accuracy we’re shooting for with the five-year rule. But that’s
tracking until QuarkXPress had almost disappeared, whereas the five-year rule
really tries to predict when the new player overtakes the original dominant
player in popularity, which would have happened earlier than that. Without any
market share data to go on, we’ll just have to guess when that might have
happened. For the purposes of this piece, I said six years, which is within the
accuracy we’re shooting for.



InDesign

PHOTOSHOP TO SKETCH

Sketch becomes the most popular user-interface design tool, overtaking Photoshop
over the course of about five years.

While the QuarkXPress and InDesign transitions feel like ancient history,
Photoshop to Sketch still feels fresh. There’s so much mind share around this
transition, and even more so for the subsequent transition from Sketch to Figma,
that they feel bound to be the new default case studies in software transitions.

--------------------------------------------------------------------------------

Before getting into the history of Sketch itself, it’s important to quickly
note the history of Fireworks, the dedicated design application that Adobe
acquired as part of the Macromedia acquisition in 2005. After the acquisition,
development of Fireworks was quickly paused, citing too much overlap with
Photoshop. It was officially discontinued in 2013, but designers had considered
it long dead before that.

Fireworks is important to mention because, while it never really set the world
on fire, it had already demonstrated interest in a dedicated design app, and
when it was discontinued a vacuum was left that Sketch was able to capitalize
on. If you’re looking for where unexpected innovations will come from, look for
areas of neglect.

--------------------------------------------------------------------------------

Sketch was first released in 20103, but that’s not where its history begins.
Before Sketch, the company behind it, Bohemian Coding, had a vector drawing app
called DrawIt that would form its basis.

I was working as a user-interface designer when Sketch was released, and at the
time Photoshop’s hold on the user-interface design market was tenuous.
User-interface designers only used a tiny portion of Photoshop’s features
(mainly vector drawing tools and layer effects). The idea of breaking those
features out into a separate, dedicated design app was in the ether at the time
(followed by adding some user-interface design specific features, like
symbols). Adding fuel to the fire, around this time apps had begun leveraging
the OS X Core Image and Core Graphics frameworks to make raster image editing
apps replicating the functionality of Photoshop, like Pixelmator and Acorn. It
seemed like only a matter of time until these same frameworks were leveraged to
make a user-interface design app.

Sketch made a splash with its initial release, but the inflection point was
really the release of Sketch 3 in 20144, which included a key feature: Symbols.
Symbols are re-usable components, an important feature when designing user
interfaces, which usually require repeating the same element in different
contexts with slight variations (e.g., picture the same button but with
different text). By the Subtraction Design Tools Survey in 2015, Sketch had
received the most votes as the designer’s tool of choice, beating out Photoshop
by 5%.

How was Sketch able to disrupt a behemoth application like Photoshop, one that
had owned the design space for so long? On one hand, it was just focus:
Photoshop is a photo editor first and foremost, and using it for design was
always a bit of a hack. But something else happened that paved the way for the
rise of Sketch, and later Figma: Flat design.

Apple announced iOS 7 in 2013, radically changing the user-interface design of
iOS. Before flat design, Apple had been pushing a skeuomorphic style simulating
real-world objects using rich textures and whimsical animations. Photoshop,
which combined rich bitmap editing features with vector editing tools, was a
much better fit for the skeuomorphic style than the austere flat design.



Wikipedia’s iOS 6 and iOS 7 screenshots side-by-side



Review for iPad, an app I designed during the skeuomorphic era

I think it’s underappreciated just how bizarre the change from skeuomorphic to
flat design was for the design tool market. For the capabilities that a
software package requires to serve a market to suddenly shrink so drastically
was unprecedented. It’s as if 3D modeling software suddenly didn’t care about
realistic textures and lighting; of course that would open up the market to
disruption. The priorities of the software changed dramatically, making room
for new approaches: an opportunity that more nimble startups were best
positioned to capitalize on, while the larger software packages, which already
had a lot of customers depending on their existing feature sets, would be
slower to adapt.

To top it all off, flat design also facilitated a new workflow for designers
that Sketch was able to capitalize on: Photoshop had always been used to export
the actual image assets that were then reassembled in code to create the
design. Sketch (and later Figma) never really had to work this way, since flat
design is mainly comprised of text, lines, and gradients, rather than textures;
these can easily be created in code themselves, so they don’t need to be
exported5.

I’d argue that not needing to export assets is a larger change than it might
seem, because it changes which category user-interface design software fits
into. For all of the other major applications in the Adobe Creative Suite, like
Premiere, Photoshop, and Illustrator, the final asset (the photo, the movie,
the artwork) is actually exported from the application. In that way, Premiere,
Photoshop, and Illustrator fit into one category of software: applications for
making digital content. Figma and Sketch (outside of the occasional SVG export)
are mainly software for communicating a design. In that way, they’re closer to
presentation software like Keynote and PowerPoint than they are to the rest of
the Adobe suite. Presentation software, like Sketch and Figma, is used to
communicate ideas, not to build the actual artifacts used to create digital
content.

You can blame Adobe for missing the boat on user-interface design deserving its
own tool, instead of shoehorning a photo editor into that purpose (I’d love to
have been a fly on the wall for the decision to kill Fireworks, for example),
but it’s harder to fault them for not realizing that this new dedicated design
tool would be closer to Google Slides than to the other software that’s Adobe’s
bread and butter, like Premiere, Photoshop, and Illustrator.

SKETCH TO FIGMA



UI design tool popularity from the Subtraction Design Tools Survey (2015) and UX
Tools (2016-2020)

This graph illustrates not just that Figma overtook Sketch over about five years
(2015–2020), but also Sketch’s own five-year ascent to overtake Photoshop (since
Sketch was released in 2010 and it starts out ahead in 2015).

Dylan Field and Evan Wallace started working on Figma in 2012, and it was
initially released in 2016. Figma runs entirely in the browser: it has a 2D
WebGL rendering engine, built in C++ and compiled to WebAssembly, with
user-interface elements implemented in React. This stack excited a lot of
people because, before Figma, there had never been a successful creative app
that was a web app. After seeing The Matrix, film director Darren Aronofsky
asked “what kind of science fiction movie can people make now?” Similarly,
after the success of Figma, startup founders have been asking “what kinds of
software can be made as web apps now?” So far the answer has been “not many”;
I’m not aware of a single other startup that has fulfilled this promise6.

As was mentioned at the end of the PageMaker to QuarkXPress section, it’s
actually quite common for transitions to happen soon after the introduction of a
new software category, before an application has time to become firmly
entrenched in actually owning a category. For example, Microsoft Word came four
years after WordPerfect, and Microsoft Excel came eight years after VisiCalc.
There’s a “primordial ooze” phase right after a new category emerges where lack
of product maturity means it’s relatively easy for new players to enter a
category until a dominant player emerges. Sure, it’s still interesting to look
at the factors that determined which application becomes successful through that
process, but what’s more interesting is when an application has had time to
become entrenched and then gets supplanted. In this case, the entrenched
application was Photoshop, and the application responsible for supplanting it
was Sketch, not Figma.

In the section on Photoshop to Sketch, we discussed an underappreciated factor
in Sketch’s, and by extension, Figma’s, success: That flat design shifted the
category of design software from professional creative software to something
more akin to an office suite app (presentation software, like Google Slides,
being the closest sibling). By the time work was starting on Figma in 2012,
office suite software had already been long available and popular on the web,
Google Docs was first released in 2006. This explains why no other application
has been able to follow in Figma’s footsteps by bringing creative software to
the web: Figma didn’t blaze a trail for other professional creative software to
move to the web, instead Sketch blazed a trail for design software to become
office suite software, a category that was already successful on the web.

Another factor that’s rarely mentioned in Figma’s success is that co-founder
and former CTO Evan Wallace appears to me to be a once-in-a-generation
programmer, deserving to be on a short list with the likes of Ken Thompson,
Linus Torvalds, and John Carmack. Figma itself is evidence of Wallace’s skill,
especially since no other company seems to be able to make another web app that
feels as nice. It’s rare for a technical implementation to act as a moat, but
that appears to be what has happened with Figma. For evidence of how the
architecture Wallace pioneered for Figma is spreading, Google Docs is switching
to canvas-based rendering. I’m also struck by the startling beauty of some of
his early work that predates Figma, like this gorgeous WebGL water simulation,
presumably done while Wallace was studying computer graphics at Brown
University. Then there’s esbuild, a JavaScript bundler like webpack, with
amazing performance. The homepage for esbuild sports this graph:



There’s a point at which the magnitude of the performance improvements starts to
show contempt for your competitors, and esbuild crosses that line. Wallace left
Figma at the end of 2021, less than a year before Adobe’s acquisition of Figma
was announced in 2022.


THE RISE OF VISUAL STUDIO CODE



Visual Studio Code, first released in 2015, becomes the most popular text editor
by 2018, over the course of just three years.

I’ve already written about the rise of Visual Studio Code. In some ways the rise
of VS Code is similar to Figma’s, in that another, earlier trailblazer first
disrupted the market before VS Code came in and really took over. In Figma’s case
it was Sketch, and in VS Code’s case it was Atom, released a year before VS Code
in 2014. Atom illustrated that there was a market for an open-source web-based
text editor built around extensions7. VS Code took that formula and solved the
main problem that held it back: Performance. Atom was known for being slow, and
VS Code has a reputation for being snappy in comparison.


CONCLUSION



I looked at six examples of software transitions of big creative apps, starting
with tracing the history of print design, then transitions from the
user-interface design world, and finally the rise of Visual Studio Code. In
this imperfect, but hopefully still useful, analysis, all of those transitions
took between three and six years with an average (and median) of five years.

When I started out writing this piece, there were a few things I wanted to
illustrate. The first was that transitions happen at all, even with big
creative software that’s firmly entrenched in an industry. There’s a perception
that most of the preference for one application over another just comes down to
path dependence. And there’s a lot to that argument: these kinds of
applications8 often have whole asset pipelines built around them9. But if we can
illustrate
that transitions do happen, then perhaps it’s less about inertia and more about
the relative merits of different software packages? In the end, the evidence I
found for this was lukewarm at best. Out of the five transitions I looked at, two
(PageMaker to QuarkXPress and Sketch to Figma) are “primordial ooze”
transitions, i.e., transitions that happened early enough after the creation of
a new software category that there wasn’t enough time for path dependence to
become a factor. That leaves just three transitions: QuarkXPress to InDesign,
Photoshop to Sketch, and the Rise of Visual Studio Code. That’s not very many
for a ~35-year-old industry10.

Another reason I wrote this piece is to illustrate where transitions are
unlikely to happen. A lot of software just hums along with lower usage numbers,
and yet I often see people commenting on how it’s on a path to disrupt an
industry. But I don’t think an application humming along at lower usage numbers
has ever ended up disrupting an industry. That doesn’t mean it can’t be a great
business, but Adobe (the company that comes up again and again in this piece)
has ~26,000 employees. The question is what scale the software will operate at.
Ableton has ~350 employees and Maxon has ~300; anecdotally, it seems like a lot
of software categories can operate at fewer than 100 employees, but 100+ really
requires owning a market of some kind.

Overall my conclusion is that what accounts for the rarity of transitions is
that for a transition to happen, one of two preconditions needs to hold, both
completely outside of the control of the new piece of software. One is that the
existing market leader has to make a major mistake: QuarkXPress betting
against OS X for the print industry, and Adobe killing their design-focused tool
Fireworks, are examples of this. The second is that a fundamental shift in the
industry has to happen: the rise of flat design coinciding with the ascent of
Sketch is an example of this. Similarly, with the rise of web-based software on
the list (VS Code and Figma), a technical groundwork had to be in place before
these could become viable. For example, for Figma to create their web-based
performant graphics engine, WebGL (initial release in 2011) and asm.js (initial
release in 2013) both had to be in place.

In the end, just building a great software product is not enough to lead to a
transition, you also need the incumbent market leader to make a mistake, or
market conditions to fundamentally change (often due to new technology
breakthroughs), and preferably both.


APPENDIX



In the appendix I’ll look at a few more interesting transitions that don’t fit
in the narrow category of professional creative software.


THE PC REVOLUTION

Client-Server Architecture usage at businesses goes from 20% to over 50% in four
years from 1989 to 1992.

The PC revolution had several phases. There’s the introduction of mass-market
computers like the Apple II, but that was an introduction, not a transition
(i.e., people buying a computer for the first time, not switching from one kind
of computer to another), so it’s less relevant to this piece. What’s more
relevant is the transition from predominantly centralized computing (e.g.,
mainframes or minicomputers) to the client-server model, where the client and
server are both commodity PCs (under the centralized computing model, terminals
and mainframes are radically different architectures).

This transition is of course distant history. I’m mainly relying on one source:
A paper titled Technical Progress and Co-invention in Computing and in the Uses
of Computers by Timothy Bresnahan (Stanford University) and Shane Greenstein
(University of Illinois), published in the Brookings Papers on Economic Activity
in 1996.

The focus of Bresnahan and Greenstein’s paper isn’t why companies were
transitioning from mainframes to client-server; it takes the stance that the
switch is inevitable, and is more concerned with what might slow it down,
given that the advantages were obvious. Here’s how they described the advantages of
the client-server model:

> Client/server computing emerged as a viable solution to the problem by
> promising to combine the power of traditional mainframe systems with the ease
> of use of personal computers (PCs). A network could permit the functions of a
> business system to be divided between powerful “servers” and easy to use
> “clients.” By the late 1980s the promise of C/S was articulated and
> demonstrated in prototypes, and the competitive impact was quick and powerful.
> Firms selling traditional large-scale computer systems saw dramatic falls in
> sales, profits, and market value.

As to what slowed down the switch, Bresnahan and Greenstein mainly attribute
this to the additional cost of “co-invention”: The additional work required by
businesses to adapt the new computing model to their needs (distinguished from
“invention”, because this work is done by the businesses themselves):

> Despite the speed and ambition of this technical progress, C/S did not become
> strictly better than mainframes. Instead, by the mid-1990s, each platform had
> distinct advantages and disadvantages. On the one hand, pre-existing data and
> programs for large applications were (necessarily) on host-based systems. If
> newly developed applications were simply improvements to the old, then there
> would be cost advantages to continuity. If new applications also needed to
> interact with the old programs or data, even greater advantage would arise.
> Finally, even with many technical problems solved, C/S still called for
> co-invention, which was potentially costly and time consuming, especially for
> complex applications. Hence, some users were going to switch cautiously, if at
> all.

Figuring out how long the transition from mainframes to the client-server model
took is more difficult than with software, because there isn’t a clear date to
mark the start of the transition. With software, we can use the first release
date of the software, but with something like the client-server model, which had
many moving parts evolving together to eventually create a compelling package,
there isn’t an obvious start date. Bresnahan and Greenstein choose 1989 as the
start date, because in their words, “before 1989 workstations and personal
computers could no more replace mainframes than could the people of Lilliput
wrestle Gulliver to the ground.”



The sample of companies and their distribution of mainframe versus client-server
over time.

Client-server starts out at about 20% in 198911, and the sum of mixed
mainframe-and-client-server businesses and all-client-server businesses
surpasses all-mainframe businesses in 1992, so that’s four years12.


THE RISE OF GOOGLE CHROME

Google Chrome is introduced in 2008 and becomes the most popular browser five
years later in 2013.

One of the most popular transitions to discuss is browser market share, because
it’s so impactful. Browser popularity has special significance because the
browser is an application that runs other applications, and therefore determines
a lot about the fate of the web apps that run inside it.

Additionally, through the very nature of the browser, it’s very easy to collect
market share data. The main use of the browser is to access arbitrary remote
servers, so all you need to do to collect market share numbers is for some of those
servers to record which browser is being used to access the site. A number of
different sites have done that over the years.



Browser usage data aggregated from the *Usage share of web browsers* Wikipedia
page13, which includes data from several different sources.

Firefox (released in 2002) once looked like it was on a trajectory to become the
market leader. If it had, it would have been the slowest transition in this
piece. But it didn’t; instead, it plateaued almost immediately when Google Chrome
was released in 2008. An interesting question is whether, if Chrome hadn’t come
along, Firefox would eventually have become the market leader. I don’t really
have an answer to that question, but the five-year rule would say no: the
transition was happening too slowly, and the more likely outcome is that the
market opportunity (in this case created by the stagnation of Internet Explorer)
would be capitalized on by a more aggressive player that completes the
transition over the course of roughly five years, which is exactly what happened.

Chrome’s rise is a textbook example of the five-year rule: released in 200814,
it became the most popular browser in 2013. Google themselves have a wonderful
comic with words from the Chrome team and illustrations by Scott McCloud (of
Understanding Comics fame). The overall message is that the browser was
originally designed for sharing documents, but that the web had moved towards
serving applications instead of documents, so Chrome is a browser designed from
the ground up to serve applications. The features they emphasize are
operating-system-style process isolation (an innovation that is now standard
across all browsers), a new JavaScript VM (V8) built from the ground up for web
apps (e.g., introducing JIT compilation), a user interface focused around tabs
(e.g., the Omnibox), and incognito mode.


SUBVERSION TO GIT

git is introduced in 2005 and the total number of repositories using git
overtakes Apache Subversion nine years later in 2014. But I’d argue more
developers were using git for their work within five years, by 2010.



The total number of repositories using Subversion vs. git. The data was
collected by Ohloh, now called Black Duck Open Hub, a site that “aims to index
the open-source software development community”. The data was sourced from a
summary on StackExchange.

Linus Torvalds began developing git in 2005 to manage the source code for the
Linux kernel as a replacement for BitKeeper after a messy situation with
BitMover, the parent company behind BitKeeper. The key features that have made
git successful are its distributed nature (history is mirrored on every user’s
computer, instead of only being hosted remotely), its speed, and the simplicity
of its architecture. GitHub, the hosting service for git repositories, launched
in 2008, further paving the way for git to become by far the most popular
version control system today. According to the 2021 Stack Overflow Developer
Survey, git is used by 93% of developers.

The Ohloh data at first appears to illustrate that git took a long time to
overtake Subversion (note that the graph starts at 2009 while git was first
released in 2005, so there are really four years missing to the left that we
simply don’t have data for). But it’s important to note Ohloh is measuring the
total number of repositories using git, whereas the other surveys are measuring
which software users report that they’re actually using for their work. In other
words, Ohloh is tracking every project that has ever been created in a version
control system, when we actually want to track which system is being used more
often.

Tracking the number of Stack Overflow questions about different version control
systems over time probably maps more closely to which version control system is
actually being used. This data shows git overtaking Subversion in 2010, five
years after git was first released.



Stack Overflow Trends changes in questions about version control systems over
time.

--------------------------------------------------------------------------------

 1.  These apps often Follow Zawinski’s Law at least in spirit, if not
     literally. ↩︎

 2.  Adobe also acquired FreeHand from this transaction, but the FTC blocked
     Adobe from owning FreeHand, so its assets were returned to Altsys (who had
     been licensing the rights to FreeHand to Aldus). Altsys was acquired by
     Macromedia in 1995, and it became Macromedia FreeHand. Then of course in
     2005, Adobe acquired Macromedia and FreeHand along with it, and it became
     Adobe FreeHand, which was then discontinued in 2007 due to too much overlap
     in functionality with Adobe Illustrator. ↩︎

 3.  The year Sketch was released, 2010, was a year before Apple introduced
     sandboxing to the Mac App Store. I’ve always thought Sketch was the epitome
     of what Apple, and the NeXT lineage that preceded it, were trying to
     accomplish by providing a robust software development framework designed to
     increase developer productivity in order to enhance innovation.
     
     The first web browser, and the genre-defining video games Doom and Quake
     were both developed on NeXT machines. Not bad for a platform that only
     shipped 50,000 units!
     
     Mac App Store sandboxing was the end of that vision, since seemingly no
     major creative apps can be sandboxed. ↩︎

 4.  Sketch left the App Store in 2015. ↩︎

 5.  Development frameworks for user-interface design themselves maturing was
     another factor that reduced the dependency on exported assets. For example,
     CSS introduced features like drop shadow and rounded corners around the
     same time that Sketch was becoming popular. Before those features were
     added to CSS, implementing those features required exporting assets. ↩︎

 6.  I’m deliberately excluding Electron when I say no startups have followed in
     Figma’s footsteps, for a couple of reasons:
     
     1. Electron apps don’t leverage the collaborative advantages of the web.
     2. Electron apps have nothing to do with the 2D WebGL rendering engine
        that’s at the heart of Figma. This is what allowed Figma to be able to
        compete in new categories in software that were previously not feasible
        for web software, e.g., vector and bitmap rendering in this case.
     
     ↩︎

 7.  Light Table, an important predecessor to Atom, was an even earlier
     demonstration of the advantages of a web-based text editor focused on
     extensions. Again, this early competition illustrates the “primordial ooze”
     phase that new categories go through (or, more precisely in this case, a new
     way of approaching an old category: text editors). ↩︎

 8.  One of the great beauties of programming (and writing) is that it’s built
     on plain text, which removes many of the barriers that make it difficult to
     switch software. If the rest of your video editing crew is using Adobe
     Premiere, there’s no feasible way for you to contribute to the same project
     using something else. Or if you need to export a PDF with exact Pantone
     color values for a large print run, you’re going to be more hesitant about
     trying a new application (because even slight deviations could have
     irreparable consequences).
     
     Working in plain text has none of these problems: you can freely
     collaborate with anyone else regardless of what software they’re using, and
     exporting plain text is always 100% accurate. ↩︎

 9.  An interesting side note about print design pipelines is that I’ve heard
     that it’s the reason AppleScript has survived for so long. In particular,
     the reason AppleScript survived the transition from Mac OS 9 to OS X is
     that the print design industry depended on it so much. ↩︎

 10. Another reason transitions are so rare is that to clearly trace a
     transition, you first need to have a single dominant application to
     transition from. For some reason, single dominant applications seem to be
     common in visual design fields.
     
     3D modeling, NLEs, and DAWs all have much more diverse markets, so much so
     that you wouldn’t even be able to talk about a transition, instead you’d
     just be talking about a new application being added to the cornucopia of
     options (Ableton Live is a great example of this).
     
     In one of our sections, the Rise of Visual Studio Code is actually about
     the end of that kind of diversity for text editors (before VS Code, no
     single text editor had over 30% market share; now, in the most recent Stack
     Overflow Developer Survey, VS Code has almost 75%), which is astonishing and
     I’m surprised that it isn’t talked about more. I’d generally consider
     diversity a healthier market: It provides options for people who want a
     different experience, the competition forces innovation, and there’s no one
     product that can exploit its market position in user-hostile ways. ↩︎

 11. I decided to use Bresnahan and Greenstein as a source because the paper is
     thorough and backed by data, but the paper is looking at mainframe vs.
     client-server in the context of companies, which is likely the wrong prism
     to evaluate the transition by. E.g., the New York Times had an article in
     1984 stating that personal computers were already outselling mainframes:
     
     > For the first time, the value of desktop, personal computers sold in the
     > United States - computers that were almost unheard of only eight years
     > ago - will overtake sales of the large “mainframe” machines that first
     > cast America as the leader in computer technology.
     
     ↩︎

 12. The mainframe is also a great example of another consistent pattern:
     Technology that loses its market dominance rarely fades away completely, it
     often thrives indefinitely in a niche. The mainframe is a classic example
     of this, and continues to thrive to this day. In 1991, technology writer
     Stewart Alsop wrote, “I predict that the last mainframe will be unplugged
     on 15 March 1996.” Admitting he was wrong, he ate his words in 2002. The
     mainframe business has only grown since then, for example, here’s how
     Benedict Evans summarizes post-PC IBM:
     
     > The funny thing is, though, that mainframes didn’t go away. IBM went
     > through a near-death experience in the 1990s, but mainframes carried on
     > being used for mainframe things and IBM remained a big tech company. In
     > fact, IBM’s mainframe installed base (measured in MIPS) has grown to be
     > over ten times larger since 2000. Most people working in Silicon Valley
     > today weren’t even born when mainframes were the centre of the tech
     > industry, but they’re still there, inside the same big companies, doing
     > the same big company things. (This isn’t just about IBM either - the UK’s
     > sales tax system runs on DEC’s VAX. Old tech has a long half-life).
     > Mainframes carried on being a good business a long time after IBM stopped
     > being ‘Big Blue’.
     
     ↩︎

 13. This graph uses the data from OneStat.com, TheCounter.com, StatOwl.com, and
     W3Counter on the Usage share of web browsers page. Notably, it isn’t always
     clear whether the data also includes mobile versions of the browsers; as
     always, making sense of the available data is an imprecise process. ↩︎

 14. The first stable version of Chrome to support Mac, Linux, and Windows was
     Chrome 5, released in 2010. ↩︎

--------------------------------------------------------------------------------


SUNDAY, MAY 16, 2021


CODESPACES: GITHUB'S PLAY FOR A REMOTE DEVELOPMENT FUTURE



When I first saw Codespaces, I immediately wanted it. With ubiquitous high-speed
internet, why not offload more work to the cloud? What could our devices look
like if most of their power came from the server? What would their battery life
be like?

Seamlessly leveraging remote resources has always felt like an idea that’s just
around the corner, but never arrives. Just having a big beefy machine on site
usually ends up being the most practical solution (outside of some specialized
use cases)1.

Codespaces is perhaps the biggest play ever to take remote development more
mainstream. Development has always been a prime candidate for remote computing;
after all, with time-sharing machines, it’s how the roots of programming itself
began2.


VISUAL STUDIO ONLINE TO GITHUB CODESPACES

GitHub Codespaces began as a different product, called Visual Studio Online.
Visual Studio Online was announced on the Visual Studio Blog in November 2019.
Then, in April 2020, it was renamed to Visual Studio Codespaces; Nik Molnar
described the motivation behind the name change on the same blog:

> We learned that developers are finding Visual Studio Online to be much more
> than just an “editor in the browser”. They are saying that “the capabilities
> of this cloud-hosted dev environment make it the space where I want to write
> all my code“.
> 
> To better align with that sentiment, and the true value of the service, we’re
> renaming Visual Studio Online to Visual Studio Codespaces.

A few days later, a corresponding announcement appeared on the GitHub blog that
Codespaces was coming to GitHub. Then, almost a year later in September 2020, it
was announced on the Visual Studio Blog that Visual Studio Codespaces would be
consolidated into GitHub Codespaces, and that Visual Studio Codespaces would be
retired in February 2021.

Visual Studio Codespaces was similar to GitHub Codespaces, but it did have some
key differences. Visual Studio Codespaces wore more of its implementation
details on its sleeve, in particular, as being built on top of Microsoft Azure.
When you set up a Visual Studio Codespace, it was linked to an Azure
subscription and location, and you chose a “Default Instance Type” for new
codespaces3.



The decision to remove these details from GitHub Codespaces, and provide quick
access to launch a codespace from a repository, was highlighted in the
announcement letter about shutting down Visual Studio Codespaces in favor of
GitHub Codespaces:

> During the preview we’ve learned that transitioning from a repository to a
> codespace is the most critical piece of your workflow and the vast majority of
> you preferred a richly integrated, native, one-click experience.

This is a great example of iterative product design. From a practical
perspective, Visual Studio Codespaces is essentially the same product as GitHub
Codespaces (and GitHub Codespaces is presumably also running on Azure), but
hiding the virtual machine implementation details makes GitHub Codespaces feel
different, and a bit more revolutionary4.


TOUR

Once you’re in the Codespaces beta, a “Codespaces” item appears in the
navigation menu when you click your user icon in the upper right5. Click it, and
you’re brought to a screen where you can manage the Codespaces you’ve already
created, including removing them by clicking “Delete” under the three disclosure
dots.



Every repository also has an “Open with Codespaces” option, which can either
create a new Codespace or open an existing one for that repository.



After opening a codespace, you’re brought to a browser window running Visual
Studio Code. It works similarly enough to the desktop version that it’s
practically indistinguishable6.



Alternatively, you can connect to the codespace directly from the desktop
version of VS Code by using the Visual Studio Codespaces extension. The
extension adds a “Remote Explorer” icon to the Activity Bar where you can
connect to, and manage, your codespaces.



The About Codespaces section of the documentation explains a couple of details
about the relationship between codespaces and repositories:

> Each codespace is associated with a specific branch of a repository. You can
> create more than one codespace per repository or even per branch. However,
> each user account has a two-codespace limit during limited public beta.


IMPLEMENTATION DETAILS

Codespaces uses Docker containers to set up development environments. GitHub and
Microsoft call a running codespace a “development container”, presumably after
Docker containers, emphasizing their close relationship.
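
The development container is configured with a file checked into the repository
itself. Here’s a minimal sketch of a .devcontainer/devcontainer.json; the
specific image, port, command, and extension are hypothetical examples, and the
authoritative schema is in the Codespaces documentation:

```jsonc
// .devcontainer/devcontainer.json (illustrative sketch; the image, port,
// command, and extension below are example values, not prescribed ones)
{
  // Base Docker image the codespace is built from
  "image": "mcr.microsoft.com/vscode/devcontainers/python:3",

  // Ports to forward from the container to the local machine
  "forwardPorts": [3000],

  // Command run once after the container is created
  "postCreateCommand": "pip install -r requirements.txt",

  // Extensions to install inside the development container
  "extensions": ["ms-python.python"]
}
```

When a codespace is created from the repository, this file determines the
image, forwarded ports, setup commands, and preinstalled extensions, which is
what makes the environment reproducible.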

Regarding what’s running locally, and what’s running in the development
container, the Remote Development FAQ describes how the user-interface runs
locally, i.e., in the browser or VS Code app, while a separate process running
on the server (“VS Code Server”) handles the operations that need to happen on
the server, such as file system access:

> The VS Code Server is quickly installed by VS Code when you connect to a
> remote endpoint and can host extensions that interact directly with the remote
> workspace, machine, and file system.

The FAQ also includes this handy diagram illustrating what’s running on the
server and what’s running locally:



Whether extensions run locally or on the development container depends on
whether they “contribute to the VS Code user interface”. If they do, they’re
called “UI Extensions” and run locally, if they don’t, they’re called “Workspace
Extensions” and run on the server.

Whether extensions are UI Extensions or not, they’re all installed on the
development container at the path ~/.vscode-remote/extensions/:

% ls ~/.vscode-remote/extensions/
castwide.solargraph-0.21.1
davidanson.vscode-markdownlint-0.37.0
dbaeumer.vscode-eslint-2.1.8
dbankier.vscode-quick-select-0.2.9
eamodio.gitlens-10.2.2
editorconfig.editorconfig-0.15.1
...



THE RISE OF VIRTUALIZATION

The story of server-side infrastructure over the last couple of decades is the
story of the rise of virtualization, and, its sibling, containerization. Both
are ways of abstracting the hardware away from the software running on it, which
has some powerful benefits. It makes it easier to add or remove hardware at will,
for example, which simplifies scaling. It also facilitates automating
configuration, which eases deployment. Both of these qualities of virtualization
are leveraged by Codespaces.

AWS, Azure, Docker, Heroku, and Kubernetes are all examples of services or
technologies that leverage virtualization or containerization. It’s also the
backbone of most CI/CD and serverless systems. While virtualization has
revolutionized the server-side, it hasn’t had much impact on development
environments outside of specialized use cases.

There are two, equally valid, ways of seeing the origins of Codespaces: one is
as a natural extension of an editor that began as a browser-based version of
Visual Studio (formerly called “Visual Studio Online”, now “Azure DevOps
Services”), the other is as another step in the march of virtualization
revolutionizing every aspect of development. These could even be considered the
same story: Azure DevOps Services is of course also built on virtualization.


THE PROMISE OF REMOTE DEVELOPMENT

Just being able to quickly spin up a remote development machine from a git repo to
make an open source contribution, or to get a quick development environment to
spelunk into a dependency’s implementation details, is already enough benefit to
make Codespaces popular. But the ceiling of Codespaces’ success hinges on how
useful it is for day-to-day development.

On the VS Code blog, the vision is expressed with admirable restraint, focusing
on the benefits for large code bases and data models requiring “massive storage
and compute services”:

> Because the code bases are so large, we see engineers at shops like Facebook
> (and Microsoft!) use editors like vim to work remotely against secure and
> powerful “developer VMs”, using alternative cloud-based search and navigation
> services that scale beyond what even the best laptop can handle.
> 
> Data Scientists building and training data models often need massive storage
> and compute services to analyze large datasets that can’t be stored or
> processed even on a robust desktop.

In Facebook’s later announcement of their partnership with Microsoft on remote
development, the advantages are expressed in broader terms, suggesting that “any
developer can gain” from remote development:

> As Microsoft’s Visual Studio Code team stated when they first released the
> remote extensions, remote development is an emerging trend. While our use
> cases may be more advanced than most development teams given our scale, any
> developer can gain the benefits of remote development:
> 
>  * Work with larger, faster, or more specialized hardware than what’s
>    available on your local machine
>  * Create tailored, dedicated environments for each project’s specific
>    dependencies, without worrying about errors due to mixed or conflicting
>    configurations
>  * Support the flexibility of being able to quickly switch between multiple
>    running development environments without impacting local resources or tool
>    performance

Those are compelling advantages that most developers could benefit from. So what
are the chances of Codespaces supplanting local development, not just for
specialized use cases, but for developers’ day-to-day work on their main project?

Remote development isn’t new; it’s been around since the dawn of programming,
and VS Code already has best-in-class support for it. But remote development in
VS Code, while frequently praised, hasn’t moved the needle much on its own for
day-to-day development. Which means we can look at the advantages of remote
development that VS Code already had before Codespaces, and note that they
probably won’t be enough on their own to make remote development more popular.
Here are the often-cited advantages of remote development before Codespaces:

 1. Developing in the same server environment that production code runs in.
 2. Using more powerful hardware.
 3. Accessing the same development environment from any machine.

In addition to those advantages, Codespaces has a new trick up its sleeve:
Automatically setting up development environments when a new codespace is
created, by installing dependencies via Docker7. In other words, Codespaces
brings the same automated configuration advantages to the development side that
virtualization and containerization have already brought to the deployment side.
Configuring development environments is surprisingly complex, and subtle
differences between manually configured development machines create their own
problems.

It remains to be seen whether reproducible development environments are enough
of a draw to move more developers over to remote development, but they’re
certainly a compelling solution to a real problem.

Finally, there’s another important trait about Codespaces: It works with
locked-down devices, like iPads, which normally can’t download and execute
source code due to App Store Review Guideline 2.5.2. It also doesn’t require
source code to be checked out locally, which many companies already consider a
big security risk. These advantages will likely make some developers
uncomfortable, particularly those who see current computing trends as the
gradual erosion of user freedoms. But the purpose of this piece is to predict
the impact Codespaces will have on the development process, and the fact that it
aligns well with both the direction some devices are going and many companies’
security goals are important traits to consider.


REMOTE DEVELOPMENT IN PRACTICE

Codespaces creates a fairly convincing illusion of working locally8. This is
especially true when using the VS Code app with the Codespaces extension.
Performing tasks like editing text, project-wide find-and-replace, or file
management in the Explorer doesn’t exhibit any major differences from editing
files locally.

One of VS Code’s best tricks is automatically forwarding ports for URLs printed
in the console when connected to a remote machine. If, for example, a server
process prints 127.0.0.1:3000 (because it’s running on port 3000), then port
3000 is automatically forwarded to your local machine. You can then open that
URL
in a local browser window (or just ⌘-click the URL in the console), just like
you would be able to if the server process were running locally9. This is
another example of how VS Code creates the illusion of working locally.
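Conceptually, the detection step is just pattern-matching the console stream for
localhost URLs. A rough sketch of the idea (this is an approximation for
illustration, not VS Code’s actual implementation):

```shell
# Scan a line of console output for a localhost URL and pull out the port,
# the way VS Code conceptually spots forwardable servers.
line='Server listening on 127.0.0.1:3000'
port=$(printf '%s' "$line" | grep -oE '127\.0\.0\.1:[0-9]+' | cut -d: -f2)
echo "$port"
# VS Code then forwards that port automatically; done by hand it would be
# roughly:
#   ssh -N -L "$port:127.0.0.1:$port" <remote-host>
```

The value of the built-in version is that this all happens without the user ever
thinking about tunnels.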

But there are some situations where the illusion breaks down. Developing offline
is obviously no longer an option. Another example is that when developing
remotely, VS Code becomes the only easy way to edit files. If you want to edit a
bitmap in Photoshop, or open a CSV file in Excel, you’ll have to figure out
another way of doing so.

The vastness of VS Code’s ecosystem is an interesting tangent to explore from
the limitation of only being able to edit files with VS Code. There are
extensions for tasks like editing raster graphics, a Draw.io
editor for diagrams, and a tabular data viewer. If you squint, VS Code starts to
look more like a general-purpose platform rather than just a text editor. The
fact that this platform provides, in many ways, a better experience than, say,
VNC is quite powerful10.



The Draw.io Integration VS Code extension by Henning Dieterichs

Setting up and tearing down development environments at will, which Codespaces
encourages, also has its downsides. If your development environment requires
installing a lot of additional tools, such as compilers, linters, and other
shell tools, then those tools will all need to be installed each time you create
a new codespace. While Codespaces’ dotfiles support can automate this, having
more dependencies will make it take longer to spin up a new codespace.

Finally, the last issue I observed while using Codespaces is that each project
being in its own codespace makes it harder to make changes spanning multiple
projects. This comes up when performing maintenance tasks (like updating
continuous integration settings for several projects at once), making changes
that span multiple projects (like making an API change and then updating consumers
of that API), or even just trying to search through several projects to find a
piece of code I know I’ve already written, but I don’t know where. These are all
problems where organizing projects in the file-system hierarchy makes it easier
to work on several related projects at once. But with Codespaces, every project
is an island11.
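For comparison, with projects organized side by side on one file system, a
cross-project search is a single command (the directory layout and function name
below are made up for illustration):

```shell
# Two toy "projects" sharing a parent directory (hypothetical layout)
mkdir -p /tmp/projects/app/src /tmp/projects/lib/src
echo 'import { parseConfig } from "lib";' > /tmp/projects/app/src/main.js
echo 'export function parseConfig() {}' > /tmp/projects/lib/src/config.js

# One recursive search finds the API and every consumer across projects —
# exactly the operation that's awkward when each project is an isolated
# codespace
grep -rl 'parseConfig' /tmp/projects
```

The same layout makes multi-project edits and maintenance scripts equally
straightforward.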

It’s also worth mentioning that there are many types of development that
Codespaces isn’t applicable for at all. Anything that needs access to local
hardware, like mobile development, is obviously going to be out. The biggest
audience for Codespaces is web developers (which not coincidentally, is the
biggest audience of VS Code itself). Web development is a natural fit for remote
development, since the deployment target of the project is also remote.


CONCLUSION

Codespaces provides enough utility that I suspect it will find its way into
many, if not most, developers’ workflows. Just being able to open a Codespace to
quickly explore, or make contributions to, a repository seems like enough to
make it popular on its own. Not to mention being able to quickly edit a project
from a machine that hasn’t been set up for development12. But the question I find
the most interesting is whether Codespaces also has the potential to replace
local development entirely, at least for some kinds of developer (those that
aren’t deploying to local hardware).

I don’t expect Codespaces to win over many longtime developers, who already have
sophisticated development environments set up, since Codespaces’ biggest gains
come from initially setting up those environments13. The real benefit from
Codespaces comes from never having to set up local development environments
in the first place, ever, over the course of a career. So what will be more
interesting to watch is when new developers join projects. Without Codespaces,
their first task would be to set up their development environment. With
Codespaces, they can just click a button and start coding. Will developers who
start working this way ever get around to setting up local development
environments?14

--------------------------------------------------------------------------------

 1.  The continued relevance of the Mac Pro is an example of how important
     powerful, on-premises hardware still is. ↩︎

 2.  Since Codespaces is still in beta, we’re not going to spend any time
     reviewing bugs or incomplete features, which might be fixed before release.
     This piece is about the full promise of Codespaces and remote development
     when it’s finished. ↩︎

 3.  Setting an instance type will also come to GitHub Codespaces:
     
     > Compute usage is billed per hour, at a rate that depends on your
     > codespace’s instance type. During the beta, Codespaces offers a single,
     > Linux instance type. At general availability, we’ll support three Linux
     > instance types.
     
     It remains to be seen whether these features can be added without
     compromising the one-click experience. ↩︎

 4.  At least one important feature was lost in the transition from Visual
     Studio Codespaces to GitHub Codespaces: self-hosted codespaces (which
     appears to be the most requested feature on the Codespaces Beta forum). In
     a way, it’s not surprising that it was removed, self-hosted codespaces fit
     more naturally into the Visual Studio Codespaces world (why not just let
     users swap the underlying Azure instance with their own hardware?), than
     they do into the GitHub Codespaces world (if a codespace is an extension of
     a repository on GitHub, how does using your own server make sense?). ↩︎

 5.  Earlier in the beta, Codespaces was in the main GitHub navigation along the
     top (i.e., alongside “Pull Requests”, “Issues”, “Marketplace”, and
     “Explore”), I wonder why it was removed from there? ↩︎

 6.  On macOS, the difference that jumps out between running Codespaces in the
     browser vs. the desktop app is that some shortcuts that normally go
     straight to VS Code are instead interpreted by the browser. For example ⌘W,
     which closes the current split or tab, instead closes the entire browser
     tab. ↩︎

 7.  In addition to installing a project’s development dependencies, codespaces
     can also be personalized by installing dotfiles. ↩︎

 8.  Emacs’ Tramp Mode is also known for creating the illusion of working
     locally when editing remote files. ↩︎

 9.  VS Code’s port forwarding also works well with launch configurations. A
     launch configuration can be setup where hitting F5 (for Debug: Start
     Debugging) launches the server and navigates to it in your browser, and
     this launch configuration will work regardless of whether your project is
     running locally or on a remote server. ↩︎

 10. VNC works by sending a video feed from the server to the client (and
     forwarding keyboard and mouse events to the server), whereas with VS Code
     the client is actually running the front-end code. VS Code’s approach seems
     better to me, and it fixes the most glaring problem with VNC today: Video
     compression artifacts. ↩︎

 11. I’ve stopped using Codespaces for my own projects. My development
     environment is quite elaborate (e.g., I install many shell utilities), and
     I also like having all of my projects organized together on the same file
     system, so I can do searches or make edits across related projects. Neither
     of these are a good fit for Codespaces.
     
     But I have found I like some of the benefits of remote development. In
     particular, it’s nice to not have to use local hard drive space for things
     like npm modules, especially for smaller projects. So instead of
     Codespaces, I’ve been using Microsoft’s Remote SSH extension, with a VPS.
     This provides some of the benefits of Codespaces, while working more
     seamlessly with my workflow. This approach also forgoes some of Codespaces’
     major selling points, like automatically setting up new development
     environments, and, perhaps most notably, web access via browser (it should
     be possible to add web access using code-server, if I ever decide I need
     it). ↩︎

 12. Codespaces can also be considered in terms of automation. This is my
     definition of automation:
     
     > Software automation is the alignment of intention and action.
     
     > You should be able to take one action to perform your intention.
     
     > And ideally, that action is configurable, e.g., you can either select a
     > menu item, press a button, or perform a keyboard shortcut.
     
     Codespaces takes what’s normally a multi-step process, e.g., checking out the
     source code and then setting up a development environment, and turns it
     into a single action: Creating an environment for running and editing a
     project. Codespaces similarly optimizes finishing with a project. Normally,
     when you finish with a project, you might just delete the source, but this
     would still leave around any dependencies that were installed globally.
     When you remove a codespace, all of its dependencies are automatically
     removed with it.
     
     With Codespaces, intention and action are aligned. The single action of
     creating or removing a codespace accomplishes the intent of creating a
     working development environment or completely removing it. ↩︎

 13. Codespaces also presents a future for development that’s compatible with
     locked-down devices (e.g., iPads). I once thought creative professionals,
     like programmers, would eventually end up working on locked-down devices
     (defined here as a system that can only run sandboxed apps), but I no
     longer think that’s the case. ↩︎

 14. Replit is a start-up that’s also trying to remove the effort involved in
     setting up and maintaining development environments. See Replit co-founder
     Amjad Masad discuss the original motivation behind it, where he describes
     setting up a development environment as more difficult than development
     itself.
     
     The comparison of Replit to Codespaces is that Codespaces takes existing
     development workflows and works backwards to figure out how to make it
     as easy as possible for new developers to join projects. Whereas Replit
     asks: what if development prioritized making it as easy as possible for new
     developers to start coding from the beginning? Both of these seem like
     valid approaches, and will likely end up serving different segments of the
     market. ↩︎

--------------------------------------------------------------------------------


MONDAY, SEP 21, 2020


THE ERA OF VISUAL STUDIO CODE



The most important thing I look for when choosing which tools to use is
longevity. Learning software is an investment, and if you have to switch to
another application later, you lose some of that investment.

In most software categories, choosing the software with longevity is easy: the
most popular tools are usually the ones that have been around the longest.
Microsoft Excel and Adobe Illustrator were both released in 1987 and, for the
most part, they’ve remained the most popular software in their categories since
then.

Text editors, on the other hand, are a software category where the most popular
options are not the oldest. According to the Stack Overflow Annual Developer
Survey, Sublime Text was the most popular text editor available on the Mac from
2015–2017. Sublime Text was released in 2008, a sprightly youth compared to
Excel and Illustrator. Text editors have been a category with a lot of movement:
In the last 20 years, TextMate, Sublime Text, and Atom have all been the text
editor with the most momentum1. For big complicated desktop software, has any
other category ever had so much movement?

I believe the era of new text editors emerging and quickly becoming popular has
now ended with Visual Studio Code. VS Code has reached unprecedented levels of
popularity and refinement, laying a foundation that could mean decades of market
dominance. If, like me, one of your priorities for your tools is longevity2,
then that means VS Code might be a great text editor to invest in learning
today.

The case for VS Code’s longevity comes from several points we’ll cover in this
piece:

 1. Popularity: It’s crossed a popularity threshold that no earlier text editor
    in recent history has crossed.
 2. The Text Editor as Platform: It’s the endgame of a revolution that saw text
    editors be remade around extensions.
 3. Paradigm Transcendence: It’s transcended its paradigm as a desktop app by
    becoming a hosted web app, and even a reference implementation.
 4. Company Management: It’s managed by a powerful tech company, and it’s being
    developed aggressively.


POPULARITY

VS Code is the most popular text editor today. It’s so popular that it could be
the most popular GUI programming tool of all time. Since 2015, Stack Overflow
has included questions about text editors in their survey3. Back then Notepad++
was the most popular text editor, with 34.7% of respondents saying they were
“likely to use it”. In the following years, the popularities of different text
editors moved around a bit, but nothing ever broke the 40% mark. That is, until
its most recent poll in 2019, when VS Code jumped to 50.7%. This was the second
year in a row that VS Code increased by ~45%, this time jumping from 34.9% in
2018, where it had already been the most popular.


TEXT EDITOR POPULARITY 2015–2019



(Note that Stack Overflow started allowing multiple answers between 2015 and
2016, so I’d take the changes between those two years in particular with a grain
of salt.)


THE TEXT EDITOR AS PLATFORM

So VS Code is objectively wildly popular; the next point we’re going to look at
is more qualitative. For the past couple of decades, text editors have been on a
trajectory that I believe VS Code is the final representation of. This is the
progression of text editors becoming platforms in their own right by increasing
the role and capabilities of extensions. What follows is the history of this
progression4.


PRE-2004: BBEDIT, EMACS, AND VIM

BBEdit, Emacs, and Vim are all great text editors in their own right, but they
all have idiosyncrasies that (while beloved by people like me) prevent them from
ever being the most popular text editor.

Emacs, and Vim’s predecessor Vi, were both first released in 1976, before many
of today’s user-interface conventions were solidified. They predate conventions
like using a modifier key with Z, X, C, and V for undo, cut, copy, and paste
(keyboard shortcuts popularized by the original Macintosh and Windows 1.0,
released in 1984 and 1985 respectively). Neither Emacs5 nor Vim uses these
keys; instead, each has its own terminology. They both use the term “yank”, for
example (although to mean different things: it’s copy in Vim, and paste in
Emacs).

BBEdit was released in 1992, around the time that some of the first GUI tools
emerged that would become dynasties. Note the proximity to Excel (1987),
Illustrator (1987), and Photoshop (1990). And just like those apps, BBEdit is
still relevant today. But unlike those apps, it’s not the most popular in its
category, by a wide margin. The reason seems to be at least partially that it
never fully adapted to a world where text editors put so much emphasis on
package-driven ecosystems.


2004: TEXTMATE

TextMate, released in 2004, is arguably the most influential text editor ever.
Among the numerous features it popularized are abbreviation-based snippets,
automatic paired characters, and fuzzy finding by file name. All of these
features became staples in every popular text editor that followed. The
implementations of Scope Selectors and theming that TextMate pioneered have also
formed the basis for themes and syntax highlighting in every subsequent popular
text editor.

That’s already a lot to originate from a single app, but it still doesn’t even
include TextMate’s most significant innovation; the one that would go on to
re-shape text editors, solidify niche status for every text editor that came
before it, and pave the way for VS Code to become the most popular text editor
in history a decade later. TextMate’s most important innovation was that it was
the first popular text editor that was primarily built around extensions.

While TextMate popularized the concept of a text editor built around extensions,
in hindsight, it didn’t go far enough. TextMate’s extensions had limitations
that later text editors would thrive by removing.


2008: SUBLIME TEXT

Sublime Text, released in 2008, popularized the minimap and multiple cursors.
And unlike TextMate and BBEdit, it’s cross-platform, running on Linux, macOS,
and Windows, which helped it reach a wider audience than those editors. But
Sublime Text’s biggest impact was greatly expanding the capabilities of
extensions.

Sublime Text’s extensions run in an embedded Python runtime with an extensive
API, unlike TextMate, which uses the scripting languages built into macOS and,
rather than providing a proper extension API, mainly centers on processing
standard output.

Sublime Text greatly expanded what extensions could do, allowing more
sophisticated integrations such as linters that included GUI components. And
Package Control, the enormously popular package manager for Sublime Text built
by Will Bond6, features a centralized source for package management, reducing
the friction to browse, install, and update packages; a model that all
subsequent popular text editors would also adopt.

Even with Sublime Text’s expanded extensions, it still didn’t go far enough.
Package Control wasn’t built-in, and, while Sublime Text does have an API, its
use of Python with custom calls for GUI components still left room for future
text editors to make extensions more accessible to build.


2014: ATOM

Atom, released by GitHub in 2014, brings extensions to their final form. Atom’s
package manager is built in7 and displays extension READMEs complete with
inline images (early extensions made by GitHub themselves popularized the
convention of using animated GIFs to illustrate functionality), creating an
extension experience reminiscent of an app store.

Then there’s the matter of HTML and CSS8. Atom is built on Electron9, which
means the editor itself is written in JavaScript and runs on Node10. Compared to
Sublime Text’s Python API, HTML, CSS, and JavaScript are some of the most
widely known languages in existence, which greatly lowers the barrier of entry
for creating extensions.

Atom had essentially perfected the extension-based editor; there was just one
problem: it was slow. Performance complaints have plagued Atom since its
release, and the market ended up split with Sublime Text, which is lightning
fast by comparison.


2015: VISUAL STUDIO CODE

VS Code was released in 2015, based on the Monaco Editor, which Microsoft had
first released in 2013 as an editor that could be embedded into websites. When
GitHub released Electron along with Atom, Microsoft used it to create a desktop
version of the Monaco Editor called Visual Studio Code.

VS Code takes the same formula as Atom11—a local web-based text editor written
in Electron with an emphasis on extensions—and makes it more performant. VS Code
makes extensions even more visible by putting them in the sidebar, raising them
to the same level as file browsing, searching, source control, and debugging. VS
Code extensions can have rich user interfaces, being written in HTML, CSS, and
JavaScript, and with full access to Node, they can essentially do anything any
other application can do. And indeed, some extensions start to look like apps in
and of themselves.

With VS Code, the extension-based text editor has seemingly reached its final
form. Ever since TextMate, extensions have increased in prominence and
capabilities, and with VS Code, that progression appears to have culminated.
There just isn’t anywhere else to go. Correspondingly, there isn’t a way a new
text editor can leapfrog VS Code the same way previous text editors have been
leapfrogging each other by improving extensions.


PARADIGM TRANSCENDENCE

So far we’ve looked at VS Code’s popularity, and its extensions implementation,
as indicators of longevity. The third indicator we’ll look at is how VS Code has
moved beyond the confines of the desktop. The code-server project runs VS Code
as a regular web app, in other words, hosted on a server and accessed through
the browser. GitHub’s Codespaces also runs VS Code as a web app, this time by
spinning up an ad hoc development environment.

Transcending a paradigm, like going from a desktop app to a web app, is a great
indicator of longevity. For one, it means it’s more likely to be ported to more
paradigms in the future. It takes herculean effort to port to a new paradigm,
and expending that effort is a great indicator of value. Emacs and Vim were both
ported from the terminal to GUI applications; they were too valuable not to have
GUI versions. Photoshop and Excel both run on mobile12, and Illustrator is
coming soon. Excel also has a web version13, and there’s a streaming version of
Photoshop (although it’s been in closed beta for six years).

Not only has VS Code transcended the parameters of its initial implementation by
becoming a web app, it’s also become something of a standard. Version 1.0 of the
Theia IDE maintained by the Eclipse Foundation is a re-implementation of VS
Code. VS Code is now not only a text editor, but also a model of how a text
editor should behave.


COMPANY MANAGEMENT

TextMate is largely the work of one developer, Allan Odgaard; the same is true
of Sublime Text and Jon Skinner. Both of these applications eventually ran into
trouble with frustrated users over perceived slow release schedules.

Here’s the history of major releases for these two applications:

 * 2004: TextMate 1
 * 2008: Sublime Text 1
 * 2011: Sublime Text 2 Alpha
 * 2012: Sublime Text 2
 * 2012: TextMate 2 Alpha
 * 2013: Sublime Text 3 Beta
 * 2017: Sublime Text 3
 * 2019: TextMate 2

Here’s a graph of the number of years between stable major releases (contrasted
with the release dates for BBEdit 10–13 for comparison):



A couple things jump out from this graph immediately:

 1. TextMate 2 took a long time.
 2. Sublime Text has been consistent with their release schedule.

The complaints about Sublime Text seem to center around the gap between the
Sublime Text 3 Beta being announced in 2013 and released in 2017, and a
perceived lack of sufficient changes during that period. Sublime Text’s release
schedule is slow when compared to BBEdit’s, which has released three major
versions (11, 12, and 13) while Sublime Text 3 has been in beta. Although Coda
2 was released in 2012, and hasn’t been updated since, so it’s unclear whether
Sublime Text’s release schedule is really an anomaly for a commercial text
editor.

The current version of VS Code is 1.49, but VS Code is open source, so it plays
by different rules than commercial apps. Major versions exist at least partially
as an opportunity for companies to charge for upgrades.

Since VS Code is developed out in the open, we can evaluate its pace of
development directly by reviewing its commit history. VS Code’s commit graph on
GitHub tells a story of aggressive development, outpacing Atom and even other
large open-source projects like Facebook’s React (note that these graphs have
different scales on the Y-axis).


VISUAL STUDIO CODE COMMIT GRAPH




ATOM COMMIT GRAPH




REACT COMMIT GRAPH



Aggressive development pulls platforms away from the pack because the
combination of forward momentum, and third parties building on the platform, is
difficult to compete with14. This is the same combination that makes it so hard
for new entrants to compete with popular browsers or OSes.


CONCLUSION

The goal of this piece is to determine if VS Code is a good investment in
learning if you value longevity. An implication of the Text Editor as Platform
is that since TextMate’s introduction in 2004, the text editor with the most
momentum has changed every few years. These would be short reigns by any
standard, but they’re positively minuscule compared to apps like Excel and
Photoshop.
Learning a new text editor is a questionable investment if you expect something
new to come along every few years.

VS Code is giving indications that the era of short reigns for text editors is
over. It has the potential to maintain its position as the most popular text
editor for a much longer time, possibly for decades, if we use examples of
popular software in other categories as a guide. As we’ve outlined in this
piece, the case for this is the following:

 1. It’s crossed a popularity threshold that’s eluded other text editors by
    being used by over 50% of developers.
 2. It’s the final form of progression towards maximizing the power and
    visibility of extensions, making it immune to being leapfrogged by a new
    text editor with a more capable extension model.
 3. It’s moved beyond its origins as a desktop app: it’s also a web app, and
    it’s even become a model of how a text editor should behave.
 4. It’s managed by a company, so it’s unlikely to run into the development
    stagnation that’s plagued other text editors.

Before VS Code, I expected never to learn another general-purpose text editor
that wasn’t Emacs or Vim; it was just too risky. I’ve found a good way to make
predictions is to assume things will stay the same; with text editors, that
means expecting a new text editor to emerge every few years and gain most of
the momentum. Expecting anything else to happen requires extraordinary evidence.

I believe VS Code has presented extraordinary evidence. I’m hoping it moves into
the category with apps like Excel, Illustrator, Photoshop, software that has
held the most popular position in its category for decades. These applications
are reliable time investments that repay their cost of learning over the course
of a career. Emacs and Vim have always been like that, but it’s always good to
have more choice.

--------------------------------------------------------------------------------

 1.  If you think about it, the fact that the most popular text editor is newer
     than popular software in other categories is pretty strange, since text
     editing predates almost every other computer task. I think there are a
     couple of reasons for this. The first is that, on a technical level,
     writing a text editor is easier than other categories. While I don’t want
     to downplay the difficulty, text files are the lingua franca of computers,
     and nearly every major software development framework has at least some
     built-in support for them. Modern hardware also gives you a lot of
     performance headroom to develop a text editor that you don’t have if you’re
     developing, say, a video editor.
     
     The second reason is that it’s easier for users to switch text editors.
     While learning new complex software packages is always difficult, at least
     with text editors you can just open your existing projects with a new one
     and start editing them, since development projects are just made up of
     plain text files. There’s almost no other software category where this is
     true, most applications use proprietary formats that only their application
     can read. Another reason text editors tend to be easier to pick up is that
     it’s usually, but not always, easy to figure out the basics: How to enter
     and edit text. The basics are usually easier to figure out than, say, Adobe
     Illustrator, which is almost impossible to use without reading the manual.
     
     These factors combine to make text editors a particularly competitive
     market, and competition is effective in driving innovation. For my money,
     it’s made text editors the best software there is: They have the best
     balance of form and function of any software category. The closest
     competition are browsers and terminals, which also combine power and
     flexibility into an elegant package, but I give the edge to text editors,
     because browsers and terminals achieve their power by simply wrapping
     powerful concepts, a protocol and standard streams respectively. With text
     editors in contrast, the user interface is the application in a way that
     just isn’t true for those other types of applications. (This is also why
     browsers and terminals all feel roughly the same, while text editors are
     wonderfully diverse.) ↩︎

 2.  If longevity is my priority, then why not use Emacs or Vim? For Vim, the
     answer is easy, I do already use it. But I don’t like writing prose in
     Vim’s modal editing model, so I started seeking out a non-modal editor to
     complement Vim.
     
     I’ve also spent a fair amount of time with Emacs, but it started crashing
     for me with an issue similar to this one. The author of that post solved
     their problem by compiling Emacs locally to run it in lldb, which is
     farther than I was willing to go to solve my problem.
     
     Emacs has a difficult balancing act to walk: It’s incredibly customizable,
     but it’s also fragmented. For the Mac, there are several popular ports.
     And, macOS isn’t a high-priority platform for Emacs. There’s a history of
     blocking macOS-only features from Emacs, as well as removing features that
     are already working. All-in-all this makes Emacs a hard sell on macOS.
     Customizability and fragmentation aren’t a great combination to begin with,
     because customizations will often work in one version and not another. But
     when combined with relatively low market-share (4.5% in 2019), and being on
     a platform that’s a second-class citizen relative to GNU/Linux, it’s hard
     to recommend, despite its strong points. ↩︎

 3.  Unfortunately, Stack Overflow removed the questions about developer tools
     like text editors from the 2020 survey. ↩︎

 4.  The progression of text editors becoming a platform is adapted from a
     previous post, which is in turn adapted from a Twitter thread. ↩︎

 5.  Emacs does include cua-mode, which when turned on, defines C-x, C-c, C-v,
     and C-z as “cut (kill), copy, paste (yank), and undo respectively”. (The
     name cua-mode is a bit of a misnomer because IBM Common User Access never
     used these key bindings.) ↩︎

 6.  Will Bond was hired by Sublime HQ in 2016. ↩︎

 7.  TextMate 2, released in December 2011, also had the first built-in
     extension browser in a popular text editor. ↩︎

 8.  Light Table, released in 2012, is another important milestone in the
     web-based text editor journey. Light Table uses NW.js (formerly
     node-webkit), a predecessor to Electron, and it had an integrated package
     manager—foreshadowing the same combination that Atom would use a couple of
     years later.
     
     What’s most interesting about Light Table is that it focused on innovative new
     features first, like watching variables change as code executes, evaluating
     code inline, and embedding running projects in the editor itself (some of
     these features inspired by Bret Victor’s seminal “Inventing on Principle”
     talk). These are features that even now, eight years later, have been slow
     to make it into the text editors that followed.
     
     Light Table was about new features that weren’t available anywhere else,
     whereas Atom, its closest successor in taking a web-based approach, was
     about incremental improvements over previous text editors. Atom’s headline
     feature was being web-based; Light Table’s was doing things no text editor
     had done before. ↩︎

 9.  Electron was originally called “Atom Shell”. ↩︎

 10. Atom was originally written in CoffeeScript. ↩︎

 11. VS Code is less “hackable” than other text editors. For example, it doesn’t
     have an “init” file in the tradition of .emacs.d and .vimrc (Atom does have
     one). This makes VS Code harder to customize, since the only way to do it
     is to create an extension. ↩︎

 12. Presumably, VS Code would already exist on iOS were it technically feasible
     to do so, since it’s open source and so popular. It makes an interesting
     case study for the future of iPadOS as a platform, because if it’s not
     technically possible to port VS Code to iPadOS, then, as VS Code becomes
     ubiquitous, that increasingly becomes synonymous with iPadOS not supporting
     programming at all.
     
     The point is probably moot, because a native iOS version of VS Code would
     probably work with the same client-server model described in Paradigm
     Transcendence. But it’s still an interesting thought experiment, because I
     often see the prediction that iPadOS will disrupt the industry from the
     bottom (https://en.wikipedia.org/wiki/Disruptive_innovation). I wonder how
     that can happen when a platform puts up so many technical barriers to
     creating a text editor. ↩︎

 13. Another nice thing about having a web version is that web apps don’t have
     to abide by the App Store Review Guidelines, so applications prohibited by
     Apple can still exist on the platform. ↩︎

 14. As I’m fond of saying, if you’re looking for areas that will be disrupted
     by new technology, look for areas of neglect. ↩︎

--------------------------------------------------------------------------------


FRIDAY, JUN 26, 2020


REMEMBERING THE O'REILLY MAC OS X INNOVATORS CONTEST

From 2003 to 2004, O’Reilly Media ran the O’Reilly Mac OS X Innovators Contest,
sponsored by Apple via the Apple Developer Connection (now just Apple
Developer). I still think of these winners as some of the high watermarks of app
innovation. Many concepts we take for granted today were either introduced, or
popularized, by apps on this list. Here are a few of my favorite examples:

 * NetNewsWire, while not necessarily the first RSS feed reader, was one of the
   most popular early ones. RSS feed readers are the origin of consuming media
   through streams of content, now exemplified by Twitter and Facebook’s
   Newsfeed.
 * SubEthaEdit was one of the earliest practical implementations of multiple
   simultaneous live document editing, a concept later brought to a much wider
   audience by Google Docs in 2006.
 * LaunchBar popularized many features we take for granted in search user
   interfaces today, such as seeing search results live as you type, fuzzy
   string matching, and combining search results of various types, such as apps,
   bookmarks, contacts, events, and files all into one unified interface.

I’ve listed the winners below, and linked to all the ones that are still
maintained, so you can see just how many of these apps are still around. All of
these apps are over fifteen years old now.


2003 ROUND ONE WINNERS

 * Winner: NetNewsWire
 * Runner-up: Spring


2003 SECOND ROUND WINNERS

 * First Place, International Category: Hydra (now SubEthaEdit)
 * Second Place, International Category: LaunchBar
 * First Place, US Category: VoodooPad
 * Second Place, US Category: Audio Hijack Pro


2003 O’REILLY MAC OS X CONFERENCE WINNERS

 * First Place, US Category: OmniOutliner
 * Second Place, US Category: iBlog
 * First Place, International Category: iStopMotion
 * Second Place, International Category: ACSLogo
 * Honorable Mention: F-Script


2004 WINNERS

 * First Place, U.S.: Delicious Library
 * First Place, International: FotoMagico
 * Second Place, U.S.: Curio
 * Second Place, International: iDive
 * Honorable Mention, U.S.: Nicecast
 * Honorable Mention, International: Process

--------------------------------------------------------------------------------


THURSDAY, JUN 25, 2020


MACOS BIG SUR: HAS THE DUST FINALLY SETTLED ON LIMITING THIRD-PARTY APPS?

Apple’s strategy for years has been to trade desktop power for cross-device
feature parity. As expected, macOS Big Sur continues this trend, emphasizing a
consistent user interface across devices, and focusing on cross-device
development technologies like SwiftUI and Catalyst.

Personally, I wish Apple had different priorities. I’d like to see more apps
like Sketch, an industry-leading creative app that’s built top to bottom on
Apple technologies. But Sketch was released in 2010, and Apple hasn’t created
any new frameworks like Core Graphics and Core Image that support these kinds of
apps in over a decade. So I wasn’t holding my breath for them to announce
anything new for these kinds of apps at WWDC this year.

Since Apple isn’t prioritizing powerful desktop apps with their own
technologies, that means supporting these use cases mostly falls on third
parties. This is where companies like Adobe, Avid, Maxon, and Microsoft come in.
While Apple’s priorities regarding their own technologies have been clear for
a while now, what hasn’t been clear is their priorities for third-party apps,
in particular, ones that aren’t sandboxed. The trend for the last few years has
been making it harder to develop these kinds of apps for macOS. AEpocalypse
(2018), Catalina’s security features (2019), and Notarization (2018) are all
examples of this trend.

The overarching reason behind the changes that make developing these kinds of
apps harder is “Security”. And unlike cross-device feature parity, it’s unclear
exactly where this all ends. After all, the most secure system is the one that
doesn’t run any software at all. That’s why it’s such a pleasant surprise that,
as far as I can tell, Apple has done everything it can to make Big Sur, and the
accompanying transition to Apple Silicon, as seamless as possible, even for
apps that aren’t sandboxed.

Some were predicting that Macs on Apple Silicon wouldn’t even run apps from
outside of the Mac App Store; that didn’t happen. It seemed more likely that
Apple would drop OpenCL and OpenGL, but those are coming along for the ride. No
details were known about whether there would be an emulation layer like the
original Rosetta from the 2006 Intel transition; Apple appears to have gone
above and beyond with Rosetta 2, which even supports plug-ins like VSTis,
giving powerful modular software lots of options for migration paths.

I’m still frustrated that there probably won’t be another Sketch for the
foreseeable future, but that ship sailed a long time ago. And no other platform
has a Sketch either, an industry-defining app that’s a platform exclusive; so
while Apple has lost a unique advantage that only they had, they haven’t lost
anything that other platforms already have. Other platforms can run powerful
modular software that’s distributed outside the Mac App Store, but today, so can
new Macs running Big Sur. Here’s to hoping that the dust has settled, and the
last of the restrictions on third-party apps are behind us now.

--------------------------------------------------------------------------------


WEDNESDAY, JUN 17, 2020


THE ENDURING INFLUENCE OF TEXTMATE

TextMate is arguably the most influential text editor ever made. Among the
numerous features it popularized are:

 1. Abbreviation-Based Snippets
 2. Auto-Paired Characters
 3. Fuzzy-Finding by Filename

These features have all since been built into every subsequent popular text
editor. In addition, the scope selectors and theming implementation that
TextMate pioneered form the basis of themes, syntax highlighting, and scoping
language-specific functionality in every subsequent popular text editor.
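
Of those features, fuzzy-finding by filename is simple enough to sketch. A
minimal, hypothetical illustration in Python (this is not TextMate’s actual
algorithm, which also ranks results; it’s just a subsequence match showing the
core idea):

```python
def fuzzy_match(query: str, filename: str) -> bool:
    """Return True if the characters of `query` appear, in order, in `filename`."""
    remaining = iter(filename.lower())
    # `ch in remaining` advances the iterator, so each query character
    # must be found after the previous one.
    return all(ch in remaining for ch in query.lower())

fuzzy_match("mdv", "MainDocumentView.swift")  # True: matches M, then D, then V
fuzzy_match("xyz", "main.c")                  # False
```

Real implementations add scoring on top of this (preferring matches at word
boundaries, for example), which is where most of the engineering effort goes.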

That’s already an impressive list of features to all stem from a single app,
especially one that entered an already mature category, and we haven’t even
gotten to TextMate’s greatest influence: TextMate was the first popular text
editor to be built around extensions. It’s this change that would reshape the
text editor market forever, solidify the niche status of every text editor
that came before it, and pave the way for the eventual dominance of Visual
Studio Code more than a decade later. Based on how subsequent text editors have
all been built around TextMate’s use of extensions, it’s fair to call Sublime
Text, Atom, and VS Code “TextMate-likes”.

--------------------------------------------------------------------------------


THURSDAY, JUN 11, 2020


ON THE MERE SUGGESTION THAT ARM MACS MIGHT ONLY RUN SANDBOXED APPS

Gus Mueller has my favorite take on the news that Apple plans to announce the
transition to their own ARM chips at WWDC. Of the various predictions people are
throwing around, the only one Mueller gives any credence to is the prospect that
ARM Macs will only run sandboxed apps1:

> Assertion: ARM Macs will only allow sandboxed app.

> This could happen. I give it a 50/50 shot at happening. Personally, I hope it
> doesn’t happen as there are still many problems with the sandbox on MacOS that
> have yet to be resolved, even though developers have been complaining about it
> for years.

Personally, I do not think that this is going to happen. In my overview of
creative apps, I was stunned at how ineffectual sandboxed software has been. I
keep a mental list of applications that aren’t sandboxed that the Mac
absolutely cannot afford to lose; here’s the list2:

 * Ableton Live
 * Adobe After Effects
 * Adobe Lightroom Classic
 * Adobe Photoshop
 * Adobe Premiere
 * Blender3
 * Cinema 4D
 * Sketch
 * Visual Studio Code

If the Mac were to lose everything on that list, then it would lose the
following groups:

 * 3D Artists
 * Designers
 * Music Producers
 * Motion Graphics Artists
 * Photographers
 * Video Editors

For creative professionals, essentially the only remaining users would be the
tiny subset of developers, music producers, and video editors that work
exclusively in Xcode, Logic Pro, and Final Cut Pro respectively. A Mac cut down
to just those users is a dead platform.

--------------------------------------------------------------------------------

 1. Mysterious (former?) AppKit engineer (and fish shell maintainer) ridiculous
    fish also shares this worry. ↩︎

 2.  This is a short list. There are many more applications I suspect should be
     on it, but these are all applications where I’m familiar enough with both
     the application itself, and more importantly, the community around it, to
     know that people would rather move to another platform than lose the
     application. ↩︎

 3. We are in danger of losing one application from this list either way:
    Blender. As Mueller states:
    
    > Assertion: OpenGL is going away on ARM for MacOS.
    
    > Yea, this is totally happening. OpenGL and OpenCL have been deprecated for
    > a while now in favor of Metal. Apple will use this opportunity to drop
    > them.
    
    My understanding is that this means Blender would have to support Metal in
    order to run on new ARM Macs. I think there’s enough momentum in the
    Blender community to support Metal, if the alternative is to lose the Mac
    as a platform, but I’ll be worried about this until that actually
    happens. ↩︎

--------------------------------------------------------------------------------


MONDAY, APR 27, 2020


SOFTWARE TO DIE FOR

Before I switched to being a full-time developer in 2010, I worked as a
user-interface designer for seven years. Something that always bothered me
during that time is that so much of what I was learning was just how to use
Photoshop really well. After I switched to development, I was hesitant to ever
invest that much in just learning a big software package again. “What if I
choose wrong? And waste all those years of learning by switching to another
software package?” I asked myself. Recently, I’ve re-evaluated that decision,
based on my analysis of the market share of major creative applications. It
turns out if I’d just chosen which software I want to learn ten years ago, for
most categories, it would still be the same today. For some categories, it would
still be the same if I’d chosen twenty years ago, and it’s often the first
software that was ever introduced to solve a problem, even if that application
is over 30 years ago, that’s still the best choice today. So it turns out I was
overcorrecting relative to the risk in learning big complex packages, so now I’m
investing in doing it again.

This is the list of software I’ve chosen to learn:

 * 3D Computer Graphics: Blender, Cinema 4D
 * Digital Audio Workstation: Ableton Live, Logic Pro
 * Motion Graphics: After Effects
 * Non-Linear Editing System: Final Cut Pro, Premiere Pro
 * Raster Graphics Editor: Photoshop
 * Spreadsheet: Excel
 * Text Editor: Vim, Visual Studio Code
 * Vector Graphics Editor: Illustrator

Some of these I already know quite well (Vim, Photoshop), and some I’ve barely
touched (Premiere Pro, Final Cut Pro). I’m not happy with the duplication, and
frankly, this is probably just too much for one person. Learning any one of
these applications is a lifetime of work, let alone all of them. But I can’t
decide what to cut, so here we are.

--------------------------------------------------------------------------------


TUESDAY, APR 14, 2020


PRODUCTIONS FOR ADOBE PREMIERE PRO

Van Bedient writing for the Adobe Blog:

> The new Productions feature set for Premiere Pro was designed from the ground
> up with input from top filmmakers and Hollywood editorial teams. Early
> versions of the underlying technology were battle-tested on recent films such
> as “Terminator: Dark Fate” and “Dolemite is My Name.” Special builds of
> Premiere Pro with Productions are being used now in editorial on films like
> David Fincher’s “MANK.”

Notice that the approach Adobe is taking is exactly what Adam Lisagor complained
about Apple not taking for Final Cut Pro X in 2011:

> When Apple pushed FCP to the industry pros five or six years ago, they did
> some hardcore outreach. They brought out Walter Murch, for God’s sake. The man
> cut Cold Mountain on it for God’s sake.


VERSUS ADOBE TEAM PROJECTS

Adobe already has an existing feature for video collaboration called Adobe Team
Projects, that’s designed around a cloud workflow, Bedient describes the
difference:

> Productions is designed for collaborators working on shared local storage.
> Team Projects is built for remote collaboration: assets can be stored locally
> with individual users; project files are securely stored in Creative Cloud.
> The two toolsets are distinct and currently cannot be combined. Productions is
> part of Premiere Pro and is included with all licenses. Team Projects is part
> of Team and Enterprise licenses for Premiere Pro and After Effects. In order
> to support users working from home due to COVID-19, Adobe is making Team
> Projects available to all users from April 14 through August 17, 2020. See
> this post for more information.


COLLABORATION & THE FUTURE OF CREATIVE APPS

Collaboration is the word of the day, and it’s great to see Adobe taking it
seriously, especially with a vision that isn’t web-based. The web is the
platform that makes collaboration the most straightforward: the easiest way to
share something is through a URL. But a web app isn’t necessarily the best
trade-off for every use case.

The question at the heart of Figma’s success is whether its winning the
user-interface design market means web apps are also destined to win other
creative markets that have otherwise been the stalwarts of native desktop apps.
But another way of looking at it is that the native app Figma is competing
with, Sketch, was doubly harmed by Apple’s policies:

 1. It’s built on AppKit, which Apple was aggressively enhancing in the 2000s
    for high-end desktop apps, but since the 2010s Apple has switched to
    focusing on frameworks that benefit iOS. This essentially means the
    platform Sketch is built on has spent a decade stagnating.
 2. As software for professional creative users, Sketch isn’t compatible with
    the Mac App Store, which means it’s been denied access to the most
    important promotion channel on Apple platforms.

Figma may have only been able to succeed because Sketch was built on a platform
that’s at best indifferent to their use case, and at worst actively hostile to
it. That’s a big difference compared to the cross-platform native desktop apps
that are still relevant: Microsoft will fight tooth and nail to make sure
Microsoft Office stays relevant, and Adobe will do the same with Adobe Creative
Cloud.

--------------------------------------------------------------------------------


MONDAY, APR 13, 2020


IS VISUAL STUDIO CODE IRREPLACEABLE?

I keep thinking about this post about git being irreplaceable with respect to
Visual Studio Code. I don’t see how another text editor will ever be able to
compete with VS Code. Here’s how we got here:

 * Pre-2004: BBEdit, Emacs, and Vim are all great text editors in their own
   right, but all have idiosyncrasies that, while beloved by people like me,
   prevent them from ever being the most popular text editor.
 * 2004: TextMate is released, with its focus on packages to extend the editor
   and add support for different programming languages, but its API is still too
   limited to truly be a platform.
 * 2008: Sublime Text is released, with a more sophisticated API, facilitating
   more powerful packages, but package management is still an afterthought.
 * 2014: Atom is released, bringing package management to the forefront, but
   Atom has performance problems.
 * 2015: Visual Studio Code is released, keeping packages front and center while
   also solving Atom’s performance issues. As far as I can tell, this
   essentially represents the final form for (mainstream) text editors.

I don’t see how VS Code can ever be contested again, unless its team makes a
grave strategic error. We’ve seen over and over again that once a platform
takes off, its momentum creates a moat that’s almost impossible for challengers
to overcome.

--------------------------------------------------------------------------------

This post was adapted from a Twitter thread I posted.

--------------------------------------------------------------------------------


FRIDAY, APR 10, 2020


VISUAL STUDIO CODE'S NEW 'CUSTOM EDITOR API'

An interesting new API in Visual Studio Code that allows replacing the entire
text editing engine. The announcement includes a list of use cases:

>  * Previewing assets, such as shaders or 3D models, directly in VS Code.
>  * Creating WYSIWYG editors for languages such as Markdown or XAML.
>  * Offering alternative visual renderings for data files such as CSV or JSON
>    or XML.
>  * Building fully customizable editing experiences for text files.

It will be fascinating to see how far the text-editor-as-platform gets pushed.

--------------------------------------------------------------------------------


THURSDAY, APR 9, 2020


ZAPPY

Zapier releases a new tool for annotating screenshots called Zappy, in the
tradition of Skitch, and, my favorite, the unfortunately discontinued Napkin.

I’m fascinated by this problem; it seems so simple on the surface: just share
what I’m looking at on the screen and be able to mark it up. But this problem
is wonderfully devious in ways that emphasize our expectations when using
computers. For example, once something is digital, we expect it to be perfect.
A poorly-drawn digital line is somehow worse than the same line drawn on a
whiteboard.

Consider editing text, one of the first things a new computer user learns:
almost immediately, they can produce pixel-perfect text every time. The
comparable skill for just editing a line is a Bézier curve tool like the one
found in Adobe Illustrator, which is extraordinarily difficult to master by
comparison.

Sharing an annotated screenshot brings out two truths about computers:

 1. We expect digital artifacts to be perfect.
 2. Editing anything graphical with a computer is very hard to master.

--------------------------------------------------------------------------------


WEDNESDAY, APR 8, 2020


IPAD MAIN MENU

Alexander Käßner presents his iPad Main Menu concept. The menu concept itself is
great, but what I’m really blown away by is the presentation, which is agency
quality but was presumably created by one person. Käßner explained on Twitter
that he created the demo video with Drama, Rotato, and Motion.

--------------------------------------------------------------------------------


TUESDAY, APR 7, 2020


THEIA: CLOUD & DESKTOP IDE PLATFORM

Theia is a new IDE “designed from the ground [sic] to run on Desktop and Cloud”
released by The Eclipse Foundation:

> Eclipse Theia is an extensible platform to develop multi-language Cloud &
> Desktop IDEs with state-of-the-art web technologies.

The screenshots look just like Visual Studio Code, and it supports VS Code
extensions:

> We believe VS Code is an excellent product. That is why Theia embraces many of
> the design decisions and even directly supports VS Code extensions.

As for distinctions from VS Code: as previously mentioned, it’s designed as
both a desktop and cloud IDE, and there are some details about what exactly
that means in the Architecture Overview:

> The frontend process represents the client and renders the UI. In the browser,
> it simply runs in the rendering loop, while in Electron it runs in an Electron
> Window, which basically is a browser with additional Electron and Node.js
> APIs. Therefore, any frontend code may assume browser as a platform but not
> Node.js.

I’m curious what benefits this provides in practice over using VS Code with
the open-source code-server.

Another major component is the Open VSX Registry, which is described in a DEV
Community post as “an open-source implementation of a VS Code extension
registry that we have developed under the umbrella of the Eclipse Foundation”,
with the rationale being:

> Unfortunately, Microsoft prohibits non-Visual Studio products from installing
> any binaries downloaded from their marketplace (see terms).

--------------------------------------------------------------------------------


MONDAY, APR 6, 2020


THE ORIGINS OF ACORN

Gus Mueller of Flying Meat Software on how the image editor Acorn emerged
organically from working on his annotation tool, FlySketch:

> Acorn grew out of upgrades to another app of mine, FlySketch, designed for
> screen capture and sketching. Customers asked for new features, I started
> adding brushes and layers and multiple windows, and all of a sudden I had a
> full-blown image editor. But Acorn still serves different needs than
> professional-level editors. Acorn is powerful, but nimble and approachable. It
> also has excellent documentation that we’ve worked hard on for years.

--------------------------------------------------------------------------------


FRIDAY, APR 3, 2020


A TALE OF TWO COMPANIES

On the day after The Omni Group had layoffs, web collaboration startup Notion
announced they’ve raised a $50 million Series A round, as Erin Griffith reports
for The New York Times. Per the article, Notion is also already profitable1.

--------------------------------------------------------------------------------

 1. Notably, Whimsical, another web startup that competes directly with one of
    The Omni Group’s products, OmniGraffle, announced they’ve reached
    profitability as of May last year. Coincidentally, Whimsical also
    integrates nicely with Notion. ↩︎

--------------------------------------------------------------------------------


THURSDAY, APR 2, 2020


THE NEW WAVE IS CLIENT-SIDE WEB APPS

An interesting characteristic of the new wave of web apps is that they run
client side. Figma, for example, runs C++ code compiled to WebAssembly in the
browser, and the email startup Superhuman also runs client side, as Sarah
Perez writes for TechCrunch1:

> Unlike most browser-based email, which is server-based, Superhuman can store
> and index gigabytes of email in the web browser itself. This is possible by
> leveraging today’s more powerful APIs in the browser, along with the faster
> CPUs and hard drives on our computers.

--------------------------------------------------------------------------------

 1. Islam Sharabash has a post on Superhuman’s blog that goes into more
    technical details about how Superhuman works. ↩︎

--------------------------------------------------------------------------------


TUESDAY, MAR 31, 2020


THE OMNI GROUP ROADMAP 2020: AUTOMATION

The Omni Group’s 2020 roadmap has a great section on automation that’s worth
reading in its entirety. It covers both their future plans for their own apps,
and a declaration of why automation is important in general:

> By providing automation technology in our apps, we make it possible for
> customers to extend our apps’ capabilities. People can build customized
> solutions to meet their own needs—and then share those solutions with others.
> We had thousands of customers using Kinkless GTD in OmniOutliner, even though
> most of those customers didn’t know AppleScript. I’ve been told that one of
> JTech Communications’ most popular blog posts was for a script for OmniGraffle
> which counts items on a canvas. With automation, people are able to create
> their own keyboard shortcuts to quickly perform actions like creating a task
> calendar from OmniFocus, or exporting Markdown from OmniOutliner—and those
> solutions can often be shared with others, making everyone’s lives easier.

The Omni Group is apparently going through some hard times; bless them for
taking automation so seriously in their apps.
