
INFINITE UNDO!

Data-Mining In The Git Log • Falsehoods About Time


--------------------------------------------------------------------------------


 
 

Software As Narrative

How-To Articles

Devops Reading List

--------------------------------------------------------------------------------

CC Sharealike © 2020 by Noah Sussman

Jan
21st
Tue


UNIT TESTING IS NOT A CRIME STICKERS NOW AVAILABLE

Take me to where I can buy the sticker!

> Unit Testing Is Not A Crime laptop stickers now available on @redbubble.
> 
> A lot of people ask me for these but I have always batch-printed them before
> which meant I couldn’t ship them to random people on the Internet… until now
> 🌈🦄https://t.co/C71wV1Hhor

Aug
24th
Sat


HOW TO SEE THE CONNECTIONS BETWEEN TESTING, OBSERVABILITY AND DEVOPS (TOAD 🐸)

I think that if the mental model one is using is like the New View of Systems
Safety, then the connections between the disciplines in TOAD (Testing,
Observability And Devops 🐸) naturally become obvious as you work more and more
with software systems.

Conversely, for people rooted in the traditional / Cartesian view of software
systems, the connections are pretty much impossible to explain.


FOR INSTANCE

If I think that the history of a software system is composed of discrete, finite
frames of time that can be reconstructed after the fact, then good fucking luck
explaining to me why I need an observability framework that lets me respond
quickly to unknowns.

Because if I think history is composed of discrete, comprehensible frames that
can be reconstructed after the fact, then I have a lot to learn from
reconstructing that history, and your observability and prediction techniques
are nice but not necessary.


HOWEVER

If I don’t accept that history is composed of discrete, comprehensible events,
then I have very little to learn from history, and the o11y framework and the
culture of resiliency make sense. This is, by the way, what is being obliquely
gotten at in the Clay Shirky quote:

> Process is an embedded reaction to prior stupidity.

It means people with a lot of process are people with the old / Cartesian view
of system safety. They have a lot of process because they think they can predict
what sort of events will be important in the future.

Since formal process is reactive, it follows that in order to have a lot of
formal process, we have to have a lot of events we can predict with great
specificity.

People who don’t operate mainly off of prediction just don’t think that way. We
don’t come up with taxonomies of past failures and try to derive ontologies that
address them.

May
29th
Wed


FEEDBACK ENGAGEMENT

Feedback engagement is a metric that describes how often and where developers
engage with feedback from the CI system. Engagement rates can be calculated
using all of the standard engagement measurement tools from production-facing
systems: email open rate, the frequency with which developers respond to Slack
announcements, and, most obviously, the rate at which failing tests are fixed or
ignored.
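
Measured concretely, that last rate is just responses divided by notifications,
within some time window. Below is a minimal sketch of the arithmetic in Python;
the FailureNotification record and its field names are hypothetical stand-ins
for whatever your CI system actually exports.

    # Minimal sketch (assumed data shape, not a real CI API): feedback
    # engagement as the share of failing-build notifications that got a
    # developer response -- a fix or an explicit ignore -- within a window.
    from dataclasses import dataclass
    from datetime import timedelta
    from typing import Iterable, Optional

    @dataclass
    class FailureNotification:
        build_id: str
        time_to_response: Optional[timedelta]  # None: nobody ever responded

    def engagement_rate(notifications: Iterable[FailureNotification],
                        window: timedelta = timedelta(days=1)) -> float:
        """Fraction of failure notifications answered within `window`."""
        notifications = list(notifications)
        if not notifications:
            return 0.0
        engaged = sum(1 for n in notifications
                      if n.time_to_response is not None
                      and n.time_to_response <= window)
        return engaged / len(notifications)

The same shape works for the other signals: swap time-to-response on a failing
build for time-to-open on an email or time-to-reply on a Slack announcement.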

May
28th
Tue


PHENOTYPIC CONFORMANCE ANALYSIS

Phenotypic conformance analysis describes the practice of establishing a static
analysis ruleset based on the observed properties of the codebase rather than on
an existing open source standard or a ruleset voted upon by the team.

The initial feedback loop then revolves around how closely new changesets
resemble the code that is already in the codebase. This gets at the important
properties that static analysis speaks to (consistency and the absence of syntax
errors) without resorting to a normative standard that risks a bad fit with
existing practices and can lead to bikeshedding.
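
As a toy illustration of what deriving a ruleset from observed properties could
look like, here is a sketch that measures which conventions already dominate a
Python codebase and emits those as the rules to check new changesets against.
The heuristics and the output keys are invented for this example and are not
taken from any particular linter.

    # Toy sketch of phenotypic conformance: the ruleset comes from what the
    # codebase already does (dominant indent width, dominant quote style)
    # rather than from an external standard. Heuristics are illustrative only.
    import re
    from collections import Counter
    from pathlib import Path

    def observed_conventions(root: str) -> dict:
        indents = Counter()
        quotes = Counter()
        for path in Path(root).rglob("*.py"):
            for line in path.read_text(errors="ignore").splitlines():
                m = re.match(r"^( +)\S", line)
                if m:
                    indents[len(m.group(1))] += 1
                quotes["single"] += line.count("'")
                quotes["double"] += line.count('"')
        return {
            "indent_width": indents.most_common(1)[0][0] if indents else 4,
            "quote_style": quotes.most_common(1)[0][0] if quotes else "double",
        }

A new changeset then conforms to the phenotype to the degree that it matches
whatever observed_conventions reports, and the ruleset can be regenerated as the
codebase itself drifts.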

Nov
5th
Mon


HOW DO YOU KNOW WHAT YOU KNOW?

> Testing is just applied epistemology. — Brett Pettichord


STUFF YOU CAN TEST

A diagram.

Jul
2nd
Mon


HOW CHANGE WORKS IN LARGE ORGANIZATIONS

The Kübler-Ross change curve and the Six Phases Of A Project are two time-tested
ways of visualizing how organizations cope with change!

Here the Kübler-Ross curve and the Six Phases are together for the first time!

I hope this infographic helps you achieve every initiative in your portfolio!

May
12th
Sat


STOCHASTIC METHODOLOGY


TEACH A NEURAL NET TO PLAY PLANNING POKER WITH ITSELF

Sep
3rd
Sun


SOFTWARE ENGINEERING AS HYPOTHESIS INVALIDATION



--------------------------------------------------------------------------------

If testing software and writing code feel very different to you, it’s only
because you haven’t written enough code yet. That is only my own admittedly
controversial opinion. I believe that everything we call “software testing” is a
subset of the activity we call “programming.”

Implementation is a test of a hypothesis. To implement a pattern in code one
must first form a narrative, or if you prefer, a hypothesis. Implementation is
itself a test of whether the narrative holds up.

Corollary: Consider that the full specification for a program and the program
itself are the same thing. This implies you can’t design computer programs by
up-front, complete specification. You are constrained by “the laws of nature” to
begin with an incomplete hypothesis and proceed by testing the implementation of
said hypothesis, using the results of that test to decide how to go about
modifying either the hypothesis or the implementation or both, then repeating
that process.

At all levels of the stack, it is always the case that complete specification is
functionally impossible since such a thing could only exist in the form of a
complete implementation. The largest Web application and the derpiest hello
world program both have this quality: that they cannot be implemented by
complete specification but must be built iteratively via (in)validation of
hypotheses.

For a much more thorough exploration of this idea you can read or refer back to
Programming As Theory Building by Peter Naur (1985).

Aug
4th
Fri
Software is narrative | Hacker News

Thanks everyone for upvoting my post on HN!

It’s always exciting to see my work on the front page of Hacker News :)


FULL LIST OF SOFTWARE AS NARRATIVE POSTS.

Jun
25th
Sun


SOFTWARE ROT

The software development life cycle is predictable in that any long-lived
product will eventually outgrow some of its subsystems. For instance, a Web
service that begins life with a single monolithic database server will, as it
scales, need the capacity increase that comes from a distributed database. A
historical example can be found in Twitter’s original dependence on the
ActiveRecord ORM, which over time was replaced with a variety of databases and
services.


FOR HISTORICAL REASONS

In one sense this might be considered a canonical definition of legacy systems:
the system contains subsystems that are not optimally suited to day-to-day
functioning, despite the fact that at some point in the past those same
subsystems did function optimally. There is a concept of Software Rot or Bit Rot
that metaphorically encapsulates the life cycle phases that precede this sort of
legacy system.


FRICTIONLESS YET IT STILL WEARS OUT

The observed course of the software development life cycle is that features
begin life in a “working” state — meaning that they satisfy the requirements
agreed upon by an empowered group of stakeholders — but that inevitably the same
features begin to exhibit bugs that are not in any way related to changes to the
code or the hosting environment. Software rot (as this phenomenon has come to be
called) occurs because the requirements for features continue to change and
evolve even after those features are in the hands of their users.

> the system contains subsystems that are not optimally suited to day-to-day
> functioning

This is not a well-understood area of software production, nor does Computer
Science have much to contribute by way of solutions. The problems are not
algorithmic but environmental, social, aesthetic — in other words it is what
programmers like to call a squishy problem because so much of the problem space
is taken up not by software but by humans and their co-collaborators.


IN PROGRAMMING, SOFT SKILL IS HARD

The idea of engaging with squishy problems is uncomfortable to a lot of
programmers. I think this is because programmers currently do not have an
opportunity to learn the heuristics that would allow them to distinguish good
solutions from bad when it comes to human-and-social issues.

Taking software rot as an example: it is a well-known phenomenon, long
documented in the literature of software. Yet it has no commonly agreed-upon
solution. Its management is not a topic of discussion in job interviews nor in
performance evaluations (for the most part). The countermeasures for software
rot are not listed in general programming books nor taught in coding boot camps.

I do not believe that software rot is ignored as a topic because no one
recognizes its importance. I’m pretty sure it’s ignored because no one feels
comfortable giving advice about it, because almost no one has successfully dealt
with the long-term requirements-changes and subsystem upgrades that go with
solving software rot.

The knowledge about how this problem has been successfully dealt with is locked
away in a couple of books. And those books are old, using server-side
programming and Java as their example environment. That’s a hard sell to a
junior engineer fresh out of a boot camp or undergrad program.

The recent movement toward systems thinking in software is a hopeful sign. But
we need modern discourse that concerns how to deal with changing requirements
over time. And so far such discourse hasn’t been forthcoming despite all the
growth and hype about code over the last ten years.


Jun
18th
Sun


SUBOPTIMIZATION IS THE REASON FOR TECHNICAL DEBT. BUT I HAVE TO GO OUTSIDE OF
PROGRAMMING WORLD TO FIND DISCUSSION OF IT.

> Suboptimization is THE reason for technical debt. But I have to go outside
> #programming to find discussion of it. https://t.co/cgTN57MrT4
> pic.twitter.com/SKjYKUVV4O
> 
> — Tentacular Devops 🐙 (@noahsussman)
> June 18, 2017


THE UNCOMFORTABLE TRUTH IS THAT DEV IS A COMPLEX RELATIONSHIP WITH AN
INCREASINGLY INTELLIGENT OTHER.

> The uncomfortable truth is that dev is a complex relationship with an
> increasingly intelligent other.
> 
> — Tentacular Devops 🐙 (@noahsussman)
> June 28, 2015

Jun
14th
Wed


An illustration of the “funnel” for FOSS developer engagement.

Jun
12th
Mon


MILLER IS LIKE JQ FOR CSV AND OTHER TABULAR DATA



--------------------------------------------------------------------------------

> Miller is like awk, sed, cut, join, and sort for name-indexed data such as
> CSV, TSV, and tabular JSON.


HERE’S HOW I USE MILLER TO PIPE CSV DATA INTO JQ

jq is currently my tool of choice when it comes to processing all sorts of data.
Except XML and CSV. XML is pretty well handled by xmlstarlet, but I have never
found a CSV parsing tool that I liked. I just make do with using jq to work with
CSV, and to be honest I find the results I can achieve to be substandard.

Anyway, Miller solves all that, and it is easy to install even if you don’t use
Homebrew and need to compile it from source.

mlr --c2j cat my_file.csv | jq .


It is that easy! Now my CSV is structured as JSON, which I have spent a lot of
time learning to enjoy working with in jq.

 


