
QUESTIONABLE SERVICES

Technical writings about computing infrastructure, HTTP & security.

(by Matt Silverlock)


FROM THE ARCHIVES

 * A Guide To Writing Logging Middleware in Go
 * Admission Control: A helpful micro-framework for Kubernetes
 * Building Go Projects on CircleCI

--------------------------------------------------------------------------------


A GUIDE TO WRITING LOGGING MIDDLEWARE IN GO


This is an opinionated guide on how to write extensible logging middleware for
Go web services.

I’ve had a number of requests to add a built-in logger to gorilla/mux and to
extend what is logged by gorilla/handlers, and they’re hard to triage. The asks
often conflict: “what” to log, how much to log, and which logging library to
use are not agreed upon by all. Further, and especially in mux’s case, logging
is not the focus of the library, and writing your own logging “middleware” can
be simpler than you expect.

The patterns in this guide can be extended to any HTTP middleware use-cases,
including authentication & authorization, metrics, tracing, and web security.
Logging just happens to be one of the most common use-cases and makes for a
great example.


WHY IS MIDDLEWARE USEFUL?

> If you’ve been writing Go for a while, you can skip to the code at the end of
> this post.

Middleware allows us to separate concerns and write composable applications—and
in a world of micro-services, allow clearer lines of ownership for specific
components.

Specifically:

 * Authentication and authorization (“authn” and “authz”) can be handled
   uniformly: we keep it separate from our primary business logic, and can
   share the same authn/authz handling across our organization. Separating
   this makes it easier to add new authentication providers, and (importantly)
   to fix potential security issues as a team grows.
 * Similar to authn & authz, we can define a set of re-usable logging, metrics &
   tracing middleware for our applications, so that troubleshooting across
   services and/or teams isn’t a pot-luck.
 * Testing becomes simpler, as we can draw clearer boundaries around each
   component: noting that integration testing is still important for end-to-end
   validation.

With this in mind, let’s see how defining “re-usable” middleware in Go actually
works.


A COMMON MIDDLEWARE INTERFACE

One thing that’s important when writing any middleware is that it be loosely
coupled from your choice of framework or router-specific APIs. Handlers should
be usable by any HTTP-speaking Go service: if team A chooses net/http, team B
chooses gorilla/mux, and team C wants to use Twirp, then our middleware
shouldn’t force a choice or be constrained within a particular framework.

Go’s net/http library defines the http.Handler interface, and satisfying this
makes it easy to write portable HTTP handling code.

The only method required to satisfy http.Handler is
ServeHTTP(http.ResponseWriter, *http.Request) - and the concrete
http.HandlerFunc adapter type means that you can convert any function with a
matching signature into a type that satisfies http.Handler.

Example:

func ExampleMiddleware(next http.Handler) http.Handler {
  // We wrap our anonymous function, and cast it to a http.HandlerFunc
  // Because our function signature matches ServeHTTP(w, r), this allows
  // our function (type) to implicitly satisfy the http.Handler interface.
  return http.HandlerFunc(
    func(w http.ResponseWriter, r *http.Request) {
      // Logic before - reading request values, putting things into the
      // request context, performing authentication

      // Important that we call the 'next' handler in the chain. If we don't,
      // then request handling will stop here.
      next.ServeHTTP(w, r)
      // Logic after - useful for logging, metrics, etc.
      //
      // It's important that we don't use the ResponseWriter after we've called the
      // next handler: we may cause conflicts when trying to write the response
    },
  )
}


This is effectively the recipe for any middleware we want to build. Each
middleware component (which is just a http.Handler implementation!) wraps
another, performs any work it needs to, and then calls the handler it wrapped
via next.ServeHTTP(w, r).

If we need to pass values between handlers, such as the ID of the authenticated
user, or a request or trace ID, we can then use the context.Context attached to
the *http.Request via the *Request.Context() method, introduced back in Go 1.7.
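
For example, here’s a minimal sketch of middleware that stores a request ID on
the context for downstream handlers (the key type, middleware name and
hard-coded ID are purely illustrative; imports: "context", "net/http"):

type ctxKey string

const requestIDKey ctxKey = "requestID"

func RequestIDMiddleware(next http.Handler) http.Handler {
  return http.HandlerFunc(
    func(w http.ResponseWriter, r *http.Request) {
      // Attach a value to the request context for handlers further down the chain.
      ctx := context.WithValue(r.Context(), requestIDKey, "req-1234")
      next.ServeHTTP(w, r.WithContext(ctx))
    },
  )
}

// A downstream handler can then retrieve it:
// id, ok := r.Context().Value(requestIDKey).(string)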

A stack of middleware would look like the below:

router := http.NewServeMux()
router.HandleFunc("/", indexHandler)

// Requests traverse LoggingMiddleware -> OtherMiddleware -> YetAnotherMiddleware -> final handler
configuredRouter := LoggingMiddleware(OtherMiddleware(YetAnotherMiddleware(router)))
log.Fatal(http.ListenAndServe(":8000", configuredRouter))


This looks composable (check!), but what if we want to inject dependencies or
otherwise customize the behaviour of each handler in the stack?


INJECTING DEPENDENCIES

In the above ExampleMiddleware, we created a simple function that accepted a
http.Handler and returned a http.Handler. But what if we wanted to provide our
own logger implementation, inject other config, and/or not rely on global
singletons?

Let’s take a look at how we can achieve that while still having our middleware
accept (and return) http.Handler.

func NewExampleMiddleware(someThing string) func(http.Handler) http.Handler {
  return func(next http.Handler) http.Handler {
    fn := func(w http.ResponseWriter, r *http.Request) {
      // Logic here

      // Call the next handler
      next.ServeHTTP(w, r)
    }

    return http.HandlerFunc(fn)
  }
}


By returning a func(http.Handler) http.Handler we can make the dependencies of
our middleware clearer, and allow consumers of our middleware to configure it to
their needs.
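
Usage is then a matter of calling the outer function with our dependency, and
applying the returned middleware as before (the argument value is
illustrative):

mw := NewExampleMiddleware("some-dependency")
handler := mw(router) // a plain http.Handler, ready for http.ListenAndServe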

In our logging example, we may want to pass an application-level logger with
some existing configuration (say, the service name and a timestamp format) to
our LoggingMiddleware, without having to copy-paste it or otherwise rely on
package globals, which make our code harder to reason about & test.


THE CODE: LOGGINGMIDDLEWARE

Let’s take everything we’ve learned above and build a middleware function that logs:

 * The request method & path
 * The status code written to the response, using our own implementation of
   http.ResponseWriter (more on this below)
 * The duration of the HTTP request & response - until the last bytes are
   written to the response
 * It also allows us to inject our own log.Logger instance from kit/log.

Source on GitHub

// request_logger.go
package admissioncontrol

import (
  "net/http"
  "runtime/debug"
  "time"

  log "github.com/go-kit/kit/log"
)

// responseWriter is a minimal wrapper for http.ResponseWriter that allows the
// written HTTP status code to be captured for logging.
type responseWriter struct {
  http.ResponseWriter
  status      int
  wroteHeader bool
}

func wrapResponseWriter(w http.ResponseWriter) *responseWriter {
  return &responseWriter{ResponseWriter: w}
}

func (rw *responseWriter) Status() int {
  return rw.status
}

func (rw *responseWriter) WriteHeader(code int) {
  if rw.wroteHeader {
    return
  }

  rw.status = code
  rw.ResponseWriter.WriteHeader(code)
  rw.wroteHeader = true
}

// LoggingMiddleware logs the incoming HTTP request & its duration.
func LoggingMiddleware(logger log.Logger) func(http.Handler) http.Handler {
  return func(next http.Handler) http.Handler {
    fn := func(w http.ResponseWriter, r *http.Request) {
      defer func() {
        if err := recover(); err != nil {
          w.WriteHeader(http.StatusInternalServerError)
          logger.Log(
            "err", err,
            "trace", debug.Stack(),
          )
        }
      }()

      start := time.Now()
      wrapped := wrapResponseWriter(w)
      next.ServeHTTP(wrapped, r)
      logger.Log(
        "status", wrapped.status,
        "method", r.Method,
        "path", r.URL.EscapedPath(),
        "duration", time.Since(start),
      )
    }

    return http.HandlerFunc(fn)
  }
}


Review:

 * We implement our own responseWriter type that captures the status code of a
   response, allowing us to log it (since it’s not known until the response is
   written). Importantly, we don’t have to re-implement every method of
   http.ResponseWriter: we embed the one we receive, override WriteHeader(int)
   to capture the status, and add a Status() int accessor, carrying state in
   our .status and .wroteHeader struct fields.
 * http.HandlerFunc converts our function into a http.HandlerFunc, which
   satisfies the http.Handler interface via its ServeHTTP method.
 * Our Logger also logs panics (optional, but useful) so we can capture them in
   our logging system too.
 * Because we directly inject the log.Logger - we can both configure it, and
   mock it during tests.
 * Calling .Log() allows us to pass whichever values we need - we may not want
   to log all values at once, but it’s also easy to expand as necessary. There
   is no “one size fits all” logger.

Notably, I use kit/log here, although you could use any logger you like,
including the standard library - noting that you’d be missing the benefits of
structured logging if you went down that path.
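
Because the logger is injected, a test can capture its output in a bytes.Buffer
and assert against it. A minimal sketch of such a test, assuming it lives
alongside LoggingMiddleware (the handler and assertion are illustrative):

// request_logger_test.go
import (
  "bytes"
  "net/http"
  "net/http/httptest"
  "strings"
  "testing"

  log "github.com/go-kit/kit/log"
)

func TestLoggingMiddleware(t *testing.T) {
  var buf bytes.Buffer
  logger := log.NewLogfmtLogger(&buf)

  handler := LoggingMiddleware(logger)(http.HandlerFunc(
    func(w http.ResponseWriter, r *http.Request) {
      w.WriteHeader(http.StatusTeapot)
    },
  ))

  rec := httptest.NewRecorder()
  handler.ServeHTTP(rec, httptest.NewRequest("GET", "/test", nil))

  if !strings.Contains(buf.String(), "status=418") {
    t.Errorf("expected status=418 in log output, got: %q", buf.String())
  }
}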


A FULL EXAMPLE

Below is a full (runnable!) example, using the version of LoggingMiddleware we
defined above, as published in the elithrar/admission-control package:

// server.go
package main

import (
  "fmt"
  stdlog "log"
  "net/http"
  "os"

  "github.com/elithrar/admission-control"
  log "github.com/go-kit/kit/log"
)

func myHandler(w http.ResponseWriter, r *http.Request) {
  fmt.Fprintln(w, "hello!")
}

func main() {
  router := http.NewServeMux()
  router.HandleFunc("/", myHandler)

  var logger log.Logger
  // Logfmt is a structured, key=val logging format that is easy to read and parse
  logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
  // Direct any attempts to use Go's log package to our structured logger
  stdlog.SetOutput(log.NewStdlibAdapter(logger))
  // Log the timestamp (in UTC) and the callsite (file + line number) of the logging
  // call for debugging in the future.
  logger = log.With(logger, "ts", log.DefaultTimestampUTC, "loc", log.DefaultCaller)

  // Create an instance of our LoggingMiddleware with our configured logger
  loggingMiddleware := admissioncontrol.LoggingMiddleware(logger)
  loggedRouter := loggingMiddleware(router)

  // Start our HTTP server
  if err := http.ListenAndServe(":8000", loggedRouter); err != nil {
    logger.Log("status", "fatal", "err", err)
    os.Exit(1)
  }
}


If we run this server, and then make a request against it, we’ll see our log
line output to stderr:

    $ go run server.go
    # Make a request with: curl localhost:8000/
    ts=2020-03-21T18:30:58.8816186Z loc=server.go:62 status=0 method=GET path=/ duration=7.6µs


If we wanted to log more information - such as *Request.Host, a value from
*Request.Context() (e.g. a trace ID), or specific response headers, we could
easily do that by extending the call to logger.Log as needed in our own version
of the middleware.
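
For example (trace_id assumes an earlier middleware stored a traceIDKey value
on the context; both names are illustrative):

logger.Log(
  "status", wrapped.status,
  "method", r.Method,
  "path", r.URL.EscapedPath(),
  "host", r.Host,
  "trace_id", r.Context().Value(traceIDKey), // set by an upstream middleware
  "duration", time.Since(start),
)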


SUMMARY

We were able to build a flexible, re-usable middleware component by:

 * Satisfying Go’s existing http.Handler interface, allowing our code to be
   loosely coupled from underlying framework choices
 * Returning closures to inject our dependencies and avoid global
   (package-level) config
 * Using composition - when we defined a wrapper around the http.ResponseWriter
   interface - to override specific methods, as we did with our logging
   middleware.

Taking this, you can hopefully see how you might provide the basis for
authentication middleware, or metrics middleware that counts status codes and
response sizes.
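
As a sketch of the latter, the same pattern counts responses by status code,
re-using our responseWriter wrapper (the Metrics interface is hypothetical -
swap in your metrics library of choice; imports: "fmt", "net/http"):

type Metrics interface {
  Increment(name string)
}

func MetricsMiddleware(m Metrics) func(http.Handler) http.Handler {
  return func(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
      wrapped := wrapResponseWriter(w)
      next.ServeHTTP(wrapped, r)
      // Count responses by status code - e.g. "http.responses.200"
      m.Increment(fmt.Sprintf("http.responses.%d", wrapped.status))
    })
  }
}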

And because we used http.Handler as our foundation, the middleware we author can
be easily consumed by others!

Pretty good, huh?


POSTSCRIPT: LOGS VS METRICS VS TRACES

It’s worth taking a moment to define what we mean by “logging”. Logging is about
capturing (hopefully) structured event data, and logs are good for detailed
investigation, but are large in volume and can be slow(er) to query. Metrics are
directional (think: # of requests, login failures, etc) and good for monitoring
trends, but don’t give you the full picture. Traces track the lifecycle of a
request or query across systems.

Although this article talks about better logging for Go web services, a
production application should consider all dimensions. I recommend reading Peter
Bourgon’s post on Metrics, tracing & logging for a deeper dive on this topic.

--------------------------------------------------------------------------------


ADMISSION CONTROL: A HELPFUL MICRO-FRAMEWORK FOR KUBERNETES


Admission Control (GitHub) is a micro-framework written in Go for building and
deploying dynamic admission controllers for your Kubernetes clusters. It reduces
the boilerplate needed to inspect, validate and/or reject the admission of
objects to your cluster, allowing you to focus on writing the specific business
logic you want to enforce.

The framework was born out of the need to cover a major gap with most managed
Kubernetes providers: namely, that a LoadBalancer is public-by-default. As I
started to prototype an admission controller that could validate-and-reject
public load balancer Services, I realized that I was writing a lot of
boilerplate in order to satisfy Kubernetes’ admission API and (importantly)
stand up a reliable controller.

> What is an Admission Controller?: When you deploy, update or otherwise change
> the state of a Kubernetes (k8s) cluster, your change needs to be validated by
> the control plane. By default, Kubernetes has a number of built-in “admission
> controllers” that validate and (in some cases) enforce resource quotas,
> service account automation, and other cluster-critical tasks. Usefully,
> Kubernetes also supports dynamic admission controllers: that is, admission
> controllers you can write yourself.

For example, you can write admission controllers for:

 * Validating that specific annotations are present on all of your Services -
   such as a valid DNS hostname on your company domain.
 * Rejecting Ingress or Service objects that would create a public-facing
   load-balancer/VIP as part of a defense-in-depth approach for a private
   cluster.
 * Mutating fields: resolving container image tags into hashes for security, or
   generating side-effects such as pushing state or status updates into another
   system.

The last example - a MutatingWebhookConfiguration - can be extremely powerful,
but you should consider how mutating live objects might make troubleshooting
more challenging down the road vs. rejecting admission outright.


WRITING YOUR OWN

Writing your own dynamic admission controller is fairly simple, and has three
key parts:

 1. The admission controller itself: a service running somewhere (in-cluster or
    otherwise)
 2. An admissioncontrol.AdmitFunc that performs the validation. An AdmitFunc has
    a http.Handler compatible wrapper that allows you to BYO Go webserver
    library.
 3. A ValidatingWebhookConfiguration (or Mutating...) that defines what Kinds of
    objects are checked against the controller, what methods (create, update,
    etc) and how failure should be handled.

If you’re already familiar with Go, Kubernetes, and want to see the framework in
action, here’s a simple example that requires any Service have a specific
annotation (key, value).

Note that the README contains step-by-step instructions for creating,
configuring and running an admission controller on your cluster, as well as
sample configurations to help you get started.

// The imports used by this snippet - API versions as assumed for this post:
import (
  admission "k8s.io/api/admission/v1beta1"
  core "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "k8s.io/apimachinery/pkg/runtime"
  "k8s.io/apimachinery/pkg/runtime/serializer"

  "golang.org/x/xerrors"
)

// ServiceHasAnnotation is a simple validating AdmitFunc that inspects a
// Service for a static annotation key & value. If the annotation does not
// match, or a non-Service object is sent to the AdmitFunc, admission will be
// rejected.
func ServiceHasAnnotation(requiredKey, requiredVal string) AdmitFunc {
  // Return a function of type AdmitFunc
  return func(admissionReview *admission.AdmissionReview) (*admission.AdmissionResponse, error) {
    kind := admissionReview.Request.Kind.Kind
    // Create an *admission.AdmissionResponse that denies by default.
    resp := &admission.AdmissionResponse{
      Allowed: false,
      Result:  &metav1.Status{},
    }

    // Create an object to deserialize our request's object into.
    // If we get a type we can't decode, we will reject admission.
    // Our ValidatingWebhookConfiguration will be configured to only
    // send Service objects to this endpoint.
    svc := core.Service{}
    deserializer := serializer.NewCodecFactory(runtime.NewScheme()).UniversalDeserializer()
    if _, _, err := deserializer.Decode(admissionReview.Request.Object.Raw, nil, &svc); err != nil {
      return nil, err
    }

    for k, v := range svc.ObjectMeta.Annotations {
      if k == requiredKey && v == requiredVal {
        // Set resp.Allowed to true before returning your AdmissionResponse
        resp.Allowed = true
        break
      }
    }

    if !resp.Allowed {
      return resp, xerrors.Errorf("submitted %s is missing annotation (%s: %s)",
        kind, requiredKey, requiredVal)
    }

    return resp, nil
  }
}


We can now use the AdmissionHandler wrapper to translate HTTP requests &
responses for us. In this example, we’re using gorilla/mux as our routing
library, but since AdmissionHandler satisfies the http.Handler interface, you
could use net/http as well.

You would deploy this as a Service to your cluster: an admission controller is
ultimately just a webserver that knows how to handle an AdmissionRequest and
return an AdmissionResponse.

r := mux.NewRouter().StrictSlash(true)
admissions := r.PathPrefix("/admission-control").Subrouter()
admissions.Handle("/enforce-static-annotation", &admissioncontrol.AdmissionHandler{
	AdmitFunc:  admissioncontrol.ServiceHasAnnotation("k8s.example.com", "hello-world"),
	Logger:     logger,
}).Methods(http.MethodPost)


You can hopefully see how powerful this is already.

We can decode our request into a native Kubernetes object (or a custom
resource), parse an object, and match on any field we want to in order to
enforce our business logic. We could easily make this more dynamic by feeding
the admission controller itself a ConfigMap of values we want it to check for,
instead of hard-coding the values into the service itself.
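
To sketch that (RequiredAnnotations and its map argument are hypothetical, and
would be populated at startup, e.g. from a mounted ConfigMap; imports are the
same as the ServiceHasAnnotation example above):

// RequiredAnnotations is a hypothetical variant of ServiceHasAnnotation that
// takes a map of required key/value pairs - e.g. parsed from a mounted
// ConfigMap at startup - instead of hard-coded values.
func RequiredAnnotations(required map[string]string) AdmitFunc {
  return func(admissionReview *admission.AdmissionReview) (*admission.AdmissionResponse, error) {
    resp := &admission.AdmissionResponse{Allowed: false, Result: &metav1.Status{}}

    svc := core.Service{}
    deserializer := serializer.NewCodecFactory(runtime.NewScheme()).UniversalDeserializer()
    if _, _, err := deserializer.Decode(admissionReview.Request.Object.Raw, nil, &svc); err != nil {
      return nil, err
    }

    // Every required pair must be present & match.
    for k, v := range required {
      if svc.ObjectMeta.Annotations[k] != v {
        return resp, xerrors.Errorf("submitted Service is missing annotation (%s: %s)", k, v)
      }
    }

    resp.Allowed = true
    return resp, nil
  }
}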


WRITING OUR VALIDATINGWEBHOOKCONFIGURATION

A ValidatingWebhookConfiguration is what determines which admissions are sent to
your webhook.

Using our example above, we’ll create a simple configuration that validates all
Service objects deployed to any Namespace in our cluster that carries an
enforce-annotations: "true" label.

apiVersion: v1
kind: Namespace
metadata:
  # Create a namespace that we'll match on
  name: enforce-annotations-example
  labels:
    enforce-annotations: "true"
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: enforce-static-annotations
webhooks:
  - name: enforce-static-annotations.questionable.services
    sideEffects: None
    # "Equivalent" provides insurance against API version upgrades/changes - e.g.
    # extensions/v1beta1 Ingress -> networking.k8s.io/v1beta1 Ingress
    # matchPolicy: Equivalent
    rules:
      - apiGroups:
          - "*"
        apiVersions:
          - "*"
        operations:
          - "CREATE"
          - "UPDATE"
        resources:
          - "services"
    namespaceSelector:
      matchExpressions:
        # Any Namespace with a label matching the below will have its
        # annotations validated by this admission controller
        - key: "enforce-annotations"
          operator: In
          values: ["true"]
    failurePolicy: Fail
    clientConfig:
      service:
        # This is the hostname our certificate needs in its Subject Alternative
        # Name array - name.namespace.svc
        # If the certificate does NOT have this name, TLS validation will fail.
        name: admission-control-service # the name of the Service when deployed in-cluster
        namespace: default
        path: "/admission-control/enforce-static-annotation"
      # This should be the CA certificate from your Kubernetes cluster
      # Use the below to generate the certificate in a valid format:
      # $ kubectl config view --raw --minify --flatten \
      #   -o jsonpath='{.clusters[].cluster.certificate-authority-data}'
      caBundle: "<snip>"
      # You can alternatively supply a URL to the service, as long as it's reachable by the cluster.
      # url: "https://admission-control-example.questionable.services/admission-control/enforce-pod-annotations"


A Service that would match this configuration and be successfully validated
would look like the below:

apiVersion: v1
kind: Service
metadata:
  name: public-service
  namespace: enforce-annotations-example
  annotations:
    "k8s.example.com": "hello-world"
spec:
  type: LoadBalancer
  selector:
    app: hello-app
  ports:
    - port: 8000
      protocol: TCP
      targetPort: 8080


Deploying a Service without the required annotation would return an error
similar to the below:

Error from server: submitted Service is missing required annotation (k8s.example.com: hello-world)


… and reject admission. Because we also have UPDATE in our .rules.operations
list, removing the annotation from (or otherwise modifying) a
previously-admitted Service would also be rejected.


THINGS TO WATCH OUT FOR

One important thing worth noting is that a “Pod” is not always a “Pod” - if you
want to enforce (for example) that the value of containers.image in any created
Pod references a specific registry URL, you’ll need to write logic that inspects
the PodTemplate of a Deployment, StatefulSet, DaemonSet and other types that can
indirectly create a Pod.

There is not currently (as of Kubernetes v1.17) a way to reference a type
regardless of how it is embedded in other objects: in order to combat this,
default deny objects that you don’t have explicit handling for.
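
A sketch of how that explicit handling might look (podSpecFromObject is a
hypothetical helper; apps here is k8s.io/api/apps/v1, core is
k8s.io/api/core/v1):

// podSpecFromObject extracts the PodSpec from the kinds we explicitly handle,
// and rejects (default-denies) everything else.
func podSpecFromObject(kind string, raw []byte) (*core.PodSpec, error) {
  deserializer := serializer.NewCodecFactory(runtime.NewScheme()).UniversalDeserializer()
  switch kind {
  case "Pod":
    pod := core.Pod{}
    if _, _, err := deserializer.Decode(raw, nil, &pod); err != nil {
      return nil, err
    }
    return &pod.Spec, nil
  case "Deployment":
    deployment := apps.Deployment{}
    if _, _, err := deserializer.Decode(raw, nil, &deployment); err != nil {
      return nil, err
    }
    return &deployment.Spec.Template.Spec, nil
  // ... repeat for StatefulSet, DaemonSet, Job, CronJob, etc.
  default:
    return nil, xerrors.Errorf("unhandled kind: %s", kind)
  }
}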

Other best practices:

 * You should also scope admission controllers to namespaces using the
   .webhooks.namespaceSelector field: this will allow you to automate which
   namespaces have certain admission controls applied. Applying controls to
   kube-system and other cluster-wide administrative namespaces can break your
   deployments.
 * Make sure your admission controllers are reliable: running your admission
   controller as a Deployment with its own replicas will prevent downtime from
   the controller being unavailable.
 * Test, test, test. Run both unit tests and integration tests to make sure your
   AdmitFuncs are behaving as expected. The Kubernetes API surface is large, and
   there are often multiple versions of an object in play (v1beta1, v1, etc) for
   a given Kubernetes version. See the framework tests for an example of how to
   test your own AdmitFuncs.

> Note: a project with a similar goal is Open Policy Agent, which requires you
> to write policies in Rego, a query language/DSL. This can be useful for
> simpler policies, but I would argue that once you get into more complex policy
> matching, the ability to use k8s packages, types and a Turing-complete
> language (Go) is long-term beneficial to a large team.


WHAT’S NEXT?

Take a look at the README for Admission Control, including some of the built-in
AdmitFuncs, for how more complex enforcement and object handling can be done.

You can also create an AdmissionServer to simplify the creation of the webhook
server, including handling interrupt & termination signals cleanly, startup
errors, and timeouts. Good server lifecycle management is important when running
applications on top of Kubernetes, let alone ‘control plane’ services like an
admission controller.

Contributions to the framework are also welcome. Releases are versioned, and
adding to the existing library of built-in AdmitFuncs is an ongoing effort.

--------------------------------------------------------------------------------


BUILDING GO PROJECTS ON CIRCLECI


> Updated September 2020: Now incorporates the matrix functionality supported in
> CircleCI.

If you follow me on Twitter, you would have noticed I was looking to migrate
the Gorilla Toolkit from TravisCI to CircleCI as our build-system-of-choice
after TravisCI was bought out & fired a bunch of senior engineers. We’d been
using TravisCI for a while and appreciated the simple config, but realized it
was time to move on.

I also spent some time validating a few options (Semaphore, BuildKite, Cirrus)
but landed on CircleCI for its popularity across open-source projects,
relatively sane (if a little large) config API, and deep GitHub integration.


REQUIREMENTS

I had two core requirements I needed to check off:

 1. The build system should make it easy to build multiple Go versions from the
    same config: our packages are widely used by a range of different Go
    programmers, and have been around since the early Go releases. As a result,
    we work hard to support older Go versions (where possible) and use build
    tags to prevent newer Go APIs from getting in the way of that.

 2. Figuring out what went wrong should be easy: a sane UI, clear build/error
    logs, and deep GitHub PR integration so that a contributor can be empowered
    to debug their own failing builds. Overall build performance falls into this
    too: faster builds make for a faster feedback loop, so a contributor is more
    inclined to fix it now.


THE CONFIG

Without further ado, here’s what the current (September, 2020)
.circleci/config.yml looks like for gorilla/mux - with a ton of comments to step
you through it.

version: 2.1

jobs:
  "test":
    parameters:
      version:
        type: string
        default: "latest"
      golint:
        type: boolean
        default: true
      modules:
        type: boolean
        default: true
      goproxy:
        type: string
        default: ""
    docker:
      - image: "circleci/golang:<< parameters.version >>"
    working_directory: /go/src/github.com/gorilla/mux
    environment:
      GO111MODULE: "on"
      GOPROXY: "<< parameters.goproxy >>"
    steps:
      - checkout
      - run:
          name: "Print the Go version"
          command: >
            go version
      - run:
          name: "Fetch dependencies"
          command: >
            if [[ << parameters.modules >> = true ]]; then
              go mod download
              export GO111MODULE=on
            else
              go get -v ./...
            fi
      # Only run gofmt, vet & lint against the latest Go version
      - run:
          name: "Run golint"
          command: >
            if [ << parameters.version >> = "latest" ] && [ << parameters.golint >> = true ]; then
              go get -u golang.org/x/lint/golint
              golint ./...
            fi
      - run:
          name: "Run gofmt"
          command: >
            if [[ << parameters.version >> = "latest" ]]; then
              diff -u <(echo -n) <(gofmt -d -e .)
            fi
      - run:
          name: "Run go vet"
          command: >
            if [[ << parameters.version >> = "latest" ]]; then
              go vet -v ./...
            fi
      - run:
          name: "Run go test (+ race detector)"
          command: >
            go test -v -race ./...

workflows:
  tests:
    jobs:
      - test:
          matrix:
            parameters:
              version: ["latest", "1.15", "1.14", "1.13", "1.12", "1.11"]


> Updated: September 2020:

We now use the matrix parameter to define a list of Go versions. Our test job
is then run for each version we define, automatically.

In our case, since we only want to run golint and other tools on the latest
version, we check << parameters.version >> = "latest" before running those build
steps.

Pretty straightforward, huh? We define a single parameterized test job, and the
matrix expands it into one job per Go version, overriding only the bits we need
(the Docker image tag, whether to run golint) without having to repeat
ourselves.

By default, the jobs in our workflow’s jobs list run in parallel, so we don’t
need to do anything special there. A workflow with sequential build steps can
set a requires value to indicate the jobs that must run before it (docs).

> Note: If you’re interested in what the previous TravisCI config looked like
> vs. the new CircleCI config, see here.


GO MODULES?

> Updated: September 2020

Works out of the box!

If you’re also vendoring dependencies with go mod vendor, then you’ll want to
make sure you pass the -mod=vendor flag to go test or go build as per the Module
docs.


OTHER TIPS

A few things I discovered along the way:

 * Building from forks is not enabled by default - e.g. when a contributor
   (normally) submits a PR from their fork. You’ll need to turn it on
   explicitly.
 * Enable GitHub Checks to get deeper GitHub integration and make it easier to
   see build status from within the Pull Request UI itself (example).
 * Updating the CI config on 10+ projects is not fun, so I wrote a quick Go
   program that templates the config.yml and generates it for a given list of
   repos (a sketch of the approach follows this list).
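
A minimal sketch of that generator, using text/template (the template filename,
repo list and output paths are illustrative):

// generate.go
package main

import (
  "os"
  "path/filepath"
  "text/template"
)

var repos = []string{"mux", "handlers", "sessions", "websocket"}

func main() {
  tmpl := template.Must(template.ParseFiles("config.yml.tmpl"))
  for _, repo := range repos {
    dir := filepath.Join(repo, ".circleci")
    if err := os.MkdirAll(dir, 0755); err != nil {
      panic(err)
    }
    f, err := os.Create(filepath.Join(dir, "config.yml"))
    if err != nil {
      panic(err)
    }
    // Render the shared template with any per-repo values.
    if err := tmpl.Execute(f, map[string]string{"Repo": repo}); err != nil {
      panic(err)
    }
    f.Close()
  }
}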

In the end, it took a couple of days to craft a decent CircleCI config (see:
large API surface), but thankfully the CircleCI folks were pretty helpful on
that front. I’m definitely happy with the move away from Travis, and hopefully
our contributors are too!

--------------------------------------------------------------------------------


CONNECTING TO A CORAL TPU DEV BOARD WITH WINDOWS


The Coral Dev Board is a TPU-enabled development board for testing out machine
learning models with a requirement for near-real-time inference. For instance,
image classification or object detection on video feeds, where a CPU would
struggle to keep up.

However, the dev board’s setup instructions only document the flashing process
for native Linux. It’s entirely possible to flash the boot image from native
Windows (without WSL): here’s how!


PRE-REQUISITES

You’ll need to install a few things: this is mostly a process of clicking “next”
a few times (the USB drivers) and unzipping a folder (the Android Platform
Tools).

 * Install the CP210x USB to UART drivers:
   https://www.silabs.com/products/development-tools/software/usb-to-uart-bridge-vcp-drivers
 * Use the Android Platform Tools distribution for fastboot -
   https://developer.android.com/studio/releases/platform-tools.html#download
   and set your PATH to point at the location of this (unzipped) folder - e.g.
   in cmd via setx path "%path%;%userprofile%/Downloads/platform-tools"
 * A serial console utility: PuTTY is my go-to on Windows.
 * Ensure you have the right cables: a USB-C power cable, a micro-USB cable (for
   the serial console), and a USB-C data cable.

You should also be moderately familiar with serial consoles & have read through
the Coral’s setup instructions to familiarize yourself with the process.

> Note: It’s important to make sure you’re using a data-capable USB-C cable when
> connecting to the USB-C data port. Like many things USB-C / USB 3.x, this can
> be non-obvious at first. You’ll know it’s working when a “⚠ USB Download
> Gadget” appears in the Device Manager. If you use a power-only cable, nothing
> will show up and it’ll seem as if the OS isn’t seeing the device.


CONNECTING TO THE SERIAL CONSOLE

Mostly identical to the Coral setup instructions:

 1. Connect to the dev board’s micro-USB port, and identify the COM port the
    device is attached to in the Device Manager by looking under “Ports (COM &
    LPT)” for the “CP2105 USB to UART (Standard)” device. In my case, it was
    COM3.
 2. Power on the board by connecting the USB-C power cable to the power port
    (furthest from the HDMI port).
 3. Open PuTTY, select “Serial” as the connection option, set the COM port to
    the one you identified above, and the data rate to 115200bps. For
    confirmation, the serial comms settings should be at 8 data bits, no parity
    bits, 1 stop bit and XON/XOFF flow control.

The serial port on the dev board accepts other settings, but I’m documenting an
explicit list for those who don’t have a background in serial comms.

You should now be at the dev board’s uboot prompt, and ready to flash the
bootloader & disk image. If not, check that the board is powered on, that the
COM port is correct, and that the Device Manager lists the device.


FLASHING THE BOARD

Connect the USB-C data cable to the dev board, and the other end to your PC.

In the Device Manager, you’ll see a “USB Download Gadget” appear with a warning
symbol. Right click, choose “Update Driver”, select “Browse my computer for
driver software” and then “Let me pick from a list of available drivers from my
computer”. In the driver browser, choose “WinUsb Device” from the left side, and
“ADB Device” (Android Debugger) from the right. Click “Next” and accept the
warning. The Device Manager will refresh, and show the device under “Universal
Serial Bus devices”.

To confirm it’s configured correctly and visible to the OS, head back to your
command prompt and enter:

λ fastboot devices
122041d6ef944da7        fastboot


If you don’t see anything, confirm the device is still showing in the Device
Manager, and that you have the latest version of fastboot from the Android
Platform Tools (linked above).

From here, you’ll need to download and unzip the bootloader image and the disk
image (identical to the official instructions), and confirm you see the contents
below:

λ curl -O https://dl.google.com/aiyprojects/mendel/enterprise/mendel-enterprise-beaker-22.zip
λ unzip mendel-enterprise-beaker-22.zip
λ cd mendel-enterprise-beaker-22
λ ls
    boot_arm64.img  partition-table-16gb.img  partition-table-8gb.img  rootfs_arm64.img
    flash.sh*       partition-table-64gb.img  recovery.img             u-boot.imx


Unfortunately, the flash.sh script is a Bash script, which won’t work for us on
Windows. But we can easily replicate what it does:

λ tail -n 15 flash.sh
fi

# Flash bootloader
${FASTBOOT_CMD} flash bootloader0 ${PRODUCT_OUT}/u-boot.imx
${FASTBOOT_CMD} reboot-bootloader

# Flash partition table
${FASTBOOT_CMD} flash gpt ${PRODUCT_OUT}/${PART_IMAGE}
${FASTBOOT_CMD} reboot-bootloader

# Flash filesystems
${FASTBOOT_CMD} erase misc
${FASTBOOT_CMD} flash boot ${PRODUCT_OUT}/boot_${USERSPACE_ARCH}.img
${FASTBOOT_CMD} flash rootfs ${PRODUCT_OUT}/rootfs_${USERSPACE_ARCH}.img
${FASTBOOT_CMD} reboot


Where we see “FASTBOOT_CMD” we simply run fastboot - and where we see
USERSPACE_ARCH we only have one choice for the dev board: arm64. We can work
with this.

In the serial console (e.g. in PuTTY), put the dev board into fastboot mode:

fastboot 0


Then, in the command prompt and from within the mendel-enterprise-beaker-22
directory, invoke the following commands. You should leave the serial console
connected: you’ll see the progress of each step.

fastboot flash bootloader0 u-boot.imx
fastboot reboot-bootloader
 
fastboot flash gpt partition-table-8gb.img
fastboot reboot-bootloader

fastboot erase misc
fastboot flash boot boot_arm64.img
fastboot flash rootfs rootfs_arm64.img
fastboot reboot


When the device reboots, you’ll get a more familiar Linux login prompt in the
serial console! Enter mendel (username) and mendel (password) to log in, and
then follow the steps within the official documentation to set up network
connectivity! You’ll then be able to log into the board remotely via SSH, and
will only need to connect it to power unless you want to flash it again.

Beyond that: enjoy experimenting & building things on your Coral Dev Board! And
if you run into issues, or find something unclear in these instructions, you can
reach me on Twitter at @elithrar.

--------------------------------------------------------------------------------


UPDATING KUBERNETES DEPLOYMENTS ON A CONFIGMAP CHANGE


> Update (June 2019): kubectl v1.15 now provides a rollout restart sub-command
> that allows you to restart Pods in a Deployment - taking into account your
> surge/unavailability config - and thus have them pick up changes to a
> referenced ConfigMap, Secret or similar. It’s worth noting that you can use
> this with clusters older than v1.15, as it’s implemented in the client.
> 
> Example usage: kubectl rollout restart deploy/admission-control to restart a
> specific deployment. Easy as that!

One initially non-obvious thing to me about Kubernetes was that changing a
ConfigMap (a set of configuration values) is not detected as a change to
Deployments (how a Pod, or set of Pods, should be deployed onto the cluster) or
Pods that reference that configuration. Assuming otherwise can result in
unintentionally stale configuration persisting until something else changes the
Pod spec. This could include freshly created Pods due to an autoscaling event,
or even restarts after a crash, resulting in misconfiguration and unexpected
behaviour across the cluster.

> Note: This doesn’t impact ConfigMaps mounted as volumes, which are
> periodically synced by the kubelet running on each node.

Updating the ConfigMap and running kubectl apply -f deployment.yaml results in a
no-op, which makes sense if you consider the impacts of an unintended config
change and rollout in a larger deployment.

But, there are certainly cases where we want to:

 * Update a ConfigMap
 * Have our Deployment reference that specific ConfigMap version (in a
   version-control & CI friendly way)
 * Rollout a new revision of our Deployment

So how can we accomplish that? It turns out to be fairly straightforward, but
let’s step through an example.


EXAMPLE

Our ConfigMap, applied to our Kubernetes cluster:

➜  less demo-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
  namespace: default
data:
  READ_TIMEOUT_SECONDS: "15"
  WRITE_TIMEOUT_SECONDS: "15"
  NAME: "elithrar"
➜  kubectl apply -f demo-config.yaml
configmap/demo-config created


And here’s our Deployment before we’ve referenced this version of our ConfigMap
- notice the spec.template.metadata.annotations.configHash key we’ve added. It’s
important to note that modifying a top-level Deployment’s metadata.annotations
value is not sufficient: a Deployment will only re-create our Pods when the
underlying template.spec (Pod spec) changes.

This is how we’ll couple the Deployment with our ConfigMap, triggering a change
in our Deployment only when our ConfigMap actually changes.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
  labels:
    app: config-demo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: config-demo-app
  template:
    metadata:
      labels:
        app: config-demo-app
      annotations:
        # The field we'll use to couple our ConfigMap and Deployment
        configHash: ""
    spec:
      containers:
      - name: config-demo-app
        image: gcr.io/optimum-rock-145719/config-demo-app
        ports:
        - containerPort: 80
        envFrom:
        # The ConfigMap we want to use
        - configMapRef:
            name: demo-config
        # Extra-curricular: We can make the hash of our ConfigMap available at a
        # (e.g.) debug endpoint via a fieldRef
        env:
          - name: CONFIG_HASH
            valueFrom:
              fieldRef:
                fieldPath: metadata.annotations['configHash']


With these two pieces in mind, let’s create a SHA-256 hash of our ConfigMap.
Because this hash is deterministic (the same input == same output), the hash
only changes when we change our configuration: making this a step we can
unconditionally run as part of our deployment (CI/CD) pipeline into our
Kubernetes cluster.

Note that I’m using yq (a CLI tool for YAML docs, like jq is to JSON) to modify
our Deployment YAML at a specific path.

➜  yq w demo-deployment.yaml spec.template.metadata.annotations.configHash \
>  $(kubectl get cm/demo-config -oyaml | sha256sum | cut -d' ' -f1)
...
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        configHash: 4431f6d28fdf60c8140d28c42cde331a76269ac7a0e6af01d0de0fa8392c1145


We can now re-deploy our Deployment, and because our spec.template changed,
Kubernetes will detect it as a change and re-create our Pods.

As a bonus, if we want to make a shortcut for this during development/local
iteration, we can wrap this flow in a useful shell function:

# Invoke as hash-deploy-config deployment.yaml configHash myConfigMap
hash-deploy-config() {
  yq w $1 spec.template.metadata.annotations.$2 \
  $(kubectl get cm/$3 -oyaml | sha256sum | cut -d' ' -f1)
}


--------------------------------------------------------------------------------


© 2022 Matt Silverlock | His photo journal | Code snippets are MIT licensed |
Built with Jekyll