Hardscrabble 🍫

By Max Jacobson


Coconut Estate

January 22, 2020

As a software engineer who has spent the last 6-7ish years working full-time jobs and listening to podcasts, I’ve spent a meaningful amount of time day-dreaming about starting my own company. Here’s how I’d do it:

First I’d make a side project, as a nights and weekends thing. I’d charge for it, and people would buy it, and then I’d quit my day job. Maybe I’d do some freelance work in the early days to make ends meet. Maybe I’d do a podcast about the whole process and sell ads on it, to share some behind the scenes stories and get people interested. Eventually I’d rent an office in Brooklyn with a few desks and start hiring people.

In 2018 this day-dreaming bubbled over the pot. It was spring and I was in Rome, walking and looking at buildings and thinking. At some point, on a bench outside Vatican City, I had an idea. When I got back to New York, I spent the next seven months building a prototype. I rented a desk at a co-working space. I paid GitHub for private repos (this was before they were free). I bought a domain name. I made a twitter account. I chose a tech stack. I got a website online.

Eventually I gave up. I turned my focus back to my day job. I got into tennis and running. I found a new day job.

And I never wrote about any of it here until now.

So, while I still remember all this, here’s a bunch of information about what I did and how I did it and why I stopped.

the name

I never had a proper name for it, but the whole time I worked on it, I called it Coconut Estate. It’s a reference to my favorite song, which is about an obsessive zealot who destroys himself in his search for knowledge, and has no regrets. As a product name, it’s probably very bad, but there were times I thought about sticking with it. Maybe people will find it intriguing, I would think. I spent $5.94 on the domain coconutestate.top. I would joke that the other name I was considering, which comes from another mewithoutYou song, was Unbearably Sad. That would make Coconut Estate sound better by comparison. I liked that Coconut Estate sounded like a place that you enter and enjoy spending time in.

the product idea

Coconut Estate would be a website for roadmaps. You could sign up, and then share a roadmap, such as “roadmap to get good at tennis”. Others could follow that roadmap, and learn to get good at tennis. It would be kind of like a curriculum that you could annotate with whatever links, text, resources you want, for each step. I imagined it as a kind of recursive thing, so perhaps one of the steps would be “learn how to serve”, and you could drill down and that would be a whole roadmap unto itself.
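To make the recursive idea concrete, here's a hypothetical sketch in Rust (this is just the shape of the idea, not the actual data model I ended up with):

```rust
// A roadmap is a titled list of steps; each step is either a plain
// step with notes/links, or a drill-down into a whole nested roadmap.
struct Roadmap {
    title: String,
    steps: Vec<Step>,
}

enum Step {
    Plain { title: String, notes: Vec<String> },
    Nested(Roadmap),
}

// Counting every step recursively shows the drill-down structure.
fn total_steps(roadmap: &Roadmap) -> usize {
    roadmap
        .steps
        .iter()
        .map(|step| match step {
            Step::Plain { .. } => 1,
            Step::Nested(inner) => 1 + total_steps(inner),
        })
        .sum()
}

fn main() {
    let tennis = Roadmap {
        title: "roadmap to get good at tennis".to_string(),
        steps: vec![
            Step::Plain {
                title: "buy a racket".to_string(),
                notes: vec![],
            },
            Step::Nested(Roadmap {
                title: "learn how to serve".to_string(),
                steps: vec![Step::Plain {
                    title: "practice the ball toss".to_string(),
                    notes: vec![],
                }],
            }),
        ],
    };
    assert_eq!(total_steps(&tennis), 3);
    println!("{} has {} steps", tennis.title, total_steps(&tennis));
}
```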

This was inspired in part by a blog post, Roadmap for Learning Rails1, published about ten years ago by one Wyatt Greene, who I do not know. When I was starting to re-learn how to program in 2012 or so, I must have googled “ruby on rails roadmap”, and found it. It was so helpful. Web development was very complicated2. I didn’t know where to start. I felt overwhelmed. The blog post included a flow chart, which helped structure my learning. It told me: forget all that, start here. And that helped me relax.

I thought: I’m sure programming isn’t the only thing that’s complicated. I imagined a whole community flourishing, of people writing similar roadmaps out of the goodness of their hearts, about all kinds of topics. I imagined people’s lives changing as they self-improved by following roadmaps. Because of the thing I made.

I also imagined that having a really nice interface that made it super easy to build and explore these roadmaps would be irresistible, and more useful than a JPEG embedded in a blog post.

the product idea: phase two

The plan was to focus on getting that off the ground, and then to build out a second phase geared toward businesses. I imagined a model kind of like GitHub: free to use for your personal, public stuff, but you pay to use it at work. And maybe the fun, positive public side would make people feel good about the tool and want to bring it into their workplace.

Some of the business use-cases I was imagining:

  • a roadmap for how to onboard a new team member
  • a roadmap for how to onboard a new customer
  • a roadmap for how to offboard a team member, including all of the things that you need to revoke their access to
  • a roadmap for how to perform your team’s monthly security audit

Et cetera.3

If the public-facing option was to be basically “luxe wikihow”, the private-facing part was basically a checklist-oriented knowledge base. In fact, the other primary inspiration for this was The Checklist Manifesto, a book that I never actually read past the first chapter. As I understood it, it details how hospitals use checklists to ensure that they follow their processes every time, because the alternative inevitably leads to mistakes.

how I organized myself

I created a git repository called coconut-estate internal. I still have it. It looks like this:

.
├── PLAN.txt
├── README.md
├── TODOs
│   └── max.txt
├── TOLEARNS
│   └── max.txt
├── architecture
│   ├── dns.txt
│   ├── infrastructure.txt
│   ├── toolbox-design.txt
│   └── web-app-stack.txt
├── brainstorms
│   └── max
│       ├── 2018-04\ handbook\ product\ notes.txt
│       ├── 2018-04\ misc\ going\ indie\ thoughts.txt
│       ├── 2018-04\ security\ audit\ product\ notes.txt
│       ├── 2018-04-05\ roadmap\ product\ notes.txt
│       ├── 2018-05\ deploying\ to\ digital\ ocean\ thoughts.txt
│       ├── 2018-05-21\ private\ roadmaps\ I\ make\ for\ myself.txt
│       ├── 2018-06-03\ random\ thought.txt
│       ├── 2018-06-17\ women\ and\ email.txt
│       ├── 2018-07-29\ docker.txt
│       ├── 2018-08-10\ art.txt
│       ├── 2018-11-05\ perks.txt
│       ├── 2018-12-08\ dunbar.txt
│       ├── 2018-12-09\ more\ dunbar\ notes.txt
│       ├── 2018-12-11\ more\ dunbar\ thoughts.txt
│       └── 2019-04-07\ dunbar\ thought.txt
├── finance
│   └── expenses.rb
├── marketing
│   ├── competitive\ analysis.txt
│   ├── feedback
│   │   ├── 2018-04-05\ sarah.txt
│   │   ├── 2018-04-09\ nick.txt
│   │   ├── 2018-04-22\ gavin.txt
│   │   ├── 2018-05-01\ gordon.txt
│   │   ├── 2018-05-02\ dan.txt
│   │   └── 2018-07-19\ harsh.txt
│   ├── name-ideas.txt
│   ├── slogans.txt
│   └── strategy.txt
├── todo-list-to-revamp-terraform-and-digital-ocean.txt
└── work-logs
    └── max.txt

9 directories, 36 files

It was basically a junk drawer. A place to put thoughts related to the project. I’m very much a plaintext kind of person, and this suited me very well. Some of the files were meant to be living documents that I could keep up-to-date over time, while others were snapshots of specific conversations or brainstorms. I namespaced everything with my name in case anyone else joined in the future and also wanted to record their brainstorms or work logs there.

Here’s an example excerpt from brainstorms/max/2018-06-17\ women\ and\ email.txt, since that file name jumped out at me as I was just reviewing this:

Just yesterday I was chatting with three software engineers from Ellevest, a financial investing startup that caters primarily to women. I don’t think I’d ever thought to cater Coconut Estate primarily to women, but why not? It’s kinda like the “mobile first” of product design; it’s harder to make a website mobile-friendly if you start by making it desktop friendly, and maybe it’s true that it’s harder to make a product women-friendly if you start by catering to “everyone”.

I shared that thought with Sarah and she was like … I don’t think that’s a good analogy.

Huh! I’d forgotten all of that. I’m not sure how valuable that thought was, but if thoughts are like lightning bugs around you, the natural thing is for them to flicker off and disappear into the night. Having a hole-punched jar nearby encourages capturing those thoughts, some of which might be valuable.

The file I updated most often was work-logs/max.txt. It was directly inspired by https://brson.github.io/worklog.html, the “work log” of a Rust programmer that I had stumbled on at some point. There were times when I kept one at work, in a notebook at my desk, and found it helpful for remembering how I had spent my time and for keeping me somewhat accountable as I continued to spend it.

Here’s an excerpt:

2018-09-03

  • 11:39
  • At home, thinking about rewriting the front-end app in Elm. I’ve been following the progress of Level.app, an indie Slack competitor that Derrick Reimer is working on. He’s doing it as an OSS thing, which is neat. He’s using Elixir/Phoenix and Elm. My coworker Scott is also interested in Elm, and my former colleague Todd was also enthusiastic about it. The latest episode of Reimer’s podcast talks about upgrading to Elm 0.19 and how it improved some performance and asset size stuff. For me, the big appeal is the type safety and the “no runtime exceptions” promise. I started working my way through the guide yesterday, and this morning I watched the “Let’s be mainstream!” talk from Evan Czaplicki. The guide is mostly clicking with me. The other precipitating incident here is that at the day job, we’re currently considering adopting a more modern front-end tool like React or Vue, and I’m wondering if we should be considering Elm. I haven’t made much of a commitment to Ember.js at this point, and I haven’t found it to be extraordinarily intuitive, so I’m tempted to give Elm a shot.
  • 14:23
  • At Konditori on Washington.
  • Planning to keep working through the guide, with the goal of standing up a dockerized hello world web app today. Stretch goal 1: deploy it to prod. Stretch goal 2: have it load the roadmaps and display them on the page.
  • A few things that are appealing about elm so far:
    • a static type system
    • no nil, no runtime exceptions
    • fast
    • an opinionated linter (third-party but still)
    • A kind/entertaining leader (per the 1.5 talks I watched)
    • At least attempts to be beginner-friendly with its emphasis on guides, docs, and error messages
    • no npm
    • Tests feel less necessary
  • A few things that are unappealing about elm so far
    • learning curve on syntax
    • tooling is a little minimalist and it’s not super clear how I’m supposed to be using these
    • Releases happen but maybe not that often?
  • 21:01
  • Took a bunch of breaks but more recently got to the good stuff in the guide and feel like I have the general idea of how this thing works and I kind of want to try to push thru to prod.
  • 22:37
  • Shipped a basic sketch of a site layout to prod and opened a PR (https://github.com/maxjacobson/coconut-estate/pull/4) so it’ll be easy to pick up where I left off… Big things to figure out next:

    1. how to hook up graphql
    2. how to do asset fingerprinting
    3. how to use elm reactor with a spa (maybe won’t?)
  • Overall quite pleased.

Others were more spartan:

2018-09-10

  • 22:57
  • refactoring some elm
  • 23:53
  • I give up! Too complicated.

I’m really grateful that I took notes as I was working. For me, putting thoughts to words helps me know what my thoughts are. It also made me very aware of how much time I was putting into the project. This reminded me that I was taking the project seriously, but also helped keep me very aware of how slowly it was going, which grew discouraging over time.

I did eventually open source the code, but the internal repo is just for me.

the tech stack

Here’s an overview of some of the most important technologies I used:

  • Rust
  • Elm
  • Docker (for development)
  • Digital Ocean
  • Terraform
  • Let’s Encrypt
  • PostgreSQL
  • GraphQL
  • systemd

I was motivated by a few things:

  1. wanting to use things that I found interesting
  2. wanting to use things that seemed like they’d help me build something very sturdy and reliable
  3. wanting to use things that would keep costs down

Some things I wasn’t motivated by:

  1. Building quickly
  2. Keeping things simple

In retrospect, these were the wrong motivations, if my goal was to actually finish something. Which, nominally, it was.

how I organized the code

I made a single repo that had all the code in it. At work, we were having success with a monorepo, and it felt right. I made the top level of the repo into a Cargo workspace, which is Rust’s built-in monorepo-like concept. The idea is that you can have several sibling Rust projects that you can think of as a family. By the end, I had four members in my workspace:

  1. api
  2. secrets_keeper
  3. toolbox
  4. authorized_keys_generator
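Concretely, a Cargo workspace is just a top-level Cargo.toml that lists its members. Mine would have looked something like this (a sketch from memory, not the exact file):

```toml
# Top-level Cargo.toml: each member is a sibling Rust project
# in its own subdirectory, sharing one Cargo.lock and target/.
[workspace]
members = [
  "api",
  "secrets_keeper",
  "toolbox",
  "authorized_keys_generator",
]
```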

Additionally, alongside the Rust projects, I had two non-Rust codebases, each in its own subdirectory:

  1. website
  2. terraform

I’ll go through each of those in a bit of detail, providing some highlights of how and why they were built.

api

The primary back-end service was called api. I chose Rust because I found Rust interesting. I didn’t really know it, beyond the basics. I got much better at Rust while building this, although I still consider myself probably an advanced beginner at best. Most of what I’ve learned in my career I’ve learned from colleagues, either by copying what they did or soliciting their feedback. I’ve never had that with Rust, and so I really just have no idea if I’m doing anything right. But I got to a point where I felt somewhat productive, which was very gratifying, and several years in the making.

I used clap to define a CLI interface for the api program. This was mainly helpful because it gave me an easy way to define a few required command line flags that the api needs to boot. I particularly fell in love with clap while building this project, as it made it extremely easy to build a nice CLI program with subcommands and flags.

I used diesel as an ORM for interacting with the database. Diesel was created by Sean Griffin, whom I had listened to talk about building it on The Bike Shed for ages, and I was curious to try it. It’s excellent. I didn’t really stress test it, but everything I wanted to do, it had thoughtfully modeled within Rust’s type system.

I decided to build a GraphQL API rather than a RESTful one. It felt trendy at the time. Moreover, at work we were integrating with GitHub’s GraphQL API, and I didn’t understand how any of that integration code worked. Learning about GraphQL APIs by building a simple one was very helpful for me to learn the important concepts, and that made it so much more clear what our client code was doing at work. I used the juniper crate to define the schema, and used actix-web4 to serve the requests. GraphQL is really cool. I admire the web dev community for opening its heart to a perspective that entirely rejects REST. I’m not anti-REST, but I like for established ideas to be challenged. And I like that there is an option available for projects where it’s important to give the maximum flexibility to front-end developers.

The one other interesting thing about the api was how I went about doing authentication and authorization. I decided to do claims-based authorization using JSON Web Tokens (JWT). To be totally honest, I’m not sure that I totally figured out a nice, ergonomic way of working with JWT, although the jsonwebtokens crate was very easy to work with. The basic flow is:

  • User signs up or signs in via an API request
  • In response, the API renders a token, which encodes their claims, which look like:
    {
      "user_id": 3,
      "site_admin": false
    }
    
  • The front-end can then do two things:
    1. store the token in local storage to use to authenticate future requests
    2. actually deserialize the claims from the token, and use that to make decisions about what to try and render (for example, whether to show an admin-only widget on a page)
  • When the API receives subsequent requests to access data, it will inspect the provided token to determine:
    1. what claims are the requestor making about who they are?
    2. are the claims signed, meaning they were issued by the API itself?
    3. have the claims expired, meaning they were once issued by the API itself, but we want to reject the request and force them to re-authenticate? (I didn’t actually implement this part)

Overall this felt good. Building something with JWT was a great way to learn about it. For example, I learned that if an API you’re integrating with uses JWT, you can Base64-decode the three segments of the token and actually inspect the claims that have been serialized into it. You could even attempt to change the claims, re-encode them, and make a new token, but it shouldn’t work, because the signature will reveal that you’ve tampered with the claims.
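Since the signature is the only part that needs the secret, the claims themselves are plainly visible. Here’s a small std-only Rust sketch of that inspection trick (the header and “signature” below are fabricated for illustration; real signing needs HMAC-SHA256 via a crate):

```rust
// JWTs use unpadded, URL-safe base64. These are hand-rolled here just
// to avoid pulling in a crate for the demo.
fn base64url_encode(data: &[u8]) -> String {
    const ALPHABET: &[u8] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    let mut out = String::new();
    for chunk in data.chunks(3) {
        let mut buf = 0u32;
        for (i, &b) in chunk.iter().enumerate() {
            buf |= (b as u32) << (16 - 8 * i);
        }
        for i in 0..(chunk.len() * 8 + 5) / 6 {
            out.push(ALPHABET[((buf >> (18 - 6 * i)) & 0x3f) as usize] as char);
        }
    }
    out
}

fn base64url_decode(s: &str) -> Vec<u8> {
    const ALPHABET: &[u8] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    let (mut buf, mut bits, mut out) = (0u32, 0u32, Vec::new());
    for c in s.bytes() {
        let v = ALPHABET.iter().position(|&a| a == c).expect("bad char") as u32;
        buf = (buf << 6) | v;
        bits += 6;
        if bits >= 8 {
            bits -= 8;
            out.push((buf >> bits) as u8);
            buf &= (1 << bits) - 1;
        }
    }
    out
}

fn main() {
    let claims = r#"{"user_id":3,"site_admin":false}"#;
    // A JWT is three base64url segments joined by dots. The signature
    // here is a stand-in; only it requires the signing key.
    let token = format!(
        "{}.{}.{}",
        base64url_encode(br#"{"alg":"HS256"}"#),
        base64url_encode(claims.as_bytes()),
        "fake-signature"
    );
    // Anyone holding the token can read the middle segment:
    let payload = token.split('.').nth(1).unwrap();
    let recovered = String::from_utf8(base64url_decode(payload)).unwrap();
    assert_eq!(recovered, claims);
    println!("claims visible without the secret: {}", recovered);
}
```

The point being: nothing in the payload is secret; the signature is what makes tampering detectable.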

Perhaps I would have felt more comfortable with JWT over time.

secrets_keeper

I decided that I wanted to roll my own service for managing secrets: secrets_keeper. For example, api needs to know a PostgreSQL username and password to establish a database connection. It reads those credentials from the environment, but how do they get to the environment? I think I probably over-engineered my answer to that question.

secrets_keeper is a second API web service, designed to run behind a firewall. It has exactly two routes:

  1. GET /secrets
  2. POST /secrets

I built this one as a RESTful API using warp rather than GraphQL and actix-web. I was really just having fun poking around and taking my self-directed whistle-stop tour through the community. Of course, like api, it also uses clap to define its CLI.

Here’s how you use it:

  1. Create a new secret by making a POST /secrets request, with a body that looks like:
    {
      "group": "api",
      "name": "POSTGRES_USERNAME",
      "value": "coconutpg"
    }
    
  2. Before starting up the api service, fetch the secrets for the api group by requesting GET /secrets?group=api, and then export all of the secrets you get back into your environment

Internally, it just stores the secrets in plaintext files on the filesystem. There’s absolutely no authorization built into secrets_keeper, so it’s very important that it run behind a firewall which will be responsible for making sure that only authorized personnel can read and write secrets.
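In spirit, that storage layer is just a few lines. Here’s an illustrative sketch (not the actual secrets_keeper code; the password value is made up):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// One plaintext file per group, one NAME=value line per secret.
fn write_secret(dir: &Path, group: &str, name: &str, value: &str) -> std::io::Result<()> {
    let mut file = fs::OpenOptions::new()
        .create(true)
        .append(true)
        .open(dir.join(group))?;
    writeln!(file, "{}={}", name, value)
}

// Return the whole group file, ready to export as environment variables.
fn read_secrets(dir: &Path, group: &str) -> std::io::Result<String> {
    fs::read_to_string(dir.join(group))
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("secrets-keeper-sketch");
    fs::create_dir_all(&dir)?;
    let _ = fs::remove_file(dir.join("api")); // start fresh each run
    write_secret(&dir, "api", "POSTGRES_USERNAME", "coconutpg")?;
    write_secret(&dir, "api", "POSTGRES_PASSWORD", "hunter2")?;
    let secrets = read_secrets(&dir, "api")?;
    assert_eq!(secrets, "POSTGRES_USERNAME=coconutpg\nPOSTGRES_PASSWORD=hunter2\n");
    print!("{}", secrets);
    Ok(())
}
```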

toolbox

At the beginning, I wanted to build all of the ops-related scripting in Rust, so I created toolbox. I thought this would help me keep my Rust skills sharp. I gradually let that go and shifted to doing lots of scripting in plain-old shell scripting. But this exists.

Of course, it uses clap to define its CLI interface. Here’s what it looks like to use it:

$ bin/toolbox secrets write --help
toolbox-secrets-write
Write secrets to the secrets keeper service

USAGE:
    toolbox secrets write [OPTIONS] <VARIABLE> <VALUE> --group <GROUP>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -e, --env <ENVIRONMENT>    Specifies which environment to target
    -g, --group <GROUP>        Which group of secrets does this secret belong to?

ARGS:
    <VARIABLE>    The environment variable name for the new secret
    <VALUE>       The value of the new secret

clap takes care of generating that help text based on the registered subcommands, which is pretty neat.

The main responsibility that lingered in toolbox was this command for registering a new secret with the production secrets_keeper. It would have been much simpler and lighter-weight to make a shell script that uses curl to make the POST request.

Because the secrets keeper is not publicly available, there’s some extra setup you need in order to make this request from your local environment. You know what, let’s talk about it.

We’re going to illustrate the bastion host pattern.

The idea is that there are three computers involved:

*************
* my laptop *
*************
    ||
    \/
***********
* bastion *
***********
    ||
    \/
***********
* secrets *
* keeper  *
***********

In words: The networking is configured so that the only computer that is allowed to make requests to the secrets keeper is the bastion server. Additionally, the only computer that is allowed to make SSH connections to the secrets keeper server is the bastion server. The bastion and the secrets server have their authorized keys defined to only allow known administrators to SSH into them.

That means a few things:

  1. if you were to SSH onto the bastion server, you could then SSH into the secrets keeper server, but if you were to try to SSH directly into the secrets keeper server, it would be like it didn’t even exist.
  2. if you were to SSH onto the bastion server, you would be able to use curl to read and write secrets, but if you try from your laptop it will be like the site doesn’t even exist.

My laptop has an SSH key on it that identifies me as a known administrator. So it would be cool if I could make a request from my laptop that tunnels all the way through the bastion server to the secrets server, and allows the response to tunnel all the way back.

Well, we can.

There’s an ssh command you can run (ssh -L, “local port forwarding”) which creates an “ssh tunnel”. While the tunnel remains open, you can make network requests to a specified port on localhost, and ssh will take care of “forwarding” the request through the bastion server.

The bastion pattern is something I had learned at work, but building my own implementation of it helped reinforce why it is valuable and how it works.

authorized_keys_generator

The final Rust service in my workspace was called authorized_keys_generator. In the last section, we talked a bit about how we use an authorized keys file to govern who can SSH into the production infrastructure. We could manually generate that file, but I felt the need to automate it. This was inspired by codeclimate/popeye, a tool that generates an authorized keys file based on keys registered with AWS.

At one point, at Code Climate, we talked about instead pulling this from GitHub, which exposes each user’s public keys at, e.g., https://github.com/maxjacobson.keys. I thought I’d try building a simple version of that concept for my project, so I made a simple CLI (again, using clap), that lets you do:

$ authorized_keys_generator --usernames maxjacobson dhh

And it would print out:

# authorized keys generated from authorized_keys_generator

# @maxjacobson
<first key here>

# @maxjacobson
<second key here>

# @dhh
<another key here>

You could take that text and use it as an authorized keys file on a server. I imagined later on I might build in support for you to provide a GitHub org and team, and it would then take care of looking up the users in that team, but it didn’t come to that.
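The formatting step is the easy part. Here’s a sketch of just that piece (the real tool also had to fetch each user’s keys over HTTP; the keys below are placeholders):

```rust
// Given each username and their public keys (in reality fetched from
// https://github.com/<username>.keys), render an authorized_keys file
// with attribution comments.
fn render(users: &[(&str, &[&str])]) -> String {
    let mut out = String::from("# authorized keys generated from authorized_keys_generator\n");
    for (username, keys) in users {
        for key in *keys {
            out.push_str(&format!("\n# @{}\n{}\n", username, key));
        }
    }
    out
}

fn main() {
    let output = render(&[
        ("maxjacobson", &["ssh-ed25519 AAAA... max-laptop"][..]),
        ("dhh", &["ssh-rsa AAAA... dhh-laptop"][..]),
    ]);
    assert!(output.starts_with("# authorized keys generated"));
    assert!(output.contains("# @maxjacobson\nssh-ed25519"));
    assert!(output.contains("# @dhh\n"));
    print!("{}", output);
}
```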

website

The other significant directory in my monorepo with application code was website, which represented the front-end of the website. I used create-elm-app to scaffold a “hello world” Elm app, and boogied from there.

I had really never used Elm before, and it was kind of a lark that very quickly started to feel very right. At the time, all of my front-end experience was with JavaScript and jQuery. This was my first exposure to:

  1. a functional programming language
  2. a front-end web app framework
  3. the declarative UI pattern

I had been planning to use Ember.js, which I’d been meaning to try for years. I still do want to try it some day, I think. Elm was just something I was curious to read about, and I got ensnared pretty quickly. The Elm guide is just very good: Very friendly and persuasive and not very long. It feels like you can learn it in an afternoon, and you kind of can, if you’re in the right mood.

Over time, I did sour a little bit on Elm, but I mostly blame myself. I don’t think I designed my system very well to scale to support many pages and many actions. I would have appreciated more guidance from the guide on how I’m supposed to do that, I suppose.

I found that it was very easy to build something that was:

  1. very fast
  2. very reliable (they promise no runtime errors and the hype is real)
  3. very easy to refactor, knowing the compiler will help you along

I definitely wrote a lot of awkward Elm code. I never fully mastered how to use the various functional programming combinators that keep your code natural and streamlined.

The relationship between Elm and JavaScript frequently made my brain melt a little bit. Part of the idea is that Elm code has no side effects. Your program has an entry point, it receives some parameters, and depending on what those parameters are, it resolves to some value, and that’s it. You don’t use it to actually do anything; you just take in some parameters and use them to deterministically produce a single value.

Here are some things you can’t “do”:

  • You can’t make a network request.
  • You can’t look up the current time.
  • You can’t write to local storage.

Here’s what you can do:

You can define these 3-ish entrypoints, each of which answers a single question:

  1. init – what is the initial state of the application?
  2. update – if, hypothetically, someone were to interact with the application, how might that change the state of the application?
  3. view – what does the application look like, based on the current state of the application?

Each of these is totally pure and has no side effects. But like, of course, we… do want to have side effects. So while you can’t “do” those things in Elm, you can use Elm to do those things. Here’s what I mean: The Elm code compiles to JavaScript, and gets run as JavaScript in a browser. And JavaScript can do whatever it wants. So when you want to write some Elm code that has a side effect, it’s this sort of coy two-step process:

  1. whisper a “command” into the air, for example: would someone please make this network request?
  2. describe what should happen if someone did perform that command

This much is all outlined in the Elm guide, and grows to feel natural over time. Kind of.
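For readers who haven’t seen Elm, here’s the shape of those entrypoints and the command two-step, sketched in Rust types (a hypothetical roadmap-fetching fragment; Elm’s real API differs, and view would return HTML rather than a string):

```rust
#[derive(Debug)]
struct Model {
    roadmaps: Vec<String>,
    loading: bool,
}

enum Msg {
    FetchRoadmaps,
    RoadmapsLoaded(Vec<String>),
}

// The "whisper a command into the air" part: update never performs the
// request itself, it just returns a description of it. The runtime (in
// Elm's case, the compiled JavaScript) performs it and sends a Msg back.
#[derive(Debug, PartialEq)]
enum Cmd {
    None,
    HttpGet { url: String },
}

// 1. What is the initial state of the application?
fn init() -> (Model, Cmd) {
    (Model { roadmaps: vec![], loading: false }, Cmd::None)
}

// 2. If someone interacted with the app, how would the state change?
fn update(msg: Msg, model: Model) -> (Model, Cmd) {
    match msg {
        Msg::FetchRoadmaps => (
            Model { loading: true, ..model },
            Cmd::HttpGet { url: "/graphql".to_string() },
        ),
        Msg::RoadmapsLoaded(roadmaps) => (Model { roadmaps, loading: false }, Cmd::None),
    }
}

// 3. What does the app look like, given the current state?
fn view(model: &Model) -> String {
    if model.loading {
        "loading...".to_string()
    } else {
        format!("{} roadmaps", model.roadmaps.len())
    }
}

fn main() {
    let (model, _) = init();
    let (model, cmd) = update(Msg::FetchRoadmaps, model);
    assert_eq!(cmd, Cmd::HttpGet { url: "/graphql".to_string() });
    assert_eq!(view(&model), "loading...");
    let (model, _) = update(Msg::RoadmapsLoaded(vec!["tennis".to_string()]), model);
    assert_eq!(view(&model), "1 roadmaps");
    println!("ok");
}
```

All three functions are pure: the only way anything actually happens is the runtime interpreting the returned Cmd.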

One major asset of the Elm community is its very active Slack channel. When I was getting stuck, I found myself spending time there lurking and occasionally asking questions.

One thing about Elm that strikes me as a liability is the paucity of releases. I think that Rust has the right idea with the release train model, which helps them achieve the goal of “stability without stagnation”. Elm sometimes goes over a year without any release at all of the core compiler. That doesn’t tell the whole story: there is activity, in that time, in the package ecosystem, including in the core packages (in theory). But the project keeps a deliberately slow, thoughtful pace that strikes me as a bit discomfiting.

I genuinely love to see the areas of focus that the Elm compiler developers choose. In their 2019 release, 0.19.1, they focused mostly on improving their already-good compiler error messages. That’s excellent. Given that they’re taking the tack of keeping the language small and slow-moving, I admire that they’re polishing that bauble as much as they can, which should help new people get into it even more easily.

After I got out over my skis with Elm, we started using React at work, and I found that the concept of a declarative UI framework with an internal state that you can update based on user interaction was oddly familiar. React was all new to me, but it went down fairly easy, and I credit my experience with Elm. And I did find myself missing the types.

terraform

The final significant codebase in the monorepo was terraform. This directory contained the definitions of all of the components of my infrastructure. I wanted to follow the “infrastructure as code” trend here, using HashiCorp terraform because we were using it at work and I barely understood any of its core concepts. Actually being responsible for setting it up from scratch was hugely valuable for me to learn terraform fundamentals.

Here’s the idea for the not-yet-inducted who may be reading this:

Let’s say you want to deploy your API service to the cloud, and you choose Digital Ocean as your vendor. You’ll need to create a number of resources within Digital Ocean: droplets, DNS records, firewalls, and so on.

One way that you can create those is to log in to the website and click some buttons to create them for you. That works well, but it’s up to you to remember which buttons you pressed, which is not easy.

Another option is to write some scripts that interact with Digital Ocean’s API, creating all of the resources that you need. That’s also pretty good, but what if you want to change just one thing about a resource? For example, Digital Ocean droplets can have tags. Let’s say you decide later on that you want to add a new tag to a droplet. If your script is set up only to create the resource, it won’t easily be adapted to update one. Or, what if you want to delete some resources? That’s a whole new set of scripts.

This is all where terraform comes in. You don’t write any scripts to do anything, at all. Instead you describe the resources that you want to exist, with the qualities that you want them to have. Then you use the terraform CLI to apply those descriptions. The CLI will figure out what it needs to create, update, or delete. It just sort of figures it out. It’s excellent.
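To make that concrete, here’s roughly what a couple of those descriptions look like (hypothetical resource names and values, not from my actual config):

```hcl
# Describe a droplet and a DNS record pointing at it; terraform
# figures out whether to create, update, or delete to match.
resource "digitalocean_droplet" "api" {
  name   = "api"
  image  = "ubuntu-18-04-x64"
  region = "nyc1"
  size   = "s-1vcpu-1gb"
  tags   = ["api"]
}

resource "digitalocean_record" "www" {
  domain = "coconutestate.top"
  type   = "A"
  name   = "www"
  value  = "${digitalocean_droplet.api.ipv4_address}"
}
```

Adding a tag later is just editing the tags list and running terraform apply.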

I really enjoyed using Digital Ocean with Terraform. I was using AWS at work at the time, which was my first experience with a big cloud vendor. It was very helpful for me to see what was in common and what was different between AWS and Digital Ocean. It gave me an appreciation for the staggeringly vast number of services that AWS offers. Digital Ocean is pretty quickly launching new things to close the gap, like Kubernetes support, managed databases, generating SSL certificates for your load balancers… I like the idea of using smaller players sometimes, and I’d be happy to try Digital Ocean again for something else.

deploys

All right, so that’s my tour through the code. The last section, about terraform, is a great segue into another topic I’d like to talk about: deploys. Sigh. This part is hard. I did not do anything particularly elegant here.

It’s possible to use terraform to help you deploy changes to your application code, but it requires a lot of cleverness. I didn’t do that.

Instead, I had terraform be responsible for:

  • creating servers
  • creating SSL certificates
  • registering systemd units that define a service that ought to run, such as the api service
  • installing a dummy script that stands in for the application code (it doesn’t actually do anything)

Then, I wrote some manual deploy scripts which take care of:

  1. building the new version of the code
  2. uploading the new build to the production service
  3. stopping the service in production
  4. moving the new build into place
  5. starting the service in production

Here’s an example:

#!/bin/sh

set -ex

bin/prod/build api

bin/scp deploy-artifacts/api \
  www.coconutestate.top:/mnt/website/binary/api-new

bin/ssh www.coconutestate.top sudo systemctl stop api.service

bin/ssh www.coconutestate.top mv /mnt/website/binary/api-new /mnt/website/binary/api

bin/ssh www.coconutestate.top sudo systemctl restart api.service

As you can see, all of my helper scripts tend to reference other helper scripts. It’s a kind of rat’s nest of helper scripts.

The downside of this approach is that it incurs a moment of downtime. I tried to minimize it by uploading the new build before swapping it into place. It didn’t really matter, because no one was using the website at any point, so I was happy to compromise on uptime.

types

This project was definitely the zenith of my interest in types:

  • all of the data in the database hews to the types defined in the database’s schema
  • as data is loaded into memory in the Rust code, it is always deserialized into Rust types
  • when requests are made to the API, the request must adhere to the types defined in the GraphQL schema
  • when the API responds to requests, the responses must adhere to the types defined in the GraphQL schema
  • when the Elm front-end wants to talk to the API, it uses Elm types that represent the GraphQL schema’s queries
  • then of course the Elm front-end deserializes the responses into memory in the browser, using Elm types that represent the expected structure of the responses

Frankly, it was a lot.

It led to quite a lot of boilerplate, modeling all of those types. And, of course, the more boilerplate the more opportunity for human error. And, more pressingly, human boredom.

cheapskate ci

So, did I write any tests? Not really. I felt like the types were keeping me in check enough.

I did use some code formatters:

I thought it would be cool to add some CI that made sure all of my code was well-formatted before I could merge anything into my default branch. One challenge was that I had kept my repo private, and the various CI providers all required you to pay to run builds for your private repo. As if to demonstrate the fact that I was more motivated by dorking around and learning things than actually launching a product, I took a detour to build something new:

cheapskate-ci.

Here’s the idea: instead of having a .circleci/config.yml in your codebase, add a cheapskate-ci.toml. Mine looked like:

[ci]
steps = [
  "docker-compose run --rm build cargo fmt --all -- --check",
  "terraform fmt -write=false -check=true -list=true -diff=true terraform",
  "docker-compose run --rm build cargo check --quiet --all",
  "elm-format --validate website",
  "docker-compose run --rm website elm-app test",
]

[github]
repo = "maxjacobson/coconut-estate"

Then you can run cheapskate-ci run --status to run those steps and report a pass/fail status to GitHub for the latest commit. From there, you can make that status a required status on GitHub. All taken care of, for free. You know, for cheapskates, like me.
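If you’re curious, the core of the idea is not much more than a loop. Here’s a hand-rolled sketch in shell — the steps are stand-ins rather than the real ones from the toml file, and the call to GitHub’s commit status API is shown commented out, just for shape:

```shell
#!/bin/sh

# Run each configured step; if any fails, the overall state is failure.
# A real run would read these from cheapskate-ci.toml.
state=success
for step in "true" "echo formatting looks good"; do
  sh -c "$step" || state=failure
done

echo "state: $state"

# Reporting to GitHub would look roughly like this (needs a token):
# sha=$(git rev-parse HEAD)
# curl -X POST -H "Authorization: token $GITHUB_TOKEN" \
#   -d "{\"state\":\"$state\",\"context\":\"cheapskate-ci\"}" \
#   "https://api.github.com/repos/maxjacobson/coconut-estate/statuses/$sha"
```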

It reminded me of my first programming job, when we didn’t have CI; we just had an honor system. You open a pull request. You get an approval. You run the tests locally, and if they pass, you merge the pull request. Looking back, that feels so hard to believe.

(We got CI eventually)

Oh, and also…

It will prompt you for a GitHub token so that it can create a commit status on your behalf. I actually made another library, called psst, which cheapskate-ci uses to prompt you for info and persist the provided value for later use. I was really into that kind of thing at the time.

what was rewarding about all this

I’ve tried to sprinkle in some things that I found rewarding about this process. I’ll summarize and sneak in a couple more:

  1. Working independently meant that I was exposed to a lot of technical work that others were taking care of at work, or which had been set up long before I joined and no one needed to think about anymore. This gave me opportunities to learn things that I otherwise wouldn’t have, and put me in a better position to debug and resolve production issues, particularly networking issues.
  2. Working independently meant that I could make every technical decision, and make a lot of mistakes, in a low stakes environment. Sometimes following my curiosity worked out great and gave me some confidence in my technical instincts. Other times it was a disaster, which was genuinely humbling and helped clarify where I have room to grow.
  3. Playing product manager was so fun. Thinking about what to build is so fun. Talking to people about your idea and getting their feedback is so fun. Brainstorming how you might market your thing is actually so fun.
    • Oh: thank you very much to everyone who gave me feedback on this at any point, or listened to me talk about it. It meant a lot and I thought it was fun.

what was discouraging

  1. Actually finding time to work on it and feeling like progress is so slow is kind of a bummer
  2. The valleys you go through, when you doubt your idea and worry that all the effort you’ve put in was wasted, are a bummer
  3. As you do more market research and start finding that there are other things out there that are similar to your idea, and you start feeling like a bit of a fraud, it’s kind of a bummer
  4. I thought it would be more fun to join a coworking space, but working there on nights and weekends, it was always empty and a little lonesome

what I learned about myself

When you work on a team, you can learn about how you work on a team. On a team, hopefully, your team supports you, and provides you feedback to keep you on track. On your own, if you start to drift out of your lane, you’re going to just keep drifting. On a team, if your energy flags, your teammates can pick up the slack and the project keeps moving forward. On your own, it just … kind of … stops.

I think that everyone will drift in a different direction and probably stop somewhere interesting. For myself, I learned that when my natural instincts are unchecked, I’m inclined to fuss over the code and try to get things just right, and I’m completely unmotivated by actually delivering completed products to other human beings at any point.

Good to know!

why I stopped working on this

Oh, good question.

My enthusiasm petered out. It was very, very slow going. The technical choices I made were optimized more for my own curiosity and intellectual pleasure and less for actually being scrappy and shipping something. Ultimately, for all the work I did, I built very, very few features, bordering on none at all.

What I actually built:

  1. users can sign up, sign out, sign in
  2. signed in users can create a roadmap, providing a name
  3. signed in users can see the list of roadmap names

That’s it!

Lol.

I believe that the backend architecture was fairly extensible and could easily have grown to support more functionality.

The front-end, however, kind of stalled out. I hit the limits of my Elm knowledge and wasn’t able to keep extending it easily. I needed to refactor it somehow but didn’t know how. Probably would have been surmountable but I ran out of steam.

The ops was a pain. I made it too complicated, and doing things like renewing SSL certificates was tedious and required a very precise sequence of steps. I should’ve just put it on Heroku.

Thinking about the possibility of launching it, I imagined it flopping. I imagined no one actually signing up for it, or people coming by and finding it a ghost town with no roadmaps on it. I thought about how much grit I’d need to keep promoting it, and I felt very tired. At some point I found myself asking myself, how is this better than WikiHow again?

Around that time, two other things were going on:

  1. I had a different idea that was new and shiny which I got more excited about, spent a while brainstorming and day-dreaming about that idea, and then realized that a bunch of apps were already doing it, and none of them seemed that cool
  2. I got a mortgage and bought an apartment (another thing I haven’t written about here) and in some ways I felt like I could do one or the other: buy an apartment or go independent. I’m not sure if that’s actually true, but it’s how I felt.

And so, I basically just made peace with moving on. I felt like I got a ton out of the project. Maybe one day I’ll try something else. I’m still listening to the podcasts.

  1. Since, apparently, deleted. Thank goodness for the Internet Archive. 

  2. I can’t imagine how much more overwhelming it must seem now. 

  3. Et cetera. Et cetera! 

  4. Although I haven’t worked on this project in ages, I was sad to recently learn that the maintainer of actix-web got burnt out on negative feedback and walked away from the project. I found actix-web very easy to work with in large part because of the effort he put in to providing examples. Hopefully he knows his work was appreciated by a lot of people. As I’m writing this, I’m seeing that the project is actually going to carry on under a new maintainer, who somehow saw that happen and thought “sign me up”. 

tree, but respecting your gitignore

October 30, 2019

Whenever I’m setting up a new computer, there’s a bunch of CLI programs I tend to install. Things like:

  • git
  • jq
  • tree

I actually have a list of them in my dotfiles. Honestly, there aren’t that many.

One that I like is: tree.

It’s good. It prints out the files in a visual way that is pretty easy to look at.

The only problem with it is that it doesn’t respect the .gitignore file. That means that it will list all of the files, even the ones I don’t actually care about, like logs or temporary files.

Honestly, that’s fair. The tree program predates git by at least a decade.

After git became the thing that everyone uses there came a new generation of tools that are git-aware. I’m thinking of things like rg, the grep alternative. It came of age in a git world, and by default it respects your .gitignore. That’s nice!

Should I wait for someone to make a second generation tree, written in Rust? Should I make a second generation tree, written in Rust?

Well, I could. But I don’t really have to. As of 1.8.0, tree has a kind of magical, unintuitively named option: --fromfile. Let’s see how it works:

Let’s say you have this file, animals.txt:

animals/dogs/chihuahua.txt
animals/dogs/terrier.txt
animals/amphibians/frogs.txt

And then you run:

$ tree --fromfile animals.txt

It will print:

animals.txt
└── animals
    ├── amphibians
    │   └── frogs.txt
    └── dogs
        ├── chihuahua.txt
        └── terrier.txt

3 directories, 3 files

Look at that! It’s printing out a lovely tree based on some arbitrary input. Those files don’t even exist, I just made them up and wrote them in a list.

In this context, the flag name “from file” kind of makes sense. Instead of loading the list of files from the actual file system, it loads the list of files from a file. But you don’t actually need to write your list to a file at all. Instead, you can just pipe the list into the tree command, like so:

$ echo "animals/dogs/chihuahua.txt
animals/dogs/terrier.txt
animals/amphibians/frogs.txt" | tree --fromfile

And that works just the same. Hmm. Even though there’s no file. Hmm.

Now that we can print a tree from some arbitrary input, we can think of it as a building block. If we can gather a list of the files that we want to print, we can now ask tree to print them as a tree, and it will.

So now the problem we need to solve is: how do we gather the list of files in a directory, while respecting the .gitignore?

The simplest thing to do is:

$ git ls-files

This will ask git to print out all of the files which it is tracking all in a big long list. That’s not bad. Let’s take it for a spin:

$ git ls-files | tree --fromfile

It works pretty well!

Here’s the one problem: what about new files, which will be tracked by git but just haven’t been committed yet?
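For what it’s worth, git can solve that first problem on its own: --others adds untracked files, and --exclude-standard keeps the ignored ones out. A sketch, using a throwaway repo:

```shell
#!/bin/sh
set -e

# Build a throwaway repo with one tracked, one untracked, and one
# ignored file
repo=$(mktemp -d)
cd "$repo"
git init --quiet
echo "log/" > .gitignore
echo "hi" > committed.txt
git add .gitignore committed.txt
git -c user.email=me@example.com -c user.name=me commit --quiet -m 'init'
echo "new" > untracked.txt
mkdir log
echo "x" > log/debug.log

# --cached: tracked files; --others: untracked files;
# --exclude-standard: honor .gitignore
git ls-files --cached --others --exclude-standard
```

Pipe that into tree --fromfile and untracked files show up too.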

Here’s another problem: what if you’re not in a git repository? I mean, I usually am, but not always.

So here’s what I’m doing: using yet another tool. We’ve been talking about tree, which seems to lack a second generation alternative. Let’s turn our attention to find, which appears to date back to the 70s. find is very good. It finds and lists files. It has options for doing things like filtering them. Not bad. Let’s try combining that with tree --fromfile:

$ find . -type f -name "*ruby*" | tree --fromfile

In the source code for this blog, that prints out:

.
└── .
    ├── _posts
    │   ├── 2014-06-23-whoa-rubys-alias-is-weirder-than-i-realized.md
    │   ├── 2014-10-19-ruby-build-chruby-and-yosemite.md
    │   ├── 2015-03-29-ruby-keyword-arguments-arent-obvious.md
    │   ├── 2015-06-02-gemfiles-are-ruby-files.md
    │   ├── 2015-11-09-how-to-tell-ruby-how-to-compare-numbers-to-your-object.md
    │   ├── 2015-12-14-how-to-make-a-progress-bar-in-ruby.md
    │   └── 2017-07-02-there-are-no-rules-in-ruby.md
    └── talks
        └── there-are-no-rules-in-ruby
            └── slides
                └── assets
                    └── ruby.svg

6 directories, 8 files

So now we can use tree to find the structure of our repositories, filtered down to just certain files.

That filtered down to files with “ruby” in the filename.

What if we want to filter down to files where “ruby” is in the text of the file? Perhaps like so:

$ rg --files-with-matches ruby | tree --fromfile

This is actually super useful. You might wonder something like “what are all of the places in my large code base that reference this class/module that I want to change?” You could find that out, and see it visually, like so:

$ rg --files-with-matches "Url2png" --type ruby | tree --fromfile

And see:

.
└── app
    ├── facades
    │   └── template.rb
    ├── models
    │   ├── layout.rb
    │   └── site.rb
    └── services
        └── url2png.rb

4 directories, 4 files

Now I feel more confident that it won’t be too painful to touch that.

But, let’s get back to find. Given that find is so old, it has no idea about git. But, similar to rg, there’s a git-aware modern version with a two letter name: fd.

Coincidentally, but not surprisingly, both rg and fd are written in Rust.

With fd, we can find the list of files, and it will:

  1. respect the .gitignore
  2. include files that are not yet tracked by git

You just run it like this:

$ fd --type f --hidden --exclude .git

Some notes:

  • We use --type f so that it will print out files (by default it will also print out a line for each directory, which I personally do not care about)
  • We use --hidden so that it will include dotfiles
  • We use --exclude .git so that it will not print out the contents of the hidden directory that git uses to keep track of all of its data.

So, bringing it all together, a tree that respects your .gitignore:

$ fd --type f --hidden --exclude .git | tree --fromfile

Done!

Well, almost done. I am very happy with this behavior, but it’s obviously a chore to invoke. I’m not going to type all that out all the time. I already forgot the various flags. So what to do?

To be honest, I’m not sure what’s best. What I’ve done is this: in my login shell configuration, I’ve defined a function:

treeeee() {
  fd --type f --hidden --exclude .git | tree --fromfile
}

And so now, sometimes, I’ll type treeeee (I type treee and then hit tab because I can’t remember how many es there are). Not my best-named thing, but hey, what can you do.

I've started initializing git repositories in the weirdest places

November 26, 2018

Earlier tonight I caught myself doing this:

cd ~/Downloads
git init
git commit --allow-empty --message ':sunrise:'

Then adding this to a README.md file:

This is my downloads folder

Then adding this to a .gitignore file:

*
!.gitignore
!README.md

And then running:

git add -A
git commit -m 'Add a repository for my downloads folder'

And while it felt extraordinarily natural, I laughed imagining a stranger observing this little ritual and wondering what the hell I was doing.

So what the hell was I doing?

I’ll start with the particulars of what it was and then dig into the why.

What?

  1. I initialized a new git repository in my downloads folder
  2. I added a sunrise commit
  3. I added a README file explaining the purpose of the repository
  4. I added a gitignore file which tells git to ignore everything, except for itself and the README
  5. I committed those two files
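You can check that the whitelist-style .gitignore behaves as intended with git check-ignore; a quick sketch in a scratch directory:

```shell
#!/bin/sh

# Recreate the downloads-folder setup in a scratch directory
dir=$(mktemp -d)
cd "$dir"
git init --quiet
printf '*\n!.gitignore\n!README.md\n' > .gitignore
echo "This is my downloads folder" > README.md
touch some-download.zip

# check-ignore exits 0 (and prints the path) for ignored files,
# and exits 1 for files git would happily track
git check-ignore some-download.zip
git check-ignore README.md || echo "README.md is not ignored"
```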

Why?

I don’t plan to have any code in this repository. I don’t plan to push this repository anywhere. I might not ever change that README to say anything else, or commit any other files. What’s the point?

I’m going to say some things which are willfully naive and I want you to come with me on this little journey: what does it mean for a folder to be the downloads folder? I suppose it means that one downloads files there. I suppose most browsers will put files there when you use the browser to download them. I suppose other applications, like Slack, might do the same. I suppose as the owner of the downloads folder, it’s my responsibility to tend the folder, perhaps to periodically delete the files there? Or maybe I don’t care and I’ll just let them pile up?

You know this and I know this, but we weren’t born with this knowledge. We scraped together this knowledge through years of trial and error. We’re unlikely to forget these things. We know how to use computers, especially our own ones.

But our computers have so many directories on them and it can be challenging to organize them all in a way that makes sense. And it can be hard to remember the decisions we made about how to organize them when we made those decisions a few years in the past.

For a few years, I’ve been making little repositories like this one all over my computers. The READMEs will include little notes to self about:

  • what is the purpose of this folder?
  • how do I use it?
  • where should I put things?
  • what’s the thinking behind those decisions?

The git log helps me remember how long I’ve been following whatever system I’m currently following (or neglecting to follow), which I’m not very proud to admit is something I find interesting.

I’ve been doing this specifically for most of the top-level directories in my Dropbox (things like src and writing and Documents) and frankly I enjoy very much stumbling on things like this:

cd ~/Dropbox\ \(Personal\)/Documents
git show

And seeing:

commit eeb9f6b2e24179a7995b4e6b52995270e8cec759
Author: Max Jacobson <max@hardscrabble.net>
Date:   Sun Aug 28 23:38:13 2016 -0400

    init with readme

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..edb6eed
--- /dev/null
+++ b/README.md
@@ -0,0 +1,7 @@
+# Documents
+
+I copied this stuff over from my ~/Documents on my macbook pro
+
+wtf is all this shit?
+
+I'd like to find a proper home for it in my dropbox...

And, uh: I haven’t!

Feel free to be like me.

Commit your skeletons right away

November 26, 2018

I was just writing a post about habits around starting new git repositories and there was one additional thought that isn’t quite related but which I also want to say and so now I’m really blogging tonight and coming back to you with a second post.

Please commit your skeletons right away.

Imagine you’re making a new rails app, and you use the rails command to generate a skeleton for your new application:

rails new the_facebook
cd the_facebook
git status
On branch master

No commits yet

Untracked files:
  (use "git add <file>..." to include in what will be committed)

        .gitignore
        .ruby-version
        Gemfile
        Gemfile.lock
        README.md
        Rakefile
        app/
        bin/
        config.ru
        config/
        db/
        lib/
        log/
        package.json
        public/
        storage/
        test/
        tmp/
        vendor/

nothing added to commit but untracked files present (use "git add" to track)

This command will generate a whole bunch of files. It will also initialize a new git repository. But it doesn’t commit those new files for you.

I urge you: please commit them right away (after your sunrise commit).

Why? Because these files were all generated for you by a computer, and the computer deserves credit. Kind of. Really, it’s because you’re about to make a bunch of changes to those files and you’re also about to forget which of those lines you wrote and which of those lines the computer wrote. You’re going to be maintaining this code base for the rest of your life. I mean, maybe. It’s really helpful to look at git blame and figure out who wrote which lines and why and in my opinion it can actually be helpful to have that context all the way down to the very beginning.

The same is true of cargo new for rust people and jekyll new for blogger people. The ember new command is a welcome exception: it commits its skeleton and throws in some cute ASCII art for free.

If you already didn’t do this with your repository, rest assured that it doesn’t really matter, it’s just kind of nice.

Sunrise commits

November 26, 2018

I’ve picked up the habit from some people I’ve worked with that whenever I create a new repository, I make an initial empty commit that has the commit message :sunrise: and no changes in it. I thought it might be helpful to jot down some context on why I do that, or at least why I think I do that.

Starting new repositories

When you initialize a new git repository, it doesn’t yet have any commits in it. Let’s say you create a new repository:

mkdir my-great-repository
cd my-great-repository
git init

And then ask git to tell you about it:

git status

It will print out:

On branch master

No commits yet

nothing to commit (create/copy files and use "git add" to track)

And if you ask git to tell you about the commits:

git log

It won’t be able to:

fatal: your current branch 'master' does not have any commits yet

Let’s try to make a commit and see what happens:

git commit --message "some commit"

It didn’t let us:

On branch master

Initial commit

nothing to commit

We tried to make a commit, but we hadn’t staged any changes to be included in the commit, and so git was like … no. Which is kind of fair, since ordinarily the point of making a commit is to introduce a change to the code base. But the first commit is kind of special: what does it even mean to make a change to nothing?

I encourage you not to spend too long pondering that question.

There are basically two ways out of this:

  1. Actually add some files and commit them
  2. Tell git that you don’t mind having an empty commit, and make an empty commit

The first way: having the first commit include some files

The first way is probably what most people do, since it’s pretty straight-forward:

echo "Hello" > README.md
git add README.md
git commit --message "some commit"

The last command will output:

[master (root-commit) e52641c] some commit

Notice the part that says (root-commit), which is how you know that it’s the first commit.

This is basically fine.

Running git log works how you might suspect: it shows that there’s one commit.

Running git show works just fine: it prints out the details of the latest commit, including the changes.

It gets a little more complicated if you want to use git diff. Let’s say you want to construct a git diff command which will display the changes you introduced in your first commit. What would that look like?

git diff ??? e52641c

Bizarrely, there is a way to do this, and it looks like this:

git diff 4b825dc642cb6eb9a060e54bf8d69288fbee4904 e52641c

What is that first sha? Basically don’t worry about it. It’s a constant in libgit2, although apparently it might change if git changes the algorithm it uses to generate hashes.
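That constant is just the object id of git’s empty tree, and (in a sha1 repository, which is the default) you can derive it rather than memorize it:

```shell
# Hash the empty input as a tree object; this works even outside a
# repository, because nothing is written to the object database
git hash-object -t tree /dev/null
```

So git diff $(git hash-object -t tree /dev/null) e52641c is a spelling of the same diff that you don’t have to look up.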

The second way: making a sunrise commit

The other thing that you could’ve done was make a sunrise commit:

git commit --allow-empty --message ":sunrise:"

What’s going on here?

This time, git lets us make a commit even though we haven’t staged any change, because we specifically passed the --allow-empty flag to the commit command.

The commit message is short and sweet and paints a picture that fills your heart with hope.

That’s it.

Some advantages to making a sunrise commit:

  1. You can make a new repository on GitHub and push your sunrise commit to your default branch, and then immediately check out a feature branch and start working on sketching out the initial project structure, and open a PR to introduce that.
  2. If you want to make a new branch that has a totally empty tree, you can checkout your sunrise commit and then branch off from there. There are other ways to do that but they melt my brain a little more.
  3. All of the meaningful commits in your repository will have a parent, making them easily diffed.
  4. You feel the simple pleasure of following the recommendation from a blog post.
  5. Probably some other reason that I’m forgetting (feel free to tell me).

A handy alias

If you find yourself following this pattern, you may want to add this handy alias:

git config --global alias.sunrise "commit --allow-empty --message ':sunrise:'"

That way, you can run simply git sunrise after you initialize a new repository.

Note: this will render as the sunrise emoji on GitHub. You can feel free to use the actual emoji. I don’t because emoji don’t render properly in terminal emulators on Linux, at least in my experience.

Shout out

I’m pretty sure I picked up this habit from Devon Blandin.

git cleanup-branches

March 1, 2018

Do you clean up your git branches as you go or are you, like I am, a lazy hoarder?

$ git branch | wc -l
150

Look at all those things I did that I don’t care about anymore.

Yesterday I googled a little to try and find some magic incantation that would just clean up my branches for me. There are some, but I find that they’re either too conservative or too liberal for me. By “too conservative” I mean that they try to only delete branches that have been merged, except that they’re not actually very accurate, because they aren’t aware of GitHub’s “Squash and Merge” or “Rebase and Merge”, which I use pretty much exclusively. By “too liberal” I mean that some people recommend just literally deleting all of your branches.

I want to have control over the situation.

I can just run git branch -D branch-name-goes-here over and over, one-by-one, for all of my branches, but that would take several minutes, which I definitely technically have, but don’t want to spend that way, even while curled up with a podcast.

What I really kind of want is some kind of interactive process that gives me total control but doesn’t take that long to do. So I made a little shell script, which looks like this to use:

gif demonstrating git cleanup-branches which lets you interactively delete branches

As you may notice, it takes some loose inspiration from git’s interactive rebase.

It does something like this:

  1. get your list of branches
  2. open your default editor (whatever you have the $EDITOR environment variable set to) (vim for me)
  3. wait for you to mark which branches should be deleted
  4. delete the ones you marked

git lets you plug in little scripts by just naming them git-whatever-you-want and putting that script on your $PATH and I think it’s fun to take advantage of that.

Here’s the latest version of the script as of this writing:

#!/usr/bin/env bash

set -euo pipefail

file="/tmp/git-cleanup-branches-$(uuidgen)"

function removeCurrentBranch {
  sed -E '/\*/d'
}

function leftTrim {
  sed -E 's/\*?[[:space:]]+//'
}


all_branches=$(git branch | removeCurrentBranch | leftTrim)

# write branches to file
for branch in $all_branches; do
  echo "keep $branch" >> $file
done

# write instructions to file
echo "

# All of your branches are listed above
# (except for the current branch, which you can't delete)
# change keep to d to delete the branch
# all other lines are ignored" >> $file

# prompt user to edit file
$EDITOR "$file"

# check each line of the file
cat $file | while read -r line; do

  # if the line starts with "d "
  if echo $line | grep --extended-regexp "^d " > /dev/null; then
    # delete the branch
    branch=$(echo $line | sed -E 's/^d //')

    git branch -D $branch
  fi
done

# clean up
rm $file

It follows the “chainable shell function” pattern I’ve written about before.

It uses set -o pipefail, my favorite recent discovery in shell scripting, which makes a whole pipeline fail if any command in it fails, rather than only reporting the status of the last command. I should probably do a separate blog post about that with more detail.
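Here’s a tiny demonstration of the difference, using a pipeline whose first command fails but whose last command succeeds:

```shell
#!/bin/bash

# By default, a pipeline's exit status is the status of its last
# command, so the failure of `false` is invisible
set +o pipefail
false | cat
echo "without pipefail: $?"

# With pipefail, the pipeline reports the failure
set -o pipefail
false | cat
echo "with pipefail: $?"
```

Note that pipefail is a bash-ism; plain /bin/sh on some systems won’t have it.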

Yeah, I guess that’s pretty much it. Have fun shell scripting out there.

2017 in television

January 3, 2018

Here’s my favorite shows that aired in 2017. Just sharing because I spent way too much time this year watching television, so maybe this will help you pick the good stuff, as long as your taste is also my taste, which it’s not. I’ll try to avoid spoilers.

10. Mr. Robot (s3)

This show remains a technical wonder. It has a very sharply drawn aesthetic – you know you’re watching this show by the mannered acting and the off-center camera compositions more than anything else. Nothing I saw on television was as cool as the 4th episode, which is presented as one long take, and aired without commercials. It feels like the rules don’t apply to this show. This season was mostly about pain and regret and what to do with them. Anything productive? Maybe it’s possible.

Rami Malek is and has been great, but this season was as much about the constellation of characters around him. This show knows it’s fun to watch smart people do hard things and so it drew up a bunch of them, made you love or fear them, and set them against each other. I particularly liked any scene with Dom, Grant, Darlene, or Angela.

Highlight: eps3.4_runtime-error.r00

9. Crazy Ex-Girlfriend (part of s2 / part of s3)

I’ve loved Rachel Bloom since I Steal Pets, a truly brilliant, stupid music video she made in 2011. Somehow she’s made a show that has several hysterical songs in each episode, a cast of lovable gentle Californian weirdos, and a very thoughtful depiction of what it’s like to struggle with mental health. It really feels like an auteur work in the sense that it’s hugely personal and no one else could’ve or would’ve made it. She writes and acts and sings. It’s nuts. I giggled like a child all through “I Go To The Zoo” and “The First Penis I Saw” and felt a swell of an uneasy hope during “My Diagnosis”.

Highlight: Josh Is A Liar

8. Catastrophe (s3)

This show is mostly special because it’s very funny, and there are few pleasures comparable to Sharon and Rob making fun of each other so cruelly that you know they must really love each other or how else could they put up with that? It’s also one of the best, most real-feeling stories of alcoholism that I’ve seen. It reminded me, at times, of a crime drama like Dexter where the anti-hero has a secret and we’re for some reason invested in him not being found out as a serial killer, except here he’s sneaking drinks, and you feel it in the pit of your stomach that this is bad. It’s mostly very funny. But it’s also very dedicated to arguing that we’re better when we step up and be there for each other even when it sucks and I find that helpful to think about. The drama is very ordinary but the characters are so lovable that you care. And Carrie Fisher is great.

Highlight: Episode 6

7. The Marvelous Mrs. Maisel (s1)

Just watched this one over Christmas break, with my mom, so there may be some recency bias here, but I really loved it. This show entertains on so many levels. It’s hysterical, with great characters, acting, and dialogue. The scenes between the comedian characters in particular feel funny in a way that funny people are with each other, where they want to skip past the norms of what they’re supposed to say, and then also not laugh, and then also act like that was normal. All the scenes with Midge and Lenny Bruce were golden like that. And just watching a show about a person figuring out they want to do some kind of art and that they might be good at it and then realizing it’s going to be a lot of work and maybe not that fun all the time is hugely fun, because each step feels real and earned and satisfying. That that person is a Jewish woman with two kids in the fifties in Manhattan played by someone as charismatic as Rachel Brosnahan makes it feel very unique and alive. Also her parents are hysterical and Joel sucks in such a real and lived in way that it almost brings pleasure how reliably the dude sucks. And that the cinematography is frequently dazzling, with a camera that floats through clubs and apartments and over tables and around Hora dancers, and the soundtrack crackles with sometimes on-the-nose but very charming contemporary songs makes the whole thing just kinda whizz by.

Highlight: Because You Left

6. The Americans (s5)

Maybe I’m a sucker and a fool for things that are just the slightest bit unusual, but the premiere’s exhuming sequence, which has no dialog for something like 15 minutes as we just watch these people do their job ultra competently and feel the weight of it on their backs and the amount they’re stuck with the decisions they’ve made and have been made for them growing and growing … is very good … and is enough to make me sit up straight and hold my breath.

This is the penultimate season and it kinda feels like one. This is the TV show version of a clenched jaw. It’s all heading to hell, for sure. I think of it as a show about marriage and parenting, which isn’t really an original way to think about it. But it really makes me feel the painful feeling that maybe for all our/their hard work, the next generation won’t really be better off. On that theme, the new characters of Tuan and Pasha were fascinating and painful to watch.

Highlight: Amber Waves

5. The Leftovers (s3)

The final season! This show was based on a book, but they pretty much covered it in the (pretty good) season one. Seasons two and three veered off and did their own thing and explored grief, mental health, love, religion, and family using some of the most bizarre scenarios with the most committed cast. It’s a really stunning show just to look at even if you don’t have any idea what’s going on, which you mostly don’t. Justin Theroux and Carrie Coon are incredible.

I don’t really need things to stick the landing perfectly; Lost’s ending wasn’t perfect but it was fine IMO. The Leftovers is another Lindelof joint, and it does feel like Lost is hanging over it a little. He did a better job of managing expectations this time, because I don’t think anyone watching expected the show to start making sense right at the end. Nevertheless, the ending had me spellbound.

Highlight: It’s a Matt, Matt, Matt, Matt World

(for the submarine sequence if nothing else, but also the rest)

4. Better Call Saul (s4)

I did like Breaking Bad a lot and I didn’t really know what to make of this when it started. It’s a little hard to talk about. It’s really its own show, distinct from Breaking Bad. When I say that the whole show was building to this season, that doesn’t mean what you think it means if all you know is it’s the Breaking Bad prequel show. More than that it’s a family drama about a handful of lawyers, all proud, all brilliant, each kind of broken in their own way. Like Breaking Bad, it’s about how people’s consciences rot and fall away, or don’t, and the effects on other people. It’s pretty heavy, but in a way that feels real and awful. It’s also, frequently, hysterical.

Highlight: Lantern

3. The Carmichael Show (s3)

Really sharp writing and amazing chemistry from the cast. Nothing else made me laugh as often. I loved the sneaky emotional ones, too, which is almost all of them. Each character gets to take their turn being the asshole taking the overly harsh position on whatever the argument of the week is, and actually gets a chance to speak their mind, and then they keep digging until they find some understanding. It’s a very winning formula. I’m sad this one ended.

Highlight: Cynthia’s Birthday

2. Nathan For You (s3)

Nathan For You is audacious and radical and sweet and kind of cruel and depressing. I hope he does more. It’s so unpredictable, but in unpredictable ways. Most of the funniest moments come from Nathan violating some social norm, but the surprising thing is how polite he is as he does it. I often feel terrible for his subjects, except that it’s very hard to find fault in Nathan’s behavior. He’s never really ridiculing anyone. He’s more enabling people to succeed at whatever their dreams are, if only temporarily. That their results are uniformly terrible makes their dreams feel small and unimportant and them seem foolish for even having them. But maybe it’s better to live your dream than to not?

Highlight: Finding Frances

1. Halt and Catch Fire (s4)

This was the fourth and final season and it was just about perfect. The episode called “Goodwill” is one of the most beautiful things I’ve ever seen on television. What this show did better than any other is make you really love its characters and care about them and feel for them. That it was about the early days of the internet and felt real and lovely is only extra credit. That it told the story of Cam and Donna struggling and having success as women in tech, also extra credit, and not talked about enough, IMO.

Highlight: Goodwill

Honorable mentions (alphabetically ordered)

  • Black Mirror (s4)
  • Bold Type, The (s1)
  • Brooklyn Nine-Nine (s5)
  • Fargo (s3)
  • Fresh Off the Boat (s4)
  • Girls (s6)
  • Good Place, The (s2)
  • Master of None (s2)
  • Mindy Project, The (s6)
  • Scandal (s7)
  • Search Party (s2)
  • Sweet/Vicious (s1)

Also enjoyed (alphabetically ordered)

  • Broad City (s4)
  • Curb Your Enthusiasm (s9)
  • Game of Thrones (s7)
  • Glow (s1)
  • Handmaid’s Tale, The (s1)
  • Legion (s1)
  • New Girl (s6)
  • Rick and Morty (s3)
  • Stranger Things (s2)
  • Unbreakable Kimmy Schmidt (s3)
  • Veep (s6)
  • You’re the Worst (s4)

the hardware and software I use (2017)

December 21, 2017

I’m an inveterate reader of Uses This style articles and I’ve always wanted someone to ask me to participate in one and no one has, and I have a perfectly fine blog, so here goes nothing (actually something quite self-indulgent and in need of editing and unlikely to be an annual tradition now that I know how weird it feels to write this all out).

I’m borrowing the questions from that website (thanks).

Table of contents, because this got long, but I guess that’s because I really like using hardware and software, which is true, and that’s a good point when you think about it

Who are you, and what do you do?

Yes hello I am Max Jacobson, I’m a software engineer who makes web apps, currently at Code Climate. Occasionally, and not professionally, I write, speak in public, and podcast.

What hardware do you use?

I have too much and want to have fewer but I’m not sure what to not use. I try to make my hardware last as long as I can and avoid indulging the guilty thrill of buying new things, with mixed success. By frequency of use:

I use my iPhone 7 constantly. It’s great. I like the funny vibrating home button and don’t really miss the headphone jack. I got the AirPods for my birthday and they’re great, although they’ve turned me into someone who gets nervous stepping over grates in the sidewalk. I use it in a Smart Battery Case which is fine, although I feel very warped by it because I start feeling a little stressed out as soon as my phone’s battery dips below 100%.

At work, since March 2017, I use a Lenovo ThinkPad P50 which is enormous and heavy and has a numpad built in that I never use and that little red nub for mousing that I never use and a hinge that curiously opens to a full 180 degrees which I never do and a second set of mouse buttons above the track pad which I never use and a fingerprint scanner that I never use. I didn’t exactly pick it out: when someone left the company I took it over and turned in a MacBook Pro so I could try out switching to Linux (more on that later). I do like it: it’s super fast; the keyboard is good; the screen is sharp; it has all the ports you could want; the actual thing feels pretty good; and once your eyes adjust, its stark black and red aesthetic starts to look kind of slick. I’ll probably pick a smaller ThinkPad for my next laptop.

I use it with a WASD V2 87-Key Custom Mechanical Keyboard, which is awesome and a $5 AmazonBasics USB Mouse which is perfectly fine. I’m not in the camp of people who detest wires on a desk. If anything, I’m in the camp of people who detest having to charge things.

Next up is the 2016 9.7” iPad Pro. I mainly use it to do things like watch videos in bed or read Twitter on the couch. Occasionally I’ll take it out to a cafe with the Apple Magic Keyboard so I can do more productive stuff like write emails or research some project, where I appreciate being able to command+tab to switch between apps or command+space to quick-launch apps. I recently bought a Canopy keyboard case to kinda encourage me to do that more, and hopefully I’ll like that. (Edit: I did)

I like having it. It lets me give my phone a break. It’s so light. It has such a nice screen. Its speakers are surprisingly loud and nice-sounding. It’s fun to use and to look at (I have the pink one with the mint green smart cover).

For a personal, non-iPad computer, I used to have a 15-inch, Early 2011 MacBook Pro, but it died in early 2017. My grandma bought it for me after college when I was trying to become a filmmaker. It ended up being the computer where I learned to code instead. I put that machine through hell trying to keep it going, and I struggled for a while figuring out what to replace it with. I probably would’ve gotten a spec’d out MacBook Pro except that all signs suggested they were going to release a big new update soon. They ended up doing so, but not until November, and it was kind of a controversial new design which I’m glad I didn’t wait for.

I ended up getting a 2015 Dell XPS 13 Developer Edition in November 2015, actually a while before the MacBook Pro formally stopped booting. I was looking for a light laptop that I could travel with (by this point the MacBook Pro felt like a brick in my bag and had a few loose screws clattering around in it which left me preferring to keep it at home), and also to experiment with using Linux. It cost $1,370. I like this laptop fine. It’s light and fast and runs Linux and has a decent battery and a nice screen. I think it’s weird to have the webcam positioned below the display instead of above. The keyboard is only fine. It’s so sturdy and compact that it doesn’t feel fragile, and I toss it in my bag without a case. This is the laptop I brought to India, where I studied Rust, and later brought to New Orleans to present at RubyConf. It’s reliable and straightforward.

But when my MacBook Pro died, I did kind of feel like I needed a Mac. I had salvaged the hard drive from the laptop and bought a drive enclosure. I also had my files backed up to another hard drive and to the cloud, via Backblaze. All those backups cried out to be restored, somewhere.

I ended up getting a Late 2014 Mac Mini spec’d out with 16GB of RAM, an SSD, and whatever the fastest processor was. It cost $1,399. That’s a weird computer to choose in early 2017. Even in 2014, it was really poorly received because it didn’t offer all the options that the previous generation did and wasn’t as upgradable after purchase. I was kind of thinking it’d be a stop-gap until the new MacBook Pros came out, and then I’d sell it and get one of those, but then those came out and were not very appealing to me. But to be honest, I kind of love it. In my experience, it’s plenty fast and reliable. I almost bought a Mac Pro and it turned out the Mac Mini was enough for me. Lol. I don’t use it a ton: I’m on a computer enough at work and don’t thrill at the idea of spending much more time on one. But I still like to have a Mac somewhere in my life, to serve as a hub for things like my photo library, my music library, and all my old college essays and writing projects. And as much as I’ve come to like Linux, there remain a few things that I need a Mac for (more on this later).

I use it with another AmazonBasics mouse and a Spacesaver M White Buckling Spring keyboard from Unicomp. It’s hilarious and thunderous and retro and great. I love it.

For sound, I use Altec Lansing BXR1220 computer speakers I bought for $15 five and a half years ago which are perfectly fine and look pretty cool IMO.

For a display, I use a Dell U2713HM 27-Inch Screen LED-lit Monitor that The Wirecutter recommended when I bought it in 2014. It’s been really great.

For recording audio, I use a MXL Tempo USB Desktop Cardioid Condenser Microphone. I was going through a phase where I enjoyed red things. It wasn’t a huge investment, is simple to use, and sounds OK.

For Wi-Fi, I switched to Eero in 2016 to get better coverage of my apartment and it worked out great.

I watch TV and Movies using an Apple TV (which is fine) on a TV that my friend Russ handed down to me when his aunt handed hers down to him (which is fine).

When I write longhand, it’s usually on some cheap Gregg-ruled steno pad I picked up at a pharmacy (I like the spiral being at the top so it stays away from my wrists; I like for the margin to be right down the middle so I don’t feel weird writing right up to the edge of the page, and it also gives me a sense of how far across the line I’ve gotten at any given point? Maybe I don’t need the middle margin actually) using a Muji 0.5MM Gel-Ink black pen which my sister recommended to me once, years ago.

For reading books, I use the library.

I have way too much hardware, but at least I don’t have an Apple Watch.

And what software?

I try not to use too much software, because the more things I use the more keyboard shortcuts I have to remember, and the less room in my heart there is for poetry. I try to use built-in software when possible unless it’s really bad.

For programming a computer, I primarily use Ruby or, for simple things, shell scripts. I also like to use Rust.

For writing code, I prefer to use terminal-based tools, primarily: vim for editing text, tmux for terminal multiplexing (creating separate workspaces for separate projects, each consisting of a few related shells, controlling how they’re laid out and which to focus on), zsh for a shell, and git for tracking changes to source code. I like them because they’re free and open source, blazing fast, and have user interfaces that feel like they’re carved from stone. There’s a long tail of unix tools that assist in the process of writing and testing code, but I’m going to consider them out of the scope of this post. Thankfully, those all work pretty much the same on both macOS and Linux.

My dotfiles are available on GitHub. They use thoughtbot’s lovely rcm tool to ease installation and syncing across multiple machines.

On macOS, I use the built-in Terminal terminal emulator and on Linux I use rxvt-unicode.

For an operating system on my two laptops, I initially tried Arch Linux at the recommendation of a few co-workers and I’ve come to quite love it. It has a deserved reputation for being ultra minimalist, which means you have to do more legwork to get it up and running, but then you can customize it exactly to your taste. It’s way more bare bones than I could ever have imagined. If you want it to behave in any way at all, you have to tell it to – even for super basic things like locking the screen after a few minutes of inactivity – but it has all the seams in place for you to do just that. I initially set it up using this fabulous tutorial from LearnLinux.tv and have iterated on it via a lot of guidance from my patient co-workers and by copious browsing of the elaborate ArchWiki.

For managing windows on my laptops, I’m using xmonad, a tiling window manager. In the past I’ve tried macOS apps that add keyboard shortcuts for managing windows in a tiling fashion (Spectacle, I think, and others) and never found them to be particularly compelling. I thought things like: I’ve always arranged my windows using my mouse and it’s been fine; I don’t want to learn a bunch of new keyboard shortcuts; etc. But despite those earlier fears and protestations, xmonad is absolutely wonderful. I think I like it more because it’s not a layer on top of a dynamic window manager, it’s the whole system. I set it up so all of my windows get chunky, hot pink outlines. My screen is always 100% filled. Windows get automatically resized to fit as I open and close things. I can re-arrange and navigate the windows without ever using the mouse. It’s cool as hell. It’s also fast as hell.

For browsing the web, I’m a stubborn Firefox apologist and have been for a while. It feels like the internet to me. I even use it on iOS, so my history and bookmarks will sync there. Unfortunately, I end up in Safari for iOS all the time, since that’s the default browser for everyone.

For email, contacts, and calendar I use FastMail. It’s rock solid, has super fast and pleasant web interfaces, and has no ads. I pay $70 per two years for it. I love it.

For email, I use the built-in Mail app on iOS, which works great with FastMail. I’ve tried a handful of alternatives and didn’t really like any of them, and you can’t change the default app to handle email links anyway so I just go with the flow. On macOS and Linux, I just use the FastMail web interface, which is great.

For interacting with my personal calendar on macOS and iOS, I use Fantastical which is very delightful.

For buying domains and managing DNS, I use NearlyFreeSpeech.NET. It’s one of my favorite websites. It’s extremely plain and extremely polite and extremely clear.

For lightweight checklists, both long-lived and short lived, I use the Apple Notes app. For example, I have a note called “movies out” which is a checklist of movies that are out or coming out soon that I think I might want to see. I refer to that occasionally when I think “hmm what’s out?” or when I pass a poster and think “Oh, that’s out?” I have another note called “pantry” which has a checklist of the staples I like to keep in my kitchen, and I check things off when I buy them, and uncheck them when they’re running low. I have another note called “Christmas gifts” which lists all the people I need to get gifts for, with checklists for each person of the things I’ve gotten them (checked) or am thinking about getting them (not checked). I’ll delete that one after Christmas. Sometimes I’ll make a note that just has a checklist of all the things I’d like to get to in the day, and I can look at it throughout the day, and then later on delete it.

For being upset and inspired and not-bored and informed about the world, I use Twitter. It’s a big part of my day. I can’t really imagine the world without it. I used to use it exclusively via a third-party app called Tweetbot, but I switched to the official iOS app and it’s honestly fine. I do see ads now, and I lose some neat features and design, but more importantly I get all of the modern twitter features, like polls and group DMs, which aren’t available to third-party apps. On Linux and macOS, I just use the web interface.

For subscribing to websites and newsletters, I use Feedbin. I still love RSS in 2017. It’s a big part of my day. Whenever I read anything I like on the web, I look for a feed so I can subscribe and get more. Also, whenever a newsletter seems interesting, I subscribe via Feedbin rather than via my email, which helps me prevent my email inbox from getting cluttered. On iOS and macOS, I read via Reeder, which probably comes second only to Firefox as my favorite and most-used app of all time; I was browsing Google Reader via Reeder on an iPod Touch between classes in college. On Linux, I use the Feedbin web UI, which is actually really nice. On iOS, to detect RSS feeds on web pages and subscribe to them in Feedbin, I use Feed Hawk.

For hosting source code, I use GitHub. It’s great.

For making my blog, I use Jekyll to structure the source code and build the site, GitHub Pages to host the static site, Markdown to make it pleasant to write prose that will become HTML, and Clicky for some traffic analytics.

For creating slides for my one talk I gave in public, I used remark, an open source tool that let me use familiar web technologies like HTML, CSS, and JavaScript to customize the slides, and let me use my beloved Markdown for writing the actual content, and let me easily host the finished product on my blog.

For editing my podcast, I use … actually I do it so infrequently that each time I basically forget and try something new, and I currently don’t remember.

For listening to music, I use Spotify although I kind of wish I just paid for music again so I didn’t feel a mounting dread about how much money I’ve sunk into something I don’t get to keep. It has good native apps for all the platforms I use. Sometimes I use the Apple Music app for things that aren’t on Spotify, and other times I listen to musicforprogramming.net.

For listening to podcasts, I use the wonderful Overcast. I love to have a podcast app that does server-side polling and sends push notifications when new episodes are available. I generally use the iOS apps, but I’m glad it has a spartan web app I can use on Linux and macOS.

For tracking personal tasks, I use OmniFocus, which is available on all of the Apple platforms. I use it pretty passively. It’s more a thing I write to than read from. If someone recommends something, I’ll put it in OmniFocus. If I see a tweet with a link I want to check out later, I’ll put it in. If I have a random thought I want to explore further, I’ll put it in. If I feel guilty about something, I’ll put it in. Occasionally I’ll go back and look through it and check things off and delete things and organize them into little projects. It helps me remember what are all the things I want to or am supposed to do, which helps me not feel worried all the time, and when I do feel worried I know where to go to remind myself who I am. I used to use Instapaper for saving articles to read later, and this year I stopped, and it’s a relief. But I did make a single action project in OmniFocus called “articles to read later”, and I do occasionally put articles in it. It’s a little deranged. I dearly wish they had a web version so I could check it on Linux.

For tracking work-related personal tasks, I use Todoist, which is pretty similar to OmniFocus, except it feels less reliable to me, has subscription-based pricing, and has a web version so I can check it at work on my Linux computer. I actually kind of like keeping a divide between work stuff and personal stuff. I just wish its syncing engine felt more rock solid.

For keeping files in sync across my goofy amount of computers, and occasionally for sharing files with other people, I use Dropbox. I’m tempted to become more reliant on it. My photo library is currently in Apple’s Photos app, which I can’t access on Linux. That might be a project for next year.

For managing my personal passwords, I use 1Password. I’m very pleased because they recently introduced a web version, which should let me use it on Linux, although I haven’t tried yet. I think I may need to switch over to subscription pricing to use that, which would totally be worth it for that alone. Currently, whenever I need to look up a password on my personal laptop I just look it up on my phone and peck it in, which suuucks when your passwords are super involved. For my work passwords, I use Rooster, a CLI password manager.

For texting I use some combination of iMessage, WhatsApp, Facebook Messenger, Twitter DM, Instagram DM, and Slack depending on who I’m talking to. It’s a mess. And I’m just realizing three of those are owned by Facebook. In theory, I prefer Twitter and Slack most, because those are available on all of the operating systems I use. In practice, I use iMessage the most.

For making screencasts, I use QuickTime to record my screen and ScreenFlow if I need to do any editing. I have used recordmydesktop to record my screen on Linux and it works great, but I never really do it. And I have no idea how to edit video on Linux, although I’m sure it’s done.

For remembering where I’m at in which TV shows, I use iShows TV on my phone. I love it.

For figuring out where to go and planning trips, I use Foursquare.

For giving and receiving FOMO, and helping me remember later the names of the places I’ve been, I use Swarm on my phone.

For some twitter analytics, I use Birdbrain on my phone.

For reading comics, with cool panel-by-panel transition animations, I use Comixology, mostly on my iPad.

For tracking the shipping status of packages I use Deliveries on macOS and iOS.

What would be your dream setup?

Hmmmmm.

I wish that all iOS apps I liked had at least spartan web interfaces so I could interact with their data from my Linux computers.

I’m looking at you, iMessage and OmniFocus.

I wish Vimscript were replaced with Ruby.

I wish there were less lock-in.

I wish I could use iOS more like a general-purpose computer. I know some people can, but I don’t think I can until I can do things like:

  • change the default browser to Firefox so I can follow links in, for example, my email, and have them open the Firefox app
  • run a terminal emulator that gives me access to the actual file system and run arbitrary software and have access to a package manager such as Homebrew
  • not rely on a separate Mac to add arbitrary mp3s to the Music app
  • do things like invoke 1Password from Firefox without going into a share menu – using “sharing” as the way for apps to communicate feels like the wrong abstraction
  • probably other stuff

I should probably get better speakers.

I have a kind of allergy to using Google products that I should probably get over, because they do make a lot of good stuff.

I wish more people had blogs and fewer people had newsletters.

Normalizing surgical drain output

December 11, 2017

Let’s talk body fluids, and then math.

(Note: this isn’t medical advice. Listen to your doctor.)

During a surgery, it’s sometimes necessary to install a drain to prevent the build-up of fluids under your skin from whatever wound you’ve got. When the patient leaves the hospital, it becomes their responsibility to care for the drain. Here’s what that entails:

The drain is a tube connected to your body via some stitches. It runs along until it empties into a bulb, which you may keep clipped to your undershirt throughout the day. The bulb has most of the air squeezed out of it, to create suction.

Periodically, you must unplug the bulb and let in the air. Pour out whatever fluids have collected into a small measuring cup. Make a note of how much you’ve collected, and what time it is. Then dispose of the fluids, squeeze the air back out of the bulb, and plug it back up. Do this at least twice a day.

When you see your doctors, they’ll want to know the rate of drain output so they can get a sense for:

  1. how the wound is healing
  2. if it’s time to remove the drain

Let’s say you’ve taken these notes:

2017-12-07 00:00 0
2017-12-07 14:09 30
2017-12-07 22:10 10
2017-12-08 10:20 7.5
2017-12-08 10:55 23
2017-12-08 22:00 2.5
2017-12-09 11:45 5
2017-12-09 19:15 8
2017-12-10 11:50 18
2017-12-10 17:40 7
2017-12-11 8:55 10
2017-12-11 22:10 12.5

Some days you’ve taken two measurements, and other days three. Some of the measurements follow six hours after the previous one, and some much more.

Let’s say the doctor wants to know the answer to this question:

How many milliliters of serosanguineous fluids did you drain each day since your surgery?

We can do some eye-ball math and determine:

2017-12-07: 0 + 30 + 10 = 40
2017-12-08: 7.5 + 23 + 2.5 = 33
2017-12-09: 5 + 8 = 13
2017-12-10: 18 + 7 = 25
2017-12-11: 10 + 12.5 = 22.5

And that would probably be good enough. It gives you a sense for the trend:

40
33
13
25
22.5

Slowly going down, except for one weirdly quiet day.
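
That bucketing is simple enough to script, too. Here’s a minimal Ruby sketch of the same naive approach, with two of the days from my notes hard-coded (the `notes` array and variable names are just for illustration):

```ruby
require "date"

# Each note: [timestamp of emptying the bulb, milliliters collected].
notes = [
  ["2017-12-09 11:45", 5],
  ["2017-12-09 19:15", 8],
  ["2017-12-10 11:50", 18],
  ["2017-12-10 17:40", 7],
]

# Naive bucketing: every measurement counts entirely toward the
# day on which the drain was emptied.
totals = Hash.new(0)
notes.each do |timestamp, amount|
  totals[Date.parse(timestamp)] += amount
end

totals.sort.each { |date, total| puts "#{date} - #{total}" }
# 2017-12-09 - 13
# 2017-12-10 - 25
```

Same numbers as the eyeball math, of course, because it’s the same method.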

But IMO this feels dissatisfying and wrong.

Let’s look at these two measurements again:

2017-12-09 19:15 8
2017-12-10 11:50 18

Is it really fair to bucket those 18 milliliters of serosanguineous fluids solely on 2017-12-10? You emptied the drain at 19:15 the day prior, so those 18 milliliters were trickling out for about five hours on one day, and about twelve hours the next. If we can assume that it trickled out evenly, we should be able to smear that data across both days and get a more accurate picture of the daily trend.
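
Here’s what that looks like for just this one interval: a few lines of Ruby (illustrative arithmetic, not the full script) splitting the 18 milliliters across the two days in proportion to the seconds spent on each side of midnight:

```ruby
require "time"

emptied_at = Time.parse("2017-12-09 19:15")
next_empty = Time.parse("2017-12-10 11:50")
midnight   = Time.parse("2017-12-10 00:00")
amount     = 18.0 # milliliters collected over the whole interval

total_seconds  = next_empty - emptied_at # 59700.0 seconds overall
seconds_before = midnight - emptied_at   # 17100.0 seconds on Dec 9
seconds_after  = next_empty - midnight   # 42600.0 seconds on Dec 10

dec_9_share  = amount * seconds_before / total_seconds
dec_10_share = amount * seconds_after / total_seconds

puts dec_9_share.round(2)  # 5.16
puts dec_10_share.round(2) # 12.84
```

So, assuming an even trickle, about 5.16 ml belongs to December 9th and about 12.84 ml to December 10th.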

This bugged me enough that I wrote a quick Ruby script to do this for me:

require "date"
require "time" # for Time arithmetic and DateTime#to_time

Measure = Struct.new(:time, :value)

data = File.read("./data").lines.map { |line|
  date, time, amount = line.split(" ")

  Measure.new(
    DateTime.parse(date + " " + time).to_time,
    amount.to_f,
  )
}

# Validate before consuming the first measurement; otherwise an
# empty file could never trip the "no data" guard.
raise "no data!!!!" if data.empty?
raise "must start with zero value!!!" if data.first.value.nonzero?

all = [data.shift]

data.each do |next_data_point|
  last_data_point = all.last

  amount = next_data_point.value
  diff = (next_data_point.time - last_data_point.time).to_i

  # Assume an even trickle: spread this measurement across every
  # second since the previous one.
  amount_per_second = amount / diff

  diff.times do |n|
    all.push(Measure.new(
      (last_data_point.time + n),
      amount_per_second,
    ))
  end
end

# Bucket the per-second measures by calendar day.
result = all.each_with_object({}) do |measure, obj|
  obj[measure.time.to_date] ||= 0
  obj[measure.time.to_date] += measure.value
end

result.sort_by(&:first).each do |(date, total)|
  puts "#{date} - #{total.round(2)}"
end

I hacked this together pretty quickly and I’m a little pleased with it. Here’s the idea:

Instead of having just a few measurements taken at odd intervals, let’s pretend we have many thousands of measurements taken at regular intervals. One per second, in this implementation. And then use code instead of dumb eyeballs to add up all the measurements in each day. I’m calling this smearing the data for want of a proper term.

After smearing the data (assuming an even trickle between measurements), the trend looks like this:

41.13
32.6
17.43
24.0
18.35

In this case, it’s so close to the eyeball math that it may not have been worth doing, but I will rest easier nevertheless.

I just have one urgent question for you my dear reader: is there a name for what I did here? I want to learn more about data. As I learn things, I am often pleased to find out that all of the ideas already exist and have good names. What’s this one’s?

mewithoutYou and me

September 29, 2017

mewithoutYou is a fairly prolific rock and roll band from Philadelphia who I love very much. In this post, I’m going to talk a little bit about why and share some of my favorite songs.

Catch For Us The Foxes (2004)

I first heard mewithoutYou in probably 2004. I was going to see my then (and current) favorite band Bear Vs. Shark play a club in Poughkeepsie called The Loft. There were two openers, Codeseven and mewithoutYou, neither of which I had heard of. I’m sure I probably pirated both of their albums, so I could be prepared. I’m not sure if I ever got around to listening to Codeseven…

I remember sitting on the couch in my parents house with headphones on (did I have an iPod? did I burn a CD for an album I wasn’t even sure I liked?) and being fully knocked on my ass.

The album, Catch For Us The Foxes, is their second. It starts with a song called Torches Together. Immediately… who the hell is this guy? What is he talking about? Why isn’t he singing? This dude is out of control spazzing out while very comfortably weaving a metaphor about, I think, the power that comes from letting go of personal ego and joining a community? Individual lines jump out: “Anyway, aren’t you unbearably sad?” Just like that, as an aside. Later: “And I’m afraid and everyone’s afraid and everyone knows it / But we don’t have to be afraid anymore.”

I recently read a brief profile of chef and cookbook author Meera Sodha. She has this great line:

You know when you realize what you’re eating is just so magnificent, and there’s a sort of rip in the atmosphere?

Torches Together ripped into my 16-year-old atmosphere.

Anyway, Catch For Us The Foxes isn’t my favorite album of theirs. Maybe my least favorite. It has a lot of nice moments, but it’s not very focused. Stylistically, I think it reaches the limits of the spoken (and shouted) word style. It gets a lot of mileage from the contrast between the band (who play very precisely) and Weiss’s unhinged vocals. He sounds, often, distressed, and the music behind him almost feels like it’s trying to keep up with him and support the performance like a movie or Broadway score. Imagine, if you were distressed, and you broke down, and as you expressed your feelings, some chunky riffs and a tight rhythm section buoyed you, and encouraged you to keep going. That might be kind of powerful and you might feel kind of validated.

The style is definitely not for everyone, but I love it, and that’s sort of the lens I see it through. Maybe that’s how all bands work, I don’t know.

[A->B] Life (2002)

After the concert (which was amazing, I can’t believe I got to see Bear Vs. Shark and mewithoutYou in the same night) I went back and listened to their debut album, [A->B] Life.

Honestly, I think this is a really good album. It’s not really trying to be that profound. It’s mostly trying to rock out. It’s short. It moves fast. Weiss pretty consistently screams all his lyrics. There are a few spacey interludes, but overall it’s very crunchy. Track eight, “We Know Who Our Enemies Are”, has a kind of hilarious moment where it fades out and then back in for some reason. It ends with a secret acoustic track, so you know they have a sensitive side.

There are a few songs that are more on the snotty and abrasive side IMO (“I Never Said That I Was Brave”). But a lot of them are (relative) bangers, like “The Ghost”:

Can you even imagine how fun it is to sing lines like “Put music! Put music! Put music to our troubles!” along at a concert?

Look, things aren’t going well, and there are simple pleasures to be had. Consider air-guitaring to the little guitar solo at the end there.

This year, in 2017, they’re doing a 15 year anniversary tour where they’re playing the album in full. I’m obviously going.

Brother, Sister (2006)

So album three comes out and I’m officially a fan. I’m also in college now, and sophisticated. I order it on vinyl and put it on my wall (I do not have a record player).

I think this is their first great album. The corner they turned is that Weiss has started to transition from performing anguish to telling stories. He’s still kind of freaked out by everything, but he isn’t quite as angry anymore. He’s coming around to being gently amused.

Here’s “The Dryness and the Rain”:

My favorite moment comes near the end:

A fish swims in the sea
While the sea is in a certain sense
Contained within the fish!
Oh, what am I to think
What the writing
Of a thousand lifetimes
Could not explain
If all the forest trees were pens
And all the oceans ink?

What am I to think?

Not for the first time, many of their best lines are cribbed from or inspired by the Quran and the Sufi Islam teacher(s?) Weiss is obsessed with.

The next song, “Wolf Am I! (and Shadow)” is maybe my favorite of their hard rock songs:

I just think the band sounds so tight and good. But also, Weiss finds room for some spontaneity in a way that I find pleasingly self-deprecating:

Oh there, I go showing off again
Self-impressed by how well I can put myself down!

Later, he corrects himself in a way that almost feels like he’s making all this up as he goes:

Shadow am I!
Like suspicion that’s never confirmed
But it’s never denied
Wolf am I

No, “shadow”, I think is better
As I’m not so much something
More like the absence of something

So shadow am I!

Of course, I hope, he’s not just making it up as he goes. But it does play like an effective performance of confusion and self-loathing and fear.

This might feel a little like more of the last album, but it’s redeemed by two things:

  1. it just sounds so good
  2. it’s in the context of an album that isn’t just that, but seeks and finds some level of grace by the end of it

So let’s jump to that…

The album ends with a song called “In a Sweater Poorly Knit” (the title winks at an earlier song’s title, “In a Market Dimly Lit”). It’s the story of Moses in Egypt, sort of. It features some lovely harp and acoustic guitar. All of the genius annotations for this song insist it’s a big metaphor for a breakup, which I’d never once considered before. Eh. Idk. It’s so pretty. Like most of their songs, I think it’s about his relationship with God and with his own ego. It ends the same way the album opens, with the line “I do not exist”. Mostly it’s just so pretty.

At some point you realize he started actually singing and not speaking or shouting and it kind of suits him. He’ll do more of that soon.

It’s All Crazy! It’s All False! It’s All a Dream! It’s Alright (2009)

OK. Here’s my big opinion: this album is a masterpiece.

Pretty much all traces of personal anguish are gone. Almost every song is a story. Almost all of the characters are animals. When they’re not animals, they’re fruits and vegetables. When they’re not produce, they’re the baby Jesus. It sounds gorgeous.

The opening track, “Every Thought A Thought of You”, is a blissed out, bouncy, slightly strained devotional to God. The closing track, “Allah, Allah, Allah”, is also basically the same, except it has a kind of fun campfire singalong vibe.

I’m not a religious person in any way except that I love this band and find them very persuasive, particularly the way they seem almost absurdly non-denominational. There’s even room in these songs for doubt. One of my favorites is the longest song on the album, “The King Beetle On a Coconut Estate”:

What is this song getting at? Is the King a fool? It seems he’s just a dumb bug who flew into a light and died. That’s not so great. The “great mystery” turned out to be dumb. Hmm.

The details in this song are so lovingly rendered (“The beetle king summoned his men / From the top of the rhododendron stem”; “The lieutenant stepped out from the line / As he lassoed his thorax with twine”) that I can’t help but feel that Weiss empathizes with the plight of the Beetle King.

Why not be utterly changed into fire?

Wuff..

Most of all this album sounds peaceful and gentle and assured and pretty. Here’s “A Stick, A Carrot, A String”, which positively shimmers:

Look, this one is straight up about the baby Jesus, from the perspective of the animals who were around when he was born. Weiss finds time to make you empathize with each one:

At a distance stood a mangy goat
With the crooked teeth and a matted coat
Weary eyes and worn
Chipped and twisted horns

Thinking “maybe I’ll make friends someday
With the cows and the hens and the rambouillet
But for now, I’ll keep away
I’ve got nothing smart to say”

God damn - I love that goat.

There are so many songs I love on this album (all of them). Other highlights:

  • “Goodbye, I!” – the bit where he sings “Let’s stand completely still” and then the drummer pitter patters out a little drum solo makes me catch my breath each time I hear it
  • “The Fox, The Crow and the Cookie” – this is a perfect little story delightfully rendered
  • “Fig With A Bellyache” – First of all, that title, come on. But it’s the most stylistically weird song on the album and features some wonderfully awkward lyrics (“The dog below our waists aroused as arms embraced the pretty gals / It came much more as a surprise / It happening while I hugged the guys”)
  • “Cattail Down” – for the conclusion “You think you’re you? You’re not you. You’re everyone else.” and some sweet trumpet action

One thing I loved about this album when it came out is that it felt like the culmination of everything that came before it. Over time their albums became less angry and more concerned with humility. The arrival of wind and string instruments felt like an answer. The urgent searching of youth seemed resolved. The answers had been found.

So where do they go from there?

Ten Stories (2012)

For one thing, they decided to play their electric guitars on all the songs again. I think I recall reading that the non-Weiss members of the band wanted to play more rock songs again, and Weiss was like, sure. When it came out, I was disappointed that it didn’t feel like a continuation of the aesthetic of the previous album. In some ways, it is, though. It’s also concerned with fables; this time it loosely tracks the story of a collection of circus animals who cause a train wreck and make their escape. I don’t really connect to this story. I dunno. For me, this album is just fine.

There’s one song which sounds like it could’ve been on the previous album, “Cardiff Giant”, and it is my favorite song on the album:

Pale Horses (2015)

When this one was coming out, I felt trepidation. Were mewithoutYou drifting in a direction I didn’t like? Was Weiss being held hostage by the rock and roll instincts of his band mates?

I don’t know, but this album is super good, so who cares.

Here’s something from an interview with Weiss:

Interviewer: What does Pale Horses mean to you?

Weiss: That’s a hard one. That question could take up the whole interview. I wouldn’t know where to begin, but it probably doesn’t differ too much from our other albums in that it’s just an expression of where I’m at and what I’ve been experiencing in the past few years, what my convictions are and what my hopes are. It’s pretty personal. Probably more so than some of our more recent albums, which were a little bit more distant and third person and fabulous – you know, fable-based and character-based. For this one I’ve come back more to the first person to try to share what’s in my heart and what’s on my mind, and hopefully be somewhat uplifting in the process. But it covers a wide range of things, so it’d be too hard to pinpoint any one of those.

So here’s my theory: the hard rock vibe doesn’t work well with the fables (see Ten Stories) but it works great for the not-stories.

The interview continues:

Interviewer: What do you think brought mewithoutYou back to you, as opposed to those third-person fables? Why did you decide to do that?

Weiss: For a couple of reasons, I guess, but the easiest one to give you is that the guys in my band asked me to, and I was happy to oblige. A little bit more of a personal reason has to do with wanting to keep things fresh. When we put out It’s All Crazy! and Ten Stories, that was me trying to move away from the first person and get the ego out of the way, trying to write in a way that others could relate to just as well, or address content that might have some universal – or quasi-universal – significance, and trying to avoid letting my subjectivity muddy the waters. But more recently, I’ve started to doubt whether I can ever do that, and it felt like maybe those attempts were too ambitious and I was maybe biting off more than I could chew. So I thought if there’s one thing I can write about with some insight, it’s myself. That’s not to say I even have any expertise on myself…

Aha. Here is the real, correct answer to “where do you go” from the 2009 album. You walk it back a little. You didn’t find the answers. Obviously. C’mon. Who are you? All that peace you cultivated? Maybe try living life for a little while and see how that goes.

So: he’s screaming again.

Here’s a lovely music video for “Red Cow” and “Dorothy” which hard-lefts from a screamy rock jam into a graceful, melancholy wind-down:

This feels like rich new ground to root around in.

Here’s another song, “Blue Hen”, that just sounds great, and contains an all-time great Weiss freak-out:

And I’ll wrap up your absence
In blankets of reverence
A mastodon shadow
Divided by zero

I have no idea what that means but I totally know what it means and I love it.

My favorite song has got to be the last one, “Rainbow Signs”:

It’s long and builds slowly and contains the apocalypse and I guess is about his father dying. I find it very powerful and kind of awe inspiring.

(????) (2018??)

mewithoutYou seems to be in the studio, per their Twitter, and a new album should come out soon enough, and I can’t wait.