The Road to TypeScript at Quip, Part One

By Rafael Weinstein

Modern software engineering frequently feels like the movie Aliens. We are all Sigourney Weaver, complexity is the alien threatening to tear our limbs off, and tooling is the robotic exoskeleton that makes it anything close to a fair fight.

In late 2018, Quip engineering began an investigation into the prospect of converting its client-side code from Google Closure Compiler-annotated JavaScript to TypeScript. While Closure Compiler was (and still is) unmatched in optimizing and minimizing JavaScript, Quip’s periodic engineering surveys pointed to deep frustration with the experience of authoring and maintaining its style of annotated JavaScript. Meanwhile, many on the team had experience using TypeScript and made the case that an investment in transitioning to TypeScript would pay returns in terms of productivity, code correctness, and engineer satisfaction, by way of:

  • Interactive compiler feedback (i.e. while editing)
  • First-class integration in code editors
  • Integrated syntax
  • A modern and flexible type system (which, in many cases, could better model patterns that were already in use; see the sketch after this list)
  • Excellent support for React
  • A thriving user community
  • Active development from a focused and responsive team at Microsoft
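
As a small illustration of the type-system point above (our example, not code from Quip’s codebase): TypeScript’s discriminated unions directly model a “tagged object” pattern that is common in client code, and that Closure Compiler’s annotation-based type system cannot narrow as precisely.

type Shape =
    | {kind: "circle"; radius: number}
    | {kind: "rect"; width: number; height: number};

function area(shape: Shape): number {
    switch (shape.kind) {
        case "circle":
            // The compiler knows `radius` exists only in this branch.
            return Math.PI * shape.radius ** 2;
        case "rect":
            return shape.width * shape.height;
    }
}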

This is part one in a two-part series about Quip’s experience attempting to migrate a mature codebase from Closure Compiler to TypeScript. Part one covers our analysis of the options and the resulting plan; part two covers our experience traversing that path.

Two Basic Approaches: Coexist vs. The Big Bang

In 2018, Quip’s client code was about 500K lines of globally-namespaced, Closure Compiler-annotated JavaScript with a fair amount of custom tooling supporting it — including a home-grown Closure Compiler pass that offered type safety when using React. This tooling (and the migration effort described here) was owned by the recently formed “Client Infra” team, which focused on ensuring that all Quip engineers could be productive and write safe and efficient code.

We had the benefit of Lucidchart’s chronicle of a similar effort, as well as having worked with (and still being in touch with) people at Google working to support internal use of TypeScript.

After some initial investigation, it became apparent that the question of how to get from here to there came down to two basic approaches: all at once or incremental. We called these “The Big Bang” and “Coexist.”

The Big Bang

The Big Bang is easy to understand. At some particular moment (a flag day), a major change takes place (some combination of automation and human effort), and after that moment, all the Closure Compiler-annotated JavaScript has become TypeScript.

The Big Bang is simple, but it’s potentially risky. You can fail to put Humpty Dumpty back together again — perhaps after an extended and costly off-lining of a bunch of now-cranky engineers. Or worse, Humpty Dumpty might look fine on the outside, and your customers get to discover its insides are all scrambled.

Coexist

The other approach is to create a world where TypeScript and Closure can peacefully coexist. This was the approach being taken out of necessity by larger projects at Google. The idea is that any given module is either TypeScript or Closure. A special set of tooling automates the creation of language-specific type declarations so that TypeScript can “see” types written for Closure Compiler and/or vice-versa. In Google’s system, Closure Compiler continues to produce a binary by requiring that all modules either be Closure-style JS or be convertible to it.
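
To make the interop concrete, here is a minimal sketch of the kind of generated declaration involved (our illustration, based on the strings module shown later in this post, not actual output from Google’s tools). Given a Closure-annotated global strings namespace, the tooling would emit a TypeScript declaration file so that TypeScript code sees the symbol with real types:

// Generated declaration, e.g. core/strings.d.ts
declare namespace strings {
    /** Mirrors the @param/@return annotations on the Closure side. */
    function isFullRegexpMatch(regExp: RegExp, str: string): boolean;
}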

Coexisting reduces risk by allowing incremental progress towards your goal, but it has downsides. First, progress can be slow. Second, it requires a fair amount of tooling to support (tooling, in particular, that you don’t need in your desired end-state). Third, it means that many engineers need to live in a split world of two different syntaxes and type systems, possibly for a long time.

Quip had some experience with coexisting systems — when we did our React migration in 2015, we (out of necessity) did a gradual port from our old system to one rendered by React and backed by a new client-side model. That migration took about nine months, but it had a few mitigating factors:

  • The core editor was mostly untouched, since it was already mostly compatible with the new React world
  • The codebase was a lot simpler back then (this was before Live Apps, charts, Salesforce integrations, and much more)

We estimated that without an all-hands-on-deck approach to actively convert to TypeScript, the two worlds would need to coexist for years.

Quip’s Approach: A Series of Medium Bangs

Many large engineering trade-offs come down to values. What was paramount to Quip was:

  1. Our customers’ trust — which meant not risking destabilizing the product or jeopardizing existing schedules.
  2. Our engineers’ productivity — which meant wanting to minimize disruption to the larger engineering organization.

In other words, we wanted the stability of an incremental solution, but we really didn’t want engineers to spend an extended period of time having to frequently make a large mental shift from one syntax and type system to another.

The approach we settled on was basically the Big Bang approach, modified to de-risk in two directions:

  • Break it down into a series of smaller steps to be deployed one by one. The two biggest steps were transforming the global namespace to ES6 Modules and Closure Compiler-annotated JS to TypeScript, but there were quite a few smaller ones.
  • Front-load the human effort for each step, so that we could be reasonably sure that once the automated piece of the transformation ran, we’d more or less immediately be back in a stable state.

Our Implementation

Conceptually, our solution ended up looking like (but not actually being) a set of codemods — that is, operations over the AST of each source file. Each transformation step in our pipeline consisted of a set of AST operations that transformed a particular code pattern, plus zero or more invariant checks whose goal was to ensure that all the code would be correct and consistent (i.e. with zero new bugs) after the full transformation. During each attempt to transform all the code, violations of the invariant checks were recorded (along with a code location), and the full set of these violations represented all of the human work required before the result of the transformation could be committed back to our source repo.
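
To sketch the shape of one such step, here is a minimal illustration assuming the TypeScript compiler API as the AST layer (the Violation record, the makeStep factory, and the specific invariant are ours, not Quip’s actual code):

import * as ts from "typescript";

// A code location requiring human attention before the step can land.
interface Violation {
    file: string;
    line: number;
    message: string;
}

// One pipeline step: rewrite a target pattern, recording an invariant
// violation for any code that can't be transformed safely.
function makeStep(violations: Violation[]): ts.TransformerFactory<ts.SourceFile> {
    return (context) => (sourceFile) => {
        const visit: ts.Visitor = (node) => {
            // Illustrative invariant check: refuse to auto-transform
            // anonymous function declarations.
            if (ts.isFunctionDeclaration(node) && !node.name) {
                const {line} = sourceFile.getLineAndCharacterOfPosition(node.getStart());
                violations.push({
                    file: sourceFile.fileName,
                    line: line + 1,
                    message: "anonymous function declaration needs a manual fix",
                });
            }
            // (The step's actual AST rewrites would happen here.)
            return ts.visitEachChild(node, visit, context);
        };
        return ts.visitNode(sourceFile, visit) as ts.SourceFile;
    };
}

Running ts.transform(sourceFile, [makeStep(violations)]) over every file then yields both the transformed trees and the worklist of violations to farm out as checklist items.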

We considered Google’s tools as part of the solution, but the lack of support for JSX syntax, slow iteration (because it spun up a full Closure compile behind the scenes), difficulty of precisely controlling diagnostic output (so we could identify code that needed pre-transformation work), and lack of control over the transformation of Quip-specific idioms (for example, syntax that was only meaningful to our custom React/Closure compiler pass) drove us to start from scratch.

For the most part, the AST operations were relatively simple and single-pass. A few cases (for example, resolving Closure’s @override annotation) required two passes: one to build a full catalog of metadata and a second to make the change.
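
The two-pass shape looked roughly like this (the sourceFiles collection and both helper functions are hypothetical stand-ins): on the Closure side, an @override method may omit its own @param/@return annotations and inherit them from the base class, so the transform has to catalog those base-class types first and only then write them out explicitly.

// Pass 1: catalog each class method's declared parameter and return
// types, keyed by class and method name (hypothetical helper).
const signatures = new Map<string, string>();
for (const file of sourceFiles) {
    collectMethodSignatures(file, signatures);
}

// Pass 2: at each @override site, resolve the base-class method in
// the catalog and copy its types into the output (hypothetical helper).
for (const file of sourceFiles) {
    rewriteOverrides(file, signatures);
}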

To get a sense of what this looked like, here are some examples from the two biggest steps in our pipeline.

Initial Closure Compiler-annotated JS (with namespaced symbols)

import "core/strings.js"; /* global strings */

/** @fileoverview URL utility functions. */
const urls = {};

/**
 * @type {RegExp}
 * @const
 */
urls.URL_REG_EXP = RegExp("...");

/**
 * @param {string} str
 * @return {boolean}
 */
urls.isUrl = function(str) {
    return strings.isFullRegexpMatch(urls.URL_REG_EXP, str);
};

After ES6 modules transform

import * as strings from "core/strings.js";

/** @fileoverview URL utility functions. */

/**
 * @type {RegExp}
 * @const
 */
export const URL_REG_EXP = RegExp("...");

/**
 * @param {string} str
 * @return {boolean}
 */
export function isUrl(str) {
    return strings.isFullRegexpMatch(URL_REG_EXP, str);
}

After TypeScript transform

import * as strings from "./strings";

/** @fileoverview URL utility functions. */

export const URL_REG_EXP: RegExp = RegExp("...");

export function isUrl(str: string): boolean {
    return strings.isFullRegexpMatch(URL_REG_EXP, str);
}

Our Plan

To convince ourselves that the approach could work at all, our first step was to prototype the entire pipeline. Though the prototype did not produce a “working” bundle, it uncovered most of the major issues we would encounter. With that confidence in hand, we focused on deploying each successive step in the transform:

  1. Manually fix all code locations that a transform attempt identified as invariant violations (i.e. code that could not be automatically transformed). This was usually managed by way of a Quip checklist of code locations, each assigned to someone familiar with the code (per git blame).
  2. Validate that the post-transform code is stable (e.g. zero Closure Compiler errors, tests pass, binaries seem to work).
  3. Address any aesthetic concerns (e.g. settling on a readable and consistent style for import identifiers when converting to ES6 modules).
  4. Run and commit the transform step — usually after the last push to production for the week.
  5. Do an early round of manual validation that all is quiet (push to staging, some manual testing, etc.).

So how did all of this go…?

So that was the plan — at least, that was the plan as it now looks in retrospect, with parts of it created along the way in response to the inevitable challenges that presented themselves.

Part two of this series details the various ups and downs of attempting to put this plan into action.