It’s hard to find a textbook series garnering more effusive praise than *The Little Schemer*, *The Little Prover*, *The Little Typer* & co.
*The Little Typer* introduces dependent type theory and is the first of the series I’ve read.
I quickly grew to appreciate & enjoy its dialogue-based presentation - I’m a real convert!
I might release future didactic blog posts as a dialogue rather than straight recitation of material in block paragraphs.
Humans seem to naturally find conversations more interesting than the traditional lecture format of a single voice droning on, which might explain why comment sections often get more traction than the actual article being commented upon!
Or the popularity of topic podcasts versus lecture series.

*The Little Typer* was smooth sailing right up to the very start of Chapter 9, titled *Double Your Money, Get Twice as Much*.
I had to read chapters 7 (introducing inductive functions) and 8 (introducing inductive proofs of equality) twice, but they yielded easily to careful consideration.
Not so for chapter 9, with its introduction of the seemingly esoteric `replace` function, which gave the impression we had blasted off into the type theory stratosphere.
While searching online I saw that others (1 2) had also found this chapter notably difficult.
After mulling it over for a few days and asking a question on CSSE, I’ve finally come to understand `replace` - and it turned out to be really simple!
So much so that I’d like to humbly submit the explanation in this blog post as a missing prelude to the chapter.
Bookmark this page if you’re planning to read the book.
If I weren’t on sabbatical I think I would have tapped out here, and it would be a shame for that to happen to someone else - it’s a really good book!

## Summary for the tourists

I’ll write a quick summary of dependent type theory for those who aren’t currently reading the book.
Dependent types are types that depend on things that are not types.
This explanation sounds like nonsense without a concrete example, so here’s the one that’s always used - vectors.
In most programming languages you declare a vector of type `vec<int>`.
The vector is a generic type you can parameterize with another type, so `vec<int>` is a different type than `vec<float>`.
Here, the size of the vector is only known at runtime.
Consider a slight modification, where you declare the size of the vector as part of the type: `vec<int, 3>`.
The type `vec<int, 3>` is a different type from `vec<int, 4>`.
This new vector type *depends on* a value, some natural number, that is not itself a type.
With this definition we can restrict functions like `head` to not allow vectors of type `vec<int, 0>`.
If you try to call `head` with an empty vector you’ll get a compilation error instead of a runtime error.
You can also give functions fancier type signatures; `append` maps `vec<int, n>` to `vec<int, n+1>`, for example.
Dependent types aren’t yet widely used, but can be found in languages like Agda, Idris, and ATS.
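The book works in Pie, but the same sketch can be written in Lean 4, which supports dependent types natively. The following is a minimal illustration of my own (the names `Vect`, `head`, and `snoc` are not from the book):

```lean
-- A length-indexed vector: the Nat index is a value, not a type.
inductive Vect (α : Type) : Nat → Type where
  | nil  : Vect α 0
  | cons : α → Vect α n → Vect α (n + 1)

-- head only accepts non-empty vectors; a `Vect α 0` argument is a type error.
def Vect.head : Vect α (n + 1) → α
  | .cons x _ => x

-- Appending one element maps a `Vect α n` to a `Vect α (n + 1)`.
def Vect.snoc : Vect α n → α → Vect α (n + 1)
  | .nil,       y => .cons y .nil
  | .cons x xs, y => .cons x (xs.snoc y)
```

Here `Vect.head Vect.nil` fails to type check, so the empty-vector mistake is caught at compile time rather than at runtime.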

Dependent type checkers often aren’t smart enough to automatically figure out whether your input to `head` is a `vec<int, 0>` or not.
So you have to give them some help, which ultimately takes the form of a formal proof that the value passed to `head` is not a `vec<int, 0>`.
These proofs are often fairly simple, and the popularity of Rust shows there is some appetite for pre-compilation hoop-jumping.
Still, the real use case of dependent types is making it so you can publish a library and people will be forced to write a formal proof before they’re allowed to call it.
This appeals to a certain sort of personality, and you could easily believe dependent types were cooked up by a grad student with a vendetta against coding bootcamp grads.
In all seriousness there’s a whole underdeveloped frontier of API design here; it’s critical to choose your function types so that they’re easy to target and use in proofs!
It is also possible to call dependently-typed libraries from languages without dependent types by converting the types into runtime assertions.

The connection between type checking and formal proofs runs deep.
Writing a type checker for even a language with a very simple type system feels a lot like you’re writing an automated theorem prover.
In fact, one reason dependent types are so interesting is because they lend themselves well to proving general mathematical theorems!
You can write a theorem *as a type*, and then writing a program which constructs a value *of that type* is equivalent to proving that theorem.
The proof is checked with the same logic the type checker uses when analyzing ordinary programs.
This proof/program equivalence is widely known as the Curry-Howard correspondence - of course, while all proofs are programs (and vice-versa), each instance isn’t necessarily a useful example of the other (if you have thoughts about this please answer my CSSE question!).
Several proof assistants like Lean and Coq use dependent types.
This proof/type-checking correspondence works so well that I’ve put a decent amount of time into using Lean without ever once realizing I was writing a program to construct a value with the type of the theorem I was proving!
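As a tiny illustration of the correspondence, here is transitivity of equality in Lean 4 (my own example, not from the book): the type of `eqTrans` states the theorem, and the body is a program constructing a value of that type.

```lean
-- The type is the theorem: if a = b and b = c, then a = c.
-- The body is the proof: a program producing a value of that type.
def eqTrans {α : Type} {a b c : α} (hab : a = b) (hbc : b = c) : a = c :=
  hab.trans hbc
```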

## The Dialogue

Insert this dialogue directly at the start of chapter 9.
I was considering writing some HTML to mimic the book’s two-column format but it would be difficult to make that responsive for mobile users.
The two people in this dialogue are the teacher **T** and the student **S**.

**T:** In chapter 8 we proved a number of `=`-statements, including the simple `+1=add1` and the more difficult `incr=add1`.
What did we gain from proving these?

**S:** It’s nice to know whether things are true! Also, the act of proving something is itself satisfying.

**T:** It sure is.
We also gained something else: we can use these `=`-statements as tools to prove other `=`-statements!
Similar to an old-school adventure game, overcoming each challenge leaves us stronger and with more tools to tackle further challenges.

**S:** I certainly do feel myself becoming more capable.
How can we use `+1=add1` and `incr=add1` as tools to prove other `=`-statements?

**T:** Let’s prove another `=`-statement:

```
(claim incr=+1
  (Π ((n Nat))
    (= Nat (incr n) (+ 1 n))))
```

How would we prove this?

**S:** Since the neutral expressions `(incr n)` and `(+ 1 n)` don’t have the same normal form when evaluated, I guess we need to prove it inductively with `ind-Nat`, similar to the proof of `incr=add1`.

**T:** Don’t be so hasty!
First try to informally convince yourself `incr=+1` is correct.
What does your reasoning look like?

**S:** From `+1=add1` we know `(= Nat (+ 1 n) (add1 n))`, and from `incr=add1` we know `(= Nat (incr n) (add1 n))`.
So it seems obvious that `(= Nat (incr n) (+ 1 n))` is true.

**T:** Good, this is a basic property of equality called *transitivity*.
If A equals B, and B equals C, then A must equal C.
To use transitivity in Pie, we must use a new eliminator called `replace`.
Just like `cong`, `replace` is an eliminator for `=`.
`replace` takes three parameters: `target`, `motive`, and `base`.

**S:** What does `replace` do, exactly? And what are the values of those parameters?

**T:** `replace` rewrites part of an expression according to a `=`-statement known to be true.
`target` is the `=`-statement you want to use for the rewrite; `base` is the expression in which the rewrite is to occur.

**S:** What about `motive`?

**T:** `motive` is the only really tricky part of `replace`.
`motive` is how you describe what part of `base` you want to be rewritten.
You can be an expression surgeon, replacing only one small part of the expression with something else!
We’ll put off examining the precise workings of `motive` for a bit.

**S:** Okay.
So we want to use our proofs of `+1=add1` and `incr=add1` as parameters to `replace` to form a proof of `incr=+1`.
How do we do that?
What do we start with as the `base`?

**T:** First we need to prove one more quick theorem.
We have `+1=add1`; what about `add1=+1`?

```
(claim add1=+1
  (Π ((n Nat))
    (= Nat (add1 n) (+ 1 n))))
```

**S:** This seems obvious.
Isn’t equality symmetric?
Anyway, since the normal form of `(+ 1 n)` is `(add1 n)`, the proof of `add1=+1` is the same as for `+1=add1`:

```
(define add1=+1
  (λ (n)
    (same (add1 n))))
```

Why did we need to do this?

**T:** Maybe equality is symmetric, but we aren’t quite ready to use that fact yet.
Thanks for proving `add1=+1`; remember how the parameters to the `=` type constructor were called `(= X from to)`?
`replace` only works by rewriting expressions matching `from` to match `to`, not vice-versa.

**S:** I was wondering why they were called that.
So how about that `base`?
Is it something like `(same (incr n))`?

**T:** I suggest we start with `(incr=add1 n)`, a value with type `(= Nat (incr n) (add1 n))`.
You can use `=`-statements as a `base` for the expression you’re building, in addition to using them as a `target` to transform other expressions.

```
(define incr=+1
  (λ (n)
    (replace
      ________
      ________
      (incr=add1 n))))
```

The `________` are placeholders for the `target` and `motive`.

**S:** I see what we need to do.
We must rewrite the `to` of `(incr=add1 n)`’s type to be `(+ 1 n)`.
Then we will have constructed a value proving `incr=+1`.
Now that I think of it, isn’t `replace` different from any kind of function we’ve seen before?

**T:** Different how?

**S:** All the functions we’ve seen so far at least transform the value they’re given.
Like `incr` adds one to any `Nat` you give it, although the type isn’t modified - the output is still a `Nat`.
Then we saw functions like `tail` that both transform the `Vec` passed into them, *and* change its type - by decreasing the number of elements it contains.
But `replace` doesn’t transform the value of `base` at all!
It only modifies its type!
That’s kind of spooky.

**T:** It sounds strange when you put it like that.
Functions such as `replace` are more like annotations that help the type-checking process than operations that affect the output value of a function.
Just something to get used to!
Anyway, what should be the `target` of our `replace` function?

**S:** Well, we need to rewrite the expression `(add1 n)` to be `(+ 1 n)`.
So it should probably be `(add1=+1 n)`, which has type `(= Nat (add1 n) (+ 1 n))`.
The `from` of `(add1=+1 n)` is `(add1 n)`, and the `to` is `(+ 1 n)`, which meets our requirements.

```
(define incr=+1
  (λ (n)
    (replace
      (add1=+1 n)
      ________
      (incr=add1 n))))
```

Now all that’s missing is the `motive`.

**T:** Your target is correct; it’s time to go over the `motive`.
This is tricky, but we can do it.
If `target` has type `(= X from to)`, then `motive` is some function `(→ X U)` such that:

- `base` is a `(motive from)`, and
- `(replace target motive base)` is a `(motive to)`.

Remember that the purpose of `motive` is to identify what part of the `base` expression should be rewritten, and `U` means the type of types.

**S:** You’re right, it is tricky.
So the output of `motive` is itself a type.
I guess it makes sense that the entire result of `replace` is of type `(motive to)`, but it’s confusing that `base` is a `(motive from)`.
The `motive` function seems very interconnected with everything.
Maybe an example `motive` would help?

**T:** Well, here’s the motive we’re looking for; try it out:

```
(claim mot-incr=+1
  (→ Nat Nat
    U))
(define mot-incr=+1
  (λ (n)
    (λ (k)
      (= Nat (incr n) k))))
```

**S:** The type of `mot-incr=+1` is different than you described.
It’s a `(→ Nat Nat U)`, not a `(→ Nat U)` as promised.

**T:** Remember how partial application works, and that we’re operating inside a `Π`-expression.
Everything has to be parameterized with an additional `Nat` because we’re proving this equality for all `Nat`.
So `(mot-incr=+1 n)` is the `(→ Nat U)` you’re looking for.

**S:** Okay, that makes sense.
So the `from` of `(add1=+1 n)` is `(add1 n)`.
If we plug it into `(mot-incr=+1 n)` we should get the type of the `base`, `(incr=add1 n)`; let’s see:

```
((λ (n)
   (λ (k)
     (= Nat (incr n) k)))
 n (add1 n))
```

```
(= Nat (incr n) (add1 n))
```

**T:** That is indeed the type of the `base`, `(incr=add1 n)`.
What about applying `(mot-incr=+1 n)` to the `to` of `(add1=+1 n)`?

**S:** The `to` of `(add1=+1 n)` is `(+ 1 n)`, so:

```
((λ (n)
   (λ (k)
     (= Nat (incr n) k)))
 n (+ 1 n))
```

```
(= Nat (incr n) (+ 1 n))
```

Which is indeed the desired output type of this `replace` function, and the ultimate type we’re trying to prove!
We did it!
It seems complicated, but is surprisingly simple: we just stick a parameter where we want the rewrite to happen.

**T:** Exactly.
Finally, we have the full proof:

```
(define incr=+1
  (λ (n)
    (replace
      (add1=+1 n)
      (mot-incr=+1 n)
      (incr=add1 n))))
```

Now that you know how to use `replace`, your proofs will become much shorter.
In addition to equalities you’ve already proved, there’s another common source of `base` and `target` parameters for `replace`.
Do you know what it is?

**S:** No, what is it?

**T:** The partial answer you get in the `step` function of `ind-Nat`, also known as the inductive hypothesis if you’re a mathematician.
You’ll often use the inductive hypothesis to rewrite an expression to get the type you want.
In fact, let’s try that out now - rewrite the proof of `incr=add1` from chapter 8 to use `replace` in the `step` function instead of `cong`!

END

At this point you can continue on from frame 12 of chapter 9, where you do indeed rewrite the proof of `incr=add1` using `replace` in the `step` function.

## Conclusion

I think the above dialogue addresses the three points of confusion with `replace` as presented in the book:

- Insufficient motivation - what does `replace` give us that we don’t already have?
- Confusion over how `replace` only affects the value’s type, not the value itself
- Unclear explanations of `target`, `base`, and - most critically - `motive`
Ultimately I think the book’s explanation just rested too heavily on the difficult type signature of `replace` to explain how it worked.
An easy mistake for an expert to make, and a valuable skill for a beginner to acquire!
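For readers who go on to Lean, `replace` corresponds closely to `Eq.subst`: it takes a proof of `a = b` and a `motive`, and turns a proof of `motive a` into a proof of `motive b`. Here is the dialogue’s final proof restated in Lean 4 (the names are mine, and `incr` is defined directly rather than with the book’s `iter-Nat`):

```lean
-- A stand-in for the book’s incr.
def incr (n : Nat) : Nat := n + 1

-- add1=+1 and incr=add1, restated over Lean’s Nat.
theorem add1_eq_plus1 (n : Nat) : n + 1 = 1 + n := Nat.add_comm n 1
theorem incr_eq_add1 (n : Nat) : incr n = n + 1 := rfl

-- replace ≈ Eq.subst: the motive marks where the rewrite happens,
-- just as mot-incr=+1 puts the parameter k where (+ 1 n) should go.
theorem incr_eq_plus1 (n : Nat) : incr n = 1 + n :=
  Eq.subst (motive := fun k => incr n = k) (add1_eq_plus1 n) (incr_eq_add1 n)
```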

I hope you found this dialogue helpful and can now enjoy the rest of the book.