Types Provide Confidence
Learning Objectives
- You know what types are.
- You know the challenges related to type checking lambda calculus.
- You know of typing contexts.
- You know of hypothetical typing.
- You understand the typing rules of simply typed lambda calculus.
Introduction
In this chapter, we will study how a type system could be added on top of lambda calculus and develop the theory of the simply typed lambda calculus. To start off, let’s think about types in the language developed in the previous chapter — a lambda calculus style language with booleans and natural numbers.
As a reminder, the type quantifier ∀ binds a type variable and makes the type polymorphic, i.e. able to work with any type that is provided. We will use the double colon notation (::) to denote the type of the term in question.
Each of these types except for the last is a simple type — the last type contains a quantifier, so it is not simple.
What’s common between these terms is that they are closed terms.
Terms like
are not closed, but we can (and have to) still reason about their types.
For example, plus_one has to have the type of a function, and happy and sad have to have the same type.
Type Checking: A Motivation
Let’s motivate the topic of this chapter by considering the following question.
The answers to this are manifold, although quite subjective. Here is what the course authors find most important.
- Expressiveness of the language. A programming language without a static type checker lets the user write functions which can accept and return data of different (runtime) types without enforcing a contract. Such languages are prone to crashes when doing computations with an unexpected value. As an example, we can write the double function in Python as follows.

  def double(x): return 2 * x

  The function can be called with values ranging over integers, floating-point numbers, booleans, and even strings and lists. The programmer has to be aware of these details of the language to have confidence that their code works correctly. The code itself does not express its range of functionality and instead hides the functionality behind the multiplication operator. As the multiplication operator can be defined to return an arbitrary value, there is no requirement that double(x) has the same runtime type/class as x.

  Compare this with Rust, where the translation of the double function can only be called with integers.

  fn double(x: i32) -> i32 { 2 * x }

  There is no hidden behavior for what happens when calling the function with booleans or lists. If we wanted to emulate the Python function's behavior more closely, we would need to use traits (type classes) or sum types. The sum type, or enum, version of the code could look as follows.

  enum IntOrString {
      Int(i32),
      String(String),
  }

  fn double(x: IntOrString) -> IntOrString {
      match x {
          IntOrString::Int(n) => IntOrString::Int(2 * n),
          IntOrString::String(s) => IntOrString::String(format!("{s}{s}")),
      }
  }

  But the type of the function, double : IntOrString -> IntOrString, doesn't express a key invariant: the variant is preserved by the function. As this is quite an important feature of the function, how could we make the type reflect the behavior better?

  As an example of the expressiveness of Rust's type system, we solve this "expression problem" with the use of a trait called Doubleable.

  trait Doubleable {
      fn double(self) -> Self;
  }

  impl Doubleable for i32 {
      fn double(self) -> Self { 2 * self }
  }

  impl Doubleable for String {
      fn double(self) -> Self { format!("{}{}", self, self) }
  }

  // This is exactly `Doubleable::double`
  fn double<T: Doubleable>(t: T) -> T { t.double() }

  Now there is no question that double(x) has the same type as x, and the behavior can be extended to other types as necessary. Types are therefore a strong mechanism for enforcing simple contracts such as what kind of data a function accepts and returns. Polymorphism and type classes are a way to generalize a function to work with various concrete types: they express that a function can work with any type (possibly with some trait bounds).

- Confidence at runtime. Types eliminate a class of bugs where an incompatible runtime value is passed to a function. Such bugs can be hard to find if only the range of values seen in normal/successful operation has been tested to work. Edge cases and error handling might be harder to test and, what is worse, might involve runtime values outside the range encountered during normal/successful operation.

  With types, it's usually easier to analyze what code does. For example, in a dynamically typed language, which function/method is called depends on the runtime object in question. But with static typing, we almost always know the concrete types involved, so it becomes easy to analyze which functions/methods are called.
- Compile-time guarantees and optimization. With a strong static type system we are able to impose stronger guarantees on the code during compilation. Generally speaking, this allows the compiler to generate more optimal code because it can make stronger assumptions. Let's explore a few examples of this by looking at the behavior of the Rust compiler.

  The example below shows a seemingly uninteresting block of Rust code matching on a boolean value.

  let b: bool = ...; // bool takes 8 bits of memory and is indistinguishable from u8 at runtime
  match b {
      true => ...,
      false => ...
  }

  What's interesting is to think about what the code would look like if we couldn't guarantee that b is a boolean: what if it was instead any value that fits into a byte? That would mean that our match would no longer be exhaustive. Languages that cannot guarantee type preservation, i.e. dynamically typed programming languages, have to emit extra code in order to make sure that the runtime types match the operation.

  Rust's type system goes way beyond even most other static type systems, and some of its guarantees are particularly useful for optimization purposes. For example, Rust's type system, and more importantly the borrow checker, guarantee that a mutable reference has no concurrent writers, which allows the compiler to read the value of b only once in a function that uses it twice. This is something a C compiler cannot assume, so it must always generate two reads to maintain correctness.

- APIs and compatibility. As the programming language ecosystem evolves, libraries undergo various changes to their APIs. Types are an excellent tool for expressing an API and its invariants. If a library changes in a breaking way, its clients will (be more likely to) get a type error rather than encounter a runtime bug. A type error is also much easier to debug than a runtime error: to retry after making adjustments, one only needs to compile the project.
Simple Types
In part 4, in Type Systems and Type Checking, we added a type checker for conditional arithmetic, and types were just an enumeration with two variants.
enum Type {
Integer,
Boolean,
}
The language had both integers and booleans, but no functions. To describe the type of a function, we need a more sophisticated type system.
Let’s begin by defining what types are in the simply typed lambda calculus (STLC) with booleans.
Types in STLC (with any extensions) are simple, so they can't contain polymorphic type quantifiers.
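As a concrete sketch (our own hypothetical Rust representation, not the course's reference code), simple types can be modeled as a recursive enum:

```rust
// Simple types for STLC with booleans: either Bool, or an arrow
// (function) type from one type to another.
#[derive(Clone, Debug, PartialEq)]
enum Type {
    Bool,
    // Box is needed because the enum is recursive and Rust requires
    // each variant to have a known size.
    Arrow(Box<Type>, Box<Type>),
}

fn main() {
    // The type (Bool -> Bool) -> Bool:
    let t = Type::Arrow(
        Box::new(Type::Arrow(Box::new(Type::Bool), Box::new(Type::Bool))),
        Box::new(Type::Bool),
    );
    assert_ne!(t, Type::Bool);
    println!("type built ok");
}
```

The `Box` indirection means every value of this type is a finite tree of allocations, which matches the inductive (and therefore finite) definition of types.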
Infinite Types Are Illegal
Types are defined inductively, so they must be finite. The infinite type-like object

Bool → (Bool → (Bool → ⋯))

which comes for example from the recursive definition

T = Bool → T

is not a type.
As the dynamics of STLC are “the same” as in lambda calculus, we begin by studying its type checking.
Type Checking
The relation that a term t has the type T is denoted t : T. Here are some STLC terms and their types.
How should we define this typing relation? As always, because terms have a recursive structure, a good first guess is inductively, i.e. recursively.
Let’s consider the following term.
As a lambda abstraction with parameter type Bool, we can immediately say that its type is an arrow type with domain Bool. However, to infer the type of the codomain, we need to type check the body, a mere variable term. Clearly we need some way to keep track of variables in a context-like fashion.
Hypothetical Typing with Context
As a reminder, contexts were defined in Variables and Abstractions as follows.
However, for type inference, the set of variable names is not enough; we also need the types of the variables, which are provided as part of the abstractions.
Now, we can answer the question “what is the type of a variable” as follows.
The above definition requires that the type of a variable comes from the last occurrence of the variable in the context. This is important to notice in terms involving variable shadowing, i.e. nested abstractions with the same variable name.
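One way to realize this, as a sketch with our own names `Ctx` and `lookup` (this is not the course's reference code), is an ordered list of bindings that is searched from the end:

```rust
// A typing context as an ordered list of (variable, type) bindings.
#[derive(Clone, Debug, PartialEq)]
enum Type {
    Bool,
    Arrow(Box<Type>, Box<Type>),
}

type Ctx = Vec<(String, Type)>;

// The type of a variable comes from the *last* occurrence in the
// context, so we search in reverse. This models variable shadowing.
fn lookup<'a>(ctx: &'a Ctx, name: &str) -> Option<&'a Type> {
    ctx.iter().rev().find(|(n, _)| n == name).map(|(_, ty)| ty)
}

fn main() {
    let ctx: Ctx = vec![
        ("x".to_string(), Type::Bool),
        // A nested abstraction with the same name shadows the outer x.
        ("x".to_string(), Type::Arrow(Box::new(Type::Bool), Box::new(Type::Bool))),
    ];
    // The innermost (last) binding wins.
    assert_eq!(
        lookup(&ctx, "x"),
        Some(&Type::Arrow(Box::new(Type::Bool), Box::new(Type::Bool)))
    );
    // Unbound variables have no type.
    assert_eq!(lookup(&ctx, "y"), None);
    println!("lookup ok");
}
```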
From now on, we refer to typing contexts using just the word “context” as it’s always “clear from the context” which kind of context we are working with.
Type checking is often used to refer to the verification process that takes in a term and a type and verifies that the term really has the provided type.
On this course we use type checking to refer to the decision problem of whether a term is well-typed or not. "Does this code type check?" is a decision problem, as it has a yes/no answer.
Type inference is the term we use for the computational procedure that derives the type from a term. If type inference fails, the term is ill-typed. Type checking can then use type inference to check whether the term is well-typed.
On this course, the terms are always from the simply typed lambda calculus, so type inference doesn't need to work out the types of function parameters.
Hypothetical Typing Rules
To define the typing relation t : T, we define an auxiliary relation Γ ⊢ t : T for hypothetical typing, which keeps track of the context Γ.
The base rules governing the typing of true and false terms are straightforward.
As a general rule, Γ can be any (typing) context from now on.
The typing rule for if-then-else requires that the condition is a boolean, and that the if-true and if-false subterms have the same type T.
Here’s an example instance of the Ite rule.
The second and third premises are instances of the base rules, but the first premise contains a variable.
This corresponds with the "type of a variable" defined earlier; we just need to express it as a typing rule. To do that, we first need to express how to access a variable from the context. For this, we define the lookup relation x : T ∈ Γ.
As mentioned earlier, we need to search the context in reverse, which is easy enough to express as follows.
The CtxVar rule says that x : T ∈ (Γ, x : T), i.e. the last binding immediately gives the variable its type.
The CtxCons rule says that if x : T ∈ Γ and x ≠ y (to require that we indeed have the last occurrence of x in the context), then we get x : T ∈ (Γ, y : S).
Now the "type of a variable" rule, Var, gives us Γ ⊢ x : T if x : T ∈ Γ.
Application Typing
The typing rule for an application term t u comes from our intuition of how function calls are typed. For t u to have type T, t should be a function from some type U to T, and the argument u should have type U.
If the type of u is not U, the rule can't be applied and typing gets stuck. Likewise if t doesn't have an arrow type at all. As t u then can't have a type, we say it is ill-typed.
Abstraction Typing
To motivate the typing rule for a lambda abstraction, we observe that an application term whose function is a lambda abstraction would have the following typing derivation. As a reminder, a typing derivation is the tree structure consisting of instances of (hypothetical) typing rules.
To fill in the ?, we need a typing rule that matches the lambda abstraction term.
This would be the typing rule for abstractions.
To type check an abstraction, we check whether the body of the abstraction is well-typed in the context extended with the name of the bound variable and its type.
In this case the outer context is empty and the bound variable with its type is x : Bool, so we recursively infer the type of the body in the context x : Bool.
This branch is then conveniently closed by the Var rule.
With this typing rule, called Abs, we could fill in the example as follows:
And so we present the general form of the rule.
You can verify that the instance used in the example matches the rule.
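Putting the rules together, hypothetical typing can be read as a recursive inference procedure. The following sketch (the names `Term` and `infer` are ours, and this is one possible representation rather than the course's code) mirrors the base rules together with Ite, Var, App, and Abs:

```rust
#[derive(Clone, Debug, PartialEq)]
enum Type {
    Bool,
    Arrow(Box<Type>, Box<Type>),
}

#[derive(Clone, Debug)]
enum Term {
    True,
    False,
    Ite(Box<Term>, Box<Term>, Box<Term>),
    Var(String),
    // The parameter carries its type annotation, as in STLC.
    Abs(String, Type, Box<Term>),
    App(Box<Term>, Box<Term>),
}

type Ctx = Vec<(String, Type)>;

// Hypothetical typing: `infer(ctx, t)` returns T such that ctx ⊢ t : T,
// or None if t is ill-typed in ctx.
fn infer(ctx: &mut Ctx, t: &Term) -> Option<Type> {
    match t {
        Term::True | Term::False => Some(Type::Bool),
        Term::Ite(c, a, b) => {
            // The condition must be Bool and the branches must agree.
            if infer(ctx, c)? != Type::Bool {
                return None;
            }
            let ta = infer(ctx, a)?;
            let tb = infer(ctx, b)?;
            if ta == tb { Some(ta) } else { None }
        }
        // Var: the last occurrence in the context wins.
        Term::Var(x) => ctx.iter().rev().find(|(n, _)| n == x).map(|(_, ty)| ty.clone()),
        // Abs: type the body in the extended context, then form the arrow.
        Term::Abs(x, ty, body) => {
            ctx.push((x.clone(), ty.clone()));
            let body_ty = infer(ctx, body);
            ctx.pop();
            Some(Type::Arrow(Box::new(ty.clone()), Box::new(body_ty?)))
        }
        // App: the function must have an arrow type whose domain
        // matches the argument's type.
        Term::App(f, u) => {
            let tf = infer(ctx, f)?;
            let tu = infer(ctx, u)?;
            match tf {
                Type::Arrow(dom, cod) if *dom == tu => Some(*cod),
                _ => None,
            }
        }
    }
}

fn main() {
    // λ(x : Bool). if x then false else true  :  Bool -> Bool
    let not = Term::Abs(
        "x".into(),
        Type::Bool,
        Box::new(Term::Ite(
            Box::new(Term::Var("x".into())),
            Box::new(Term::False),
            Box::new(Term::True),
        )),
    );
    assert_eq!(
        infer(&mut Vec::new(), &not),
        Some(Type::Arrow(Box::new(Type::Bool), Box::new(Type::Bool)))
    );
    println!("infer ok");
}
```

Type checking as a decision problem is then just "does `infer` return `Some` in the empty context?".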
Using hypothetical typing, we can define the regular “non-hypothetical” typing relation “has type” as simply a special case with the empty context.
When a term doesn’t have a type in the empty context, we say that it’s ill-typed.
Type Erasure
As simply typed lambda calculus is more focused on type checking, its dynamics can be defined using the dynamics of untyped lambda calculus which we have already defined in the previous part. However, we only briefly visit this idea, and from the next chapter onwards, we define dynamics for STLC using call-by-value semantics.
Let’s denote the set of STLC terms (with booleans) with and use to denote the set of terms of untyped lambda calculus (with booleans). We define a translation which erases all types from a term. The translation works recursively, and for lambda abstractions, acts as follows
To help distinguish between the different sets, we draw lines over the terms that are from the set , and underlines for terms from the set . So given a we get an .
For other terms, simply reconstructs the term recursively.
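As a sketch (the two term types and the function name `erase` are our own choices, not fixed by the course), the translation can be written as a recursive function from typed terms to untyped terms:

```rust
// Arrow types are omitted from this sketch; only the annotation on
// lambda abstractions matters for erasure.
#[derive(Clone, Debug, PartialEq)]
enum Type {
    Bool,
}

// Typed STLC terms: abstractions carry a parameter annotation.
#[derive(Clone, Debug)]
enum Typed {
    True,
    False,
    Var(String),
    Abs(String, Type, Box<Typed>),
    App(Box<Typed>, Box<Typed>),
}

// Untyped lambda terms: abstractions carry no annotation.
#[derive(Clone, Debug, PartialEq)]
enum Untyped {
    True,
    False,
    Var(String),
    Abs(String, Box<Untyped>),
    App(Box<Untyped>, Box<Untyped>),
}

// erase(λ(x : T). t) = λx. erase(t); other terms are rebuilt as-is.
fn erase(t: &Typed) -> Untyped {
    match t {
        Typed::True => Untyped::True,
        Typed::False => Untyped::False,
        Typed::Var(x) => Untyped::Var(x.clone()),
        // The annotation T is dropped here.
        Typed::Abs(x, _ty, body) => Untyped::Abs(x.clone(), Box::new(erase(body))),
        Typed::App(f, u) => Untyped::App(Box::new(erase(f)), Box::new(erase(u))),
    }
}

fn main() {
    // erase((λ(x : Bool). x) true) = (λx. x) true
    let app = Typed::App(
        Box::new(Typed::Abs("x".into(), Type::Bool, Box::new(Typed::Var("x".into())))),
        Box::new(Typed::True),
    );
    assert_eq!(
        erase(&app),
        Untyped::App(
            Box::new(Untyped::Abs("x".into(), Box::new(Untyped::Var("x".into())))),
            Box::new(Untyped::True),
        )
    );
    println!("erase ok");
}
```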
Type erasure is therefore quite similar to desugaring as it’s also a translation between different languages.
To define the dynamics of STLC, we first erase types and then evaluate/multistep the untyped lambda term.
Type Safety Revisited
(Syntactic) type safety has two main properties: preservation and progress. These are true theorems in the simply typed lambda calculus (with the boolean extension as well).
What it means to be a value is not defined here explicitly. If one defined it as "normal form", progress would always be true, and therefore a useless property. A better notion of "value" is tied to "canonical forms", which are discussed later.
Suppose we are working with a term which is well-typed (has some type in the empty context). Let's analyze what happens during its evaluation if, instead of erasing types, we reduce the typed STLC terms directly.
If the well-typed term we are evaluating happens to be an application term t u, both subterms t and u must be closed. Therefore, if t is an abstraction λ(x : T). t′, then, as u is closed, the substitution t′[x := u] must be legal.
Notice that we are only talking about a top-level application.
The above logic doesn’t work with applications inside abstractions as the context there is not empty.
By dropping the Abs reduction rule, we ensure that all substitutions that happen during evaluation have to be legal.
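Under this restriction, the naïve algorithm is a straightforward recursion that stops at shadowing binders. A minimal sketch with our own hypothetical `Term` representation (because the substituted term is closed, no capture can occur and no renaming is needed):

```rust
#[derive(Clone, Debug, PartialEq)]
enum Type {
    Bool,
}

#[derive(Clone, Debug, PartialEq)]
enum Term {
    True,
    False,
    Var(String),
    Abs(String, Type, Box<Term>),
    App(Box<Term>, Box<Term>),
}

// Replace every free occurrence of `x` in `t` with the closed term `v`.
fn subst(t: &Term, x: &str, v: &Term) -> Term {
    match t {
        Term::Var(y) if y == x => v.clone(),
        Term::Var(_) | Term::True | Term::False => t.clone(),
        // An inner binder with the same name shadows x: stop here.
        Term::Abs(y, _, _) if y == x => t.clone(),
        Term::Abs(y, ty, body) => {
            Term::Abs(y.clone(), ty.clone(), Box::new(subst(body, x, v)))
        }
        Term::App(f, u) => {
            Term::App(Box::new(subst(f, x, v)), Box::new(subst(u, x, v)))
        }
    }
}

fn main() {
    // Substituting true for x under a binder for a different variable y:
    let body = Term::Abs("y".into(), Type::Bool, Box::new(Term::Var("x".into())));
    assert_eq!(
        subst(&body, "x", &Term::True),
        Term::Abs("y".into(), Type::Bool, Box::new(Term::True))
    );
    // x is shadowed by its own binder, so nothing is replaced.
    let shadowed = Term::Abs("x".into(), Type::Bool, Box::new(Term::Var("x".into())));
    assert_eq!(subst(&shadowed, "x", &Term::True), shadowed);
    println!("subst ok");
}
```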
The definition of when a term is "Closed Under a Context Γ" was given in part 5: Variables and Abstractions. The definition is straightforward to extend to STLC terms.
Next, we prove a key lemma that’s used to prove that substitution with well-typed terms is always legal. The proof demonstrates working with typing derivations and their inductive structure. It’s not strictly necessary to understand, but gives a picture of what proofs involving typing rules look like.
Feel free to skip the proof on a first reading.
As a side note about the above proof: as the typing rules match the different kinds of terms one-to-one, an alternative proof could use induction on the term.
From this, we immediately get the following special case for when the context is empty.
Using the corollary, we can prove that substitution with well-typed terms is always legal. We don’t explicitly define what it means for substitution to be legal for STLC terms, as it’s nearly identical to the untyped case (which you can find in part 5: Variables and Abstractions).
By the preservation theorem, if we step a well-typed term, it remains well-typed. Therefore, during evaluation, the top-level term remains well-typed. As long as we don't reduce inside abstractions, substitution only happens with closed values. This means that we can implement substitution with the naïve algorithm, which saves us from quite a bit of hassle. In the next chapter we do exactly this and implement call-by-value (CBV) semantics for STLC, which, unlike full β-reduction, doesn't reduce under abstractions.