Multiple Steps and Normalization
Learning Objectives
- You understand multi-step reduction as the reflexive transitive closure of single-step reduction.
- You can distinguish between weak and strong β-reduction and their normal forms.
- You understand β-equivalence and know that normalization is undecidable in general.
- You can implement strong β-reduction and parse lambda calculus terms.
Introduction
In the last chapter, our weak β-reduction couldn't reduce application subterms in the argument position. For example, the following terms are in weak-head normal form.
In this chapter we will study a stronger notion of β-reduction which allows reducing every possible subterm. This makes the previous three terms have the same normal form according to the stronger β-reduction.
Before doing that, let's explore the concept of reducing a term in multiple steps.
Multiple Small Steps
Now that we have a single-step weak/call-by-name reduction →, we can extend that to the notion of a multi-step reduction, which lets us do zero or more reductions in one go. Multi-step reduction is denoted with →* and is defined as the reflexive transitive closure of →.
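As a sketch of how this looks in code, the following self-contained Rust program implements multi-step reduction as a loop that applies the single-step function until no rule fires. The de Bruijn-indexed `Term` type and the helper names (`shift`, `subst`, `beta`, `step`, `multistep`) are illustrative assumptions and may differ from the book's actual implementation:

```rust
// De Bruijn-indexed lambda terms: Var(0) refers to the nearest binder.
#[derive(Clone, Debug, PartialEq)]
enum Term {
    Var(usize),
    Abs(Box<Term>),
    App(Box<Term>, Box<Term>),
}
use Term::*;

// Shift free variables >= cutoff by d (bookkeeping for substitution).
fn shift(t: &Term, d: isize, cutoff: usize) -> Term {
    match t {
        Var(k) if *k >= cutoff => Var((*k as isize + d) as usize),
        Var(k) => Var(*k),
        Abs(b) => Abs(Box::new(shift(b, d, cutoff + 1))),
        App(f, a) => App(Box::new(shift(f, d, cutoff)), Box::new(shift(a, d, cutoff))),
    }
}

// Capture-avoiding substitution of s for variable j in t.
fn subst(t: &Term, j: usize, s: &Term) -> Term {
    match t {
        Var(k) if *k == j => s.clone(),
        Var(k) => Var(*k),
        Abs(b) => Abs(Box::new(subst(b, j + 1, &shift(s, 1, 0)))),
        App(f, a) => App(Box::new(subst(f, j, s)), Box::new(subst(a, j, s))),
    }
}

// Contract the redex (\. body) arg.
fn beta(body: &Term, arg: &Term) -> Term {
    shift(&subst(body, 0, &shift(arg, 1, 0)), -1, 0)
}

// One weak call-by-name step: AppAbs first, otherwise App1.
fn step(t: &Term) -> Option<Term> {
    match t {
        App(f, a) => match f.as_ref() {
            Abs(b) => Some(beta(b, a)),
            _ => step(f).map(|f2| App(Box::new(f2), a.clone())),
        },
        _ => None, // variables and abstractions are weak normal forms
    }
}

// Multi-step reduction: zero or more single steps.
fn multistep(mut t: Term) -> Term {
    while let Some(next) = step(&t) {
        t = next;
    }
    t
}

fn main() {
    let id = Abs(Box::new(Var(0)));               // \x. x
    let k = Abs(Box::new(Abs(Box::new(Var(1))))); // \x. \y. x
    // (\x. x) (\x. \y. x)  reduces in multiple steps to  \x. \y. x
    assert_eq!(multistep(App(Box::new(id), Box::new(k.clone()))), k);
    println!("ok");
}
```

Note that a term already in normal form takes zero iterations of the loop, matching the reflexivity of the closure.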
Termination of Evaluation
Consider the following term.

(λx. x x) (λx. x x)

As λx. x x is a value we cannot apply the App1 rule, but we can apply the AppAbs rule, substituting λx. x x for x in x x. This gives back

(λx. x x) (λx. x x)

which means it reduces to itself!
This term is called the Omega-combinator (Ω) and is an example of a term which has no normal form.
Trying to multi-step a term with no normal form in the Rust implementation makes the computation diverge unless we constrain it using a maximum number of steps.
Unfortunately, this means that the Rust implementation is somewhat faulty: the multistep function is not total, and hence not a mathematical function.
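To make the divergence concrete, here is a minimal sketch of a fuel-bounded multi-step function, again over an illustrative de Bruijn-indexed `Term` type with assumed helper names (they may differ from the book's implementation). Running it on Ω exhausts the step budget, while normalizing terms finish with fuel to spare:

```rust
// De Bruijn-indexed lambda terms: Var(0) refers to the nearest binder.
#[derive(Clone, Debug, PartialEq)]
enum Term {
    Var(usize),
    Abs(Box<Term>),
    App(Box<Term>, Box<Term>),
}
use Term::*;

// Shift free variables >= cutoff by d.
fn shift(t: &Term, d: isize, cutoff: usize) -> Term {
    match t {
        Var(k) if *k >= cutoff => Var((*k as isize + d) as usize),
        Var(k) => Var(*k),
        Abs(b) => Abs(Box::new(shift(b, d, cutoff + 1))),
        App(f, a) => App(Box::new(shift(f, d, cutoff)), Box::new(shift(a, d, cutoff))),
    }
}

// Capture-avoiding substitution of s for variable j in t.
fn subst(t: &Term, j: usize, s: &Term) -> Term {
    match t {
        Var(k) if *k == j => s.clone(),
        Var(k) => Var(*k),
        Abs(b) => Abs(Box::new(subst(b, j + 1, &shift(s, 1, 0)))),
        App(f, a) => App(Box::new(subst(f, j, s)), Box::new(subst(a, j, s))),
    }
}

// Contract the redex (\. body) arg.
fn beta(body: &Term, arg: &Term) -> Term {
    shift(&subst(body, 0, &shift(arg, 1, 0)), -1, 0)
}

// One weak call-by-name step: AppAbs first, otherwise App1.
fn step(t: &Term) -> Option<Term> {
    match t {
        App(f, a) => match f.as_ref() {
            Abs(b) => Some(beta(b, a)),
            _ => step(f).map(|f2| App(Box::new(f2), a.clone())),
        },
        _ => None,
    }
}

// Multi-step with a fuel limit; None means the budget ran out (possible divergence).
fn multistep_bounded(mut t: Term, mut fuel: usize) -> Option<Term> {
    while let Some(next) = step(&t) {
        if fuel == 0 {
            return None;
        }
        fuel -= 1;
        t = next;
    }
    Some(t)
}

fn main() {
    // omega = (\x. x x) (\x. x x) steps to itself forever.
    let w = Abs(Box::new(App(Box::new(Var(0)), Box::new(Var(0)))));
    let omega = App(Box::new(w.clone()), Box::new(w));
    assert_eq!(multistep_bounded(omega, 1000), None);
    // A normalizing term finishes well within the budget.
    let id = Abs(Box::new(Var(0)));
    assert_eq!(multistep_bounded(App(Box::new(id.clone()), Box::new(id.clone())), 10), Some(id));
    println!("ok");
}
```

Returning `Option` makes the partiality explicit in the type: the caller must decide what a `None` (budget exhausted) means.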
In the next part, where we will study typed lambda calculi, we will get rid of this issue, at the cost of effectively turning our language back into a calculator that is not Turing-complete. Having a Turing-complete, computationally powerful language comes at the cost of losing normalization of all terms.
Normalization
Our definition of weak β-reduction focused on reducing the left subterm in an application but left some things to be desired. For instance, if the right subterm is reducible, we should reduce it. This leads to a smaller set of normal forms which are value-like terms.
Reminder: Weak Beta-Reduction Rules

(AppAbs)  (λx. t1) t2 → t1[t2/x]
(App1)    if t1 → t1' then t1 t2 → t1' t2
To get to the full "strong" β-reduction, we add two more rules. We reduce the term in the argument position of an application:

(App2)  if t2 → t2' then t1 t2 → t1 t2'
And we reduce terms inside abstractions:

(Abs)  if t → t' then λx. t → λx. t'
From now on, when referring to β-reduction, we mean the strong variant with the four rules AppAbs, App1, App2, and Abs.
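The four rules can be sketched as a single Rust function; the de Bruijn-indexed `Term` type and helper names are again illustrative assumptions, not necessarily the book's implementation. The rules are tried in the order AppAbs, App1, App2, Abs, which fixes one deterministic strategy even though the relation itself allows reducing any redex:

```rust
// De Bruijn-indexed lambda terms: Var(0) refers to the nearest binder.
#[derive(Clone, Debug, PartialEq)]
enum Term {
    Var(usize),
    Abs(Box<Term>),
    App(Box<Term>, Box<Term>),
}
use Term::*;

// Shift free variables >= cutoff by d.
fn shift(t: &Term, d: isize, cutoff: usize) -> Term {
    match t {
        Var(k) if *k >= cutoff => Var((*k as isize + d) as usize),
        Var(k) => Var(*k),
        Abs(b) => Abs(Box::new(shift(b, d, cutoff + 1))),
        App(f, a) => App(Box::new(shift(f, d, cutoff)), Box::new(shift(a, d, cutoff))),
    }
}

// Capture-avoiding substitution of s for variable j in t.
fn subst(t: &Term, j: usize, s: &Term) -> Term {
    match t {
        Var(k) if *k == j => s.clone(),
        Var(k) => Var(*k),
        Abs(b) => Abs(Box::new(subst(b, j + 1, &shift(s, 1, 0)))),
        App(f, a) => App(Box::new(subst(f, j, s)), Box::new(subst(a, j, s))),
    }
}

// Contract the redex (\. body) arg.
fn beta(body: &Term, arg: &Term) -> Term {
    shift(&subst(body, 0, &shift(arg, 1, 0)), -1, 0)
}

// One strong beta-reduction step, or None if t is in (strong) normal form.
fn strong_step(t: &Term) -> Option<Term> {
    match t {
        App(f, a) => {
            if let Abs(b) = f.as_ref() {
                return Some(beta(b, a)); // AppAbs
            }
            if let Some(f2) = strong_step(f) {
                return Some(App(Box::new(f2), a.clone())); // App1
            }
            strong_step(a).map(|a2| App(f.clone(), Box::new(a2))) // App2
        }
        Abs(b) => strong_step(b).map(|b2| Abs(Box::new(b2))), // Abs
        Var(_) => None,
    }
}

fn main() {
    // \x. (\y. y) x is a weak normal form, but strong reduction fires under the binder.
    let t = Abs(Box::new(App(Box::new(Abs(Box::new(Var(0)))), Box::new(Var(0)))));
    assert_eq!(strong_step(&t), Some(Abs(Box::new(Var(0))))); // \x. x
    println!("ok");
}
```

The example in `main` is exactly the kind of term the weak relation got stuck on: the redex sits under a λ, so only the Abs rule can reach it.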
Confluence
As we learned in part 1: Simplification and Normalization, rewriting systems (like β-reduction) can have the useful property of being confluent. As a reminder, this is the general definition of confluence of a relation.
It’s recommended to read through part 1: Simplification and Normalization now to refresh your memory on the subject.
We are naturally interested in the question: is β-reduction confluent? That is, given any two distinct terms t1 and t2 with a common ancestor t (a term such that t →* t1 and t →* t2), is there a common descendant of t1 and t2?
Let's demonstrate this diagrammatically by looking at the different β-reductions of the following term.
As it seems, the answer to the question is in fact yes: β-reduction is confluent. This statement is also called the Church-Rosser theorem, for which you can find a proof in chapter 4 of Selinger: Lecture Notes on the Lambda Calculus. Although we won't prove it, let's write it down into a nice box.
As we learned in part 1, a confluent relation is functional if the relation is also weakly normalizing (i.e. every term has a normal form). But as we know, Ω doesn't have a normal form, so β-reduction is not weakly normalizing.
However, confluence implies that normal forms are unique when they exist, which brings us to the next subject: β-equivalence.
When Are Lambda Terms Equal?
When two terms have the same normal form (according to β-reduction), we call them β-equivalent. β-equivalent terms behave in the same way and are practically indistinguishable from one another. β-equivalence is a weaker requirement than α-equivalence but still captures the idea of two terms being the same.
The definition of β-equivalence is simply the reflexive symmetric transitive closure of β-reduction.
To test if two normalizing terms are β-equivalent, it suffices to compare their normal forms.
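This test can be sketched in Rust as follows, once more over an illustrative de Bruijn-indexed `Term` type with assumed helper names. Since normalization may diverge, the check takes a fuel budget and returns `None` when it cannot decide within that budget:

```rust
// De Bruijn-indexed lambda terms: Var(0) refers to the nearest binder.
#[derive(Clone, Debug, PartialEq)]
enum Term {
    Var(usize),
    Abs(Box<Term>),
    App(Box<Term>, Box<Term>),
}
use Term::*;

// Shift free variables >= cutoff by d.
fn shift(t: &Term, d: isize, cutoff: usize) -> Term {
    match t {
        Var(k) if *k >= cutoff => Var((*k as isize + d) as usize),
        Var(k) => Var(*k),
        Abs(b) => Abs(Box::new(shift(b, d, cutoff + 1))),
        App(f, a) => App(Box::new(shift(f, d, cutoff)), Box::new(shift(a, d, cutoff))),
    }
}

// Capture-avoiding substitution of s for variable j in t.
fn subst(t: &Term, j: usize, s: &Term) -> Term {
    match t {
        Var(k) if *k == j => s.clone(),
        Var(k) => Var(*k),
        Abs(b) => Abs(Box::new(subst(b, j + 1, &shift(s, 1, 0)))),
        App(f, a) => App(Box::new(subst(f, j, s)), Box::new(subst(a, j, s))),
    }
}

// Contract the redex (\. body) arg.
fn beta(body: &Term, arg: &Term) -> Term {
    shift(&subst(body, 0, &shift(arg, 1, 0)), -1, 0)
}

// One strong step: AppAbs, App1, App2, Abs, tried in that order.
fn strong_step(t: &Term) -> Option<Term> {
    match t {
        App(f, a) => {
            if let Abs(b) = f.as_ref() {
                return Some(beta(b, a));
            }
            if let Some(f2) = strong_step(f) {
                return Some(App(Box::new(f2), a.clone()));
            }
            strong_step(a).map(|a2| App(f.clone(), Box::new(a2)))
        }
        Abs(b) => strong_step(b).map(|b2| Abs(Box::new(b2))),
        Var(_) => None,
    }
}

// Normalize with a fuel budget; None if the budget runs out.
fn normalize(mut t: Term, mut fuel: usize) -> Option<Term> {
    while let Some(next) = strong_step(&t) {
        if fuel == 0 {
            return None;
        }
        fuel -= 1;
        t = next;
    }
    Some(t)
}

// Two normalizing terms are beta-equivalent iff their normal forms coincide.
fn beta_eq(a: Term, b: Term, fuel: usize) -> Option<bool> {
    Some(normalize(a, fuel)? == normalize(b, fuel)?)
}

fn main() {
    let id = Abs(Box::new(Var(0))); // \x. x
    // \x. (\y. y) x normalizes to \x. x, so the two are beta-equivalent.
    let t = Abs(Box::new(App(Box::new(Abs(Box::new(Var(0)))), Box::new(Var(0)))));
    assert_eq!(beta_eq(t, id, 1000), Some(true));
    println!("ok");
}
```

The de Bruijn representation makes the final comparison a plain structural equality; with named variables one would have to compare normal forms up to α-equivalence instead.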
As it's undecidable whether a term normalizes or not (for general terms), β-equivalence is an undecidable relation. However, we are usually not interested in β-equivalence between non-normalizing terms like Ω.
Let’s look at some examples next.
The last statement is an example of how β-equivalence fails to capture the notion of extensional equality of functions. To learn more about extensional versus intensional equality, read the first chapter in Selinger: Lecture Notes on the Lambda Calculus.
Interaction and Parsing
To make interacting with the language more convenient, a parser implemented with the nom parser-combinator library is provided in the following code snippet.