-- fp-lean/book/FPLean/DependentTypes/Summary.lean
import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.DependentTypes"
#doc (Manual) "Summary" =>
%%%
tag := "dependent-types-summary"
%%%
# Dependent Types
%%%
tag := none
%%%
Dependent types, where types contain non-type code such as function calls and ordinary data constructors, lead to a massive increase in the expressive power of a type system.
The ability to _compute_ a type from the _value_ of an argument means that the return type of a function can vary based on which argument is provided.
This can be used, for example, to have the result type of a database query depend on the database's schema and the specific query issued, without needing any potentially-failing cast operations on the result of the query.
When the query changes, so does the type that results from running it, enabling immediate compile-time feedback.
When a function's return type depends on a value, analyzing the value with pattern matching can result in the type being _refined_, as a variable that stands for a value is replaced by the constructors in the pattern.
The type signature of a function documents the way that the return type depends on the argument value, and pattern matching then explains how the return type can be fulfilled for each potential argument.
Ordinary code that occurs in types is run during type checking, though {kw}`partial` functions that might loop infinitely are not called.
Mostly, this computation follows the rules of ordinary evaluation that were introduced in {ref "evaluating"}[the very beginning of this book], with expressions being progressively replaced by their values until a final value is found.
Computation during type checking has an important difference from run-time computation: some values in types may be _variables_ whose values are not yet known.
In these cases, pattern matching gets “stuck” and cannot proceed until the variable has been replaced by a particular constructor, e.g. as a result of pattern matching elsewhere in the program.
Type-level computation can be seen as a kind of partial evaluation, where only the parts of the program that are sufficiently known need to be evaluated and other parts are left alone.
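As a tiny illustration (these definitions are a sketch, not part of the example project), a function can compute a return type from a {lit}`Bool`:

```lean
-- A type computed from a value:
def NatOrString : Bool → Type
  | true => Nat
  | false => String

-- The return type of `pick` varies with its argument; in each branch,
-- `NatOrString b` reduces to a concrete type during type checking.
def pick : (b : Bool) → NatOrString b
  | true => 3
  | false => "three"
```

Applied to a variable {lit}`b`, however, {lit}`NatOrString b` is stuck and does not reduce to either type.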
# The Universe Pattern
%%%
tag := none
%%%
A common pattern when working with dependent types is to section off some subset of the type system.
For example, a database query library might be able to return varying-length strings, fixed-length strings, or numbers in certain ranges, but it will never return a function, a user-defined datatype, or an {anchorName otherEx}`IO` action.
A domain-specific subset of the type system can be defined by first defining a datatype with constructors that match the structure of the desired types, and then defining a function that interprets values from this datatype into honest-to-goodness types.
The constructors are referred to as _codes_ for the types in question, and the entire pattern is sometimes referred to as a _universe à la Tarski_, or just as a _universe_ when context makes it clear that universes such as {anchorTerm otherEx}`Type 3` or {anchorTerm otherEx}`Prop` are not what's meant.
Custom universes are an alternative to defining a type class with instances for each type of interest.
Type classes are extensible, but extensibility is not always desired.
Defining a custom universe has a number of advantages over using the types directly:
* Generic operations that work for _any_ type in the universe, such as equality testing and serialization, can be implemented by recursion on codes.
* The types accepted by external systems can be represented precisely, and the definition of the code datatype serves to document what can be expected.
* Lean's pattern matching completeness checker ensures that no codes are forgotten, while solutions based on type classes defer missing instance errors to client code.
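A minimal sketch of the pattern, along the lines of this chapter's database example (the names here are illustrative):

```lean
-- Codes for a small universe of types:
inductive DBType where
  | int
  | string

-- The interpretation of codes into honest-to-goodness types:
abbrev DBType.asType : DBType → Type
  | .int => Int
  | .string => String

-- A generic operation, defined by recursion on the codes; the pattern
-- matching completeness checker flags any code without a case here.
def DBType.beq : (t : DBType) → t.asType → t.asType → Bool
  | .int, x, y => x == y
  | .string, x, y => x == y
```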
# Indexed Families
%%%
tag := none
%%%
Datatypes can take two separate kinds of arguments: _parameters_ are identical in each constructor of the datatype, while _indices_ may vary between constructors.
For a given choice of index, only some constructors of the datatype are available.
As an example, {anchorName otherEx}`Vect.nil` is available only when the length index is {anchorTerm Vect}`0`, and {anchorName consNotLengthN}`Vect.cons` is available only when the length index is {anchorTerm Vect}`n+1` for some {anchorName Vect}`n`.
While parameters are typically written as named arguments before the colon in a datatype declaration, and indices as arguments in a function type after the colon, Lean can infer when an argument after the colon is used as a parameter.
Indexed families allow the expression of complicated relationships between data, all checked by the compiler.
The datatype's invariants can be encoded directly, and there is no way to violate them, not even temporarily.
Informing the compiler about the datatype's invariants brings a major benefit: the compiler can now inform the programmer about what must be done to satisfy them.
The strategic use of compile-time errors, especially those resulting from underscores, can make it possible to offload some of the programming thought process to Lean, freeing up the programmer's mind to worry about other things.
Encoding invariants using indexed families can lead to difficulties.
First off, each invariant requires its own datatype, which then requires its own support libraries.
{anchorName otherEx}`List.zip` and {anchorName VectZip}`Vect.zip` are not interchangeable, after all.
This can lead to code duplication.
Secondly, convenient use of indexed families requires that the recursive structure of functions used in types match the recursive structure of the programs being type checked.
Programming with indexed families is the art of arranging for the right coincidences to occur.
While it's possible to work around missing coincidences with appeals to equality proofs, it is difficult, and it leads to programs littered with cryptic justifications.
Thirdly, running complicated code on large values during type checking can lead to compile-time slowdowns.
Avoiding these slowdowns for complicated programs can require specialized techniques.
# Definitional and Propositional Equality
%%%
tag := none
%%%
Lean's type checker must, from time to time, check whether two types should be considered interchangeable.
Because types can contain arbitrary programs, it must therefore be able to check arbitrary programs for equality.
However, there is no efficient algorithm to check arbitrary programs for fully-general mathematical equality.
To work around this, Lean contains two notions of equality:
* _Definitional equality_ is an underapproximation of equality that essentially checks for equality of syntactic representation modulo computation and renaming of bound variables. Lean automatically checks for definitional equality in situations where it is required.
* _Propositional equality_ must be explicitly proved and explicitly invoked by the programmer. In return, Lean automatically checks that the proofs are valid and that the invocations accomplish the right goal.
The two notions of equality represent a division of labor between programmers and Lean itself.
Definitional equality is simple, but automatic, while propositional equality is manual, but expressive.
Propositional equality can be used to unstick otherwise-stuck programs in types.
However, the frequent use of propositional equality to unstick type-level computation is typically a code smell.
It typically means that coincidences were not well-engineered, and it's usually a better idea to either redesign the types and indices or to use a different technique to enforce the needed invariants.
When propositional equality is instead used to prove that a program meets a specification, or as part of a subtype, there is less reason to be suspicious.
-- fp-lean/book/FPLean/DependentTypes/Pitfalls.lean
import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.DependentTypes.Pitfalls"
#doc (Manual) "Pitfalls of Programming with Dependent Types" =>
%%%
tag := "dependent-type-pitfalls"
%%%
The flexibility of dependent types allows more useful programs to be accepted by a type checker, because the language of types is expressive enough to describe variations that less-expressive type systems cannot.
At the same time, the ability of dependent types to express very fine-grained specifications allows more buggy programs to be rejected by a type checker.
This power comes at a cost.
The close coupling between the internals of type-returning functions such as {anchorName Row (module:=Examples.DependentTypes.DB)}`Row` and the types that they produce is an instance of a bigger difficulty: the distinction between the interface and the implementation of functions begins to break down when functions are used in types.
Normally, all refactorings are valid as long as they don't change the type signature or input-output behavior of a function.
Functions can be rewritten to use more efficient algorithms and data structures, bugs can be fixed, and code clarity can be improved without breaking client code.
When the function is used in a type, however, the internals of the function's implementation become part of the type, and thus part of the _interface_ to another program.
As an example, take the following two implementations of addition on {anchorName plusL}`Nat`.
{anchorName plusL}`Nat.plusL` is recursive on its first argument:
```anchor plusL
def Nat.plusL : Nat → Nat → Nat
| 0, k => k
| n + 1, k => plusL n k + 1
```
{anchorName plusR}`Nat.plusR`, on the other hand, is recursive on its second argument:
```anchor plusR
def Nat.plusR : Nat → Nat → Nat
| n, 0 => n
| n, k + 1 => plusR n k + 1
```
Both implementations of addition are faithful to the underlying mathematical concept, and they thus return the same result when given the same arguments.
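For run-time purposes, the two are interchangeable (a quick check, not part of the example project):

```lean
#eval Nat.plusL 2 3  -- 5
#eval Nat.plusR 2 3  -- 5
```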
However, these two implementations present quite different interfaces when they are used in types.
As an example, take a function that appends two {anchorName appendL}`Vect`s.
This function should return a {anchorName appendL}`Vect` whose length is the sum of the length of the arguments.
Because {anchorName appendL}`Vect` is essentially a {anchorName moreNames}`List` with a more informative type, it makes sense to write the function just as one would for {anchorName moreNames}`List.append`, with pattern matching and recursion on the first argument.
Starting with a type signature and initial pattern match pointing at placeholders yields two messages:
```anchor appendL1
def appendL : Vect α n → Vect α k → Vect α (n.plusL k)
| .nil, ys => _
| .cons x xs, ys => _
```
The first message, in the {anchorName moreNames}`nil` case, states that the placeholder should be replaced by a {anchorName appendL}`Vect` with length {lit}`plusL 0 k`:
```anchorError appendL1
don't know how to synthesize placeholder
context:
α : Type u_1
n k : Nat
ys : Vect α k
⊢ Vect α (Nat.plusL 0 k)
```
The second message, in the {anchorName moreNames}`cons` case, states that the placeholder should be replaced by a {anchorName appendL}`Vect` with length {lit}`plusL (n✝ + 1) k`:
```anchorError appendL2
don't know how to synthesize placeholder
context:
α : Type u_1
n k n✝ : Nat
x : α
xs : Vect α n✝
ys : Vect α k
⊢ Vect α ((n✝ + 1).plusL k)
```
The symbol after {lit}`n`, called a _dagger_, is used to indicate names that Lean has internally invented.
Behind the scenes, pattern matching on the first {anchorName appendL1}`Vect` implicitly caused the value of the first {anchorName plusL}`Nat` to be refined as well, because the index on the constructor {anchorName moreNames}`cons` is {anchorTerm Vect (module:=Examples.DependentTypes)}`n + 1`, with the tail of the {anchorName appendL}`Vect` having length {anchorTerm Vect (module:=Examples.DependentTypes)}`n`.
Here, {lit}`n✝` represents the {anchorName moreNames}`Nat` that is one less than the argument {anchorName appendL1}`n`.
# Definitional Equality
%%%
tag := "definitional-equality"
%%%
In the definition of {anchorName appendL3}`plusL`, there is a pattern case {anchorTerm plusL}`0, k => k`.
This case applies to the length in the first placeholder's type, so another way to write the underscore's type {anchorTerm moreNames}`Vect α (Nat.plusL 0 k)` is {anchorTerm moreNames}`Vect α k`.
Similarly, {anchorName plusL}`plusL` contains a pattern case {anchorTerm plusL}`n + 1, k => plusL n k + 1`.
This means that the type of the second underscore can be equivalently written {lit}`Vect α (plusL n✝ k + 1)`.
To expose what is going on behind the scenes, the first step is to write the {anchorName plusL}`Nat` arguments explicitly, which also results in daggerless error messages because the names are now written explicitly in the program:
```anchor appendL3
def appendL : (n k : Nat) → Vect α n → Vect α k → Vect α (n.plusL k)
| 0, k, .nil, ys => _
| n + 1, k, .cons x xs, ys => _
```
```anchorError appendL3
don't know how to synthesize placeholder
context:
α : Type u_1
k : Nat
ys : Vect α k
⊢ Vect α (Nat.plusL 0 k)
```
```anchorError appendL4
don't know how to synthesize placeholder
context:
α : Type u_1
n k : Nat
x : α
xs : Vect α n
ys : Vect α k
⊢ Vect α ((n + 1).plusL k)
```
Annotating the underscores with the simplified versions of the types does not introduce a type error, which means that the types as written in the program are equivalent to the ones that Lean found on its own:
```anchor appendL5
def appendL : (n k : Nat) → Vect α n → Vect α k → Vect α (n.plusL k)
| 0, k, .nil, ys => (_ : Vect α k)
| n + 1, k, .cons x xs, ys => (_ : Vect α (n.plusL k + 1))
```
```anchorError appendL5
don't know how to synthesize placeholder
context:
α : Type u_1
k : Nat
ys : Vect α k
⊢ Vect α k
```
```anchorError appendL6
don't know how to synthesize placeholder
context:
α : Type u_1
n k : Nat
x : α
xs : Vect α n
ys : Vect α k
⊢ Vect α (n.plusL k + 1)
```
The first case demands a {anchorTerm appendL5}`Vect α k`, and {anchorName appendL5}`ys` has that type.
This is parallel to the way that appending the empty list to any other list returns that other list.
Refining the definition with {anchorName appendL7}`ys` instead of the first underscore yields a program with only one remaining underscore to be filled out:
```anchor appendL7
def appendL : (n k : Nat) → Vect α n → Vect α k → Vect α (n.plusL k)
| 0, k, .nil, ys => ys
| n + 1, k, .cons x xs, ys => (_ : Vect α (n.plusL k + 1))
```
```anchorError appendL7
don't know how to synthesize placeholder
context:
α : Type u_1
n k : Nat
x : α
xs : Vect α n
ys : Vect α k
⊢ Vect α (n.plusL k + 1)
```
Something very important has happened here.
In a context where Lean expected a {anchorTerm moreNames}`Vect α (Nat.plusL 0 k)`, it received a {anchorTerm moreNames}`Vect α k`.
However, {anchorName plusL}`Nat.plusL` is not an {kw}`abbrev`, so it may seem like it shouldn't be running during type checking.
Something else is happening.
The key to understanding what's going on is that Lean doesn't just expand {kw}`abbrev`s while type checking.
It can also perform computation while checking whether two types are equivalent to one another, such that any expression of one type can be used in a context that expects the other type.
This property is called _definitional equality_, and it is subtle.
Certainly, two types that are written identically are considered to be definitionally equal—{anchorName moreNames}`Nat` and {anchorName moreNames}`Nat` or {anchorTerm moreNames}`List String` and {anchorTerm moreNames}`List String` should be considered equal.
Any two concrete types built from different datatypes are not equal, so {anchorTerm moreNames}`List Nat` is not equal to {anchorName moreNames}`Int`.
Additionally, types that differ only by renaming internal names are equal, so {anchorTerm moreNames}`(n : Nat) → Vect String n` is the same as {anchorTerm moreNames}`(k : Nat) → Vect String k`.
Because types can contain ordinary data, definitional equality must also describe when data are equal.
Uses of the same constructors are equal, so {anchorTerm moreNames}`0` equals {anchorTerm moreNames}`0` and {anchorTerm moreNames}`[5, 3, 1]` equals {anchorTerm moreNames}`[5, 3, 1]`.
Types contain more than just function arrows, datatypes, and constructors, however.
They also contain _variables_ and _functions_.
Definitional equality of variables is relatively simple: each variable is equal only to itself, so {anchorTerm moreNames}`(n k : Nat) → Vect Int n` is not definitionally equal to {anchorTerm moreNames}`(n k : Nat) → Vect Int k`.
Functions, on the other hand, are more complicated.
While mathematics considers two functions to be equal if they have identical input-output behavior, there is no efficient algorithm to check that, and the whole point of definitional equality is for Lean to check whether two types are interchangeable.
Instead, Lean considers two functions to be definitionally equal either when they are the same function, or when they are both {kw}`fun`-expressions with definitionally equal bodies.
In other words, two functions must use _the same algorithm_ that calls _the same helpers_ to be considered definitionally equal.
This is not typically very helpful, so definitional equality of functions is mostly used when the exact same defined function occurs in two types.
When functions are _called_ in a type, checking definitional equality may involve reducing the function call.
The type {anchorTerm moreNames}`Vect String (1 + 4)` is definitionally equal to the type {anchorTerm moreNames}`Vect String (3 + 2)` because {anchorTerm moreNames}`1 + 4` is definitionally equal to {anchorTerm moreNames}`3 + 2`.
To check their equality, both are reduced to {anchorTerm moreNames}`5`, and then the constructor rule can be used five times.
Definitional equality of functions applied to data can be checked first by seeing if they're already the same—there's no need to reduce {anchorTerm moreNames}`["a", "b"] ++ ["c"]` to check that it's equal to {anchorTerm moreNames}`["a", "b"] ++ ["c"]`, after all.
If not, the function is called and replaced with its value, and the value can then be checked.
Not all function arguments are concrete data.
For example, types may contain {anchorName moreNames}`Nat`s that are not built from the {anchorName moreNames}`zero` and {anchorName moreNames}`succ` constructors.
In the type {anchorTerm moreFun}`(n : Nat) → Vect String n`, the variable {anchorName moreFun}`n` is a {anchorName moreFun}`Nat`, but it is impossible to know _which_ {anchorName moreFun}`Nat` it is before the function is called.
Indeed, the function may be called first with {anchorTerm moreNames}`0`, and then later with {anchorTerm moreNames}`17`, and then again with {anchorTerm moreNames}`33`.
As seen in the definition of {anchorName appendL}`appendL`, variables with type {anchorName moreFun}`Nat` may also be passed to functions such as {anchorName appendL}`plusL`.
Indeed, the type {anchorTerm moreFun}`(n : Nat) → Vect String n` is definitionally equal to the type {anchorTerm moreNames}`(n : Nat) → Vect String (Nat.plusL 0 n)`.
The reason that {anchorName againFun}`n` and {anchorTerm againFun}`Nat.plusL 0 n` are definitionally equal is that {anchorName plusL}`plusL`'s pattern match examines its _first_ argument.
This is problematic: {anchorTerm moreFun}`(n : Nat) → Vect String n` is _not_ definitionally equal to {anchorTerm stuckFun}`(n : Nat) → Vect String (Nat.plusL n 0)`, even though zero should be both a left and a right identity of addition.
This happens because pattern matching gets stuck when it encounters variables.
Until the actual value of {anchorName stuckFun}`n` becomes known, there is no way to know which case of {anchorTerm stuckFun}`Nat.plusL n 0` should be selected.
The same issue appears with the {anchorName Row (module:=Examples.DependentTypes.DB)}`Row` function in the query example.
The type {anchorTerm RowStuck (module:=Examples.DependentTypes.DB)}`Row (c :: cs)` does not reduce to any datatype because the definition of {anchorName RowStuck (module:=Examples.DependentTypes.DB)}`Row` has separate cases for singleton lists and lists with at least two entries.
In other words, it gets stuck when trying to match the variable {anchorName RowStuck (module:=Examples.DependentTypes.DB)}`cs` against concrete {anchorName moreNames}`List` constructors.
This is why almost every function that takes apart or constructs a {anchorName RowStuck (module:=Examples.DependentTypes.DB)}`Row` needs to match the same three cases as {anchorName RowStuck (module:=Examples.DependentTypes.DB)}`Row` itself: getting it unstuck reveals concrete types that can be used for either pattern matching or constructors.
The missing case in {anchorName appendL8}`appendL` requires a {lit}`Vect α (Nat.plusL n k + 1)`.
The {lit}`+ 1` in the index suggests that the next step is to use {anchorName consNotLengthN (module:=Examples.DependentTypes)}`Vect.cons`:
```anchor appendL8
def appendL : (n k : Nat) → Vect α n → Vect α k → Vect α (n.plusL k)
| 0, k, .nil, ys => ys
| n + 1, k, .cons x xs, ys => .cons x (_ : Vect α (n.plusL k))
```
```anchorError appendL8
don't know how to synthesize placeholder
context:
α : Type u_1
n k : Nat
x : α
xs : Vect α n
ys : Vect α k
⊢ Vect α (n.plusL k)
```
A recursive call to {anchorName appendL9}`appendL` can construct a {anchorName appendL9}`Vect` with the desired length:
```anchor appendL9
def appendL : (n k : Nat) → Vect α n → Vect α k → Vect α (n.plusL k)
| 0, k, .nil, ys => ys
| n + 1, k, .cons x xs, ys => .cons x (appendL n k xs ys)
```
Now that the program is finished, removing the explicit matching on {anchorName appendL9}`n` and {anchorName appendL9}`k` makes it easier to read and easier to call the function:
```anchor appendL
def appendL : Vect α n → Vect α k → Vect α (n.plusL k)
| .nil, ys => ys
| .cons x xs, ys => .cons x (appendL xs ys)
```
Comparing types using definitional equality means that everything involved in definitional equality, including the internals of function definitions, becomes part of the _interface_ of programs that use dependent types and indexed families.
Exposing the internals of a function in a type means that refactoring the exposed program may cause programs that use it to no longer type check.
In particular, the fact that {anchorName appendL}`plusL` is used in the type of {anchorName appendL}`appendL` means that the definition of {anchorName appendL}`plusL` cannot be replaced by the otherwise-equivalent {anchorName plusR}`plusR`.
# Getting Stuck on Addition
%%%
tag := "stuck-addition"
%%%
What happens if append is defined with {anchorName appendR}`plusR` instead?
Beginning in the same way, with explicit lengths and placeholder underscores in each case, reveals the following useful error messages:
```anchor appendR1
def appendR : (n k : Nat) → Vect α n → Vect α k → Vect α (n.plusR k)
| 0, k, .nil, ys => _
| n + 1, k, .cons x xs, ys => _
```
```anchorError appendR1
don't know how to synthesize placeholder
context:
α : Type u_1
k : Nat
ys : Vect α k
⊢ Vect α (Nat.plusR 0 k)
```
```anchorError appendR2
don't know how to synthesize placeholder
context:
α : Type u_1
n k : Nat
x : α
xs : Vect α n
ys : Vect α k
⊢ Vect α ((n + 1).plusR k)
```
However, attempting to place a {anchorTerm appendR3}`Vect α k` type annotation around the first placeholder results in a type mismatch error:
```anchor appendR3
def appendR : (n k : Nat) → Vect α n → Vect α k → Vect α (n.plusR k)
| 0, k, .nil, ys => (_ : Vect α k)
| n + 1, k, .cons x xs, ys => _
```
```anchorError appendR3
Type mismatch
?m.11
has type
Vect α k
but is expected to have type
Vect α (Nat.plusR 0 k)
```
This error is pointing out that {anchorTerm plusRinfo}`Nat.plusR 0 k` and {anchorName plusRinfo}`k` are _not_ definitionally equal.
:::paragraph
This is because {anchorName plusR}`plusR` has the following definition:
```anchor plusR
def Nat.plusR : Nat → Nat → Nat
| n, 0 => n
| n, k + 1 => plusR n k + 1
```
Its pattern matching occurs on the _second_ argument, not the first argument, which means that the presence of the variable {anchorName plusRinfo}`k` in that position prevents it from reducing.
{anchorName plusRinfo}`Nat.add` in Lean's standard library is equivalent to {anchorName plusRinfo}`plusR`, not {anchorName plusRinfo}`plusL`, so attempting to use it in this definition results in precisely the same difficulties:
```anchor appendR4
def appendR : (n k : Nat) → Vect α n → Vect α k → Vect α (n + k)
| 0, k, .nil, ys => (_ : Vect α k)
| n + 1, k, .cons x xs, ys => _
```
```anchorError appendR4
Type mismatch
?m.15
has type
Vect α k
but is expected to have type
Vect α (0 + k)
```
Addition is getting _stuck_ on the variables.
Getting it unstuck requires {ref "equality-and-ordering"}[propositional equality].
:::
# Propositional Equality
%%%
tag := "propositional-equality"
%%%
Propositional equality is the mathematical statement that two expressions are equal.
While definitional equality is a kind of ambient fact that Lean automatically checks when required, statements of propositional equality require explicit proofs.
Once an equality proposition has been proved, it can be used in a program to modify a type, replacing one side of the equality with the other, which can unstick the type checker.
The reason why definitional equality is so limited is to enable it to be checked by an algorithm.
Propositional equality is much richer, but the computer cannot in general check whether two expressions are propositionally equal, though it can verify that a purported proof is in fact a proof.
The split between definitional and propositional equality represents a division of labor between humans and machines: the most boring equalities are checked automatically as part of definitional equality, freeing the human mind to work on the interesting problems available in propositional equality.
Similarly, definitional equality is invoked automatically by the type checker, while propositional equality must be specifically appealed to.
In {ref "props-proofs-indexing"}[Propositions, Proofs, and Indexing], some equality statements are proved using {tactic}`decide`.
All of these equality statements are ones in which the propositional equality is in fact already a definitional equality.
Typically, statements of propositional equality are proved by first getting them into a form where they are either definitional or close enough to existing proved equalities, and then using tools like {tactic}`decide` or {tactic}`simp` to take care of the simplified cases.
The {tactic}`simp` tactic is quite powerful: behind the scenes, it uses a number of fast, automated tools to construct a proof.
A simpler tactic called {kw}`rfl` specifically uses definitional equality to prove propositional equality.
The name {kw}`rfl` is short for _reflexivity_, which is the property of equality that states that everything equals itself.
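Some examples (not from the example project) of {kw}`rfl` proving propositions that hold definitionally:

```lean
example : 2 + 3 = 5 := rfl
-- Both arguments are concrete, so `plusR` computes all the way:
example : Nat.plusR 0 3 = 3 := rfl
```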
Unsticking {anchorName appendR}`appendR` requires a proof that {anchorTerm plusR_zero_left1}`k = Nat.plusR 0 k`, which is not a definitional equality because {anchorName plusR}`plusR` is stuck on the variable in its second argument.
To get it to compute, the {anchorName plusR_zero_left1}`k` must become a concrete constructor.
This is a job for pattern matching.
:::paragraph
In particular, because {anchorName plusR_zero_left1}`k` could be _any_ {anchorName plusR_zero_left1}`Nat`, this task requires a function that can return evidence that {anchorTerm plusR_zero_left1}`k = Nat.plusR 0 k` for _any_ {anchorName plusR_zero_left1}`k` whatsoever.
This should be a function that returns a proof of equality, with type {anchorTerm plusR_zero_left1}`(k : Nat) → k = Nat.plusR 0 k`.
Getting it started with initial patterns and placeholders yields the following messages:
```anchor plusR_zero_left1
def plusR_zero_left : (k : Nat) → k = Nat.plusR 0 k
| 0 => _
| k + 1 => _
```
```anchorError plusR_zero_left1
don't know how to synthesize placeholder
context:
⊢ 0 = Nat.plusR 0 0
```
```anchorError plusR_zero_left2
don't know how to synthesize placeholder
context:
k : Nat
⊢ k + 1 = Nat.plusR 0 (k + 1)
```
Having refined {anchorName plusR_zero_left1}`k` to {anchorTerm plusR_zero_left1}`0` via pattern matching, the first placeholder stands for evidence of a statement that does hold definitionally.
The {kw}`rfl` tactic takes care of it, leaving only the second placeholder:
```anchor plusR_zero_left3
def plusR_zero_left : (k : Nat) → k = Nat.plusR 0 k
| 0 => by rfl
| k + 1 => _
```
:::
The second placeholder is a bit trickier.
The expression {anchorTerm plusRStep}`Nat.plusR 0 k + 1` is definitionally equal to {anchorTerm plusRStep}`Nat.plusR 0 (k + 1)`.
This means that the goal could also be written {anchorTerm plusR_zero_left4}`k + 1 = Nat.plusR 0 k + 1`:
```anchor plusR_zero_left4
def plusR_zero_left : (k : Nat) → k = Nat.plusR 0 k
| 0 => by rfl
| k + 1 => (_ : k + 1 = Nat.plusR 0 k + 1)
```
```anchorError plusR_zero_left4
don't know how to synthesize placeholder
context:
k : Nat
⊢ k + 1 = Nat.plusR 0 k + 1
```
:::paragraph
Underneath the {anchorTerm plusR_zero_left4}`+ 1` on each side of the equality statement is another instance of what the function itself returns.
In other words, a recursive call on {anchorName plusR_zero_left4}`k` would return evidence that {anchorTerm plusR_zero_left4}`k = Nat.plusR 0 k`.
Equality wouldn't be equality if it didn't apply to function arguments.
In other words, if {anchorTerm congr}`x = y`, then {anchorTerm congr}`f x = f y`.
The standard library contains a function {anchorName congr}`congrArg` that takes a function and an equality proof and returns a new proof where the function has been applied to both sides of the equality.
In this case, the function is {anchorTerm plusR_zero_left_done}`(· + 1)`:
```anchor plusR_zero_left_done
def plusR_zero_left : (k : Nat) → k = Nat.plusR 0 k
| 0 => by rfl
| k + 1 =>
congrArg (· + 1) (plusR_zero_left k)
```
:::
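In isolation, {anchorName congr}`congrArg` can be used like this (a standalone sketch):

```lean
-- From a proof that x = y, derive a proof that x + 1 = y + 1:
example (x y : Nat) (h : x = y) : x + 1 = y + 1 :=
  congrArg (· + 1) h
```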
:::paragraph
Because this is really a proof of a proposition, it should be declared as a {kw}`theorem`:
```anchor plusR_zero_left_thm
theorem plusR_zero_left : (k : Nat) → k = Nat.plusR 0 k
| 0 => by rfl
| k + 1 =>
congrArg (· + 1) (plusR_zero_left k)
```
:::
Propositional equalities can be deployed in a program using the rightward triangle operator {anchorTerm appendRsubst}`▸`.
Given an equality proof as its first argument and some other expression as its second, this operator replaces instances of one side of the equality with the other side of the equality in the second argument's type.
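In isolation, a minimal sketch of the operator (the name {lit}`castLength` is illustrative, not from the library):

```lean
-- Rewriting n to k in the type of v, given a proof that n = k:
def castLength {α : Type u} {n k : Nat} (h : n = k) (v : Vect α n) : Vect α k :=
  h ▸ v
```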
In other words, the following definition contains no type errors:
```anchor appendRsubst
def appendR : (n k : Nat) → Vect α n → Vect α k → Vect α (n.plusR k)
| 0, k, .nil, ys => plusR_zero_left k ▸ (_ : Vect α k)
| n + 1, k, .cons x xs, ys => _
```
The first placeholder has the expected type:
```anchorError appendRsubst
don't know how to synthesize placeholder
context:
α : Type u_1
k : Nat
ys : Vect α k
⊢ Vect α k
```
It can now be filled in with {anchorName appendR5}`ys`:
```anchor appendR5
def appendR : (n k : Nat) → Vect α n → Vect α k → Vect α (n.plusR k)
| 0, k, .nil, ys => plusR_zero_left k ▸ ys
| n + 1, k, .cons x xs, ys => _
```
Filling in the remaining placeholder requires unsticking another instance of addition:
```anchorError appendR5
don't know how to synthesize placeholder
context:
α : Type u_1
n k : Nat
x : α
xs : Vect α n
ys : Vect α k
⊢ Vect α ((n + 1).plusR k)
```
Here, the statement to be proved is that {anchorTerm plusR_succ_left}`Nat.plusR (n + 1) k = Nat.plusR n k + 1`, which can be used with {anchorTerm appendRsubst}`▸` to draw the {anchorTerm appendRsubst}`+ 1` out to the top of the expression so that it matches the index of {anchorName Vect}`cons`.
The proof is a recursive function that pattern matches on the second argument to {anchorName appendR}`plusR`, namely {anchorName appendR5}`k`.
This is because {anchorName appendR5}`plusR` itself pattern matches on its second argument, so the proof can “unstick” it through pattern matching, exposing the computational behavior.
The skeleton of the proof is very similar to that of {anchorName appendR}`plusR_zero_left`:
```anchor plusR_succ_left_0
theorem plusR_succ_left (n : Nat) :
(k : Nat) → Nat.plusR (n + 1) k = Nat.plusR n k + 1
| 0 => by rfl
| k + 1 => _
```
The remaining case's type is definitionally equal to {anchorTerm congr}`Nat.plusR (n + 1) k + 1 = Nat.plusR n (k + 1) + 1`, so it can be solved with {anchorName congr}`congrArg`, just as in {anchorName plusR_zero_left_thm}`plusR_zero_left`:
```anchorError plusR_succ_left_2
don't know how to synthesize placeholder
context:
n k : Nat
⊢ (n + 1).plusR (k + 1) = n.plusR (k + 1) + 1
```
This results in a finished proof:
```anchor plusR_succ_left
theorem plusR_succ_left (n : Nat) :
(k : Nat) → Nat.plusR (n + 1) k = Nat.plusR n k + 1
| 0 => by rfl
| k + 1 => congrArg (· + 1) (plusR_succ_left n k)
```
The finished proof can be used to unstick the second case in {anchorName appendR}`appendR`:
```anchor appendR
def appendR : (n k : Nat) → Vect α n → Vect α k → Vect α (n.plusR k)
| 0, k, .nil, ys =>
plusR_zero_left k ▸ ys
| n + 1, k, .cons x xs, ys =>
plusR_succ_left n k ▸ .cons x (appendR n k xs ys)
```
When the length arguments to {anchorName appendR}`appendR` are made implicit again, they can no longer be named explicitly and appealed to in the proofs.
However, Lean's type checker has enough information to fill them in automatically behind the scenes, because no other values would allow the types to match:
```anchor appendRImpl
def appendR : Vect α n → Vect α k → Vect α (n.plusR k)
| .nil, ys => plusR_zero_left _ ▸ ys
| .cons x xs, ys => plusR_succ_left _ _ ▸ .cons x (appendR xs ys)
```
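As a quick illustrative check (this example is not part of the book's example project), the implicit-argument version can be applied directly, with Lean computing the resulting length using {lit}`plusR`:
```lean
example : Vect String (Nat.plusR 2 1) :=
  appendR (.cons "a" (.cons "b" .nil)) (.cons "c" .nil)
```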
# Pros and Cons
%%%
tag := "dependent-types-pros-and-cons"
%%%
Indexed families have an important property: pattern matching on them affects definitional equality.
For example, in the {anchorName Vect}`nil` case in a {kw}`match` expression on a {anchorTerm Vect}`Vect`, the length simply _becomes_ {anchorTerm moreNames}`0`.
Definitional equality can be very convenient, because it is always active and does not need to be invoked explicitly.
However, the use of definitional equality with dependent types and pattern matching has serious software engineering drawbacks.
First off, functions must be written specifically to be used in types, and functions that are convenient to use in types may not use the most efficient algorithms.
Once a function has been exposed through using it in a type, its implementation has become part of the interface, leading to difficulties in future refactoring.
Secondly, definitional equality can be slow.
When asked to check whether two expressions are definitionally equal, Lean may need to run large amounts of code if the functions in question are complicated and have many layers of abstraction.
Third, error messages that result from failures of definitional equality are not always very easy to understand, because they may be phrased in terms of the internals of functions.
It is not always easy to understand the provenance of the expressions in the error messages.
Finally, encoding non-trivial invariants in a collection of indexed families and dependently-typed functions can often be brittle.
It is often necessary to change early definitions in a system when the exposed reduction behavior of functions proves to not provide convenient definitional equalities.
The alternative is to litter the program with appeals to equality proofs, but these can become quite unwieldy.
In idiomatic Lean code, indexed datatypes are not used very often.
Instead, subtypes and explicit propositions are typically used to enforce important invariants.
This approach involves many explicit proofs, and very few appeals to definitional equality.
As befits an interactive theorem prover, Lean has been designed to make explicit proofs convenient.
Generally speaking, this approach should be preferred in most cases.
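For instance, the invariant that {lit}`Vect` tracks in its index can instead be stated as an explicit proposition about an ordinary list. The following sketch (illustrative, not a definition from the example project) uses a subtype:
```lean
def LenList (α : Type) (n : Nat) : Type :=
  { xs : List α // xs.length = n }

-- Appending requires an explicit proof about lengths, rather than
-- relying on the reduction behavior of a custom addition function:
def LenList.append (xs : LenList α n) (ys : LenList α k) :
    LenList α (n + k) :=
  ⟨xs.val ++ ys.val, by simp [xs.property, ys.property]⟩
```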
However, understanding indexed families of datatypes is important.
Recursive functions such as {anchorName plusR_zero_left_thm}`plusR_zero_left` and {anchorName plusR_succ_left}`plusR_succ_left` are in fact _proofs by mathematical induction_.
The base case of the recursion corresponds to the base case in induction, and the recursive call represents an appeal to the induction hypothesis.
More generally, new propositions in Lean are often defined as inductive types of evidence, and these inductive types usually have indices.
The process of proving theorems is in fact constructing expressions with these types behind the scenes, in a process not unlike the proofs in this section.
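This correspondence can be made explicit by restating one of these proofs with the {lit}`induction` tactic (a sketch for illustration, not drawn from the example project):
```lean
theorem plusR_zero_left' (k : Nat) : k = Nat.plusR 0 k := by
  induction k with
  | zero => rfl
  | succ n ih => exact congrArg (· + 1) ih
```
The base case plays the role of the {lit}`0` pattern, and the induction hypothesis {lit}`ih` stands in for the recursive call.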
Also, indexed datatypes are sometimes exactly the right tool for the job.
Fluency in their use is an important part of knowing when to use them.
# Exercises
%%%
tag := "dependent-type-pitfalls-exercises"
%%%
* Using a recursive function in the style of {anchorName plusR_succ_left}`plusR_succ_left`, prove that for all {anchorName moreNames}`Nat`s {anchorName exercises}`n` and {anchorName exercises}`k`, {anchorTerm exercises}`n.plusR k = n + k`.
* Write a function on {anchorName moreNames}`Vect` for which {anchorName plusR}`plusR` is more natural than {anchorName plusL}`plusL`, where {anchorName plusL}`plusL` would require proofs to be used in the definition. |
fp-lean/book/FPLean/DependentTypes/IndicesParametersUniverses.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.DependentTypes.IndicesParameters"
#doc (Manual) "Indices, Parameters, and Universe Levels" =>
%%%
tag := "indices-parameters-universe-levels"
%%%
The distinction between indices and parameters of an inductive type is more than just a way to describe arguments to the type that either vary or do not between the constructors.
Whether an argument to an inductive type is a parameter or an index also matters when it comes time to determine the relationships between their universe levels.
In particular, an inductive type may have the same universe level as a parameter, but it must be in a larger universe than its indices.
This restriction is necessary to ensure that Lean can be used as a theorem prover as well as a programming language—without it, Lean's logic would be inconsistent.
Experimenting with error messages is a good way to illustrate these rules, as well as the precise rules that determine whether an argument to a type is a parameter or an index.
Generally speaking, the definition of an inductive type takes its parameters before a colon and its indices after the colon.
Parameters are given names like function arguments, whereas indices only have their types described.
This can be seen in the definition of {anchorName Vect (module := Examples.DependentTypes)}`Vect`:
```anchor Vect (module := Examples.DependentTypes)
inductive Vect (α : Type u) : Nat → Type u where
| nil : Vect α 0
| cons : α → Vect α n → Vect α (n + 1)
```
In this definition, {anchorName Vect (module:=Examples.DependentTypes)}`α` is a parameter and the {anchorName Vect (module:=Examples.DependentTypes)}`Nat` is an index.
Parameters may be referred to throughout the definition (for example, {anchorName consNotLengthN (module:=Examples.DependentTypes)}`Vect.cons` uses {anchorName Vect (module:=Examples.DependentTypes)}`α` for the type of its first argument), but they must always be used consistently.
Because indices are expected to change, they are assigned individual values at each constructor, rather than being provided as arguments at the top of the datatype definition.
A very simple datatype with a parameter is {anchorName WithParameter}`WithParameter`:
```anchor WithParameter
inductive WithParameter (α : Type u) : Type u where
| test : α → WithParameter α
```
The universe level {anchorTerm WithParameter}`u` can be used for both the parameter and for the inductive type itself, illustrating that parameters do not increase the universe level of a datatype.
Similarly, when there are multiple parameters, the inductive type receives whichever universe level is greater:
```anchor WithTwoParameters
inductive WithTwoParameters (α : Type u) (β : Type v) : Type (max u v) where
| test : α → β → WithTwoParameters α β
```
Because parameters do not increase the universe level of a datatype, they can be more convenient to work with.
Lean attempts to identify arguments that are described like indices (after the colon) but used like parameters, and turns them into parameters.
Both of the following inductive datatypes have their parameter written after the colon:
```anchor WithParameterAfterColon
inductive WithParameterAfterColon : Type u → Type u where
| test : α → WithParameterAfterColon α
```
```anchor WithParameterAfterColon2
inductive WithParameterAfterColon2 : Type u → Type u where
| test1 : α → WithParameterAfterColon2 α
| test2 : WithParameterAfterColon2 α
```
When a parameter is not named in the initial datatype declaration, different names may be used for it in each constructor, so long as they are used consistently.
The following declaration is accepted:
```anchor WithParameterAfterColonDifferentNames
inductive WithParameterAfterColonDifferentNames : Type u → Type u where
| test1 : α → WithParameterAfterColonDifferentNames α
| test2 : β → WithParameterAfterColonDifferentNames β
```
However, this flexibility does not extend to datatypes that explicitly declare the names of their parameters:
```anchor WithParameterBeforeColonDifferentNames
inductive WithParameterBeforeColonDifferentNames (α : Type u) : Type u where
| test1 : α → WithParameterBeforeColonDifferentNames α
| test2 : β → WithParameterBeforeColonDifferentNames β
```
```anchorError WithParameterBeforeColonDifferentNames
Mismatched inductive type parameter in
WithParameterBeforeColonDifferentNames β
The provided argument
β
is not definitionally equal to the expected parameter
α
Note: The value of parameter `α` must be fixed throughout the inductive declaration. Consider making this parameter an index if it must vary.
```
Similarly, attempting to name an index results in an error:
```anchor WithNamedIndex
inductive WithNamedIndex (α : Type u) : Type (u + 1) where
| test1 : WithNamedIndex α
| test2 : WithNamedIndex α → WithNamedIndex α → WithNamedIndex (α × α)
```
```anchorError WithNamedIndex
Mismatched inductive type parameter in
WithNamedIndex (α × α)
The provided argument
α × α
is not definitionally equal to the expected parameter
α
Note: The value of parameter `α` must be fixed throughout the inductive declaration. Consider making this parameter an index if it must vary.
```
Using an appropriate universe level and placing the index after the colon results in a declaration that is acceptable:
```anchor WithIndex
inductive WithIndex : Type u → Type (u + 1) where
| test1 : WithIndex α
| test2 : WithIndex α → WithIndex α → WithIndex (α × α)
```
Even though Lean can sometimes determine that an argument after the colon in an inductive type declaration is a parameter when it is used consistently in all constructors, all parameters are still required to come before all indices.
Attempting to place a parameter after an index results in the argument being considered an index itself, which would require the universe level of the datatype to increase:
```anchor ParamAfterIndex
inductive ParamAfterIndex : Nat → Type u → Type u where
| test1 : ParamAfterIndex 0 γ
| test2 : ParamAfterIndex n γ → ParamAfterIndex k γ → ParamAfterIndex (n + k) γ
```
```anchorError ParamAfterIndex
Invalid universe level in constructor `ParamAfterIndex.test1`: Parameter `γ` has type
Type u
at universe level
u+2
which is not less than or equal to the inductive type's resulting universe level
u+1
```
Parameters need not be types.
This example shows that ordinary datatypes such as {anchorName NatParamFour}`Nat` may be used as parameters:
```anchor NatParamFour
inductive NatParam (n : Nat) : Nat → Type u where
| five : NatParam 4 5
```
```anchorError NatParamFour
Mismatched inductive type parameter in
NatParam 4 5
The provided argument
4
is not definitionally equal to the expected parameter
n
Note: The value of parameter `n` must be fixed throughout the inductive declaration. Consider making this parameter an index if it must vary.
```
Using the {anchorName NatParam}`n` as suggested causes the declaration to be accepted:
```anchor NatParam
inductive NatParam (n : Nat) : Nat → Type u where
| five : NatParam n 5
```
What can be concluded from these experiments?
The rules of parameters and indices are as follows:
1. Parameters must be used identically in each constructor's type.
2. All parameters must come before all indices.
3. The universe level of the datatype being defined must be at least as large as the largest parameter, and strictly larger than the largest index.
4. Named arguments written before the colon are always parameters, while arguments after the colon are typically indices. Lean may determine that the usage of arguments after the colon makes them into parameters if they are used consistently in all constructors and don't come after any indices.
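These rules can be checked directly. The following declarations (illustrative sketches, not from the example project) obey rule 3: the parameterized version stays at level {lit}`u`, while the indexed version must move up to {lit}`Type (u + 1)`:
```lean
-- Parameter: the datatype may share the parameter's universe level.
inductive ParamDemo (α : Type u) : Type u where
  | mk : α → ParamDemo α

-- Index: the datatype must live strictly above the index's level.
inductive IndexDemo : Type u → Type (u + 1) where
  | mk : IndexDemo α
```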
When in doubt, the Lean command {kw}`#print` can be used to check how many of a datatype's arguments are parameters.
For example, for {anchorTerm printVect}`Vect`, it points out that the number of parameters is 1:
```anchor printVect
#print Vect
```
```anchorInfo printVect
inductive Vect.{u} : Type u → Nat → Type u
number of parameters: 1
constructors:
Vect.nil : {α : Type u} → Vect α 0
Vect.cons : {α : Type u} → {n : Nat} → α → Vect α n → Vect α (n + 1)
```
It is worth thinking about which arguments should be parameters and which should be indices when choosing the order of arguments to a datatype.
Having as many arguments as possible be parameters helps keep universe levels under control, which can make a complicated program easier to type check.
One way to make this possible is to ensure that all parameters come before all indices in the argument list.
Additionally, even though Lean is capable of determining that arguments after the colon are nonetheless parameters by their usage, it's a good idea to write parameters with explicit names.
This makes the intention clear to readers, and it causes Lean to report an error if the argument is mistakenly used inconsistently across the constructors. |
fp-lean/book/FPLean/Examples/OtherLanguages.lean | import SubVerso.Examples
import Lean.Data.NameMap
import Std.Data.HashMap
import VersoManual
import Lean.DocString.Syntax
open Lean (NameMap MessageSeverity)
open Lean.Doc.Syntax
open Std
namespace FPLean
open Verso Doc Elab Genre.Manual ArgParse Code Highlighted WebAssets Output Html
open SubVerso.Highlighting
open Lean
-- TODO syntax highlighting for C#, TypeScript, Python etc (use the same technique as C in the manual)
@[code_block_expander CSharp]
def CSharp : CodeBlockExpander
| _args, code => do
return #[← ``(Block.code $(quote code.getString))]
@[role_expander CSharp]
def inlineCSharp : RoleExpander
| _args, code => do
let #[code] := code
| throwErrorAt (mkNullNode code) "Expected exactly one code argument"
let `(inline|code($code)) := code
| throwErrorAt code "Expected code"
return #[← ``(Inline.code $(quote code.getString))]
@[code_block_expander Kotlin]
def Kotlin : CodeBlockExpander
| _args, code => do
return #[← ``(Block.code $(quote code.getString))]
@[role_expander Kotlin]
def inlineKotlin : RoleExpander
| _args, code => do
let #[code] := code
| throwErrorAt (mkNullNode code) "Expected exactly one code argument"
let `(inline|code($code)) := code
| throwErrorAt code "Expected code"
return #[← ``(Inline.code $(quote code.getString))]
@[code_block_expander cpp]
def cpp : CodeBlockExpander
| _args, code => do
return #[← ``(Block.code $(quote code.getString))]
@[role_expander cpp]
def inlineCpp : RoleExpander
| _args, code => do
let #[code] := code
| throwErrorAt (mkNullNode code) "Expected exactly one code argument"
let `(inline|code($code)) := code
| throwErrorAt code "Expected code"
return #[← ``(Inline.code $(quote code.getString))]
@[code_block_expander typescript]
def typescript : CodeBlockExpander
| _args, code => do
return #[← ``(Block.code $(quote code.getString))]
@[role_expander typescript]
def inlineTypescript : RoleExpander
| _args, code => do
let #[code] := code
| throwErrorAt (mkNullNode code) "Expected exactly one code argument"
let `(inline|code($code)) := code
| throwErrorAt code "Expected code"
return #[← ``(Inline.code $(quote code.getString))]
@[role_expander c]
def c : RoleExpander
| _args, code => do
let #[code] := code
| throwErrorAt (mkNullNode code) "Expected exactly one code argument"
let `(inline|code($code)) := code
| throwErrorAt code "Expected code"
return #[← ``(Inline.code $(quote code.getString))]
@[role_expander java]
def java : RoleExpander
| _args, code => do
let #[code] := code
| throwErrorAt (mkNullNode code) "Expected exactly one code argument"
let `(inline|code($code)) := code
| throwErrorAt code "Expected code"
return #[← ``(Inline.code $(quote code.getString))]
@[role_expander rust]
def rust : RoleExpander
| _args, code => do
let #[code] := code
| throwErrorAt (mkNullNode code) "Expected exactly one code argument"
let `(inline|code($code)) := code
| throwErrorAt code "Expected code"
return #[← ``(Inline.code $(quote code.getString))]
@[code_block_expander python]
def python : CodeBlockExpander
| _args, code => do
return #[← ``(Block.code $(quote code.getString))]
@[role_expander python]
def inlinePython : RoleExpander
| _args, code => do
let #[code] := code
| throwErrorAt (mkNullNode code) "Expected exactly one code argument"
let `(inline|code($code)) := code
| throwErrorAt code "Expected code"
return #[← ``(Inline.code $(quote code.getString))]
@[code_block_expander fsharp]
def fsharp : CodeBlockExpander
| _args, code => do
return #[← ``(Block.code $(quote code.getString))]
@[role_expander fsharp]
def fsharpInline : RoleExpander
| _args, code => do
let #[code] := code
| throwErrorAt (mkNullNode code) "Expected exactly one code argument"
let `(inline|code($code)) := code
| throwErrorAt code "Expected code"
return #[← ``(Inline.code $(quote code.getString))]
@[code_block_expander fsharpError]
def fsharpError : CodeBlockExpander
| _args, code => do
return #[← ``(Block.code $(quote code.getString))]
section
variable [Monad m] [MonadError m]
structure IncludePythonConfig where
file : String
anchor? : Option Ident
instance : FromArgs IncludePythonConfig m where
fromArgs := IncludePythonConfig.mk <$> .positional' `file <*> .named' `anchor true
end
/--
If a line could be an anchor indicator, returns the anchor's name along with `true` for a start
and `false` for an end.
An anchor consists of a line that contains `ANCHOR:` or `ANCHOR_END:` followed by the name
of the anchor.
-/
def anchor? (line : String) : Except String (String × Bool) := do
let mut line := line.trimAscii
line := line.dropWhile (· ≠ 'A')
if line.startsWith "ANCHOR:" then
line := line.dropPrefix "ANCHOR:"
line := line.trimAsciiStart
if line.isEmpty then throw "Expected name after `ANCHOR: `" else return (line.copy, true)
else if line.startsWith "ANCHOR_END:" then
line := line.dropPrefix "ANCHOR_END:"
line := line.trimAsciiStart
if line.isEmpty then throw "Expected name after `ANCHOR_END: `" else return (line.copy, false)
else throw s!"Expected `ANCHOR:` or `ANCHOR_END:`, got {line}"
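-- Quick illustrative checks for `anchor?` (added as documentation; they assume
-- the usual `--`-comment syntax around anchor markers):
-- `anchor? "-- ANCHOR: intro"` is expected to be `.ok ("intro", true)`, and
-- `anchor? "-- ANCHOR_END: intro"` to be `.ok ("intro", false)`.
#eval anchor? "-- ANCHOR: intro"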
private def stringAnchors (s : String) : Except String (String × HashMap String String) := do
let mut out := ""
let mut anchors : HashMap String String := {}
let mut openAnchors : HashMap String String := {}
let lines := s.splitToList (· == '\n')
for line in lines do
if let some (a, isOpener) := anchor? line |>.toOption then
if isOpener then
if openAnchors.contains a then throw s!"Already open: {a}"
else openAnchors := openAnchors.insert a ""
else
if let some txt := openAnchors[a]? then
anchors := anchors.insert a txt
openAnchors := openAnchors.erase a
else throw s!"Not open: {a}"
else
out := out ++ line ++ "\n"
for (k, v) in openAnchors do
openAnchors := openAnchors.insert k (v ++ line ++ "\n")
for a in openAnchors.keys do
throw s!"Anchor {a} never closed"
return (out, anchors)
@[code_block_expander includePython]
def includePython : CodeBlockExpander
| args, code => do
let {file, anchor?} ← parseThe IncludePythonConfig args
let s ← IO.FS.readFile file
let txt ←
if let some a := anchor? then
let .ok (_, as) := stringAnchors s
| throwErrorAt a m!"Couldn't parse anchors"
let some txt := as[a.getId.eraseMacroScopes.toString]?
| throwErrorAt a m!"Not found in {as.keys}"
pure txt
else
pure s
_ ← ExpectString.expectString "file" code txt
return #[← ``(Block.code $(quote code.getString))] |
fp-lean/book/FPLean/Examples/Commands.lean | import FPLean.Examples.Commands.Env
import FPLean.Examples.Commands.ShLex
import Lean.Elab
import Verso.FS
namespace FPLean.Commands
open Lean
variable {m : _} [Monad m] [MonadEnv m] [MonadLiftT IO m] [MonadLiftT BaseIO m] [MonadError m]
def ensureContainer (container : Ident) : m Container := do
let name := container.getId
if let some c := (containersExt.getState (← getEnv)).find? name then return c
let tmp ← IO.FS.createTempDir
let c : Container := ⟨tmp, {}⟩
let projectRoot : System.FilePath := ".."
let copyErrors : IO.Ref (Array String) ← IO.mkRef #[]
Verso.FS.copyRecursively (fun s => copyErrors.modify (·.push s)) projectRoot tmp shouldCopy
let errs ← (copyErrors.get : IO _)
unless errs.isEmpty do
throwErrorAt container "Errors copying project to container {name}: {indentD <| MessageData.joinSep (errs.toList.map toMessageData) Format.line}"
modifyEnv (containersExt.modifyState · (·.insert name c))
return c
where
shouldCopy (path : System.FilePath) : IO Bool := do
let some x := path.fileName
| return true
return !(x.startsWith ".") && !(x == "site-packages") && !(x == "_out") && !(x == "static")
def requireContainer (container : Ident) : m Container := do
let name := container.getId
if let some c := (containersExt.getState (← getEnv)).find? name then return c
else throwErrorAt container "Not found: '{name}'"
private def localeVars : Array String :=
#["LANG", "LC_ALL"]
private def lakeVars : Array String :=
#["ELAN_TOOLCHAIN", "LEAN_SYSROOT", "LEAN", "LAKE", "LAKE_HOME", "LEAN_PATH", "LAKE_CACHE_DIR", "LEAN_AR", "LEAN_CC", "DYLD_LIBRARY_PATH"]
/-- Remove Elan toolchain directories (entries mentioning both `.elan` and `toolchains`) from a `PATH`-style string. -/
private def fixPath (path : String) :=
System.SearchPath.parse path
|>.iter
|>.map (·.toString)
|>.filter (fun p => ((p.find? ".elan").isNone || (p.find? "toolchains").isNone))
|>.toList
|> System.SearchPath.separator.toString.intercalate
def command (container : Ident) (dir : System.FilePath) (command : StrLit) (viaShell := false) : m IO.Process.Output := do
let c ← ensureContainer container
unless dir.isRelative do
throwError "Relative directory expected, got '{dir}'"
let dir := c.workingDirectory / "examples" / dir
IO.FS.createDirAll dir
let (cmd, args) ← if viaShell then pure ("bash", #["-c", command.getString]) else cmdAndArgs
let extraPath := (← IO.currentDir) / ".." / "examples" / ".lake" / "build" / "bin" |>.toString
let extraPath := if extraPath.contains ' ' || extraPath.contains '"' || extraPath.contains '\'' then extraPath.quote else extraPath
let path := (← IO.getEnv "PATH").map (System.SearchPath.separator.toString ++ ·) |>.getD ""
let out ← IO.Process.output {
cmd := cmd,
args := args,
cwd := dir,
env := #[("PATH", some (extraPath ++ fixPath path))] ++ lakeVars.map (·, none) ++ localeVars.map (·, some "C.UTF-8")
}
if out.exitCode != 0 then
let stdout := m!"Stdout: {indentD out.stdout}"
let stderr := m!"Stderr: {indentD out.stderr}"
throwErrorAt command "Non-zero exit code from '{command.getString}' ({out.exitCode}).\n{indentD stdout}\n{indentD stderr}"
modifyEnv (containersExt.modifyState · (·.insert container.getId { c with outputs := c.outputs.insert command.getString.trimAscii.copy out.stdout }))
return out
where
cmdAndArgs : m (String × Array String) := do
match Shell.shlex command.getString with
| .error e => throwErrorAt command e
| .ok components =>
if h : components.size = 0 then
throwErrorAt command "No command provided"
else
return (components[0], components.extract 1)
def commandOut (container : Ident) (command : StrLit) : m String := do
let c ← requireContainer container
if let some out := c.outputs[command.getString.trimAscii.copy]? then
return out
else throwErrorAt command "Output not found: {indentD command}"
def fileContents (container : Ident) (file : StrLit) : m String := do
let c ← requireContainer container
let filename := c.workingDirectory / "examples" / file.getString
unless (← filename.pathExists) do
throwErrorAt file "{filename} does not exist"
if (← filename.isDir) then
throwErrorAt file "{filename} is a directory"
IO.FS.readFile filename
end Commands |
fp-lean/book/FPLean/Examples/Commands/ShLex.lean | namespace FPLean.Commands.Shell
private inductive State where
| normal
| inSingle
| inDouble
| escaped (st : State)
def shlex (cmd : String) : Except String (Array String) := do
let mut state : State := .normal
let mut iter := cmd.startPos
let mut out : Array String := #[]
let mut current : Option String := none
while h : iter ≠ cmd.endPos do
let c := iter.get h
iter := iter.next h
match state, c with
| .normal, '\\' =>
state := .escaped state
| .normal, '\'' =>
state := .inSingle
current := some <| current.getD ""
| .normal, '"' =>
state := .inDouble
current := some <| current.getD ""
| .normal, ' ' | .normal, '\t' | .normal, '\n' =>
if let some curr := current then
out := out.push curr
current := none
| .normal, _ =>
current := current.getD "" |>.push c |> some
| .escaped s', _ =>
current := current.getD "" |>.push c |> some
state := s'
| .inDouble, '"' =>
state := .normal
| .inDouble, '\\' =>
state := .escaped state
| .inDouble, _ =>
current := current.getD "" |>.push c |> some
| .inSingle, '\'' =>
state := .normal
| .inSingle, _ =>
current := current.getD "" |>.push c |> some
match state with
| .inSingle => throw "Unclosed single quote"
| .inDouble => throw "Unclosed double quote"
| .escaped _ => throw "Unterminated escape"
| .normal =>
if let some curr := current then
out := out.push curr
return out
/-! # Tests -/
/-! ## Basic cases -/
/-- info: Except.ok #["ls", "-la"] -/
#guard_msgs in
#eval shlex "ls -la"
/-- info: Except.ok #[] -/
#guard_msgs in
#eval shlex ""
/-- info: Except.ok #[] -/
#guard_msgs in
#eval shlex " "
/-! ## Quote handling -/
/-- info: Except.ok #["echo", "dont"] -/
#guard_msgs in
#eval shlex "echo 'don''t'"
/-- info: Except.ok #["echo", "don't"] -/
#guard_msgs in
#eval shlex "echo \"don't\""
/-- info: Except.ok #["echo", "don\"t"] -/
#guard_msgs in
#eval shlex "echo 'don\"t'"
/-- info: Except.ok #["echo", "\"quoted\""] -/
#guard_msgs in
#eval shlex "echo \"\\\"quoted\\\"\""
/-- info: Except.ok #["echo", ""] -/
#guard_msgs in
#eval shlex "echo ''"
/-- info: Except.ok #["echo", ""] -/
#guard_msgs in
#eval shlex "echo \"\""
/-! ## Escaping -/
/-- info: Except.ok #["echo", "a b"] -/
#guard_msgs in
#eval shlex "echo a\\ b"
/-- info: Except.ok #["echo", "\"quoted\""] -/
#guard_msgs in
#eval shlex "echo \\\"quoted\\\""
/-- info: Except.ok #["echo", "\\"] -/
#guard_msgs in
#eval shlex "echo \\\\"
/-- info: Except.ok #["echo", "\\\\"] -/
#guard_msgs in
#eval shlex "echo \\\\\\\\"
/-- info: Except.ok #["echo", "a\\b"] -/
#guard_msgs in
#eval shlex "echo a\\\\b"
/-- info: Except.ok #["echo", "a\nb"] -/
#guard_msgs in
#eval shlex "echo a\\\nb"
/-! ## Mixed quotes and escapes -/
/-- info: Except.ok #["echo", "mixed 'quotes'"] -/
#guard_msgs in
#eval shlex "echo \"mixed 'quotes'\""
/-- info: Except.ok #["echo", "mixed \"quotes\""] -/
#guard_msgs in
#eval shlex "echo 'mixed \"quotes\"'"
/-- info: Except.ok #["echo", "partially quoted argument"] -/
#guard_msgs in
#eval shlex "echo \"partially quoted\\ argument\""
/-- info: Except.ok #["echo", "ends with backslash\\"] -/
#guard_msgs in
#eval shlex "echo \"ends with backslash\\\\\""
/-- info: Except.ok #["echo", "single quoted string with \\ backslash"] -/
#guard_msgs in
#eval shlex "echo 'single quoted string with \\ backslash'"
/-! ## Whitespace -/
/-- info: Except.ok #["cmd", "with", "multiple", "spaces"] -/
#guard_msgs in
#eval shlex "cmd with multiple spaces"
/-- info: Except.ok #["cmd", "with", "tabs"] -/
#guard_msgs in
#eval shlex "cmd\twith\ttabs"
/-- info: Except.ok #["cmd", "with", "newlines"] -/
#guard_msgs in
#eval shlex "cmd with\nnewlines"
/-! ## Complex combinations -/
/-- info: Except.ok #["echo", "foobar"] -/
#guard_msgs in
#eval shlex "echo \"foo\"'bar'"
/-- info: Except.ok #["echo", "foobar"] -/
#guard_msgs in
#eval shlex "echo \"foo\"\"bar\""
/-- info: Except.ok #["grep", "term with spaces", "file with spaces.txt"] -/
#guard_msgs in
#eval shlex "grep \"term with spaces\" file\\ with\\ spaces.txt"
/-- info: Except.ok #["echo", "-n", "mixed \"quotes\" and 'apostrophes'"] -/
#guard_msgs in
#eval shlex "echo -n \"mixed \\\"quotes\\\" and 'apostrophes'\""
/-- info: Except.ok #["find", ".", "-name", "*{1,2}*"] -/
#guard_msgs in
#eval shlex "find . -name \"*\\{1,2\\}*\""
/-! ## Potentially problematic edge cases -/
/-- info: Except.error "Unterminated escape" -/
#guard_msgs in
#eval shlex "\\"
/-- info: Except.ok #["cmd", "with", "trailing", "space"] -/
#guard_msgs in
#eval shlex "cmd with trailing space "
/-- info: Except.ok #["cmd", "with", "trailing", "escaped", "space "] -/
#guard_msgs in
#eval shlex "cmd with trailing escaped space\\ "
/-- info: Except.ok #["echo", "foo#bar"] -/
#guard_msgs in
#eval shlex "echo foo#bar"
/-- info: Except.ok #["echo", "fooz"] -/
#guard_msgs in
#eval shlex "echo \"foo\\z\""
/-! ## Tricky error cases -/
/-- info: Except.error "Unclosed double quote" -/
#guard_msgs in
#eval shlex "echo \"unterminated"
/-- info: Except.error "Unclosed single quote" -/
#guard_msgs in
#eval shlex "echo 'unterminated"
/-- info: Except.error "Unterminated escape" -/
#guard_msgs in
#eval shlex "echo \\"
/-- info: Except.ok #["echo", "nested quotes"] -/
#guard_msgs in
#eval shlex "echo \"nested \"quotes\"\""
/-! ## Boundary cases -/
/-- info: Except.ok #["ab"] -/
#guard_msgs in
#eval shlex "a\"\"b"
/-- info: Except.ok #["ab"] -/
#guard_msgs in
#eval shlex "a''b"
/-- info: Except.ok #["ab"] -/
#guard_msgs in
#eval shlex "\"a\"\"b\""
/-- info: Except.ok #["ab"] -/
#guard_msgs in
#eval shlex "'a''b'" |
fp-lean/book/FPLean/Examples/Commands/Env.lean | import Lean.Environment
import Std.Data.HashMap
import SubVerso.Highlighting.Highlighted
import SubVerso.Module
open Lean Std
open SubVerso Highlighting Module Highlighted
namespace FPLean
structure Container where
/-- The container's temporary working directory -/
workingDirectory : System.FilePath
/-- The saved outputs from each command run in the container -/
outputs : HashMap String String := {}
initialize containersExt : (EnvExtension (NameMap Container)) ← registerEnvExtension (pure {}) |
fp-lean/book/FPLean/Monads/Class.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso Code External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Monads.Class"
#doc (Manual) "The Monad Type Class" =>
%%%
tag := "monad-type-class"
%%%
:::paragraph
Rather than having to import an operator like {lit}`ok` or {lit}`andThen` for each type that is a monad, the Lean standard library contains a type class that allows them to be overloaded, so that the same operators can be used for _any_ monad.
Monads have two operations, which are the equivalent of {lit}`ok` and {lit}`andThen`:
```anchor FakeMonad
class Monad (m : Type → Type) where
pure : α → m α
bind : m α → (α → m β) → m β
```
This definition is slightly simplified.
The actual definition in the Lean library is somewhat more involved, and will be presented later.
:::
:::paragraph
The {anchorName MonadOptionExcept}`Monad` instances for {anchorName MonadOptionExcept}`Option` and {anchorTerm MonadOptionExcept}`Except ε` can be created by adapting the definitions of their respective {lit}`andThen` operations:
```anchor MonadOptionExcept
instance : Monad Option where
pure x := some x
bind opt next :=
match opt with
| none => none
| some x => next x
instance : Monad (Except ε) where
pure x := Except.ok x
bind attempt next :=
match attempt with
| Except.error e => Except.error e
| Except.ok x => next x
```
:::
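These instances behave as expected. For example (a quick check; the outputs assume instances equivalent to the definitions above, such as Lean's built-in ones):
```lean
#eval some 5 >>= fun x => some (x + 1)
-- some 6
#eval (Except.ok 5 >>= fun x => Except.ok (x + 1) : Except String Nat)
-- Except.ok 6
```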
:::paragraph
As an example, {lit}`firstThirdFifthSeventh` was defined separately for {anchorTerm Names}`Option α` and {anchorTerm Names}`Except String α` return types.
Now, it can be defined polymorphically for _any_ monad.
It does, however, require a lookup function as an argument, because different monads might fail to find a result in different ways.
The infix version of {anchorName FakeMonad}`bind` is {lit}`>>=`, which plays the same role as {lit}`~~>` in the examples.
```anchor firstThirdFifthSeventhMonad
def firstThirdFifthSeventh [Monad m] (lookup : List α → Nat → m α)
(xs : List α) : m (α × α × α × α) :=
lookup xs 0 >>= fun first =>
lookup xs 2 >>= fun third =>
lookup xs 4 >>= fun fifth =>
lookup xs 6 >>= fun seventh =>
pure (first, third, fifth, seventh)
```
:::
:::paragraph
Given example lists of slow mammals and fast birds, this implementation of {anchorName firstThirdFifthSeventhMonad}`firstThirdFifthSeventh` can be used with {moduleName}`Option`:
```anchor animals
def slowMammals : List String :=
["Three-toed sloth", "Slow loris"]
def fastBirds : List String := [
"Peregrine falcon",
"Saker falcon",
"Golden eagle",
"Gray-headed albatross",
"Spur-winged goose",
"Swift",
"Anna's hummingbird"
]
```
```anchor noneSlow
#eval firstThirdFifthSeventh (fun xs i => xs[i]?) slowMammals
```
```anchorInfo noneSlow
none
```
```anchor someFast
#eval firstThirdFifthSeventh (fun xs i => xs[i]?) fastBirds
```
```anchorInfo someFast
some ("Peregrine falcon", "Golden eagle", "Spur-winged goose", "Anna's hummingbird")
```
:::
:::paragraph
After renaming {anchorName getOrExcept}`Except`'s lookup function {lit}`get` to something more specific, the very same implementation of {anchorName firstThirdFifthSeventhMonad}`firstThirdFifthSeventh` can be used with {anchorName getOrExcept}`Except` as well:
```anchor getOrExcept
def getOrExcept (xs : List α) (i : Nat) : Except String α :=
match xs[i]? with
| none =>
Except.error s!"Index {i} not found (maximum is {xs.length - 1})"
| some x =>
Except.ok x
```
```anchor errorSlow
#eval firstThirdFifthSeventh getOrExcept slowMammals
```
```anchorInfo errorSlow
Except.error "Index 2 not found (maximum is 1)"
```
```anchor okFast
#eval firstThirdFifthSeventh getOrExcept fastBirds
```
```anchorInfo okFast
Except.ok ("Peregrine falcon", "Golden eagle", "Spur-winged goose", "Anna's hummingbird")
```
The fact that {anchorName firstThirdFifthSeventhMonad}`m` must have a {anchorName firstThirdFifthSeventhMonad}`Monad` instance means that the {lit}`>>=` and {anchorName firstThirdFifthSeventhMonad}`pure` operations are available.
:::
# General Monad Operations
%%%
tag := "monad-class-polymorphism"
%%%
:::paragraph
Because many different types are monads, functions that are polymorphic over _any_ monad are very powerful.
For example, the function {anchorName mapM}`mapM` is a version of {anchorName Names (show:=map)}`Functor.map` that uses a {anchorName mapM}`Monad` to sequence and combine the results of applying a function:
```anchor mapM
def mapM [Monad m] (f : α → m β) : List α → m (List β)
| [] => pure []
| x :: xs =>
f x >>= fun hd =>
mapM f xs >>= fun tl =>
pure (hd :: tl)
```
The return type of the function argument {anchorName mapM}`f` determines which {anchorName mapM}`Monad` instance will be used.
In other words, {anchorName mapM}`mapM` can be used for functions that produce logs, for functions that can fail, or for functions that use mutable state.
Because {anchorName mapM}`f`'s type determines the available effects, they can be tightly controlled by API designers.
:::
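For instance, when the function passed to `mapM` produces {lit}`Option` values, a single failure causes the whole traversal to fail (a sketch, assuming the `mapM` definition above):
```lean
#eval mapM (fun x => if x > 0 then some x else none) ([1, 2, 3] : List Int)
-- some [1, 2, 3]
#eval mapM (fun x => if x > 0 then some x else none) ([1, -2, 3] : List Int)
-- none
```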
:::paragraph
As described in {ref "numbering-tree-nodes"}[this chapter's introduction], {anchorTerm StateEx}`State σ α` represents programs that make use of a mutable variable of type {anchorName StateEx}`σ` and return a value of type {anchorName StateEx}`α`.
These programs are actually functions from a starting state to a pair of a value and a final state.
The {anchorName StateMonad}`Monad` class requires that its parameter expect a single type argument—that is, it should be a {anchorTerm StateEx}`Type → Type`.
This means that the instance for {anchorName StateMonad}`State` should mention the state type {anchorName StateMonad}`σ`, which becomes a parameter to the instance:
```anchor StateMonad
instance : Monad (State σ) where
pure x := fun s => (s, x)
bind first next :=
fun s =>
let (s', x) := first s
next x s'
```
This means that the type of the state cannot change between calls to {anchorName StateEx}`get` and {anchorName StateEx}`set` that are sequenced using {anchorName StateMonad}`bind`, which is a reasonable rule for stateful computations.
The operator {anchorName increment}`increment` increases a saved state by a given amount, returning the old value:
```anchor increment
def increment (howMuch : Int) : State Int Int :=
get >>= fun i =>
set (i + howMuch) >>= fun () =>
pure i
```
:::
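Because {lit}`State Int Int` unfolds to a function from an initial state, `increment` can be run by applying it directly to a starting value (a sketch, assuming the book's `State`, `get`, and `set` definitions):
```lean
#eval increment 5 0
-- (5, 0): the saved state becomes 5, and the old value 0 is returned
```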
:::paragraph
Using {anchorName mapMincrementOut}`mapM` with {anchorName mapMincrementOut}`increment` results in a program that computes the sum of the entries in a list.
More specifically, the mutable variable contains the sum so far, while the resulting list contains a running sum.
In other words, {anchorTerm mapMincrement}`mapM increment` has type {anchorTerm mapMincrement}`List Int → State Int (List Int)`, and expanding the definition of {anchorName StateMonad}`State` yields {anchorTerm mapMincrement2}`List Int → Int → (Int × List Int)`.
It takes an initial sum as an argument, which should be {anchorTerm mapMincrementOut}`0`:
```anchor mapMincrementOut
#eval mapM increment [1, 2, 3, 4, 5] 0
```
```anchorInfo mapMincrementOut
(15, [0, 1, 3, 6, 10])
```
:::
:::paragraph
A {ref "logging"}[logging effect] can be represented using {anchorName MonadWriter}`WithLog`.
Just like {anchorName StateEx}`State`, its {anchorName MonadWriter}`Monad` instance is polymorphic with respect to the type of the logged data:
```anchor MonadWriter
instance : Monad (WithLog logged) where
pure x := {log := [], val := x}
bind result next :=
let {log := thisOut, val := thisRes} := result
let {log := nextOut, val := nextRes} := next thisRes
{log := thisOut ++ nextOut, val := nextRes}
```
:::
:::paragraph
{anchorName saveIfEven}`saveIfEven` is a function that logs even numbers but returns its argument unchanged:
```anchor saveIfEven
def saveIfEven (i : Int) : WithLog Int Int :=
(if isEven i then
save i
else pure ()) >>= fun () =>
pure i
```
Using this function with {anchorName mapMsaveIfEven}`mapM` results in a log containing even numbers paired with an unchanged input list:
```anchor mapMsaveIfEven
#eval mapM saveIfEven [1, 2, 3, 4, 5]
```
```anchorInfo mapMsaveIfEven
{ log := [2, 4], val := [1, 2, 3, 4, 5] }
```
:::
# The Identity Monad
%%%
tag := "Id-monad"
%%%
Monads encode programs with effects, such as failure, exceptions, or logging, into explicit representations as data and functions.
Sometimes, however, an API will be written to use a monad for flexibility, but the API's client may not require any encoded effects.
The {deftech}_identity monad_ is a monad that has no effects.
It allows pure code to be used with monadic APIs:
```anchor IdMonad
def Id (t : Type) : Type := t
instance : Monad Id where
pure x := x
bind x f := f x
```
The type of {anchorName IdMonad}`pure` should be {anchorTerm IdMore}`α → Id α`, but {anchorTerm IdMore}`Id α` reduces to just {anchorTerm IdMore}`α`.
Similarly, the type of {anchorName IdMonad}`bind` should be {anchorTerm IdMore}`α → (α → Id β) → Id β`.
Because this reduces to {anchorTerm IdMore}`α → (α → β) → β`, the second argument can be applied to the first to find the result.
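This reduction behavior can be observed directly (using Lean's built-in {lit}`Id`, which matches the definition above):
```lean
#eval (pure 3 : Id Nat) >>= fun x => pure (x + 1)
-- 4
```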
:::paragraph
With the identity monad, {anchorName mapMId}`mapM` becomes equivalent to {anchorName Names (show:=map)}`Functor.map`.
To call it this way, however, Lean requires a hint that the intended monad is {anchorName mapMId}`Id`:
```anchor mapMId
def numbers := mapM (m := Id) (do return · + 1) [1, 2, 3, 4, 5]
```
Using {anchorName mapMIdId}`mapM` in a context in which the type doesn't provide any specific hints about which monad is to be used results in an “instance problem is stuck” message:
```anchor mapMIdId
def numbers := mapM (do return · + 1) [1, 2, 3, 4, 5]
```
```anchorError mapMIdId
typeclass instance problem is stuck
Pure ?m.6
Note: Lean will not try to resolve this typeclass instance problem because the type argument to `Pure` is a metavariable. This argument must be fully determined before Lean will try to resolve the typeclass.
Hint: Adding type annotations and supplying implicit arguments to functions can give Lean more information for typeclass resolution. For example, if you have a variable `x` that you intend to be a `Nat`, but Lean reports it as having an unresolved type like `?m`, replacing `x` with `(x : Nat)` can get typeclass resolution un-stuck.
```
:::
# The Monad Contract
%%%
tag := "monad-contract"
%%%
Just as every pair of instances of {anchorName MonadContract}`BEq` and {anchorName MonadContract}`Hashable` should ensure that any two equal values have the same hash, there is a contract that each instance of {anchorName MonadContract}`Monad` should obey.
First, {anchorName MonadContract}`pure` should be a left identity of {anchorName MonadContract}`bind`.
That is, {anchorTerm MonadContract}`bind (pure v) f` should be the same as {anchorTerm MonadContract}`f v`.
Secondly, {anchorName MonadContract}`pure` should be a right identity of {anchorName MonadContract}`bind`, so {anchorTerm MonadContract}`bind v pure` is the same as {anchorName MonadContract2}`v`.
Finally, {anchorName MonadContract}`bind` should be associative, so {anchorTerm MonadContract}`bind (bind v f) g` is the same as {anchorTerm MonadContract}`bind v (fun x => bind (f x) g)`.
This contract specifies the expected properties of programs with effects more generally.
Because {anchorName MonadContract}`pure` has no effects, sequencing its effects with {anchorName MonadContract}`bind` shouldn't change the result.
The associativity of {anchorName MonadContract}`bind` essentially says that the bookkeeping of sequencing itself doesn't matter, so long as the order in which effects occur is preserved.
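For {lit}`Option`, these laws can be checked with short proofs (using Lean's built-in instance, which matches the one given earlier):
```lean
example (v : Nat) (f : Nat → Option Nat) :
    (pure v >>= f) = f v := rfl

example (v : Option Nat) :
    (v >>= pure) = v := by cases v <;> rfl

example (v : Option Nat) (f g : Nat → Option Nat) :
    ((v >>= f) >>= g) = (v >>= fun x => f x >>= g) := by
  cases v <;> rfl
```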
# Exercises
%%%
tag := "monad-class-exercises"
%%%
## Mapping on a Tree
%%%
tag := none
%%%
:::paragraph
Define a function {anchorName ex1}`BinTree.mapM`.
By analogy to {anchorName mapM}`mapM` for lists, this function should apply a monadic function to each data entry in a tree, as a preorder traversal.
The type signature should be:
```anchorTerm ex1
def BinTree.mapM [Monad m] (f : α → m β) : BinTree α → m (BinTree β)
```
:::
## The Option Monad Contract
%%%
tag := none
%%%
:::paragraph
First, write a convincing argument that the {anchorName badOptionMonad}`Monad` instance for {anchorName badOptionMonad}`Option` satisfies the monad contract.
Then, consider the following instance:
```anchor badOptionMonad
instance : Monad Option where
pure x := some x
bind opt next := none
```
Both methods have the correct type.
Why does this instance violate the monad contract?
::: |
fp-lean/book/FPLean/Monads/IO.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso Code External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Monads.IO"
#doc (Manual) "The IO Monad" =>
%%%
tag := "io-monad"
%%%
{anchorName names}`IO` as a monad can be understood from two perspectives, which were described in the section on {ref "running-a-program"}[running programs].
Each can help to understand the meanings of {anchorName names}`pure` and {anchorName names}`bind` for {anchorName names}`IO`.
From the first perspective, an {anchorName names}`IO` action is an instruction to Lean's run-time system.
For example, the instruction might be “read a string from this file descriptor, then re-invoke the pure Lean code with the string”.
This perspective is an _exterior_ one, viewing the program from the perspective of the operating system.
In this case, {anchorName names}`pure` is an {anchorName names}`IO` action that does not request any effects from the RTS, and {anchorName names}`bind` instructs the RTS to first carry out one potentially-effectful operation and then invoke the rest of the program with the resulting value.
From the second perspective, an {anchorName names}`IO` action transforms the whole world.
{anchorName names}`IO` actions are actually pure, because they receive a unique world as an argument and then return the changed world.
This perspective is an _interior_ one that matches how {anchorName names}`IO` is represented inside of Lean.
The world is represented in Lean as a token, and the {anchorName names}`IO` monad is structured to make sure that each token is used exactly once.
To see how this works, it can be helpful to peel back one definition at a time.
The {kw}`#print` command reveals the internals of Lean datatypes and definitions.
For example,
```anchor printNat
#print Nat
```
results in
```anchorInfo printNat
inductive Nat : Type
number of parameters: 0
constructors:
Nat.zero : Nat
Nat.succ : Nat → Nat
```
and
```anchor printCharIsAlpha
#print Char.isAlpha
```
results in
```anchorInfo printCharIsAlpha
def Char.isAlpha : Char → Bool :=
fun c => c.isUpper || c.isLower
```
Sometimes, the output of {kw}`#print` includes Lean features that have not yet been presented in this book.
For example,
```anchor printListIsEmpty
#print List.isEmpty
```
produces
```anchorInfo printListIsEmpty
def List.isEmpty.{u} : {α : Type u} → List α → Bool :=
fun {α} x =>
match x with
| [] => true
| head :: tail => false
```
which includes a {lit}`.{u}` after the definition's name, and annotates types as {anchorTerm names}`Type u` rather than just {anchorTerm names}`Type`.
This can be safely ignored for now.
Printing the definition of {anchorName names}`IO` shows that it's defined in terms of simpler structures:
```anchor printIO
#print IO
```
```anchorInfo printIO
@[reducible] def IO : Type → Type :=
EIO IO.Error
```
{anchorName printIOError}`IO.Error` represents all the errors that could be thrown by an {anchorName names}`IO` action:
```anchor printIOError
#print IO.Error
```
```anchorInfo printIOError
inductive IO.Error : Type
number of parameters: 0
constructors:
IO.Error.alreadyExists : Option String → UInt32 → String → IO.Error
IO.Error.otherError : UInt32 → String → IO.Error
IO.Error.resourceBusy : UInt32 → String → IO.Error
IO.Error.resourceVanished : UInt32 → String → IO.Error
IO.Error.unsupportedOperation : UInt32 → String → IO.Error
IO.Error.hardwareFault : UInt32 → String → IO.Error
IO.Error.unsatisfiedConstraints : UInt32 → String → IO.Error
IO.Error.illegalOperation : UInt32 → String → IO.Error
IO.Error.protocolError : UInt32 → String → IO.Error
IO.Error.timeExpired : UInt32 → String → IO.Error
IO.Error.interrupted : String → UInt32 → String → IO.Error
IO.Error.noFileOrDirectory : String → UInt32 → String → IO.Error
IO.Error.invalidArgument : Option String → UInt32 → String → IO.Error
IO.Error.permissionDenied : Option String → UInt32 → String → IO.Error
IO.Error.resourceExhausted : Option String → UInt32 → String → IO.Error
IO.Error.inappropriateType : Option String → UInt32 → String → IO.Error
IO.Error.noSuchThing : Option String → UInt32 → String → IO.Error
IO.Error.unexpectedEof : IO.Error
IO.Error.userError : String → IO.Error
```
{anchorTerm names}`EIO ε α` represents {anchorName names}`IO` actions that will either terminate with an error of type {anchorName names}`ε` or succeed with a value of type {anchorName names}`α`.
This means that, like the {anchorTerm names}`Except ε` monad, the {anchorName names}`IO` monad includes the ability to define error handling and exceptions.
Peeling back another layer, {anchorName names}`EIO` is itself defined in terms of a simpler structure:
```anchor printEIO
#print EIO
```
```anchorInfo printEIO
def EIO : Type → Type → Type :=
fun ε α => EST ε IO.RealWorld α
```
The {anchorName printEStateM}`EST` monad includes both errors and state—it's similar to a combination of {anchorName names}`Except` and {anchorName State (module := Examples.Monads)}`State`.
It is defined using another type, {anchorName printEStateMResult}`EST.Out`:
```anchor printEStateM
#print EST
```
```anchorInfo printEStateM
def EST : Type → Type → Type → Type :=
fun ε σ α => Void σ → EST.Out ε σ α
```
In other words, a program with type {anchorTerm EStateMNames}`EST ε σ α` is a function that accepts an initial state of type {anchorName EStateMNames}`σ` and returns an {anchorTerm EStateMNames}`EST.Out ε σ α`.
The state is wrapped in the type {anchorName VoidSigma}`Void`, which is an internal primitive that causes a value to be erased from compiled code; {anchorTerm VoidSigma}`Void σ` has the same representation as {anchorName save (module:=Examples.Monads)}`Unit`.
{anchorName EStateMNames}`EST.Out` is very much like the definition of {anchorName names}`Except`, with one constructor that indicates a successful termination and one constructor that indicates an error:
```anchor printEStateMResult
#print EST.Out
```
```anchorInfo printEStateMResult
inductive EST.Out : Type → Type → Type → Type
number of parameters: 3
constructors:
EST.Out.ok : {ε σ α : Type} → α → Void σ → EST.Out ε σ α
EST.Out.error : {ε σ α : Type} → ε → Void σ → EST.Out ε σ α
```
Just like {anchorTerm Except (module:=Examples.Monads)}`Except ε α`, the {anchorName names (show := ok)}`EST.Out.ok` constructor includes a result of type {anchorName Except (module:=Examples.Monads)}`α`, and the {anchorName names (show := error)}`EST.Out.error` constructor includes an exception of type {anchorName Except (module:=Examples.Monads)}`ε`.
Unlike {anchorName names}`Except`, both constructors have an additional state field that includes the final state of the computation.
The {anchorName names}`Monad` instance for {anchorTerm names}`EST ε σ` requires {anchorName names}`pure` and {anchorName names}`bind`.
Just as with {anchorName State (module:=Examples.Monads)}`State`, the implementation of {anchorName names}`pure` for {anchorName names}`EST` accepts an initial state and returns it unchanged, and just as with {anchorName names}`Except`, it returns its argument in the {anchorName names (show := ok)}`EST.Out.ok` constructor:
```anchor printEStateMpure
#print EST.pure
```
```anchorInfo printEStateMpure
protected def EST.pure : {α ε σ : Type} → α → EST ε σ α :=
fun {α ε σ} a s => EST.Out.ok a s
```
{kw}`protected` means that the full name {anchorName printEStateMpure}`EST.pure` is needed even if the {anchorName names}`EST` namespace has been opened.
Similarly, {anchorName names}`bind` for {anchorName names}`EST` takes an initial state as an argument.
It passes this initial state to its first action.
Like {anchorName names}`bind` for {anchorName names}`Except`, it then checks whether the result is an error.
If so, the error is returned unchanged and the second argument to {anchorName names}`bind` remains unused.
If the result was a success, then the second argument is applied to both the returned value and to the resulting state.
```anchor printEStateMbind
#print EST.bind
```
```anchorInfo printEStateMbind
protected def EST.bind : {ε σ α β : Type} → EST ε σ α → (α → EST ε σ β) → EST ε σ β :=
fun {ε σ α β} x f s =>
match x s with
| EST.Out.ok a s => f a s
| EST.Out.error e s => EST.Out.error e s
```
Putting all of this together, {anchorName names}`IO` is a monad that tracks state and errors at the same time.
The collection of available errors is that given by the datatype {anchorName printIOError}`IO.Error`, which has constructors that describe many things that can go wrong in a program.
The state is a type that represents the real world, called {anchorTerm RealWorld}`IO.RealWorld`.
Each basic {anchorName names}`IO` action receives this real world and returns another one, paired either with an error or a result.
In {anchorName names}`IO`, {anchorName names}`pure` returns the world unchanged, while {anchorName names}`bind` passes the modified world from one action into the next action.
Because the entire universe doesn't fit in a computer's memory, the world being passed around is just a representation.
So long as world tokens are not re-used, the representation is safe.
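A toy model makes the world-passing discipline concrete (this is an illustration, not Lean's actual {lit}`IO`; a {lit}`Nat` stands in for the world token):
```lean
def ToyIO (α : Type) : Type := Nat → α × Nat

def ToyIO.pure (x : α) : ToyIO α := fun w => (x, w)

def ToyIO.bind (act : ToyIO α) (next : α → ToyIO β) : ToyIO β :=
  fun w =>
    let (x, w') := act w   -- run the first action, obtaining a new world
    next x w'              -- thread the new world into the rest of the program
```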
The type {anchorTerm RealWorld}`IO.RealWorld` is a trivial primitive type that does not need any representation at all, because it is only used inside of {anchorName VoidSigma}`Void`. |
fp-lean/book/FPLean/Monads/Arithmetic.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso Code External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Monads.Class"
#doc (Manual) "Example: Arithmetic in Monads" =>
%%%
tag := "monads-arithmetic-example"
%%%
Monads are a way of encoding programs with side effects into a language that does not have them.
It would be easy to read this as a sort of admission that pure functional programs are missing something important, requiring programmers to jump through hoops just to write a normal program.
However, while using the {moduleName}`Monad` API does impose a syntactic cost on a program, it brings two important benefits:
1. Programs must be honest about which effects they use in their types. A quick glance at a type signature describes _everything_ that the program can do, rather than just what it accepts and what it returns.
2. Not every language provides the same effects. For example, only some languages have exceptions. Other languages have unique, exotic effects, such as [Icon's searching over multiple values](https://www2.cs.arizona.edu/icon/) and Scheme or Ruby's continuations. Because monads can encode _any_ effect, programmers can choose which ones are the best fit for a given application, rather than being stuck with what the language developers provided.
One example of a program that can make sense in a variety of monads is an evaluator for arithmetic expressions.
# Arithmetic Expressions
%%%
tag := "monads-arithmetic-example-expr"
%%%
:::paragraph
An arithmetic expression is either a literal integer or a primitive binary operator applied to two expressions. The operators are addition, subtraction, multiplication, and division:
```anchor ExprArith
inductive Expr (op : Type) where
| const : Int → Expr op
| prim : op → Expr op → Expr op → Expr op
inductive Arith where
| plus
| minus
| times
| div
```
:::
:::paragraph
The expression {lit}`2 + 3` is represented:
```anchor twoPlusThree
open Expr in
open Arith in
def twoPlusThree : Expr Arith :=
prim plus (const 2) (const 3)
```
and {lit}`14 / (45 - 5 * 9)` is represented:
```anchor exampleArithExpr
open Expr in
open Arith in
def fourteenDivided : Expr Arith :=
prim div (const 14)
(prim minus (const 45)
(prim times (const 5)
(const 9)))
```
:::
# Evaluating Expressions
%%%
tag := "monads-arithmetic-example-eval"
%%%
:::paragraph
Because expressions include division, and division by zero is undefined, evaluation might fail.
One way to represent failure is to use {anchorName evaluateOptionCommingled}`Option`:
```anchor evaluateOptionCommingled
def evaluateOption : Expr Arith → Option Int
| Expr.const i => pure i
| Expr.prim p e1 e2 =>
evaluateOption e1 >>= fun v1 =>
evaluateOption e2 >>= fun v2 =>
match p with
| Arith.plus => pure (v1 + v2)
| Arith.minus => pure (v1 - v2)
| Arith.times => pure (v1 * v2)
| Arith.div => if v2 == 0 then none else pure (v1 / v2)
```
:::
:::paragraph
This definition uses the {anchorTerm MonadOptionExcept}`Monad Option` instance to propagate failures from evaluating both branches of a binary operator.
However, the function mixes two concerns: evaluating subexpressions and applying a binary operator to the results.
It can be improved by splitting it into two functions:
```anchor evaluateOptionSplit
def applyPrim : Arith → Int → Int → Option Int
| Arith.plus, x, y => pure (x + y)
| Arith.minus, x, y => pure (x - y)
| Arith.times, x, y => pure (x * y)
| Arith.div, x, y => if y == 0 then none else pure (x / y)
def evaluateOption : Expr Arith → Option Int
| Expr.const i => pure i
| Expr.prim p e1 e2 =>
evaluateOption e1 >>= fun v1 =>
evaluateOption e2 >>= fun v2 =>
applyPrim p v1 v2
```
:::
:::paragraph
Running {anchorTerm fourteenDivOption}`#eval evaluateOption fourteenDivided` yields {anchorInfo fourteenDivOption}`none`, as expected, but this is not a very useful error message.
Because the code was written using {lit}`>>=` rather than by explicitly handling the {anchorName MonadOptionExcept}`none` constructor, only a small modification is required for it to provide an error message on failure:
```anchor evaluateExcept
def applyPrim : Arith → Int → Int → Except String Int
| Arith.plus, x, y => pure (x + y)
| Arith.minus, x, y => pure (x - y)
| Arith.times, x, y => pure (x * y)
| Arith.div, x, y =>
if y == 0 then
Except.error s!"Tried to divide {x} by zero"
else pure (x / y)
def evaluateExcept : Expr Arith → Except String Int
| Expr.const i => pure i
| Expr.prim p e1 e2 =>
evaluateExcept e1 >>= fun v1 =>
evaluateExcept e2 >>= fun v2 =>
applyPrim p v1 v2
```
The only difference is that the type signature mentions {anchorTerm evaluateExcept}`Except String` instead of {anchorName Names}`Option`, and the failing case uses {anchorName evaluateExcept}`Except.error` instead of {anchorName evaluateM}`none`.
By making the evaluator polymorphic over its monad and passing it {anchorName evaluateM}`applyPrim` as an argument, a single evaluator becomes capable of both forms of error reporting:
```anchor evaluateM
def applyPrimOption : Arith → Int → Int → Option Int
| Arith.plus, x, y => pure (x + y)
| Arith.minus, x, y => pure (x - y)
| Arith.times, x, y => pure (x * y)
| Arith.div, x, y =>
if y == 0 then
none
else pure (x / y)
def applyPrimExcept : Arith → Int → Int → Except String Int
| Arith.plus, x, y => pure (x + y)
| Arith.minus, x, y => pure (x - y)
| Arith.times, x, y => pure (x * y)
| Arith.div, x, y =>
if y == 0 then
Except.error s!"Tried to divide {x} by zero"
else pure (x / y)
def evaluateM [Monad m]
(applyPrim : Arith → Int → Int → m Int) :
Expr Arith → m Int
| Expr.const i => pure i
| Expr.prim p e1 e2 =>
evaluateM applyPrim e1 >>= fun v1 =>
evaluateM applyPrim e2 >>= fun v2 =>
applyPrim p v1 v2
```
:::
Using it with {anchorName evaluateMOption}`applyPrimOption` works just like the first evaluator:
```anchor evaluateMOption
#eval evaluateM applyPrimOption fourteenDivided
```
```anchorInfo evaluateMOption
none
```
Similarly, using it with {anchorName evaluateMExcept}`applyPrimExcept` works just like the version with error messages:
```anchor evaluateMExcept
#eval evaluateM applyPrimExcept fourteenDivided
```
```anchorInfo evaluateMExcept
Except.error "Tried to divide 14 by zero"
```
:::paragraph
The code can still be improved.
The functions {anchorName evaluateMOption}`applyPrimOption` and {anchorName evaluateMExcept}`applyPrimExcept` differ only in their treatment of division, which can be extracted into another parameter to the evaluator:
```anchor evaluateMRefactored
def applyDivOption (x : Int) (y : Int) : Option Int :=
if y == 0 then
none
else pure (x / y)
def applyDivExcept (x : Int) (y : Int) : Except String Int :=
if y == 0 then
Except.error s!"Tried to divide {x} by zero"
else pure (x / y)
def applyPrim [Monad m]
(applyDiv : Int → Int → m Int) :
Arith → Int → Int → m Int
| Arith.plus, x, y => pure (x + y)
| Arith.minus, x, y => pure (x - y)
| Arith.times, x, y => pure (x * y)
| Arith.div, x, y => applyDiv x y
def evaluateM [Monad m]
(applyDiv : Int → Int → m Int) :
Expr Arith → m Int
| Expr.const i => pure i
| Expr.prim p e1 e2 =>
evaluateM applyDiv e1 >>= fun v1 =>
evaluateM applyDiv e2 >>= fun v2 =>
applyPrim applyDiv p v1 v2
```
In this refactored code, the fact that the two code paths differ only in their treatment of failure has been made fully apparent.
:::
# Further Effects
%%%
tag := "monads-arithmetic-example-effects"
%%%
Failure and exceptions are not the only kinds of effects that can be interesting when working with an evaluator.
While division's only side effect is failure, adding other primitive operators to the expressions makes it possible to express other effects.
The first step is an additional refactoring, extracting division from the datatype of primitives:
```anchor PrimCanFail
inductive Prim (special : Type) where
| plus
| minus
| times
| other : special → Prim special
inductive CanFail where
| div
```
The name {anchorName PrimCanFail}`CanFail` suggests that the effect introduced by division is potential failure.
The second step is to broaden the scope of the division handler argument to {anchorName evaluateMMorePoly}`evaluateM` so that it can process any special operator:
```anchor evaluateMMorePoly
def divOption : CanFail → Int → Int → Option Int
| CanFail.div, x, y => if y == 0 then none else pure (x / y)
def divExcept : CanFail → Int → Int → Except String Int
| CanFail.div, x, y =>
if y == 0 then
Except.error s!"Tried to divide {x} by zero"
else pure (x / y)
def applyPrim [Monad m]
(applySpecial : special → Int → Int → m Int) :
Prim special → Int → Int → m Int
| Prim.plus, x, y => pure (x + y)
| Prim.minus, x, y => pure (x - y)
| Prim.times, x, y => pure (x * y)
| Prim.other op, x, y => applySpecial op x y
def evaluateM [Monad m]
(applySpecial : special → Int → Int → m Int) :
Expr (Prim special) → m Int
| Expr.const i => pure i
| Expr.prim p e1 e2 =>
evaluateM applySpecial e1 >>= fun v1 =>
evaluateM applySpecial e2 >>= fun v2 =>
applyPrim applySpecial p v1 v2
```
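The generalized evaluator can be tried on an expression built from the new operators (a sketch, assuming the definitions above; {lit}`14 / (5 - 5)` is reconstructed using `Prim CanFail`):
```lean
open Expr Prim in
#eval evaluateM divExcept
  (prim (other CanFail.div) (const 14)
    (prim minus (const 5) (const 5)))
-- Except.error "Tried to divide 14 by zero"
```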
## No Effects
%%%
tag := "monads-arithmetic-example-no-effects"
%%%
The type {anchorName applyEmpty}`Empty` has no constructors, and thus no values, like the {Kotlin}`Nothing` type in Scala or Kotlin.
In Scala and Kotlin, {Kotlin}`Nothing` can represent computations that never return a result, such as functions that crash the program, throw exceptions, or always fall into infinite loops.
An argument to a function or method of type {Kotlin}`Nothing` indicates dead code, as there will never be a suitable argument value.
Lean doesn't support infinite loops or exceptions, but {anchorName applyEmpty}`Empty` is still useful as an indication to the type system that a function cannot be called.
Using the syntax {anchorTerm nomatch}`nomatch E` when {anchorName nomatch}`E` is an expression whose type has no constructors indicates to Lean that the current expression need not return a result, because it could never have been called.
Using {anchorName applyEmpty}`Empty` as the parameter to {anchorName PrimCanFail}`Prim` indicates that there are no additional cases beyond {anchorName evaluateMMorePoly}`Prim.plus`, {anchorName evaluateMMorePoly}`Prim.minus`, and {anchorName evaluateMMorePoly}`Prim.times`, because it is impossible to come up with a value of type {anchorName nomatch}`Empty` to place in the {anchorName evaluateMMorePoly}`Prim.other` constructor.
Because a function to apply an operator of type {anchorName nomatch}`Empty` to two integers can never be called, it doesn't need to return a result.
Thus, it can be used in _any_ monad:
```anchor applyEmpty
def applyEmpty [Monad m] (op : Empty) (_ : Int) (_ : Int) : m Int :=
nomatch op
```
This can be used together with {anchorName evalId}`Id`, the identity monad, to evaluate expressions that have no effects whatsoever:
```anchor evalId
open Expr Prim in
#eval evaluateM (m := Id) applyEmpty (prim plus (const 5) (const (-14)))
```
```anchorInfo evalId
-9
```
## Nondeterministic Search
%%%
tag := "nondeterministic-search"
%%%
Instead of simply failing when encountering division by zero, it would also be sensible to backtrack and try a different input.
Given the right monad, the very same {anchorName evalId}`evaluateM` can perform a nondeterministic search for a _set_ of answers that do not result in failure.
This requires, in addition to division, some means of specifying a choice of results.
One way to do this is to add a function {lit}`choose` to the language of expressions that instructs the evaluator to pick either of its arguments while searching for non-failing results.
The result of the evaluator is now a multiset of values, rather than a single value.
The rules for evaluation into a multiset are:
* Constants $`n` evaluate to singleton sets $`\{n\}`.
* Arithmetic operators other than division are called on each pair from the Cartesian product of the results of their operands, so $`X + Y` evaluates to $`\{ x + y \mid x ∈ X, y ∈ Y \}`.
* Division $`X / Y` evaluates to $`\{ x / y \mid x ∈ X, y ∈ Y, y ≠ 0\}`. In other words, all $`0` values in $`Y` are thrown out.
* A choice $`\mathrm{choose}(x, y)` evaluates to $`\{ x, y \}`.
For example, $`1 + \mathrm{choose}(2, 5)` evaluates to $`\{ 3, 6 \}`, $`1 + 2 / 0` evaluates to $`\{\}`, and $`90 / (\mathrm{choose}(-5, 5) + 5)` evaluates to $`\{ 9 \}`.
Using multisets instead of true sets simplifies the code by removing the need to check for uniqueness of elements.
:::paragraph
A monad that represents this nondeterministic effect must be able to represent a situation in which there are no answers, and a situation in which there is at least one answer together with any remaining answers:
```anchor Many (module := Examples.Monads.Many)
inductive Many (α : Type) where
| none : Many α
| more : α → (Unit → Many α) → Many α
```
This datatype looks very much like {anchorName fromList (module:=Examples.Monads.Many)}`List`.
The difference is that where {anchorName etc}`List.cons` stores the rest of the list, {anchorName Many (module:=Examples.Monads.Many)}`more` stores a function that should compute the remaining values on demand.
This means that a consumer of {anchorName Many (module:=Examples.Monads.Many)}`Many` can stop the search when some number of results have been found.
:::
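Because {lit}`more` delays the rest of the search behind a function, a consumer can stop early even when the space of results is unbounded. The following sketch is not part of the examples project; {lit}`Many.repeat`, a hypothetical helper, produces an endless stream of copies of a value, yet {lit}`take` still terminates because only the requested results are ever computed:
```lean
-- Needed so that `partial def` can return `Many α`
instance : Inhabited (Many α) := ⟨Many.none⟩

-- An unbounded multiset: one result, then the same search again, on demand
partial def Many.repeat (x : α) : Many α :=
  Many.more x (fun () => Many.repeat x)

-- Only three results are computed; the rest of the search is never forced
#eval (Many.repeat 7).take 3
```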
:::paragraph
A single result is represented by a {anchorName Many (module:=Examples.Monads.Many)}`more` constructor that returns no further results:
```anchor one (module := Examples.Monads.Many)
def Many.one (x : α) : Many α := Many.more x (fun () => Many.none)
```
:::
:::paragraph
The union of two multisets of results can be computed by checking whether the first multiset is empty.
If so, the second multiset is the union.
If not, the union consists of the first element of the first multiset followed by the union of the rest of the first multiset with the second multiset:
```anchor union (module := Examples.Monads.Many)
def Many.union : Many α → Many α → Many α
| Many.none, ys => ys
| Many.more x xs, ys => Many.more x (fun () => union (xs ()) ys)
```
:::
:::paragraph
It can be convenient to start a search process with a list of values.
{anchorName fromList (module:=Examples.Monads.Many)}`Many.fromList` converts a list into a multiset of results:
```anchor fromList (module := Examples.Monads.Many)
def Many.fromList : List α → Many α
| [] => Many.none
| x :: xs => Many.more x (fun () => fromList xs)
```
Similarly, once a search has been specified, it can be convenient to extract either a number of values, or all the values:
```anchor take (module := Examples.Monads.Many)
def Many.take : Nat → Many α → List α
| 0, _ => []
| _ + 1, Many.none => []
| n + 1, Many.more x xs => x :: (xs ()).take n
def Many.takeAll : Many α → List α
| Many.none => []
| Many.more x xs => x :: (xs ()).takeAll
```
:::
A {anchorTerm MonadMany (module:=Examples.Monads.Many)}`Monad Many` instance requires a {anchorName MonadContract}`bind` operator.
In a nondeterministic search, sequencing two operations consists of taking all possibilities from the first step and running the rest of the program on each of them, taking the union of the results.
In other words, if the first step returns three possible answers, the second step needs to be tried for all three.
Because the second step can return any number of answers for each input, taking their union represents the entire search space.
```anchor bind (module := Examples.Monads.Many)
def Many.bind : Many α → (α → Many β) → Many β
| Many.none, _ =>
Many.none
| Many.more x xs, f =>
(f x).union (bind (xs ()) f)
```
{anchorName MonadMany (module:=Examples.Monads.Many)}`Many.one` and {anchorName MonadMany (module:=Examples.Monads.Many)}`Many.bind` obey the monad contract.
To check that {anchorTerm bindLeft (module:=Examples.Monads.Many)}`Many.bind (Many.one v) f` is the same as {anchorTerm bindLeft (module:=Examples.Monads.Many)}`f v`, start by evaluating the expression as far as possible:
```anchorEvalSteps bindLeft (module := Examples.Monads.Many)
Many.bind (Many.one v) f
===>
Many.bind (Many.more v (fun () => Many.none)) f
===>
(f v).union (Many.bind Many.none f)
===>
(f v).union Many.none
```
The empty multiset is a right identity of {anchorName union (module:=Examples.Monads.Many)}`union`, so the answer is equivalent to {anchorTerm bindLeft (module:=Examples.Monads.Many)}`f v`.
To check that {anchorTerm bindOne (module:=Examples.Monads.Many)}`Many.bind v Many.one` is the same as {anchorName bindOne (module:=Examples.Monads.Many)}`v`, consider that {anchorName bindOne (module:=Examples.Monads.Many)}`Many.bind` takes the union of applying {anchorName one (module:=Examples.Monads.Many)}`Many.one` to each element of {anchorName bindOne (module:=Examples.Monads.Many)}`v`.
In other words, if {anchorName bindOne (module:=Examples.Monads.Many)}`v` has the form {anchorTerm vSet (module:=Examples.Monads.Many)}`{v₁, v₂, v₃, …, vₙ}`, then {anchorTerm bindOne (module:=Examples.Monads.Many)}`Many.bind v Many.one` is {anchorTerm vSets (module:=Examples.Monads.Many)}`{v₁} ∪ {v₂} ∪ {v₃} ∪ … ∪ {vₙ}`, which is {anchorTerm vSet (module:=Examples.Monads.Many)}`{v₁, v₂, v₃, …, vₙ}`.
Finally, to check that {anchorName bind (module:=Examples.Monads.Many)}`Many.bind` is associative, check that {anchorTerm bindBindLeft (module:=Examples.Monads.Many)}`Many.bind (Many.bind v f) g` is the same as {anchorTerm bindBindRight (module:=Examples.Monads.Many)}`Many.bind v (fun x => Many.bind (f x) g)`.
If {anchorName bindBindRight (module:=Examples.Monads.Many)}`v` has the form {anchorTerm vSet (module:=Examples.Monads.Many)}`{v₁, v₂, v₃, …, vₙ}`, then:
```anchorEvalSteps bindUnion (module := Examples.Monads.Many)
Many.bind v f
===>
f v₁ ∪ f v₂ ∪ f v₃ ∪ … ∪ f vₙ
```
which means that
```anchorEvalSteps bindBindLeft (module := Examples.Monads.Many)
Many.bind (Many.bind v f) g
===>
Many.bind (f v₁) g ∪
Many.bind (f v₂) g ∪
Many.bind (f v₃) g ∪
… ∪
Many.bind (f vₙ) g
```
Similarly,
```anchorEvalSteps bindBindRight (module := Examples.Monads.Many)
Many.bind v (fun x => Many.bind (f x) g)
===>
(fun x => Many.bind (f x) g) v₁ ∪
(fun x => Many.bind (f x) g) v₂ ∪
(fun x => Many.bind (f x) g) v₃ ∪
… ∪
(fun x => Many.bind (f x) g) vₙ
===>
Many.bind (f v₁) g ∪
Many.bind (f v₂) g ∪
Many.bind (f v₃) g ∪
… ∪
Many.bind (f vₙ) g
```
Thus, both sides are equal, so {anchorName bindAssoc (module:=Examples.Monads.Many)}`Many.bind` is associative.
The resulting monad instance is:
```anchor MonadMany (module := Examples.Monads.Many)
instance : Monad Many where
pure := Many.one
bind := Many.bind
```
An example search using this monad finds all the combinations of numbers in a list that add to 15:
```anchor addsTo (module := Examples.Monads.Many)
def addsTo (goal : Nat) : List Nat → Many (List Nat)
| [] =>
if goal == 0 then
pure []
else
Many.none
| x :: xs =>
if x > goal then
addsTo goal xs
else
(addsTo goal xs).union
(addsTo (goal - x) xs >>= fun answer =>
pure (x :: answer))
```
The search process is recursive over the list.
The empty list is a successful search when the goal is {anchorTerm addsTo (module:=Examples.Monads.Many)}`0`; otherwise, it fails.
When the list is non-empty, there are two possibilities: either the head of the list is greater than the goal, in which case it cannot participate in any successful searches, or it is not, in which case it can.
If the head of the list is _not_ a candidate, then the search proceeds to the tail of the list.
If the head is a candidate, then there are two possibilities to be combined with {anchorName union (module:=Examples.Monads.Many)}`Many.union`: either the solutions found contain the head, or they do not.
The solutions that do not contain the head are found with a recursive call on the tail, while the solutions that do contain it result from subtracting the head from the goal, and then attaching the head to the solutions that result from the recursive call.
The helper {anchorName printList (module:=Examples.Monads.Many)}`printList` ensures that one result is displayed per line:
```anchor printList (module := Examples.Monads.Many)
def printList [ToString α] : List α → IO Unit
| [] => pure ()
| x :: xs => do
IO.println x
printList xs
```
```anchor addsToFifteen (module := Examples.Monads.Many)
#eval printList (addsTo 15 [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]).takeAll
```
```anchorInfo addsToFifteen (module := Examples.Monads.Many)
[7, 8]
[6, 9]
[5, 10]
[4, 5, 6]
[3, 5, 7]
[3, 4, 8]
[2, 6, 7]
[2, 5, 8]
[2, 4, 9]
[2, 3, 10]
[2, 3, 4, 6]
[1, 6, 8]
[1, 5, 9]
[1, 4, 10]
[1, 3, 5, 6]
[1, 3, 4, 7]
[1, 2, 5, 7]
[1, 2, 4, 8]
[1, 2, 3, 9]
[1, 2, 3, 4, 5]
```
:::paragraph
Returning to the arithmetic evaluator that produces multisets of results, the {anchorName NeedsSearch}`choose` operator can be used to nondeterministically select a value, with division by zero rendering prior selections invalid.
```anchor NeedsSearch
inductive NeedsSearch
| div
| choose
def applySearch : NeedsSearch → Int → Int → Many Int
| NeedsSearch.choose, x, y =>
Many.fromList [x, y]
| NeedsSearch.div, x, y =>
if y == 0 then
Many.none
else Many.one (x / y)
```
:::
:::paragraph
Using these operators, the earlier examples can be evaluated:
```anchor opening
open Expr Prim NeedsSearch
```
```anchor searchA
#eval
(evaluateM applySearch
(prim plus (const 1)
(prim (other choose) (const 2)
(const 5)))).takeAll
```
```anchorInfo searchA
[3, 6]
```
```anchor searchB
#eval
(evaluateM applySearch
(prim plus (const 1)
(prim (other div) (const 2)
(const 0)))).takeAll
```
```anchorInfo searchB
[]
```
```anchor searchC
#eval
(evaluateM applySearch
(prim (other div) (const 90)
(prim plus (prim (other choose) (const (-5)) (const 5))
(const 5)))).takeAll
```
```anchorInfo searchC
[9]
```
:::
## Custom Environments
%%%
tag := "custom-environments"
%%%
The evaluator can be made user-extensible by allowing strings to be used as operators, and then providing a mapping from strings to a function that implements them.
For example, users could extend the evaluator with a remainder operator or with one that returns the maximum of its two arguments.
The mapping from function names to function implementations is called an _environment_.
The environment needs to be passed in each recursive call.
Initially, it might seem that {anchorName evaluateM}`evaluateM` needs an extra argument to hold the environment, and that this argument should be passed to each recursive invocation.
However, passing an argument like this is another form of monad, so an appropriate {anchorName evaluateM}`Monad` instance allows the evaluator to be used unchanged.
Using functions as a monad is typically called a _reader_ monad.
When evaluating expressions in the reader monad, the following rules are used:
* Constants $`n` evaluate to constant functions $`λ e . n`,
* Arithmetic operators evaluate to functions that pass their arguments on, so $`f + g` evaluates to $`λ e . f(e) + g(e)`, and
* Custom operators evaluate to the result of applying the custom operator to the arguments, so $`f \ \mathrm{OP}\ g` evaluates to
$$`
λ e .
\begin{cases}
h(f(e), g(e)) & \mathrm{if}\ e\ \mathrm{contains}\ (\mathrm{OP}, h) \\
0 & \mathrm{otherwise}
\end{cases}
`
with $`0` serving as a fallback in case an unknown operator is applied.
:::paragraph
To define the reader monad in Lean, the first step is to define the {anchorName Reader}`Reader` type and the effect that allows users to get ahold of the environment:
```anchor Reader
def Reader (ρ : Type) (α : Type) : Type := ρ → α
def read : Reader ρ ρ := fun env => env
```
By convention, the Greek letter {anchorName Reader}`ρ`, which is pronounced “rho”, is used for environments.
:::
:::paragraph
The fact that constants in arithmetic expressions evaluate to constant functions suggests that the appropriate definition of {anchorName IdMonad}`pure` for {anchorName Reader}`Reader` is a constant function:
```anchor ReaderPure
def Reader.pure (x : α) : Reader ρ α := fun _ => x
```
:::
On the other hand, {anchorName MonadContract}`bind` is a bit trickier.
Its type is {anchorTerm readerBindType}`Reader ρ α → (α → Reader ρ β) → Reader ρ β`.
This type can be easier to understand by unfolding the definition of {anchorName Reader}`Reader`, which yields {anchorTerm readerBindTypeEval}`(ρ → α) → (α → ρ → β) → (ρ → β)`.
It should take an environment-accepting function as its first argument, while the second argument should transform the result of the environment-accepting function into yet another environment-accepting function.
The result of combining these is itself a function, waiting for an environment.
It's possible to use Lean interactively to get help writing this function.
The first step is to write down the arguments and return type, being very explicit in order to get as much help as possible, with an underscore for the definition's body:
```anchor readerbind0
def Reader.bind {ρ : Type} {α : Type} {β : Type}
(result : ρ → α) (next : α → ρ → β) : ρ → β :=
_
```
Lean provides a message that describes which variables are available in scope, and the type that's expected for the result.
The {lit}`⊢` symbol, called a {deftech}_turnstile_ due to its resemblance to subway entrances, separates the local variables from the desired type, which is {anchorTerm readerbind0}`ρ → β` in this message:
```anchorError readerbind0
don't know how to synthesize placeholder
context:
ρ α β : Type
result : ρ → α
next : α → ρ → β
⊢ ρ → β
```
Because the return type is a function, a good first step is to wrap a {kw}`fun` around the underscore:
```anchor readerbind1
def Reader.bind {ρ : Type} {α : Type} {β : Type}
(result : ρ → α) (next : α → ρ → β) : ρ → β :=
fun env => _
```
The resulting message now shows the function's argument as a local variable:
```anchorError readerbind1
don't know how to synthesize placeholder
context:
ρ α β : Type
result : ρ → α
next : α → ρ → β
env : ρ
⊢ β
```
The only thing in the context that can produce a {anchorName readerbind2a}`β` is {anchorName readerbind2a}`next`, and it will require two arguments to do so.
Each argument can itself be an underscore:
```anchor readerbind2a
def Reader.bind {ρ : Type} {α : Type} {β : Type}
(result : ρ → α) (next : α → ρ → β) : ρ → β :=
fun env => next _ _
```
The two underscores have the following respective messages associated with them:
```anchorError readerbind2a
don't know how to synthesize placeholder
context:
ρ α β : Type
result : ρ → α
next : α → ρ → β
env : ρ
⊢ α
```
```anchorError readerbind2b
don't know how to synthesize placeholder
context:
ρ α β : Type
result : ρ → α
next : α → ρ → β
env : ρ
⊢ ρ
```
:::paragraph
Attacking the first underscore, only one thing in the context can produce an {anchorName readerbind3}`α`, namely {anchorName readerbind3}`result`:
```anchor readerbind3
def Reader.bind {ρ : Type} {α : Type} {β : Type}
(result : ρ → α) (next : α → ρ → β) : ρ → β :=
fun env => next (result _) _
```
Now, both underscores have the same error message:
```anchorError readerbind3
don't know how to synthesize placeholder
context:
ρ α β : Type
result : ρ → α
next : α → ρ → β
env : ρ
⊢ ρ
```
:::
:::paragraph
Happily, both underscores can be replaced by {anchorName readerbind4}`env`, yielding:
```anchor readerbind4
def Reader.bind {ρ : Type} {α : Type} {β : Type}
(result : ρ → α) (next : α → ρ → β) : ρ → β :=
fun env => next (result env) env
```
:::
The final version can be obtained by undoing the unfolding of {anchorName Readerbind}`Reader` and cleaning up the explicit details:
```anchor Readerbind
def Reader.bind
(result : Reader ρ α)
(next : α → Reader ρ β) : Reader ρ β :=
fun env => next (result env) env
```
It's not always possible to write correct functions by simply “following the types”, and it carries the risk of not understanding the resulting program.
However, it can also be easier to understand a program that has been written than one that has not, and the process of filling in the underscores can bring insights.
In this case, {anchorName Readerbind}`Reader.bind` works just like {anchorName IdMonad}`bind` for {anchorName IdMonad}`Id`, except it accepts an additional argument that it then passes down to its arguments, and this intuition can help in understanding how it works.
{anchorName ReaderPure}`Reader.pure` (which generates constant functions) and {anchorName Readerbind}`Reader.bind` obey the monad contract.
To check that {anchorTerm ReaderMonad1}`Reader.bind (Reader.pure v) f` is the same as {anchorTerm ReaderMonad1}`f v`, it's enough to replace definitions until the last step:
```anchorEvalSteps ReaderMonad1
Reader.bind (Reader.pure v) f
===>
fun env => f ((Reader.pure v) env) env
===>
fun env => f ((fun _ => v) env) env
===>
fun env => f v env
===>
f v
```
For every function {anchorName eta}`f`, {anchorTerm eta}`fun x => f x` is the same as {anchorName eta}`f`, so the first part of the contract is satisfied.
To check that {anchorTerm ReaderMonad2}`Reader.bind r Reader.pure` is the same as {anchorName ReaderMonad2}`r`, a similar technique works:
```anchorEvalSteps ReaderMonad2
Reader.bind r Reader.pure
===>
fun env => Reader.pure (r env) env
===>
fun env => (fun _ => (r env)) env
===>
fun env => r env
```
Because reader actions {anchorName ReaderMonad2}`r` are themselves functions, this is the same as {anchorName ReaderMonad2}`r`.
To check associativity, the same thing can be done for both {anchorEvalStep ReaderMonad3a 0}`Reader.bind (Reader.bind r f) g` and {anchorEvalStep ReaderMonad3b 0}`Reader.bind r (fun x => Reader.bind (f x) g)`:
```anchorEvalSteps ReaderMonad3a
Reader.bind (Reader.bind r f) g
===>
fun env => g ((Reader.bind r f) env) env
===>
fun env => g ((fun env' => f (r env') env') env) env
===>
fun env => g (f (r env) env) env
```
{anchorEvalStep ReaderMonad3b 0}`Reader.bind r (fun x => Reader.bind (f x) g)` reduces to the same expression:
```anchorEvalSteps ReaderMonad3b
Reader.bind r (fun x => Reader.bind (f x) g)
===>
Reader.bind r (fun x => fun env => g (f x env) env)
===>
fun env => (fun x => fun env' => g (f x env') env') (r env) env
===>
fun env => (fun env' => g (f (r env) env') env') env
===>
fun env => g (f (r env) env) env
```
Thus, a {anchorTerm MonadReaderInst}`Monad (Reader ρ)` instance is justified:
```anchor MonadReaderInst
instance : Monad (Reader ρ) where
pure x := fun _ => x
bind x f := fun env => f (x env) env
```
The custom environments that will be passed to the expression evaluator can be represented as lists of pairs:
```anchor Env
abbrev Env : Type := List (String × (Int → Int → Int))
```
For instance, {anchorName exampleEnv}`exampleEnv` contains maximum and modulus functions:
```anchor exampleEnv
def exampleEnv : Env := [("max", max), ("mod", (· % ·))]
```
Lean already has a function {anchorName etc}`List.lookup` that finds the value associated with a key in a list of pairs, so {anchorName applyPrimReader}`applyPrimReader` only needs to check whether the custom function is present in the environment. It returns {anchorTerm applyPrimReader}`0` if the function is unknown:
```anchor applyPrimReader
def applyPrimReader (op : String) (x : Int) (y : Int) : Reader Env Int :=
read >>= fun env =>
match env.lookup op with
| none => pure 0
| some f => pure (f x y)
```
Using {anchorName readerEval}`evaluateM` with {anchorName readerEval}`applyPrimReader` and an expression results in a function that expects an environment.
Luckily, {anchorName readerEval}`exampleEnv` is available:
```anchor readerEval
open Expr Prim in
#eval
evaluateM applyPrimReader
(prim (other "max") (prim plus (const 5) (const 4))
(prim times (const 3)
(const 2)))
exampleEnv
```
```anchorInfo readerEval
9
```
Like {anchorName Many (module:=Examples.Monads.Many)}`Many`, {anchorName Reader}`Reader` is an example of an effect that is difficult to encode in most languages, but type classes and monads make it just as convenient as any other effect.
The dynamic or special variables found in Common Lisp, Clojure, and Emacs Lisp can be used like {anchorName Reader}`Reader`.
Similarly, Scheme and Racket's parameter objects are an effect that corresponds exactly to {anchorName Reader}`Reader`.
The Kotlin idiom of context objects can solve a similar problem, but they are fundamentally a means of passing function arguments automatically, so this idiom is more like the encoding as a reader monad than it is an effect in the language.
## Exercises
%%%
tag := "monads-arithmetic-example-exercises"
%%%
### Checking Contracts
%%%
tag := none
%%%
Check the monad contract for {anchorTerm StateMonad}`State σ` and {anchorTerm MonadOptionExcept}`Except ε`.
### Readers with Failure
%%%
tag := none
%%%
Adapt the reader monad example so that it can also indicate failure when the custom operator is not defined, rather than just returning zero.
In other words, given these definitions:
```anchor ReaderFail
def ReaderOption (ρ : Type) (α : Type) : Type := ρ → Option α
def ReaderExcept (ε : Type) (ρ : Type) (α : Type) : Type := ρ → Except ε α
```
do the following:
1. Write suitable {lit}`pure` and {lit}`bind` functions
2. Check that these functions satisfy the {anchorName evaluateM}`Monad` contract
3. Write {anchorName evaluateM}`Monad` instances for {anchorName ReaderFail}`ReaderOption` and {anchorName ReaderFail}`ReaderExcept`
4. Define suitable {anchorName evaluateM}`applyPrim` operators and test them with {anchorName evaluateM}`evaluateM` on some example expressions
### A Tracing Evaluator
%%%
tag := "monads-arithmetic-example-exercise-trace"
%%%
The {anchorName MonadWriter}`WithLog` type can be used with the evaluator to add optional tracing of some operations.
In particular, the type {anchorName ToTrace}`ToTrace` can serve as a signal to trace a given operator:
```anchor ToTrace
inductive ToTrace (α : Type) : Type where
| trace : α → ToTrace α
```
For the tracing evaluator, expressions should have type {anchorTerm ToTraceExpr}`Expr (Prim (ToTrace (Prim Empty)))`.
This says that the operators in the expression consist of addition, subtraction, and multiplication, augmented with traced versions of each. The innermost argument is {anchorName ToTraceExpr}`Empty` to signal that there are no further special operators inside of {anchorName ToTrace}`trace`, only the three basic ones.
Do the following:
1. Implement a {anchorTerm MonadWriter}`Monad (WithLog logged)` instance
2. Write an {anchorName applyTracedType}`applyTraced` function to apply traced operators to their arguments, logging both the operator and the arguments, with type {anchorTerm applyTracedType}`ToTrace (Prim Empty) → Int → Int → WithLog (Prim Empty × Int × Int) Int`
If the exercise has been completed correctly, then
```anchor evalTraced
open Expr Prim ToTrace in
#eval
evaluateM applyTraced
(prim (other (trace times))
(prim (other (trace plus)) (const 1)
(const 2))
(prim (other (trace minus)) (const 3)
(const 4)))
```
should result in
```anchorInfo evalTraced
{ log := [(Prim.plus, 1, 2), (Prim.minus, 3, 4), (Prim.times, 3, -1)], val := -3 }
```
Hint: values of type {anchorTerm ToTraceExpr}`Prim Empty` will appear in the resulting log. In order to display them as a result of {kw}`#eval`, the following instances are required:
```anchor ReprInstances
deriving instance Repr for WithLog
deriving instance Repr for Empty
deriving instance Repr for Prim
```
fp-lean/book/FPLean/Monads/Conveniences.lean
import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso Code External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Monads.Conveniences"
#doc (Manual) "Additional Conveniences" =>
%%%
tag := "monads-conveniences"
%%%
# Shared Argument Types
%%%
tag := "shared-argument-types"
%%%
When defining a function that takes multiple arguments that have the same type, both can be written before the same colon.
For example,
```anchor equalHuhOld
def equal? [BEq α] (x : α) (y : α) : Option α :=
if x == y then
some x
else
none
```
can be written
```anchor equalHuhNew
def equal? [BEq α] (x y : α) : Option α :=
if x == y then
some x
else
none
```
This is especially useful when the type signature is large.
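As a further illustration, a hypothetical {lit}`clamp` function (not from the examples project) takes three {lit}`Int` arguments, and listing them before a single colon keeps the signature compact:
```lean
-- All three arguments share the type ascription `: Int`
def clamp (lo hi x : Int) : Int :=
  if x < lo then lo
  else if x > hi then hi
  else x
```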
# Leading Dot Notation
%%%
tag := "leading-dot-notation"
%%%
The constructors of an inductive type are in a namespace.
This allows multiple related inductive types to use the same constructor names, but it can lead to programs becoming verbose.
In contexts where the inductive type in question is known, the namespace can be omitted by preceding the constructor's name with a dot, and Lean uses the expected type to resolve the constructor names.
For example, a function that mirrors a binary tree can be written:
```anchor mirrorOld
def BinTree.mirror : BinTree α → BinTree α
| BinTree.leaf => BinTree.leaf
| BinTree.branch l x r => BinTree.branch (mirror r) x (mirror l)
```
Omitting the namespaces makes it significantly shorter, at the cost of making the program harder to read in contexts like code review tools that don't include the Lean compiler:
```anchor mirrorNew
def BinTree.mirror : BinTree α → BinTree α
| .leaf => .leaf
| .branch l x r => .branch (mirror r) x (mirror l)
```
Using the expected type of an expression to disambiguate a namespace is also applicable to names other than constructors.
If {anchorName BinTreeEmpty}`BinTree.empty` is defined as an alternative way of creating {anchorName BinTreeEmpty}`BinTree`s, then it can also be used with dot notation:
```anchor BinTreeEmpty
def BinTree.empty : BinTree α := .leaf
```
```anchor emptyDot
#check (.empty : BinTree Nat)
```
```anchorInfo emptyDot
BinTree.empty : BinTree Nat
```
# Or-Patterns
%%%
tag := "or-patterns"
%%%
In contexts that allow multiple patterns, such as {kw}`match`-expressions, multiple patterns may share their result expressions.
The datatype {anchorName Weekday}`Weekday` represents days of the week:
```anchor Weekday
inductive Weekday where
| monday
| tuesday
| wednesday
| thursday
| friday
| saturday
| sunday
deriving Repr
```
Pattern matching can be used to check whether a day is a weekend:
```anchor isWeekendA
def Weekday.isWeekend (day : Weekday) : Bool :=
match day with
| Weekday.saturday => true
| Weekday.sunday => true
| _ => false
```
This can already be simplified by using constructor dot notation:
```anchor isWeekendB
def Weekday.isWeekend (day : Weekday) : Bool :=
match day with
| .saturday => true
| .sunday => true
| _ => false
```
Because both weekend patterns have the same result expression ({anchorName isWeekendC}`true`), they can be condensed into one:
```anchor isWeekendC
def Weekday.isWeekend (day : Weekday) : Bool :=
match day with
| .saturday | .sunday => true
| _ => false
```
This can be further simplified into a version in which the argument is not named:
```anchor isWeekendD
def Weekday.isWeekend : Weekday → Bool
| .saturday | .sunday => true
| _ => false
```
Behind the scenes, the result expression is simply duplicated across each pattern.
This means that patterns can bind variables, as in this example that removes the {anchorName SumNames}`inl` and {anchorName SumNames}`inr` constructors from a sum type in which both contain the same type of value:
```anchor condense
def condense : α ⊕ α → α
| .inl x | .inr x => x
```
Because the result expression is duplicated, the variables bound by the patterns are not required to have the same types.
Overloaded functions that work for multiple types may be used to write a single result expression that works for patterns that bind variables of different types:
```anchor stringy
def stringy : Nat ⊕ Weekday → String
| .inl x | .inr x => s!"It is {repr x}"
```
In practice, only variables shared in all patterns can be referred to in the result expression, because the result must make sense for each pattern.
In {anchorName getTheNat}`getTheNat`, only {anchorName getTheNat}`n` can be accessed, and attempts to use either {anchorName getTheNat}`x` or {anchorName getTheNat}`y` lead to errors.
```anchor getTheNat
def getTheNat : (Nat × α) ⊕ (Nat × β) → Nat
| .inl (n, x) | .inr (n, y) => n
```
Attempting to access {anchorName getTheAlpha}`x` in a similar definition causes an error because there is no {anchorName getTheAlpha}`x` available in the second pattern:
```anchor getTheAlpha
def getTheAlpha : (Nat × α) ⊕ (Nat × α) → α
| .inl (n, x) | .inr (n, y) => x
```
```anchorError getTheAlpha
Unknown identifier `x`
```
The fact that the result expression is essentially copy-pasted to each branch of the pattern match can lead to some surprising behavior.
For example, the following definitions are acceptable because the {anchorName SumNames}`inr` version of the result expression refers to the global definition of {anchorName getTheString}`str`:
```anchor getTheString
def str := "Some string"
def getTheString : (Nat × String) ⊕ (Nat × β) → String
| .inl (n, str) | .inr (n, y) => str
```
Calling this function on both constructors reveals the confusing behavior.
In the first case, a type annotation is needed to tell Lean which type {anchorName getTheString}`β` should be:
```anchor getOne
#eval getTheString (.inl (20, "twenty") : (Nat × String) ⊕ (Nat × String))
```
```anchorInfo getOne
"twenty"
```
In the second case, the global definition is used:
```anchor getTwo
#eval getTheString (.inr (20, "twenty"))
```
```anchorInfo getTwo
"Some string"
```
Using or-patterns can vastly simplify some definitions and increase their clarity, as in {anchorName isWeekendD}`Weekday.isWeekend`.
Because there is a potential for confusing behavior, it's a good idea to be careful when using them, especially when variables of multiple types or disjoint sets of variables are involved.
fp-lean/book/FPLean/Monads/Summary.lean
import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso Code External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Monads.Class"
#doc (Manual) "Summary" =>
%%%
tag := "monads-summary"
%%%
# Encoding Side Effects
%%%
tag := none
%%%
Lean is a pure functional language.
This means that it does not include side effects such as mutable variables, logging, or exceptions.
However, most side effects can be _encoded_ using a combination of functions and inductive types or structures.
For example, mutable state can be encoded as a function from an initial state to a pair of a final state and a result, and exceptions can be encoded as an inductive type with constructors for successful termination and errors.
Each set of encoded effects is a type.
As a result, if a program uses these encoded effects, then this is apparent in its type.
Functional programming does not mean that programs can't use effects, it simply requires that they be _honest_ about which effects they use.
A Lean type signature describes not only the types of arguments that a function expects and the type of result that it returns, but also which effects it may use.
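The two encodings described above can be sketched directly. These definitions are illustrative stand-ins, not the library's own (Lean's standard library provides {lit}`StateM` and {lit}`Except` for these purposes):
```lean
-- Mutable state: a function from an initial state to a final state and a result
def State (σ : Type) (α : Type) : Type :=
  σ → σ × α

-- Exceptions: either an error or a successful result
inductive Except' (ε : Type) (α : Type) where
  | error : ε → Except' ε α
  | ok : α → Except' ε α
```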
# The Monad Type Class
%%%
tag := none
%%%
It's possible to write purely functional programs in languages that allow effects anywhere.
For example, {python}`2 + 3` is a valid Python program that has no effects at all.
Similarly, combining programs that have effects requires a way to state the order in which the effects must occur.
It matters whether an exception is thrown before or after modifying a variable, after all.
The type class {anchorName FakeMonad}`Monad` captures these two important properties.
It has two methods: {anchorName FakeMonad}`pure` represents programs that have no effects, and {anchorName FakeMonad}`bind` sequences effectful programs.
The contract for {anchorName FakeMonad}`Monad` instances ensures that {anchorName FakeMonad}`bind` and {anchorName FakeMonad}`pure` actually capture pure computation and sequencing.
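As a reminder of what such an instance looks like, here is a sketch of the {lit}`Monad Option` instance, in which {lit}`none` aborts the rest of the computation:
```
instance : Monad Option where
  pure x := some x
  bind opt next :=
    match opt with
    | none => none
    | some x => next x
```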
# {kw}`do`-Notation for Monads
%%%
tag := none
%%%
Rather than being limited to {moduleName}`IO`, {kw}`do`-notation works for any monad.
It allows programs that use monads to be written in a style that is reminiscent of statement-oriented languages, with statements sequenced after one another.
Additionally, {kw}`do`-notation enables a number of additional convenient shorthands, such as nested actions.
A program written with {kw}`do` is translated to applications of {lit}`>>=` behind the scenes.
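For instance, this {kw}`do` block over {lit}`Option` (a sketch, not one of this book's anchored examples):
```
def firstTwo (xs : List Nat) : Option (Nat × Nat) := do
  let x ← xs[0]?
  let y ← xs[1]?
  pure (x, y)
```
is equivalent to {lit}`xs[0]? >>= fun x => xs[1]? >>= fun y => pure (x, y)`.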
# Custom Monads
%%%
tag := none
%%%
Different languages provide different sets of side effects.
While most languages feature mutable variables and file I/O, not all have features like exceptions.
Other languages offer effects that are rare or unique, like Icon's search-based program execution, Scheme and Ruby's continuations, and Common Lisp's resumable exceptions.
An advantage of encoding effects with monads is that programs are not limited to the set of effects that are provided by the language.
Because Lean is designed to make programming with any monad convenient, programmers are free to choose exactly the set of side effects that make sense for any given application.
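As a sketch of this freedom, a hypothetical logging effect can be encoded as a pair of accumulated log messages and a result. The names here are illustrative, not part of any library:
```
-- A computation that produces a result together with a log
def Logged (α : Type) : Type := List String × α

instance : Monad Logged where
  pure x := ([], x)
  bind result next :=
    let (log, x) := result
    let (log', y) := next x
    (log ++ log', y)

-- An action that records a message and returns Unit
def save (msg : String) : Logged Unit := ([msg], ())
```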
# The {lit}`IO` Monad
%%%
tag := none
%%%
Programs that can affect the real world are written as {moduleName}`IO` actions in Lean.
{moduleName}`IO` is one monad among many.
The {moduleName}`IO` monad encodes state and exceptions, with the state being used to keep track of the state of the world and the exceptions modeling failure and recovery. |
fp-lean/book/FPLean/Monads/Do.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Monads.Do"
#doc (Manual) "{kw}`do`-Notation for Monads" =>
%%%
tag := "monad-do-notation"
%%%
While APIs based on monads are very powerful, the explicit use of {lit}`>>=` with anonymous functions is still somewhat noisy.
Just as infix operators are used instead of explicit calls to {anchorName names}`HAdd.hAdd`, Lean provides a syntax for monads called _{kw}`do`-notation_ that can make programs that use monads easier to read and write.
This is the very same {kw}`do`-notation that is used to write programs in {anchorName names}`IO`, and {anchorName names}`IO` is also a monad.
In {ref "hello-world"}[Hello, World!], the {kw}`do` syntax is used to combine {anchorName names}`IO` actions, but the meaning of these programs is explained directly.
Understanding how to program with monads means that {kw}`do` can now be explained in terms of how it translates into uses of the underlying monad operators.
The first translation of {kw}`do` is used when the only statement in the {kw}`do` is a single expression {anchorName doSugar1a}`E`.
In this case, the {kw}`do` is removed, so
```anchor doSugar1a
do E
```
translates to
```anchor doSugar1b
E
```
The second translation is used when the first statement of the {kw}`do` is a {kw}`let` with an arrow, binding a local variable.
This translates to a use of {lit}`>>=` together with a function that binds that very same variable, so
```anchor doSugar2a
do let x ← E₁
Stmt
…
Eₙ
```
translates to
```anchor doSugar2b
E₁ >>= fun x =>
do Stmt
…
Eₙ
```
When the first statement of the {kw}`do` block is an expression, then it is considered to be a monadic action that returns {anchorName names}`Unit`, so the function matches the {anchorName names}`Unit` constructor and
```anchor doSugar3a
do E₁
Stmt
…
Eₙ
```
translates to
```anchor doSugar3b
E₁ >>= fun () =>
do Stmt
…
Eₙ
```
Finally, when the first statement of the {kw}`do` block is a {kw}`let` that uses {lit}`:=`, the translated form is an ordinary let expression, so
```anchor doSugar4a
do let x := E₁
Stmt
…
Eₙ
```
translates to
```anchor doSugar4b
let x := E₁
do Stmt
…
Eₙ
```
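Applying the second and fourth rules to a short {lit}`Option` program (a made-up example, not from the book's example files) shows how the translations compose:
```
example : Option Nat := do
  let x ← some 1
  let y := x + 1
  pure (x + y)

-- After translation:
example : Option Nat :=
  some 1 >>= fun x =>
    let y := x + 1
    pure (x + y)
```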
:::paragraph
The definition of {anchorName firstThirdFifthSeventhMonad (module := Examples.Monads.Class)}`firstThirdFifthSeventh` that uses the {anchorName firstThirdFifthSeventhMonad (module := Examples.Monads.Class)}`Monad` class looks like this:
```anchor firstThirdFifthSeventhMonad (module := Examples.Monads.Class)
def firstThirdFifthSeventh [Monad m] (lookup : List α → Nat → m α)
(xs : List α) : m (α × α × α × α) :=
lookup xs 0 >>= fun first =>
lookup xs 2 >>= fun third =>
lookup xs 4 >>= fun fifth =>
lookup xs 6 >>= fun seventh =>
pure (first, third, fifth, seventh)
```
Using {kw}`do`-notation, it becomes significantly more readable:
```anchor firstThirdFifthSeventhDo
def firstThirdFifthSeventh [Monad m] (lookup : List α → Nat → m α)
(xs : List α) : m (α × α × α × α) := do
let first ← lookup xs 0
let third ← lookup xs 2
let fifth ← lookup xs 4
let seventh ← lookup xs 6
pure (first, third, fifth, seventh)
```
:::
:::paragraph
Without the {anchorName mapM}`Monad` type class, the function {anchorName numberMonadicish (module := Examples.Monads)}`number` that numbers the nodes of a tree was written:
```anchor numberMonadicish (module := Examples.Monads)
def number (t : BinTree α) : BinTree (Nat × α) :=
let rec helper : BinTree α → State Nat (BinTree (Nat × α))
| BinTree.leaf => ok BinTree.leaf
| BinTree.branch left x right =>
helper left ~~> fun numberedLeft =>
get ~~> fun n =>
set (n + 1) ~~> fun () =>
helper right ~~> fun numberedRight =>
ok (BinTree.branch numberedLeft (n, x) numberedRight)
(helper t 0).snd
```
With {anchorName mapM}`Monad` and {kw}`do`, its definition is much less noisy:
```anchor numberDo
def number (t : BinTree α) : BinTree (Nat × α) :=
let rec helper : BinTree α → State Nat (BinTree (Nat × α))
| BinTree.leaf => pure BinTree.leaf
| BinTree.branch left x right => do
let numberedLeft ← helper left
let n ← get
set (n + 1)
let numberedRight ← helper right
pure (BinTree.branch numberedLeft (n, x) numberedRight)
(helper t 0).snd
```
:::
All of the conveniences from {kw}`do` with {anchorName names}`IO` are also available when using it with other monads.
For example, nested actions also work in any monad.
The original definition of {anchorName mapM (module:=Examples.Monads.Class)}`mapM` was:
```anchor mapM (module := Examples.Monads.Class)
def mapM [Monad m] (f : α → m β) : List α → m (List β)
| [] => pure []
| x :: xs =>
f x >>= fun hd =>
mapM f xs >>= fun tl =>
pure (hd :: tl)
```
With {kw}`do`-notation, it can be written:
```anchor mapM
def mapM [Monad m] (f : α → m β) : List α → m (List β)
| [] => pure []
| x :: xs => do
let hd ← f x
let tl ← mapM f xs
pure (hd :: tl)
```
Using nested actions makes it almost as short as the original non-monadic {anchorName names}`map`:
```anchor mapMNested
def mapM [Monad m] (f : α → m β) : List α → m (List β)
| [] => pure []
| x :: xs => do
pure ((← f x) :: (← mapM f xs))
```
Using nested actions, {anchorName numberDoShort}`number` can be made much more concise:
```anchor numberDoShort
def increment : State Nat Nat := do
let n ← get
set (n + 1)
pure n
def number (t : BinTree α) : BinTree (Nat × α) :=
let rec helper : BinTree α → State Nat (BinTree (Nat × α))
| BinTree.leaf => pure BinTree.leaf
| BinTree.branch left x right => do
pure
(BinTree.branch
(← helper left)
((← increment), x)
(← helper right))
(helper t 0).snd
```
# Exercises
%%%
tag := "monad-do-notation-exercises"
%%%
* Rewrite {anchorName evaluateM (module:=Examples.Monads.Class)}`evaluateM`, its helpers, and the different specific use cases using {kw}`do`-notation instead of explicit calls to {lit}`>>=`.
* Rewrite {anchorName firstThirdFifthSeventhDo}`firstThirdFifthSeventh` using nested actions. |
fp-lean/book/FPLean/GettingToKnow/Conveniences.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Intro"
#doc (Manual) "Additional Conveniences" =>
%%%
tag := "getting-to-know-conveniences"
%%%
Lean contains a number of convenience features that make programs much more concise.
# Automatic Implicit Parameters
%%%
tag := "automatic-implicit-parameters"
%%%
:::paragraph
When writing polymorphic functions in Lean, it is typically not necessary to list all the implicit parameters.
Instead, they can simply be mentioned.
If Lean can determine their type, then they are automatically inserted as implicit parameters.
In other words, the previous definition of {anchorName lengthImp}`length`:
```anchor lengthImp
def length {α : Type} (xs : List α) : Nat :=
match xs with
| [] => 0
| y :: ys => Nat.succ (length ys)
```
can be written without {anchorTerm lengthImp}`{α : Type}`:
```anchor lengthImpAuto
def length (xs : List α) : Nat :=
match xs with
| [] => 0
| y :: ys => Nat.succ (length ys)
```
This can greatly simplify highly polymorphic definitions that take many implicit parameters.
:::
# Pattern-Matching Definitions
%%%
tag := "pattern-matching-definitions"
%%%
When defining functions with {kw}`def`, it is quite common to name an argument and then immediately use it with pattern matching.
For instance, in {anchorName lengthImpAuto}`length`, the argument {anchorName lengthImpAuto}`xs` is used only in {kw}`match`.
In these situations, the cases of the {kw}`match` expression can be written directly, without naming the argument at all.
:::paragraph
The first step is to move the arguments' types to the right of the colon, so the return type is a function type.
For instance, the type of {anchorName lengthMatchDef}`length` is {anchorTerm lengthMatchDef}`List α → Nat`.
Then, replace the {lit}`:=` with each case of the pattern match:
```anchor lengthMatchDef
def length : List α → Nat
| [] => 0
| y :: ys => Nat.succ (length ys)
```
This syntax can also be used to define functions that take more than one argument.
In this case, their patterns are separated by commas.
For instance, {anchorName drop}`drop` takes a number $`n` and a list, and returns the list after removing the first $`n` entries.
```anchor drop
def drop : Nat → List α → List α
| Nat.zero, xs => xs
| _, [] => []
| Nat.succ n, x :: xs => drop n xs
```
:::
:::paragraph
Named arguments and patterns can also be used in the same definition.
For instance, a function that takes a default value and an optional value, and returns the default when the optional value is {anchorName fromOption}`none`, can be written:
```anchor fromOption
def fromOption (default : α) : Option α → α
| none => default
| some x => x
```
This function is called {anchorTerm fragments}`Option.getD` in the standard library, and can be called with dot notation:
```anchor getD
#eval (some "salmonberry").getD ""
```
```anchorInfo getD
"salmonberry"
```
```anchor getDNone
#eval none.getD ""
```
```anchorInfo getDNone
""
```
:::
# Local Definitions
%%%
tag := "local-definitions"
%%%
It is often useful to name intermediate steps in a computation.
In many cases, intermediate values represent useful concepts all on their own, and naming them explicitly can make the program easier to read.
In other cases, the intermediate value is used more than once.
As in most other languages, writing down the same code twice in Lean causes it to be computed twice, while saving the result in a variable leads to the result of the computation being saved and re-used.
:::paragraph
For instance, {anchorName unzipBad}`unzip` is a function that transforms a list of pairs into a pair of lists.
When the list of pairs is empty, then the result of {anchorName unzipBad}`unzip` is a pair of empty lists.
When the list of pairs has a pair at its head, then the two fields of the pair are added to the result of unzipping the rest of the list.
This definition of {anchorName unzipBad}`unzip` follows that description exactly:
```anchor unzipBad
def unzip : List (α × β) → List α × List β
| [] => ([], [])
| (x, y) :: xys =>
(x :: (unzip xys).fst, y :: (unzip xys).snd)
```
Unfortunately, there is a problem: this code is slower than it needs to be.
Each entry in the list of pairs leads to two recursive calls, which makes this function take exponential time.
However, both recursive calls will have the same result, so there is no reason to make the recursive call twice.
:::
:::paragraph
In Lean, the result of the recursive call can be named, and thus saved, using {kw}`let`.
Local definitions with {kw}`let` resemble top-level definitions with {kw}`def`: they take a name to be locally defined, arguments if desired, a type signature, and then a body following {lit}`:=`.
After the local definition, the expression in which the local definition is available (called the _body_ of the {kw}`let`-expression) must be on a new line, starting at a column in the file that is less than or equal to that of the {kw}`let` keyword.
A local definition with {kw}`let` in {anchorName unzip}`unzip` looks like this:
```anchor unzip
def unzip : List (α × β) → List α × List β
| [] => ([], [])
| (x, y) :: xys =>
let unzipped : List α × List β := unzip xys
(x :: unzipped.fst, y :: unzipped.snd)
```
To use {kw}`let` on a single line, separate the local definition from the body with a semicolon.
:::
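For example, this hypothetical definition places a {kw}`let` and its body on a single line, separated by a semicolon:
```
def doubleOf (x : Nat) : Nat :=
  let y := x + x; y
```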
:::paragraph
Local definitions with {kw}`let` may also use pattern matching when one pattern is enough to match all cases of a datatype.
In the case of {anchorName unzip}`unzip`, the result of the recursive call is a pair.
Because pairs have only a single constructor, the name {anchorName unzip}`unzipped` can be replaced with a pair pattern:
```anchor unzipPat
def unzip : List (α × β) → List α × List β
| [] => ([], [])
| (x, y) :: xys =>
let (xs, ys) : List α × List β := unzip xys
(x :: xs, y :: ys)
```
Judicious use of patterns with {kw}`let` can make code easier to read, compared to writing the accessor calls by hand.
:::
:::paragraph
The biggest difference between {kw}`let` and {kw}`def` is that recursive {kw}`let` definitions must be explicitly indicated by writing {kw}`let rec`.
For instance, one way to reverse a list involves a recursive helper function, as in this definition:
```anchor reverse
def reverse (xs : List α) : List α :=
let rec helper : List α → List α → List α
| [], soFar => soFar
| y :: ys, soFar => helper ys (y :: soFar)
helper xs []
```
The helper function walks down the input list, moving one entry at a time over to {anchorName reverse}`soFar`.
When it reaches the end of the input list, {anchorName reverse}`soFar` contains a reversed version of the input.
:::
# Type Inference
%%%
tag := "type-inference"
%%%
:::paragraph
In many situations, Lean can automatically determine an expression's type.
In these cases, explicit types may be omitted from both top-level definitions (with {kw}`def`) and local definitions (with {kw}`let`).
For example, the recursive call to {anchorName unzipNT}`unzip` does not need an annotation:
```anchor unzipNT
def unzip : List (α × β) → List α × List β
| [] => ([], [])
| (x, y) :: xys =>
let unzipped := unzip xys
(x :: unzipped.fst, y :: unzipped.snd)
```
:::
As a rule of thumb, omitting the types of literal values (like strings and numbers) usually works, although Lean may pick a type for literal numbers that is more specific than the intended type.
Lean can usually determine a type for a function application, because it already knows the argument types and the return type.
Omitting return types for function definitions will often work, but function parameters typically require annotations.
Definitions that are not functions, like {anchorName unzipNT}`unzipped` in the example, do not need type annotations when their bodies do not; the body of this definition is a function application.
:::paragraph
Omitting the return type for {anchorName unzipNRT}`unzip` is possible when using an explicit {kw}`match` expression:
```anchor unzipNRT
def unzip (pairs : List (α × β)) :=
match pairs with
| [] => ([], [])
| (x, y) :: xys =>
let unzipped := unzip xys
(x :: unzipped.fst, y :: unzipped.snd)
```
:::
:::paragraph
Generally speaking, it is a good idea to err on the side of too many, rather than too few, type annotations.
First off, explicit types communicate assumptions about the code to readers.
Even if Lean can determine the type on its own, it can still be easier to read code without having to repeatedly query Lean for type information.
Secondly, explicit types help localize errors.
The more explicit a program is about its types, the more informative the error messages can be.
This is especially important in a language like Lean that has a very expressive type system.
Thirdly, explicit types make it easier to write the program in the first place.
The type is a specification, and the compiler's feedback can be a helpful tool in writing a program that meets the specification.
Finally, Lean's type inference is a best-effort system.
Because Lean's type system is so expressive, there is no “best” or most general type to find for all expressions.
This means that even if you get a type, there's no guarantee that it's the _right_ type for a given application.
For instance, {anchorTerm fourteenNat}`14` can be a {anchorName length1}`Nat` or an {anchorName fourteenInt}`Int`:
```anchor fourteenNat
#check 14
```
```anchorInfo fourteenNat
14 : Nat
```
```anchor fourteenInt
#check (14 : Int)
```
```anchorInfo fourteenInt
14 : Int
```
:::
:::paragraph
Missing type annotations can give confusing error messages.
Omitting all types from the definition of {anchorName unzipNoTypesAtAll}`unzip`:
```anchor unzipNoTypesAtAll
def unzip pairs :=
match pairs with
| [] => ([], [])
| (x, y) :: xys =>
let unzipped := unzip xys
(x :: unzipped.fst, y :: unzipped.snd)
```
leads to a message about the {kw}`match` expression:
```anchorError unzipNoTypesAtAll
Invalid match expression: This pattern contains metavariables:
[]
```
This is because {kw}`match` needs to know the type of the value being inspected, but that type was not available.
A “metavariable” is an unknown part of a program, written {lit}`?m.XYZ` in error messages—they are described in the {ref "polymorphism"}[section on Polymorphism].
In this program, the type annotation on the argument is required.
:::
:::paragraph
Even some very simple programs require type annotations.
For instance, the identity function just returns whatever argument it is passed.
With argument and type annotations, it looks like this:
```anchor idA
def id (x : α) : α := x
```
Lean is capable of determining the return type on its own:
```anchor idB
def id (x : α) := x
```
Omitting the argument type, however, causes an error:
```anchor identNoTypes
def id x := x
```
```anchorError identNoTypes
Failed to infer type of binder `x`
```
:::
In general, messages that say something like “failed to infer” or that mention metavariables are often a sign that more type annotations are necessary.
Especially while still learning Lean, it is useful to provide most types explicitly.
# Simultaneous Matching
%%%
tag := "simultaneous-matching"
%%%
:::paragraph
Pattern-matching expressions, just like pattern-matching definitions, can match on multiple values at once.
Both the expressions to be inspected and the patterns that they match against are written with commas between them, similarly to the syntax used for definitions.
Here is a version of {anchorName dropMatch}`drop` that uses simultaneous matching:
```anchor dropMatch
def drop (n : Nat) (xs : List α) : List α :=
match n, xs with
| Nat.zero, ys => ys
| _, [] => []
| Nat.succ n, y :: ys => drop n ys
```
:::
:::paragraph
Simultaneous matching resembles matching on a pair, but there is an important difference.
Lean tracks the connection between the expression being matched and the patterns, and this information is used for purposes that include checking for termination and propagating static type information.
As a result, the version of {anchorName sameLengthPair}`sameLength` that matches a pair is rejected by the termination checker, because the connection between {anchorName sameLengthPair}`xs` and {anchorTerm sameLengthPair}`x :: xs'` is obscured by the intervening pair:
```anchor sameLengthPair
def sameLength (xs : List α) (ys : List β) : Bool :=
match (xs, ys) with
| ([], []) => true
| (x :: xs', y :: ys') => sameLength xs' ys'
| _ => false
```
```anchorError sameLengthPair
fail to show termination for
sameLength
with errors
failed to infer structural recursion:
Not considering parameter α of sameLength:
it is unchanged in the recursive calls
Not considering parameter β of sameLength:
it is unchanged in the recursive calls
Cannot use parameter xs:
failed to eliminate recursive application
sameLength xs' ys'
Cannot use parameter ys:
failed to eliminate recursive application
sameLength xs' ys'
Could not find a decreasing measure.
The basic measures relate at each recursive call as follows:
(<, ≤, =: relation proved, ? all proofs failed, _: no proof attempted)
xs ys
1) 1748:28-46 ? ?
Please use `termination_by` to specify a decreasing measure.
```
Simultaneously matching both lists is accepted:
```anchor sameLengthOk2
def sameLength (xs : List α) (ys : List β) : Bool :=
match xs, ys with
| [], [] => true
| x :: xs', y :: ys' => sameLength xs' ys'
| _, _ => false
```
:::
# Natural Number Patterns
%%%
tag := "natural-number-patterns"
%%%
:::paragraph
In the section on {ref "datatypes-and-patterns"}[datatypes and patterns], {anchorName even}`even` was defined like this:
```anchor even
def even (n : Nat) : Bool :=
match n with
| Nat.zero => true
| Nat.succ k => not (even k)
```
Just as there is special syntax to make list patterns more readable than using {anchorName length1}`List.cons` and {anchorName length1}`List.nil` directly, natural numbers can be matched using literal numbers and {anchorTerm evenFancy}`+`.
For example, {anchorName evenFancy}`even` can also be defined like this:
```anchor evenFancy
def even : Nat → Bool
| 0 => true
| n + 1 => not (even n)
```
In this notation, the arguments to the {anchorTerm evenFancy}`+` pattern serve different roles.
Behind the scenes, the left argument ({anchorName evenFancy}`n` above) becomes an argument to some number of {anchorName even}`Nat.succ` patterns, and the right argument ({anchorTerm evenFancy}`1` above) determines how many {anchorName even}`Nat.succ`s to wrap around the pattern.
The explicit patterns in {anchorName explicitHalve}`halve`, which divides a {anchorName explicitHalve}`Nat` by two and drops the remainder:
```anchor explicitHalve
def halve : Nat → Nat
| Nat.zero => 0
| Nat.succ Nat.zero => 0
| Nat.succ (Nat.succ n) => halve n + 1
```
can be replaced by numeric literals and {anchorTerm halve}`+`:
```anchor halve
def halve : Nat → Nat
| 0 => 0
| 1 => 0
| n + 2 => halve n + 1
```
Behind the scenes, both definitions are completely equivalent.
Remember: {anchorTerm halve}`halve n + 1` is equivalent to {anchorTerm halveParens}`(halve n) + 1`, not {anchorTerm halveParens}`halve (n + 1)`.
:::
:::paragraph
When using this syntax, the second argument to {anchorTerm halveFlippedPat}`+` should always be a literal {anchorName halveFlippedPat}`Nat`.
Even though addition is commutative, flipping the arguments in a pattern can result in errors like the following:
```anchor halveFlippedPat
def halve : Nat → Nat
| 0 => 0
| 1 => 0
| 2 + n => halve n + 1
```
```anchorError halveFlippedPat
Invalid pattern(s): `n` is an explicit pattern variable, but it only occurs in positions that are inaccessible to pattern matching:
.(Nat.add 2 n)
```
This restriction enables Lean to transform all uses of the {anchorTerm halveFlippedPat}`+` notation in a pattern into uses of the underlying {anchorName even}`Nat.succ`, keeping the language simpler behind the scenes.
:::
# Anonymous Functions
%%%
tag := "anonymous-functions"
%%%
:::paragraph
Functions in Lean need not be defined at the top level.
As expressions, functions are produced with the {kw}`fun` syntax.
Function expressions begin with the keyword {kw}`fun`, followed by one or more parameters, which are separated from the return expression using {lit}`=>`.
For instance, a function that adds one to a number can be written:
```anchor incr
#check fun x => x + 1
```
```anchorInfo incr
fun x => x + 1 : Nat → Nat
```
Type annotations are written the same way as on {kw}`def`, using parentheses and colons:
```anchor incrInt
#check fun (x : Int) => x + 1
```
```anchorInfo incrInt
fun x => x + 1 : Int → Int
```
Similarly, implicit parameters may be written with curly braces:
```anchor identLambda
#check fun {α : Type} (x : α) => x
```
```anchorInfo identLambda
fun {α} x => x : {α : Type} → α → α
```
This style of anonymous function expression is often referred to as a _lambda expression_, because the typical notation used in mathematical descriptions of programming languages uses the Greek letter λ (lambda) where Lean has the keyword {kw}`fun`.
Even though Lean does permit {kw}`λ` to be used instead of {kw}`fun`, it is most common to write {kw}`fun`.
:::
:::paragraph
Anonymous functions also support the multiple-pattern style used in {kw}`def`.
For instance, a function that returns the predecessor of a natural number if it exists can be written:
```anchor predHuh
#check fun
| 0 => none
| n + 1 => some n
```
```anchorInfo predHuh
fun x =>
match x with
| 0 => none
| n.succ => some n : Nat → Option Nat
```
Note that Lean's own description of the function has a named argument and a {kw}`match` expression.
Many of Lean's convenient syntactic shorthands are expanded to simpler syntax behind the scenes, and the abstraction sometimes leaks.
:::
:::paragraph
Definitions using {kw}`def` that take arguments may be rewritten as function expressions.
For instance, a function that doubles its argument can be written as follows:
```anchor doubleLambda
def double : Nat → Nat := fun
| 0 => 0
| k + 1 => double k + 2
```
When an anonymous function is very simple, like {anchorEvalStep incrSteps 0}`fun x => x + 1`, the syntax for creating the function can be fairly verbose.
In that particular example, six non-whitespace characters are used to introduce the function, and its body consists of only three non-whitespace characters.
For these simple cases, Lean provides a shorthand.
In an expression surrounded by parentheses, a centered dot character {anchorTerm incrSteps}`·` can stand for a parameter, and the expression inside the parentheses becomes the function's body.
That particular function can also be written {anchorEvalStep incrSteps 1}`(· + 1)`.
:::
:::paragraph
The centered dot always creates a function out of the _closest_ surrounding set of parentheses.
For instance, {anchorEvalStep funPair 0}`(· + 5, 3)` is a function that returns a pair of numbers, while {anchorEvalStep pairFun 0}`((· + 5), 3)` is a pair of a function and a number.
If multiple dots are used, then they become parameters from left to right:
```anchorEvalSteps twoDots
(· , ·) 1 2
===>
(1, ·) 2
===>
(1, 2)
```
Anonymous functions can be applied in precisely the same way as functions defined using {kw}`def` or {kw}`let`.
The command {anchor applyLambda}`#eval (fun x => x + x) 5` results in:
```anchorInfo applyLambda
10
```
while {anchor applyCdot}`#eval (· * 2) 5` results in:
```anchorInfo applyCdot
10
```
:::
# Namespaces
%%%
tag := "namespaces"
%%%
Each name in Lean occurs in a _namespace_, which is a collection of names.
Names are placed in namespaces using {lit}`.`, so {anchorName fragments}`List.map` is the name {anchorName fragments}`map` in the {lit}`List` namespace.
Names in different namespaces do not conflict with each other, even if they are otherwise identical.
This means that {anchorName fragments}`List.map` and {anchorName fragments}`Array.map` are different names.
Namespaces may be nested, so {lit}`Project.Frontend.User.loginTime` is the name {lit}`loginTime` in the nested namespace {lit}`Project.Frontend.User`.
:::paragraph
Names can be directly defined within a namespace.
For instance, the name {anchorName fragments}`double` can be defined in the {anchorName even}`Nat` namespace:
```anchor NatDouble
def Nat.double (x : Nat) : Nat := x + x
```
Because {anchorName even}`Nat` is also the name of a type, dot notation is available to call {anchorName fragments}`Nat.double` on expressions with type {anchorName even}`Nat`:
```anchor NatDoubleFour
#eval (4 : Nat).double
```
```anchorInfo NatDoubleFour
8
```
:::
:::paragraph
In addition to defining names directly in a namespace, a sequence of declarations can be placed in a namespace using the {kw}`namespace` and {kw}`end` commands.
For instance, this defines {anchorName NewNamespace}`triple` and {anchorName NewNamespace}`quadruple` in the namespace {lit}`NewNamespace`:
```anchor NewNamespace
namespace NewNamespace
def triple (x : Nat) : Nat := 3 * x
def quadruple (x : Nat) : Nat := 2 * x + 2 * x
end NewNamespace
```
To refer to them, prefix their names with {lit}`NewNamespace.`:
```anchor tripleNamespace
#check NewNamespace.triple
```
```anchorInfo tripleNamespace
NewNamespace.triple (x : Nat) : Nat
```
```anchor quadrupleNamespace
#check NewNamespace.quadruple
```
```anchorInfo quadrupleNamespace
NewNamespace.quadruple (x : Nat) : Nat
```
:::
:::paragraph
Namespaces may be _opened_, which allows the names in them to be used without explicit qualification.
Writing {kw}`open` {lit}`MyNamespace `{kw}`in` before an expression causes the contents of {lit}`MyNamespace` to be available in the expression.
For example, {anchorName quadrupleOpenDef}`timesTwelve` uses both {anchorName quadrupleOpenDef}`quadruple` and {anchorName quadrupleOpenDef}`triple` after opening {anchorTerm NewNamespace}`NewNamespace`:
```anchor quadrupleOpenDef
def timesTwelve (x : Nat) :=
open NewNamespace in
quadruple (triple x)
```
:::
:::paragraph
Namespaces can also be opened prior to a command.
This allows all parts of the command to refer to the contents of the namespace, rather than just a single expression.
To do this, place the {kw}`open`{lit}` ... `{kw}`in` prior to the command.
```anchor quadrupleNamespaceOpen
open NewNamespace in
#check quadruple
```
```anchorInfo quadrupleNamespaceOpen
NewNamespace.quadruple (x : Nat) : Nat
```
Function signatures show the name's full namespace.
Namespaces may additionally be opened for _all_ following commands for the rest of the file.
To do this, simply omit the {kw}`in` from a top-level usage of {kw}`open`.
:::
# {lit}`if let`
%%%
tag := "if-let"
%%%
:::paragraph
When consuming values that have a sum type, it is often the case that only a single constructor is of interest.
For example, given this type that represents a subset of Markdown inline elements:
```anchor Inline
inductive Inline : Type where
| lineBreak
| string : String → Inline
| emph : Inline → Inline
| strong : Inline → Inline
```
a function that recognizes string elements and extracts their contents can be written:
```anchor inlineStringHuhMatch
def Inline.string? (inline : Inline) : Option String :=
match inline with
| Inline.string s => some s
| _ => none
```
:::
:::paragraph
An alternative way of writing this function's body uses {kw}`if` together with {kw}`let`:
```anchor inlineStringHuh
def Inline.string? (inline : Inline) : Option String :=
if let Inline.string s := inline then
some s
else none
```
This is very much like the pattern-matching {kw}`let` syntax.
The difference is that it can be used with sum types, because a fallback is provided in the {kw}`else` case.
In some contexts, using {kw}`if let` instead of {kw}`match` can make code easier to read.
:::
# Positional Structure Arguments
%%%
tag := "positional-structure-arguments"
%%%
The {ref "structures"}[section on structures] presents two ways of constructing structures:
1. The constructor can be called directly, as in {anchorTerm pointCtor}`Point.mk 1 2`.
2. Brace notation can be used, as in {anchorTerm pointBraces}`{ x := 1, y := 2 }`.
In some contexts, it can be convenient to pass arguments positionally, rather than by name, but without naming the constructor directly.
For instance, defining a variety of similar structure types can help keep domain concepts separate, but the natural way to read the code may treat each of them as being essentially a tuple.
In these contexts, the arguments can be enclosed in angle brackets {lit}`⟨` and {lit}`⟩`.
A {anchorName pointBraces}`Point` can be written {anchorTerm pointPos}`⟨1, 2⟩`.
Be careful!
Even though they look like the less-than sign {lit}`<` and greater-than sign {lit}`>`, these brackets are different.
They can be input using {lit}`\<` and {lit}`\>`, respectively.
:::paragraph
Just as with the brace notation for named constructor arguments, this positional syntax can only be used in a context where Lean can determine the structure's type, either from a type annotation or from other type information in the program.
For instance, {anchorTerm pointPosEvalNoType}`#eval ⟨1, 2⟩` yields the following error:
```anchorError pointPosEvalNoType
Invalid `⟨...⟩` notation: The expected type of this term could not be determined
```
This error occurs because there is no type information available.
Adding an annotation, such as in {anchorTerm pointPosWithType}`#eval (⟨1, 2⟩ : Point)`, solves the problem:
```anchorInfo pointPosWithType
{ x := 1.000000, y := 2.000000 }
```
:::
# String Interpolation
%%%
tag := "string-interpolation"
%%%
:::paragraph
In Lean, prefixing a string with {kw}`s!` triggers _interpolation_, where expressions contained in curly braces inside the string are replaced with their values.
This is similar to {python}`f`-strings in Python and {CSharp}`$`-prefixed strings in C#.
For instance,
```anchor interpolation
#eval s!"three fives is {NewNamespace.triple 5}"
```
yields the output
```anchorInfo interpolation
"three fives is 15"
```
:::
:::paragraph
Not all expressions can be interpolated into a string.
For instance, attempting to interpolate a function, as in
```anchor interpolationOops
#check s!"three fives is {NewNamespace.triple}"
```
yields the error
```anchorError interpolationOops
failed to synthesize
ToString (Nat → Nat)
Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
This is because there is no standard way to convert functions into strings.
Just as the compiler maintains a table that describes how to display the result of evaluating expressions of various types, it maintains a table that describes how to convert values of various types into strings.
The message {lit}`failed to synthesize` means that the Lean compiler didn't find an entry in this table for the given type.
The chapter on {ref "type-classes"}[type classes] describes this mechanism in more detail, including the means of adding new entries to the table.
::: |
fp-lean/book/FPLean/GettingToKnow/Evaluating.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso Code External
open FPLean
example_module Examples.Intro
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Intro"
#doc (Manual) "Evaluating Expressions" =>
%%%
tag := "evaluating"
%%%
The most important thing to understand as a programmer learning Lean is how evaluation works.
Evaluation is the process of finding the value of an expression, just as one does in arithmetic.
For instance, the value of $`15 - 6` is $`9` and the value of $`2 × (3 + 1)` is $`8`.
To find the value of the latter expression, $`3 + 1` is first replaced by $`4`, yielding $`2 × 4`, which itself can be reduced to $`8`.
Sometimes, mathematical expressions contain variables: the value of $`x + 1` cannot be computed until we know what the value of $`x` is.
In Lean, programs are first and foremost expressions, and the primary way to think about computation is as evaluating expressions to find their values.
Most programming languages are _imperative_, where a program consists of a series of statements that should be carried out in order to find the program's result.
Programs have access to mutable memory, so the value referred to by a variable can change over time.
In addition to mutable state, programs may have other side effects, such as deleting files, making outgoing network connections,
throwing or catching exceptions, and reading data from a database.
“Side effects” is essentially a catch-all term for describing things that may happen in a program that don't follow the model of evaluating mathematical expressions.
In Lean, however, programs work the same way as mathematical expressions.
Once given a value, variables cannot be reassigned. Evaluating an expression cannot have side effects.
If two expressions have the same value, then replacing one with the other will not cause the program to compute a different result.
This does not mean that Lean cannot be used to write {lit}`Hello, world!` to the console, but performing I/O is not a core part of the experience of using Lean in the same way.
Thus, this chapter focuses on how to evaluate expressions interactively with Lean, while the next chapter describes how to write, compile, and run the {lit}`Hello, world!` program.
:::paragraph
To ask Lean to evaluate an expression, write {kw}`#eval` before it in your editor, which will then report the result back.
Typically, the result is found by putting the cursor or mouse pointer over {kw}`#eval`.
For instance,
```anchor threeEval
#eval 1 + 2
```
yields the value
```anchorInfo threeEval
3
```
:::
:::paragraph
Lean obeys the ordinary rules of precedence and associativity for
arithmetic operators. That is,
```anchor orderOfOperations
#eval 1 + 2 * 5
```
yields the value {anchorInfo orderOfOperations}`11` rather than {anchorInfo orderOfOperationsWrong}`15`.
:::
:::paragraph
While both ordinary mathematical notation and the majority of programming languages use parentheses (e.g. {lit}`f(x)`) to apply a function to its arguments, Lean simply writes the function next to its arguments (e.g. {lit}`f x`).
Function application is one of the most common operations, so it pays to keep it concise.
Rather than writing
```
#eval String.append("Hello, ", "Lean!")
```
to compute {anchorInfo stringAppendHello}`"Hello, Lean!"`, one would instead write
```anchor stringAppendHello
#eval String.append "Hello, " "Lean!"
```
where the function's two arguments are simply written next to it with spaces.
:::
:::paragraph
Just as the order-of-operations rules for arithmetic demand parentheses in the expression {anchorTerm orderOfOperationsWrong}`(1 + 2) * 5`, parentheses are also necessary when a function's argument is to be computed via another function call.
For instance, parentheses are required in
```anchor stringAppendNested
#eval String.append "great " (String.append "oak " "tree")
```
because otherwise the second {moduleTerm (anchor := stringAppendNested)}`String.append` would be interpreted as an argument to the first, rather than as a function being passed {moduleTerm (anchor := stringAppendNested)}`"oak "` and {moduleTerm (anchor := stringAppendNested)}`"tree"` as arguments.
The value of the inner {anchorTerm stringAppendNested}`String.append` call must be found first, after which it can be appended to {moduleTerm (anchor := stringAppendNested)}`"great "`, yielding the final value {anchorInfo stringAppendNested}`"great oak tree"`.
:::
:::paragraph
Imperative languages often have two kinds of conditional: a conditional _statement_ that determines which instructions to carry out based on a Boolean value, and a conditional _expression_ that determines which of two expressions to evaluate based on a Boolean value.
For instance, in C and C++, the conditional statement is written using {c}`if` and {c}`else`, while the conditional expression is written with a ternary operator in which {c}`?` and {c}`:` separate the condition from the branches.
In Python, the conditional statement begins with {python}`if`, while the conditional expression puts {python}`if` in the middle.
Because Lean is an expression-oriented functional language, there are no conditional statements, only conditional expressions.
They are written using {kw}`if`, {kw}`then`, and {kw}`else`.
For example,
```anchorEvalStep stringAppend 0
String.append "it is " (if 1 > 2 then "yes" else "no")
```
evaluates to
```anchorEvalStep stringAppend 1
String.append "it is " (if false then "yes" else "no")
```
which evaluates to
```anchorEvalStep stringAppend 2
String.append "it is " "no"
```
which finally evaluates to {anchorEvalStep stringAppend 3}`"it is no"`.
:::
:::paragraph
For the sake of brevity, a series of evaluation steps like this will sometimes be written with arrows between them:
```anchorEvalSteps stringAppend
String.append "it is " (if 1 > 2 then "yes" else "no")
===>
String.append "it is " (if false then "yes" else "no")
===>
String.append "it is " "no"
===>
"it is no"
```
:::
# Messages You May Meet
%%%
tag := "evaluating-messages"
%%%
:::paragraph
Asking Lean to evaluate a function application that is missing an argument will lead to an error message.
In particular, the example
```anchor stringAppendReprFunction
#eval String.append "it is "
```
yields a quite long error message:
```anchorError stringAppendReprFunction
could not synthesize a `ToExpr`, `Repr`, or `ToString` instance for type
String → String
```
:::
This message occurs because Lean functions that are applied to only some of their arguments return new functions that are waiting for the rest of the arguments.
Lean cannot display functions to users, and thus returns an error when asked to do so.
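One way to see this in action is to inspect the partial application with {kw}`#check`, which reports its type, or to complete it by supplying the missing argument (this sketch is not part of the book's example project):
```
#check String.append "it is "      -- String.append "it is " : String → String
#eval String.append "it is " "no"  -- "it is no"
```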
# Exercises
%%%
tag := "evaluating-exercises"
%%%
What are the values of the following expressions? Work them out by hand,
then enter them into Lean to check your work.
* {anchorTerm evalEx}`42 + 19`
* {anchorTerm evalEx}`String.append "A" (String.append "B" "C")`
* {anchorTerm evalEx}`String.append (String.append "A" "B") "C"`
* {anchorTerm evalEx}`if 3 == 3 then 5 else 7`
* {anchorTerm evalEx}`if 3 == 4 then "equal" else "not equal"` |
fp-lean/book/FPLean/GettingToKnow/Summary.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso Code External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Intro"
#doc (Manual) "Summary" =>
%%%
tag := "getting-to-know-summary"
%%%
# Evaluating Expressions
%%%
tag := none
%%%
In Lean, computation occurs when expressions are evaluated.
This follows the usual rules of mathematical expressions: sub-expressions are replaced by their values following the usual order of operations, until the entire expression has become a value.
When evaluating an {kw}`if` or a {kw}`match`, the expressions in the branches are not evaluated until the value of the condition or the match subject has been found.
Once they have been given a value, variables never change.
As in mathematics, but unlike most programming languages, Lean variables are simply placeholders for values rather than addresses to which new values can be written.
Variables' values may come from global definitions with {kw}`def`, from local definitions with {kw}`let`, from named function arguments, or from pattern matching.
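As an illustrative sketch (these definitions are not from the book's example project), all four sources of a variable's value can occur together:
```
def ten : Nat := 10            -- a global definition with `def`

def nearTen (n : Nat) : Nat := -- `n` is a function argument
  let sum : Nat := n + ten     -- a local definition with `let`
  match sum with
  | Nat.zero => Nat.zero
  | Nat.succ k => k            -- `k` is bound by pattern matching
```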
# Functions
%%%
tag := none
%%%
Functions in Lean are first-class values, meaning that they can be passed as arguments to other functions, saved in variables, and used like any other value.
Every Lean function takes exactly one argument.
To encode a function that takes more than one argument, Lean uses a technique called currying, where providing the first argument returns a function that expects the remaining arguments.
To encode a function that takes no arguments, Lean uses the {moduleName}`Unit` type, which is the least informative possible argument.
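Currying can be sketched as follows (the function {lit}`add` is illustrative, not from the example project):
```
def add (x : Nat) (y : Nat) : Nat := x + y

#check add 1     -- add 1 : Nat → Nat, a function still waiting for `y`
#eval (add 1) 2  -- 3
```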
There are three primary ways of creating functions:
1. Anonymous functions are written using {kw}`fun`.
   For instance, a function that swaps the fields of a {anchorName fragments}`Point` can be written {anchorTerm swapLambda}`fun (point : Point) => { x := point.y, y := point.x : Point }`.
2. Very simple anonymous functions are written by placing one or more centered dots {anchorTerm subOneDots}`·` inside of parentheses.
Each centered dot becomes an argument to the function, and the parentheses delimit its body.
For instance, a function that subtracts one from its argument can be written as {anchorTerm subOneDots}`(· - 1)` instead of as {anchorTerm subOneDots}`fun x => x - 1`.
3. Functions can be defined using {kw}`def` or {kw}`let` by adding an argument list or by using pattern-matching notation.
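All three forms can define the same function, as in this sketch:
```
def double₁ : Nat → Nat := fun n => n + n  -- 1. an anonymous function with `fun`
def double₂ : Nat → Nat := (· * 2)         -- 2. the centered-dot shorthand
def double₃ (n : Nat) : Nat := n + n       -- 3. `def` with an argument list
```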
# Types
%%%
tag := none
%%%
Lean checks that every expression has a type.
Types, such as {anchorName fragments}`Int`, {anchorName fragments}`Point`, {anchorTerm fragments}`{α : Type} → Nat → α → List α`, and {anchorTerm fragments}`Option (String ⊕ (Nat × String))`, describe the values that may eventually be found for an expression.
Like other languages, types in Lean can express lightweight specifications for programs that are checked by the Lean compiler, obviating the need for certain classes of unit test.
Unlike most languages, Lean's types can also express arbitrary mathematics, unifying the worlds of programming and theorem proving.
While using Lean for proving theorems is mostly out of scope for this book, _[Theorem Proving in Lean 4](https://leanprover.github.io/theorem_proving_in_lean4/)_ contains more information on this topic.
Some expressions can be given multiple types.
For instance, {lit}`3` can be an {anchorName fragments}`Int` or a {anchorName fragments}`Nat`.
In Lean, this should be understood as two separate expressions, one with type {anchorName fragments}`Nat` and one with type {anchorName fragments}`Int`, that happen to be written in the same way, rather than as two different types for the same thing.
Lean is sometimes able to determine types automatically, but types must often be provided by the user.
This is because Lean's type system is so expressive.
Even when Lean can find a type, it may not find the desired type—{lit}`3` could be intended to be used as an {anchorName fragments}`Int`, but Lean will give it the type {anchorName fragments}`Nat` if there are no further constraints.
In general, it is a good idea to write most types explicitly, only letting Lean fill out the very obvious types.
This improves Lean's error messages and helps make programmer intent more clear.
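For instance, this sketch shows how an annotation selects the intended type for a numeral:
```
#check 3          -- 3 : Nat, the default when nothing constrains the type
#check (3 : Int)  -- 3 : Int, because the annotation picks the intended type
```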
Some functions or datatypes take types as arguments.
They are called _polymorphic_.
Polymorphism allows a single program, such as one that calculates the length of a list, to work without caring what type the entries in the list have.
Because types are first class in Lean, polymorphism does not require any special syntax, so types are passed just like other arguments.
Naming an argument in a function type allows later types to mention that name, and when the function is applied to an argument, the type of the resulting term is found by replacing the argument's name with the actual value it was applied to.
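This substitution can be sketched with an illustrative function {lit}`pair` (not part of the example project):
```
def pair (α : Type) (x : α) : α × α := (x, x)

#check (pair)    -- pair : (α : Type) → α → α × α
#check pair Nat  -- pair Nat : Nat → Nat × Nat, with `α` replaced by `Nat`
```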
# Structures and Inductive Types
%%%
tag := none
%%%
Brand new datatypes can be introduced to Lean using the {kw}`structure` or {kw}`inductive` features.
These new types are not considered to be equivalent to any other type, even if their definitions are otherwise identical.
Datatypes have _constructors_ that explain the ways in which their values can be constructed, and each constructor takes some number of arguments.
Constructors in Lean are not the same as constructors in object-oriented languages: Lean's constructors are inert holders of data, rather than active code that initializes an allocated object.
Typically, {kw}`structure` is used to introduce a product type (that is, a type with just one constructor that takes any number of arguments), while {kw}`inductive` is used to introduce a sum type (that is, a type with many distinct constructors).
Datatypes defined with {kw}`structure` are provided with one accessor function for each field.
Both structures and inductive datatypes may be consumed with pattern matching, which exposes the values stored inside of constructors using a subset of the syntax used to call said constructors.
Pattern matching means that knowing how to create a value implies knowing how to consume it.
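The distinction between the two can be sketched with a pair of illustrative definitions:
```
structure Pair where  -- a product type: one constructor with two fields
  first : Nat
  second : Nat

inductive Shape where -- a sum type: two distinct constructors
  | circle (radius : Nat)
  | square (side : Nat)

def Shape.size (s : Shape) : Nat :=
  match s with        -- the patterns mirror the constructors' syntax
  | Shape.circle r => r
  | Shape.square n => n
```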
# Recursion
%%%
tag := none
%%%
A definition is recursive when the name being defined is used in the definition itself.
Because Lean is an interactive theorem prover in addition to being a programming language, there are certain restrictions placed on recursive definitions.
In Lean's logical side, circular definitions could lead to logical inconsistency.
In order to ensure that recursive definitions do not undermine the logical side of Lean, Lean must be able to prove that all recursive functions terminate, no matter what arguments they are called with.
In practice, this means either that recursive calls are all performed on a structurally-smaller piece of the input, which ensures that there is always progress towards a base case, or that users must provide some other evidence that the function always terminates.
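For instance, in this sketch, every recursive call is on a structurally smaller argument, so Lean accepts the definition:
```
def sumTo (n : Nat) : Nat :=
  match n with
  | Nat.zero => 0
  | Nat.succ k => Nat.succ k + sumTo k -- the recursive call is on the smaller `k`
```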
Similarly, recursive inductive types are not allowed to have a constructor that takes a function _from_ the type as an argument, because this would make it possible to encode non-terminating functions. |
fp-lean/book/FPLean/GettingToKnow/Polymorphism.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso Code External
open FPLean
example_module Examples.Intro
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Intro"
#doc (Manual) "Polymorphism" =>
%%%
tag := "polymorphism"
%%%
Just as in most languages, types in Lean can take arguments.
For instance, the type {anchorTerm fragments}`List Nat` describes lists of natural numbers, {anchorTerm fragments}`List String` describes lists of strings, and {anchorTerm fragments}`List (List Point)` describes lists of lists of points.
This is very similar to {CSharp}`List<Nat>`, {CSharp}`List<String>`, or {CSharp}`List<List<Point>>` in a language like C# or Java.
Just as Lean uses a space to pass an argument to a function, it uses a space to pass an argument to a type.
In functional programming, the term _polymorphism_ typically refers to datatypes and definitions that take types as arguments.
This is different from the object-oriented programming community, where the term typically refers to subclasses that may override some behavior of their superclass.
In this book, “polymorphism” always refers to the first sense of the word.
These type arguments can be used in the datatype or definition, which allows the same datatype or definition to be used with any type that results from replacing the arguments' names with some other types.
:::paragraph
The {anchorName Point}`Point` structure requires that both the {anchorName Point}`x` and {anchorName Point}`y` fields are {anchorName Point}`Float`s.
There is, however, nothing about points that require a specific representation for each coordinate.
A polymorphic version of {anchorName Point}`Point`, called {anchorName PPoint}`PPoint`, can take a type as an argument, and then use that type for both fields:
```anchor PPoint
structure PPoint (α : Type) where
x : α
y : α
```
:::
Just as a function definition's arguments are written immediately after the name being defined, a structure's arguments are written immediately after the structure's name.
It is customary to use Greek letters to name type arguments in Lean when no more specific name suggests itself.
{anchorTerm PPoint}`Type` is a type that describes other types, so {anchorName Nat}`Nat`, {anchorTerm fragments}`List String`, and {anchorTerm fragments}`PPoint Int` all have type {anchorTerm PPoint}`Type`.
:::paragraph
Just like {anchorName fragments}`List`, {anchorName PPoint}`PPoint` can be used by providing a specific type as its argument:
```anchor natPoint
def natOrigin : PPoint Nat :=
{ x := Nat.zero, y := Nat.zero }
```
In this example, both fields are expected to be {anchorName natPoint}`Nat`s.
Just as a function is called by replacing its argument variables with its argument values, providing {anchorName PPoint}`PPoint` with the type {anchorName fragments}`Nat` as an argument yields a structure in which the fields {anchorName PPoint}`x` and {anchorName PPoint}`y` have the type {anchorName fragments}`Nat`, because the argument name {anchorName PPoint}`α` has been replaced by the argument type {anchorName fragments}`Nat`.
Types are ordinary expressions in Lean, so passing arguments to polymorphic types (like {anchorName PPoint}`PPoint`) doesn't require any special syntax.
:::
:::paragraph
Definitions may also take types as arguments, which makes them polymorphic.
The function {anchorName replaceX}`replaceX` replaces the {anchorName replaceX}`x` field of a {anchorName replaceX}`PPoint` with a new value.
In order to allow {anchorName replaceX}`replaceX` to work with _any_ polymorphic point, it must be polymorphic itself.
This is achieved by having its first argument be the type of the point's fields, with later arguments referring back to the first argument's name.
```anchor replaceX
def replaceX (α : Type) (point : PPoint α) (newX : α) : PPoint α :=
{ point with x := newX }
```
In other words, when the types of the arguments {anchorName replaceX}`point` and {anchorName replaceX}`newX` mention {anchorName replaceX}`α`, they are referring to _whichever type was provided as the first argument_.
This is similar to the way that function argument names refer to the values that were provided when they occur in the function's body.
:::
:::paragraph
This can be seen by asking Lean to check the type of {anchorName replaceX}`replaceX`, and then asking it to check the type of {anchorTerm replaceXNatOriginFiveT}`replaceX Nat`.
```anchorTerm replaceXT
#check (replaceX)
```
```anchorInfo replaceXT
replaceX : (α : Type) → PPoint α → α → PPoint α
```
This function type includes the _name_ of the first argument, and later arguments in the type refer back to this name.
Just as the value of a function application is found by replacing the argument name with the provided argument value in the function's body, the type of a function application is found by replacing the argument's name with the provided value in the function's return type.
Providing the first argument, {anchorName replaceXNatT}`Nat`, causes all occurrences of {anchorName replaceX}`α` in the remainder of the type to be replaced with {anchorName replaceXNatT}`Nat`:
```anchorTerm replaceXNatT
#check replaceX Nat
```
```anchorInfo replaceXNatT
replaceX Nat : PPoint Nat → Nat → PPoint Nat
```
Because the remaining arguments are not explicitly named, no further substitution occurs as more arguments are provided:
```anchorTerm replaceXNatOriginT
#check replaceX Nat natOrigin
```
```anchorInfo replaceXNatOriginT
replaceX Nat natOrigin : Nat → PPoint Nat
```
```anchorTerm replaceXNatOriginFiveT
#check replaceX Nat natOrigin 5
```
```anchorInfo replaceXNatOriginFiveT
replaceX Nat natOrigin 5 : PPoint Nat
```
:::
:::paragraph
The fact that the type of the whole function application expression was determined by passing a type as an argument has no bearing on the ability to evaluate it.
```anchorTerm replaceXNatOriginFiveV
#eval replaceX Nat natOrigin 5
```
```anchorInfo replaceXNatOriginFiveV
{ x := 5, y := 0 }
```
:::
:::paragraph
Polymorphic functions work by taking a named type argument and having later types refer to the argument's name.
However, there's nothing special about type arguments that allows them to be named.
Given a datatype that represents positive or negative signs:
```anchor Sign
inductive Sign where
| pos
| neg
```
:::
:::paragraph
it is possible to write a function whose argument is a sign.
If the argument is positive, the function returns a {anchorName posOrNegThree}`Nat`, while if it's negative, it returns an {anchorName posOrNegThree}`Int`:
```anchor posOrNegThree
def posOrNegThree (s : Sign) :
match s with | Sign.pos => Nat | Sign.neg => Int :=
match s with
| Sign.pos => (3 : Nat)
| Sign.neg => (-3 : Int)
```
Because types are first class and can be computed using the ordinary rules of the Lean language, they can be computed by pattern-matching against a datatype.
When Lean is checking this function, it uses the fact that the {kw}`match`-expression in the function's body corresponds to the {kw}`match`-expression in the type to make {anchorName posOrNegThree}`Nat` be the expected type for the {anchorName Sign}`pos` case and to make {anchorName posOrNegThree}`Int` be the expected type for the {anchorName Sign}`neg` case.
:::
:::paragraph
Applying {anchorName posOrNegThree}`posOrNegThree` to {anchorName Sign}`pos` results in the argument name {anchorName posOrNegThree}`s` in both the body of the function and its return type being replaced by {anchorName Sign}`pos`.
Evaluation can occur both in the expression and its type:
```anchorEvalSteps posOrNegThreePos
(posOrNegThree Sign.pos :
match Sign.pos with | Sign.pos => Nat | Sign.neg => Int)
===>
((match Sign.pos with
| Sign.pos => (3 : Nat)
| Sign.neg => (-3 : Int)) :
match Sign.pos with | Sign.pos => Nat | Sign.neg => Int)
===>
((3 : Nat) : Nat)
===>
3
```
:::
# Linked Lists
%%%
tag := "linked-lists"
%%%
:::paragraph
Lean's standard library includes a canonical linked list datatype, called {anchorName fragments}`List`, and special syntax that makes it more convenient to use.
Lists are written in square brackets.
For instance, a list that contains the prime numbers less than 10 can be written:
```anchor primesUnder10
def primesUnder10 : List Nat := [2, 3, 5, 7]
```
:::
:::paragraph
Behind the scenes, {anchorName List}`List` is an inductive datatype, defined like this:
```anchor List
inductive List (α : Type) where
| nil : List α
| cons : α → List α → List α
```
The actual definition in the standard library is slightly different, because it uses features that have not yet been presented, but it is substantially similar.
This definition says that {anchorName List}`List` takes a single type as its argument, just as {anchorName PPoint}`PPoint` did.
This type is the type of the entries stored in the list.
According to the constructors, a {anchorTerm List}`List α` can be built with either {anchorName List}`nil` or {anchorName List}`cons`.
The constructor {anchorName List}`nil` represents empty lists and the constructor {anchorName List}`cons` is used for non-empty lists.
The first argument to {anchorName List}`cons` is the head of the list, and the second argument is its tail.
A list that contains $`n` entries contains $`n` {anchorName List}`cons` constructors, the last of which has {anchorName List}`nil` as its tail.
:::
:::paragraph
The {anchorName primesUnder10}`primesUnder10` example can be written more explicitly by using {anchorName List}`List`'s constructors directly:
```anchor explicitPrimesUnder10
def explicitPrimesUnder10 : List Nat :=
List.cons 2 (List.cons 3 (List.cons 5 (List.cons 7 List.nil)))
```
These two definitions are completely equivalent, but {anchorName primesUnder10}`primesUnder10` is much easier to read than {anchorName explicitPrimesUnder10}`explicitPrimesUnder10`.
:::
:::paragraph
Functions that consume {anchorName List}`List`s can be defined in much the same way as functions that consume {anchorName Nat}`Nat`s.
Indeed, one way to think of a linked list is as a {anchorName Nat}`Nat` that has an extra data field dangling off each {anchorName Nat}`succ` constructor.
From this point of view, computing the length of a list is the process of replacing each {anchorName List}`cons` with a {anchorName Nat}`succ` and the final {anchorName List}`nil` with a {anchorName Nat}`zero`.
Just as {anchorName replaceX}`replaceX` took the type of the fields of the point as an argument, {anchorName length1EvalSummary}`length` takes the type of the list's entries.
For example, if the list contains strings, then the first argument is {anchorName length1EvalSummary}`String`: {anchorEvalStep length1EvalSummary 0}`length String ["Sourdough", "bread"]`.
It should compute like this:
```anchorEvalSteps length1EvalSummary
length String ["Sourdough", "bread"]
===>
length String (List.cons "Sourdough" (List.cons "bread" List.nil))
===>
Nat.succ (length String (List.cons "bread" List.nil))
===>
Nat.succ (Nat.succ (length String List.nil))
===>
Nat.succ (Nat.succ Nat.zero)
===>
2
```
:::
:::paragraph
The definition of {anchorName length1}`length` is both polymorphic (because it takes the list entry type as an argument) and recursive (because it refers to itself).
Generally, functions follow the shape of the data: recursive datatypes lead to recursive functions, and polymorphic datatypes lead to polymorphic functions.
```anchor length1
def length (α : Type) (xs : List α) : Nat :=
match xs with
| List.nil => Nat.zero
| List.cons y ys => Nat.succ (length α ys)
```
:::
Names such as {lit}`xs` and {lit}`ys` are conventionally used to stand for lists of unknown values.
The {lit}`s` in the name indicates that they are plural, so they are pronounced “exes” and “whys” rather than “x s” and “y s”.
:::paragraph
To make it easier to read functions on lists, the bracket notation {anchorTerm length2}`[]` can be used to pattern-match against {anchorName List}`nil`, and an infix {anchorTerm length2}`::` can be used in place of {anchorName List}`cons`:
```anchor length2
def length (α : Type) (xs : List α) : Nat :=
match xs with
| [] => 0
| y :: ys => Nat.succ (length α ys)
```
:::
# Implicit Arguments
%%%
tag := "implicit-parameters"
%%%
:::paragraph
Both {anchorName replaceX}`replaceX` and {anchorName length1}`length` are somewhat bureaucratic to use, because the type argument is typically uniquely determined by the later values.
Indeed, in most languages, the compiler is perfectly capable of determining type arguments on its own, and only occasionally needs help from users.
This is also the case in Lean.
Arguments can be declared _implicit_ by wrapping them in curly braces instead of parentheses when defining a function.
For example, a version of {anchorName replaceXImp}`replaceX` with an implicit type argument looks like this:
```anchor replaceXImp
def replaceX {α : Type} (point : PPoint α) (newX : α) : PPoint α :=
{ point with x := newX }
```
It can be used with {anchorName replaceXImpNat}`natOrigin` without providing {anchorName NatDoubleFour}`Nat` explicitly, because Lean can _infer_ the value of {anchorName replaceXImp}`α` from the later arguments:
```anchor replaceXImpNat
#eval replaceX natOrigin 5
```
```anchorInfo replaceXImpNat
{ x := 5, y := 0 }
```
:::
:::paragraph
Similarly, {anchorName lengthImp}`length` can be redefined to take the entry type implicitly:
```anchor lengthImp
def length {α : Type} (xs : List α) : Nat :=
match xs with
| [] => 0
| y :: ys => Nat.succ (length ys)
```
This {anchorName lengthImp}`length` function can be applied directly to {anchorName lengthImpPrimes}`primesUnder10`:
```anchor lengthImpPrimes
#eval length primesUnder10
```
```anchorInfo lengthImpPrimes
4
```
:::
:::paragraph
In the standard library, Lean calls this function {anchorName lengthExpNat}`List.length`, which means that the dot syntax that is used for structure field access can also be used to find the length of a list:
```anchor lengthDotPrimes
#eval primesUnder10.length
```
```anchorInfo lengthDotPrimes
4
```
:::
:::paragraph
Just as C# and Java require type arguments to be provided explicitly from time to time, Lean is not always capable of finding implicit arguments.
In these cases, they can be provided using their names.
For example, a version of {anchorName lengthExpNat}`List.length` that only works for lists of integers can be specified by setting {anchorTerm lengthExpNat}`α` to {anchorName lengthExpNat}`Int`:
```anchor lengthExpNat
#check List.length (α := Int)
```
```anchorInfo lengthExpNat
List.length : List Int → Nat
```
:::
# More Built-In Datatypes
%%%
tag := "more-built-in-types"
%%%
In addition to lists, Lean's standard library contains a number of other structures and inductive datatypes that can be used in a variety of contexts.
## {lit}`Option`
%%%
tag := "Option"
%%%
Not every list has a first entry—some lists are empty.
Many operations on collections may fail to find what they are looking for.
For instance, a function that finds the first entry in a list may not find any such entry.
It must therefore have a way to signal that there was no first entry.
Many languages have a {CSharp}`null` value that represents the absence of a value.
Instead of equipping existing types with a special {CSharp}`null` value, Lean provides a datatype called {anchorName Option}`Option` that equips some other type with an indicator for missing values.
For instance, a nullable {anchorName fragments}`Int` is represented by {anchorTerm nullOne}`Option Int`, and a nullable list of strings is represented by the type {anchorTerm fragments}`Option (List String)`.
Introducing a new type to represent nullability means that the type system ensures that checks for {CSharp}`null` cannot be forgotten, because an {anchorTerm nullOne}`Option Int` can't be used in a context where an {anchorName nullOne}`Int` is expected.
:::paragraph
{anchorName Option}`Option` has two constructors, called {anchorName Option}`some` and {anchorName Option}`none`, that respectively represent the non-null and null versions of the underlying type.
The non-null constructor, {anchorName Option}`some`, contains the underlying value, while {anchorName Option}`none` takes no arguments:
```anchor Option
inductive Option (α : Type) : Type where
  | none : Option α
  | some (val : α) : Option α
```
:::
The {anchorName Option}`Option` type is very similar to nullable types in languages like C# and Kotlin, but it is not identical.
In these languages, if a type (say, {CSharp}`Boolean`) always refers to actual values of the type ({CSharp}`true` and {CSharp}`false`), the type {CSharp}`Boolean?` or {CSharp}`Nullable<Boolean>` additionally admits the {CSharp}`null` value.
Tracking this in the type system is very useful: the type checker and other tooling can help programmers remember to check for {CSharp}`null`, and APIs that explicitly describe nullability through type signatures are more informative than ones that don't.
However, these nullable types differ from Lean's {anchorName Option}`Option` in one very important way, which is that they don't allow multiple layers of optionality.
{anchorTerm nullThree}`Option (Option Int)` can be constructed with {anchorTerm nullOne}`none`, {anchorTerm nullTwo}`some none`, or {anchorTerm nullThree}`some (some 360)`.
Kotlin, on the other hand, treats {Kotlin}`T??` as being equivalent to {Kotlin}`T?`.
This subtle difference is rarely relevant in practice, but it can matter from time to time.
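One place where the distinction is visible is a lookup whose stored values are themselves optional. The following sketch (the name {lit}`describeCached` is illustrative, not part of the book's example code) uses all three shapes of {lit}`Option (Option Int)`, treating {lit}`none` as “not yet cached” and {lit}`some none` as “cached, known to be absent”:

```lean
def describeCached (cached : Option (Option Int)) : String :=
  match cached with
  | none => "not yet cached"
  | some none => "cached: value is absent"
  | some (some n) => s!"cached: {n}"

#eval describeCached (some none)  -- "cached: value is absent"
```

In Kotlin, the first two cases would collapse into a single {Kotlin}`null`, losing the information about whether the lookup had been performed.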
:::paragraph
To find the first entry in a list, if it exists, use {anchorName headHuh}`List.head?`.
The question mark is part of the name, and is not related to the use of question marks to indicate nullable types in C# or Kotlin.
In the definition of {anchorName headHuh}`List.head?`, an underscore is used to represent the tail of the list.
In patterns, underscores match anything at all, but do not introduce variables to refer to the matched data.
Using underscores instead of names is a way to clearly communicate to readers that part of the input is ignored.
```anchor headHuh
def List.head? {α : Type} (xs : List α) : Option α :=
  match xs with
  | [] => none
  | y :: _ => some y
```
:::
A Lean naming convention is to define operations that might fail in groups using the suffixes {lit}`?` for a version that returns an {anchorName Option}`Option`, {lit}`!` for a version that crashes when provided with invalid input, and {lit}`D` for a version that returns a default value when the operation would otherwise fail.
Following this pattern, {anchorName fragments}`List.head` requires the caller to provide mathematical evidence that the list is not empty, {anchorName fragments}`List.head?` returns an {anchorName Option}`Option`, {anchorName fragments}`List.head!` crashes the program when passed an empty list, and {anchorName fragments}`List.headD` takes a default value to return in case the list is empty.
The question mark and exclamation mark are part of the name, not special syntax, as Lean's naming rules are more liberal than those of many languages.
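The {lit}`?`, {lit}`!`, and {lit}`D` variants can be compared side by side on lists of natural numbers (a sketch, not part of the chapter's example code):

```lean
#eval [1, 2, 3].head?          -- some 1
#eval ([] : List Nat).head?    -- none
#eval [1, 2, 3].head!          -- 1
#eval ([] : List Nat).headD 0  -- 0
```

Calling {lit}`head!` on the empty list, by contrast, would crash the program at run time rather than signal the failure in the type.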
:::paragraph
Because {anchorName fragments}`head?` is defined in the {lit}`List` namespace, it can be used with accessor notation:
```anchor headSome
#eval primesUnder10.head?
```
```anchorInfo headSome
some 2
```
However, attempting to test it on the empty list leads to two errors:
```anchor headNoneBad
#eval [].head?
```
```anchorError headNoneBad
don't know how to synthesize implicit argument `α`
  @List.nil ?m.3
context:
⊢ Type ?u.71462
```
```anchorError headNoneBad
don't know how to synthesize implicit argument `α`
  @_root_.List.head? ?m.3 []
context:
⊢ Type ?u.71462
```
:::
:::paragraph
This is because Lean was unable to fully determine the expression's type.
In particular, it could neither find the implicit type argument to {anchorName fragments}`List.head?`, nor the implicit type argument to {anchorName fragments}`List.nil`.
In Lean's output, {lit}`?m.XYZ` represents a part of a program that could not be inferred.
These unknown parts are called _metavariables_, and they occur in some error messages.
In order to evaluate an expression, Lean needs to be able to find its type, and the type was unavailable because the empty list does not have any entries from which the type can be found.
Explicitly providing a type allows Lean to proceed:
```anchor headNone
#eval [].head? (α := Int)
```
```anchorInfo headNone
none
```
The type can also be provided with a type annotation:
```anchor headNoneTwo
#eval ([] : List Int).head?
```
```anchorInfo headNoneTwo
none
```
The error messages provide a useful clue.
Both messages use the _same_ metavariable to describe the missing implicit argument, which means that Lean has determined that the two missing pieces will share a solution, even though it was unable to determine the actual value of the solution.
:::
## {lit}`Prod`
%%%
tag := "prod"
%%%
The {anchorName Prod}`Prod` structure, short for “Product”, is a generic way of joining two values together.
For instance, a {anchorTerm fragments}`Prod Nat String` contains a {anchorName fragments}`Nat` and a {anchorName fragments}`String`.
In other words, {anchorTerm natPoint}`PPoint Nat` could be replaced by {anchorTerm fragments}`Prod Nat Nat`.
{anchorName fragments}`Prod` is very much like C#'s tuples, the {Kotlin}`Pair` and {Kotlin}`Triple` types in Kotlin, and {cpp}`tuple` in C++.
Many applications are best served by defining their own structures, even for simple cases like {anchorName Point}`Point`, because using domain terminology can make it easier to read the code.
Additionally, defining structure types helps catch more errors by assigning different types to different domain concepts, preventing them from being mixed up.
On the other hand, there are some cases where it is not worth the overhead of defining a new type.
Additionally, some libraries are sufficiently generic that there is no more specific concept than “pair”.
Finally, the standard library contains a variety of convenience functions that make it easier to work with the built-in pair type.
:::paragraph
The structure {anchorName Prod}`Prod` is defined with two type arguments:
```anchor Prod
structure Prod (α : Type) (β : Type) : Type where
  fst : α
  snd : β
```
:::
:::paragraph
Lists are used so frequently that there is special syntax to make them more readable.
For the same reason, both the product type and its constructor have special syntax.
The type {anchorTerm ProdSugar}`Prod α β` is typically written {anchorTerm ProdSugar}`α × β`, mirroring the usual notation for a Cartesian product of sets.
Similarly, the usual mathematical notation for pairs is available for {anchorName ProdSugar}`Prod`.
In other words, instead of writing:
```anchor fivesStruct
def fives : String × Int := { fst := "five", snd := 5 }
```
it suffices to write:
```anchor fives
def fives : String × Int := ("five", 5)
```
:::
:::paragraph
Both notations are right-associative.
This means that the following definitions are equivalent:
```anchor sevens
def sevens : String × Int × Nat := ("VII", 7, 4 + 3)
```
```anchor sevensNested
def sevens : String × (Int × Nat) := ("VII", (7, 4 + 3))
```
In other words, all products of more than two types, and their corresponding constructors, are actually nested products and nested pairs behind the scenes.
:::
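Because a triple is really a nested pair, its components can be reached with repeated uses of the {lit}`fst` and {lit}`snd` accessors. This sketch repeats the definition of {lit}`sevens` from above so that it stands alone:

```lean
def sevens : String × Int × Nat := ("VII", 7, 4 + 3)

#eval sevens.fst      -- "VII"
#eval sevens.snd.fst  -- 7
#eval sevens.snd.snd  -- 7
```

Here, {lit}`sevens.snd` is itself a pair of type {lit}`Int × Nat`, so a second accessor is needed to reach the last two components.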
## {anchorName Sum}`Sum`
%%%
tag := "Sum"
%%%
The {anchorName Sum}`Sum` datatype is a generic way of allowing a choice between values of two different types.
For instance, a {anchorTerm fragments}`Sum String Int` is either a {anchorName fragments}`String` or an {anchorName fragments}`Int`.
Like {anchorName Prod}`Prod`, {anchorName Sum}`Sum` should be used either when writing very generic code, for a very small section of code where there is no sensible domain-specific type, or when the standard library contains useful functions.
In most situations, it is more readable and maintainable to use a custom inductive type.
:::paragraph
Values of type {anchorTerm Sumαβ}`Sum α β` are either the constructor {anchorName Sum}`inl` applied to a value of type {anchorName Sum}`α` or the constructor {anchorName Sum}`inr` applied to a value of type {anchorName Sum}`β`:
```anchor Sum
inductive Sum (α : Type) (β : Type) : Type where
  | inl : α → Sum α β
  | inr : β → Sum α β
```
These names are abbreviations for “left injection” and “right injection”, respectively.
Just as the Cartesian product notation is used for {anchorName Prod}`Prod`, a “circled plus” notation is used for {anchorName SumSugar}`Sum`, so {anchorTerm SumSugar}`α ⊕ β` is another way to write {anchorTerm SumSugar}`Sum α β`.
There is no special syntax for {anchorName FakeSum}`Sum.inl` and {anchorName FakeSum}`Sum.inr`.
:::
:::paragraph
As an example, if pet names can either be dog names or cat names, then a type for them can be introduced as a sum of strings:
```anchor PetName
def PetName : Type := String ⊕ String
```
In a real program, it would usually be better to define a custom inductive datatype for this purpose with informative constructor names.
Here, {anchorName animals}`Sum.inl` is to be used for dog names, and {anchorName animals}`Sum.inr` is to be used for cat names.
These constructors can be used to write a list of animal names:
```anchor animals
def animals : List PetName :=
  [Sum.inl "Spot", Sum.inr "Tiger", Sum.inl "Fifi",
   Sum.inl "Rex", Sum.inr "Floof"]
```
:::
:::paragraph
Pattern matching can be used to distinguish between the two constructors.
For instance, a function that counts the number of dogs in a list of animal names (that is, the number of {anchorName howManyDogs}`Sum.inl` constructors) looks like this:
```anchor howManyDogs
def howManyDogs (pets : List PetName) : Nat :=
  match pets with
  | [] => 0
  | Sum.inl _ :: morePets => howManyDogs morePets + 1
  | Sum.inr _ :: morePets => howManyDogs morePets
```
Function calls are evaluated before infix operators, so {anchorTerm howManyDogsAdd}`howManyDogs morePets + 1` is the same as {anchorTerm howManyDogsAdd}`(howManyDogs morePets) + 1`.
As expected, {anchor dogCount}`#eval howManyDogs animals` yields {anchorInfo dogCount}`3`.
:::
## {anchorName Unit}`Unit`
%%%
tag := "Unit"
%%%
:::paragraph
{anchorName Unit}`Unit` is a type with just one argumentless constructor, called {anchorName Unit}`unit`.
In other words, it describes only a single value, which consists of said constructor applied to no arguments whatsoever.
{anchorName Unit}`Unit` is defined as follows:
```anchor Unit
inductive Unit : Type where
  | unit : Unit
```
:::
:::paragraph
On its own, {anchorName Unit}`Unit` is not particularly useful.
However, in polymorphic code, it can be used as a placeholder for data that is missing.
For instance, the following inductive datatype represents arithmetic expressions:
```anchor ArithExpr
inductive ArithExpr (ann : Type) : Type where
  | int : ann → Int → ArithExpr ann
  | plus : ann → ArithExpr ann → ArithExpr ann → ArithExpr ann
  | minus : ann → ArithExpr ann → ArithExpr ann → ArithExpr ann
  | times : ann → ArithExpr ann → ArithExpr ann → ArithExpr ann
```
The type argument {anchorName ArithExpr}`ann` stands for annotations, and each constructor is annotated.
Expressions coming from a parser might be annotated with source locations, so a return type of {anchorTerm ArithExprEx}`ArithExpr SourcePos` ensures that the parser put a {anchorName ArithExprEx}`SourcePos` at each subexpression.
Expressions that don't come from the parser, however, will not have source locations, so their type can be {anchorTerm ArithExprEx}`ArithExpr Unit`.
:::
Additionally, because all Lean functions have arguments, zero-argument functions in other languages can be represented as functions that take a {anchorName ArithExprEx}`Unit` argument.
In a return position, the {anchorName ArithExprEx}`Unit` type is similar to {CSharp}`void` in languages derived from C.
In the C family, a function that returns {CSharp}`void` will return control to its caller, but it will not return any interesting value.
By being an intentionally uninteresting value, {anchorName ArithExprEx}`Unit` allows this to be expressed without requiring a special-purpose {CSharp}`void` feature in the type system.
Unit's constructor can be written as empty parentheses: {anchorTerm unitParens}`() : Unit`.
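For instance, a “zero-argument function” can be written as a function that takes a {lit}`Unit` argument, applied using the empty parentheses (a sketch; the name {lit}`greet` is illustrative, not from the book's example code):

```lean
def greet (_ : Unit) : String := "Hello!"

#eval greet ()  -- "Hello!"
```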
## {lit}`Empty`
%%%
tag := "Empty"
%%%
The {anchorName fragments}`Empty` datatype has no constructors whatsoever.
Thus, it indicates unreachable code, because no series of calls can ever terminate with a value at type {anchorName fragments}`Empty`.
{anchorName fragments}`Empty` is not used nearly as often as {anchorName fragments}`Unit`.
However, it is useful in some specialized contexts.
Many polymorphic datatypes do not use all of their type arguments in all of their constructors.
For instance, {anchorName animals}`Sum.inl` and {anchorName animals}`Sum.inr` each use only one of {anchorName fragments}`Sum`'s type arguments.
Using {anchorName fragments}`Empty` as one of the type arguments to {anchorName fragments}`Sum` can rule out one of the constructors at a particular point in a program.
This can allow generic code to be used in contexts that have additional restrictions.
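As a sketch of this idea (not part of the chapter's example code), a value of type {lit}`Sum Empty α` can only ever have been built with {lit}`Sum.inr`, so the underlying {lit}`α` can always be extracted. The impossible {lit}`Sum.inl` case is discharged with {kw}`nomatch`, which accepts a scrutinee whose type has no constructors:

```lean
def extract {α : Type} (x : Sum Empty α) : α :=
  match x with
  | Sum.inl e => nomatch e
  | Sum.inr a => a

#eval extract (Sum.inr 5 : Sum Empty Nat)  -- 5
```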
## Naming: Sums, Products, and Units
%%%
tag := "sum-products-units"
%%%
Generally speaking, types that offer multiple constructors are called _sum types_, while types whose single constructor takes multiple arguments are called {deftech}_product types_.
These terms are related to sums and products used in ordinary arithmetic.
The relationship is easiest to see when the types involved contain a finite number of values.
If {anchorName SumProd}`α` and {anchorName SumProd}`β` are types that contain $`n` and $`k` distinct values, respectively, then {anchorTerm SumProd}`α ⊕ β` contains $`n + k` distinct values and {anchorTerm SumProd}`α × β` contains $`n \times k` distinct values.
For instance, {anchorName fragments}`Bool` has two values: {anchorName BoolNames}`true` and {anchorName BoolNames}`false`, and {anchorName Unit}`Unit` has one value: {anchorName BooloUnit}`Unit.unit`.
The product {anchorTerm fragments}`Bool × Unit` has the two values {anchorTerm BoolxUnit}`(true, Unit.unit)` and {anchorTerm BoolxUnit}`(false, Unit.unit)`, and the sum {anchorTerm fragments}`Bool ⊕ Unit` has the three values {anchorTerm BooloUnit}`Sum.inl true`, {anchorTerm BooloUnit}`Sum.inl false`, and {anchorTerm BooloUnit}`Sum.inr Unit.unit`.
Similarly, $`2 \times 1 = 2`, and $`2 + 1 = 3`.
# Messages You May Meet
%%%
tag := "polymorphism-messages"
%%%
:::paragraph
Not all definable structures or inductive types can have the type {anchorTerm Prod}`Type`.
In particular, if a constructor takes an arbitrary type as an argument, then the inductive type must have a different type.
These errors usually state something about “universe levels”.
For example, for this inductive type:
```anchor TypeInType
inductive MyType : Type where
  | ctor : (α : Type) → α → MyType
```
Lean gives the following error:
```anchorError TypeInType
Invalid universe level in constructor `MyType.ctor`: Parameter `α` has type
  Type
at universe level
  2
which is not less than or equal to the inductive type's resulting universe level
  1
```
A later chapter describes why this is the case, and how to modify definitions to make them work.
For now, try making the type an argument to the inductive type as a whole, rather than to the constructor.
:::
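Concretely, one version of that fix makes the type an argument of the inductive type rather than of the constructor, so the constructor no longer quantifies over an arbitrary type (a sketch of the suggestion above):

```lean
inductive MyType (α : Type) : Type where
  | ctor : α → MyType α
```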
:::paragraph
Similarly, if a constructor's argument is a function that takes the datatype being defined as an argument, then the definition is rejected.
For example:
```anchor Positivity
inductive MyType : Type where
  | ctor : (MyType → Int) → MyType
```
yields the message:
```anchorError Positivity
(kernel) arg #1 of 'MyType.ctor' has a non positive occurrence of the datatypes being declared
```
For technical reasons, allowing these datatypes could make it possible to undermine Lean's internal logic, making it unsuitable for use as a theorem prover.
:::
:::paragraph
Recursive functions that take two parameters should not match against the pair, but rather match each parameter independently.
Otherwise, the mechanism in Lean that checks whether recursive calls are made on smaller values is unable to see the connection between the input value and the argument in the recursive call.
For example, this function that determines whether two lists have the same length is rejected:
```anchor sameLengthPair
def sameLength (xs : List α) (ys : List β) : Bool :=
  match (xs, ys) with
  | ([], []) => true
  | (x :: xs', y :: ys') => sameLength xs' ys'
  | _ => false
```
The error message is:
```anchorError sameLengthPair
fail to show termination for
  sameLength
with errors
failed to infer structural recursion:
Not considering parameter α of sameLength:
  it is unchanged in the recursive calls
Not considering parameter β of sameLength:
  it is unchanged in the recursive calls
Cannot use parameter xs:
  failed to eliminate recursive application
    sameLength xs' ys'
Cannot use parameter ys:
  failed to eliminate recursive application
    sameLength xs' ys'

Could not find a decreasing measure.
The basic measures relate at each recursive call as follows:
(<, ≤, =: relation proved, ? all proofs failed, _: no proof attempted)
            xs ys
1) 1760:28-46 ? ?
Please use `termination_by` to specify a decreasing measure.
```
The problem can be fixed through nested pattern matching:
```anchor sameLengthOk1
def sameLength (xs : List α) (ys : List β) : Bool :=
  match xs with
  | [] =>
    match ys with
    | [] => true
    | _ => false
  | x :: xs' =>
    match ys with
    | y :: ys' => sameLength xs' ys'
    | _ => false
```
{ref "simultaneous-matching"}[Simultaneous matching], described in the next section, is another way to solve the problem that is often more elegant.
:::
:::paragraph
Forgetting an argument to an inductive type can also yield a confusing message.
For example, when the argument {anchorName MissingTypeArg}`α` is not passed to {anchorName MissingTypeArg}`MyType` in {anchorTerm MissingTypeArg}`ctor`'s type:
```anchor MissingTypeArg
inductive MyType (α : Type) : Type where
  | ctor : α → MyType
```
Lean replies with the following error:
```anchorError MissingTypeArg
type expected, got
  (MyType : Type → Type)
```
The error message is saying that {anchorName MissingTypeArg}`MyType`'s type, which is {anchorTerm MissingTypeArgT}`Type → Type`, does not itself describe types.
{anchorName MissingTypeArg}`MyType` requires an argument to become an actual honest-to-goodness type.
:::
:::paragraph
The same message can appear when type arguments are omitted in other contexts, such as in a type signature for a definition:
```anchor MyTypeDef
inductive MyType (α : Type) : Type where
  | ctor : α → MyType α
```
```anchor MissingTypeArg2
def ofFive : MyType := ctor 5
```
```anchorError MissingTypeArg2
type expected, got
  (MyType : Type → Type)
```
:::
:::paragraph
Evaluating expressions that use polymorphic types may trigger a situation in which Lean is incapable of displaying a value.
The {anchorTerm evalAxe}`#eval` command evaluates the provided expression, using the expression's type to determine how to display the result.
For some types, such as functions, this process fails, but Lean is perfectly capable of automatically generating display code for most other types.
There is no need, for example, to provide Lean with any specific display code for {anchorName WoodSplittingTool}`WoodSplittingTool`:
```anchor WoodSplittingTool
inductive WoodSplittingTool where
  | axe
  | maul
  | froe
```
```anchor evalAxe
#eval WoodSplittingTool.axe
```
```anchorInfo evalAxe
WoodSplittingTool.axe
```
There are limits to the automation that Lean uses here, however.
{anchorName allTools}`allTools` is a list of all three tools:
```anchor allTools
def allTools : List WoodSplittingTool := [
  WoodSplittingTool.axe,
  WoodSplittingTool.maul,
  WoodSplittingTool.froe
]
```
Evaluating it leads to an error:
```anchor evalAllTools
#eval allTools
```
```anchorError evalAllTools
could not synthesize a `ToExpr`, `Repr`, or `ToString` instance for type
  List WoodSplittingTool
```
This is because Lean attempts to use code from a built-in table to display a list, but this code demands that display code for {anchorName WoodSplittingTool}`WoodSplittingTool` already exists.
This error can be worked around by instructing Lean to generate this display code when a datatype is defined, instead of at the last moment as part of {anchorTerm evalAllTools}`#eval`, by adding {anchorTerm Firewood}`deriving Repr` to its definition:
```anchor Firewood
inductive Firewood where
  | birch
  | pine
  | beech
deriving Repr
```
Evaluating a list of {anchorName Firewood}`Firewood` succeeds:
```anchor allFirewood
def allFirewood : List Firewood := [
  Firewood.birch,
  Firewood.pine,
  Firewood.beech
]
```
```anchor evalAllFirewood
#eval allFirewood
```
```anchorInfo evalAllFirewood
[Firewood.birch, Firewood.pine, Firewood.beech]
```
:::
# Exercises
%%%
tag := "polymorphism-exercises"
%%%
* Write a function to find the last entry in a list. It should return an {anchorName fragments}`Option`.
* Write a function that finds the first entry in a list that satisfies a given predicate. Start the definition with {anchorTerm List.findFirst?Ex}`def List.findFirst? {α : Type} (xs : List α) (predicate : α → Bool) : Option α := …`.
* Write a function {anchorName Prod.switchEx}`Prod.switch` that switches the two fields in a pair for each other. Start the definition with {anchor Prod.switchEx}`def Prod.switch {α β : Type} (pair : α × β) : β × α := …`.
* Rewrite the {anchorName PetName}`PetName` example to use a custom datatype and compare it to the version that uses {anchorName Sum}`Sum`.
* Write a function {anchorName zipEx}`zip` that combines two lists into a list of pairs. The resulting list should be as long as the shortest input list. Start the definition with {anchor zipEx}`def zip {α β : Type} (xs : List α) (ys : List β) : List (α × β) := …`.
* Write a polymorphic function {anchorName takeOne}`take` that returns the first $`n` entries in a list, where $`n` is a {anchorName fragments}`Nat`. If the list contains fewer than $`n` entries, then the resulting list should be the entire input list. {anchorTerm takeThree}`#eval take 3 ["bolete", "oyster"]` should yield {anchorInfo takeThree}`["bolete", "oyster"]`, and {anchor takeOne}`#eval take 1 ["bolete", "oyster"]` should yield {anchorInfo takeOne}`["bolete"]`.
* Using the analogy between types and arithmetic, write a function that distributes products over sums. In other words, it should have type {anchorTerm distr}`α × (β ⊕ γ) → (α × β) ⊕ (α × γ)`.
* Using the analogy between types and arithmetic, write a function that turns multiplication by two into a sum. In other words, it should have type {anchorTerm distr}`Bool × α → α ⊕ α`. |
fp-lean/book/FPLean/GettingToKnow/Types.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso Code External
open FPLean
example_module Examples.Intro
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Intro"
#doc (Manual) "Types" =>
%%%
tag := "getting-to-know-types"
%%%
Types classify programs based on the values that they can
compute. Types serve a number of roles in a program:
1. They allow the compiler to make decisions about the in-memory representation of a value.
2. They help programmers to communicate their intent to others, serving as a lightweight specification for the inputs and outputs of a function.
   The compiler ensures that the program adheres to this specification.
3. They prevent various potential mistakes, such as adding a number to a string, and thus reduce the number of tests that are necessary for a program.
4. They help the Lean compiler automate the production of auxiliary code that can save boilerplate.
Lean's type system is unusually expressive.
Types can encode strong specifications like “this sorting function returns a permutation of its input” and flexible specifications like “this function has different return types, depending on the value of its argument”.
The type system can even be used as a full-blown logic for proving mathematical theorems.
This cutting-edge expressive power doesn't make simpler types unnecessary, however, and understanding these simpler types is a prerequisite for using the more advanced features.
:::paragraph
Every program in Lean must have a type. In particular, every
expression must have a type before it can be evaluated. In the
examples so far, Lean has been able to discover a type on its own, but
it is sometimes necessary to provide one. This is done using the colon
operator inside parentheses:
```anchor onePlusTwoEval
#eval (1 + 2 : Nat)
```
Here, {anchorName onePlusTwoEval}`Nat` is the type of _natural numbers_, which are arbitrary-precision unsigned integers.
In Lean, {anchorName onePlusTwoEval}`Nat` is the default type for non-negative integer literals.
This default type is not always the best choice.
In C, unsigned integers underflow to the largest representable numbers when subtraction would otherwise yield a result less than zero.
{anchorName onePlusTwoEval}`Nat`, however, can represent arbitrarily-large unsigned numbers, so there is no largest number to underflow to.
Thus, subtraction on {anchorName onePlusTwoEval}`Nat` returns {anchorName Nat}`zero` when the answer would have otherwise been negative.
For instance,
```anchor oneMinusTwoEval
#eval (1 - 2 : Nat)
```
evaluates to {anchorInfo oneMinusTwoEval}`0` rather than {lit}`-1`.
To use a type that can represent the negative integers, provide it directly:
```anchor oneMinusTwoIntEval
#eval (1 - 2 : Int)
```
With this type, the result is {anchorInfo oneMinusTwoIntEval}`-1`, as expected.
:::
:::paragraph
To check the type of an expression without evaluating it, use {kw}`#check` instead of {kw}`#eval`. For instance:
```anchor oneMinusTwoIntType
#check (1 - 2 : Int)
```
reports {anchorInfo oneMinusTwoIntType}`1 - 2 : Int` without actually performing the subtraction.
:::
:::paragraph
When a program can't be given a type, an error is returned from both {kw}`#check` and {kw}`#eval`. For instance:
```anchor stringAppendList
#check String.append ["hello", " "] "world"
```
outputs
```anchorError stringAppendList
Application type mismatch: The argument
  ["hello", " "]
has type
  List String
but is expected to have type
  String
in the application
  String.append ["hello", " "]
```
because the first argument to {anchorName stringAppendList}`String.append` is expected to be a string, but a list of strings was provided instead.
::: |
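With a plain string as the first argument, the call checks and evaluates as expected (a sketch of one corrected call, not part of the chapter's example code):

```lean
#eval String.append "hello, " "world"  -- "hello, world"
```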
fp-lean/book/FPLean/GettingToKnow/Structures.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso Code External
open FPLean
example_module Examples.Intro
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Intro"
#doc (Manual) "Structures" =>
%%%
tag := "structures"
%%%
The first step in writing a program is usually to identify the problem domain's concepts, and then find suitable representations for them in code.
Sometimes, a domain concept is a collection of other, simpler, concepts.
In that case, it can be convenient to group these simpler components together into a single “package”, which can then be given a meaningful name.
In Lean, this is done using _structures_, which are analogous to {c}`struct`s in C or Rust and {CSharp}`record`s in C#.
Defining a structure introduces a completely new type to Lean that can't be reduced to any other type.
This is useful because multiple structures might represent different concepts that nonetheless contain the same data.
For instance, a point might be represented using either Cartesian or polar coordinates, each being a pair of floating-point numbers.
Defining separate structures prevents API clients from confusing one for another.
Lean's floating-point number type is called {anchorName zeroFloat}`Float`, and floating-point numbers are written in the usual notation.
```anchorTerm onePointTwo
#check 1.2
```
```anchorInfo onePointTwo
1.2 : Float
```
```anchorTerm negativeLots
#check -454.2123215
```
```anchorInfo negativeLots
-454.2123215 : Float
```
```anchorTerm zeroPointZero
#check 0.0
```
```anchorInfo zeroPointZero
0.0 : Float
```
When floating point numbers are written with the decimal point, Lean will infer the type {anchorName zeroFloat}`Float`. If they are written without it, then a type annotation may be necessary.
```anchorTerm zeroNat
#check 0
```
```anchorInfo zeroNat
0 : Nat
```
```anchorTerm zeroFloat
#check (0 : Float)
```
```anchorInfo zeroFloat
0 : Float
```
A Cartesian point is a structure with two {anchorName zeroFloat}`Float` fields, called {anchorName Point}`x` and {anchorName Point}`y`.
This is declared using the {kw}`structure` keyword.
```anchor Point
structure Point where
  x : Float
  y : Float
```
After this declaration, {anchorName Point}`Point` is a new structure type.
The typical way to create a value of a structure type is to provide values for all of its fields inside of curly braces.
The origin of a Cartesian plane is where {anchorName Point}`x` and {anchorName Point}`y` are both zero:
```anchor origin
def origin : Point := { x := 0.0, y := 0.0 }
```
The result of {anchorTerm originEval}`#eval origin` looks very much like the definition of {anchorName origin}`origin`.
```anchorInfo originEval
{ x := 0.000000, y := 0.000000 }
```
Because structures exist to “bundle up” a collection of data, naming it and treating it as a single unit, it is also important to be able to extract the individual fields of a structure.
This is done using dot notation, as in C, Python, Rust, or JavaScript.
```anchorTerm originx
#eval origin.x
```
```anchorInfo originx
0.000000
```
```anchorTerm originy
#eval origin.y
```
```anchorInfo originy
0.000000
```
:::paragraph
This can be used to define functions that take structures as arguments.
For instance, addition of points is performed by adding the underlying coordinate values.
It should be the case that
```anchorTerm addPointsEx
#eval addPoints { x := 1.5, y := 32 } { x := -8, y := 0.2 }
```
yields
```anchorInfo addPointsEx
{ x := -6.500000, y := 32.200000 }
```
:::
The function itself takes two {anchorName Point}`Point`s as arguments, called {anchorName addPoints}`p1` and {anchorName addPoints}`p2`.
The resulting point is based on the {anchorName addPoints}`x` and {anchorName addPoints}`y` fields of both {anchorName addPoints}`p1` and {anchorName addPoints}`p2`:
```anchor addPoints
def addPoints (p1 : Point) (p2 : Point) : Point :=
  { x := p1.x + p2.x, y := p1.y + p2.y }
```
Similarly, the distance between two points, which is the square root of the sum of the squares of the differences in their {anchorName Point}`x` and {anchorName Point}`y` components, can be written:
```anchor distance
def distance (p1 : Point) (p2 : Point) : Float :=
  Float.sqrt (((p2.x - p1.x) ^ 2.0) + ((p2.y - p1.y) ^ 2.0))
```
For example, the distance between $`(1, 2)` and $`(5, -1)` is $`5`:
```anchorTerm evalDistance
#eval distance { x := 1.0, y := 2.0 } { x := 5.0, y := -1.0 }
```
```anchorInfo evalDistance
5.000000
```
Multiple structures may have fields with the same names.
A three-dimensional point datatype may share the fields {anchorName Point3D}`x` and {anchorName Point3D}`y`, and be instantiated with the same field names:
```anchor Point3D
structure Point3D where
x : Float
y : Float
z : Float
```
```anchor origin3D
def origin3D : Point3D := { x := 0.0, y := 0.0, z := 0.0 }
```
This means that the structure's expected type must be known in order to use the curly-brace syntax.
If the type is not known, Lean will not be able to instantiate the structure.
For example,
```anchorTerm originNoType
#check { x := 0.0, y := 0.0 }
```
leads to the error
```anchorError originNoType
invalid {...} notation, expected type is not known
```
As usual, the situation can be remedied by providing a type annotation.
```anchorTerm originWithAnnot
#check ({ x := 0.0, y := 0.0 } : Point)
```
```anchorInfo originWithAnnot
{ x := 0.0, y := 0.0 } : Point
```
To make programs more concise, Lean also allows the structure type annotation inside the curly braces.
```anchorTerm originWithAnnot2
#check { x := 0.0, y := 0.0 : Point}
```
```anchorInfo originWithAnnot2
{ x := 0.0, y := 0.0 } : Point
```
# Updating Structures
%%%
tag := "updating-structures"
%%%
Imagine a function {anchorName zeroXBad}`zeroX` that replaces the {anchorName zeroXBad}`x` field of a {anchorName zeroXBad}`Point` with {anchorTerm zeroX}`0`.
In most programming language communities, this sentence would mean that the memory location pointed to by {anchorName Point}`x` was to be overwritten with a new value.
However, Lean is a functional programming language.
In functional programming communities, what is almost always meant by this kind of statement is that a fresh {anchorName Point}`Point` is allocated with the {anchorName Point}`x` field pointing to the new value, and all other fields pointing to the original values from the input.
One way to write {anchorName zeroXBad}`zeroX` is to follow this description literally, filling out the new value for {anchorName Point}`x` and manually transferring {anchorName Point}`y`:
```anchor zeroXBad
def zeroX (p : Point) : Point :=
{ x := 0, y := p.y }
```
This style of programming has drawbacks, however.
First off, if a new field is added to a structure, then every site that updates any field at all must be updated, causing maintenance difficulties.
Secondly, if the structure contains multiple fields with the same type, then there is a real risk of copy-paste coding leading to field contents being duplicated or switched.
Finally, the program becomes long and bureaucratic.
Lean provides a convenient syntax for replacing some fields in a structure while leaving the others alone.
This is done by using the {kw}`with` keyword in a structure initialization.
The source of unchanged fields occurs before the {kw}`with`, and the new fields occur after.
For example, {anchorName zeroX}`zeroX` can be written with only the new {anchorName Point}`x` value:
```anchor zeroX
def zeroX (p : Point) : Point :=
{ p with x := 0 }
```
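The {kw}`with` syntax can also update more than one field at a time.
As a sketch (the helper {lit}`translate` is not part of the book's example project), a function that shifts a point by given offsets might be written:
```lean
-- Hypothetical helper: update both fields in a single `with` expression
def translate (dx dy : Float) (p : Point) : Point :=
  { p with x := p.x + dx, y := p.y + dy }
```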
Remember that this structure update syntax does not modify existing values—it creates new values that share some fields with old values.
Given the point {anchorName fourAndThree}`fourAndThree`:
```anchor fourAndThree
def fourAndThree : Point :=
{ x := 4.3, y := 3.4 }
```
evaluating it, then evaluating an update of it using {anchorName zeroX}`zeroX`, then evaluating it again yields the original value:
```anchorTerm fourAndThreeEval
#eval fourAndThree
```
```anchorInfo fourAndThreeEval
{ x := 4.300000, y := 3.400000 }
```
```anchorTerm zeroXFourAndThreeEval
#eval zeroX fourAndThree
```
```anchorInfo zeroXFourAndThreeEval
{ x := 0.000000, y := 3.400000 }
```
```anchorTerm fourAndThreeEval
#eval fourAndThree
```
```anchorInfo fourAndThreeEval
{ x := 4.300000, y := 3.400000 }
```
One consequence of the fact that structure updates do not modify the original structure is that it becomes easier to reason about cases where the new value is computed from the old one.
References to the old structure continue to refer to the same field values, no matter how many updated values are created from it.
# Behind the Scenes
%%%
tag := "behind-the-scenes"
%%%
Every structure has a _constructor_.
Here, the term “constructor” may be a source of confusion.
Unlike constructors in languages such as Java or Python, constructors in Lean are not arbitrary code to be run when a datatype is initialized.
Instead, constructors simply gather the data to be stored in the newly-allocated data structure.
It is not possible to provide a custom constructor that pre-processes data or rejects invalid arguments.
This is really a case of the word “constructor” having different, but related, meanings in the two contexts.
By default, the constructor for a structure named {lit}`S` is named {lit}`S.mk`.
Here, {lit}`S` is a namespace qualifier, and {lit}`mk` is the name of the constructor itself.
Instead of using curly-brace initialization syntax, the constructor can also be applied directly.
```anchorTerm checkPointMk
#check Point.mk 1.5 2.8
```
However, this is not generally considered to be good Lean style, and Lean even returns its feedback using the standard structure initializer syntax.
```anchorInfo checkPointMk
{ x := 1.5, y := 2.8 } : Point
```
Constructors have function types, which means they can be used anywhere that a function is expected.
For instance, {anchorName Pointmk}`Point.mk` is a function that accepts two {anchorName Point}`Float`s (respectively {anchorName Point}`x` and {anchorName Point}`y`) and returns a new {anchorName Point}`Point`.
```anchorTerm Pointmk
#check (Point.mk)
```
```anchorInfo Pointmk
Point.mk : Float → Float → Point
```
To override a structure's constructor name, write it with two colons at the beginning.
For instance, to use {anchorName PointCtorNameName}`Point.point` instead of {anchorName Pointmk}`Point.mk`, write:
```anchor PointCtorName
structure Point where
point ::
x : Float
y : Float
```
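With this declaration, the renamed constructor can be applied directly, just as {lit}`Point.mk` was above (a sketch, assuming this redefined {lit}`Point` is in scope):
```lean
-- The constructor is now Point.point rather than Point.mk
#check Point.point 1.5 2.8
```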
In addition to the constructor, an accessor function is defined for each field of a structure.
These have the same name as the field, in the structure's namespace.
For {anchorName Point}`Point`, accessor functions {anchorName Pointx}`Point.x` and {anchorName Pointy}`Point.y` are generated.
```anchorTerm Pointx
#check (Point.x)
```
```anchorInfo Pointx
Point.x : Point → Float
```
```anchorTerm Pointy
#check (Point.y)
```
```anchorInfo Pointy
Point.y : Point → Float
```
In fact, just as the curly-braced structure construction syntax is converted to a call to the structure's constructor behind the scenes, the syntax {anchorName addPoints}`x` in the prior definition of {anchorName addPoints}`addPoints` is converted into a call to the {anchorName addPoints}`x` accessor.
That is, {anchorTerm originx}`#eval origin.x` and {anchorTerm originx1}`#eval Point.x origin` both yield
```anchorInfo originx1
0.000000
```
Accessor dot notation is usable with more than just structure fields.
It can also be used for functions that take any number of arguments.
More generally, accessor notation has the form {lit}`TARGET.f ARG1 ARG2 ...`.
If {lit}`TARGET` has type {lit}`T`, the function named {lit}`T.f` is called.
{lit}`TARGET` becomes its leftmost argument of type {lit}`T`, which is often but not always the first one, and {lit}`ARG1 ARG2 ...` are provided in order as the remaining arguments.
For instance, {anchorName stringAppend}`String.append` can be invoked from a string with accessor notation, even though {anchorName Inline}`String` is not a structure with an {anchorName stringAppendDot}`append` field.
```anchorTerm stringAppendDot
#eval "one string".append " and another"
```
```anchorInfo stringAppendDot
"one string and another"
```
In that example, {lit}`TARGET` represents {anchorTerm stringAppendDot}`"one string"` and {lit}`ARG1` represents {anchorTerm stringAppendDot}`" and another"`.
The function {anchorName modifyBoth}`Point.modifyBoth` (that is, {anchorName modifyBothTest}`modifyBoth` defined in the {lit}`Point` namespace) applies a function to both fields in a {anchorName Point}`Point`:
```anchor modifyBoth
def Point.modifyBoth (f : Float → Float) (p : Point) : Point :=
{ x := f p.x, y := f p.y }
```
Even though the {anchorName Point}`Point` argument comes after the function argument, it can be used with dot notation as well:
```anchorTerm modifyBothTest
#eval fourAndThree.modifyBoth Float.floor
```
```anchorInfo modifyBothTest
{ x := 4.000000, y := 3.000000 }
```
In this case, {lit}`TARGET` represents {anchorName fourAndThree}`fourAndThree`, while {lit}`ARG1` is {anchorName modifyBothTest}`Float.floor`.
This is because the target of the accessor notation is used as the first argument whose type matches, rather than necessarily the first argument overall.
# Exercises
%%%
tag := "structure-exercises"
%%%
* Define a structure named {anchorName RectangularPrism}`RectangularPrism` that contains the height, width, and depth of a rectangular prism, each as a {anchorName RectangularPrism}`Float`.
* Define a function named {anchorTerm RectangularPrism}`volume : RectangularPrism → Float` that computes the volume of a rectangular prism.
* Define a structure named {anchorName RectangularPrism}`Segment` that represents a line segment by its endpoints, and define a function {lit}`length : Segment → Float` that computes the length of a line segment. {anchorName RectangularPrism}`Segment` should have at most two fields.
* Which names are introduced by the declaration of {anchorName RectangularPrism}`RectangularPrism`?
* Which names are introduced by the following declarations of {anchorName Hamster}`Hamster` and {anchorName Book}`Book`? What are their types?
```anchor Hamster
structure Hamster where
name : String
fluffy : Bool
```
```anchor Book
structure Book where
makeBook ::
title : String
author : String
price : Float
```
-- fp-lean/book/FPLean/GettingToKnow/FunctionsDefinitions.lean
import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Intro"
#doc (Manual) "Functions and Definitions" =>
%%%
tag := "functions-and-definitions"
%%%
:::paragraph
In Lean, definitions are introduced using the {kw}`def` keyword.
For instance, to define the name {anchorTerm helloNameVal}`hello` to refer to the string {anchorTerm helloNameVal}`"Hello"`, write:
```anchor hello
def hello := "Hello"
```
In Lean, new names are defined using the colon-equal operator {anchorTerm hello}`:=` rather than {anchorTerm helloNameVal}`=`.
This is because {anchorTerm helloNameVal}`=` is used to describe equalities between existing expressions, and using two different operators helps prevent confusion.
:::
:::paragraph
In the definition of {anchorTerm helloNameVal}`hello`, the expression {anchorTerm helloNameVal}`"Hello"` is simple enough that Lean is able to determine the definition's type automatically.
However, most definitions are not so simple, so it will usually be necessary to add a type.
This is done using a colon after the name being defined:
```anchor lean
def lean : String := "Lean"
```
:::
:::paragraph
Now that the names have been defined, they can be used, so
```anchor helloLean
#eval String.append hello (String.append " " lean)
```
outputs
```anchorInfo helloLean
"Hello Lean"
```
In Lean, defined names may only be used after their definitions.
:::
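As a sketch of this restriction (the name {lit}`greeting` is hypothetical), referring to a name before its {kw}`def` is rejected:
```lean
-- Using a name before it is defined is an error:
#eval String.append greeting " world"  -- error: unknown identifier
def greeting := "Hello"
```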
In many languages, definitions of functions use a different syntax than definitions of other values.
For instance, Python function definitions begin with the {kw}`def` keyword, while other definitions are defined with an equals sign.
In Lean, functions are defined using the same {kw}`def` keyword as other values.
Nonetheless, definitions such as {anchorTerm helloNameVal}`hello` introduce names that refer _directly_ to their values, rather than to zero-argument functions that return equivalent results each time they are called.
# Defining Functions
%%%
tag := "defining-functions"
%%%
:::paragraph
There are a variety of ways to define functions in Lean. The simplest is to place the function's arguments before the definition's type, separated by spaces. For instance, a function that adds one to its argument can be written:
```anchor add1
def add1 (n : Nat) : Nat := n + 1
```
Testing this function with {kw}`#eval` gives {anchorInfo add1_7}`8`, as expected:
```anchor add1_7
#eval add1 7
```
:::
:::paragraph
Just as functions are applied to multiple arguments by writing spaces between each argument, functions that accept multiple arguments are defined with spaces between the arguments' names and types. The function {anchorName maximum}`maximum`, whose result is equal to the greatest of its two arguments, takes two {anchorName maximum}`Nat` arguments {anchorName maximum}`n` and {anchorName maximum}`k` and returns a {anchorName maximum}`Nat`.
```anchor maximum
def maximum (n : Nat) (k : Nat) : Nat :=
if n < k then
k
else n
```
Similarly, the function {anchorName spaceBetween}`spaceBetween` joins two strings with a space between them.
```anchor spaceBetween
def spaceBetween (before : String) (after : String) : String :=
String.append before (String.append " " after)
```
:::
:::paragraph
When a defined function like {anchorName maximum_eval}`maximum` has been provided with its arguments, the result is determined by first replacing the argument names with the provided values in the body, and then evaluating the resulting body. For example:
```anchorEvalSteps maximum_eval
maximum (5 + 8) (2 * 7)
===>
maximum 13 14
===>
if 13 < 14 then 14 else 13
===>
14
```
:::
Expressions that evaluate to natural numbers, integers, and strings have types that say this ({anchorName Nat}`Nat`, {anchorName Positivity}`Int`, and {anchorName Book}`String`, respectively).
This is also true of functions.
A function that accepts a {anchorName Nat}`Nat` and returns a {anchorName Bool}`Bool` has type {anchorTerm evenFancy}`Nat → Bool`, and a function that accepts two {anchorName Nat}`Nat`s and returns a {anchorName Nat}`Nat` has type {anchorTerm currying}`Nat → Nat → Nat`.
As a special case, Lean returns a function's signature when its name is used directly with {kw}`#check`.
Entering {anchorTerm add1sig}`#check add1` yields {anchorInfo add1sig}`add1 (n : Nat) : Nat`.
However, Lean can be “tricked” into showing the function's type by writing the function's name in parentheses, which causes the function to be treated as an ordinary expression, so {anchorTerm add1type}`#check (add1)` yields {anchorInfo add1type}`add1 : Nat → Nat` and {anchorTerm maximumType}`#check (maximum)` yields {anchorInfo maximumType}`maximum : Nat → Nat → Nat`.
This arrow can also be written with an ASCII alternative arrow {anchorTerm add1typeASCII}`->`, so the preceding function types can be written {anchorTerm add1typeASCII}`example : Nat -> Nat := add1` and {anchorTerm maximumTypeASCII}`example : Nat -> Nat -> Nat := maximum`, respectively.
Behind the scenes, all functions actually expect precisely one argument.
Functions like {anchorName maximum3Type}`maximum` that seem to take more than one argument are in fact functions that take one argument and then return a new function.
This new function takes the next argument, and the process continues until no more arguments are expected.
This can be seen by providing one argument to a multiple-argument function: {anchorTerm maximum3Type}`#check maximum 3` yields {anchorInfo maximum3Type}`maximum 3 : Nat → Nat` and {anchorTerm stringAppendHelloType}`#check spaceBetween "Hello "` yields {anchorInfo stringAppendHelloType}`spaceBetween "Hello " : String → String`.
Using a function that returns a function to implement multiple-argument functions is called _currying_ after the mathematician Haskell Curry.
Function arrows associate to the right, which means that {anchorTerm currying}`Nat → Nat → Nat` should be parenthesized {anchorTerm currying}`Nat → (Nat → Nat)`.
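This currying can be made explicit.
The following definition (a sketch, not part of the book's example project) behaves identically to {anchorName maximum}`maximum`, but is written literally as a function that returns a function:
```lean
-- Explicitly curried form: one argument at a time
def maximumCurried : Nat → Nat → Nat :=
  fun n => fun k => if n < k then k else n
```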
## Exercises
%%%
tag := "function-definition-exercises"
%%%
* Define the function {anchorName joinStringsWithEx}`joinStringsWith` with type {anchorTerm joinStringsWith}`String → String → String → String` that creates a new string by placing its first argument between its second and third arguments. {anchorEvalStep joinStringsWithEx 0}`joinStringsWith ", " "one" "and another"` should evaluate to {anchorEvalStep joinStringsWithEx 1}`"one, and another"`.
* What is the type of {anchorTerm joinStringsWith}`joinStringsWith ": "`? Check your answer with Lean.
* Define a function {anchorName volume}`volume` with type {anchorTerm volume}`Nat → Nat → Nat → Nat` that computes the volume of a rectangular prism with the given height, width, and depth.
# Defining Types
%%%
tag := "defining-types"
%%%
Most typed programming languages have some means of defining aliases for types, such as C's {c}`typedef`.
In Lean, however, types are a first-class part of the language—they are expressions like any other.
This means that definitions can refer to types just as well as they can refer to other values.
:::paragraph
For example, if {anchorName StringTypeDef}`String` is too much to type, a shorter abbreviation {anchorName StringTypeDef}`Str` can be defined:
```anchor StringTypeDef
def Str : Type := String
```
It is then possible to use {anchorName aStr}`Str` as a definition's type instead of {anchorName StringTypeDef}`String`:
```anchor aStr
def aStr : Str := "This is a string."
```
:::
The reason this works is that types follow the same rules as the rest of Lean.
Types are expressions, and in an expression, a defined name can be replaced with its definition.
Because {anchorName aStr}`Str` has been defined to mean {anchorName Book}`String`, the definition of {anchorName aStr}`aStr` makes sense.
## Messages You May Meet
%%%
tag := "abbrev-vs-def"
%%%
:::paragraph
Experimenting with using definitions for types is made more complicated by the way that Lean supports overloaded integer literals.
If {anchorName NaturalNumberTypeDef}`Nat` is too short, a longer name {anchorName NaturalNumberTypeDef}`NaturalNumber` can be defined:
```anchor NaturalNumberTypeDef
def NaturalNumber : Type := Nat
```
However, using {anchorName NaturalNumberTypeDef}`NaturalNumber` as a definition's type instead of {anchorName NaturalNumberTypeDef}`Nat` does not have the expected effect.
In particular, the definition:
```anchor thirtyEight
def thirtyEight : NaturalNumber := 38
```
results in the following error:
```anchorError thirtyEight
failed to synthesize
OfNat NaturalNumber 38
numerals are polymorphic in Lean, but the numeral `38` cannot be used in a context where the expected type is
NaturalNumber
due to the absence of the instance above
Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
:::
This error occurs because Lean allows number literals to be _overloaded_.
When it makes sense to do so, natural number literals can be used for new types, just as if those types were built into the system.
This is part of Lean's mission of making it convenient to represent mathematics, and different branches of mathematics use number notation for very different purposes.
The specific feature that allows this overloading does not replace all defined names with their definitions before looking for overloading, which is what leads to the error message above.
:::paragraph
One way to work around this limitation is by providing the type {anchorName thirtyEightFixed}`Nat` on the right-hand side of the definition, causing {anchorName thirtyEightFixed}`Nat`'s overloading rules to be used for {anchorTerm thirtyEightFixed}`38`:
```anchor thirtyEightFixed
def thirtyEight : NaturalNumber := (38 : Nat)
```
The definition is still type-correct because {anchorEvalStep NaturalNumberDef 0}`NaturalNumber` is the same type as {anchorEvalStep NaturalNumberDef 1}`Nat`—by definition!
:::
Another solution is to define an overloading for {anchorName NaturalNumberDef}`NaturalNumber` that works equivalently to the one for {anchorName NaturalNumberDef}`Nat`.
This requires more advanced features of Lean, however.
:::paragraph
Finally, defining the new name for {anchorName NaturalNumberDef}`Nat` using {kw}`abbrev` instead of {kw}`def` allows overloading resolution to replace the defined name with its definition.
Definitions written using {kw}`abbrev` are always unfolded.
For instance,
```anchor NTypeDef
abbrev N : Type := Nat
```
and
```anchor thirtyNine
def thirtyNine : N := 39
```
are accepted without issue.
:::
Behind the scenes, some definitions are internally marked as being unfoldable during overload resolution, while others are not.
Definitions that are to be unfolded are called _reducible_.
Control over reducibility is essential to allow Lean to scale: fully unfolding all definitions can result in very large types that are slow for a machine to process and difficult for users to understand.
Definitions produced with {kw}`abbrev` are marked as reducible.
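Behind the scenes, {kw}`abbrev` is roughly equivalent to attaching the {lit}`@[reducible]` attribute to an ordinary {kw}`def` (a sketch; the names {lit}`N'` and {lit}`fortyTwo` are hypothetical):
```lean
-- Marking a definition reducible by hand lets literal overloading unfold it
@[reducible] def N' : Type := Nat

def fortyTwo : N' := 42
```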
-- fp-lean/book/FPLean/GettingToKnow/DatatypesPatterns.lean
import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
example_module Examples.Intro
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Intro"
#doc (Manual) "Datatypes and Patterns" =>
%%%
tag := "datatypes-and-patterns"
%%%
Structures enable multiple independent pieces of data to be combined into a coherent whole that is represented by a brand new type.
Types such as structures that group together a collection of values are called _product types_.
Many domain concepts, however, can't be naturally represented as structures.
For instance, an application might need to track user permissions, where some users are document owners, some may edit documents, and others may only read them.
A calculator has a number of binary operators, such as addition, subtraction, and multiplication.
Structures do not provide an easy way to encode multiple choices.
Similarly, while a structure is an excellent way to keep track of a fixed set of fields, many applications require data that may contain an arbitrary number of elements.
Most classic data structures, such as trees and lists, have a recursive structure, where the tail of a list is itself a list, or where the left and right branches of a binary tree are themselves binary trees.
In the aforementioned calculator, the structure of expressions themselves is recursive.
The summands in an addition expression may themselves be multiplication expressions, for instance.
Datatypes that allow choices are called _sum types_ and datatypes that can include instances of themselves are called _recursive datatypes_.
Recursive sum types are called _inductive datatypes_, because mathematical induction may be used to prove statements about them.
When programming, inductive datatypes are consumed through pattern matching and recursive functions.
:::paragraph
Many of the built-in types are actually inductive datatypes in the standard library.
For instance, {anchorName Bool}`Bool` is an inductive datatype:
```anchor Bool
inductive Bool where
| false : Bool
| true : Bool
```
This definition has two main parts.
The first line provides the name of the new type ({anchorName Bool}`Bool`), while the remaining lines each describe a constructor.
As with constructors of structures, constructors of inductive datatypes are mere inert receivers of and containers for other data, rather than places to insert arbitrary initialization and validation code.
Unlike structures, inductive datatypes may have multiple constructors.
Here, there are two constructors, {anchorName Bool}`true` and {anchorName Bool}`false`, and neither takes any arguments.
Just as a structure declaration places its names in a namespace named after the declared type, an inductive datatype places the names of its constructors in a namespace.
In the Lean standard library, {anchorName BoolNames}`true` and {anchorName BoolNames}`false` are re-exported from this namespace so that they can be written alone, rather than as {anchorName BoolNames}`Bool.true` and {anchorName BoolNames}`Bool.false`, respectively.
:::
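This re-export means that both spellings denote the same constructor (a sketch):
```lean
-- The qualified and re-exported names refer to the same Bool constructor
#check Bool.true
#check true
```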
:::paragraph
From a data modeling perspective, inductive datatypes are used in many of the same contexts where a sealed abstract class might be used in other languages.
In languages like C# or Java, one might write a similar definition of {anchorName Bool}`Bool`:
```CSharp
abstract class Bool {}
class True : Bool {}
class False : Bool {}
```
However, the specifics of these representations are fairly different. In particular, each non-abstract class creates both a new type and new ways of allocating data. In the object-oriented example, {CSharp}`True` and {CSharp}`False` are both types that are more specific than {CSharp}`Bool`, while the Lean definition introduces only the new type {anchorName Bool}`Bool`.
:::
The type {anchorName Nat}`Nat` of non-negative integers is an inductive datatype:
```anchor Nat
inductive Nat where
| zero : Nat
| succ (n : Nat) : Nat
```
Here, {anchorName NatNames}`zero` represents 0, while {anchorName NatNames}`succ` represents the successor of some other number.
The {anchorName Nat}`Nat` mentioned in {anchorName NatNames}`succ`'s declaration is the very type {anchorName Nat}`Nat` that is in the process of being defined.
_Successor_ means “one greater than”, so the successor of five is six and the successor of 32,185 is 32,186.
Using this definition, {anchorEvalStep four 1}`4` is represented as {anchorEvalStep four 0}`Nat.succ (Nat.succ (Nat.succ (Nat.succ Nat.zero)))`.
This definition is almost like the definition of {anchorName even}`Bool` with slightly different names.
The only real difference is that {anchorName NatNames}`succ` is followed by {anchorTerm Nat}`(n : Nat)`, which specifies that the constructor {anchorName NatNames}`succ` takes an argument of type {anchorName Nat}`Nat` which happens to be named {anchorName Nat}`n`.
The names {anchorName NatNames}`zero` and {anchorName NatNames}`succ` are in a namespace named after their type, so they must be referred to as {anchorName NatNames}`Nat.zero` and {anchorName NatNames}`Nat.succ`, respectively.
Argument names, such as {anchorName Nat}`n`, may occur in Lean's error messages and in feedback provided when writing mathematical proofs.
Lean also has an optional syntax for providing arguments by name.
Generally, however, the choice of argument name is less important than the choice of a structure field name, as it does not form as large a part of the API.
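For instance, the argument name {lit}`n` from the declaration of {lit}`succ` can be used to pass the argument by name (a sketch):
```lean
-- Passing a constructor argument by its declared name
#eval Nat.succ (n := 4)  -- 5
```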
In C# or Java, {CSharp}`Nat` could be defined as follows:
```CSharp
abstract class Nat {}
class Zero : Nat {}
class Succ : Nat {
public Nat n;
public Succ(Nat pred) {
n = pred;
}
}
```
Just as in the {anchorName Bool}`Bool` example above, this defines more types than the Lean equivalent.
Additionally, this example highlights how Lean datatype constructors are much more like subclasses of an abstract class than they are like constructors in C# or Java, as the constructor shown here contains initialization code to be executed.
Sum types are also similar to using a string tag to encode discriminated unions in TypeScript.
In TypeScript, {typescript}`Nat` could be defined as follows:
```typescript
interface Zero {
tag: "zero";
}
interface Succ {
tag: "succ";
predecessor: Nat;
}
type Nat = Zero | Succ;
```
Just like C# and Java, this encoding ends up with more types than in Lean, because {typescript}`Zero` and {typescript}`Succ` are each a type on their own.
It also illustrates that Lean constructors correspond to objects in JavaScript or TypeScript that include a tag that identifies the contents.
# Pattern Matching
%%%
tag := "pattern-matching"
%%%
In many languages, these kinds of data are consumed by first using an instance-of operator to check which subclass has been received and then reading the values of the fields that are available in the given subclass.
The instance-of check determines which code to run, ensuring that the data needed by this code is available, while the fields themselves provide the data.
In Lean, both of these purposes are simultaneously served by _pattern matching_.
An example of a function that uses pattern matching is {anchorName isZero}`isZero`, which returns {anchorName isZero}`true` when its argument is {anchorName isZero}`Nat.zero` and {anchorName isZero}`false` otherwise.
```anchor isZero
def isZero (n : Nat) : Bool :=
match n with
| Nat.zero => true
| Nat.succ k => false
```
The {kw}`match` expression is provided the function's argument {anchorName isZero}`n` for destructuring.
If {anchorName isZero}`n` was constructed by {anchorName isZero}`Nat.zero`, then the first branch of the pattern match is taken, and the result is {anchorName isZero}`true`.
If {anchorName isZero}`n` was constructed by {anchorName isZero}`Nat.succ`, then the second branch is taken, and the result is {anchorName isZero}`false`.
:::paragraph
Step-by-step, evaluation of {anchorEvalStep isZeroZeroSteps 0}`isZero Nat.zero` proceeds as follows:
```anchorEvalSteps isZeroZeroSteps
isZero Nat.zero
===>
match Nat.zero with
| Nat.zero => true
| Nat.succ k => false
===>
true
```
:::
:::paragraph
Evaluation of {anchorEvalStep isZeroFiveSteps 0}`isZero 5` proceeds similarly:
```anchorEvalSteps isZeroFiveSteps
isZero 5
===>
isZero (Nat.succ (Nat.succ (Nat.succ (Nat.succ (Nat.succ Nat.zero)))))
===>
match Nat.succ (Nat.succ (Nat.succ (Nat.succ (Nat.succ Nat.zero)))) with
| Nat.zero => true
| Nat.succ k => false
===>
false
```
:::
The {anchorName isZero}`k` in the second branch of the pattern in {anchorName isZero}`isZero` is not decorative.
It makes the {anchorName isZero}`Nat` that is the argument to {anchorName isZero}`Nat.succ` visible, with the provided name.
That smaller number can then be used to compute the final result of the expression.
:::paragraph
Just as the successor of some number $`n` is one greater than $`n` (that is, $`n + 1`), the predecessor of a number is one less than it.
If {anchorName pred}`pred` is a function that finds the predecessor of a {anchorName pred}`Nat`, then it should be the case that the following examples find the expected result:
```anchor predFive
#eval pred 5
```
```anchorInfo predFive
4
```
```anchor predBig
#eval pred 839
```
```anchorInfo predBig
838
```
:::
:::paragraph
Because {anchorName Nat}`Nat` cannot represent negative numbers, {anchorName NatNames}`Nat.zero` is a bit of a conundrum.
Usually, when working with {anchorName Nat}`Nat`, operators that would ordinarily produce a negative number are redefined to produce {anchorName NatNames}`zero` itself:
```anchor predZero
#eval pred 0
```
```anchorInfo predZero
0
```
:::
To find the predecessor of a {anchorName pred}`Nat`, the first step is to check which constructor was used to create it.
If it was {anchorName pred}`Nat.zero`, then the result is {anchorName pred}`Nat.zero`.
If it was {anchorName pred}`Nat.succ`, then the name {anchorName pred}`k` is used to refer to the {anchorName plus}`Nat` underneath it.
And this {anchorName pred}`Nat` is the desired predecessor, so the result of the {anchorName pred}`Nat.succ` branch is {anchorName pred}`k`.
```anchor pred
def pred (n : Nat) : Nat :=
match n with
| Nat.zero => Nat.zero
| Nat.succ k => k
```
:::paragraph
Applying this function to {anchorTerm predFiveSteps}`5` yields the following steps:
```anchorEvalSteps predFiveSteps
pred 5
===>
pred (Nat.succ 4)
===>
match Nat.succ 4 with
| Nat.zero => Nat.zero
| Nat.succ k => k
===>
4
```
:::
:::paragraph
Pattern matching can be used with structures as well as with sum types.
For instance, a function that extracts the third dimension from a {anchorName depth}`Point3D` can be written as follows:
```anchor depth
def depth (p : Point3D) : Float :=
match p with
| { x:= h, y := w, z := d } => d
```
In this case, it would have been much simpler to just use the {anchorName fragments}`Point3D.z` accessor, but structure patterns are occasionally the simplest way to write a function.
:::
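Constructor patterns offer an alternative spelling: because {lit}`Point3D` has the single constructor {lit}`Point3D.mk`, the same function can be written with an anonymous constructor pattern (a sketch; the name {lit}`depth'` is chosen here to avoid clashing with the definition above):
```lean
-- ⟨…⟩ matches the structure's constructor positionally
def depth' (p : Point3D) : Float :=
  match p with
  | ⟨_, _, d⟩ => d
```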
# Recursive Functions
%%%
tag := "recursive-functions"
%%%
Definitions that refer to the name being defined are called _recursive definitions_.
Inductive datatypes are allowed to be recursive; indeed, {anchorName Nat}`Nat` is an example of such a datatype because {anchorName Nat}`succ` demands another {anchorName Nat}`Nat`.
Recursive datatypes can represent arbitrarily large data, limited only by technical factors like available memory.
Just as it would be impossible to write down one constructor for each natural number in the datatype definition, it is also impossible to write down a pattern match case for each possibility.
:::paragraph
Recursive datatypes are nicely complemented by recursive functions.
A simple recursive function over {anchorName even}`Nat` checks whether its argument is even.
In this case, {anchorName even}`Nat.zero` is even.
Non-recursive branches of the code like this one are called _base cases_.
The successor of an odd number is even, and the successor of an even number is odd.
This means that a number built with {anchorName even}`Nat.succ` is even if and only if its argument is not even.
```anchor even
def even (n : Nat) : Bool :=
match n with
| Nat.zero => true
| Nat.succ k => not (even k)
```
:::
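This definition can be spot-checked by unfolding a few calls (an illustrative snippet, not tied to the book's example project):

```lean
-- `even 3` unfolds to `not (even 2)`, then `not (not (even 1))`,
-- and finally `not (not (not (even 0)))`, which is `false`.
#eval even 0  -- true
#eval even 3  -- false
#eval even 4  -- true
```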
This pattern of thought is typical for writing recursive functions on {anchorName even}`Nat`.
First, identify what to do for {anchorName even}`Nat.zero`.
Then, determine how to transform a result for an arbitrary {anchorName even}`Nat` into a result for its successor, and apply this transformation to the result of the recursive call.
This pattern is called _structural recursion_.
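The same recipe yields many other functions on {lit}`Nat`. For instance (an illustrative sketch, not part of the example project), doubling a number returns zero for {lit}`Nat.zero` and applies {lit}`Nat.succ` twice to the result of the recursive call:

```lean
def double (n : Nat) : Nat :=
  match n with
  | Nat.zero => Nat.zero
  | Nat.succ k => Nat.succ (Nat.succ (double k))
```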
:::paragraph
Unlike many languages, Lean ensures by default that every recursive function will eventually reach a base case.
From a programming perspective, this rules out accidental infinite loops.
But this feature is especially important when proving theorems, where infinite loops cause major difficulties.
A consequence of this is that Lean will not accept a version of {anchorName even}`even` that attempts to invoke itself recursively on the original number:
```anchor evenLoops
def evenLoops (n : Nat) : Bool :=
match n with
| Nat.zero => true
| Nat.succ k => not (evenLoops n)
```
The important part of the error message is that Lean could not determine that the recursive function always reaches a base case (because it doesn't).
```anchorError evenLoops
fail to show termination for
evenLoops
with errors
failed to infer structural recursion:
Not considering parameter n of evenLoops:
it is unchanged in the recursive calls
no parameters suitable for structural recursion
well-founded recursion cannot be used, `evenLoops` does not take any (non-fixed) arguments
```
:::
:::paragraph
Even though addition takes two arguments, only one of them needs to be inspected.
To add zero to a number $`n`, just return $`n`.
To add the successor of $`k` to $`n`, take the successor of the result of adding $`k` to $`n`.
```anchor plus
def plus (n : Nat) (k : Nat) : Nat :=
match k with
| Nat.zero => n
| Nat.succ k' => Nat.succ (plus n k')
```
:::
:::paragraph
In the definition of {anchorName plus}`plus`, the name {anchorName plus}`k'` is chosen to indicate that it is connected to, but not identical with, the argument {anchorName plus}`k`.
For instance, walking through the evaluation of {anchorEvalStep plusThreeTwo 0}`plus 3 2` yields the following steps:
```anchorEvalSteps plusThreeTwo
plus 3 2
===>
plus 3 (Nat.succ (Nat.succ Nat.zero))
===>
match Nat.succ (Nat.succ Nat.zero) with
| Nat.zero => 3
| Nat.succ k' => Nat.succ (plus 3 k')
===>
Nat.succ (plus 3 (Nat.succ Nat.zero))
===>
Nat.succ (match Nat.succ Nat.zero with
| Nat.zero => 3
| Nat.succ k' => Nat.succ (plus 3 k'))
===>
Nat.succ (Nat.succ (plus 3 Nat.zero))
===>
Nat.succ (Nat.succ (match Nat.zero with
| Nat.zero => 3
| Nat.succ k' => Nat.succ (plus 3 k')))
===>
Nat.succ (Nat.succ 3)
===>
5
```
:::
:::paragraph
One way to think about addition is that $`n + k` applies {anchorName times}`Nat.succ` $`k` times to $`n`.
Similarly, multiplication $`n × k` adds $`n` to itself $`k` times and subtraction $`n - k` takes $`n`'s predecessor $`k` times.
```anchor times
def times (n : Nat) (k : Nat) : Nat :=
match k with
| Nat.zero => Nat.zero
| Nat.succ k' => plus n (times n k')
```
```anchor minus
def minus (n : Nat) (k : Nat) : Nat :=
match k with
| Nat.zero => n
| Nat.succ k' => pred (minus n k')
```
:::
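Continuing this pattern, exponentiation can be written as iterated multiplication (an illustrative sketch that assumes the {lit}`times` function above; the name {lit}`exponent` is hypothetical):

```lean
def exponent (n : Nat) (k : Nat) : Nat :=
  match k with
  | Nat.zero => Nat.succ Nat.zero  -- n ^ 0 = 1
  | Nat.succ k' => times n (exponent n k')
```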
:::paragraph
Not every function can be easily written using structural recursion.
The understanding of addition as iterated {anchorName plus}`Nat.succ`, multiplication as iterated addition, and subtraction as iterated predecessor suggests an implementation of division as iterated subtraction.
In this case, if the numerator is less than the divisor, the result is zero.
Otherwise, the result is the successor of dividing the numerator minus the divisor by the divisor.
```anchor div
def div (n : Nat) (k : Nat) : Nat :=
if n < k then
0
else Nat.succ (div (n - k) k)
```
:::
:::paragraph
As long as the second argument is not {anchorTerm div}`0`, this program terminates, as it always makes progress towards the base case.
However, it is not structurally recursive, because it doesn't follow the pattern of finding a result for zero and transforming a result for a smaller {anchorName div}`Nat` into a result for its successor.
In particular, the recursive invocation of the function is applied to the result of another function call, rather than to an input constructor's argument.
Thus, Lean rejects it with the following message:
```anchorError div
fail to show termination for
div
with errors
failed to infer structural recursion:
Not considering parameter k of div:
it is unchanged in the recursive calls
Cannot use parameter k:
failed to eliminate recursive application
div (n - k) k
failed to prove termination, possible solutions:
- Use `have`-expressions to prove the remaining goals
- Use `termination_by` to specify a different well-founded relation
- Use `decreasing_by` to specify your own tactic for discharging this kind of goal
k n : Nat
h✝ : ¬n < k
⊢ n - k < n
```
This message means that {anchorName div}`div` requires a manual proof of termination.
This topic is explored in {ref "division-as-iterated-subtraction"}[the final chapter].
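Short of a termination proof, one common workaround (a sketch, not the book's eventual solution) is to add a _fuel_ parameter that bounds the number of iterations, making the function structurally recursive on the fuel:

```lean
def divFuel (fuel : Nat) (n : Nat) (k : Nat) : Nat :=
  match fuel with
  | Nat.zero => Nat.zero  -- out of fuel: return a default answer
  | Nat.succ fuel' =>
    if n < k then
      0
    else Nat.succ (divFuel fuel' (n - k) k)
```

Passing {lit}`n` itself as the initial fuel is always sufficient when the divisor is positive, because the numerator shrinks on every iteration.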
:::
-- fp-lean/book/FPLean/TypeClasses/Conveniences.lean
import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Classes"
set_option pp.rawOnError true
#doc (Manual) "Additional Conveniences" =>
%%%
tag := "type-class-conveniences"
%%%
# Constructor Syntax for Instances
%%%
tag := "instance-constructor-syntax"
%%%
Behind the scenes, type classes are structure types and instances are values of these types.
The only differences are that Lean stores additional information about type classes, such as which parameters are output parameters, and that instances are registered for searching.
While values that have structure types are typically defined using either {lit}`⟨...⟩` syntax or with braces and fields, and instances are typically defined using {kw}`where`, both syntaxes work for both kinds of definition.
:::paragraph
For example, a forestry application might represent trees as follows:
```anchor trees
structure Tree : Type where
latinName : String
commonNames : List String
def oak : Tree :=
⟨"Quercus robur", ["common oak", "European oak"]⟩
def birch : Tree :=
{ latinName := "Betula pendula",
commonNames := ["silver birch", "warty birch"]
}
def sloe : Tree where
latinName := "Prunus spinosa"
commonNames := ["sloe", "blackthorn"]
```
All three syntaxes are equivalent.
:::
:::paragraph
Similarly, type class instances can be defined using all three syntaxes:
```anchor Display
class Display (α : Type) where
displayName : α → String
instance : Display Tree :=
⟨Tree.latinName⟩
instance : Display Tree :=
{ displayName := Tree.latinName }
instance : Display Tree where
displayName t := t.latinName
```
The {kw}`where` syntax is typically used for instances, while structures use either the curly-brace syntax or the {kw}`where` syntax.
The {lit}`⟨...⟩` syntax can be useful when emphasizing that a structure type is very much like a tuple in which the fields happen to be named, but the names are not important at the moment.
However, there are situations where it can make sense to use other alternatives.
In particular, a library might provide a function that constructs an instance value.
Placing a call to this function after {lit}`:=` in an instance declaration is the easiest way to use such a function.
:::
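For example (a hypothetical sketch; {lit}`displayVia` is not part of any library), a helper function can build a {lit}`Display` instance from a naming function, and a call to it can be placed directly after {lit}`:=`:

```lean
-- Hypothetical helper that constructs an instance value:
def displayVia (f : α → String) : Display α :=
  { displayName := f }

instance : Display Tree :=
  displayVia Tree.latinName
```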
# Examples
%%%
tag := "example-command"
%%%
When experimenting with Lean code, definitions can be more convenient to use than {kw}`#eval` or {kw}`#check` commands.
First off, definitions don't produce any output, which can help keep the reader's focus on the most interesting output.
Secondly, it's easiest to write most Lean programs by starting with a type signature, allowing Lean to provide more assistance and better error messages while writing the program itself.
On the other hand, {kw}`#eval` and {kw}`#check` are easiest to use in contexts where Lean is able to determine the type from the provided expression.
Thirdly, {kw}`#eval` cannot be used with expressions whose types don't have {moduleName}`ToString` or {moduleName}`Repr` instances, such as functions.
Finally, multi-step {kw}`do` blocks, {kw}`let`-expressions, and other syntactic forms that take multiple lines are particularly difficult to write with a type annotation in {kw}`#eval` or {kw}`#check`, simply because the required parenthesization can be difficult to predict.
:::paragraph
To work around these issues, Lean supports the explicit indication of examples in a source file.
An example is like a definition without a name.
For instance, a non-empty list of birds commonly found in Copenhagen's green spaces can be written:
```anchor birdExample
example : NonEmptyList String :=
{ head := "Sparrow",
tail := ["Duck", "Swan", "Magpie", "Eurasian coot", "Crow"]
}
```
:::
:::paragraph
Examples may define functions by accepting arguments:
```anchor commAdd
example (n : Nat) (k : Nat) : Bool :=
n + k == k + n
```
While this creates a function behind the scenes, this function has no name and cannot be called.
Nonetheless, this is useful for demonstrating how a library can be used with arbitrary or unknown values of some given type.
In source files, {kw}`example` declarations are best paired with comments that explain how the example illustrates the concepts of the library.
:::
-- fp-lean/book/FPLean/TypeClasses/Coercions.lean
import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Classes"
set_option pp.rawOnError true
#doc (Manual) "Coercions" =>
%%%
tag := "coercions"
%%%
In mathematics, it is common to use the same symbol to stand for different aspects of some object in different contexts.
For example, if a ring is referred to in a context where a set is expected, then it is understood that the ring's underlying set is what's intended.
In programming languages, it is common to have rules to automatically translate values of one type into values of another type.
Java allows a {java}`byte` to be automatically promoted to an {java}`int`, and Kotlin allows a non-nullable type to be used in a context that expects a nullable version of the type.
In Lean, both purposes are served by a mechanism called {deftech}_coercions_.
When Lean encounters an expression of one type in a context that expects a different type, it will attempt to coerce the expression before reporting a type error.
Unlike Java, C, and Kotlin, the coercions are extensible by defining instances of type classes.
# Strings and Paths
%%%
tag := "string-path-coercion"
%%%
In the {ref "handling-input"}[source code to {lit}`feline`], a {moduleName}`String` is converted to a {moduleName}`FilePath` using the anonymous constructor syntax.
In fact, this was not necessary: Lean defines a coercion from {moduleName}`String` to {moduleName}`FilePath`, so a string can be used in any position where a path is expected.
Even though the function {anchorTerm readFile}`IO.FS.readFile` has type {anchorTerm readFile}`System.FilePath → IO String`, the following code is accepted by Lean:
```anchor fileDumper
def fileDumper : IO Unit := do
let stdin ← IO.getStdin
let stdout ← IO.getStdout
stdout.putStr "Which file? "
stdout.flush
let f := (← stdin.getLine).trim
stdout.putStrLn s!"'The file {f}' contains:"
stdout.putStrLn (← IO.FS.readFile f)
```
{moduleName}`String.trim` removes leading and trailing whitespace from a string.
On the last line of {anchorName fileDumper}`fileDumper`, the coercion from {moduleName}`String` to {moduleName}`FilePath` automatically converts {anchorName fileDumper}`f`, so it is not necessary to write {lit}`IO.FS.readFile ⟨f⟩`.
# Positive Numbers
%%%
tag := "positive-number-coercion"
%%%
Every positive number corresponds to a natural number.
The function {anchorName posToNat}`Pos.toNat` that was defined earlier converts a {moduleName}`Pos` to the corresponding {moduleName}`Nat`:
```anchor posToNat
def Pos.toNat : Pos → Nat
| Pos.one => 1
| Pos.succ n => n.toNat + 1
```
The function {anchorName drop}`List.drop`, with type {anchorTerm drop}`{α : Type} → Nat → List α → List α`, removes a prefix of a list.
Applying {anchorName drop}`List.drop` to a {moduleName}`Pos`, however, leads to a type error:
```anchorTerm dropPos
[1, 2, 3, 4].drop (2 : Pos)
```
```anchorError dropPos
Application type mismatch: The argument
2
has type
Pos
but is expected to have type
Nat
in the application
List.drop 2
```
Because the author of {anchorName drop}`List.drop` did not make it a method of a type class, it can't be overridden by defining a new instance.
:::paragraph
The type class {moduleName}`Coe` describes overloaded ways of coercing from one type to another:
```anchor Coe
class Coe (α : Type) (β : Type) where
coe : α → β
```
An instance of {anchorTerm CoePosNat}`Coe Pos Nat` is enough to allow the prior code to work:
```anchor CoePosNat
instance : Coe Pos Nat where
coe x := x.toNat
```
```anchor dropPosCoe
#eval [1, 2, 3, 4].drop (2 : Pos)
```
```anchorInfo dropPosCoe
[3, 4]
```
Using {kw}`#check` shows the result of the instance search that was used behind the scenes:
```anchor checkDropPosCoe
#check [1, 2, 3, 4].drop (2 : Pos)
```
```anchorInfo checkDropPosCoe
List.drop (Pos.toNat 2) [1, 2, 3, 4] : List Nat
```
:::
# Chaining Coercions
%%%
tag := "chaining-coercions"
%%%
When searching for coercions, Lean will attempt to assemble a coercion out of a chain of smaller coercions.
For example, there is already a coercion from {anchorName chapterIntro}`Nat` to {anchorName chapterIntro}`Int`.
Because of that instance, combined with the {anchorTerm CoePosNat}`Coe Pos Nat` instance, the following code is accepted:
```anchor posInt
def oneInt : Int := Pos.one
```
This definition uses two coercions: from {anchorTerm CoePosNat}`Pos` to {anchorTerm CoePosNat}`Nat`, and then from {anchorTerm CoePosNat}`Nat` to {anchorTerm chapterIntro}`Int`.
The Lean compiler does not get stuck in the presence of circular coercions.
For example, even if two types {anchorName CoercionCycle}`A` and {anchorName CoercionCycle}`B` can be coerced to one another, their mutual coercions can be used to find a path:
```anchor CoercionCycle
inductive A where
| a
inductive B where
| b
instance : Coe A B where
coe _ := B.b
instance : Coe B A where
coe _ := A.a
instance : Coe Unit A where
coe _ := A.a
def coercedToB : B := ()
```
Remember: the double parentheses {anchorTerm CoercionCycle}`()` are short for the constructor {anchorName chapterIntro}`Unit.unit`.
After deriving a {anchorTerm ReprBTm}`Repr B` instance with {anchorTerm ReprB}`deriving instance Repr for B`,
```anchor coercedToBEval
#eval coercedToB
```
results in:
```anchorInfo coercedToBEval
B.b
```
:::paragraph
The {anchorName CoeOption}`Option` type can be used similarly to nullable types in C# and Kotlin: the {anchorName NEListGetHuh}`none` constructor represents the absence of a value.
The Lean standard library defines a coercion from any type {anchorName CoeOption}`α` to {anchorTerm CoeOption}`Option α` that wraps the value in {anchorName CoeOption}`some`.
This allows option types to be used in a manner even more similar to nullable types, because {anchorName CoeOption}`some` can be omitted.
For instance, the function {anchorName lastHuh}`List.last?` that finds the last entry in a list can be written without a {anchorName CoeOption}`some` around the return value {anchorName lastHuh}`x`:
```anchor lastHuh
def List.last? : List α → Option α
| [] => none
| [x] => x
| _ :: x :: xs => last? (x :: xs)
```
Instance search finds the coercion, and inserts a call to {anchorName Coe}`coe`, which wraps the argument in {anchorName CoeOption}`some`.
These coercions can be chained, so that nested uses of {anchorName CoeOption}`Option` don't require nested {anchorName CoeOption}`some` constructors:
```anchor perhapsPerhapsPerhaps
def perhapsPerhapsPerhaps : Option (Option (Option String)) :=
"Please don't tell me"
```
:::
:::paragraph
Coercions are only activated automatically when Lean encounters a mismatch between an inferred type and a type that is imposed from the rest of the program.
In cases with other errors, coercions are not activated.
For example, if the error is that an instance is missing, coercions will not be used:
```anchor ofNatBeforeCoe
def perhapsPerhapsPerhapsNat : Option (Option (Option Nat)) :=
392
```
```anchorError ofNatBeforeCoe
failed to synthesize
OfNat (Option (Option (Option Nat))) 392
numerals are polymorphic in Lean, but the numeral `392` cannot be used in a context where the expected type is
Option (Option (Option Nat))
due to the absence of the instance above
Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
:::
:::paragraph
This can be worked around by manually indicating the desired type to be used for {moduleName}`OfNat`:
```anchor perhapsPerhapsPerhapsNat
def perhapsPerhapsPerhapsNat : Option (Option (Option Nat)) :=
(392 : Nat)
```
Additionally, coercions can be manually inserted using an up arrow:
```anchor perhapsPerhapsPerhapsNatUp
def perhapsPerhapsPerhapsNat : Option (Option (Option Nat)) :=
↑(392 : Nat)
```
In some cases, this can be used to ensure that Lean finds the right instances.
It can also make the programmer's intentions more clear.
:::
# Non-Empty Lists and Dependent Coercions
%%%
tag := "CoeDep"
%%%
An instance of {anchorTerm chapterIntro}`Coe α β` makes sense when the type {anchorName chapterIntro}`β` has a value that can represent each value from the type {anchorName chapterIntro}`α`.
Coercing from {moduleName}`Nat` to {moduleName}`Int` makes sense, because the type {moduleName}`Int` contains all the natural numbers, but a coercion from {moduleName}`Int` to {moduleName}`Nat` is a poor idea because {moduleName}`Nat` does not contain the negative numbers.
Similarly, a coercion from non-empty lists to ordinary lists makes sense because the {moduleName}`List` type can represent every non-empty list:
```anchor CoeNEList
instance : Coe (NonEmptyList α) (List α) where
coe
| { head := x, tail := xs } => x :: xs
```
This allows non-empty lists to be used with the entire {moduleName}`List` API.
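For example (an illustrative sketch assuming the {lit}`NonEmptyList` structure from earlier chapters), a non-empty list can be passed to {lit}`List.length` without any explicit conversion:

```lean
def lengthNE (xs : NonEmptyList α) : Nat :=
  List.length xs  -- the coercion converts `xs` to a `List α`
```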
On the other hand, it is impossible to write an instance of {anchorTerm coeNope}`Coe (List α) (NonEmptyList α)`, because there's no non-empty list that can represent the empty list.
This limitation can be worked around by using another version of coercions, which are called _dependent coercions_.
Dependent coercions can be used when the ability to coerce from one type to another depends on which particular value is being coerced.
Just as the {anchorName OfNat}`OfNat` type class takes the particular {moduleName}`Nat` being overloaded as a parameter, dependent coercion takes the value being coerced as a parameter:
```anchor CoeDep
class CoeDep (α : Type) (x : α) (β : Type) where
coe : β
```
This is a chance to select only certain values, either by imposing further type class constraints on the value or by writing certain constructors directly.
For example, any {moduleName}`List` that is not actually empty can be coerced to a {moduleName}`NonEmptyList`:
```anchor CoeDepListNEList
instance : CoeDep (List α) (x :: xs) (NonEmptyList α) where
coe := { head := x, tail := xs }
```
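With this instance, a list literal that is syntactically non-empty can be used where a {lit}`NonEmptyList` is expected (an illustrative sketch; the name {lit}`oneTwoThree` is hypothetical):

```lean
-- `[1, 2, 3]` elaborates to `1 :: [2, 3]`, which matches the
-- `x :: xs` pattern in the `CoeDep` instance.
def oneTwoThree : NonEmptyList Nat := [1, 2, 3]
```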
# Coercing to Types
%%%
tag := "CoeSort"
%%%
In mathematics, it is common to have a concept that consists of a set equipped with additional structure.
For example, a monoid is some set $`S`, an element $`s` of $`S`, and an associative binary operator on $`S`, such that $`s` is neutral on the left and right of the operator.
$`S` is referred to as the “carrier set” of the monoid.
The natural numbers with zero and addition form a monoid, because addition is associative and adding zero to any number is the identity.
Similarly, the natural numbers with one and multiplication also form a monoid.
Monoids are also widely used in functional programming: lists, the empty list, and the append operator form a monoid, as do strings, the empty string, and string append:
```anchor Monoid
structure Monoid where
Carrier : Type
neutral : Carrier
op : Carrier → Carrier → Carrier
def natMulMonoid : Monoid :=
{ Carrier := Nat, neutral := 1, op := (· * ·) }
def natAddMonoid : Monoid :=
{ Carrier := Nat, neutral := 0, op := (· + ·) }
def stringMonoid : Monoid :=
{ Carrier := String, neutral := "", op := String.append }
def listMonoid (α : Type) : Monoid :=
{ Carrier := List α, neutral := [], op := List.append }
```
Given a monoid, it is possible to write the {anchorName firstFoldMap}`foldMap` function that, in a single pass, transforms the entries in a list into a monoid's carrier set and then combines them using the monoid's operator.
Because monoids have a neutral element, there is a natural result to return when the list is empty, and because the operator is associative, clients of the function don't have to care whether the recursive function combines elements from left to right or from right to left.
```anchor firstFoldMap
def foldMap (M : Monoid) (f : α → M.Carrier) (xs : List α) : M.Carrier :=
let rec go (soFar : M.Carrier) : List α → M.Carrier
| [] => soFar
| y :: ys => go (M.op soFar (f y)) ys
go M.neutral xs
```
Even though a monoid consists of three separate pieces of information, it is common to just refer to the monoid's name in order to refer to its set.
Instead of saying “Let _A_ be a monoid and let _x_ and _y_ be elements of its carrier set”, it is common to say “Let _A_ be a monoid and let _x_ and _y_ be elements of _A_”.
This practice can be encoded in Lean by defining a new kind of coercion, from the monoid to its carrier set.
The {anchorName CoeMonoid}`CoeSort` class is just like the {anchorName CoePosNat}`Coe` class, with the exception that the target of the coercion must be a _sort_, namely {anchorTerm chapterIntro}`Type` or {anchorTerm chapterIntro}`Prop`.
The term _sort_ in Lean refers to these types that classify other types—{anchorTerm Coe}`Type` classifies types that themselves classify data, and {anchorTerm chapterIntro}`Prop` classifies propositions that themselves classify evidence of their truth.
Just as {anchorName CoePosNat}`Coe` is checked when a type mismatch occurs, {anchorName CoeMonoid}`CoeSort` is used when something other than a sort is provided in a context where a sort would be expected.
The coercion from a monoid into its carrier set extracts the carrier:
```anchor CoeMonoid
instance : CoeSort Monoid Type where
coe m := m.Carrier
```
With this coercion, the type signatures become less bureaucratic:
```anchor foldMap
def foldMap (M : Monoid) (f : α → M) (xs : List α) : M :=
let rec go (soFar : M) : List α → M
| [] => soFar
| y :: ys => go (M.op soFar (f y)) ys
go M.neutral xs
```
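With the coercion in place, {lit}`foldMap` can be applied using the monoids defined above (an illustrative sketch; the name {lit}`sumList` is hypothetical):

```lean
def sumList (xs : List Nat) : Nat :=
  foldMap natAddMonoid id xs  -- `natAddMonoid.Carrier` unfolds to `Nat`
```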
Another useful application of {anchorName CoeMonoid}`CoeSort` bridges the gap between {anchorName types}`Bool` and {anchorTerm chapterIntro}`Prop`.
As discussed in {ref "equality-and-ordering"}[the section on ordering and equality], Lean's {kw}`if` expression expects the condition to be a decidable proposition rather than a {anchorName types}`Bool`.
Programs typically need to be able to branch based on Boolean values, however.
Rather than have two kinds of {kw}`if` expression, the Lean standard library defines a coercion from {anchorName types}`Bool` to the proposition that the {anchorName types}`Bool` in question is equal to {anchorName types}`true`:
```anchor CoeBoolProp
instance : CoeSort Bool Prop where
coe b := b = true
```
In this case, the sort in question is {anchorTerm chapterIntro}`Prop` rather than {anchorTerm chapterIntro}`Type`.
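With this coercion in place, a {anchorName types}`Bool` can be used directly as the condition of an {kw}`if` expression (an illustrative sketch):

```lean
-- The value `b` is coerced to the proposition `b = true`,
-- which is decidable, so `if` accepts it.
def describe (b : Bool) : String :=
  if b then "yes" else "no"
```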
# Coercing to Functions
%%%
tag := "CoeFun"
%%%
Many datatypes that occur regularly in programming consist of a function along with some extra information about it.
For example, a function might be accompanied by a name to show in logs or by some configuration data.
Additionally, putting a type in a field of a structure, similarly to the {anchorName Monoid}`Monoid` example, can make sense in contexts where there is more than one way to implement an operation and more manual control is needed than type classes would allow.
For example, the specific details of values emitted by a JSON serializer may be important because another application expects a particular format.
Sometimes, the function itself may be derivable from just the configuration data.
A type class called {anchorName CoeFun}`CoeFun` can transform values from non-function types to function types.
{anchorName CoeFun}`CoeFun` has two parameters: the first is the type whose values should be transformed into functions, and the second is an output parameter that determines exactly which function type is being targeted.
```anchor CoeFun
class CoeFun (α : Type) (makeFunctionType : outParam (α → Type)) where
coe : (x : α) → makeFunctionType x
```
The second parameter is itself a function that computes a type.
In Lean, types are first-class and can be passed to functions or returned from them, just like anything else.
For example, a function that adds a constant amount to its argument can be represented as a wrapper around the amount to add, rather than by defining an actual function:
```anchor Adder
structure Adder where
howMuch : Nat
```
A function that adds five to its argument has a {anchorTerm add5}`5` in the {anchorName Adder}`howMuch` field:
```anchor add5
def add5 : Adder := ⟨5⟩
```
This {anchorName Adder}`Adder` type is not a function, and applying it to an argument results in an error:
```anchor add5notfun
#eval add5 3
```
```anchorError add5notfun
Function expected at
add5
but this term has type
Adder
Note: Expected a function because this term is being applied to the argument
3
```
Defining a {anchorName CoeFunAdder}`CoeFun` instance causes Lean to transform the adder into a function with type {anchorTerm CoeFunAdder}`Nat → Nat`:
```anchor CoeFunAdder
instance : CoeFun Adder (fun _ => Nat → Nat) where
coe a := (· + a.howMuch)
```
```anchor add53
#eval add5 3
```
```anchorInfo add53
8
```
Because all {anchorName CoeFunAdder}`Adder`s should be transformed into {anchorTerm CoeFunAdder}`Nat → Nat` functions, the argument to {anchorName CoeFunAdder}`CoeFun`'s second parameter was ignored.
:::paragraph
When the value itself is needed to determine the right function type, then {anchorName CoeFunAdder}`CoeFun`'s second parameter is no longer ignored.
For example, given the following representation of JSON values:
```anchor JSON
inductive JSON where
| true : JSON
| false : JSON
| null : JSON
| string : String → JSON
| number : Float → JSON
| object : List (String × JSON) → JSON
| array : List JSON → JSON
```
a JSON serializer is a structure that tracks the type it knows how to serialize along with the serialization code itself:
```anchor Serializer
structure Serializer where
Contents : Type
serialize : Contents → JSON
```
A serializer for strings need only wrap the provided string in the {anchorName StrSer}`JSON.string` constructor:
```anchor StrSer
def Str : Serializer :=
{ Contents := String,
serialize := JSON.string
}
```
:::
:::paragraph
Viewing JSON serializers as functions that serialize their argument requires extracting the inner type of serializable data:
```anchor CoeFunSer
instance : CoeFun Serializer (fun s => s.Contents → JSON) where
coe s := s.serialize
```
Given this instance, a serializer can be applied directly to an argument:
```anchor buildResponse
def buildResponse (title : String) (R : Serializer)
(record : R.Contents) : JSON :=
JSON.object [
("title", JSON.string title),
("status", JSON.number 200),
("record", R record)
]
```
The serializer can be passed directly to {anchorName buildResponseOut}`buildResponse`:
```anchor buildResponseOut
#eval buildResponse "Functional Programming in Lean" Str "Programming is fun!"
```
```anchorInfo buildResponseOut
JSON.object
[("title", JSON.string "Functional Programming in Lean"),
("status", JSON.number 200.000000),
("record", JSON.string "Programming is fun!")]
```
:::
## Aside: JSON as a String
%%%
tag := "json-string"
%%%
It can be a bit difficult to understand JSON when encoded as Lean objects.
To help make sure that the serialized response was what was expected, it can be convenient to write a simple converter from {anchorName JSON}`JSON` to {anchorName dropDecimals}`String`.
The first step is to simplify the display of numbers.
{anchorName JSON}`JSON` doesn't distinguish between integers and floating point numbers, and the type {anchorName JSON}`Float` is used to represent both.
In Lean, {anchorName chapterIntro}`Float.toString` includes a number of trailing zeros:
```anchor fiveZeros
#eval (5 : Float).toString
```
```anchorInfo fiveZeros
"5.000000"
```
The solution is to write a little function that cleans up the presentation by dropping all trailing zeros, followed by a trailing decimal point:
```anchor dropDecimals
def dropDecimals (numString : String) : String :=
if numString.contains '.' then
let noTrailingZeros := numString.dropRightWhile (· == '0')
noTrailingZeros.dropRightWhile (· == '.')
else numString
```
With this definition, {anchorTerm dropDecimalExample}`dropDecimals (5 : Float).toString` yields {anchorTerm dropDecimalExample}`5`, and {anchorTerm dropDecimalExample2}`dropDecimals (5.2 : Float).toString` yields {anchorTerm dropDecimalExample2}`5.2`.
The next step is to define a helper function to append a list of strings with a separator in between them:
```anchor Stringseparate
def String.separate (sep : String) (strings : List String) : String :=
match strings with
| [] => ""
| x :: xs => String.join (x :: xs.map (sep ++ ·))
```
This function is useful to account for comma-separated elements in JSON arrays and objects.
{anchorTerm sep2ex}`", ".separate ["1", "2"]` yields {anchorInfo sep2ex}`"1, 2"`, {anchorTerm sep1ex}`", ".separate ["1"]` yields {anchorInfo sep1ex}`"1"`, and {anchorTerm sep0ex}`", ".separate []` yields {anchorInfo sep0ex}`""`.
In the Lean standard library, this function is called {anchorName chapterIntro}`String.intercalate`.
Finally, a string escaping procedure is needed for JSON strings, so that the Lean string containing {anchorTerm chapterIntro}`"Hello!"` can be output as {anchorTerm escapeQuotes}`"\"Hello!\""`.
Fortunately, the Lean compiler contains an internal function for escaping JSON strings already, called {anchorName escapeQuotes}`Lean.Json.escape`.
To access this function, add {lit}`import Lean` to the beginning of your file.
The function that emits a string from a {anchorName JSONasString}`JSON` value is declared {kw}`partial` because Lean cannot see that it terminates.
This is because recursive calls to {anchorName JSONasString}`asString` occur in functions that are being applied by {anchorName chapterIntro}`List.map`, and this pattern of recursion is complicated enough that Lean cannot see that the recursive calls are actually being performed on smaller values.
In an application that just needs to produce JSON strings and doesn't need to mathematically reason about the process, having the function be {kw}`partial` is not likely to cause problems.
```anchor JSONasString
partial def JSON.asString (val : JSON) : String :=
match val with
| true => "true"
| false => "false"
| null => "null"
| string s => "\"" ++ Lean.Json.escape s ++ "\""
| number n => dropDecimals n.toString
| object members =>
let memberToString mem :=
"\"" ++ Lean.Json.escape mem.fst ++ "\": " ++ asString mem.snd
"{" ++ ", ".separate (members.map memberToString) ++ "}"
| array elements =>
"[" ++ ", ".separate (elements.map asString) ++ "]"
```
With this definition, the output of serialization is easier to read:
```anchor buildResponseStr
#eval (buildResponse "Functional Programming in Lean" Str "Programming is fun!").asString
```
```anchorInfo buildResponseStr
"{\"title\": \"Functional Programming in Lean\", \"status\": 200, \"record\": \"Programming is fun!\"}"
```
# Messages You May Meet
%%%
tag := "coercion-messages"
%%%
Natural number literals are overloaded with the {anchorName OfNat}`OfNat` type class.
Because coercions fire in cases where types don't match, rather than in cases of missing instances, a missing {anchorName OfNat}`OfNat` instance for a type does not cause a coercion from {moduleName}`Nat` to be applied:
```anchor ofNatBeforeCoe
def perhapsPerhapsPerhapsNat : Option (Option (Option Nat)) :=
392
```
```anchorError ofNatBeforeCoe
failed to synthesize
OfNat (Option (Option (Option Nat))) 392
numerals are polymorphic in Lean, but the numeral `392` cannot be used in a context where the expected type is
Option (Option (Option Nat))
due to the absence of the instance above
Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
# Design Considerations
%%%
tag := "coercion-design-considerations"
%%%
Coercions are a powerful tool that should be used responsibly.
On the one hand, they can allow an API to naturally follow the everyday rules of the domain being modeled.
This can be the difference between a bureaucratic mess of manual conversion functions and a clear program.
As Abelson and Sussman wrote in the preface to _Structure and Interpretation of Computer Programs_ (MIT Press, 1996),
> Programs must be written for people to read, and only incidentally for machines to execute.
Coercions, used wisely, are a valuable means of achieving readable code that can serve as the basis for communication with domain experts.
APIs that rely heavily on coercions have a number of important limitations, however.
Think carefully about these limitations before using coercions in your own libraries.
First off, coercions are only applied in contexts where enough type information is available for Lean to know all of the types involved, because there are no output parameters in the coercion type classes. This means that a return type annotation on a function can be the difference between a type error and a successfully applied coercion.
For example, the coercion from non-empty lists to lists makes the following program work:
```anchor lastSpiderA
def lastSpider : Option String :=
List.getLast? idahoSpiders
```
On the other hand, if the type annotation is omitted, then the result type is unknown, so Lean is unable to find the coercion:
```anchor lastSpiderB
def lastSpider :=
List.getLast? idahoSpiders
```
```anchorError lastSpiderB
Application type mismatch: The argument
idahoSpiders
has type
NonEmptyList String
but is expected to have type
List ?m.3
in the application
List.getLast? idahoSpiders
```
More generally, when a coercion is not applied for some reason, the user receives the original type error, which can make it difficult to debug chains of coercions.
Finally, coercions are not applied in the context of field accessor notation.
This means that there is still an important difference between expressions that need to be coerced and those that don't, and this difference is visible to users of your API.
-- fp-lean/book/FPLean/TypeClasses/OutParams.lean
import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Classes"
set_option pp.rawOnError true
#doc (Manual) "Controlling Instance Search" =>
%%%
tag := "out-params"
%%%
An instance of the {moduleName}`Add` class is sufficient to allow two expressions with type {moduleName}`Pos` to be conveniently added, producing another {moduleName}`Pos`.
However, in many cases, it can be useful to be more flexible and allow _heterogeneous_ operator overloading, where the arguments may have different types.
For example, adding a {moduleName}`Nat` to a {moduleName}`Pos` or a {moduleName}`Pos` to a {moduleName}`Nat` will always yield a {moduleName}`Pos`:
```anchor addNatPos
def addNatPos : Nat → Pos → Pos
| 0, p => p
| n + 1, p => Pos.succ (addNatPos n p)
def addPosNat : Pos → Nat → Pos
| p, 0 => p
| p, n + 1 => Pos.succ (addPosNat p n)
```
These functions allow natural numbers to be added to positive numbers, but they cannot be used with the {moduleName}`Add` type class, which expects both arguments to {moduleName}`add` to have the same type.
# Heterogeneous Overloadings
%%%
tag := "heterogeneous-operators"
%%%
As mentioned in the section on {ref "overloaded-addition"}[overloaded addition], Lean provides a type class called {anchorName chapterIntro}`HAdd` for overloading addition heterogeneously.
The {anchorName chapterIntro}`HAdd` class takes three type parameters: the two argument types and the return type.
Instances of {anchorTerm haddInsts}`HAdd Nat Pos Pos` and {anchorTerm haddInsts}`HAdd Pos Nat Pos` allow ordinary addition notation to be used to mix the types:
```anchor haddInsts
instance : HAdd Nat Pos Pos where
hAdd := addNatPos
instance : HAdd Pos Nat Pos where
hAdd := addPosNat
```
Given the above two instances, the following examples work:
```anchor posNatEx
#eval (3 : Pos) + (5 : Nat)
```
```anchorInfo posNatEx
8
```
```anchor natPosEx
#eval (3 : Nat) + (5 : Pos)
```
```anchorInfo natPosEx
8
```
:::paragraph
The definition of the {anchorName chapterIntro}`HAdd` type class is very much like the following definition of {moduleName}`HPlus` with the corresponding instances:
```anchor HPlus
class HPlus (α : Type) (β : Type) (γ : Type) where
hPlus : α → β → γ
```
```anchor HPlusInstances
instance : HPlus Nat Pos Pos where
hPlus := addNatPos
instance : HPlus Pos Nat Pos where
hPlus := addPosNat
```
However, instances of {moduleName}`HPlus` are significantly less useful than instances of {anchorName chapterIntro}`HAdd`.
When attempting to use these instances with {kw}`#eval`, an error occurs:
```anchor hPlusOops
#eval toString (HPlus.hPlus (3 : Pos) (5 : Nat))
```
```anchorError hPlusOops
typeclass instance problem is stuck
HPlus Pos Nat ?m.6
Note: Lean will not try to resolve this typeclass instance problem because the third type argument to `HPlus` is a metavariable. This argument must be fully determined before Lean will try to resolve the typeclass.
Hint: Adding type annotations and supplying implicit arguments to functions can give Lean more information for typeclass resolution. For example, if you have a variable `x` that you intend to be a `Nat`, but Lean reports it as having an unresolved type like `?m`, replacing `x` with `(x : Nat)` can get typeclass resolution un-stuck.
```
The message indicates that this happens because there is a metavariable in the type, and Lean has no way to solve it.
:::
As discussed in {ref "polymorphism"}[the initial description of polymorphism], metavariables represent unknown parts of a program that could not be inferred.
When an expression is written following {kw}`#eval`, Lean attempts to determine its type automatically.
In this case, it could not.
Because the third type parameter for {anchorName HPlusInstances}`HPlus` was unknown, Lean couldn't carry out type class instance search, but instance search is the only way that Lean could determine the expression's type.
That is, the {anchorTerm HPlusInstances}`HPlus Pos Nat Pos` instance can only apply if the expression should have type {moduleName}`Pos`, but there's nothing in the program other than the instance itself to indicate that it should have this type.
One solution to the problem is to ensure that all three types are available by adding a type annotation to the whole expression:
```anchor hPlusLotsaTypes
#eval (HPlus.hPlus (3 : Pos) (5 : Nat) : Pos)
```
```anchorInfo hPlusLotsaTypes
8
```
However, this solution is not very convenient for users of the positive number library.
# Output Parameters
%%%
tag := "output-parameters"
%%%
This problem can also be solved by declaring {anchorName HPlus}`γ` to be an _output parameter_.
Most type class parameters are inputs to the search algorithm: they are used to select an instance.
For example, in an {moduleName}`OfNat` instance, both the type and the natural number are used to select a particular interpretation of a natural number literal.
However, in some cases, it can be convenient to start the search process even when some of the type parameters are not yet known, and use the instances that are discovered in the search to determine values for metavariables.
The parameters that aren't needed to start instance search are outputs of the process, which is declared with the {moduleName}`outParam` modifier:
```anchor HPlusOut
class HPlus (α : Type) (β : Type) (γ : outParam Type) where
hPlus : α → β → γ
```
With this output parameter, type class instance search is able to select an instance without knowing {anchorName HPlusOut}`γ` in advance.
For instance:
```anchor hPlusWorks
#eval HPlus.hPlus (3 : Pos) (5 : Nat)
```
```anchorInfo hPlusWorks
8
```
It might be helpful to think of output parameters as defining a kind of function.
Any given instance of a type class that has one or more output parameters provides Lean with instructions for determining the outputs from the inputs.
The process of searching for an instance, possibly recursively, ends up being more powerful than mere overloading.
Output parameters can determine other types in the program, and instance search can assemble a collection of underlying instances into a program that has this type.
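As a sketch of this assembly process, an instance can compute an {lit}`Option` output type from an underlying instance (assuming the {lit}`HPlus` class with its output parameter from above):

```
-- A sketch: the output type Option γ is computed from the
-- instance for the underlying types α, β, and γ
instance [HPlus α β γ] : HPlus α (Option β) (Option γ) where
  hPlus x y := y.map (HPlus.hPlus x)
```

With this instance, {lit}`HPlus.hPlus (3 : Pos) (some 5)` can be elaborated even though its type {lit}`Option Pos` appears nowhere in the program.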
# Default Instances
%%%
tag := "default-instances"
%%%
Deciding whether a parameter is an input or an output controls the circumstances under which Lean will initiate type class search.
In particular, type class search does not occur until all inputs are known.
However, in some cases, output parameters are not enough, and instance search should also occur when some inputs are unknown.
This is a bit like default values for optional function arguments in Python or Kotlin, except default _types_ are being selected.
_Default instances_ are instances that are available for instance search _even when not all their inputs are known_.
When one of these instances can be used, it will be used.
This can cause programs to successfully type check, rather than failing with errors related to unknown types and metavariables.
On the other hand, default instances can make instance selection less predictable.
In particular, if an undesired default instance is selected, then an expression may have a different type than expected, which can cause confusing type errors to occur elsewhere in the program.
Be selective about where default instances are used!
One example of where default instances can be useful is an instance of {anchorName HPlusOut}`HPlus` that can be derived from an {moduleName}`Add` instance.
In other words, ordinary addition is a special case of heterogeneous addition in which all three types happen to be the same.
This can be implemented using the following instance:
```anchor notDefaultAdd
instance [Add α] : HPlus α α α where
hPlus := Add.add
```
With this instance, {anchorName notDefaultAdd}`hPlus` can be used for any addable type, like {moduleName}`Nat`:
```anchor hPlusNatNat
#eval HPlus.hPlus (3 : Nat) (5 : Nat)
```
```anchorInfo hPlusNatNat
8
```
However, this instance will only be used in situations where the types of both arguments are known.
For example,
```anchor plusFiveThree
#check HPlus.hPlus (5 : Nat) (3 : Nat)
```
yields the type
```anchorInfo plusFiveThree
HPlus.hPlus 5 3 : Nat
```
as expected, but
```anchor plusFiveMeta
#check HPlus.hPlus (5 : Nat)
```
yields a type that contains two metavariables, one for the remaining argument and one for the return type:
```anchorInfo plusFiveMeta
HPlus.hPlus 5 : ?m.15752 → ?m.15754
```
In the vast majority of cases, when someone supplies one argument to addition, the other argument will have the same type.
To make this instance into a default instance, apply the {anchorTerm defaultAdd}`default_instance` attribute:
```anchor defaultAdd
@[default_instance]
instance [Add α] : HPlus α α α where
hPlus := Add.add
```
With this default instance, the example has a more useful type:
```anchor plusFive
#check HPlus.hPlus (5 : Nat)
```
yields
```anchorInfo plusFive
HPlus.hPlus 5 : Nat → Nat
```
Each operator that exists in both heterogeneous and homogeneous versions follows the same pattern: a default instance allows the homogeneous version to be used in contexts where the heterogeneous version is expected.
The infix operator is replaced with a call to the heterogeneous version, and the homogeneous default instance is selected when possible.
Similarly, simply writing {anchorTerm fiveType}`5` gives a {anchorTerm fiveType}`Nat` rather than a type with a metavariable that is waiting for more information in order to select an {moduleName}`OfNat` instance.
This is because the {moduleName}`OfNat` instance for {moduleName}`Nat` is a default instance.
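This default can be observed directly; a bare literal elaborates to {lit}`Nat` when nothing else constrains its type:

```
#check 5
-- 5 : Nat
```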
Default instances can also be assigned _priorities_ that affect which will be chosen in situations where more than one might apply.
For more information on default instance priorities, please consult the Lean manual.
# Exercises
%%%
tag := "out-params-exercises"
%%%
Define an instance of {anchorTerm MulPPoint}`HMul (PPoint α) α (PPoint α)` that multiplies both projections by the scalar.
It should work for any type {anchorName MulPPoint}`α` for which there is a {anchorTerm MulPPoint}`Mul α` instance.
For example,
```anchor HMulPPoint
#eval {x := 2.5, y := 3.7 : PPoint Float} * 2.0
```
should yield
```anchorInfo HMulPPoint
{ x := 5.000000, y := 7.400000 }
```
-- fp-lean/book/FPLean/TypeClasses/Summary.lean
import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Classes"
set_option pp.rawOnError true
#doc (Manual) "Summary" =>
%%%
tag := "type-classes-summary"
%%%
# Type Classes and Overloading
%%%
tag := none
%%%
Type classes are Lean's mechanism for overloading functions and operators.
A polymorphic function can be used with multiple types, but it behaves in the same manner no matter which type it is used with.
For example, a polymorphic function that appends two lists can be used no matter the type of the entries in the list, but it is unable to have different behavior depending on which particular type is found.
An operation that is overloaded with type classes, on the other hand, can also be used with multiple types.
However, each type requires its own implementation of the overloaded operation.
This means that the behavior can vary based on which type is provided.
A _type class_ has a name, parameters, and a body that consists of a number of names with types.
The name is a way to refer to the overloaded operations, the parameters determine which aspects of the definitions can be overloaded, and the body provides the names and type signatures of the overloadable operations.
Each overloadable operation is called a {deftech}_method_ of the type class.
Type classes may provide default implementations of some methods in terms of the others, freeing implementors from defining each overload by hand when it is not needed.
An {deftech}_instance_ of a type class provides implementations of the methods for given parameters.
Instances may be polymorphic, in which case they can work for a variety of parameters, and they may optionally provide more specific implementations of default methods in cases where a more efficient version exists for some particular type.
Type class parameters are either {deftech}_input parameters_ (the default), or {deftech}_output parameters_ (indicated by an {moduleName}`outParam` modifier).
Lean will not begin searching for an instance until all input parameters are no longer metavariables, while output parameters may be solved while searching for instances.
Parameters to a type class need not be types—they may also be ordinary values.
The {moduleName}`OfNat` type class, used to overload natural number literals, takes the overloaded {moduleName}`Nat` itself as a parameter, which allows instances to restrict the allowed numbers.
Instances may be marked with a {anchorTerm defaultAdd}`@[default_instance]` attribute.
When an instance is a default instance, it will be chosen as a fallback when Lean would otherwise fail to find an instance due to the presence of metavariables in the type.
# Type Classes for Common Syntax
%%%
tag := none
%%%
Most infix operators in Lean are overloaded with a type class.
For instance, the addition operator corresponds to a type class called {moduleName}`Add`.
Most of these operators have a corresponding heterogeneous version, in which the two arguments need not have the same type.
These heterogeneous operators are overloaded using a version of the class whose name starts with {lit}`H`, such as {moduleName}`HAdd`.
Indexing syntax is overloaded using a type class called {moduleName}`GetElem`, which involves proofs.
{moduleName}`GetElem` has two output parameters, which are the type of elements to be extracted from the collection and a function that can be used to determine what counts as evidence that the index value is in bounds for the collection.
This evidence is described by a proposition, and Lean attempts to prove this proposition when array indexing is used.
When Lean is unable to check that list or array access operations are in bounds at compile time, the check can be deferred to run time by appending a {lit}`?` to the indexing syntax.
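For example, indexing with an arbitrary position is not provably in bounds, but the {lit}`?` variant defers the check to run time and returns an {lit}`Option` (a sketch; {lit}`third?` is a hypothetical helper):

```
def third? (xs : List α) : Option α := xs[2]?

#eval third? [1, 2, 3]  -- some 3
#eval third? [1]        -- none
```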
# Functors
%%%
tag := none
%%%
A functor is a polymorphic type that supports a mapping operation.
This mapping operation transforms all elements “in place”, changing no other structure.
For instance, lists are functors and the mapping operation may neither drop, duplicate, nor mix up entries in the list.
While functors are defined by having {anchorName FunctorDef}`map`, the {anchorName FunctorDef}`Functor` type class in Lean contains an additional default method that maps the constant function over a value, replacing all values whose type is given by the polymorphic type variable with the same new value.
For some functors, this can be done more efficiently than traversing the entire structure.
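Both operations can be seen on lists (a sketch; {lit}`<$>` is the infix form of {lit}`Functor.map`):

```
#eval (· * 2) <$> [1, 2, 3]         -- [2, 4, 6]
#eval Functor.mapConst 0 [1, 2, 3]  -- [0, 0, 0]
```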
# Deriving Instances
%%%
tag := none
%%%
Many type classes have very standard implementations.
For instance, the Boolean equality class {moduleName}`BEq` is usually implemented by first checking whether both arguments are built with the same constructor, and then checking whether all their arguments are equal.
Instances for these classes can be created _automatically_.
When defining an inductive type or a structure, a {kw}`deriving` clause at the end of the declaration will cause instances to be created automatically.
Additionally, the {kw}`deriving instance`{lit}` ... `{kw}`for`{lit}` ...` command can be used outside of the definition of a datatype to cause an instance to be generated.
Because each class for which instances can be derived requires special handling, not all classes are derivable.
# Coercions
%%%
tag := none
%%%
Coercions allow Lean to recover from what would normally be a compile-time error by inserting a call to a function that transforms data from one type to another.
For example, the coercion from any type {anchorName CoeOption}`α` to the type {anchorTerm CoeOption}`Option α` allows values to be written directly, rather than with the {anchorName CoeOption}`some` constructor, making {anchorName CoeOption}`Option` work more like nullable types from object-oriented languages.
There are multiple kinds of coercion.
They can recover from different kinds of errors, and they are represented by their own type classes.
The {anchorName CoeOption}`Coe` class is used to recover from type errors.
When Lean has an expression of type {anchorName Coe}`α` in a context that expects something with type {anchorName Coe}`β`, Lean first attempts to string together a chain of coercions that can transform {anchorName Coe}`α`s into {anchorName Coe}`β`s, and only displays the error when this cannot be done.
The {moduleName}`CoeDep` class takes the specific value being coerced as an extra parameter, allowing either further type class search to be done on the value or allowing constructors to be used in the instance to limit the scope of the conversion.
The {moduleName}`CoeFun` class intercepts what would otherwise be a “not a function” error when compiling a function application, and allows the value in the function position to be transformed into an actual function if possible.
-- fp-lean/book/FPLean/TypeClasses/Pos.lean
import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Classes"
set_option pp.rawOnError true
#doc (Manual) "Positive Numbers" =>
%%%
tag := "positive-numbers"
%%%
In some applications, only positive numbers make sense.
For example, compilers and interpreters typically use one-indexed line and column numbers for source positions, and a datatype that represents only non-empty lists will never report a length of zero.
Rather than relying on natural numbers, and littering the code with assertions that the number is not zero, it can be useful to design a datatype that represents only positive numbers.
One way to represent positive numbers is very similar to {anchorTerm chapterIntro}`Nat`, except with {anchorTerm Pos}`one` as the base case instead of {anchorTerm Nat.zero}`zero`:
```anchor Pos
inductive Pos : Type where
| one : Pos
| succ : Pos → Pos
```
This datatype represents exactly the intended set of values, but it is not very convenient to use.
For example, numeric literals are rejected:
```anchor sevenOops
def seven : Pos := 7
```
```anchorError sevenOops
failed to synthesize
OfNat Pos 7
numerals are polymorphic in Lean, but the numeral `7` cannot be used in a context where the expected type is
Pos
due to the absence of the instance above
Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
Instead, the constructors must be used directly:
```anchor seven
def seven : Pos :=
Pos.succ (Pos.succ (Pos.succ (Pos.succ (Pos.succ (Pos.succ Pos.one)))))
```
Similarly, addition and multiplication are not easy to use:
```anchor fourteenOops
def fourteen : Pos := seven + seven
```
```anchorError fourteenOops
failed to synthesize
HAdd Pos Pos ?m.3
Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
```anchor fortyNineOops
def fortyNine : Pos := seven * seven
```
```anchorError fortyNineOops
failed to synthesize
HMul Pos Pos ?m.3
Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
Each of these error messages begins with {lit}`failed to synthesize`.
This indicates that the error is due to an overloaded operation that has not been implemented, and it describes the type class that must be implemented.
# Classes and Instances
%%%
tag := "classes-and-instances"
%%%
A type class consists of a name, some parameters, and a collection of {deftech}_methods_.
The parameters describe the types for which overloadable operations are being defined, and the methods are the names and type signatures of the overloadable operations.
Once again, there is a terminology clash with object-oriented languages.
In object-oriented programming, a method is essentially a function that is connected to a particular object in memory, with special access to the object's private state.
Objects are interacted with via their methods.
In Lean, the term “method” refers to an operation that has been declared to be overloadable, with no special connection to objects or values or private fields.
One way to overload addition is to define a type class named {anchorName Plus}`Plus`, with an addition method named {anchorName Plus}`plus`.
Once an instance of {anchorTerm Plus}`Plus` for {anchorTerm chapterIntro}`Nat` has been defined, it becomes possible to add two {anchorTerm chapterIntro}`Nat`s using {anchorName plusNatFiveThree}`Plus.plus`:
```anchor plusNatFiveThree
#eval Plus.plus 5 3
```
```anchorInfo plusNatFiveThree
8
```
Adding more instances allows {anchorName plusNatFiveThree}`Plus.plus` to take more types of arguments.
In the following type class declaration, {anchorName Plus}`Plus` is the name of the class, {anchorTerm Plus}`α : Type` is the only argument, and {anchorTerm Plus}`plus : α → α → α` is the only method:
```anchor Plus
class Plus (α : Type) where
plus : α → α → α
```
This declaration says that there is a type class {anchorName Plus}`Plus` that overloads operations with respect to a type {anchorName Plus}`α`.
In particular, there is one overloaded operation called {anchorName Plus}`plus` that takes two {anchorName Plus}`α`s and returns an {anchorName Plus}`α`.
Type classes are first class, just as types are first class.
In particular, a type class is another kind of type.
The type of {anchorTerm PlusType}`Plus` is {anchorTerm PlusType}`Type → Type`, because it takes a type as an argument ({anchorName Plus}`α`) and results in a new type that describes the overloading of {anchorName Plus}`Plus`'s operation for {anchorName Plus}`α`.
To overload {anchorName PlusNat}`plus` for a particular type, write an instance:
```anchor PlusNat
instance : Plus Nat where
plus := Nat.add
```
The colon after {anchorTerm PlusNat}`instance` indicates that {anchorTerm PlusNat}`Plus Nat` is indeed a type.
Each method of class {anchorName Plus}`Plus` should be assigned a value using {anchorTerm PlusNat}`:=`.
In this case, there is only one method: {anchorName PlusNat}`plus`.
By default, type class methods are defined in a namespace with the same name as the type class.
It can be convenient to {anchorTerm openPlus}`open` the namespace so that users don't need to type the name of the class first.
Parentheses in an {kw}`open` command indicate that only the indicated names from the namespace are to be made accessible:
```anchor openPlus
open Plus (plus)
```
```anchor plusNatFiveThreeAgain
#eval plus 5 3
```
```anchorInfo plusNatFiveThreeAgain
8
```
Defining an addition function for {anchorName PlusPos}`Pos` and an instance of {anchorTerm PlusPos}`Plus Pos` allows {anchorName PlusPos}`plus` to be used to add both {anchorName PlusPos}`Pos` and {anchorTerm chapterIntro}`Nat` values:
```anchor PlusPos
def Pos.plus : Pos → Pos → Pos
| Pos.one, k => Pos.succ k
| Pos.succ n, k => Pos.succ (n.plus k)
instance : Plus Pos where
plus := Pos.plus
def fourteen : Pos := plus seven seven
```
Because there is not yet an instance of {anchorTerm PlusFloat}`Plus Float`, attempting to add two floating-point numbers with {anchorName plusFloatFail}`plus` fails with a familiar message:
```anchor plusFloatFail
#eval plus 5.2 917.25861
```
```anchorError plusFloatFail
failed to synthesize
Plus Float
Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
These errors mean that Lean was unable to find an instance for a given type class.
# Overloaded Addition
%%%
tag := "overloaded-addition"
%%%
Lean's built-in addition operator is syntactic sugar for a type class called {anchorName chapterIntro}`HAdd`, which flexibly allows the arguments to addition to have different types.
{anchorName chapterIntro}`HAdd` is short for _heterogeneous addition_.
For example, an {anchorName chapterIntro}`HAdd` instance can be written to allow a {anchorName chapterIntro}`Nat` to be added to a {anchorName fiveZeros}`Float`, resulting in a new {anchorName fiveZeros}`Float`.
When a programmer writes {anchorTerm plusDesugar}`x + y`, it is interpreted as meaning {anchorTerm plusDesugar}`HAdd.hAdd x y`.
While an understanding of the full generality of {anchorName chapterIntro}`HAdd` relies on features that are discussed in {ref "out-params"}[another section in this chapter], there is a simpler type class called {anchorName AddPos}`Add` that does not allow the types of the arguments to be mixed.
The Lean libraries are set up so that an instance of {anchorName AddPos}`Add` will be found when searching for an instance of {anchorName chapterIntro}`HAdd` in which both arguments have the same type.
Defining an instance of {anchorTerm AddPos}`Add Pos` allows {anchorTerm AddPos}`Pos` values to use ordinary addition syntax:
```anchor AddPos
instance : Add Pos where
add := Pos.plus
```
```anchor betterFourteen
def fourteen : Pos := seven + seven
```
# Conversion to Strings
%%%
tag := "conversion-to-strings"
%%%
Another useful built-in class is called {anchorName UglyToStringPos}`ToString`.
Instances of {anchorName UglyToStringPos}`ToString` provide a standard way of converting values from a given type into strings.
For example, a {anchorName UglyToStringPos}`ToString` instance is used when a value occurs in an interpolated string, and it determines how the {anchorName printlnType}`IO.println` function used at the {ref "running-a-program"}[beginning of the description of {anchorName readFile}`IO`] will display a value.
For example, one way to convert a {anchorName Pos}`Pos` into a {anchorName readFile}`String` is to reveal its inner structure.
The function {anchorName posToStringStructure}`posToString` takes a {anchorName posToStringStructure}`Bool` that determines whether to parenthesize uses of {anchorName posToStringStructure}`Pos.succ`, which should be {anchorName CoeBoolProp}`true` in the initial call to the function and {anchorName posToStringStructure}`false` in all recursive calls.
```anchor posToStringStructure
def posToString (atTop : Bool) (p : Pos) : String :=
let paren s := if atTop then s else "(" ++ s ++ ")"
match p with
| Pos.one => "Pos.one"
| Pos.succ n => paren s!"Pos.succ {posToString false n}"
```
Using this function for a {anchorName UglyToStringPos}`ToString` instance:
```anchor UglyToStringPos
instance : ToString Pos where
toString := posToString true
```
results in informative, yet overwhelming, output:
```anchor sevenLong
#eval s!"There are {seven}"
```
```anchorInfo sevenLong
"There are Pos.succ (Pos.succ (Pos.succ (Pos.succ (Pos.succ (Pos.succ Pos.one)))))"
```
On the other hand, every positive number has a corresponding {anchorTerm chapterIntro}`Nat`.
Converting it to a {anchorTerm chapterIntro}`Nat` and then using the {anchorTerm chapterIntro}`ToString Nat` instance (that is, the overloading of {anchorName UglyToStringPos}`ToString` for {anchorTerm chapterIntro}`Nat`) is a quick way to generate much shorter output:
```anchor posToNat
def Pos.toNat : Pos → Nat
| Pos.one => 1
| Pos.succ n => n.toNat + 1
```
```anchor PosToStringNat
instance : ToString Pos where
toString x := toString (x.toNat)
```
```anchor sevenShort
#eval s!"There are {seven}"
```
```anchorInfo sevenShort
"There are 7"
```
When more than one instance is defined, the most recent takes precedence.
Additionally, if a type has a {anchorName UglyToStringPos}`ToString` instance, then it can be used to display the result of {kw}`#eval`, so {anchorTerm sevenEvalStr}`#eval seven` outputs {anchorInfo sevenEvalStr}`7`.
# Overloaded Multiplication
%%%
tag := "overloaded-multiplication"
%%%
For multiplication, there is a type class called {anchorName MulPPoint}`HMul` that allows mixed argument types, just like {anchorName chapterIntro}`HAdd`.
Just as {anchorTerm plusDesugar}`x + y` is interpreted as {anchorTerm plusDesugar}[`HAdd.hAdd x y`], {anchorTerm timesDesugar}`x * y` is interpreted as {anchorTerm timesDesugar}`HMul.hMul x y`.
For the common case of multiplication of two arguments with the same type, a {anchorName PosMul}`Mul` instance suffices.
An instance of {anchorTerm PosMul}`Mul` allows ordinary multiplication syntax to be used with {anchorName PosMul}`Pos`:
```anchor PosMul
def Pos.mul : Pos → Pos → Pos
  | Pos.one, k => k
  | Pos.succ n, k => n.mul k + k

instance : Mul Pos where
  mul := Pos.mul
```
With this instance, multiplication works as expected:
```anchor muls
#eval [seven * Pos.one,
       seven * seven,
       Pos.succ Pos.one * seven]
```
```anchorInfo muls
[7, 49, 14]
```
# Literal Numbers
%%%
tag := "literal-numbers"
%%%
It is quite inconvenient to write out a sequence of constructors for positive numbers.
One way to work around the problem would be to provide a function to convert a {anchorTerm chapterIntro}`Nat` into a {anchorName Pos}`Pos`.
However, this approach has downsides.
First off, because {anchorName PosMul}`Pos` cannot represent {anchorTerm nats}`0`, the resulting function would either convert a {anchorTerm chapterIntro}`Nat` to a bigger number, or it would return {anchorTerm PosStuff}`Option Pos`.
Neither is particularly convenient for users.
Secondly, the need to call the function explicitly would make programs that use positive numbers much less convenient to write than programs that use {anchorTerm chapterIntro}`Nat`.
Having a trade-off between precise types and convenient APIs means that the precise types become less useful.
There are three type classes that are used to overload numeric literals: {anchorName Zero}`Zero`, {anchorName One}`One`, and {anchorName OfNat}`OfNat`.
Because many types have values that are naturally written with {anchorTerm nats}`0`, the {anchorName Zero}`Zero` class allows these specific values to be overridden.
It is defined as follows:
```anchor Zero
class Zero (α : Type) where
  zero : α
```
Because {anchorTerm nats}`0` is not a positive number, there should be no instance of {anchorTerm PosStuff}`Zero Pos`.
Similarly, many types have values that are naturally written with {anchorTerm nats}`1`.
The {anchorName One}`One` class allows these to be overridden:
```anchor One
class One (α : Type) where
  one : α
```
An instance of {anchorTerm OnePos}`One Pos` makes perfect sense:
```anchor OnePos
instance : One Pos where
  one := Pos.one
```
With this instance, {anchorTerm onePos}`1` can be used for {anchorTerm OnePos}`Pos.one`:
```anchor onePos
#eval (1 : Pos)
```
```anchorInfo onePos
1
```
In Lean, natural number literals are interpreted using a type class called {anchorName OfNat}`OfNat`:
```anchor OfNat
class OfNat (α : Type) (_ : Nat) where
  ofNat : α
```
This type class takes two arguments: {anchorTerm OfNat}`α` is the type for which a natural number is overloaded, and the unnamed {anchorTerm chapterIntro}`Nat` argument is the actual literal number that was encountered in the program.
The method {anchorName OfNat}`ofNat` is then used as the value of the numeric literal.
Because the class contains the {anchorTerm chapterIntro}`Nat` argument, it becomes possible to define only instances for those values where the number makes sense.
{anchorTerm OfNat}`OfNat` demonstrates that the arguments to type classes do not need to be types.
Because types in Lean are first-class participants in the language that can be passed as arguments to functions and given definitions with {kw}`def` and {kw}`abbrev`, there is no barrier to using non-type arguments in positions where less flexible languages could not permit them.
This flexibility allows overloaded operations to be provided for particular values as well as particular types.
Additionally, it allows the Lean standard library to arrange for there to be a {anchorTerm ListSumZ}`Zero α` instance whenever there's an {anchorTerm ListSum}`OfNat α 0` instance, and vice versa.
Similarly, an instance of {anchorTerm OneExamples}`One α` implies an instance of {anchorTerm OneExamples}`OfNat α 1`, just as an instance of {anchorTerm OneExamples}`OfNat α 1` implies an instance of {anchorTerm OneExamples}`One α`.
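This interchange can be sketched with a hypothetical {lit}`Score` type (not part of the example project), assuming the bridging instances described above are available:

```lean
structure Score where
  points : Nat
deriving Repr

-- Defining the OfNat instance for the literal 0…
instance : OfNat Score 0 where
  ofNat := ⟨0⟩

-- …also makes Zero.zero available through the bridging instance.
#eval (Zero.zero : Score)
```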
A sum type that represents natural numbers less than four can be defined as follows:
```anchor LT4
inductive LT4 where
  | zero
  | one
  | two
  | three
```
While it would not make sense to allow _any_ literal number to be used for this type, numbers less than four clearly make sense:
```anchor LT4ofNat
instance : OfNat LT4 0 where
  ofNat := LT4.zero

instance : OfNat LT4 1 where
  ofNat := LT4.one

instance : OfNat LT4 2 where
  ofNat := LT4.two

instance : OfNat LT4 3 where
  ofNat := LT4.three
```
With these instances, the following examples work:
```anchor LT4three
#eval (3 : LT4)
```
```anchorInfo LT4three
LT4.three
```
```anchor LT4zero
#eval (0 : LT4)
```
```anchorInfo LT4zero
LT4.zero
```
On the other hand, out-of-bounds literals are still not allowed:
```anchor LT4four
#eval (4 : LT4)
```
```anchorError LT4four
failed to synthesize
  OfNat LT4 4
numerals are polymorphic in Lean, but the numeral `4` cannot be used in a context where the expected type is
  LT4
due to the absence of the instance above
Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
For {anchorName PosMul}`Pos`, the {anchorTerm OfNat}`OfNat` instance should work for _any_ {anchorTerm chapterIntro}`Nat` other than {anchorName PosStuff}`Nat.zero`.
Another way to phrase this is to say that for all natural numbers {anchorTerm posrec}`n`, the instance should work for {anchorTerm posrec}`n + 1`.
Just as names like {anchorTerm posrec}`α` automatically become implicit arguments to functions that Lean fills out on its own, instances can take automatic implicit arguments.
In this instance, the argument {anchorTerm OfNatPos}`n` stands for any {anchorTerm chapterIntro}`Nat`, and the instance is defined for a {anchorTerm chapterIntro}`Nat` that's one greater:
```anchor OfNatPos
instance : OfNat Pos (n + 1) where
  ofNat :=
    let rec natPlusOne : Nat → Pos
      | 0 => Pos.one
      | k + 1 => Pos.succ (natPlusOne k)
    natPlusOne n
```
Because {anchorTerm OfNatPos}`n` stands for a {anchorTerm chapterIntro}`Nat` that's one less than what the user wrote, the helper function {anchorName OfNatPos}`natPlusOne` returns a {anchorName OfNatPos}`Pos` that's one greater than its argument.
This makes it possible to use natural number literals for positive numbers, but not for zero:
```anchor eight
def eight : Pos := 8
```
```anchor zeroBad
def zero : Pos := 0
```
```anchorError zeroBad
failed to synthesize
  OfNat Pos 0
numerals are polymorphic in Lean, but the numeral `0` cannot be used in a context where the expected type is
  Pos
due to the absence of the instance above
Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
# Exercises
%%%
tag := "positive-numbers-exercises"
%%%
## Another Representation
%%%
tag := "positive-numbers-another-representation"
%%%
An alternative way to represent a positive number is as the successor of some {anchorTerm chapterIntro}`Nat`.
Replace the definition of {anchorName PosStuff}`Pos` with a structure whose constructor is named {anchorName AltPos}`succ` that contains a {anchorTerm chapterIntro}`Nat`:
```anchor AltPos
structure Pos where
  succ ::
  pred : Nat
```
Define instances of {moduleName}`Add`, {moduleName}`Mul`, {anchorName UglyToStringPos}`ToString`, and {moduleName}`OfNat` that allow this version of {anchorName AltPos}`Pos` to be used conveniently.
## Even Numbers
%%%
tag := "even-numbers-ex"
%%%
Define a datatype that represents only even numbers. Define instances of {moduleName}`Add`, {moduleName}`Mul`, and {anchorName UglyToStringPos}`ToString` that allow it to be used conveniently.
{moduleName}`OfNat` requires a feature that is introduced in {ref "tc-polymorphism"}[the next section].
## HTTP Requests
%%%
tag := "http-request-ex"
%%%
An HTTP request begins with an HTTP method, such as {lit}`GET` or {lit}`POST`, along with a URI and an HTTP version.
Define an inductive type that represents an interesting subset of the HTTP methods, and a structure that represents HTTP responses.
Responses should have a {anchorName UglyToStringPos}`ToString` instance that makes it possible to debug them.
Use a type class to associate different {moduleName}`IO` actions with each HTTP method, and write a test harness as an {moduleName}`IO` action that calls each method and prints the result. |
fp-lean/book/FPLean/TypeClasses/Polymorphism.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso Code External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Classes"
set_option pp.rawOnError true
#doc (Manual) "Type Classes and Polymorphism" =>
%%%
tag := "tc-polymorphism"
%%%
It can be useful to write functions that work for _any_ overloading of a given function.
For example, {anchorTerm printlnType}`IO.println` works for any type that has an instance of {anchorTerm printlnType}`ToString`.
This is indicated using square brackets around the required instance: the type of {anchorTerm printlnType}`IO.println` is {anchorTerm printlnType}`{α : Type} → [ToString α] → α → IO Unit`.
This type says that {anchorTerm printlnType}`IO.println` accepts an argument of type {anchorTerm printlnType}`α`, which Lean should determine automatically, and that there must be a {anchorTerm printlnType}`ToString` instance available for {anchorTerm printlnType}`α`.
It returns an {anchorTerm printlnType}`IO` action.
# Checking Polymorphic Functions' Types
%%%
tag := "checking-polymorphic-types"
%%%
Checking the type of a function that takes implicit arguments or uses type classes requires the use of some additional syntax.
Simply writing
```anchor printlnMetas
#check (IO.println)
```
yields a type with metavariables:
```anchorInfo printlnMetas
IO.println : ?m.2620 → IO Unit
```
This is because Lean does its best to discover implicit arguments, and the presence of metavariables indicates that it did not yet discover enough type information to do so.
To understand the signature of a function, this feature can be suppressed with an at-sign ({anchorTerm printlnNoMetas}`@`) before the function's name:
```anchor printlnNoMetas
#check @IO.println
```
```anchorInfo printlnNoMetas
@IO.println : {α : Type u_1} → [ToString α] → α → IO Unit
```
There is a {lit}`u_1` after {lit}`Type`, which uses a feature of Lean that has not yet been introduced.
For now, ignore these parameters to {lit}`Type`.
# Defining Polymorphic Functions with Instance Implicits
%%%
tag := "defining-polymorphic-functions-with-instance-implicits"
%%%
:::paragraph
A function that sums all entries in a list needs two instances: {moduleName}`Add` allows the entries to be added, and an {moduleName}`OfNat` instance for {anchorTerm ListSum}`0` provides a sensible value to return for the empty list:
```anchor ListSum
def List.sumOfContents [Add α] [OfNat α 0] : List α → α
  | [] => 0
  | x :: xs => x + xs.sumOfContents
```
This function can be also defined with a {anchorTerm ListSumZ}`Zero α` requirement instead of {anchorTerm ListSum}`OfNat α 0`.
Both are equivalent, but {anchorTerm ListSumZ}`Zero α` can be easier to read:
```anchor ListSumZ
def List.sumOfContents [Add α] [Zero α] : List α → α
  | [] => 0
  | x :: xs => x + xs.sumOfContents
```
:::
:::paragraph
This function can be used for a list of {anchorTerm fourNats}`Nat`s:
```anchor fourNats
def fourNats : List Nat := [1, 2, 3, 4]
```
```anchor fourNatsSum
#eval fourNats.sumOfContents
```
```anchorInfo fourNatsSum
10
```
but not for a list of {anchorTerm fourPos}`Pos` numbers:
```anchor fourPos
def fourPos : List Pos := [1, 2, 3, 4]
```
```anchor fourPosSum
#eval fourPos.sumOfContents
```
```anchorError fourPosSum
failed to synthesize
  Zero Pos
Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
The Lean standard library includes this function, where it is called {moduleName}`List.sum`.
:::
Specifications of required instances in square brackets are called _instance implicits_.
Behind the scenes, every type class defines a structure that has a field for each overloaded operation.
Instances are values of that structure type, with each field containing an implementation.
At a call site, Lean is responsible for finding an instance value to pass for each instance implicit argument.
The most important difference between ordinary implicit arguments and instance implicits is the strategy that Lean uses to find an argument value.
In the case of ordinary implicit arguments, Lean uses a technique called _unification_ to find a single unique argument value that would allow the program to pass the type checker.
This process relies only on the specific types involved in the function's definition and the call site.
For instance implicits, Lean instead consults a built-in table of instance values.
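Because instances are just structure values, one can also be constructed and passed by hand. The following sketch (not from the example project) builds an {lit}`Add String` value directly and applies the method to it explicitly, bypassing the instance table entirely:

```lean
-- An instance is a structure value; this one is built by hand.
def addStrings : Add String := { add := String.append }

-- Passing the instance explicitly instead of letting Lean search for it:
#eval @Add.add String addStrings "Hello, " "world"  -- "Hello, world"
```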
Just as the {anchorTerm OfNatPos}`OfNat` instance for {anchorName OfNatPos}`Pos` took a natural number {anchorName OfNatPos}`n` as an automatic implicit argument, instances may also take instance implicit arguments themselves.
The {ref "polymorphism"}[section on polymorphism] presented a polymorphic point type:
```anchor PPoint
structure PPoint (α : Type) where
  x : α
  y : α
```
Addition of points should add the underlying {anchorName PPoint}`x` and {anchorName PPoint}`y` fields.
Thus, an {anchorName AddPPoint}`Add` instance for {anchorName AddPPoint}`PPoint` requires an {anchorName AddPPoint}`Add` instance for whatever type these fields have.
In other words, the {anchorName AddPPoint}`Add` instance for {anchorName AddPPoint}`PPoint` requires a further {anchorName AddPPoint}`Add` instance for {anchorName AddPPoint}`α`:
```anchor AddPPoint
instance [Add α] : Add (PPoint α) where
  add p1 p2 := { x := p1.x + p2.x, y := p1.y + p2.y }
```
When Lean encounters an addition of two points, it searches for and finds this instance.
It then performs a further search for the {anchorTerm AddPPoint}`Add α` instance.
The instance values that are constructed in this way are values of the type class's structure type.
A successful recursive instance search results in a structure value that has a reference to another structure value.
An instance of {anchorTerm AddPPointNat}`Add (PPoint Nat)` contains a reference to the instance of {anchorTerm AddPPointNat}`Add Nat` that was found.
This recursive search process means that type classes offer significantly more power than plain overloaded functions.
A library of polymorphic instances is a set of code building blocks that the compiler will assemble on its own, given nothing but the desired type.
Polymorphic functions that take instance arguments are latent requests to the type class mechanism to assemble helper functions behind the scenes.
The API's clients are freed from the burden of plumbing together all of the necessary parts by hand.
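A brief sketch shows this assembly in action (assuming the {lit}`Repr` instance that {lit}`PPoint` derives in the earlier section):

```lean
def p1 : PPoint Nat := { x := 1, y := 2 }
def p2 : PPoint Nat := { x := 10, y := 20 }

-- Lean assembles Add (PPoint Nat) from the instance above together
-- with Add Nat, with no further work from the caller.
#eval p1 + p2  -- a point whose fields are the sums of the inputs
```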
# Methods and Implicit Arguments
%%%
tag := "method-implicit-params"
%%%
The type of {anchorTerm ofNatType}`OfNat.ofNat` may be surprising.
It is {anchorTerm ofNatType}`: {α : Type} → (n : Nat) → [OfNat α n] → α`, in which the {anchorTerm ofNatType}`Nat` argument {anchorTerm ofNatType}`n` occurs as an explicit function parameter.
In the declaration of the method, however, {anchorName OfNat}`ofNat` simply has type {anchorName ofNatType}`α`.
This seeming discrepancy is because declaring a type class really results in the following:
* A structure type to contain the implementation of each overloaded operation
* A namespace with the same name as the class
* For each method, a function in the class's namespace that retrieves its implementation from an instance
This is analogous to the way that declaring a new structure also declares accessor functions.
The primary difference is that a structure's accessors take the structure value as an explicit parameter, while the type class methods take the instance value as an instance implicit to be found automatically by Lean.
In order for Lean to find an instance, its parameters must be available.
This means that each parameter to the type class must be a parameter to the method that occurs before the instance.
It is most convenient when these parameters are implicit, because Lean does the work of discovering their values.
For example, {anchorTerm addType}`Add.add` has the type {anchorTerm addType}`{α : Type} → [Add α] → α → α → α`.
In this case, the type parameter {anchorTerm addType}`α` can be implicit because the arguments to {anchorTerm addType}`Add.add` provide information about which type the user intended.
This type can then be used to search for the {anchorTerm addType}`Add` instance.
In the case of {anchorName ofNatType}`OfNat.ofNat`, however, the particular {moduleName}`Nat` literal to be decoded does not appear as part of any other parameter's type.
This means that Lean would have no information to use when attempting to figure out the implicit parameter {anchorName ofNatType}`n`.
The result would be a very inconvenient API.
Thus, in these cases, Lean uses an explicit parameter for the class's method.
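Because methods are ordinary functions, they can also be applied directly, with Lean finding the instance implicit by search. A small sketch (not from the example project):

```lean
-- The Add Nat instance is found automatically for the implicit argument.
#eval Add.add 3 4  -- 7

-- For OfNat.ofNat, the literal 5 is the explicit Nat parameter, and the
-- type ascription determines which instance to search for.
#eval (OfNat.ofNat 5 : Nat)  -- 5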
# Exercises
%%%
tag := "type-class-polymorphism-exercises"
%%%
## Even Number Literals
%%%
tag := none
%%%
Write an instance of {anchorName ofNatType}`OfNat` for the even number datatype from the {ref "even-numbers-ex"}[previous section's exercises] that uses recursive instance search.
## Recursive Instance Search Depth
%%%
tag := none
%%%
There is a limit to how many times the Lean compiler will attempt a recursive instance search.
This places a limit on the size of even number literals defined in the previous exercise.
Experimentally determine what the limit is. |
fp-lean/book/FPLean/TypeClasses/StandardClasses.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso Code External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Classes"
set_option pp.rawOnError true
#doc (Manual) "Standard Classes" =>
%%%
tag := "standard-classes"
%%%
This section presents a variety of operators and functions that can be overloaded using type classes in Lean.
Each operator or function corresponds to a method of a type class.
Unlike C++, infix operators in Lean are defined as abbreviations for named functions; this means that overloading them for new types is not done using the operator itself, but rather using the underlying name (such as {moduleName}`HAdd.hAdd`).
# Arithmetic
%%%
tag := "arithmetic-classes"
%%%
Most arithmetic operators are available in a heterogeneous form, where the arguments may have different types and an output parameter decides the type of the resulting expression.
For each heterogeneous operator, there is a corresponding homogeneous version that can be found by removing the letter {lit}`h`, so that {moduleName}`HAdd.hAdd` becomes {moduleName}`Add.add`.
The following arithmetic operators are overloaded:
:::table +header
*
- Expression
- Desugaring
- Class Name
*
- {anchorTerm plusDesugar}`x + y`
- {anchorTerm plusDesugar}`HAdd.hAdd x y`
- {moduleName}`HAdd`
*
- {anchorTerm minusDesugar}`x - y`
- {anchorTerm minusDesugar}`HSub.hSub x y`
- {moduleName}`HSub`
*
- {anchorTerm timesDesugar}`x * y`
- {anchorTerm timesDesugar}`HMul.hMul x y`
- {moduleName}`HMul`
*
- {anchorTerm divDesugar}`x / y`
- {anchorTerm divDesugar}`HDiv.hDiv x y`
- {moduleName}`HDiv`
*
- {anchorTerm modDesugar}`x % y`
- {anchorTerm modDesugar}`HMod.hMod x y`
- {moduleName}`HMod`
*
- {anchorTerm powDesugar}`x ^ y`
- {anchorTerm powDesugar}`HPow.hPow x y`
- {moduleName}`HPow`
*
- {anchorTerm negDesugar}`- x`
- {anchorTerm negDesugar}`Neg.neg x`
- {moduleName}`Neg`
:::
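A heterogeneous instance can be sketched as follows (a hypothetical instance, not from the example project; it assumes {lit}`Pos`, {lit}`Pos.toNat`, and {lit}`seven` from this chapter). It allows a {lit}`Pos` to be added to a {lit}`Nat`, producing a {lit}`Nat`:

```lean
instance : HAdd Pos Nat Nat where
  hAdd p n := p.toNat + n

-- Instance search finds HAdd Pos Nat Nat, so the output type is Nat.
#eval HAdd.hAdd seven (3 : Nat)  -- 10
```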
# Bitwise Operators
%%%
tag := "bitwise-classes"
%%%
Lean contains a number of standard bitwise operators that are overloaded using type classes.
There are instances for fixed-width types such as {anchorTerm UInt8}`UInt8`, {anchorTerm UInt16}`UInt16`, {anchorTerm UInt32}`UInt32`, {anchorTerm UInt64}`UInt64`, and {anchorTerm USize}`USize`.
The latter is the size of words on the current platform, typically 32 or 64 bits.
The following bitwise operators are overloaded:
:::table +header
*
- Expression
- Desugaring
- Class Name
*
- {anchorTerm bAndDesugar}`x &&& y`
- {anchorTerm bAndDesugar}`HAnd.hAnd x y`
- {moduleName}`HAnd`
*
- {anchorTerm bOrDesugar}`x ||| y`
- {anchorTerm bOrDesugar}`HOr.hOr x y`
- {moduleName}`HOr`
*
- {anchorTerm bXorDesugar}`x ^^^ y`
- {anchorTerm bXorDesugar}`HXor.hXor x y`
- {moduleName}`HXor`
*
- {anchorTerm complementDesugar}`~~~x`
- {anchorTerm complementDesugar}`Complement.complement x`
- {moduleName}`Complement`
*
- {anchorTerm shrDesugar}`x >>> y`
- {anchorTerm shrDesugar}`HShiftRight.hShiftRight x y`
- {moduleName}`HShiftRight`
*
- {anchorTerm shlDesugar}`x <<< y`
- {anchorTerm shlDesugar}`HShiftLeft.hShiftLeft x y`
- {moduleName}`HShiftLeft`
:::
Because the names {anchorName chapterIntro}`And` and {anchorName chapterIntro}`Or` are already taken as the names of logical connectives, the homogeneous versions of {anchorName chapterIntro}`HAnd` and {anchorName chapterIntro}`HOr` are called {anchorName moreOps}`AndOp` and {anchorName moreOps}`OrOp` rather than {anchorName chapterIntro}`And` and {anchorName chapterIntro}`Or`.
# Equality and Ordering
%%%
tag := "equality-and-ordering"
%%%
Testing equality of two values typically uses the {moduleName}`BEq` class, which is short for “Boolean equality”.
Due to Lean's use as a theorem prover, there are really two kinds of equality operators in Lean:
* {deftech}_Boolean equality_ is the same kind of equality that is found in other programming languages. It is a function that takes two values and returns a {anchorName CoeBoolProp}`Bool`. Boolean equality is written with two equals signs, just as in Python and C#. Because Lean is a pure functional language, there are no separate notions of reference and value equality; pointers cannot be observed directly.
* {deftech}_Propositional equality_ is the mathematical statement that two things are equal. Propositional equality is not a function; rather, it is a mathematical statement that admits proof. It is written with a single equals sign. A statement of propositional equality is like a type that classifies evidence of this equality.
Both notions of equality are important, and used for different purposes.
Boolean equality is useful in programs, when a decision needs to be made about whether two values are equal.
For example, {anchorTerm boolEqTrue}`"Octopus" == "Cuttlefish"` evaluates to {anchorTerm boolEqTrue}`false`, and {anchorTerm boolEqFalse}`"Octopodes" == "Octo".append "podes"` evaluates to {anchorTerm boolEqFalse}`true`.
Some values, such as functions, cannot be checked for equality.
For example, {anchorTerm functionEq}`(fun (x : Nat) => 1 + x) == (Nat.succ ·)` yields the error:
```anchorError functionEq
failed to synthesize
  BEq (Nat → Nat)
Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
As this message indicates, {lit}`==` is overloaded using a type class.
The expression {anchorTerm beqDesugar}`x == y` is actually shorthand for {anchorTerm beqDesugar}`BEq.beq x y`.
Propositional equality is a mathematical statement rather than an invocation of a program.
Because propositions are like types that describe evidence for some statement, propositional equality has more in common with types like {anchorName readFile}`String` and {anchorTerm moreOps}`Nat → List Int` than it does with Boolean equality.
This means that it can't automatically be checked.
However, the equality of any two expressions can be stated in Lean, so long as they have the same type.
The statement {anchorTerm functionEqProp}`(fun (x : Nat) => 1 + x) = (Nat.succ ·)` is a perfectly reasonable statement.
From the perspective of mathematics, two functions are equal if they map equal inputs to equal outputs, so this statement is even true, though it requires a one-line proof to convince Lean of this fact.
Generally speaking, when using Lean as a programming language, it's easiest to stick to Boolean functions rather than propositions.
However, as the names {moduleName}`true` and {moduleName}`false` for {moduleName}`Bool`'s constructors suggest, this difference is sometimes blurred.
Some propositions are _decidable_, which means that they can be checked just like a Boolean function.
The function that checks whether the proposition is true or false is called a _decision procedure_, and it returns _evidence_ of the truth or falsity of the proposition.
Some examples of decidable propositions include equality and inequality of natural numbers, equality of strings, and “ands” and “ors” of propositions that are themselves decidable.
:::paragraph
In Lean, {kw}`if` works with decidable propositions.
For example, {anchorTerm twoLessFour}`2 < 4` is a proposition:
```anchor twoLessFour
#check 2 < 4
```
```anchorInfo twoLessFour
2 < 4 : Prop
```
Nonetheless, it is perfectly acceptable to write it as the condition in an {kw}`if`.
For example, {anchorTerm ifProp}`if 2 < 4 then 1 else 2` has type {moduleName}`Nat` and evaluates to {anchorTerm ifProp}`1`.
:::
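This behavior can be checked directly (a small sketch):

```lean
-- The proposition 2 < 4 is decidable, so its decision procedure runs
-- at evaluation time to choose a branch.
#eval if 2 < 4 then 1 else 2  -- 1
```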
Not all propositions are decidable.
If they were, then computers would be able to prove any true proposition just by running the decision procedure, and mathematicians would be out of a job.
More specifically, decidable propositions have an instance of the {anchorName DecLTLEPos}`Decidable` type class, which contains the decision procedure.
Trying to use a proposition that isn't decidable as if it were a {anchorName CoeBoolProp}`Bool` results in a failure to find the {anchorName DecLTLEPos}`Decidable` instance.
For example, {anchorTerm funEqDec}`if (fun (x : Nat) => 1 + x) = (Nat.succ ·) then "yes" else "no"` results in:
```anchorError funEqDec
failed to synthesize
  Decidable ((fun x => 1 + x) = fun x => x.succ)
Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
The following propositions, which are usually decidable, are overloaded with type classes:
:::table +header
*
- Expression
- Desugaring
- Class Name
*
- {anchorTerm ltDesugar}`x < y`
- {anchorTerm ltDesugar}`LT.lt x y`
- {moduleName}`LT`
*
- {anchorTerm leDesugar}`x ≤ y`
- {anchorTerm leDesugar}`LE.le x y`
- {moduleName}`LE`
*
- {anchorTerm gtDesugar}`x > y`
- {anchorTerm gtDesugar}`LT.lt y x`
- {moduleName}`LT`
*
- {anchorTerm geDesugar}`x ≥ y`
- {anchorTerm geDesugar}`LE.le y x`
- {moduleName}`LE`
:::
Because defining new propositions hasn't yet been demonstrated, it may be difficult to define completely new instances of {moduleName}`LT` and {moduleName}`LE`.
However, they can be defined in terms of existing instances.
{moduleName}`LT` and {moduleName}`LE` instances for {anchorName LTPos}`Pos` can use the existing instances for {moduleName}`Nat`:
```anchor LTPos
instance : LT Pos where
  lt x y := LT.lt x.toNat y.toNat
```
```anchor LEPos
instance : LE Pos where
  le x y := LE.le x.toNat y.toNat
```
These propositions are not decidable by default because Lean doesn't unfold the definitions of propositions while synthesizing an instance.
This gap can be bridged using the {anchorName DecLTLEPos}`inferInstanceAs` operator, which finds an instance for a given class if it exists:
```anchor DecLTLEPos
instance {x : Pos} {y : Pos} : Decidable (x < y) :=
  inferInstanceAs (Decidable (x.toNat < y.toNat))

instance {x : Pos} {y : Pos} : Decidable (x ≤ y) :=
  inferInstanceAs (Decidable (x.toNat ≤ y.toNat))
```
The type checker confirms that the definitions of the propositions match.
Confusing them results in an error:
```anchor LTLEMismatch
instance {x : Pos} {y : Pos} : Decidable (x ≤ y) :=
  inferInstanceAs (Decidable (x.toNat < y.toNat))
```
```anchorError LTLEMismatch
Type mismatch
  inferInstanceAs (Decidable (x.toNat < y.toNat))
has type
  Decidable (x.toNat < y.toNat)
but is expected to have type
  Decidable (x ≤ y)
```
:::paragraph
Comparing values using {lit}`<`, {lit}`==`, and {lit}`>` can be inefficient.
Checking first whether one value is less than another, and then whether they are equal, can require two traversals over large data structures.
To solve this problem, Java and C# have standard {java}`compareTo` and {CSharp}`CompareTo` methods (respectively) that can be overridden by a class in order to implement all three operations at the same time.
These methods return a negative integer if the receiver is less than the argument, zero if they are equal, and a positive integer if the receiver is greater than the argument.
Rather than overloading the meaning of integers, Lean has a built-in inductive type that describes these three possibilities:
```anchor Ordering
inductive Ordering where
  | lt
  | eq
  | gt
```
The {anchorName OrdPos}`Ord` type class can be overloaded to produce these comparisons.
For {anchorName OrdPos}`Pos`, an implementation can be:
```anchor OrdPos
def Pos.comp : Pos → Pos → Ordering
  | Pos.one, Pos.one => Ordering.eq
  | Pos.one, Pos.succ _ => Ordering.lt
  | Pos.succ _, Pos.one => Ordering.gt
  | Pos.succ n, Pos.succ k => comp n k

instance : Ord Pos where
  compare := Pos.comp
```
In situations where {java}`compareTo` would be the right approach in Java, use {moduleName}`Ord.compare` in Lean.
:::
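A sketch of the instance in use (assuming {lit}`seven` and the numeric-literal instances from earlier in this chapter, along with the {lit}`Repr` instance for {lit}`Ordering`):

```lean
-- Pos.comp recurses down both arguments until one runs out.
#eval compare (1 : Pos) seven  -- Ordering.lt
#eval compare seven seven      -- Ordering.eq
```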
# Hashing
%%%
tag := "hashing"
%%%
Java and C# have {java}`hashCode` and {CSharp}`GetHashCode` methods, respectively, that compute a hash of a value for use in data structures such as hash tables.
The Lean equivalent is a type class called {anchorName Hashable}`Hashable`:
```anchor Hashable
class Hashable (α : Type) where
  hash : α → UInt64
```
If two values are considered equal according to a {moduleName}`BEq` instance for their type, then they should have the same hashes.
In other words, if {anchorTerm HashableSpec}`x == y` then {anchorTerm HashableSpec}`hash x == hash y`.
If {anchorTerm HashableSpec}`x ≠ y`, then {anchorTerm HashableSpec}`hash x` won't necessarily differ from {anchorTerm HashableSpec}`hash y` (after all, there are infinitely more {moduleName}`Nat` values than there are {moduleName}`UInt64` values), but data structures built on hashing will have better performance if unequal values are likely to have unequal hashes.
This is the same expectation as in Java and C#.
The standard library contains a function {anchorTerm mixHash}`mixHash` with type {anchorTerm mixHash}`UInt64 → UInt64 → UInt64` that can be used to combine the hashes of a constructor's fields.
A reasonable hash function for an inductive datatype can be written by assigning a unique number to each constructor, and then mixing that number with the hashes of each field.
For example, a {anchorName HashablePos}`Hashable` instance for {anchorName HashablePos}`Pos` can be written:
```anchor HashablePos
def hashPos : Pos → UInt64
  | Pos.one => 0
  | Pos.succ n => mixHash 1 (hashPos n)

instance : Hashable Pos where
  hash := hashPos
```
:::paragraph
{anchorTerm HashableNonEmptyList}`Hashable` instances for polymorphic types can use recursive instance search.
Hashing a {anchorTerm HashableNonEmptyList}`NonEmptyList α` is only possible when {anchorName HashableNonEmptyList}`α` can be hashed:
```anchor HashableNonEmptyList
instance [Hashable α] : Hashable (NonEmptyList α) where
  hash xs := mixHash (hash xs.head) (hash xs.tail)
```
:::
:::paragraph
Binary trees use both recursion and recursive instance search in the implementations of {anchorName TreeHash}`BEq` and {anchorName TreeHash}`Hashable`:
```anchor TreeHash
inductive BinTree (α : Type) where
  | leaf : BinTree α
  | branch : BinTree α → α → BinTree α → BinTree α

def eqBinTree [BEq α] : BinTree α → BinTree α → Bool
  | BinTree.leaf, BinTree.leaf =>
    true
  | BinTree.branch l x r, BinTree.branch l2 x2 r2 =>
    x == x2 && eqBinTree l l2 && eqBinTree r r2
  | _, _ =>
    false

instance [BEq α] : BEq (BinTree α) where
  beq := eqBinTree

def hashBinTree [Hashable α] : BinTree α → UInt64
  | BinTree.leaf =>
    0
  | BinTree.branch left x right =>
    mixHash 1
      (mixHash (hashBinTree left)
        (mixHash (hash x)
          (hashBinTree right)))

instance [Hashable α] : Hashable (BinTree α) where
  hash := hashBinTree
```
:::
# Deriving Standard Classes
%%%
tag := "deriving-standard-classes"
%%%
Instances of classes like {moduleName}`BEq` and {moduleName}`Hashable` are often quite tedious to implement by hand.
Lean includes a feature called _instance deriving_ that allows the compiler to automatically construct well-behaved instances of many type classes.
In fact, the {anchorTerm Firewood (module := Examples.Intro)}`deriving Repr` phrase in the definition of {anchorName Firewood (module:=Examples.Intro)}`Firewood` in the {ref "polymorphism"}[first section on polymorphism] is an example of instance deriving.
Instances can be derived in two ways.
The first can be used when defining a structure or inductive type.
In this case, add {kw}`deriving` to the end of the type declaration followed by the names of the classes for which instances should be derived.
For a type that is already defined, a standalone {kw}`deriving` command can be used.
Write {kw}`deriving instance`{lit}` C1, C2, ... `{kw}`for`{lit}` T` to derive instances of {lit}`C1, C2, ...` for the type {lit}`T` after the fact.
{moduleName}`BEq` and {moduleName}`Hashable` instances can be derived for {anchorName BEqHashableDerive}`Pos` and {anchorName BEqHashableDerive}`NonEmptyList` using a very small amount of code:
```anchor BEqHashableDerive
deriving instance BEq, Hashable for Pos
deriving instance BEq, Hashable for NonEmptyList
```
Instances can be derived for at least the following classes:
* {moduleName}`Inhabited`
* {moduleName}`BEq`
* {moduleName}`Repr`
* {moduleName}`Hashable`
* {moduleName}`Ord`
In some cases, however, the derived {moduleName}`Ord` instance may not produce precisely the ordering desired in an application.
When this is the case, it's fine to write an {moduleName}`Ord` instance by hand.
The collection of classes for which instances can be derived can be extended by advanced users of Lean.
Aside from the clear advantages in programmer productivity and code readability, deriving instances also makes code easier to maintain, because the instances are updated as the definitions of types evolve.
When reviewing changes to code, modifications that involve updates to datatypes are much easier to read without line after line of formulaic modifications to equality tests and hash computation.
# Appending
%%%
tag := "append-class"
%%%
Many datatypes have some sort of append operator.
In Lean, appending two values is overloaded with the type class {anchorName HAppend}`HAppend`, which is a heterogeneous operation like that used for arithmetic operations:
```anchor HAppend
class HAppend (α : Type) (β : Type) (γ : outParam Type) where
hAppend : α → β → γ
```
The syntax {anchorTerm desugarHAppend}`xs ++ ys` desugars to {anchorTerm desugarHAppend}`HAppend.hAppend xs ys`.
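As a quick sanity check (a sketch that is not part of the book's example project), both forms can be evaluated directly and produce the same result:
```lean
-- `++` is notation for `HAppend.hAppend`, so these are the same program:
#eval [1, 2] ++ [3, 4]               -- [1, 2, 3, 4]
#eval HAppend.hAppend [1, 2] [3, 4]  -- [1, 2, 3, 4]
```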
For homogeneous cases, it's enough to implement an instance of {moduleName}`Append`, which follows the usual pattern:
```anchor AppendNEList
instance : Append (NonEmptyList α) where
append xs ys :=
{ head := xs.head, tail := xs.tail ++ ys.head :: ys.tail }
```
After defining the above instance,
```anchor appendSpiders
#eval idahoSpiders ++ idahoSpiders
```
has the following output:
```anchorInfo appendSpiders
{ head := "Banded Garden Spider",
tail := ["Long-legged Sac Spider",
"Wolf Spider",
"Hobo Spider",
"Cat-faced Spider",
"Banded Garden Spider",
"Long-legged Sac Spider",
"Wolf Spider",
"Hobo Spider",
"Cat-faced Spider"] }
```
Similarly, a definition of {moduleName}`HAppend` allows non-empty lists to be appended to ordinary lists:
```anchor AppendNEListList
instance : HAppend (NonEmptyList α) (List α) (NonEmptyList α) where
hAppend xs ys :=
{ head := xs.head, tail := xs.tail ++ ys }
```
With this instance available,
```anchor appendSpidersList
#eval idahoSpiders ++ ["Trapdoor Spider"]
```
results in
```anchorInfo appendSpidersList
{ head := "Banded Garden Spider",
tail := ["Long-legged Sac Spider", "Wolf Spider", "Hobo Spider", "Cat-faced Spider", "Trapdoor Spider"] }
```
# Functors
%%%
tag := "Functor"
%%%
A polymorphic type is a {deftech}_functor_ if it has an overload for a function named {anchorName FunctorDef}`map` that transforms every element contained in it by a function.
While most languages use this terminology, C#'s equivalent of {anchorName FunctorDef}`map` is called {CSharp}`System.Linq.Enumerable.Select`.
For example, mapping a function over a list constructs a new list in which each entry from the starting list has been replaced by the result of the function on that entry.
Mapping a function {anchorName optionFMeta}`f` over an {anchorName optionFMeta}`Option` leaves {anchorName optionFMeta}`none` untouched, and replaces {anchorTerm optionFMeta}`some x` with {anchorTerm optionFMeta}`some (f x)`.
Here are some examples of functors and how their {anchorName FunctorDef}`Functor` instances overload {anchorName FunctorDef}`map`:
* {anchorTerm mapList}`Functor.map (· + 5) [1, 2, 3]` evaluates to {anchorTerm mapList}`[6, 7, 8]`
* {anchorTerm mapOption}`Functor.map toString (some (List.cons 5 List.nil))` evaluates to {anchorTerm mapOption}`some "[5]"`
* {anchorTerm mapListList}`Functor.map List.reverse [[1, 2, 3], [4, 5, 6]]` evaluates to {anchorTerm mapListList}`[[3, 2, 1], [6, 5, 4]]`
Because {anchorName mapList}`Functor.map` is a bit of a long name for this common operation, Lean also provides an infix operator for mapping a function, namely {lit}`<$>`.
The prior examples can be rewritten as follows:
* {anchorTerm mapInfixList}`(· + 5) <$> [1, 2, 3]` evaluates to {anchorTerm mapInfixList}`[6, 7, 8]`
* {anchorTerm mapInfixOption}`toString <$> (some (List.cons 5 List.nil))` evaluates to {anchorTerm mapInfixOption}`some "[5]"`
* {anchorTerm mapInfixListList}`List.reverse <$> [[1, 2, 3], [4, 5, 6]]` evaluates to {anchorTerm mapInfixListList}`[[3, 2, 1], [6, 5, 4]]`
An instance of {anchorTerm FunctorNonEmptyList}`Functor` for {anchorTerm FunctorNonEmptyList}`NonEmptyList` requires specifying the {anchorName FunctorNonEmptyList}`map` function:
```anchor FunctorNonEmptyList
instance : Functor NonEmptyList where
map f xs := { head := f xs.head, tail := f <$> xs.tail }
```
Here, {anchorTerm FunctorNonEmptyList}`map` uses the {anchorTerm FunctorNonEmptyList}`Functor` instance for {moduleName}`List` to map the function over the tail.
This instance is defined for {anchorTerm FunctorNonEmptyList}`NonEmptyList` rather than for {anchorTerm FunctorNonEmptyListA}`NonEmptyList α` because the argument type {anchorTerm FunctorNonEmptyListA}`α` plays no role in resolving the type class.
A {anchorTerm FunctorNonEmptyList}`NonEmptyList` can have a function mapped over it _no matter what the type of entries is_.
If {anchorTerm FunctorNonEmptyListA}`α` were a parameter to the class, then it would be possible to make versions of {anchorTerm FunctorNonEmptyList}`Functor` that only worked for {anchorTerm FunctorNonEmptyListA}`NonEmptyList Nat`, but part of being a functor is that {anchorName FunctorNonEmptyList}`map` works for any entry type.
:::paragraph
Here is an instance of {anchorTerm FunctorPPoint}`Functor` for {anchorTerm FunctorPPoint}`PPoint`:
```anchor FunctorPPoint
instance : Functor PPoint where
map f p := { x := f p.x, y := f p.y }
```
In this case, {anchorName FunctorPPoint}`f` has been applied to both {anchorName FunctorPPoint}`x` and {anchorName FunctorPPoint}`y`.
:::
Even when the type contained in a functor is itself a functor, mapping a function only goes down one layer.
That is, when using {anchorName FunctorPPoint}`map` on a {anchorTerm NEPP}`NonEmptyList (PPoint Nat)`, the function being mapped should take {anchorTerm NEPP}`PPoint Nat` as its argument rather than {moduleName}`Nat`.
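For instance, doubling every number inside a list of points requires two layers of mapping (a self-contained sketch with a local {lit}`PPoint`, not the book's example project):
```lean
structure PPoint (α : Type) where
  x : α
  y : α
deriving Repr

instance : Functor PPoint where
  map f p := { x := f p.x, y := f p.y }

-- The outer map transforms each `PPoint Nat`; the inner map reaches the `Nat`s.
#eval (fun p => (· * 2) <$> p) <$> [({ x := 1, y := 2 } : PPoint Nat)]
-- [{ x := 2, y := 4 }]
```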
The definition of the {anchorName FunctorLaws}`Functor` class uses one more language feature that has not yet been discussed: default method definitions.
Normally, a class will specify some minimal set of overloadable operations that make sense together, and then use polymorphic functions with instance implicit arguments that build on the overloaded operations to provide a larger library of features.
For example, the function {anchorName concat}`concat` can concatenate any non-empty list whose entries are appendable:
```anchor concat
def concat [Append α] (xs : NonEmptyList α) : α :=
let rec catList (start : α) : List α → α
| [] => start
| (z :: zs) => catList (start ++ z) zs
catList xs.head xs.tail
```
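A usage sketch (repeating {lit}`NonEmptyList` so the example stands alone): because {lit}`String` has an {lit}`Append` instance, {lit}`concat` collapses a non-empty list of strings into a single string.
```lean
structure NonEmptyList (α : Type) : Type where
  head : α
  tail : List α

def concat [Append α] (xs : NonEmptyList α) : α :=
  let rec catList (start : α) : List α → α
    | [] => start
    | z :: zs => catList (start ++ z) zs
  catList xs.head xs.tail

#eval concat { head := "a", tail := ["b", "c"] }  -- "abc"
```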
However, for some classes, there are operations that can be more efficiently implemented with knowledge of the internals of a datatype.
In these cases, a default method definition can be provided.
A default method definition provides a default implementation of a method in terms of the other methods.
However, instance implementors may choose to override this default with something more efficient.
A default method definition is written with {lit}`:=` in a {kw}`class` definition.
In the case of {anchorName FunctorDef}`Functor`, some types have a more efficient way of implementing {anchorName FunctorDef}`map` when the function being mapped ignores its argument.
Functions that ignore their arguments are called _constant functions_ because they always return the same value.
Here is the definition of {anchorName FunctorDef}`Functor`, in which {anchorName FunctorDef}`mapConst` has a default implementation:
```anchor FunctorDef
class Functor (f : Type → Type) where
map : {α β : Type} → (α → β) → f α → f β
mapConst {α β : Type} (x : α) (coll : f β) : f α :=
map (fun _ => x) coll
```
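A sketch of such an override (this instance is illustrative only; Lean's real {lit}`List` instance may differ): for lists, replacing every element with a constant can build the result directly instead of calling a constant function once per element.
```lean
instance : Functor List where
  map := List.map
  -- Override the default: construct the result without invoking `map`.
  mapConst x xs := List.replicate xs.length x

#eval Functor.mapConst 0 [1, 2, 3]  -- [0, 0, 0]
```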
Just as a {anchorName HashableSpec}`Hashable` instance that doesn't respect {moduleName}`BEq` is buggy, a {moduleName}`Functor` instance that moves around the data as it maps the function is also buggy.
For example, a buggy {moduleName}`Functor` instance for {moduleName}`List` might throw away its argument and always return the empty list, or it might reverse the list.
A bad {moduleName}`Functor` instance for {moduleName}`PPoint` might place {anchorTerm FunctorPPointBad}`f x` in both the {anchorName FunctorPPointBad}`x` and the {anchorName FunctorPPointBad}`y` fields, or swap them.
Specifically, {anchorName FunctorDef}`Functor` instances should follow two rules:
1. Mapping the identity function should result in the original argument.
2. Mapping two composed functions should have the same effect as composing their mapping.
More formally, the first rule says that {anchorTerm FunctorLaws}`id <$> x` equals {anchorTerm FunctorLaws}`x`.
The second rule says that {anchorTerm FunctorLaws}`map (fun y => f (g y)) x` equals {anchorTerm FunctorLaws}`map f (map g x)`.
The composition {anchorTerm compDef}`f ∘ g` can also be written {anchorTerm compDef}`fun y => f (g y)`.
These rules prevent implementations of {anchorName FunctorDef}`map` that move the data around or delete some of it.
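These rules can be spot-checked on concrete values; {lit}`rfl` succeeds below because both sides compute to the same list (a sketch, not part of the example project):
```lean
def f := (· * 2)
def g := (· + 1)

-- Rule 1: mapping the identity changes nothing.
example : id <$> [1, 2, 3] = [1, 2, 3] := rfl
-- Rule 2: mapping a composition equals composing the maps.
example : (fun y => f (g y)) <$> [1, 2] = f <$> (g <$> [1, 2]) := rfl
```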
# Messages You May Meet
%%%
tag := "standard-classes-messages"
%%%
Lean is not able to derive instances for all classes.
For example, the code
```anchor derivingNotFound
deriving instance ToString for NonEmptyList
```
results in the following error:
```anchorError derivingNotFound
No deriving handlers have been implemented for class `ToString`
```
Invoking {anchorTerm derivingNotFound}`deriving instance` causes Lean to consult an internal table of code generators for type class instances.
If the code generator is found, then it is invoked on the provided type to create the instance.
This message, however, means that no code generator was found for {anchorName derivingNotFound}`ToString`.
# Exercises
%%%
tag := "standard-classes-exercises"
%%%
* Write an instance of {anchorTerm moreOps}`HAppend (List α) (NonEmptyList α) (NonEmptyList α)` and test it.
* Implement a {anchorTerm FunctorLaws}`Functor` instance for the binary tree datatype. |
fp-lean/book/FPLean/TypeClasses/Indexing.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso Code External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.Classes"
set_option pp.rawOnError true
#doc (Manual) "Arrays and Indexing" =>
The {ref "props-proofs-indexing"}[Interlude] describes how to use indexing notation in order to look up entries in a list by their position.
This syntax is also governed by a type class, and it can be used for a variety of different types.
# Arrays
%%%
tag := "array-indexing"
%%%
Lean arrays are much more efficient than linked lists for most purposes.
In Lean, the type {anchorTerm arrVsList}`Array α` is a dynamically-sized array holding values of type {anchorName arrVsList}`α`, much like a Java {java}`ArrayList`, a C++ {cpp}`std::vector`, or a Rust {rust}`Vec`.
Unlike {anchorTerm arrVsList}`List`, which has a pointer indirection on each use of the {anchorName arrVsList}`cons` constructor, arrays occupy a contiguous region of memory, which is much better for processor caches.
Also, looking up a value in an array takes constant time, while lookup in a linked list takes time proportional to the index being accessed.
In pure functional languages like Lean, it is not possible to mutate a given position in a data structure.
Instead, a copy is made that has the desired modifications.
However, copying is not always necessary: the Lean compiler and runtime contain an optimization that can allow modifications to be implemented as mutations behind the scenes when there is only a single unique reference to an array.
Arrays are written similarly to lists, but with a leading {lit}`#`:
```anchor northernTrees
def northernTrees : Array String :=
#["sloe", "birch", "elm", "oak"]
```
The number of values in an array can be found using {anchorName arrVsList}`Array.size`.
For instance, {anchorTerm northernTreesSize}`northernTrees.size` evaluates to {anchorTerm northernTreesSize}`4`.
For indices that are smaller than an array's size, indexing notation can be used to find the corresponding value, just as with lists.
That is, {anchorTerm northernTreesTwo}`northernTrees[2]` evaluates to {anchorTerm northernTreesTwo}`"elm"`.
Similarly, the compiler requires a proof that an index is in bounds, and attempting to look up a value outside the bounds of the array results in a compile-time error, just as with lists.
For instance, {anchorTerm northernTreesEight}`northernTrees[8]` results in:
```anchorError northernTreesEight
failed to prove index is valid, possible solutions:
- Use `have`-expressions to prove the index is valid
- Use `a[i]!` notation instead, runtime check is performed, and 'Panic' error message is produced if index is not valid
- Use `a[i]?` notation instead, result is an `Option` type
- Use `a[i]'h` notation instead, where `h` is a proof that index is valid
⊢ 8 < northernTrees.size
```
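The fallback notations mentioned in the error message can be sketched as follows (self-contained, reusing the same array):
```lean
def northernTrees : Array String :=
  #["sloe", "birch", "elm", "oak"]

#eval northernTrees[8]?  -- none: an out-of-bounds index yields an `Option`
#eval northernTrees[2]?  -- some "elm"
#eval northernTrees[2]!  -- "elm"; would panic at run time if out of bounds
```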
# Non-Empty Lists
%%%
tag := "non-empty-list-indexing"
%%%
A datatype that represents non-empty lists can be defined as a structure with a field for the head of the list and a field for the tail, which is an ordinary, potentially empty list:
```anchor NonEmptyList
structure NonEmptyList (α : Type) : Type where
head : α
tail : List α
```
For example, the non-empty list {moduleName}`idahoSpiders` (which contains some spider species native to the US state of Idaho) consists of {anchorTerm firstSpider}`"Banded Garden Spider"` followed by four other spiders, for a total of five spiders:
```anchor idahoSpiders
def idahoSpiders : NonEmptyList String := {
head := "Banded Garden Spider",
tail := [
"Long-legged Sac Spider",
"Wolf Spider",
"Hobo Spider",
"Cat-faced Spider"
]
}
```
Looking up the value at a specific index in this list with a recursive function should consider three possibilities:
1. The index is {anchorTerm NEListGetHuh}`0`, in which case the head of the list should be returned.
2. The index is {anchorTerm NEListGetHuh}`n + 1` and the tail is empty, in which case the index is out of bounds.
3. The index is {anchorTerm NEListGetHuh}`n + 1` and the tail is non-empty, in which case the function can be called recursively on the tail and {anchorTerm NEListGetHuh}`n`.
For example, a lookup function that returns an {moduleName}`Option` can be written as follows:
```anchor NEListGetHuh
def NonEmptyList.get? : NonEmptyList α → Nat → Option α
| xs, 0 => some xs.head
| {head := _, tail := []}, _ + 1 => none
| {head := _, tail := h :: t}, n + 1 => get? {head := h, tail := t} n
```
Each case in the pattern match corresponds to one of the possibilities above.
The recursive call to {anchorName NEListGetHuh}`get?` does not require a {moduleName}`NonEmptyList` namespace qualifier because the body of the definition is implicitly in the definition's namespace.
Another way to write this function uses a list lookup {anchorTerm NEListGetHuhList}`xs.tail[n]?` when the index is greater than zero:
```anchor NEListGetHuhList
def NonEmptyList.get? : NonEmptyList α → Nat → Option α
| xs, 0 => some xs.head
| xs, n + 1 => xs.tail[n]?
```
If the list contains one entry, then only {anchorTerm nats}`0` is a valid index.
If it contains two entries, then both {anchorTerm nats}`0` and {anchorTerm nats}`1` are valid indices.
If it contains three entries, then {anchorTerm nats}`0`, {anchorTerm nats}`1`, and {anchorTerm nats}`2` are valid indices.
In other words, the valid indices into a non-empty list are the natural numbers that are strictly less than the length of the list, or equivalently, less than or equal to the length of the tail.
The definition of what it means for an index to be in bounds should be written as an {kw}`abbrev` because the tactics used to find evidence that indices are acceptable are able to solve inequalities of numbers, but they don't know anything about the name {moduleName}`NonEmptyList.inBounds`:
```anchor inBoundsNEList
abbrev NonEmptyList.inBounds (xs : NonEmptyList α) (i : Nat) : Prop :=
i ≤ xs.tail.length
```
This function returns a proposition that might be true or false.
For instance, {anchorTerm spiderBoundsChecks}`2` is in bounds for {moduleName}`idahoSpiders`, while {anchorTerm spiderBoundsChecks}`5` is not:
```anchor spiderBoundsChecks
theorem atLeastThreeSpiders : idahoSpiders.inBounds 2 := by decide
theorem notSixSpiders : ¬idahoSpiders.inBounds 5 := by decide
```
The logical negation operator has a very low precedence, which means that {anchorTerm spiderBoundsChecks}`¬idahoSpiders.inBounds 5` is equivalent to {anchorTerm spiderBoundsChecks'}`¬(idahoSpiders.inBounds 5)`.
This fact can be used to write a lookup function that requires evidence that the index is valid, and thus need not return {moduleName}`Option`, by delegating to the version for lists that checks the evidence at compile time:
```anchor NEListGet
def NonEmptyList.get (xs : NonEmptyList α)
(i : Nat) (ok : xs.inBounds i) : α :=
match i with
| 0 => xs.head
| n + 1 => xs.tail[n]
```
It is, of course, possible to write this function to use the evidence directly, rather than delegating to a standard library function that happens to be able to use the same evidence.
This requires techniques for working with proofs and propositions that are described later in this book.
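As a preview, one such direct version is sketched below. It matches on the index and the evidence together so that the evidence's type is refined, and it relies on the fact that for {lit}`Nat`, {lit}`n + 1 ≤ len` is definitionally the same as {lit}`n < len` (a sketch; the techniques involved are explained later in the book):
```lean
structure NonEmptyList (α : Type) : Type where
  head : α
  tail : List α

abbrev NonEmptyList.inBounds (xs : NonEmptyList α) (i : Nat) : Prop :=
  i ≤ xs.tail.length

def NonEmptyList.get (xs : NonEmptyList α) (i : Nat) (ok : xs.inBounds i) : α :=
  match i, ok with
  | 0, _ => xs.head
  -- Here `ok : n + 1 ≤ xs.tail.length`, which is exactly `n < xs.tail.length`.
  | n + 1, ok => xs.tail[n]'ok
```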
# Overloading Indexing
%%%
tag := "overloading-indexing"
%%%
Indexing notation for a collection type can be overloaded by defining an instance of the {anchorName GetElem}`GetElem` type class.
For the sake of flexibility, {anchorName GetElem}`GetElem` has four parameters:
* The type of the collection
* The type of the index
* The type of elements that are extracted from the collection
* A function that determines what counts as evidence that the index is in bounds
The element type and the evidence function are both output parameters.
{anchorName GetElem}`GetElem` has a single method, {anchorName GetElem}`getElem`, which takes a collection value, an index value, and evidence that the index is in bounds as arguments, and returns an element:
```anchor GetElem
class GetElem
(coll : Type)
(idx : Type)
(item : outParam Type)
(inBounds : outParam (coll → idx → Prop)) where
getElem : (c : coll) → (i : idx) → inBounds c i → item
```
In the case of {anchorTerm GetElemNEList}`NonEmptyList α`, these parameters are:
* The collection is {anchorTerm GetElemNEList}`NonEmptyList α`
* Indices have type {anchorName GetElemNEList}`Nat`
* The type of elements is {anchorName GetElemNEList}`α`
* An index is in bounds if it is less than or equal to the length of the tail
In fact, the {anchorTerm GetElemNEList}`GetElem` instance can delegate directly to {anchorTerm GetElemNEList}`NonEmptyList.get`:
```anchor GetElemNEList
instance : GetElem (NonEmptyList α) Nat α NonEmptyList.inBounds where
getElem := NonEmptyList.get
```
With this instance, {anchorTerm GetElemNEList}`NonEmptyList` becomes just as convenient to use as {moduleName}`List`.
Evaluating {anchorTerm firstSpider}`idahoSpiders.head` yields {anchorTerm firstSpider}`"Banded Garden Spider"`, while {anchorTerm tenthSpider}`idahoSpiders[9]` leads to the compile-time error:
```anchorError tenthSpider
failed to prove index is valid, possible solutions:
- Use `have`-expressions to prove the index is valid
- Use `a[i]!` notation instead, runtime check is performed, and 'Panic' error message is produced if index is not valid
- Use `a[i]?` notation instead, result is an `Option` type
- Use `a[i]'h` notation instead, where `h` is a proof that index is valid
⊢ idahoSpiders.inBounds 9
```
Because both the collection type and the index type are input parameters to the {anchorTerm ListPosElem}`GetElem` type class, new types can be used to index into existing collections.
The positive number type {anchorTerm ListPosElem}`Pos` is a perfectly reasonable index into a {anchorTerm ListPosElem}`List`, with the caveat that it cannot point at the first entry.
The following instance of {anchorTerm ListPosElem}`GetElem` allows {anchorTerm ListPosElem}`Pos` to be used just as conveniently as {moduleName}`Nat` to find a list entry:
```anchor ListPosElem
instance : GetElem (List α) Pos α
(fun list n => list.length > n.toNat) where
getElem (xs : List α) (i : Pos) ok := xs[i.toNat]
```
Indexing can also make sense for non-numeric indices.
For example, {moduleName}`Bool` can be used to select between the fields in a point, with {moduleName}`false` corresponding to {anchorTerm PPointBoolGetElem}`x` and {moduleName}`true` corresponding to {anchorTerm PPointBoolGetElem}`y`:
```anchor PPointBoolGetElem
instance : GetElem (PPoint α) Bool α (fun _ _ => True) where
getElem (p : PPoint α) (i : Bool) _ :=
if not i then p.x else p.y
```
In this case, both Booleans are valid indices.
Because every possible {moduleName}`Bool` is in bounds, the evidence is simply the true proposition {moduleName}`True`. |
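Putting the instance to work (a self-contained sketch with a local {lit}`PPoint`):
```lean
structure PPoint (α : Type) where
  x : α
  y : α

instance : GetElem (PPoint α) Bool α (fun _ _ => True) where
  getElem p i _ := if not i then p.x else p.y

def somePoint : PPoint Nat := { x := 3, y := 7 }

-- The in-bounds proof is just `True.intro`, which Lean finds automatically.
#eval somePoint[false]  -- 3
#eval somePoint[true]   -- 7
```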
fp-lean/book/FPLean/ProgramsProofs/SpecialTypes.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.SpecialTypes"
#doc (Manual) "Special Types" =>
%%%
tag := "runtime-special-types"
%%%
Understanding the representation of data in memory is very important.
Usually, the representation can be understood from the definition of a datatype.
Each constructor corresponds to an object in memory that has a header that includes a tag and a reference count.
The constructor's arguments are each represented by a pointer to some other object.
In other words, {anchorName all}`List` really is a linked list and extracting a field from a {kw}`structure` really does just chase a pointer.
There are, however, some important exceptions to this rule.
A number of types are treated specially by the compiler.
For example, the type {anchorName all}`UInt32` is defined as {anchorTerm all}`Fin (2 ^ 32)`, but it is replaced at run-time with an actual native implementation based on machine words.
Similarly, even though the definition of {anchorName all}`Nat` suggests an implementation similar to {anchorTerm all}`List Unit`, the actual run-time representation uses immediate machine words for sufficiently-small numbers and an efficient arbitrary-precision arithmetic library for larger numbers.
The Lean compiler translates from definitions that use pattern matching into the appropriate operations for this representation, and calls to operations like addition and subtraction are mapped to fast operations from the underlying arithmetic library.
After all, addition should not take time linear in the size of the addends.
The fact that some types have special representations also means that care is needed when working with them.
Most of these types consist of a {kw}`structure` that is treated specially by the compiler.
With these structures, using the constructor or the field accessors directly can trigger an expensive conversion from an efficient representation to a slow one that is convenient for proofs.
For example, {anchorName all}`String` is defined as a structure that contains a list of characters, but the run-time representation of strings uses UTF-8, not linked lists of pointers to characters.
Applying the constructor to a list of characters creates a byte array that encodes them in UTF-8, and accessing the field of the structure takes time linear in the length of the string to decode the UTF-8 representation and allocate a linked list.
Arrays are represented similarly.
From the logical perspective, arrays are structures that contain a list of array elements, but the run-time representation is a dynamically-sized array.
At run time, the constructor translates the list into an array, and the field accessor allocates a linked list from the array.
The various array operations are replaced with efficient versions by the compiler that mutate the array when possible instead of allocating a new one.
Both types themselves and proofs of propositions are completely erased from compiled code.
In other words, they take up no space, and any computations that might have been performed as part of a proof are similarly erased.
This means that proofs can take advantage of the convenient interface to strings and arrays as inductively-defined lists, including using induction to prove things about them, without imposing slow conversion steps while the program is running.
For these built-in types, a convenient logical representation of the data does not imply that the program must be slow.
If a structure type has only a single non-type non-proof field, then the constructor itself disappears at run time, being replaced with its single argument.
In other words, a subtype is represented identically to its underlying type, rather than with an extra layer of indirection.
Similarly, {anchorName all}`Fin` is just {anchorName all}`Nat` in memory, and single-field structures can be created to keep track of different uses of {anchorName all}`Nat`s or {anchorName all}`String`s without paying a performance penalty.
If a constructor has no non-type non-proof arguments, then the constructor also disappears and is replaced with a constant value where the pointer would otherwise be used.
This means that {anchorName all}`true`, {anchorName all}`false`, and {anchorName all}`none` are constant values, rather than pointers to heap-allocated objects.
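For instance, a single-field wrapper (the {lit}`Username` type here is a hypothetical example, not from the book) costs nothing at run time:
```lean
-- At run time, a `Username` is represented exactly like the `String` it
-- wraps: the constructor and the field accessor both compile away.
structure Username where
  name : String

def greet (u : Username) : String :=
  "Hello, " ++ u.name

#eval greet ⟨"lean"⟩  -- "Hello, lean"
```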
The following types have special representations:
:::table +header
*
* Type
* Logical representation
* Run-time Representation
*
* {anchorName all}`Nat`
* Unary, with one pointer from each {anchorTerm all}`Nat.succ`
* Efficient arbitrary-precision integers
*
* {anchorName all}`Int`
* A sum type with constructors for positive or negative values, each containing a {anchorName all}`Nat`
* Efficient arbitrary-precision integers
*
* {anchorTerm all}`BitVec w`
* A {anchorName all}`Fin` with an appropriate bound $`2^w`
* Efficient arbitrary-precision integers
*
* {anchorName all}`UInt8`, {anchorName all}`UInt16`, {anchorName all}`UInt32`, {anchorName all}`UInt64`, {anchorName all}`USize`
* A bitvector of the correct width
* Fixed-precision machine integers
*
* {anchorName all}`Int8`, {anchorName all}`Int16`, {anchorName all}`Int32`, {anchorName all}`Int64`, {anchorName all}`ISize`
* A wrapped unsigned integer of the same width
* Fixed-precision machine integers
*
* {anchorName all}`Char`
* A {anchorName all}`UInt32` paired with a proof that it's a valid code point
* Ordinary characters
*
* {anchorName all}`String`
* A structure that contains a {anchorTerm all}`List Char` in a field called {anchorTerm StringDetail}`data`
* UTF-8-encoded string
*
* {anchorTerm sequences}`Array α`
* A structure that contains a {anchorTerm sequences}`List α` in a field called {anchorName sequences}`toList`
* Packed arrays of pointers to {anchorName sequences}`α` values
*
* {anchorTerm all}`Sort u`
* A type
* Erased completely
*
* Proofs of propositions
* Whatever data is suggested by the proposition when considered as a type of evidence
* Erased completely
:::
# Exercise
%%%
tag := "runtime-special-types-exercise"
%%%
The {ref "positive-numbers"}[definition of {anchorName Pos (module := Examples.Classes)}`Pos`] does not take advantage of Lean's compilation of {anchorName all}`Nat` to an efficient type.
At run time, it is essentially a linked list.
Alternatively, a subtype can be defined that allows Lean's fast {anchorName all}`Nat` type to be used internally, as described {ref "subtypes"}[in the initial section on subtypes].
At run time, the proof will be erased.
Because the resulting structure has only a single data field, it is represented as that field, which means that this new representation of {anchorName Pos (module := Examples.Classes)}`Pos` is identical to that of {anchorName all}`Nat`.
After proving the theorem {anchorTerm all}`∀ {n k : Nat}, n ≠ 0 → k ≠ 0 → n + k ≠ 0`, define instances of {anchorName all}`ToString` and {anchorName all}`Add` for this new representation of {anchorName Pos (module := Examples.Classes)}`Pos`. Then, define an instance of {anchorName all}`Mul`, proving any necessary theorems along the way.
fp-lean/book/FPLean/ProgramsProofs/Fin.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.ProgramsProofs.Fin"
#doc (Manual) "Bounded Numbers" =>
%%%
tag := "Fin"
%%%
The {anchorTerm sundries}`GetElem` instance for {anchorName sundries}`Array` and {anchorName sundries}`Nat` requires a proof that the provided {anchorName sundries}`Nat` is smaller than the array.
In practice, these proofs often end up being passed to functions along with the indices.
Rather than passing an index and a proof separately, a type called {anchorName Fin}`Fin` can be used to bundle up the index and the proof into a single value.
This can make code easier to read.
The type {anchorTerm sundries}`Fin n` represents numbers that are strictly less than {anchorName sundries}`n`.
In other words, {anchorTerm sundries}`Fin 3` describes {anchorTerm sundries}`0`, {anchorTerm sundries}`1`, and {anchorTerm sundries}`2`, while {anchorTerm sundries}`Fin 0` has no values at all.
The definition of {anchorName Fin}`Fin` resembles {anchorName sundries}`Subtype`, as a {anchorTerm sundries}`Fin n` is a structure that contains a {anchorName Fin}`Nat` and a proof that it is less than {anchorName sundries}`n`:
```anchor Fin
structure Fin (n : Nat) where
val : Nat
isLt : LT.lt val n
```
Lean includes instances of {anchorName sundries}`ToString` and {anchorName sundries}`OfNat` that allow {anchorName Fin}`Fin` values to be conveniently used as numbers.
In other words, the output of {anchorTerm fiveFinEight}`#eval (5 : Fin 8)` is {anchorInfo fiveFinEight}`5`, rather than something like {lit}`{val := 5, isLt := _}`.
Instead of failing when the provided number is larger than the bound, the {anchorName sundries}`OfNat` instance for {anchorName Fin}`Fin` returns a value modulo the bound.
This means that {anchorTerm finOverflow}`#eval (45 : Fin 10)` results in {anchorInfo finOverflow}`5` rather than a compile-time error.
Used in a return type, a {anchorName Fin}`Fin` that represents a found index makes its connection to the data structure in which it was found clearer.
The {anchorName ArrayFind}`Array.find` in the {ref "proving-termination"}[previous section] returns an index that the caller cannot immediately use to perform lookups into the array, because the information about its validity has been lost.
A more specific type results in a value that can be used without making the program significantly more complicated:
```anchor ArrayFindHelper
def findHelper (arr : Array α) (p : α → Bool) (i : Nat) :
Option (Fin arr.size × α) :=
if h : i < arr.size then
let x := arr[i]
if p x then
some (⟨i, h⟩, x)
else findHelper arr p (i + 1)
else none
```
```anchor ArrayFind
def Array.find (arr : Array α) (p : α → Bool) : Option (Fin arr.size × α) :=
findHelper arr p 0
```
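The payoff of the more precise type is that the caller can use the returned index for lookups without any further bounds checks. A small sketch with built-in {lit}`Fin` (independent of the definitions above) illustrates the idea:
```lean
def arr : Array Nat := #[10, 20, 30]

-- A Fin arr.size carries its own proof of validity:
def idx : Fin arr.size := ⟨1, by decide⟩

-- The isLt field discharges the proof obligation at the use site:
#eval arr[idx.val]'idx.isLt  -- 20
```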
# Exercise
%%%
tag := "Fin-exercises"
%%%
Write a function {anchorTerm exercise}`Fin.next? : Fin n → Option (Fin n)` that returns the next largest {anchorName nextThreeFin}`Fin` when it would be in bounds, or {anchorName ArrayFindHelper}`none` if not.
Check that
```anchor nextThreeFin
#eval (3 : Fin 8).next?
```
outputs
```anchorInfo nextThreeFin
some 4
```
and that
```anchor nextSevenFin
#eval (7 : Fin 8).next?
```
outputs
```anchorInfo nextSevenFin
none
``` |
fp-lean/book/FPLean/ProgramsProofs/ArraysTermination.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.ProgramsProofs.Arrays"
#doc (Manual) "Arrays and Termination" =>
%%%
tag := "array-termination"
%%%
To write efficient code, it is important to select appropriate data structures.
Linked lists have their place: in some applications, the ability to share the tails of lists is very important.
However, most use cases for a variable-length sequential collection of data are better served by arrays, which have both less memory overhead and better locality.
Arrays, however, have two drawbacks relative to lists:
1. Arrays are accessed through indexing, rather than by pattern matching, which imposes {ref "props-proofs-indexing"}[proof obligations] in order to maintain safety.
2. A loop that processes an entire array from left to right is a tail-recursive function, but it does not have an argument that decreases on each call.
Making effective use of arrays requires knowing how to prove to Lean that an array index is in bounds, and how to prove that an array index that approaches the size of the array also causes the program to terminate.
Both of these are expressed using an inequality proposition, rather than propositional equality.
# Inequality
%%%
tag := "inequality"
%%%
Because different types have different notions of ordering, inequality is governed by two type classes, called {anchorName ordSugarClasses (module := Examples.Classes)}`LE` and {anchorName ordSugarClasses (module := Examples.Classes)}`LT`.
The table in the section on {ref "equality-and-ordering"}[standard type classes] describes how these classes relate to the syntax:
:::table +header
*
* Expression
* Desugaring
* Class Name
*
* {anchorTerm ltDesugar (module := Examples.Classes)}`x < y`
* {anchorTerm ltDesugar (module := Examples.Classes)}`LT.lt x y`
* {anchorName ordSugarClasses (module := Examples.Classes)}`LT`
*
* {anchorTerm leDesugar (module := Examples.Classes)}`x ≤ y`
* {anchorTerm leDesugar (module := Examples.Classes)}`LE.le x y`
* {anchorName ordSugarClasses (module := Examples.Classes)}`LE`
*
* {anchorTerm gtDesugar (module := Examples.Classes)}`x > y`
* {anchorTerm gtDesugar (module := Examples.Classes)}`LT.lt y x`
* {anchorName ordSugarClasses (module := Examples.Classes)}`LT`
*
* {anchorTerm geDesugar (module := Examples.Classes)}`x ≥ y`
* {anchorTerm geDesugar (module := Examples.Classes)}`LE.le y x`
* {anchorName ordSugarClasses (module := Examples.Classes)}`LE`
:::
In other words, a type may customize the meaning of the {anchorTerm ltDesugar (module:=Examples.Classes)}`<` and {anchorTerm leDesugar (module:=Examples.Classes)}`≤` operators, while {anchorTerm gtDesugar (module:=Examples.Classes)}`>` and {anchorTerm geDesugar (module:=Examples.Classes)}`≥` derive their meanings from {anchorTerm ltDesugar (module:=Examples.Classes)}`<` and {anchorTerm leDesugar (module:=Examples.Classes)}`≤`.
The classes {anchorName ordSugarClasses (module := Examples.Classes)}`LT` and {anchorName ordSugarClasses (module := Examples.Classes)}`LE` have methods that return propositions rather than {anchorName CoeBoolProp (module:=Examples.Classes)}`Bool`s:
```anchor less
class LE (α : Type u) where
le : α → α → Prop
class LT (α : Type u) where
lt : α → α → Prop
```
The instance of {anchorName LENat}`LE` for {anchorName LENat}`Nat` delegates to {anchorName LENat}`Nat.le`:
```anchor LENat
instance : LE Nat where
le := Nat.le
```
Defining {anchorName LENat}`Nat.le` requires a feature of Lean that has not yet been presented: it is an inductively-defined relation.
## Inductively-Defined Propositions, Predicates, and Relations
%%%
tag := "inductive-props"
%%%
{anchorName LENat}`Nat.le` is an _inductively-defined relation_.
Just as {kw}`inductive` can be used to create new datatypes, it can be used to create new propositions.
When a proposition takes an argument, it is referred to as a _predicate_ that may be true for some, but not all, potential arguments.
Propositions that take multiple arguments are called _relations_.
Each constructor of an inductively defined proposition is a way to prove it.
In other words, the declaration of the proposition describes the different forms of evidence that it is true.
A proposition with no arguments that has a single constructor can be quite easy to prove:
```anchor EasyToProve
inductive EasyToProve : Prop where
| heresTheProof : EasyToProve
```
The proof consists of using its constructor:
```anchor fairlyEasy
theorem fairlyEasy : EasyToProve := by
constructor
```
In fact, the proposition {anchorName True}`True`, which should always be easy to prove, is defined just like {anchorName EasyToProve}`EasyToProve`:
```anchor True
inductive True : Prop where
| intro : True
```
Inductively-defined propositions that don't take arguments are not nearly as interesting as inductively-defined datatypes.
This is because data is interesting in its own right—the natural number {anchorTerm IsThree}`3` is different from the number {lit}`35`, and someone who has ordered 3 pizzas will be upset if 35 arrive at their door 30 minutes later.
The constructors of a proposition describe ways in which the proposition can be true, but once a proposition has been proved, there is no need to know _which_ underlying constructors were used.
This is why most interesting inductively-defined types in the {anchorTerm IsThree}`Prop` universe take arguments.
:::paragraph
The inductively-defined predicate {anchorName IsThree}`IsThree` states that its argument is three:
```anchor IsThree
inductive IsThree : Nat → Prop where
| isThree : IsThree 3
```
The mechanism used here is just like {ref "column-pointers"}[indexed families such as {moduleName (module := Examples.DependentTypes.DB)}`HasCol`], except the resulting type is a proposition that can be proved rather than data that can be used.
:::
Using this predicate, it is possible to prove that three is indeed three:
```anchor threeIsThree
theorem three_is_three : IsThree 3 := by
constructor
```
Similarly, {anchorName IsFive}`IsFive` is a predicate that states that its argument is {anchorTerm IsFive}`5`:
```anchor IsFive
inductive IsFive : Nat → Prop where
| isFive : IsFive 5
```
If a number is three, then the result of adding two to it should be five.
This can be expressed as a theorem statement:
```anchor threePlusTwoFive0
theorem three_plus_two_five : IsThree n → IsFive (n + 2) := by
skip
```
The resulting goal has a function type:
```anchorError threePlusTwoFive0
unsolved goals
n : Nat
⊢ IsThree n → IsFive (n + 2)
```
Thus, the {anchorTerm threePlusTwoFive1}`intro` tactic can be used to convert the argument into an assumption:
```anchor threePlusTwoFive1
theorem three_plus_two_five : IsThree n → IsFive (n + 2) := by
intro three
```
```anchorError threePlusTwoFive1
unsolved goals
n : Nat
three : IsThree n
⊢ IsFive (n + 2)
```
Given the assumption that {anchorName threePlusTwoFive1a}`n` is three, it should be possible to use the constructor of {anchorName threePlusTwoFive1a}`IsFive` to complete the proof:
```anchor threePlusTwoFive1a
theorem three_plus_two_five : IsThree n → IsFive (n + 2) := by
intro three
constructor
```
However, this results in an error:
```anchorError threePlusTwoFive1a
Tactic `constructor` failed: no applicable constructor found
n : Nat
three : IsThree n
⊢ IsFive (n + 2)
```
This error occurs because {anchorTerm threePlusTwoFive2}`n + 2` is not definitionally equal to {anchorTerm IsFive}`5`.
In an ordinary function definition, dependent pattern matching on the assumption {anchorName threePlusTwoFive2}`three` could be used to refine {anchorName threePlusTwoFive2}`n` to {anchorTerm threeIsThree}`3`.
The tactic equivalent of dependent pattern matching is {anchorTerm threePlusTwoFive2}`cases`, which has a syntax similar to that of {kw}`induction`:
```anchor threePlusTwoFive2
theorem three_plus_two_five : IsThree n → IsFive (n + 2) := by
intro three
cases three with
| isThree => skip
```
In the remaining case, {anchorName threePlusTwoFive2}`n` has been refined to {anchorTerm IsThree}`3`:
```anchorError threePlusTwoFive2
unsolved goals
case isThree
⊢ IsFive (3 + 2)
```
Because {anchorTerm various}`3 + 2` is definitionally equal to {anchorTerm IsFive}`5`, the constructor is now applicable:
```anchor threePlusTwoFive3
theorem three_plus_two_five : IsThree n → IsFive (n + 2) := by
intro three
cases three with
| isThree => constructor
```
The standard false proposition {anchorName various}`False` has no constructors, so it is impossible to provide direct evidence for it.
The only way to provide evidence for {anchorName various}`False` is if an assumption is itself impossible, similarly to how {kw}`nomatch` can be used to mark code that the type system can see is unreachable.
As described in {ref "connectives"}[the initial Interlude on proofs], the negation {anchorTerm various}`Not A` is short for {anchorTerm various}`A → False`.
{anchorTerm various}`Not A` can also be written {anchorTerm various}`¬A`.
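As a small illustration, a decidable false statement can be refuted directly; the resulting proof of the negation is a function into {lit}`False` that can never actually be called:
```lean
-- ¬(2 + 2 = 5) unfolds to (2 + 2 = 5) → False
example : ¬(2 + 2 = 5) := by decide
```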
It is not the case that four is three:
```anchor fourNotThree0
theorem four_is_not_three : ¬ IsThree 4 := by
skip
```
The initial proof goal contains {anchorName fourNotThree1}`Not`:
```anchorError fourNotThree0
unsolved goals
⊢ ¬IsThree 4
```
The fact that it's actually a function type can be exposed using {anchorTerm fourNotThree1}`unfold`:
```anchor fourNotThree1
theorem four_is_not_three : ¬ IsThree 4 := by
unfold Not
```
```anchorError fourNotThree1
unsolved goals
⊢ IsThree 4 → False
```
Because the goal is a function type, {anchorTerm fourNotThree2}`intro` can be used to convert the argument into an assumption.
There is no need to keep {anchorTerm fourNotThree1}`unfold`, as {anchorTerm fourNotThree2}`intro` can unfold the definition of {anchorName fourNotThree1}`Not` itself:
```anchor fourNotThree2
theorem four_is_not_three : ¬ IsThree 4 := by
intro h
```
```anchorError fourNotThree2
unsolved goals
h : IsThree 4
⊢ False
```
In this proof, the {anchorTerm fourNotThreeDone}`cases` tactic solves the goal immediately:
```anchor fourNotThreeDone
theorem four_is_not_three : ¬ IsThree 4 := by
intro h
cases h
```
Just as a pattern match on a {anchorTerm otherEx (module:=Examples.DependentTypes)}`Vect String 2` doesn't need to include a case for {anchorName otherEx (module:=Examples.DependentTypes)}`Vect.nil`, a proof by cases over {anchorTerm fourNotThreeDone}`IsThree 4` doesn't need to include a case for {anchorName IsThree}`isThree`.
## Inequality of Natural Numbers
%%%
tag := "inequality-of-natural-numbers"
%%%
The definition of {anchorName NatLe}`Nat.le` has a parameter and an index:
```anchor NatLe
inductive Nat.le (n : Nat) : Nat → Prop
| refl : Nat.le n n
| step : Nat.le n m → Nat.le n (m + 1)
```
The parameter {anchorName NatLe}`n` is the number that should be smaller, while the index is the number that should be greater than or equal to {anchorName NatLe}`n`.
The {anchorName NatLe}`refl` constructor is used when both numbers are equal, while the {anchorName NatLe}`step` constructor is used when the index is greater than {anchorName NatLe}`n`.
From the perspective of evidence, a proof that $`n \leq m` consists of finding some number $`d` such that $`n + d = m`.
In Lean, the proof then consists of a {anchorName leNames}`Nat.le.refl` constructor wrapped by $`d` instances of {anchorName leNames}`Nat.le.step`.
Each {anchorName NatLe}`step` constructor adds one to its index argument, so $`d` {anchorName NatLe}`step` constructors add $`d` to the larger number.
For example, evidence that four is less than or equal to seven consists of three {anchorName NatLe}`step`s around a {anchorName NatLe}`refl`:
```anchor four_le_seven
theorem four_le_seven : 4 ≤ 7 :=
open Nat.le in
step (step (step refl))
```
The strict less-than relation is defined by adding one to the number on the left:
```anchor NatLt
def Nat.lt (n m : Nat) : Prop :=
Nat.le (n + 1) m
instance : LT Nat where
lt := Nat.lt
```
Evidence that four is strictly less than seven consists of two {anchorName four_lt_seven}`step`s around a {anchorName four_lt_seven}`refl`:
```anchor four_lt_seven
theorem four_lt_seven : 4 < 7 :=
open Nat.le in
step (step refl)
```
This is because {anchorTerm four_lt_seven}`4 < 7` is equivalent to {anchorTerm four_lt_seven_alt}`5 ≤ 7`.
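Because unfolding the definition of {lit}`Nat.lt` is purely a matter of computation, this equivalence is definitional and can be checked with {lit}`rfl` (here stated for the built-in definitions, which have the same shape):
```lean
-- 4 < 7 unfolds to Nat.le (4 + 1) 7, which is 5 ≤ 7
example : (4 < 7) = (5 ≤ 7) := rfl
```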
# Proving Termination
%%%
tag := "proving-termination"
%%%
The function {anchorName ArrayMap}`Array.map` transforms an array with a function, returning a new array that contains the result of applying the function to each element of the input array.
Writing it as a tail-recursive function follows the usual pattern of delegating to a function that passes the output array in an accumulator.
The accumulator is initialized with an empty array.
The accumulator-passing helper function also takes an argument that tracks the current index into the array, which starts at {anchorTerm ArrayMap}`0`:
```anchor ArrayMap
def Array.map (f : α → β) (arr : Array α) : Array β :=
arrayMapHelper f arr Array.empty 0
```
The helper should, at each iteration, check whether the index is still in bounds.
If so, it should loop again with the transformed element added to the end of the accumulator and the index incremented by {anchorTerm mapHelperIndexIssue}`1`.
If not, then it should terminate and return the accumulator.
An initial implementation of this code fails because Lean is unable to prove that the array index is valid:
```anchor mapHelperIndexIssue
def arrayMapHelper (f : α → β) (arr : Array α)
(soFar : Array β) (i : Nat) : Array β :=
if i < arr.size then
arrayMapHelper f arr (soFar.push (f arr[i])) (i + 1)
else soFar
```
```anchorError mapHelperIndexIssue
failed to prove index is valid, possible solutions:
- Use `have`-expressions to prove the index is valid
- Use `a[i]!` notation instead, runtime check is performed, and 'Panic' error message is produced if index is not valid
- Use `a[i]?` notation instead, result is an `Option` type
- Use `a[i]'h` notation instead, where `h` is a proof that index is valid
α : Type ?u.1811
β : Type ?u.1814
f : α → β
arr : Array α
soFar : Array β
i : Nat
⊢ i < arr.size
```
However, the conditional expression already checks the precise condition that the array index's validity demands (namely, {anchorTerm arrayMapHelperTermIssue}`i < arr.size`).
Adding a name to the {kw}`if` resolves the issue, because it adds an assumption that the array indexing tactic can use:
```anchor arrayMapHelperTermIssue
def arrayMapHelper (f : α → β) (arr : Array α)
(soFar : Array β) (i : Nat) : Array β :=
if inBounds : i < arr.size then
arrayMapHelper f arr (soFar.push (f arr[i])) (i + 1)
else soFar
```
Lean accepts the modified program, even though the recursive call is not made on an argument to one of the input constructors.
In fact, both the accumulator and the index grow, rather than shrinking.
Behind the scenes, Lean's proof automation constructs a termination proof.
Reconstructing this proof can make it easier to understand the cases that Lean cannot automatically recognize.
Why does {anchorName arrayMapHelperTermIssue}`arrayMapHelper` terminate?
Each iteration checks whether the index {anchorName arrayMapHelperTermIssue}`i` is still in bounds for the array {anchorName arrayMapHelperTermIssue}`arr`.
If so, {anchorName arrayMapHelperTermIssue}`i` is incremented and the loop repeats.
If not, the program terminates.
Because {anchorTerm arrayMapHelperTermIssue}`arr.size` is a finite number, {anchorName arrayMapHelperTermIssue}`i` can be incremented only a finite number of times.
Even though no argument to the function decreases on each call, {anchorTerm ArrayMapHelperOk}`arr.size - i` decreases toward zero.
The value that decreases at each recursive call is called a _measure_.
Lean can be instructed to use a specific expression as the measure of termination by providing a {kw}`termination_by` clause at the end of a definition.
For {anchorName ArrayMapHelperOk}`arrayMapHelper`, the explicit measure looks like this:
```anchor ArrayMapHelperOk
def arrayMapHelper (f : α → β) (arr : Array α)
(soFar : Array β) (i : Nat) : Array β :=
if inBounds : i < arr.size then
arrayMapHelper f arr (soFar.push (f arr[i])) (i + 1)
else soFar
termination_by arr.size - i
```
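The key fact behind this measure is that {lit}`arr.size - i` strictly decreases when {lit}`i` is incremented, so long as {lit}`i` is still in bounds. This can be stated as a standalone lemma; here it is discharged with the {lit}`omega` tactic for linear arithmetic:
```lean
-- When i < arr.size, incrementing i strictly decreases the measure:
example (arr : Array α) (i : Nat) (h : i < arr.size) :
    arr.size - (i + 1) < arr.size - i := by
  omega
```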
A similar termination proof can be used to write {anchorName ArrayFind}`Array.find`, a function that finds the first element in an array that satisfies a Boolean function and returns both the element and its index:
```anchor ArrayFind
def Array.find (arr : Array α) (p : α → Bool) :
Option (Nat × α) :=
findHelper arr p 0
```
Once again, the helper function terminates because {lit}`arr.size - i` decreases as {lit}`i` increases:
```anchor ArrayFindHelper
def findHelper (arr : Array α) (p : α → Bool)
(i : Nat) : Option (Nat × α) :=
if h : i < arr.size then
let x := arr[i]
if p x then
some (i, x)
else findHelper arr p (i + 1)
else none
```
Adding a question mark to {kw}`termination_by` (that is, using {kw}`termination_by?`) causes Lean to explicitly suggest the measure that it chose.
Clicking {lit}`[apply]` replaces {kw}`termination_by?` with the suggested measure:
```anchor ArrayFindHelperSugg
def findHelper (arr : Array α) (p : α → Bool)
(i : Nat) : Option (Nat × α) :=
if h : i < arr.size then
let x := arr[i]
if p x then
some (i, x)
else findHelper arr p (i + 1)
else none
termination_by?
```
```anchorInfo ArrayFindHelperSugg
Try this:
[apply] termination_by arr.size - i
```
Not all termination arguments are quite as simple as this one.
However, the basic structure of identifying some expression based on the function's arguments that will decrease in each call occurs in all termination proofs.
Sometimes, creativity can be required in order to figure out just why a function terminates, and sometimes Lean requires additional proofs in order to accept that the measure in fact decreases.
# Exercises
%%%
tag := "array-termination-exercises"
%%%
* Implement a {anchorTerm ForMArr}`ForM m (Array α)` instance on arrays using a tail-recursive accumulator-passing function and a {kw}`termination_by` clause.
* Reimplement {anchorName ArrayMap}`Array.map`, {anchorName ArrayFind}`Array.find`, and the {anchorName ForMArr}`ForM` instance using {kw}`for`{lit}` ... `{kw}`in`{lit}` ...` loops in the identity monad and compare the resulting code.
* Reimplement array reversal using a {kw}`for`{lit}` ... `{kw}`in`{lit}` ...` loop in the identity monad. Compare it to the tail-recursive function. |
fp-lean/book/FPLean/ProgramsProofs/Inequalities.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.ProgramsProofs.Inequalities"
#doc (Manual) "More Inequalities" =>
%%%
tag := "more-inequalities"
%%%
Lean's built-in proof automation is sufficient to check that {anchorName ArrayMapHelperOk (module:=Examples.ProgramsProofs.Arrays)}`arrayMapHelper` and {anchorName ArrayFindHelper (module:=Examples.ProgramsProofs.Arrays)}`findHelper` terminate.
All that was needed was to provide an expression whose value decreases with each recursive call.
However, Lean's built-in automation is not magic, and it often needs some help.
# Merge Sort
%%%
tag := "merge-sort"
%%%
One example of a function whose termination proof is non-trivial is merge sort on {moduleName}`List`.
Merge sort consists of two phases: first, a list is split in half.
Each half is sorted using merge sort, and then the results are merged using a function that combines two sorted lists into a larger sorted list.
The base cases are the empty list and the singleton list, both of which are already considered to be sorted.
To merge two sorted lists, there are two basic cases to consider:
1. If one of the input lists is empty, then the result is the other list.
2. If both lists are non-empty, then their heads should be compared. The result of the function is the smaller of the two heads, followed by the result of merging the remaining entries of both lists.
This is not structurally recursive on either list.
The recursion terminates because an entry is removed from one of the two lists in each recursive call, but it could be either list.
Behind the scenes, Lean uses this fact to prove that it terminates:
```anchor merge
def merge [Ord α] (xs : List α) (ys : List α) : List α :=
match xs, ys with
| [], _ => ys
| _, [] => xs
| x'::xs', y'::ys' =>
match Ord.compare x' y' with
| .lt | .eq => x' :: merge xs' (y' :: ys')
| .gt => y' :: merge (x'::xs') ys'
```
A simple way to split a list is to add each entry in the input list to two alternating output lists:
```anchor splitList
def splitList (lst : List α) : (List α × List α) :=
match lst with
| [] => ([], [])
| x :: xs =>
let (a, b) := splitList xs
(x :: b, a)
```
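Tracing the alternation on a small input can make the behavior concrete. Restating the function under a hypothetical name so the example is self-contained:
```lean
def splitList' (lst : List α) : (List α × List α) :=
  match lst with
  | [] => ([], [])
  | x :: xs =>
    let (a, b) := splitList' xs
    (x :: b, a)

-- Entries alternate between the two output lists:
#eval splitList' [1, 2, 3, 4, 5]  -- ([1, 3, 5], [2, 4])
```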
This splitting function is structurally recursive.
Merge sort checks whether a base case has been reached.
If so, it returns the input list.
If not, it splits the input, and merges the result of sorting each half:
```anchor mergeSortNoTerm
def mergeSort [Ord α] (xs : List α) : List α :=
if h : xs.length < 2 then
match xs with
| [] => []
| [x] => [x]
else
let halves := splitList xs
merge (mergeSort halves.fst) (mergeSort halves.snd)
```
Lean's pattern match compiler is able to tell that the assumption {anchorName mergeSortNoTerm}`h` introduced by the {kw}`if` that tests whether {anchorTerm mergeSortNoTerm}`xs.length < 2` rules out lists longer than one entry, so there is no “missing cases” error.
However, even though this program always terminates, it is not structurally recursive, and Lean is unable to automatically discover a decreasing measure:
```anchorError mergeSortNoTerm
fail to show termination for
mergeSort
with errors
failed to infer structural recursion:
Not considering parameter α of mergeSort:
it is unchanged in the recursive calls
Not considering parameter #2 of mergeSort:
it is unchanged in the recursive calls
Cannot use parameter xs:
failed to eliminate recursive application
mergeSort halves.fst
Could not find a decreasing measure.
The basic measures relate at each recursive call as follows:
(<, ≤, =: relation proved, ? all proofs failed, _: no proof attempted)
xs #1
1) 70:11-31 ? ?
2) 70:34-54 _ _
#1: xs.length
Please use `termination_by` to specify a decreasing measure.
```
The reason it terminates is that {anchorName mergeSortNoTerm}`splitList` always returns lists that are shorter than its input, at least when applied to lists that contain at least two elements.
Thus, the lengths of {anchorTerm mergeSortNoTerm}`halves.fst` and {anchorTerm mergeSortNoTerm}`halves.snd` are less than the length of {anchorName mergeSortNoTerm}`xs`.
This can be expressed using a {kw}`termination_by` clause:
```anchor mergeSortGottaProveIt
def mergeSort [Ord α] (xs : List α) : List α :=
if h : xs.length < 2 then
match xs with
| [] => []
| [x] => [x]
else
let halves := splitList xs
merge (mergeSort halves.fst) (mergeSort halves.snd)
termination_by xs.length
```
With this clause, the error message changes.
Instead of complaining that the function isn't structurally recursive, Lean instead points out that it was unable to automatically prove that {lit}`(splitList xs).fst.length < xs.length`:
```anchorError mergeSortGottaProveIt
failed to prove termination, possible solutions:
- Use `have`-expressions to prove the remaining goals
- Use `termination_by` to specify a different well-founded relation
- Use `decreasing_by` to specify your own tactic for discharging this kind of goal
α : Type u_1
xs : List α
h : ¬xs.length < 2
halves : List α × List α := splitList xs
⊢ (splitList xs).fst.length < xs.length
```
# Splitting a List Makes it Shorter
%%%
tag := "splitting-shortens"
%%%
It will also be necessary to prove that {lit}`(splitList xs).snd.length < xs.length`.
Because {anchorName splitList}`splitList` alternates between adding entries to the two lists, it is easiest to prove both statements at once, so the structure of the proof can follow the algorithm used to implement {anchorName splitList}`splitList`.
In other words, it is easiest to prove that {anchorTerm splitList_shorter_bad_ty}`∀(lst : List α), (splitList lst).fst.length < lst.length ∧ (splitList lst).snd.length < lst.length`.
Unfortunately, the statement is false.
In particular, {anchorTerm splitListEmpty}`splitList []` is {anchorTerm splitListEmpty}`([], [])`. Both output lists have length {anchorTerm ArrayMap (module:=Examples.ProgramsProofs.Arrays)}`0`, which is not less than {anchorTerm ArrayMap (module:=Examples.ProgramsProofs.Arrays)}`0`, the length of the input list.
Similarly, {anchorTerm splitListOne}`splitList ["basalt"]` evaluates to {anchorTerm splitListOne}`(["basalt"], [])`, and {anchorTerm splitListOne}`["basalt"]` is not shorter than {anchorTerm splitListOne}`["basalt"]`.
However, {anchorTerm splitListTwo}`splitList ["basalt", "granite"]` evaluates to {anchorTerm splitListTwo}`(["basalt"], ["granite"])`, and both of these output lists are shorter than the input list.
It turns out that the lengths of the output lists are always less than or equal to the length of the input list, but they are only strictly shorter when the input list contains at least two entries.
It is easiest to prove the former statement, then extend it to the latter.
Begin with a theorem statement:
```anchor splitList_shorter_le0
theorem splitList_shorter_le (lst : List α) :
(splitList lst).fst.length ≤ lst.length ∧
(splitList lst).snd.length ≤ lst.length := by
skip
```
```anchorError splitList_shorter_le0
unsolved goals
α : Type u_1
lst : List α
⊢ (splitList lst).fst.length ≤ lst.length ∧ (splitList lst).snd.length ≤ lst.length
```
Because {anchorName splitList}`splitList` is structurally recursive on the list, the proof should use induction.
The structural recursion in {anchorName splitList}`splitList` fits a proof by induction perfectly: the base case of the induction matches the base case of the recursion, and the inductive step matches the recursive call.
The {kw}`induction` tactic gives two goals:
```anchor splitList_shorter_le1a
theorem splitList_shorter_le (lst : List α) :
(splitList lst).fst.length ≤ lst.length ∧
(splitList lst).snd.length ≤ lst.length := by
induction lst with
| nil => skip
| cons x xs ih => skip
```
```anchorError splitList_shorter_le1a
unsolved goals
case nil
α : Type u_1
⊢ (splitList []).fst.length ≤ [].length ∧ (splitList []).snd.length ≤ [].length
```
```anchorError splitList_shorter_le1b
unsolved goals
case cons
α : Type u_1
x : α
xs : List α
ih : (splitList xs).fst.length ≤ xs.length ∧ (splitList xs).snd.length ≤ xs.length
⊢ (splitList (x :: xs)).fst.length ≤ (x :: xs).length ∧ (splitList (x :: xs)).snd.length ≤ (x :: xs).length
```
The goal for the {anchorName splitList_shorter_le2}`nil` case can be proved by invoking the simplifier and instructing it to unfold the definition of {anchorName splitList}`splitList`, because the length of the empty list is less than or equal to the length of the empty list.
Similarly, simplifying with {anchorName splitList}`splitList` in the {anchorName splitList_shorter_le2}`cons` case places {anchorName various}`Nat.succ` around the lengths in the goal:
```anchor splitList_shorter_le2
theorem splitList_shorter_le (lst : List α) :
(splitList lst).fst.length ≤ lst.length ∧
(splitList lst).snd.length ≤ lst.length := by
induction lst with
| nil => simp [splitList]
| cons x xs ih =>
simp [splitList]
```
```anchorError splitList_shorter_le2
unsolved goals
case cons
α : Type u_1
x : α
xs : List α
ih : (splitList xs).fst.length ≤ xs.length ∧ (splitList xs).snd.length ≤ xs.length
⊢ (splitList xs).snd.length ≤ xs.length ∧ (splitList xs).fst.length ≤ xs.length + 1
```
This is because the call to {anchorName various}`List.length` consumes the head of the list {anchorTerm splitList}`x :: xs`, converting it to a {anchorName various}`Nat.succ`, in both the length of the input list and the length of the first output list.
Writing {anchorTerm various}`A ∧ B` in Lean is short for {anchorTerm various}`And A B`.
{anchorName And}`And` is a structure type in the {anchorTerm And}`Prop` universe:
```anchor And
structure And (a b : Prop) : Prop where
intro ::
left : a
right : b
```
In other words, a proof of {anchorTerm various}`A ∧ B` consists of the {anchorName AndUse}`And.intro` constructor applied to a proof of {anchorName AndUse}`A` in the {anchorName And}`left` field and a proof of {anchorName AndUse}`B` in the {anchorName And}`right` field.
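For instance, both halves of a conjunction can be supplied to the constructor directly, or via the anonymous constructor syntax:
```lean
-- Proving a conjunction by providing both components:
example : 1 ≤ 2 ∧ 2 ≤ 3 := And.intro (by decide) (by decide)
example : 1 ≤ 2 ∧ 2 ≤ 3 := ⟨by decide, by decide⟩
```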
The {kw}`cases` tactic allows a proof to consider each constructor of a datatype or each potential proof of a proposition in turn.
It corresponds to a {kw}`match` expression without recursion.
Using {kw}`cases` on a structure results in the structure being broken apart, with an assumption added for each field of the structure, just as a pattern match expression extracts the fields of a structure for use in a program.
Because structures have only one constructor, using {kw}`cases` on a structure does not result in additional goals.
Because {anchorName splitList_shorter_le3}`ih` is a proof of {lit}`List.length (splitList xs).fst ≤ List.length xs ∧ List.length (splitList xs).snd ≤ List.length xs`, using {anchorTerm splitList_shorter_le3}`cases ih` results in an assumption that {lit}`List.length (splitList xs).fst ≤ List.length xs` and an assumption that {lit}`List.length (splitList xs).snd ≤ List.length xs`:
```anchor splitList_shorter_le3
theorem splitList_shorter_le (lst : List α) :
(splitList lst).fst.length ≤ lst.length ∧
(splitList lst).snd.length ≤ lst.length := by
induction lst with
| nil => simp [splitList]
| cons x xs ih =>
simp [splitList]
cases ih
```
```anchorError splitList_shorter_le3
unsolved goals
case cons.intro
α : Type u_1
x : α
xs : List α
left✝ : (splitList xs).fst.length ≤ xs.length
right✝ : (splitList xs).snd.length ≤ xs.length
⊢ (splitList xs).snd.length ≤ xs.length ∧ (splitList xs).fst.length ≤ xs.length + 1
```
Because the goal of the proof is also an {anchorName AndUse}`And`, the {kw}`constructor` tactic can be used to apply {anchorName AndUse}`And.intro`, resulting in a goal for each argument:
```anchor splitList_shorter_le4
theorem splitList_shorter_le (lst : List α) :
(splitList lst).fst.length ≤ lst.length ∧
(splitList lst).snd.length ≤ lst.length := by
induction lst with
| nil => simp [splitList]
| cons x xs ih =>
simp [splitList]
cases ih
constructor
```
```anchorError splitList_shorter_le4
unsolved goals
case cons.intro.left
α : Type u_1
x : α
xs : List α
left✝ : (splitList xs).fst.length ≤ xs.length
right✝ : (splitList xs).snd.length ≤ xs.length
⊢ (splitList xs).snd.length ≤ xs.length
case cons.intro.right
α : Type u_1
x : α
xs : List α
left✝ : (splitList xs).fst.length ≤ xs.length
right✝ : (splitList xs).snd.length ≤ xs.length
⊢ (splitList xs).fst.length ≤ xs.length + 1
```
The {anchorTerm splitList_shorter_le5}`left` goal is identical to the {lit}`right✝` assumption, so the {kw}`assumption` tactic dispatches it:
```anchor splitList_shorter_le5
theorem splitList_shorter_le (lst : List α) :
(splitList lst).fst.length ≤ lst.length ∧
(splitList lst).snd.length ≤ lst.length := by
induction lst with
| nil => simp [splitList]
| cons x xs ih =>
simp [splitList]
cases ih
constructor
case left => assumption
```
```anchorError splitList_shorter_le5
unsolved goals
case cons.intro.right
α : Type u_1
x : α
xs : List α
left✝ : (splitList xs).fst.length ≤ xs.length
right✝ : (splitList xs).snd.length ≤ xs.length
⊢ (splitList xs).fst.length ≤ xs.length + 1
```
The {anchorTerm splitList_shorter_le}`right` goal resembles the {lit}`left✝` assumption, except the goal adds a {anchorTerm le_succ_of_le}`+ 1` to the length of the input list.
It's time to prove that the inequality holds.
## Adding One to the Greater Side
%%%
tag := "le-succ-of-le"
%%%
The inequality needed to prove {anchorName splitList_shorter_le}`splitList_shorter_le` is {anchorTerm le_succ_of_le_statement}`∀(n m : Nat), n ≤ m → n ≤ m + 1`.
The incoming assumption that {anchorTerm le_succ_of_le_statement}`n ≤ m` essentially tracks the difference between {anchorName le_succ_of_le_statement}`n` and {anchorName le_succ_of_le_statement}`m` in the number of {anchorName le_succ_of_le_apply}`Nat.le.step` constructors.
Thus, the proof should add an extra {anchorName le_succ_of_le_apply}`Nat.le.step` in the base case.
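For instance (a standalone sketch, not part of the example project), the gap of two between {lit}`4` and {lit}`6` corresponds to two {lit}`Nat.le.step` constructors wrapped around {lit}`Nat.le.refl`:

```lean
-- Each `step` accounts for one unit of difference between the two numbers:
-- `refl` proves 4 ≤ 4, the inner `step` proves 4 ≤ 5, the outer 4 ≤ 6.
example : 4 ≤ 6 := Nat.le.step (Nat.le.step Nat.le.refl)
```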
Starting out, the statement reads:
```anchor le_succ_of_le0
theorem Nat.le_succ_of_le : n ≤ m → n ≤ m + 1 := by
skip
```
```anchorError le_succ_of_le0
unsolved goals
n m : Nat
⊢ n ≤ m → n ≤ m + 1
```
The first step is to introduce a name for the assumption that {anchorTerm le_succ_of_le1}`n ≤ m`:
```anchor le_succ_of_le1
theorem Nat.le_succ_of_le : n ≤ m → n ≤ m + 1 := by
intro h
```
```anchorError le_succ_of_le1
unsolved goals
n m : Nat
h : n ≤ m
⊢ n ≤ m + 1
```
The proof is by induction on this assumption:
```anchor le_succ_of_le2a
theorem Nat.le_succ_of_le : n ≤ m → n ≤ m + 1 := by
intro h
induction h with
| refl => skip
| step _ ih => skip
```
In the case for {anchorName le_succ_of_le2a}`refl`, where {lit}`n = m`, the goal is to prove that {lit}`n ≤ n + 1`:
```anchorError le_succ_of_le2a
unsolved goals
case refl
n m : Nat
⊢ n ≤ n + 1
```
In the case for {anchorName le_succ_of_le2b}`step`, the goal is to prove that {anchorTerm le_succ_of_le2b}`n ≤ m + 1` under the assumption that {anchorTerm le_succ_of_le2b}`n ≤ m`:
```anchorError le_succ_of_le2b
unsolved goals
case step
n m m✝ : Nat
a✝ : n.le m✝
ih : n ≤ m✝ + 1
⊢ n ≤ m✝.succ + 1
```
For the {anchorName le_succ_of_le3}`refl` case, the {anchorName le_succ_of_le3}`step` constructor can be applied:
```anchor le_succ_of_le3
theorem Nat.le_succ_of_le : n ≤ m → n ≤ m + 1 := by
intro h
induction h with
| refl => constructor
| step _ ih => skip
```
```anchorError le_succ_of_le3
unsolved goals
case refl.a
n m : Nat
⊢ n.le n
```
After {anchorName Nat.le_ctors}`step`, {anchorName Nat.le_ctors}`refl` can be used, which leaves only the goal for {anchorName le_succ_of_le4}`step`:
```anchor le_succ_of_le4
theorem Nat.le_succ_of_le : n ≤ m → n ≤ m + 1 := by
intro h
induction h with
| refl => constructor; constructor
| step _ ih => skip
```
```anchorError le_succ_of_le4
unsolved goals
case step
n m m✝ : Nat
a✝ : n.le m✝
ih : n ≤ m✝ + 1
⊢ n ≤ m✝.succ + 1
```
For the step, applying the {anchorName Nat.le_ctors}`step` constructor transforms the goal into the induction hypothesis:
```anchor le_succ_of_le5
theorem Nat.le_succ_of_le : n ≤ m → n ≤ m + 1 := by
intro h
induction h with
| refl => constructor; constructor
| step _ ih => constructor
```
```anchorError le_succ_of_le5
unsolved goals
case step.a
n m m✝ : Nat
a✝ : n.le m✝
ih : n ≤ m✝ + 1
⊢ n.le (m✝ + 1)
```
The final proof is as follows:
```anchor le_succ_of_le
theorem Nat.le_succ_of_le : n ≤ m → n ≤ m + 1 := by
intro h
induction h with
| refl => constructor; constructor
| step => constructor; assumption
```
To reveal what's going on behind the scenes, the {kw}`apply` and {kw}`exact` tactics can be used to indicate exactly which constructor is being applied.
The {kw}`apply` tactic solves the current goal by applying a function or constructor whose return type matches, creating new goals for each argument that was not provided, while {kw}`exact` fails if any new goals would be needed:
```anchor le_succ_of_le_apply
theorem Nat.le_succ_of_le : n ≤ m → n ≤ m + 1 := by
intro h
induction h with
| refl => apply Nat.le.step; exact Nat.le.refl
| step _ ih => apply Nat.le.step; exact ih
```
The proof can be golfed:
```anchor le_succ_of_le_golf
theorem Nat.le_succ_of_le (h : n ≤ m) : n ≤ m + 1:= by
induction h <;> repeat (first | constructor | assumption)
```
In this short tactic script, both goals introduced by {kw}`induction` are addressed using {anchorTerm le_succ_of_le_golf}`repeat (first | constructor | assumption)`.
The tactic {lit}`first | T1 | T2 | ... | Tn` means to try {lit}`T1` through {lit}`Tn` in order, using the first tactic that succeeds.
In other words, {anchorTerm le_succ_of_le_golf}`repeat (first | constructor | assumption)` applies constructors as long as it can, and then attempts to solve the goal using an assumption.
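The same combination can be seen on a small standalone goal (a sketch, not part of the example project): two applications of {lit}`constructor` peel off the two {lit}`+ 1`s, after which {lit}`assumption` closes the goal:

```lean
example (h : n ≤ m) : n ≤ m + 2 := by
  -- `constructor` applies `Nat.le.step` twice; once the goal is `n ≤ m`,
  -- `constructor` no longer applies and `assumption` uses `h`
  repeat (first | constructor | assumption)
```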
The proof can be shortened even further by using {tactic}`grind`, which includes a solver for linear arithmetic:
```anchor le_succ_of_le_grind
theorem Nat.le_succ_of_le (h : n ≤ m) : n ≤ m + 1:= by
grind
```
Finally, the proof can be written as a recursive function:
```anchor le_succ_of_le_recursive
theorem Nat.le_succ_of_le : n ≤ m → n ≤ m + 1
| .refl => .step .refl
| .step h => .step (Nat.le_succ_of_le h)
```
Each style of proof can be appropriate to different circumstances.
The detailed proof script is useful in cases where beginners may be reading the code, or where the steps of the proof provide some kind of insight.
The short, highly-automated proof script is typically easier to maintain, because automation is frequently both flexible and robust in the face of small changes to definitions and datatypes.
The recursive function is typically both harder to understand from the perspective of mathematical proofs and harder to maintain, but it can be a useful bridge for programmers who are beginning to work with interactive theorem proving.
## Finishing the Proof
%%%
tag := "finishing-splitList-shorter-proof"
%%%
Now that both helper theorems have been proved, the rest of {anchorName splitList_shorter_le5}`splitList_shorter_le` will be completed quickly.
The current proof state has one goal remaining:
```anchorError splitList_shorter_le5
unsolved goals
case cons.intro.right
α : Type u_1
x : α
xs : List α
left✝ : (splitList xs).fst.length ≤ xs.length
right✝ : (splitList xs).snd.length ≤ xs.length
⊢ (splitList xs).fst.length ≤ xs.length + 1
```
Using {anchorName splitList_shorter_le}`Nat.le_succ_of_le` together with the {lit}`left✝` assumption completes the proof:
```anchor splitList_shorter_le
theorem splitList_shorter_le (lst : List α) :
(splitList lst).fst.length ≤ lst.length ∧
(splitList lst).snd.length ≤ lst.length := by
induction lst with
| nil => simp [splitList]
| cons x xs ih =>
simp [splitList]
cases ih
constructor
case left => assumption
case right =>
apply Nat.le_succ_of_le
assumption
```
The next step is to return to the actual theorem that is needed to prove that merge sort terminates: that so long as a list has at least two entries, both results of splitting it are strictly shorter.
```anchor splitList_shorter_start
theorem splitList_shorter (lst : List α) (_ : lst.length ≥ 2) :
(splitList lst).fst.length < lst.length ∧
(splitList lst).snd.length < lst.length := by
skip
```
```anchorError splitList_shorter_start
unsolved goals
α : Type u_1
lst : List α
x✝ : lst.length ≥ 2
⊢ (splitList lst).fst.length < lst.length ∧ (splitList lst).snd.length < lst.length
```
Pattern matching works just as well in tactic scripts as it does in programs.
Because {anchorName splitList_shorter_1}`lst` has at least two entries, they can be exposed with {kw}`match`, which also refines the type through dependent pattern matching:
```anchor splitList_shorter_1
theorem splitList_shorter (lst : List α) (_ : lst.length ≥ 2) :
(splitList lst).fst.length < lst.length ∧
(splitList lst).snd.length < lst.length := by
match lst with
| x :: y :: xs =>
skip
```
```anchorError splitList_shorter_1
unsolved goals
α : Type u_1
lst : List α
x y : α
xs : List α
x✝ : (x :: y :: xs).length ≥ 2
⊢ (splitList (x :: y :: xs)).fst.length < (x :: y :: xs).length ∧
(splitList (x :: y :: xs)).snd.length < (x :: y :: xs).length
```
Simplifying using {anchorName splitList}`splitList` removes {anchorName splitList_shorter_2}`x` and {anchorName splitList_shorter_2}`y`, resulting in the computed lengths of lists each gaining a {anchorTerm le_succ_of_le}`+ 1`:
```anchor splitList_shorter_2
theorem splitList_shorter (lst : List α) (_ : lst.length ≥ 2) :
(splitList lst).fst.length < lst.length ∧
(splitList lst).snd.length < lst.length := by
match lst with
| x :: y :: xs =>
simp [splitList]
```
```anchorError splitList_shorter_2
unsolved goals
α : Type u_1
lst : List α
x y : α
xs : List α
x✝ : (x :: y :: xs).length ≥ 2
⊢ (splitList xs).fst.length < xs.length + 1 ∧ (splitList xs).snd.length < xs.length + 1
```
Replacing {anchorTerm splitList_shorter_2b}`simp` with {anchorTerm splitList_shorter_2b}`simp +arith` removes these {anchorTerm le_succ_of_le}`+ 1`s, because {anchorTerm splitList_shorter_2b}`simp +arith` makes use of the fact that {anchorTerm Nat.lt_imp}`n + 1 < m + 1` implies {anchorTerm Nat.lt_imp}`n < m`:
```anchor splitList_shorter_2b
theorem splitList_shorter (lst : List α) (_ : lst.length ≥ 2) :
(splitList lst).fst.length < lst.length ∧
(splitList lst).snd.length < lst.length := by
match lst with
| x :: y :: xs =>
simp +arith [splitList]
```
```anchorError splitList_shorter_2b
unsolved goals
α : Type u_1
lst : List α
x y : α
xs : List α
x✝ : (x :: y :: xs).length ≥ 2
⊢ (splitList xs).fst.length ≤ xs.length ∧ (splitList xs).snd.length ≤ xs.length
```
This goal now matches {anchorName splitList_shorter}`splitList_shorter_le`, which can be used to conclude the proof:
```anchor splitList_shorter
theorem splitList_shorter (lst : List α) (_ : lst.length ≥ 2) :
(splitList lst).fst.length < lst.length ∧
(splitList lst).snd.length < lst.length := by
match lst with
| x :: y :: xs =>
simp +arith [splitList]
apply splitList_shorter_le
```
The facts needed to prove that {anchorName mergeSort}`mergeSort` terminates can be pulled out of the resulting {anchorName AndUse}`And`:
```anchor splitList_shorter_sides
theorem splitList_shorter_fst (lst : List α) (h : lst.length ≥ 2) :
(splitList lst).fst.length < lst.length :=
splitList_shorter lst h |>.left
theorem splitList_shorter_snd (lst : List α) (h : lst.length ≥ 2) :
(splitList lst).snd.length < lst.length :=
splitList_shorter lst h |>.right
```
## A Simpler Proof
%%%
tag := "splitList-shorter-le-simpler-proof"
%%%
:::paragraph
Instead of using ordinary induction, {anchorName splitList_shorter_le_funInd1}`splitList_shorter_le` can be proved using functional induction, resulting in one case for each branch of {anchorName splitList}`splitList`:
```anchor splitList_shorter_le_funInd1
theorem splitList_shorter_le (lst : List α) :
(splitList lst).fst.length ≤ lst.length ∧
(splitList lst).snd.length ≤ lst.length := by
fun_induction splitList with
| case1 => skip
| case2 x xs a b splitEq ih => skip
```
The first case matches the base case of {anchorName splitList}`splitList`.
_Both_ applications of {anchorName splitList}`splitList` have been replaced by the result of this first branch:
```anchorError splitList_shorter_le_funInd1
unsolved goals
case case1
α : Type u_1
⊢ ([], []).fst.length ≤ [].length ∧ ([], []).snd.length ≤ [].length
```
The second case matches the recursive branch of {anchorName splitList}`splitList`.
In addition to the induction hypothesis, the value of the {anchorTerm splitList}`let` in {anchorName splitList}`splitList` is tracked in an assumption:
```anchorError splitList_shorter_le_funInd1
unsolved goals
case case2
α : Type u_1
x : α
xs a b : List α
splitEq : splitList xs = (a, b)
ih : (splitList xs).fst.length ≤ xs.length ∧ (splitList xs).snd.length ≤ xs.length
⊢ (x :: b, a).fst.length ≤ (x :: xs).length ∧ (x :: b, a).snd.length ≤ (x :: xs).length
```
:::
While the second case looks a bit complicated, everything needed to complete the proof is present.
Indeed, {tactic}`grind` can prove both goals immediately:
```anchor splitList_shorter_le_funInd2
theorem splitList_shorter_le (lst : List α) :
(splitList lst).fst.length ≤ lst.length ∧
(splitList lst).snd.length ≤ lst.length := by
fun_induction splitList <;> grind
```
# Merge Sort Terminates
%%%
tag := "merge-sort-terminates"
%%%
Merge sort has two recursive calls, one for each sub-list returned by {anchorName splitList}`splitList`.
Each recursive call will require a proof that the length of the list being passed to it is shorter than the length of the input list.
It's usually convenient to write a termination proof in two steps: first, write down the propositions that will allow Lean to verify termination, and then prove them.
Otherwise, it's possible to put a lot of effort into proving the propositions, only to find out that they aren't quite what's needed to establish that the recursive calls are on smaller inputs.
The {lit}`sorry` tactic can prove any goal, even false ones.
It isn't intended for use in production code or final proofs, but it is a convenient way to “sketch out” a proof or program ahead of time.
Any definitions or theorems that use {lit}`sorry` are annotated with a warning.
The initial sketch of {anchorName mergeSortSorry}`mergeSort`'s termination argument that uses {lit}`sorry` can be written by copying the goals that Lean couldn't prove into {kw}`have`-expressions.
In Lean, {kw}`have` is similar to {kw}`let`.
When using {kw}`have`, the name is optional.
Typically, {kw}`let` is used to define names that refer to interesting values, while {kw}`have` is used to locally prove propositions that can be found when Lean is searching for evidence that an array lookup is in-bounds or that a function terminates.
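As a small standalone illustration (a sketch, not part of the example project), the anonymous {kw}`have` below provides exactly the evidence that Lean looks for when checking that the lookup {lit}`xs[0]` is in bounds:

```lean
example (xs : List Nat) (h : xs.length ≥ 1) : Nat :=
  -- The anonymous proposition is found automatically when Lean
  -- searches for evidence that the index `0` is in bounds
  have : 0 < xs.length := h
  xs[0]
```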
```anchor mergeSortSorry
def mergeSort [Ord α] (xs : List α) : List α :=
if h : xs.length < 2 then
match xs with
| [] => []
| [x] => [x]
else
let halves := splitList xs
have : halves.fst.length < xs.length := by
sorry
have : halves.snd.length < xs.length := by
sorry
merge (mergeSort halves.fst) (mergeSort halves.snd)
termination_by xs.length
```
The warning is located on the name {anchorName mergeSortSorry}`mergeSort`:
```anchorWarning mergeSortSorry
declaration uses 'sorry'
```
Because there are no errors, the proposed propositions are enough to establish termination.
The proofs begin by applying the helper theorems:
```anchor mergeSortNeedsGte
def mergeSort [Ord α] (xs : List α) : List α :=
if h : xs.length < 2 then
match xs with
| [] => []
| [x] => [x]
else
let halves := splitList xs
have : halves.fst.length < xs.length := by
apply splitList_shorter_fst
have : halves.snd.length < xs.length := by
apply splitList_shorter_snd
merge (mergeSort halves.fst) (mergeSort halves.snd)
termination_by xs.length
```
Both proofs fail, because {anchorName mergeSortNeedsGte}`splitList_shorter_fst` and {anchorName mergeSortNeedsGte}`splitList_shorter_snd` both require a proof that {anchorTerm mergeSortGteStarted}`xs.length ≥ 2`:
```anchorError mergeSortNeedsGte
unsolved goals
case h
α : Type ?u.80367
inst✝ : Ord α
xs : List α
h : ¬xs.length < 2
halves : List α × List α := ⋯
⊢ xs.length ≥ 2
```
To check that this will be enough to complete the proof, add it using {lit}`sorry` and check for errors:
```anchor mergeSortGteStarted
def mergeSort [Ord α] (xs : List α) : List α :=
if h : xs.length < 2 then
match xs with
| [] => []
| [x] => [x]
else
let halves := splitList xs
have : xs.length ≥ 2 := by sorry
have : halves.fst.length < xs.length := by
apply splitList_shorter_fst
assumption
have : halves.snd.length < xs.length := by
apply splitList_shorter_snd
assumption
merge (mergeSort halves.fst) (mergeSort halves.snd)
termination_by xs.length
```
Once again, there is only a warning.
```anchorWarning mergeSortGteStarted
declaration uses 'sorry'
```
There is one promising assumption available: {lit}`h : ¬List.length xs < 2`, which comes from the {kw}`if`.
Clearly, if it is not the case that {anchorTerm mergeSort}`xs.length < 2`, then {anchorTerm mergeSort}`xs.length ≥ 2`.
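This step can also be checked in isolation (a sketch, not part of the example project); {tactic}`omega`, Lean's decision procedure for linear arithmetic over natural numbers and integers, proves it directly:

```lean
example (xs : List α) (h : ¬ xs.length < 2) : xs.length ≥ 2 := by
  omega
```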
The {anchorTerm mergeSort}`grind` tactic solves this goal, and the program is now complete:
```anchor mergeSort
def mergeSort [Ord α] (xs : List α) : List α :=
if h : xs.length < 2 then
match xs with
| [] => []
| [x] => [x]
else
let halves := splitList xs
have : xs.length ≥ 2 := by
grind
have : halves.fst.length < xs.length := by
apply splitList_shorter_fst
assumption
have : halves.snd.length < xs.length := by
apply splitList_shorter_snd
assumption
merge (mergeSort halves.fst) (mergeSort halves.snd)
termination_by xs.length
```
The function can be tested on examples:
```anchor mergeSortRocks
#eval mergeSort ["soapstone", "geode", "mica", "limestone"]
```
```anchorInfo mergeSortRocks
["geode", "limestone", "mica", "soapstone"]
```
```anchor mergeSortNumbers
#eval mergeSort [5, 3, 22, 15]
```
```anchorInfo mergeSortNumbers
[3, 5, 15, 22]
```
# Division as Iterated Subtraction
%%%
tag := "division-as-iterated-subtraction"
%%%
Just as multiplication is iterated addition and exponentiation is iterated multiplication, division can be understood as iterated subtraction.
The {ref "recursive-functions"}[very first description of recursive functions in this book] presents a version of division that terminates when the divisor is not zero, but that Lean does not accept.
Proving that division terminates requires the use of a fact about inequalities.
Lean cannot prove that this definition of division terminates:
```anchor divTermination (module := Examples.ProgramsProofs.Div)
def div (n k : Nat) : Nat :=
if n < k then
0
else
1 + div (n - k) k
```
```anchorError divTermination (module := Examples.ProgramsProofs.Div)
fail to show termination for
div
with errors
failed to infer structural recursion:
Not considering parameter k of div:
it is unchanged in the recursive calls
Cannot use parameter k:
failed to eliminate recursive application
div (n - k) k
failed to prove termination, possible solutions:
- Use `have`-expressions to prove the remaining goals
- Use `termination_by` to specify a different well-founded relation
- Use `decreasing_by` to specify your own tactic for discharging this kind of goal
k n : Nat
h✝ : ¬n < k
⊢ n - k < n
```
That's a good thing, because it doesn't!
When {anchorName divTermination (module:=Examples.ProgramsProofs.Div)}`k` is {anchorTerm divTermination (module:=Examples.ProgramsProofs.Div)}`0`, the value of {anchorName divTermination (module:=Examples.ProgramsProofs.Div)}`n` does not decrease, so the program is an infinite loop.
:::paragraph
Rewriting the function to take evidence that {anchorName divRecursiveWithProof (module:=Examples.ProgramsProofs.Div)}`k` is not {anchorTerm divRecursiveNeedsProof (module:=Examples.ProgramsProofs.Div)}`0` allows Lean to automatically prove termination:
```anchor divRecursiveNeedsProof (module := Examples.ProgramsProofs.Div)
def div (n k : Nat) (ok : k ≠ 0) : Nat :=
if h : n < k then
0
else
1 + div (n - k) k ok
```
:::
This definition of {anchorName divRecursiveWithProof (module:=Examples.ProgramsProofs.Div)}`div` terminates because the first argument {anchorName divRecursiveWithProof (module:=Examples.ProgramsProofs.Div)}`n` is smaller on each recursive call.
This can be expressed using a {kw}`termination_by` clause:
```anchor divRecursiveWithProof (module := Examples.ProgramsProofs.Div)
def div (n k : Nat) (ok : k ≠ 0) : Nat :=
if h : n < k then
0
else
1 + div (n - k) k ok
termination_by n
```
# Exercises
%%%
tag := "inequalities-exercises"
%%%
Prove the following theorems without using {tactic}`grind`:
* For all natural numbers $`n`, $`0 < n + 1`.
* For all natural numbers $`n`, $`0 \leq n`.
* For all natural numbers $`n` and $`k`, $`(n + 1) - (k + 1) = n - k`.
* For all natural numbers $`n` and $`k`, if $`k < n` then $`n \neq 0`.
* For all natural numbers $`n`, $`n - n = 0`.
* For all natural numbers $`n` and $`k`, if $`n + 1 < k` then $`n < k`.

fp-lean/book/FPLean/ProgramsProofs/InsertionSort.lean

import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.ProgramsProofs.InsertionSort"
#doc (Manual) "Insertion Sort and Array Mutation" =>
%%%
tag := "insertion-sort-mutation"
%%%
While insertion sort does not have the optimal worst-case time complexity for a sorting algorithm, it still has a number of useful properties:
* It is simple and straightforward to implement and understand
* It is an in-place algorithm, requiring no additional space to run
* It is a stable sort
* It is fast when the input is already almost sorted
In-place algorithms are particularly useful in Lean due to the way it manages memory.
In some cases, operations that would normally copy an array can be optimized into mutation.
This includes swapping elements in an array.
Most languages and run-time systems with automatic memory management, including JavaScript, the JVM, and .NET, use tracing garbage collection.
When memory needs to be reclaimed, the system starts at a number of _roots_ (such as the call stack and global values) and then determines which values can be reached by recursively chasing pointers.
Any values that can't be reached are deallocated, freeing memory.
Reference counting is an alternative to tracing garbage collection that is used by a number of languages, including Python, Swift, and Lean.
In a system with reference counting, each object in memory has a field that tracks how many references there are to it.
When a new reference is established, the counter is incremented.
When a reference ceases to exist, the counter is decremented.
When the counter reaches zero, the object is immediately deallocated.
Reference counting has one major disadvantage compared to a tracing garbage collector: circular references can lead to memory leaks.
If object $`A` references object $`B` , and object $`B` references object $`A`, they will never be deallocated, even if nothing else in the program references either $`A` or $`B`.
Circular references result either from uncontrolled recursion or from mutable references.
Because Lean supports neither, it is impossible to construct circular references.
Reference counting means that the Lean runtime system's primitives for allocating and deallocating data structures can check whether a reference count is about to fall to zero, and re-use an existing object instead of allocating a new one.
This is particularly important when working with large arrays.
An implementation of insertion sort for Lean arrays should satisfy the following criteria:
1. Lean should accept the function without a {kw}`partial` annotation
2. If passed an array to which there are no other references, it should modify the array in-place rather than allocating a new one
The first criterion is easy to check: if Lean accepts the definition, then it is satisfied.
The second, however, requires a means of testing it.
Lean provides a built-in function called {anchorName dbgTraceIfSharedSig}`dbgTraceIfShared` with the following signature:
```anchor dbgTraceIfSharedSig
#check dbgTraceIfShared
```
```anchorInfo dbgTraceIfSharedSig
dbgTraceIfShared.{u} {α : Type u} (s : String) (a : α) : α
```
It takes a string and a value as arguments, and prints a message that uses the string to standard error if the value has more than one reference, returning the value.
This is not, strictly speaking, a pure function.
However, it is intended to be used only during development to check that a function is in fact able to re-use memory rather than allocating and copying.
When learning to use {anchorName dbgTraceIfSharedSig}`dbgTraceIfShared`, it's important to know that {kw}`#eval` will report many more values as shared than would be shared in compiled code.
This can be confusing.
It's important to build an executable with {lit}`lake` rather than experimenting in an editor.
Insertion sort consists of two loops.
The outer loop moves a pointer from left to right across the array to be sorted.
After each iteration, the region of the array to the left of the pointer is sorted, while the region to the right may not yet be sorted.
The inner loop takes the element pointed to by the pointer and moves it to the left until the appropriate location has been found and the loop invariant has been restored.
In other words, each iteration inserts the next element of the array into the appropriate location in the sorted region.
# The Inner Loop
%%%
tag := "inner-insertion-sort-loop"
%%%
The inner loop of insertion sort can be implemented as a tail-recursive function that takes the array and the index of the element being inserted as arguments.
The element being inserted is repeatedly swapped with the element to its left until either the element to the left is smaller than or equal to it, or the beginning of the array is reached.
The inner loop is structurally recursive on the {anchorName insertionSortLoop}`Nat` that is inside the {anchorName insertSorted}`Fin` used to index into the array:
```anchor insertSorted
def insertSorted [Ord α] (arr : Array α) (i : Fin arr.size) : Array α :=
match i with
| ⟨0, _⟩ => arr
| ⟨i' + 1, _⟩ =>
have : i' < arr.size := by
grind
match Ord.compare arr[i'] arr[i] with
| .lt | .eq => arr
| .gt =>
insertSorted (arr.swap i' i) ⟨i', by simp [*]⟩
```
If the index {anchorName insertSorted}`i` is {anchorTerm insertSorted}`0`, then the element being inserted into the sorted region has reached the beginning of the region and is the smallest.
If the index is {anchorTerm insertSorted}`i' + 1`, then the element at {anchorName insertSorted}`i'` should be compared to the element at {anchorName insertSorted}`i`.
Note that while {anchorName insertSorted}`i` is a {anchorTerm insertSorted}`Fin arr.size`, {anchorName insertSorted}`i'` is just a {anchorName insertionSortLoop}`Nat` because it results from the {anchorName names}`val` field of {anchorName insertSorted}`i`.
Nonetheless, the proof automation used for checking array index notation includes a solver for linear integer arithmetic, so {anchorName insertSorted}`i'` is automatically usable as an index.
The two elements are looked up and compared.
If the element to the left is less than or equal to the element being inserted, then the loop is finished and the invariant has been restored.
If the element to the left is greater than the element being inserted, then the elements are swapped and the inner loop begins again.
{anchorName names}`Array.swap` takes both of its indices as {anchorName names}`Nat`s, using the same tactics as array indexing behind the scenes to ensure that they are in bounds.
Nonetheless, the {anchorName names}`Fin` used for the recursive call needs a proof that {anchorName insertSorted}`i'` is in bounds for the result of swapping two elements.
The {anchorTerm insertSorted}`simp` tactic's database contains the fact that swapping two elements of an array doesn't change its size, and the {anchorTerm insertSorted}`[*]` argument instructs it to additionally use the assumption introduced by {kw}`have`.
Omitting the {kw}`have`-expression with the proof that {anchorTerm insertSorted}`i' < arr.size` reveals the following goal:
```anchorError insertSortedNoProof
unsolved goals
α : Type ?u.7
inst✝ : Ord α
arr : Array α
i : Fin arr.size
i' : Nat
isLt✝ : i' + 1 < arr.size
⊢ i' < arr.size
```
# The Outer Loop
%%%
tag := "outer-insertion-sort-loop"
%%%
The outer loop of insertion sort moves the pointer from left to right, invoking {anchorName insertionSortLoop}`insertSorted` at each iteration to insert the element at the pointer into the correct position in the array.
The basic form of the loop resembles the implementation of {anchorTerm etc}`Array.map`:
```anchor insertionSortLoopTermination
def insertionSortLoop [Ord α] (arr : Array α) (i : Nat) : Array α :=
if h : i < arr.size then
insertionSortLoop (insertSorted arr ⟨i, h⟩) (i + 1)
else
arr
```
An error occurs because there is no argument that decreases at every recursive call:
```anchorError insertionSortLoopTermination
fail to show termination for
insertionSortLoop
with errors
failed to infer structural recursion:
Not considering parameter α of insertionSortLoop:
it is unchanged in the recursive calls
Not considering parameter #2 of insertionSortLoop:
it is unchanged in the recursive calls
Cannot use parameter arr:
the type Array α does not have a `.brecOn` recursor
Cannot use parameter i:
failed to eliminate recursive application
insertionSortLoop (insertSorted arr ⟨i, h⟩) (i + 1)
Could not find a decreasing measure.
The basic measures relate at each recursive call as follows:
(<, ≤, =: relation proved, ? all proofs failed, _: no proof attempted)
arr i #1
1) 569:4-55 ? ? ?
#1: arr.size - i
Please use `termination_by` to specify a decreasing measure.
```
While Lean can prove that a {anchorName insertionSortLoop}`Nat` that increases towards a constant bound at each iteration leads to a terminating function, this function has no constant bound because the array is replaced with the result of calling {anchorName insertionSortLoop}`insertSorted` at each iteration.
Before constructing the termination proof, it can be convenient to test the definition with a {kw}`partial` modifier to make sure that it returns the expected answers:
```anchor partialInsertionSortLoop
partial def insertionSortLoop [Ord α] (arr : Array α) (i : Nat) : Array α :=
if h : i < arr.size then
insertionSortLoop (insertSorted arr ⟨i, h⟩) (i + 1)
else
arr
```
```anchor insertionSortPartialOne
#eval insertionSortLoop #[5, 17, 3, 8] 0
```
```anchorInfo insertionSortPartialOne
#[3, 5, 8, 17]
```
```anchor insertionSortPartialTwo
#eval insertionSortLoop #["metamorphic", "igneous", "sedimentary"] 0
```
```anchorInfo insertionSortPartialTwo
#["igneous", "metamorphic", "sedimentary"]
```
## Termination
%%%
tag := "insertionSortLoop-termination"
%%%
Once again, the function terminates because the difference between the index and the size of the array being processed decreases on each recursive call.
This time, however, Lean does not accept the {kw}`termination_by`:
```anchor insertionSortLoopProof1
def insertionSortLoop [Ord α] (arr : Array α) (i : Nat) : Array α :=
if h : i < arr.size then
insertionSortLoop (insertSorted arr ⟨i, h⟩) (i + 1)
else
arr
termination_by arr.size - i
```
```anchorError insertionSortLoopProof1
failed to prove termination, possible solutions:
- Use `have`-expressions to prove the remaining goals
- Use `termination_by` to specify a different well-founded relation
- Use `decreasing_by` to specify your own tactic for discharging this kind of goal
α : Type u_1
inst✝ : Ord α
arr : Array α
i : Nat
h : i < arr.size
⊢ (insertSorted arr ⟨i, h⟩).size - (i + 1) < arr.size - i
```
The problem is that Lean has no way to know that {anchorName insertionSortLoop}`insertSorted` returns an array that's the same size as the one it is passed.
In order to prove that {anchorName insertionSortLoop}`insertionSortLoop` terminates, it is necessary to first prove that {anchorName insertionSortLoop}`insertSorted` doesn't change the size of the array.
Copying the unproved termination condition from the error message to the function and “proving” it with {anchorTerm insertionSortLoopSorry}`sorry` allows the function to be temporarily accepted:
```anchor insertionSortLoopSorry
def insertionSortLoop [Ord α] (arr : Array α) (i : Nat) : Array α :=
if h : i < arr.size then
have : (insertSorted arr ⟨i, h⟩).size - (i + 1) < arr.size - i := by
sorry
insertionSortLoop (insertSorted arr ⟨i, h⟩) (i + 1)
else
arr
termination_by arr.size - i
```
```anchorWarning insertionSortLoopSorry
declaration uses 'sorry'
```
Because {anchorName insertionSortLoop}`insertSorted` is structurally recursive on the index of the element being inserted, the proof should be by induction on the index.
In the base case, the array is returned unchanged, so its length certainly does not change.
For the inductive step, the induction hypothesis is that a recursive call on the next smaller index will not change the length of the array.
There are two cases to consider: either the element has been fully inserted into the sorted region and the array is returned unchanged, in which case the length is also unchanged, or the element is swapped with the next one before the recursive call.
In the latter case, swapping two elements in an array doesn't change its size, and the induction hypothesis states that the recursive call with the next index returns an array that's the same size as its argument.
Thus, the size remains unchanged.
Translating this English-language theorem statement to Lean and proceeding using the techniques from this chapter is enough to prove the base case and make progress in the inductive step:
```anchor insert_sorted_size_eq_0
theorem insert_sorted_size_eq [Ord α] (arr : Array α) (i : Fin arr.size) :
(insertSorted arr i).size = arr.size := by
match i with
| ⟨j, isLt⟩ =>
induction j with
| zero => simp [insertSorted]
| succ j' ih =>
simp [insertSorted]
```
The simplification using {anchorName insert_sorted_size_eq_0}`insertSorted` in the inductive step revealed the pattern match in {anchorName insert_sorted_size_eq_0}`insertSorted`:
```anchorError insert_sorted_size_eq_0
unsolved goals
case succ
α : Type u_1
inst✝ : Ord α
arr : Array α
i : Fin arr.size
j' : Nat
ih : ∀ (isLt : j' < arr.size), (insertSorted arr ⟨j', isLt⟩).size = arr.size
isLt : j' + 1 < arr.size
⊢ (match compare arr[j'] arr[j' + 1] with
| Ordering.lt => arr
| Ordering.eq => arr
| Ordering.gt => insertSorted (arr.swap j' (j' + 1) ⋯ ⋯) ⟨j', ⋯⟩).size =
arr.size
```
When faced with a goal that includes {kw}`if` or {kw}`match`, the {anchorTerm insert_sorted_size_eq_1}`split` tactic (not to be confused with the {anchorName splitList (module := Examples.ProgramsProofs.Inequalities)}`splitList` function used in the definition of merge sort) replaces the goal with one new goal for each path of control flow:
```anchor insert_sorted_size_eq_1
theorem insert_sorted_size_eq [Ord α] (arr : Array α) (i : Fin arr.size) :
(insertSorted arr i).size = arr.size := by
match i with
| ⟨j, isLt⟩ =>
induction j with
| zero => simp [insertSorted]
| succ j' ih =>
simp [insertSorted]
split
```
Because it typically doesn't matter _how_ a statement was proved, but only _that_ it was proved, proofs in Lean's output are replaced by {lit}`⋯`.
Additionally, each new goal has an assumption that indicates which branch led to that goal, named {lit}`heq✝` in this case:
```anchorError insert_sorted_size_eq_1
unsolved goals
case h_1
α : Type u_1
inst✝ : Ord α
arr : Array α
i : Fin arr.size
j' : Nat
ih : ∀ (isLt : j' < arr.size), (insertSorted arr ⟨j', isLt⟩).size = arr.size
isLt : j' + 1 < arr.size
x✝ : Ordering
heq✝ : compare arr[j'] arr[j' + 1] = Ordering.lt
⊢ arr.size = arr.size
case h_2
α : Type u_1
inst✝ : Ord α
arr : Array α
i : Fin arr.size
j' : Nat
ih : ∀ (isLt : j' < arr.size), (insertSorted arr ⟨j', isLt⟩).size = arr.size
isLt : j' + 1 < arr.size
x✝ : Ordering
heq✝ : compare arr[j'] arr[j' + 1] = Ordering.eq
⊢ arr.size = arr.size
case h_3
α : Type u_1
inst✝ : Ord α
arr : Array α
i : Fin arr.size
j' : Nat
ih : ∀ (isLt : j' < arr.size), (insertSorted arr ⟨j', isLt⟩).size = arr.size
isLt : j' + 1 < arr.size
x✝ : Ordering
heq✝ : compare arr[j'] arr[j' + 1] = Ordering.gt
⊢ (insertSorted (arr.swap j' (j' + 1) ⋯ ⋯) ⟨j', ⋯⟩).size = arr.size
```
Rather than write proofs for both simple cases, adding {anchorTerm insert_sorted_size_eq_2}`<;> try rfl` after {anchorTerm insert_sorted_size_eq_2}`split` causes the two straightforward cases to disappear immediately, leaving only a single goal:
```anchor insert_sorted_size_eq_2
theorem insert_sorted_size_eq [Ord α] (arr : Array α) (i : Fin arr.size) :
(insertSorted arr i).size = arr.size := by
match i with
| ⟨j, isLt⟩ =>
induction j with
| zero => simp [insertSorted]
| succ j' ih =>
simp [insertSorted]
split <;> try rfl
```
```anchorError insert_sorted_size_eq_2
unsolved goals
case h_3
α : Type u_1
inst✝ : Ord α
arr : Array α
i : Fin arr.size
j' : Nat
ih : ∀ (isLt : j' < arr.size), (insertSorted arr ⟨j', isLt⟩).size = arr.size
isLt : j' + 1 < arr.size
x✝ : Ordering
heq✝ : compare arr[j'] arr[j' + 1] = Ordering.gt
⊢ (insertSorted (arr.swap j' (j' + 1) ⋯ ⋯) ⟨j', ⋯⟩).size = arr.size
```
Unfortunately, the induction hypothesis is not strong enough to prove this goal.
The induction hypothesis states that calling {anchorName insert_sorted_size_eq_3}`insertSorted` on {anchorName insert_sorted_size_eq_3}`arr` leaves the size unchanged, but the proof goal is to show that the result of the recursive call with the result of swapping leaves the size unchanged.
Successfully completing the proof requires an induction hypothesis that works for _any_ array that is passed to {anchorName insert_sorted_size_eq_3}`insertSorted` together with the smaller index as an argument.
It is possible to get a strong induction hypothesis by using the {anchorTerm insert_sorted_size_eq_3}`generalizing` option to the {anchorTerm insert_sorted_size_eq_3}`induction` tactic.
This option brings additional assumptions from the context into the statement that's used to generate the base case, the induction hypothesis, and the goal to be shown in the inductive step.
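As a smaller, self-contained illustration of the same idea, consider the following {lit}`countdown` function and theorem, both introduced here for the sake of the example and not part of the examples project. Without {lit}`generalizing soFar`, the induction hypothesis would fix the accumulator {lit}`soFar`, and the inductive step—in which the recursive call receives a _different_ accumulator—could not be proved:

```lean
def countdown (soFar : Nat) : Nat → Nat
  | 0 => soFar
  | k + 1 => countdown (soFar + 1) k

theorem countdown_eq (soFar n : Nat) : countdown soFar n = n + soFar := by
  -- Generalizing soFar yields ih : ∀ soFar, countdown soFar k = k + soFar
  induction n generalizing soFar with
  | zero => simp [countdown]
  | succ k ih =>
    simp only [countdown, ih]
    omega
```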
Generalizing over {anchorName insert_sorted_size_eq_3}`arr` leads to a stronger hypothesis:
```anchor insert_sorted_size_eq_3
theorem insert_sorted_size_eq [Ord α] (arr : Array α) (i : Fin arr.size) :
(insertSorted arr i).size = arr.size := by
match i with
| ⟨j, isLt⟩ =>
induction j generalizing arr with
| zero => simp [insertSorted]
| succ j' ih =>
simp [insertSorted]
split <;> try rfl
```
In the resulting goal, {anchorName insert_sorted_size_eq_3}`arr` is now part of a “for all” statement in the inductive hypothesis:
```anchorError insert_sorted_size_eq_3
unsolved goals
case h_3
α : Type u_1
inst✝ : Ord α
j' : Nat
ih : ∀ (arr : Array α) (i : Fin arr.size) (isLt : j' < arr.size), (insertSorted arr ⟨j', isLt⟩).size = arr.size
arr : Array α
i : Fin arr.size
isLt : j' + 1 < arr.size
x✝ : Ordering
heq✝ : compare arr[j'] arr[j' + 1] = Ordering.gt
⊢ (insertSorted (arr.swap j' (j' + 1) ⋯ ⋯) ⟨j', ⋯⟩).size = arr.size
```
:::paragraph
However, this whole proof is beginning to get unmanageable.
The next step would be to introduce a variable standing for the length of the result of swapping, show that it is equal to {anchorTerm insert_sorted_size_eq_3}`arr.size`, and then show that this variable is also equal to the length of the array that results from the recursive call.
These equality statements can then be chained together to prove the goal.
It's much easier, however, to use functional induction:
```anchor insert_sorted_size_eq_funInd1
theorem insert_sorted_size_eq [Ord α]
(arr : Array α) (i : Fin arr.size) :
(insertSorted arr i).size = arr.size := by
fun_induction insertSorted with
| case1 arr isLt => skip
| case2 arr i isLt this isLt => skip
| case3 arr i isLt this isEq => skip
| case4 arr i isLt this isGt ih => skip
```
The first goal is the case for index {anchorTerm insertSorted}`0`.
Here, the array is not modified, so proving that its size is unchanged will not require any complicated steps:
```anchorError insert_sorted_size_eq_funInd1
unsolved goals
case case1
α : Type u_1
inst✝ : Ord α
arr✝ arr : Array α
isLt : 0 < arr.size
⊢ arr.size = arr.size
```
The next two goals are the same, and cover the {anchorName insertSorted}`.lt` and {anchorName insertSorted}`.eq` cases for the element comparison.
The local assumptions {anchorName insert_sorted_size_eq_funInd1}`isLt` and {anchorName insert_sorted_size_eq_funInd1}`isEq` will allow the correct branch of the {anchorTerm insertSorted}`match` to be selected:
```anchorError insert_sorted_size_eq_funInd1
unsolved goals
case case2
α : Type u_1
inst✝ : Ord α
arr✝ arr : Array α
i : Nat
isLt✝ : i + 1 < arr.size
this : i < arr.size
isLt : compare arr[i] arr[⟨i.succ, isLt✝⟩] = Ordering.lt
⊢ (match compare arr[i] arr[⟨i.succ, isLt✝⟩] with
| Ordering.lt => arr
| Ordering.eq => arr
| Ordering.gt => insertSorted (arr.swap i (↑⟨i.succ, isLt✝⟩) this ⋯) ⟨i, ⋯⟩).size =
arr.size
```
```anchorError insert_sorted_size_eq_funInd1
unsolved goals
case case3
α : Type u_1
inst✝ : Ord α
arr✝ arr : Array α
i : Nat
isLt : i + 1 < arr.size
this : i < arr.size
isEq : compare arr[i] arr[⟨i.succ, isLt⟩] = Ordering.eq
⊢ (match compare arr[i] arr[⟨i.succ, isLt⟩] with
| Ordering.lt => arr
| Ordering.eq => arr
| Ordering.gt => insertSorted (arr.swap i (↑⟨i.succ, isLt⟩) this ⋯) ⟨i, ⋯⟩).size =
arr.size
```
In the final case, once the {anchorTerm insertSorted}`match` is reduced, there will be some work left to do to prove that the next step of the insertion preserves the size of the array.
In particular, the induction hypothesis states that the size of the next step is equal to the size of the result of the swap, but the desired conclusion is that it's equal to the size of the original array:
```anchorError insert_sorted_size_eq_funInd1
unsolved goals
case case4
α : Type u_1
inst✝ : Ord α
arr✝ arr : Array α
i : Nat
isLt : i + 1 < arr.size
this : i < arr.size
isGt : compare arr[i] arr[⟨i.succ, isLt⟩] = Ordering.gt
ih : (insertSorted (arr.swap i (↑⟨i.succ, isLt⟩) this ⋯) ⟨i, ⋯⟩).size = (arr.swap i (↑⟨i.succ, isLt⟩) this ⋯).size
⊢ (match compare arr[i] arr[⟨i.succ, isLt⟩] with
| Ordering.lt => arr
| Ordering.eq => arr
| Ordering.gt => insertSorted (arr.swap i (↑⟨i.succ, isLt⟩) this ⋯) ⟨i, ⋯⟩).size =
arr.size
```
:::
:::paragraph
The Lean library includes the theorem {anchorName insert_sorted_size_eq_funInd}`Array.size_swap`, which states that swapping two elements of an array doesn't change its size.
By default, {tactic}`grind` doesn't use this fact, but once instructed to do so, it can take care of all four cases:
```anchor insert_sorted_size_eq_funInd
theorem insert_sorted_size_eq [Ord α]
(arr : Array α) (i : Fin arr.size) :
(insertSorted arr i).size = arr.size := by
fun_induction insertSorted <;> grind [Array.size_swap]
```
:::
:::paragraph
This proof can now be used to replace the {anchorTerm insertionSortLoopSorry}`sorry` in {anchorName insertionSortLoopSorry}`insertionSortLoop`.
In particular, this theorem allows {anchorTerm insertionSortLoop}`grind` to succeed:
```anchor insertionSortLoop
def insertionSortLoop [Ord α] (arr : Array α) (i : Nat) : Array α :=
if h : i < arr.size then
have : (insertSorted arr ⟨i, h⟩).size - (i + 1) < arr.size - i := by
grind [insert_sorted_size_eq]
insertionSortLoop (insertSorted arr ⟨i, h⟩) (i + 1)
else
arr
termination_by arr.size - i
```
:::
# The Driver Function
%%%
tag := "insertion-sort-driver-function"
%%%
Insertion sort itself calls {anchorName insertionSort}`insertionSortLoop`, initializing the index that demarcates the sorted region of the array from the unsorted region to {anchorTerm insertionSort}`0`:
```anchor insertionSort
def insertionSort [Ord α] (arr : Array α) : Array α :=
insertionSortLoop arr 0
```
A few quick tests show the function is at least not blatantly wrong:
```anchor insertionSortNums
#eval insertionSort #[3, 1, 7, 4]
```
```anchorInfo insertionSortNums
#[1, 3, 4, 7]
```
```anchor insertionSortStrings
#eval insertionSort #[ "quartz", "marble", "granite", "hematite"]
```
```anchorInfo insertionSortStrings
#["granite", "hematite", "marble", "quartz"]
```
# Is This Really Insertion Sort?
%%%
tag := "insertion-sort-in-place"
%%%
Insertion sort is _defined_ to be an in-place sorting algorithm.
What makes it useful, despite its quadratic worst-case run time, is that it is a stable sorting algorithm that doesn't allocate extra space and that handles almost-sorted data efficiently.
If each iteration of the inner loop allocated a new array, then the algorithm wouldn't _really_ be insertion sort.
Lean's array operations, such as {anchorName names}`Array.set` and {anchorName names}`Array.swap`, check whether the array in question has a reference count that is greater than one.
If so, then the array is visible to multiple parts of the code, which means that it must be copied.
Otherwise, Lean would no longer be a pure functional language.
However, when the reference count is exactly one, there are no other potential observers of the value.
In these cases, the array primitives mutate the array in place.
What other parts of the program don't know can't hurt them.
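For example, the following hypothetical {lit}`bumpFirst` (introduced here for illustration, not part of the examples project) has the same pure meaning whether or not the update copies; but when the caller holds the only reference to the array, the compiled code updates it in place:

```lean
def bumpFirst (arr : Array Nat) : Array Nat :=
  if h : 0 < arr.size then
    -- With a unique reference, Array.set mutates arr in place;
    -- otherwise it copies first, preserving the pure semantics.
    arr.set 0 (arr[0] + 1) h
  else
    arr
```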
Lean's proof logic works at the level of pure functional programs, not the underlying implementation.
This means that the best way to discover whether a program unnecessarily copies data is to test it.
Adding calls to {anchorName dbgTraceIfSharedSig}`dbgTraceIfShared` at each point where mutation is desired causes the provided message to be printed to {lit}`stderr` when the value in question has more than one reference.
Insertion sort has precisely one place that is at risk of copying rather than mutating: the call to {anchorName names}`Array.swap`.
Replacing {anchorTerm insertSorted}`arr.swap i' i` with {anchorTerm InstrumentedInsertionSort (module := Examples.ProgramsProofs.InstrumentedInsertionSort)}`(dbgTraceIfShared "array to swap" arr).swap i' i` causes the program to emit {lit}`shared RC array to swap` whenever it is unable to mutate the array.
However, this change to the program changes the proofs as well, because now there's a call to an additional function.
Adding a local assumption that {anchorName dbgTraceIfSharedSig}`dbgTraceIfShared` preserves the length of its argument and adding it to some calls to {anchorTerm InstrumentedInsertionSort (module:=Examples.ProgramsProofs.InstrumentedInsertionSort)}`simp` is enough to fix the program and proofs.
The complete instrumented code for insertion sort is:
```anchor InstrumentedInsertionSort (module := Examples.ProgramsProofs.InstrumentedInsertionSort)
def insertSorted [Ord α] (arr : Array α) (i : Fin arr.size) : Array α :=
match i with
| ⟨0, _⟩ => arr
| ⟨i' + 1, _⟩ =>
have : i' < arr.size := by
omega
match Ord.compare arr[i'] arr[i] with
| .lt | .eq => arr
| .gt =>
have : (dbgTraceIfShared "array to swap" arr).size = arr.size := by
simp [dbgTraceIfShared]
insertSorted
((dbgTraceIfShared "array to swap" arr).swap i' i)
⟨i', by simp [*]⟩
theorem insert_sorted_size_eq [Ord α] (len : Nat) (i : Nat) :
(arr : Array α) → (isLt : i < arr.size) → (arr.size = len) →
(insertSorted arr ⟨i, isLt⟩).size = len := by
induction i with
| zero =>
intro arr isLt hLen
simp [insertSorted, *]
| succ i' ih =>
intro arr isLt hLen
simp [insertSorted, dbgTraceIfShared]
split <;> simp [*]
def insertionSortLoop [Ord α] (arr : Array α) (i : Nat) : Array α :=
if h : i < arr.size then
have : (insertSorted arr ⟨i, h⟩).size - (i + 1) < arr.size - i := by
rw [insert_sorted_size_eq arr.size i arr h rfl]
omega
insertionSortLoop (insertSorted arr ⟨i, h⟩) (i + 1)
else
arr
termination_by arr.size - i
def insertionSort [Ord α] (arr : Array α) : Array α :=
insertionSortLoop arr 0
```
A bit of cleverness is required to check whether the instrumentation actually works.
First off, the Lean compiler aggressively optimizes function calls away when all their arguments are known at compile time.
Simply writing a program that applies {anchorName InstrumentedInsertionSort (module:=Examples.ProgramsProofs.InstrumentedInsertionSort)}`insertionSort` to a large array is not sufficient, because the resulting compiled code may contain only the sorted array as a constant.
The easiest way to ensure that the compiler doesn't optimize away the sorting routine is to read the array from {anchorName getLines (module:=Examples.ProgramsProofs.InstrumentedInsertionSort)}`stdin`.
Secondly, the compiler performs dead code elimination.
Adding extra {kw}`let`s to the program won't necessarily result in more references in running code if the {kw}`let`-bound variables are never used.
To keep the extra reference from being eliminated entirely, it's important to ensure that it is somehow used.
The first step in testing the instrumentation is to write {anchorName getLines (module := Examples.ProgramsProofs.InstrumentedInsertionSort)}`getLines`, which reads an array of lines from standard input:
```anchor getLines (module := Examples.ProgramsProofs.InstrumentedInsertionSort)
def getLines : IO (Array String) := do
let stdin ← IO.getStdin
let mut lines : Array String := #[]
let mut currLine ← stdin.getLine
while !currLine.isEmpty do
-- Drop trailing newline:
lines := lines.push (currLine.dropRight 1)
currLine ← stdin.getLine
pure lines
```
{anchorName various (module:=Examples.ProgramsProofs.InstrumentedInsertionSort)}`IO.FS.Stream.getLine` returns a complete line of text, including the trailing newline.
It returns {anchorTerm mains (module:=Examples.ProgramsProofs.InstrumentedInsertionSort)}`""` when the end-of-file marker has been reached.
Next, two separate {anchorName main (module:=Examples.ProgramsProofs.InstrumentedInsertionSort)}`main` routines are needed.
Both read the array to be sorted from standard input, ensuring that the calls to {anchorName mains (module:=Examples.ProgramsProofs.InstrumentedInsertionSort)}`insertionSort` won't be replaced by their return values at compile time.
Both then print to the console, ensuring that the calls to {anchorName insertionSort}`insertionSort` won't be optimized away entirely.
One of them prints only the sorted array, while the other prints both the sorted array and the original array.
The second function should trigger a warning that {anchorName names}`Array.swap` had to allocate a new array:
```anchor mains (module := Examples.ProgramsProofs.InstrumentedInsertionSort)
def mainUnique : IO Unit := do
let lines ← getLines
for line in insertionSort lines do
IO.println line
def mainShared : IO Unit := do
let lines ← getLines
IO.println "--- Sorted lines: ---"
for line in insertionSort lines do
IO.println line
IO.println ""
IO.println "--- Original data: ---"
for line in lines do
IO.println line
```
The actual {anchorName main (module:=Examples.ProgramsProofs.InstrumentedInsertionSort)}`main` simply selects one of the two main actions based on the provided command-line arguments:
```anchor main (module := Examples.ProgramsProofs.InstrumentedInsertionSort)
def main (args : List String) : IO UInt32 := do
match args with
| ["--shared"] => mainShared; pure 0
| ["--unique"] => mainUnique; pure 0
| _ =>
IO.println "Expected single argument, either \"--shared\" or \"--unique\""
pure 1
```
Running it with no arguments produces the expected usage information:
```commands «sort-sharing» "sort-demo"
$ expect -f ./run-usage # sort
Expected single argument, either "--shared" or "--unique"
```
The file {lit}`test-data` contains the following rocks:
```file «sort-sharing» "sort-demo/test-data"
schist
feldspar
diorite
pumice
obsidian
shale
gneiss
marble
flint
```
Using the instrumented insertion sort on these rocks results in them being printed in alphabetical order:
```commands «sort-sharing» "sort-demo"
$ sort --unique < test-data
diorite
feldspar
flint
gneiss
marble
obsidian
pumice
schist
shale
```
However, the version in which a reference is retained to the original array results in a notification on {lit}`stderr` (namely, {lit}`shared RC array to swap`) from the first call to {anchorName names}`Array.swap`:
```commands «sort-sharing» "sort-demo"
$ sort --shared < test-data
--- Sorted lines: ---
diorite
feldspar
flint
gneiss
marble
obsidian
pumice
schist
shale
--- Original data: ---
schist
feldspar
diorite
pumice
obsidian
shale
gneiss
marble
flint
shared RC array to swap
```
The fact that only a single {lit}`shared RC` notification appears means that the array is copied only once.
This is because the copy that results from the call to {anchorName names}`Array.swap` is itself unique, so no further copies need to be made.
In an imperative language, subtle bugs can result from forgetting to explicitly copy an array before passing it by reference.
When running {lit}`sort --shared`, the array is copied as needed to preserve the pure functional meaning of Lean programs, but no more.
# Other Opportunities for Mutation
%%%
tag := none
%%%
The use of mutation instead of copying when references are unique is not limited to array update operators.
Lean also attempts to “recycle” constructors whose reference counts are about to fall to zero, reusing them instead of allocating new data.
This means, for instance, that {anchorName names}`List.map` will mutate a linked list in place, at least in cases when nobody could possibly notice.
One of the most important steps in optimizing hot loops in Lean code is making sure that the data being modified is not referred to from multiple locations.
# Exercises
%%%
tag := "insertion-sort-exercises"
%%%
* Write a function that reverses arrays. Test that if the input array has a reference count of one, then your function does not allocate a new array.
* Implement either merge sort or quicksort for arrays. Prove that your implementation terminates, and test that it doesn't allocate more arrays than expected. This is a challenging exercise! |
fp-lean/book/FPLean/ProgramsProofs/TailRecursion.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.ProgramsProofs.TCO"
#doc (Manual) "Tail Recursion" =>
%%%
tag := "tail-recursion"
%%%
While Lean's {kw}`do`-notation makes it possible to use traditional loop syntax such as {kw}`for` and {kw}`while`, these constructs are translated behind the scenes to invocations of recursive functions.
In most programming languages, recursive functions have a key disadvantage with respect to loops: loops consume no space on the stack, while recursive functions consume stack space proportional to the number of recursive calls.
Stack space is typically limited, and it is often necessary to take algorithms that are naturally expressed as recursive functions and rewrite them as loops paired with an explicit mutable heap-allocated stack.
In functional programming, the opposite is typically true.
Programs that are naturally expressed as mutable loops may consume stack space, while rewriting them to recursive functions can cause them to run quickly.
This is due to a key aspect of functional programming languages: _tail-call elimination_.
A tail call is a call from one function to another that can be compiled to an ordinary jump, replacing the current stack frame rather than pushing a new one, and tail-call elimination is the process of implementing this transformation.
Tail-call elimination is not merely an optional optimization.
Its presence is a fundamental part of being able to write efficient functional code.
For it to be useful, it must be reliable.
Programmers must be able to reliably identify tail calls, and they must be able to trust that the compiler will eliminate them.
The function {anchorName NonTailSum}`NonTail.sum` adds the contents of a list of {anchorName NonTailSum}`Nat`s:
```anchor NonTailSum
def NonTail.sum : List Nat → Nat
| [] => 0
| x :: xs => x + sum xs
```
Applying this function to the list {anchorTerm NonTailSumOneTwoThree}`[1, 2, 3]` results in the following sequence of evaluation steps:
```anchorEvalSteps NonTailSumOneTwoThree
NonTail.sum [1, 2, 3]
===>
1 + (NonTail.sum [2, 3])
===>
1 + (2 + (NonTail.sum [3]))
===>
1 + (2 + (3 + (NonTail.sum [])))
===>
1 + (2 + (3 + 0))
===>
1 + (2 + 3)
===>
1 + 5
===>
6
```
In the evaluation steps, parentheses indicate recursive calls to {anchorName NonTailSumOneTwoThree}`NonTail.sum`.
In other words, to add the three numbers, the program must first check that the list is non-empty.
To add the head of the list ({anchorTerm NonTailSumOneTwoThree}`1`) to the sum of the tail of the list, it is first necessary to compute the sum of the tail of the list:
```anchorEvalStep NonTailSumOneTwoThree 1
1 + (NonTail.sum [2, 3])
```
But to compute the sum of the tail of the list, the program must check whether it is empty.
It is not—the tail is itself a list with {anchorTerm NonTailSumOneTwoThree}`2` at its head.
The resulting step is waiting for the return of {anchorTerm NonTailSumOneTwoThree}`NonTail.sum [3]`:
```anchorEvalStep NonTailSumOneTwoThree 2
1 + (2 + (NonTail.sum [3]))
```
The whole point of the run-time call stack is to keep track of the values {anchorTerm NonTailSumOneTwoThree}`1`, {anchorTerm NonTailSumOneTwoThree}`2`, and {anchorTerm NonTailSumOneTwoThree}`3` along with the instruction to add them to the result of the recursive call.
As recursive calls are completed, control returns to the stack frame that made the call, so each step of addition is performed.
Storing the heads of the list and the instructions to add them is not free; it takes space proportional to the length of the list.
The function {anchorName TailSum}`Tail.sum` also adds the contents of a list of {anchorName TailSum}`Nat`s:
```anchor TailSum
def Tail.sumHelper (soFar : Nat) : List Nat → Nat
| [] => soFar
| x :: xs => sumHelper (x + soFar) xs
def Tail.sum (xs : List Nat) : Nat :=
Tail.sumHelper 0 xs
```
Applying it to the list {anchorTerm TailSumOneTwoThree}`[1, 2, 3]` results in the following sequence of evaluation steps:
```anchorEvalSteps TailSumOneTwoThree
Tail.sum [1, 2, 3]
===>
Tail.sumHelper 0 [1, 2, 3]
===>
Tail.sumHelper (0 + 1) [2, 3]
===>
Tail.sumHelper 1 [2, 3]
===>
Tail.sumHelper (1 + 2) [3]
===>
Tail.sumHelper 3 [3]
===>
Tail.sumHelper (3 + 3) []
===>
Tail.sumHelper 6 []
===>
6
```
The internal helper function calls itself recursively, but it does so in a way where nothing needs to be remembered in order to compute the final result.
When {anchorName TailSum}`Tail.sumHelper` reaches its base case, control can be returned directly to {anchorName TailSum}`Tail.sum`, because the intermediate invocations of {anchorName TailSum}`Tail.sumHelper` simply return the results of their recursive calls unmodified.
In other words, a single stack frame can be re-used for each recursive invocation of {anchorName TailSum}`Tail.sumHelper`.
Tail-call elimination is exactly this re-use of the stack frame, and {anchorName TailSum}`Tail.sumHelper` is referred to as a _tail-recursive function_.
The first argument to {anchorName TailSum}`Tail.sumHelper` contains all of the information that would otherwise need to be tracked in the call stack—namely, the sum of the numbers encountered so far.
In each recursive call, this argument is updated with new information, rather than adding new information to the call stack.
Arguments like {anchorName TailSum}`soFar` that replace the information from the call stack are called _accumulators_.
At the time of writing and on the author's computer, {anchorName NonTailSum}`NonTail.sum` crashes with a stack overflow when passed a list with 216,856 or more entries.
{anchorName TailSum}`Tail.sum`, on the other hand, can sum a list of 100,000,000 elements without a stack overflow.
Because no new stack frames need to be pushed while running {anchorName TailSum}`Tail.sum`, it is completely equivalent to a {kw}`while` loop with a mutable variable that holds the current list.
At each recursive call, the function argument on the stack is simply replaced with the next node of the list.
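This equivalence can be sketched in Lean itself. The following {lit}`loopSum` (a name introduced here for illustration, not part of the examples project) uses a mutable variable and a loop in the {lit}`Id` monad and computes the same results as {anchorName TailSum}`Tail.sum`:

```lean
def loopSum (xs : List Nat) : Nat := Id.run do
  -- soFar plays exactly the role of the accumulator in Tail.sumHelper
  let mut soFar := 0
  for x in xs do
    soFar := soFar + x
  return soFar
```

As noted at the beginning of this chapter, this loop syntax is itself translated behind the scenes into invocations of recursive functions, so the two styles are two notations for the same computation.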
# Tail and Non-Tail Positions
%%%
tag := "tail-positions"
%%%
The reason why {anchorName TailSum}`Tail.sumHelper` is tail recursive is that the recursive call is in _tail position_.
Informally speaking, a function call is in tail position when the caller does not need to modify the returned value in any way, but will just return it directly.
More formally, tail position can be defined explicitly for expressions.
If a {kw}`match`-expression is in tail position, then each of its branches is also in tail position.
Once a {kw}`match` has selected a branch, control proceeds immediately to it.
Similarly, both branches of an {kw}`if`-expression are in tail position if the {kw}`if`-expression itself is in tail position.
Finally, if a {kw}`let`-expression is in tail position, then its body is as well.
All other positions are not in tail position.
The arguments to a function or a constructor are not in tail position because evaluation must track the function or constructor that will be applied to the argument's value.
The body of an inner function is not in tail position because control may not even pass to it: function bodies are not evaluated until the function is called.
Similarly, the body of a function type is not in tail position.
To evaluate {lit}`E` in {lit}`(x : α) → E`, it is necessary to track that the resulting type must have {lit}`(x : α) → ...` wrapped around it.
In {anchorName NonTailSum}`NonTail.sum`, the recursive call is not in tail position because it is an argument to {anchorTerm NonTailSum}`+`.
In {anchorName TailSum}`Tail.sumHelper`, the recursive call is in tail position because it is immediately underneath a pattern match, which itself is the body of the function.
At the time of writing, Lean only eliminates direct tail calls in recursive functions.
This means that tail calls to {lit}`f` in {lit}`f`'s definition will be eliminated, but not tail calls to some other function {lit}`g`.
While it is certainly possible to eliminate a tail call to some other function, saving a stack frame, this is not yet implemented in Lean.
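For instance, in the following mutually recursive pair (a hypothetical example, not part of the examples project), each recursive call is in tail position, but each is a call to the _other_ function, so under the limitation just described they are not eliminated:

```lean
mutual
  def isEven : Nat → Bool
    | 0 => true
    | n + 1 => isOdd n  -- tail call, but to a different function

  def isOdd : Nat → Bool
    | 0 => false
    | n + 1 => isEven n -- likewise
end
```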
# Reversing Lists
%%%
tag := "reversing-lists-tail-recursively"
%%%
The function {anchorName NonTailReverse}`NonTail.reverse` reverses lists by appending the head of each sub-list to the end of the result:
```anchor NonTailReverse
def NonTail.reverse : List α → List α
| [] => []
| x :: xs => reverse xs ++ [x]
```
Using it to reverse {anchorTerm NonTailReverseSteps}`[1, 2, 3]` yields the following sequence of steps:
```anchorEvalSteps NonTailReverseSteps
NonTail.reverse [1, 2, 3]
===>
(NonTail.reverse [2, 3]) ++ [1]
===>
((NonTail.reverse [3]) ++ [2]) ++ [1]
===>
(((NonTail.reverse []) ++ [3]) ++ [2]) ++ [1]
===>
(([] ++ [3]) ++ [2]) ++ [1]
===>
([3] ++ [2]) ++ [1]
===>
[3, 2] ++ [1]
===>
[3, 2, 1]
```
The tail-recursive version uses {lit}`x :: ·` instead of {lit}`· ++ [x]` on the accumulator at each step:
```anchor TailReverse
def Tail.reverseHelper (soFar : List α) : List α → List α
| [] => soFar
| x :: xs => reverseHelper (x :: soFar) xs
def Tail.reverse (xs : List α) : List α :=
Tail.reverseHelper [] xs
```
This is because the context saved in each stack frame while computing with {anchorName NonTailReverse}`NonTail.reverse` is applied beginning at the base case.
Each “remembered” piece of context is executed in last-in, first-out order.
On the other hand, the accumulator-passing version modifies the accumulator beginning from the first entry in the list, rather than the original base case, as can be seen in the series of reduction steps:
```anchorEvalSteps TailReverseSteps
Tail.reverse [1, 2, 3]
===>
Tail.reverseHelper [] [1, 2, 3]
===>
Tail.reverseHelper [1] [2, 3]
===>
Tail.reverseHelper [2, 1] [3]
===>
Tail.reverseHelper [3, 2, 1] []
===>
[3, 2, 1]
```
In other words, the non-tail-recursive version starts at the base case, modifying the result of recursion from right to left through the list.
The entries in the list affect the result in a last-in, first-out order.
The tail-recursive version with the accumulator starts at the head of the list, modifying an initial accumulator value from left to right through the list.
Because addition is commutative, nothing needed to be done to account for this in {anchorName TailSum}`Tail.sum`.
Appending lists is not commutative, so care must be taken to find an operation that has the same effect when run in the opposite direction.
Appending {anchorTerm NonTailReverse}`[x]` after the result of the recursion in {anchorName NonTailReverse}`NonTail.reverse` is analogous to adding {anchorName NonTailReverse}`x` to the beginning of the list when the result is built in the opposite order.
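To see why the accumulator operation must change, and not just the order of traversal, consider a hypothetical variant (introduced here for illustration) that keeps {lit}`· ++ [x]` as the update. It rebuilds the original list instead of reversing it:

```lean
def wrongHelper (soFar : List α) : List α → List α
  | [] => soFar
  | x :: xs => wrongHelper (soFar ++ [x]) xs

-- The entries are appended in first-to-last order, so this
-- yields [1, 2, 3] rather than [3, 2, 1]:
#eval wrongHelper ([] : List Nat) [1, 2, 3]
```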
# Multiple Recursive Calls
%%%
tag := "multiple-call-tail-recursion"
%%%
In the definition of {anchorName mirrorNew (module := Examples.Monads.Conveniences)}`BinTree.mirror`, there are two recursive calls:
```anchor mirrorNew (module := Examples.Monads.Conveniences)
def BinTree.mirror : BinTree α → BinTree α
| .leaf => .leaf
| .branch l x r => .branch (mirror r) x (mirror l)
```
Just as imperative languages would typically use a while loop for functions like {anchorName NonTailReverse}`reverse` and {anchorName NonTailSum}`sum`, they would typically use recursive functions for this kind of traversal.
This function cannot be straightforwardly rewritten to be tail recursive using accumulator-passing style, at least not using the techniques presented in this book.
Typically, if more than one recursive call is required for each recursive step, then it will be difficult to use accumulator-passing style.
This difficulty is similar to the difficulty of rewriting a recursive function to use a loop and an explicit data structure, with the added complication of convincing Lean that the function terminates.
However, as in {anchorName mirrorNew (module:=Examples.Monads.Conveniences)}`BinTree.mirror`, multiple recursive calls often indicate a data structure that has a constructor with multiple recursive occurrences of itself.
In these cases, the depth of the structure is often logarithmic with respect to its overall size, which makes the tradeoff between stack and heap less stark.
There are systematic techniques for making these functions tail-recursive, such as using _continuation-passing style_ and _defunctionalization_, but they are outside the scope of this book.
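To give a flavor of the idea, here is a sketch of {lit}`BinTree.mirror` in continuation-passing style. It is not part of the book's example code, and the names {lit}`mirrorCPS` and {lit}`mirrorTail` are hypothetical; it assumes the {lit}`BinTree` type from the earlier chapter is in scope. Every recursive call is a tail call, and the pending work is captured in the continuation {lit}`k` instead of on the call stack:
```lean
def BinTree.mirrorCPS (t : BinTree α) (k : BinTree α → BinTree α) : BinTree α :=
  match t with
  | .leaf => k .leaf
  | .branch l x r =>
    -- Mirror the right subtree first; the continuation describes what remains.
    mirrorCPS r fun r' =>
      mirrorCPS l fun l' =>
        k (.branch r' x l')

def BinTree.mirrorTail (t : BinTree α) : BinTree α :=
  t.mirrorCPS id
```
Note that this trades stack for heap rather than eliminating the space cost: each continuation is a closure that must be allocated, which is one reason these techniques are left out of scope here.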
# Exercises
%%%
tag := "tail-recursion-exercises"
%%%
Translate each of the following non-tail-recursive functions into accumulator-passing tail-recursive functions:
```anchor NonTailLength
def NonTail.length : List α → Nat
| [] => 0
| _ :: xs => NonTail.length xs + 1
```
```anchor NonTailFact
def NonTail.factorial : Nat → Nat
| 0 => 1
| n + 1 => factorial n * (n + 1)
```
The translation of {anchorName NonTailFilter}`NonTail.filter` should result in a program that takes constant stack space through tail recursion, and time linear in the length of the input list.
A constant factor overhead is acceptable relative to the original:
```anchor NonTailFilter
def NonTail.filter (p : α → Bool) : List α → List α
| [] => []
| x :: xs =>
if p x then
x :: filter p xs
else
filter p xs
``` |
fp-lean/book/FPLean/ProgramsProofs/Summary.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.ProgramsProofs.InsertionSort"
#doc (Manual) "Summary" =>
%%%
tag := "programs-proofs-summary"
%%%
# Tail Recursion
%%%
tag := none
%%%
Tail recursion is recursion in which the results of recursive calls are returned immediately, rather than being used in some other way.
These recursive calls are called _tail calls_.
Tail calls are interesting because they can be compiled to a jump instruction rather than a call instruction, and the current stack frame can be re-used instead of pushing a new frame.
In other words, tail-recursive functions are actually loops.
A common way to make a recursive function faster is to rewrite it in accumulator-passing style.
Instead of using the call stack to remember what is to be done with the result of a recursive call, an additional argument called an _accumulator_ is used to collect this information.
For example, an accumulator for a tail-recursive function that reverses a list contains the already-seen list entries, in reverse order.
In Lean, only self-tail-calls are optimized into loops.
In other words, two functions that each end with a tail call to the other will not be optimized.
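For instance, in the following sketch (not from the book's examples), each function ends with a tail call to the other, but because the calls are not self-calls, Lean does not compile the pair into a single loop:
```lean
mutual
  def isEven : Nat → Bool
    | 0 => true
    | n + 1 => isOdd n

  def isOdd : Nat → Bool
    | 0 => false
    | n + 1 => isEven n
end
```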
# Reference Counting and In-Place Updates
%%%
tag := none
%%%
Rather than using a tracing garbage collector, as is done in Java, C#, and most JavaScript implementations, Lean uses reference counting for memory management.
This means that each value in memory contains a field that tracks how many other values refer to it, and the run-time system maintains these counts as references appear or disappear.
Reference counting is also used in Python, PHP, and Swift.
When asked to allocate a fresh object, Lean's run-time system is able to recycle existing objects whose reference counts have fallen to zero.
Additionally, array operations such as {anchorName names}`Array.set` and {anchorName names}`Array.swap` will mutate an array if its reference count is one, rather than allocating a modified copy.
If {anchorName names}`Array.swap` holds the only reference to an array, then no other part of the program can tell that it was mutated rather than copied.
Writing efficient code in Lean requires the use of tail recursion and being careful to ensure that large arrays are used uniquely.
While tail calls can be identified by inspecting the function's definition, understanding whether a value is referred to uniquely may require reading the whole program.
The debugging helper {anchorName dbgTraceIfSharedSig}`dbgTraceIfShared` can be used at key locations in the program to check that a value is not shared.
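For example, this hypothetical function (not from the book's examples) emits a run-time message if its argument is shared at the point of the check, which would mean that {lit}`Array.map` cannot reuse the array's memory:
```lean
def addOneToAll (arr : Array Nat) : Array Nat :=
  -- If arr has a reference count greater than one here, a message
  -- mentioning "addOneToAll" is printed when the program runs.
  let arr := dbgTraceIfShared "addOneToAll" arr
  arr.map (· + 1)
```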
# Proving Programs Correct
%%%
tag := none
%%%
Rewriting a program in accumulator-passing style, or making other transformations that make it run faster, can also make it more difficult to understand.
It can be useful to keep the original version of the program that is more clearly correct, and then use it as an executable specification for the optimized version.
While techniques such as unit testing work just as well in Lean as in any other language, Lean also enables the use of mathematical proofs that completely ensure that both versions of the function return the same result for _all possible_ inputs.
Typically, proving that two functions are equal is done using function extensionality (the {kw}`funext` tactic), which is the principle that two functions are equal if they return the same values for every input.
If the functions are recursive, then induction is usually a good way to prove that their outputs are the same.
Usually, the recursive definition of the function will make recursive calls on one particular argument; this argument is a good choice for induction.
In some cases, the induction hypothesis is not strong enough.
Fixing this problem usually requires thought about how to construct a more general version of the theorem statement that provides induction hypotheses that are strong enough.
In particular, to prove that a function is equivalent to an accumulator-passing version, a theorem statement that relates arbitrary initial accumulator values to the final result of the original function is needed.
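As a small illustration of function extensionality (a sketch, not from the book's examples), two functions that are written differently can nonetheless be proved equal by showing that they agree on every input:
```lean
theorem double_eq : (fun (n : Nat) => n + n) = (fun n => 2 * n) := by
  funext n
  -- After funext, the goal is n + n = 2 * n, which omega solves.
  omega
```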
# Safe Array Indices
%%%
tag := none
%%%
The type {anchorTerm names}`Fin n` represents natural numbers that are strictly less than {anchorName names}`n`.
{anchorName names}`Fin` is short for “finite”.
As with subtypes, a {anchorTerm names}`Fin n` is a structure that contains a {anchorName names}`Nat` and a proof that this {anchorName names}`Nat` is less than {anchorName names}`n`.
There are no values of type {anchorTerm names}`Fin 0`.
If {anchorName names}`arr` is an {anchorTerm names}`Array α`, then {anchorTerm names}`Fin arr.size` always contains a number that is a suitable index into {anchorName names}`arr`.
Lean provides instances of most of the useful numeric type classes for {anchorName names}`Fin`.
The {anchorName names}`OfNat` instances for {anchorName names}`Fin` perform modular arithmetic rather than failing at compile time if the number provided is larger than the {anchorName names}`Fin` can accept.
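For example, an out-of-range literal wraps around rather than being rejected:
```lean
#eval (10 : Fin 3)  -- 10 % 3, that is, 1
```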
# Provisional Proofs
%%%
tag := none
%%%
Sometimes, it can be useful to pretend that a statement is proved without actually doing the work of proving it.
This can be useful when making sure that a proof of a statement would be suitable for some task, such as a rewrite in another proof, determining that an array access is safe, or showing that a recursive call is made on a smaller value than the original argument.
It's very frustrating to spend time proving something, only to discover that some other proof would have been more useful.
The {anchorTerm names}`sorry` tactic causes Lean to provisionally accept a statement as if it were a real proof.
It can be seen as analogous to a stub method that throws a {CSharp}`NotImplementedException` in C#.
Any proof that relies on {anchorTerm names}`sorry` includes a warning in Lean.
Be careful!
The {anchorTerm names}`sorry` tactic can prove _any_ statement, even false statements.
A {anchorTerm names}`sorry`-backed proof that {anchorTerm names}`3 < 2` can cause an out-of-bounds array access to persist to runtime, unexpectedly crashing a program.
Using {anchorTerm names}`sorry` is convenient during development, but keeping it in the code is dangerous.
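For instance, in this sketch (the function name is hypothetical), {lit}`sorry` convinces the type checker that an index is in bounds even though it may not be:
```lean
def third (arr : Array Nat) : Nat :=
  -- This "proof" is a lie for arrays with fewer than three elements.
  have h : 2 < arr.size := by sorry
  arr[2]

-- Evaluating `third #[]` can crash at run time, because the bounds
-- check was discharged by sorry rather than by an actual proof.
```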
# Proving Termination
%%%
tag := none
%%%
When a recursive function does not use structural recursion, Lean cannot automatically determine that it terminates.
In these situations, the function could just be marked {kw}`partial`.
However, it is also possible to provide a proof that the function terminates.
Partial functions have a key downside: they can't be unfolded during type checking or in proofs.
This means that Lean's value as an interactive theorem prover can't be applied to them.
Additionally, showing that a function that is expected to terminate actually always does terminate removes one more potential source of bugs.
The {kw}`termination_by` clause that's allowed at the end of a function can be used to specify the reason why a recursive function terminates.
The clause maps the function's arguments to an expression that is expected to be smaller for each recursive call.
Some examples of expressions that might decrease are the difference between a growing index into an array and the array's size, the length of a list that's cut in half at each recursive call, or a pair of lists, exactly one of which shrinks on each recursive call.
Lean contains proof automation that can automatically determine that some expressions shrink with each call, but many interesting programs will require manual proofs.
These proofs can be provided with {kw}`have`, a version of {kw}`let` that's intended for locally providing proofs rather than values.
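As a sketch (the function is hypothetical, not from the book's examples), a function that counts up through an array can justify its termination by the shrinking difference between the array's size and the index:
```lean
def sumFrom (arr : Array Nat) (i : Nat) : Nat :=
  if h : i < arr.size then
    -- arr.size - i shrinks on the recursive call because i grows while
    -- arr.size stays fixed; the hypothesis h lets Lean prove this.
    arr[i] + sumFrom arr (i + 1)
  else
    0
termination_by arr.size - i
```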
A good way to write recursive functions is to begin by declaring them {kw}`partial` and debugging them with testing until they return the right answers.
Then, {kw}`partial` can be removed and replaced with a {kw}`termination_by` clause.
Lean will place an error highlight on each recursive call for which a proof is needed, and the error message contains the statement that needs to be proved.
Each of these statements can be placed in a {kw}`have`, with the proof being {anchorTerm names}`sorry`.
If Lean accepts the program and it still passes its tests, the final step is to actually prove the theorems that enable Lean to accept it.
This approach can prevent wasting time on proving that a buggy program terminates. |
fp-lean/book/FPLean/ProgramsProofs/TailRecursionProofs.lean | import VersoManual
import FPLean.Examples
open Verso.Genre Manual
open Verso.Code.External
open FPLean
set_option verso.exampleProject "../examples"
set_option verso.exampleModule "Examples.ProgramsProofs.TCO"
#doc (Manual) "Proving Equivalence" =>
%%%
tag := "proving-tail-rec-equiv"
%%%
Programs that have been rewritten to use tail recursion and an accumulator can look quite different from the original program.
The original recursive function is often much easier to understand, but it runs the risk of exhausting the stack at run time.
After testing both versions of the program on examples to rule out simple bugs, proofs can be used to show once and for all that the programs are equivalent.
# Proving {lit}`sum` Equal
%%%
tag := "proving-sum-equal"
%%%
To prove that both versions of {lit}`sum` are equal, begin by writing the theorem statement with a stub proof:
```anchor sumEq0
theorem non_tail_sum_eq_tail_sum : NonTail.sum = Tail.sum := by
skip
```
As expected, Lean describes an unsolved goal:
```anchorError sumEq0
unsolved goals
⊢ NonTail.sum = Tail.sum
```
The {kw}`rfl` tactic cannot be applied here because {anchorName sumEq0}`NonTail.sum` and {anchorName sumEq0}`Tail.sum` are not definitionally equal.
Functions can be equal in more ways than just definitional equality, however.
It is also possible to prove that two functions are equal by proving that they produce equal outputs for the same input.
In other words, $`f = g` can be proved by proving that $`f(x) = g(x)` for all possible inputs $`x`.
This principle is called _function extensionality_.
Function extensionality is exactly the reason why {anchorName sumEq0}`NonTail.sum` equals {anchorName sumEq0}`Tail.sum`: they both sum lists of numbers.
In Lean's tactic language, function extensionality is invoked using {anchorTerm sumEq1}`funext`, followed by a name to be used for the arbitrary argument.
The arbitrary argument is added as an assumption to the context, and the goal changes to require a proof that the functions applied to this argument are equal:
```anchor sumEq1
theorem non_tail_sum_eq_tail_sum : NonTail.sum = Tail.sum := by
funext xs
```
```anchorError sumEq1
unsolved goals
case h
xs : List Nat
⊢ NonTail.sum xs = Tail.sum xs
```
This goal can be proved by induction on the argument {anchorName sumEq1}`xs`.
Both {lit}`sum` functions return {anchorTerm TailSum}`0` when applied to the empty list, which serves as a base case.
Adding a number to the beginning of the input list causes both functions to add that number to the result, which serves as an induction step.
Invoking the {anchorTerm sumEq2a}`induction` tactic results in two goals:
```anchor sumEq2a
theorem non_tail_sum_eq_tail_sum : NonTail.sum = Tail.sum := by
funext xs
induction xs with
| nil => skip
| cons y ys ih => skip
```
```anchorError sumEq2a
unsolved goals
case h.nil
⊢ NonTail.sum [] = Tail.sum []
```
```anchorError sumEq2b
unsolved goals
case h.cons
y : Nat
ys : List Nat
ih : NonTail.sum ys = Tail.sum ys
⊢ NonTail.sum (y :: ys) = Tail.sum (y :: ys)
```
The base case for {anchorName sumEq3}`nil` can be solved using {anchorTerm sumEq3}`rfl`, because both functions return {anchorTerm TailSum}`0` when passed the empty list:
```anchor sumEq3
theorem non_tail_sum_eq_tail_sum : NonTail.sum = Tail.sum := by
funext xs
induction xs with
| nil => rfl
| cons y ys ih => skip
```
The first step in solving the induction step is to simplify the goal, asking {anchorTerm sumEq4}`simp` to unfold {anchorName sumEq4}`NonTail.sum` and {anchorName sumEq4}`Tail.sum`:
```anchor sumEq4
theorem non_tail_sum_eq_tail_sum : NonTail.sum = Tail.sum := by
funext xs
induction xs with
| nil => rfl
| cons y ys ih =>
simp [NonTail.sum, Tail.sum]
```
```anchorError sumEq4
unsolved goals
case h.cons
y : Nat
ys : List Nat
ih : NonTail.sum ys = Tail.sum ys
⊢ y + NonTail.sum ys = Tail.sumHelper 0 (y :: ys)
```
Unfolding {anchorName sumEq5}`Tail.sum` revealed that it immediately delegates to {anchorName sumEq5}`Tail.sumHelper`, which should also be simplified:
```anchor sumEq5
theorem non_tail_sum_eq_tail_sum : NonTail.sum = Tail.sum := by
funext xs
induction xs with
| nil => rfl
| cons y ys ih =>
simp [NonTail.sum, Tail.sum, Tail.sumHelper]
```
In the resulting goal, {anchorName TailSum}`sumHelper` has taken a step of computation and added {anchorName sumEq5}`y` to the accumulator:
```anchorError sumEq5
unsolved goals
case h.cons
y : Nat
ys : List Nat
ih : NonTail.sum ys = Tail.sum ys
⊢ y + NonTail.sum ys = Tail.sumHelper y ys
```
Rewriting with the induction hypothesis removes all mentions of {anchorName sumEq6}`NonTail.sum` from the goal:
```anchor sumEq6
theorem non_tail_sum_eq_tail_sum : NonTail.sum = Tail.sum := by
funext xs
induction xs with
| nil => rfl
| cons y ys ih =>
simp [NonTail.sum, Tail.sum, Tail.sumHelper]
rw [ih]
```
```anchorError sumEq6
unsolved goals
case h.cons
y : Nat
ys : List Nat
ih : NonTail.sum ys = Tail.sum ys
⊢ y + Tail.sum ys = Tail.sumHelper y ys
```
This new goal states that adding some number to the sum of a list is the same as using that number as the initial accumulator in {anchorName TailSum}`sumHelper`.
For the sake of clarity, this new goal can be proved as a separate theorem:
```anchor sumEqHelperBad0
theorem helper_add_sum_accum (xs : List Nat) (n : Nat) :
n + Tail.sum xs = Tail.sumHelper n xs := by
skip
```
```anchorError sumEqHelperBad0
unsolved goals
xs : List Nat
n : Nat
⊢ n + Tail.sum xs = Tail.sumHelper n xs
```
Once again, this is a proof by induction where the base case uses {anchorTerm sumEqHelperBad1}`rfl`:
```anchor sumEqHelperBad1
theorem helper_add_sum_accum (xs : List Nat) (n : Nat) :
n + Tail.sum xs = Tail.sumHelper n xs := by
induction xs with
| nil => rfl
| cons y ys ih => skip
```
```anchorError sumEqHelperBad1
unsolved goals
case cons
n y : Nat
ys : List Nat
ih : n + Tail.sum ys = Tail.sumHelper n ys
⊢ n + Tail.sum (y :: ys) = Tail.sumHelper n (y :: ys)
```
Because this is an inductive step, the goal should be simplified until it matches the induction hypothesis {anchorName sumEqHelperBad2}`ih`.
Simplifying, using the definitions of {anchorName sumEqHelperBad2}`Tail.sum` and {anchorName sumEqHelperBad2}`Tail.sumHelper`, results in the following:
```anchor sumEqHelperBad2
theorem helper_add_sum_accum (xs : List Nat) (n : Nat) :
n + Tail.sum xs = Tail.sumHelper n xs := by
induction xs with
| nil => rfl
| cons y ys ih =>
simp [Tail.sum, Tail.sumHelper]
```
```anchorError sumEqHelperBad2
unsolved goals
case cons
n y : Nat
ys : List Nat
ih : n + Tail.sum ys = Tail.sumHelper n ys
⊢ n + Tail.sumHelper y ys = Tail.sumHelper (y + n) ys
```
Ideally, the induction hypothesis could be used to replace {lit}`Tail.sumHelper (y + n) ys`, but they don't match.
The induction hypothesis can be used for {lit}`Tail.sumHelper n ys`, not {lit}`Tail.sumHelper (y + n) ys`.
In other words, this proof is stuck.
# A Second Attempt
%%%
tag := "proving-sum-equal-again"
%%%
Rather than attempting to muddle through the proof, it's time to take a step back and think.
Why is it that the tail-recursive version of the function is equal to the non-tail-recursive version?
Fundamentally speaking, at each entry in the list, the accumulator grows by the same amount as would be added to the result of the recursion.
This insight can be used to write an elegant proof.
Crucially, the proof by induction must be set up such that the induction hypothesis can be applied to _any_ accumulator value.
Discarding the prior attempt, the insight can be encoded as the following statement:
```anchor nonTailEqHelper0
theorem non_tail_sum_eq_helper_accum (xs : List Nat) :
(n : Nat) → n + NonTail.sum xs = Tail.sumHelper n xs := by
skip
```
In this statement, it's very important that {anchorName nonTailEqHelper0}`n` is part of the type that's after the colon.
The resulting goal begins with {lit}`∀ (n : Nat)`, which is short for “For all {lit}`n`”:
```anchorError nonTailEqHelper0
unsolved goals
xs : List Nat
⊢ ∀ (n : Nat), n + NonTail.sum xs = Tail.sumHelper n xs
```
Using the induction tactic results in goals that include this “for all” statement:
```anchor nonTailEqHelper1a
theorem non_tail_sum_eq_helper_accum (xs : List Nat) :
(n : Nat) → n + NonTail.sum xs = Tail.sumHelper n xs := by
induction xs with
| nil => skip
| cons y ys ih => skip
```
In the {anchorName nonTailEqHelper1a}`nil` case, the goal is:
```anchorError nonTailEqHelper1a
unsolved goals
case nil
⊢ ∀ (n : Nat), n + NonTail.sum [] = Tail.sumHelper n []
```
For the induction step for {anchorName nonTailEqHelper1a}`cons`, both the induction hypothesis and the specific goal contain the “for all {lit}`n`”:
```anchorError nonTailEqHelper1b
unsolved goals
case cons
y : Nat
ys : List Nat
ih : ∀ (n : Nat), n + NonTail.sum ys = Tail.sumHelper n ys
⊢ ∀ (n : Nat), n + NonTail.sum (y :: ys) = Tail.sumHelper n (y :: ys)
```
In other words, the goal has become more challenging to prove, but the induction hypothesis is correspondingly more useful.
A mathematical proof for a statement that begins with “for all $`x`” should assume some arbitrary $`x` and prove the statement.
“Arbitrary” means that no additional properties of $`x` are assumed, so the resulting statement will work for _any_ $`x`.
In Lean, a “for all” statement is a dependent function: no matter which specific value it is applied to, it will return evidence of the proposition.
Similarly, the process of picking an arbitrary $`x` is the same as using {lit}`fun x => ...`.
In the tactic language, this process of selecting an arbitrary $`x` is performed using the {kw}`intro` tactic, which produces the function behind the scenes when the tactic script has completed.
The {kw}`intro` tactic should be provided with the name to be used for this arbitrary value.
Using the {kw}`intro` tactic in the {anchorName nonTailEqHelper2}`nil` case removes the {lit}`∀ (n : Nat),` from the goal, and adds an assumption {lit}`n : Nat`:
```anchor nonTailEqHelper2
theorem non_tail_sum_eq_helper_accum (xs : List Nat) :
(n : Nat) → n + NonTail.sum xs = Tail.sumHelper n xs := by
induction xs with
| nil => intro n
| cons y ys ih => skip
```
```anchorError nonTailEqHelper2
unsolved goals
case nil
n : Nat
⊢ n + NonTail.sum [] = Tail.sumHelper n []
```
Both sides of this propositional equality are definitionally equal to {anchorName nonTailEqHelper3}`n`, so {anchorTerm nonTailEqHelper3}`rfl` suffices:
```anchor nonTailEqHelper3
theorem non_tail_sum_eq_helper_accum (xs : List Nat) :
(n : Nat) → n + NonTail.sum xs = Tail.sumHelper n xs := by
induction xs with
| nil =>
intro n
rfl
| cons y ys ih => skip
```
The {anchorName nonTailEqHelper3}`cons` goal also contains a “for all”:
```anchorError nonTailEqHelper3
unsolved goals
case cons
y : Nat
ys : List Nat
ih : ∀ (n : Nat), n + NonTail.sum ys = Tail.sumHelper n ys
⊢ ∀ (n : Nat), n + NonTail.sum (y :: ys) = Tail.sumHelper n (y :: ys)
```
This suggests the use of {kw}`intro`:
```anchor nonTailEqHelper4
theorem non_tail_sum_eq_helper_accum (xs : List Nat) :
(n : Nat) → n + NonTail.sum xs = Tail.sumHelper n xs := by
induction xs with
| nil =>
intro n
rfl
| cons y ys ih =>
intro n
```
```anchorError nonTailEqHelper4
unsolved goals
case cons
y : Nat
ys : List Nat
ih : ∀ (n : Nat), n + NonTail.sum ys = Tail.sumHelper n ys
n : Nat
⊢ n + NonTail.sum (y :: ys) = Tail.sumHelper n (y :: ys)
```
The proof goal now contains both {anchorName nonTailEqHelper5}`NonTail.sum` and {anchorName nonTailEqHelper5}`Tail.sumHelper` applied to {lit}`y :: ys`.
The simplifier can make the next step more clear:
```anchor nonTailEqHelper5
theorem non_tail_sum_eq_helper_accum (xs : List Nat) :
(n : Nat) → n + NonTail.sum xs = Tail.sumHelper n xs := by
induction xs with
| nil =>
intro n
rfl
| cons y ys ih =>
intro n
simp [NonTail.sum, Tail.sumHelper]
```
```anchorError nonTailEqHelper5
unsolved goals
case cons
y : Nat
ys : List Nat
ih : ∀ (n : Nat), n + NonTail.sum ys = Tail.sumHelper n ys
n : Nat
⊢ n + (y + NonTail.sum ys) = Tail.sumHelper (y + n) ys
```
This goal is very close to matching the induction hypothesis.
There are two ways in which it does not match:
* The left-hand side of the equation is {lit}`n + (y + NonTail.sum ys)`, but the induction hypothesis needs the left-hand side to be a number added to {lit}`NonTail.sum ys`.
In other words, this goal should be rewritten to {lit}`(n + y) + NonTail.sum ys`, which is valid because addition of natural numbers is associative.
* When the left side has been rewritten to {lit}`(y + n) + NonTail.sum ys`, the accumulator argument on the right side should be {lit}`n + y` rather than {lit}`y + n` in order to match.
This rewrite is valid because addition is also commutative.
The associativity and commutativity of addition have already been proved in Lean's standard library.
The proof of associativity is named {anchorTerm NatAddAssoc}`Nat.add_assoc`, and its type is {anchorTerm NatAddAssoc}`(n m k : Nat) → (n + m) + k = n + (m + k)`, while the proof of commutativity is called {anchorTerm NatAddComm}`Nat.add_comm` and has type {anchorTerm NatAddComm}`(n m : Nat) → n + m = m + n`.
Normally, the {kw}`rw` tactic is provided with an expression whose type is an equality.
However, if the argument is instead a dependent function whose return type is an equality, it attempts to find arguments to the function that would allow the equality to match something in the goal.
There is only one opportunity to apply associativity, though the direction of the rewrite must be reversed because the right side of the equality in {anchorTerm NatAddAssoc}`(n + m) + k = n + (m + k)` is the one that matches the proof goal:
```anchor nonTailEqHelper6
theorem non_tail_sum_eq_helper_accum (xs : List Nat) :
(n : Nat) → n + NonTail.sum xs = Tail.sumHelper n xs := by
induction xs with
| nil =>
intro n
rfl
| cons y ys ih =>
intro n
simp [NonTail.sum, Tail.sumHelper]
rw [←Nat.add_assoc]
```
```anchorError nonTailEqHelper6
unsolved goals
case cons
y : Nat
ys : List Nat
ih : ∀ (n : Nat), n + NonTail.sum ys = Tail.sumHelper n ys
n : Nat
⊢ n + y + NonTail.sum ys = Tail.sumHelper (y + n) ys
```
Rewriting directly with {anchorTerm nonTailEqHelper7}`rw [Nat.add_comm]`, however, leads to the wrong result.
The {kw}`rw` tactic guesses the wrong location for the rewrite, leading to an unintended goal:
```anchor nonTailEqHelper7
theorem non_tail_sum_eq_helper_accum (xs : List Nat) :
(n : Nat) → n + NonTail.sum xs = Tail.sumHelper n xs := by
induction xs with
| nil =>
intro n
rfl
| cons y ys ih =>
intro n
simp [NonTail.sum, Tail.sumHelper]
rw [←Nat.add_assoc]
rw [Nat.add_comm]
```
```anchorError nonTailEqHelper7
unsolved goals
case cons
y : Nat
ys : List Nat
ih : ∀ (n : Nat), n + NonTail.sum ys = Tail.sumHelper n ys
n : Nat
⊢ NonTail.sum ys + (n + y) = Tail.sumHelper (y + n) ys
```
This can be fixed by explicitly providing {anchorName nonTailEqHelper8}`y` and {anchorName nonTailEqHelper8}`n` as arguments to {anchorName nonTailEqHelper8}`Nat.add_comm`:
```anchor nonTailEqHelper8
theorem non_tail_sum_eq_helper_accum (xs : List Nat) :
(n : Nat) → n + NonTail.sum xs = Tail.sumHelper n xs := by
induction xs with
| nil =>
intro n
rfl
| cons y ys ih =>
intro n
simp [NonTail.sum, Tail.sumHelper]
rw [←Nat.add_assoc]
rw [Nat.add_comm y n]
```
```anchorError nonTailEqHelper8
unsolved goals
case cons
y : Nat
ys : List Nat
ih : ∀ (n : Nat), n + NonTail.sum ys = Tail.sumHelper n ys
n : Nat
⊢ n + y + NonTail.sum ys = Tail.sumHelper (n + y) ys
```
The goal now matches the induction hypothesis.
In particular, the induction hypothesis's type is a dependent function type.
Applying {anchorName nonTailEqHelperDone}`ih` to {anchorTerm nonTailEqHelperDone}`n + y` results in exactly the desired type.
The {kw}`exact` tactic completes a proof goal if its argument has exactly the desired type:
```anchor nonTailEqHelperDone
theorem non_tail_sum_eq_helper_accum (xs : List Nat) :
(n : Nat) → n + NonTail.sum xs = Tail.sumHelper n xs := by
induction xs with
| nil => intro n; rfl
| cons y ys ih =>
intro n
simp [NonTail.sum, Tail.sumHelper]
rw [←Nat.add_assoc]
rw [Nat.add_comm y n]
exact ih (n + y)
```
The actual proof requires only a little additional work to get the goal to match the helper's type.
The first step is still to invoke function extensionality:
```anchor nonTailEqReal0
theorem non_tail_sum_eq_tail_sum : NonTail.sum = Tail.sum := by
funext xs
```
```anchorError nonTailEqReal0
unsolved goals
case h
xs : List Nat
⊢ NonTail.sum xs = Tail.sum xs
```
The next step is to unfold {anchorName nonTailEqReal1}`Tail.sum`, exposing {anchorName TailSum}`Tail.sumHelper`:
```anchor nonTailEqReal1
theorem non_tail_sum_eq_tail_sum : NonTail.sum = Tail.sum := by
funext xs
simp [Tail.sum]
```
```anchorError nonTailEqReal1
unsolved goals
case h
xs : List Nat
⊢ NonTail.sum xs = Tail.sumHelper 0 xs
```
Having done this, the types almost match.
However, the helper has an additional addend on the left side.
In other words, the proof goal is {lit}`NonTail.sum xs = Tail.sumHelper 0 xs`, but applying {anchorName nonTailEqHelper0}`non_tail_sum_eq_helper_accum` to {anchorName nonTailEqReal2}`xs` and {anchorTerm NatZeroAdd}`0` yields the type {lit}`0 + NonTail.sum xs = Tail.sumHelper 0 xs`.
Another standard library proof, {anchorTerm NatZeroAdd}`Nat.zero_add`, has type {anchorTerm NatZeroAdd}`(n : Nat) → 0 + n = n`.
Applying this function to {anchorTerm nonTailEqReal2}`NonTail.sum xs` results in an expression with type {anchorTerm NatZeroAddApplied}`0 + NonTail.sum xs = NonTail.sum xs`, so rewriting from right to left results in the desired goal:
```anchor nonTailEqReal2
theorem non_tail_sum_eq_tail_sum : NonTail.sum = Tail.sum := by
funext xs
simp [Tail.sum]
rw [←Nat.zero_add (NonTail.sum xs)]
```
```anchorError nonTailEqReal2
unsolved goals
case h
xs : List Nat
⊢ 0 + NonTail.sum xs = Tail.sumHelper 0 xs
```
Finally, the helper can be used to complete the proof:
```anchor nonTailEqRealDone
theorem non_tail_sum_eq_tail_sum : NonTail.sum = Tail.sum := by
funext xs
simp [Tail.sum]
rw [←Nat.zero_add (NonTail.sum xs)]
exact non_tail_sum_eq_helper_accum xs 0
```
This proof demonstrates a general pattern that can be used when proving that an accumulator-passing tail-recursive function is equal to the non-tail-recursive version.
The first step is to discover the relationship between the starting accumulator argument and the final result.
For instance, beginning {anchorName TailSum}`Tail.sumHelper` with an accumulator of {anchorName accum_stmt}`n` results in the final sum being added to {anchorName accum_stmt}`n`, and beginning {anchorName accum_stmt}`Tail.reverseHelper` with an accumulator of {anchorName accum_stmt}`ys` results in the final reversed list being prepended to {anchorName accum_stmt}`ys`.
The second step is to write down this relationship as a theorem statement and prove it by induction.
While the accumulator is always initialized with some neutral value in practice, such as {anchorTerm TailSum}`0` or {anchorTerm accum_stmt}`[]`, this more general statement that allows the starting accumulator to be any value is what's needed to get a strong enough induction hypothesis.
Finally, using this helper theorem with the actual initial accumulator value results in the desired proof.
For example, in {anchorName nonTailEqRealDone}`non_tail_sum_eq_tail_sum`, the accumulator is specified to be {anchorTerm TailSum}`0`.
This may require rewriting the goal to make the neutral initial accumulator values occur in the right place.
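Following this pattern for {lit}`reverse`, the helper statement relates an arbitrary accumulator {lit}`ys` to the result of the original function. This is a sketch (the theorem name is hypothetical, and the exact {kw}`simp` lemmas may need adjustment), but the shape of the proof mirrors the one for {lit}`sum`:
```lean
theorem non_tail_reverse_eq_helper_accum (xs : List α) :
    (ys : List α) → NonTail.reverse xs ++ ys = Tail.reverseHelper ys xs := by
  induction xs with
  | nil => intro ys; rfl
  | cons x xs ih =>
    intro ys
    -- Unfolding both definitions and normalizing ++ leaves a goal that
    -- matches the induction hypothesis at the accumulator x :: ys.
    simp [NonTail.reverse, Tail.reverseHelper]
    exact ih (x :: ys)
```
The final theorem then instantiates {lit}`ys` with {lit}`[]`, rewriting with the fact that appending {lit}`[]` is the identity.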
# Functional Induction
%%%
tag := "fun-induction"
%%%
The proof of {anchorName nonTailEqRealDone}`non_tail_sum_eq_helper_accum` follows the implementation of {anchorName TailSum}`Tail.sumHelper` closely.
There is not, however, a perfect match between the implementation and the structure expected by mathematical induction, which makes it necessary to manage the assumption {anchorName nonTailEqHelperDone}`n` carefully.
This is a small amount of work in the case of {anchorName nonTailEqHelperDone}`non_tail_sum_eq_helper_accum`, but proofs about functions whose definitions are further from the structure expected by {tactic}`induction` require more bookkeeping.
In addition to proving theorems about recursive functions by induction on one of the arguments, Lean supports proofs by induction on the recursive call structure of functions.
This {deftech}_functional induction_ results in a base case for each branch of the function's control flow that does not include a recursive call, and inductive steps for each branch that does.
A proof by functional induction should demonstrate that the theorem holds for the non-recursive branches, and that if the theorem holds for the result of each recursive call, then it also holds for the result of the recursive branch.
:::paragraph
Using functional induction simplifies {anchorName nonTailEqHelperFunInd1}`non_tail_sum_eq_helper_accum`:
```anchor nonTailEqHelperFunInd1
theorem non_tail_sum_eq_helper_accum (xs : List Nat) (n : Nat) :
n + NonTail.sum xs = Tail.sumHelper n xs := by
fun_induction Tail.sumHelper with
| case1 n => skip
| case2 n y ys ih => skip
```
Each branch of the proof matches the corresponding branch of {anchorName TailSum}`Tail.sumHelper`:
```anchorTerm TailSum
def Tail.sumHelper (soFar : Nat) : List Nat → Nat
| [] => soFar
| x :: xs => sumHelper (x + soFar) xs
```
In the first, {anchorTerm nonTailEqHelperFunInd1}`case1`, the right side of the equality is the accumulator value, called {anchorName nonTailEqHelperFunInd1}`n` in the proof:
```anchorError nonTailEqHelperFunInd1
unsolved goals
case case1
n : Nat
⊢ n + NonTail.sum [] = n
```
In the second, {anchorTerm nonTailEqHelperFunInd1}`case2`, the right side of the equality is the next step in the tail-recursive loop:
```anchorError nonTailEqHelperFunInd1
unsolved goals
case case2
n y : Nat
ys : List Nat
ih : y + n + NonTail.sum ys = Tail.sumHelper (y + n) ys
⊢ n + NonTail.sum (y :: ys) = Tail.sumHelper (y + n) ys
```
:::
:::paragraph
The resulting proof can be simpler.
The fundamentals of the argument, including the properties of addition that are used, are the same; however, the bookkeeping has been removed.
It is no longer necessary to manually juggle the accumulator value, and the induction hypothesis can be used directly instead of requiring instantiation:
```anchor nonTailEqHelperFunInd2
theorem non_tail_sum_eq_helper_accum (xs : List Nat) (n : Nat) :
n + NonTail.sum xs = Tail.sumHelper n xs := by
fun_induction Tail.sumHelper with
| case1 n => simp [NonTail.sum]
| case2 n y ys ih =>
simp [NonTail.sum]
rw [←Nat.add_assoc]
rw [Nat.add_comm n y]
assumption
```
:::
:::paragraph
The {tactic}`grind` tactic is very well suited to this kind of goal.
Unlike {tactic}`simp` and {tactic}`rw`, it is not directional; internally, it accumulates a collection of facts until it either proves the goal completely or fails to do so.
It is preconfigured to use basic facts about arithmetic, such as the associativity and commutativity of addition, and it automatically uses local assumptions such as the induction hypothesis.
Using {tactic}`grind`, this proof becomes short and to-the-point:
```anchor nonTailEqHelperFunInd3
theorem non_tail_sum_eq_helper_accum (xs : List Nat) (n : Nat) :
n + NonTail.sum xs = Tail.sumHelper n xs := by
fun_induction Tail.sumHelper <;> grind [NonTail.sum]
```
This proof also matches the way the proof might be explained to a skilled programmer: “Just check both branches of {anchorName nonTailEqHelperFunInd3}`Tail.sumHelper`!”
:::
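:::paragraph
The arithmetic reasoning that {tactic}`grind` performs automatically can also be seen in isolation. The following standalone example (not part of the chapter's example code) requires both the associativity and the commutativity of addition, and {tactic}`grind` closes it using only its built-in arithmetic facts:
```lean
-- Associativity and commutativity of addition, handled automatically:
example (a b c : Nat) : a + (b + c) = c + b + a := by
  grind
```
:::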
# Exercises
%%%
tag := "tail-recursion-proof-exercises"
%%%
## Warming Up
%%%
tag := none
%%%
Write your own proofs for {anchorName NatZeroAdd}`Nat.zero_add`, {anchorName NatAddAssoc}`Nat.add_assoc`, and {anchorName NatAddComm}`Nat.add_comm` using the {tactic}`induction` tactic.
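As a starting point, here is one possible scaffold for the first of these proofs, with the two cases left open (a sketch; the name `zero_add'` avoids clashing with the library's own theorem):
```lean
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => skip -- Base case: show 0 + 0 = 0
  | succ k ih => skip -- Inductive step: ih : 0 + k = k is available
```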
## More Accumulator Proofs
%%%
tag := none
%%%
### Reversing Lists
%%%
tag := none
%%%
Adapt the proof for {anchorName NonTailSum}`sum` into a proof for {anchorName NonTailReverse}`NonTail.reverse` and {anchorName TailReverse}`Tail.reverse`.
The first step is to think about the relationship between the accumulator value being passed to {anchorName TailReverse}`Tail.reverseHelper` and the non-tail-recursive reverse.
Just as adding a number to the accumulator in {anchorName TailSum}`Tail.sumHelper` is the same as adding it to the overall sum, using {anchorName names}`List.cons` to add a new entry to the accumulator in {anchorName TailReverse}`Tail.reverseHelper` is equivalent to some change to the overall result.
Try three or four different accumulator values with pencil and paper until the relationship becomes clear.
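This exploration can also be done with `#eval` instead of pencil and paper. For instance (assuming the definitions of `Tail.reverseHelper` and `NonTail.reverse` from the exercises), comparing the helper's output for a non-empty accumulator with the plain reversal suggests the relationship:
```lean
#eval Tail.reverseHelper [0] [1, 2, 3]  -- [3, 2, 1, 0]
#eval NonTail.reverse [1, 2, 3]         -- [3, 2, 1]
```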
Use this relationship to prove a suitable helper theorem.
Try proving this helper theorem both using induction on lists and via functional induction.
Then, write down the overall theorem.
Because {anchorName reverseEqStart}`NonTail.reverse` and {anchorName TailReverse}`Tail.reverse` are polymorphic, stating their equality requires the use of {lit}`@` to stop Lean from trying to figure out which type to use for {anchorName reverseEqStart}`α`.
Once {anchorName reverseEqStart}`α` is treated as an ordinary argument, {kw}`funext` should be invoked with both {anchorName reverseEqStart}`α` and {anchorName reverseEqStart}`xs`:
```anchor reverseEqStart
theorem non_tail_reverse_eq_tail_reverse :
@NonTail.reverse = @Tail.reverse := by
funext α xs
```
This results in a suitable goal:
```anchorError reverseEqStart
unsolved goals
case h.h
α : Type u_1
xs : List α
⊢ NonTail.reverse xs = Tail.reverse xs
```
### Factorial
%%%
tag := none
%%%
Prove that {anchorName NonTailFact}`NonTail.factorial` from the exercises in the previous section is equal to your tail-recursive solution by finding the relationship between the accumulator and the result and proving a suitable helper theorem. |
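For reference, a tail-recursive factorial might look like the following sketch (your solution from the earlier exercise may differ in its details, such as the direction of multiplication):
```lean
def Tail.factorialHelper (soFar : Nat) : Nat → Nat
  | 0 => soFar
  | n + 1 => factorialHelper (soFar * (n + 1)) n

def Tail.factorial (n : Nat) : Nat :=
  Tail.factorialHelper 1 n
```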
fp-lean/book/static/fonts/source-serif/LICENSE.md | Copyright 2014 - 2023 Adobe (http://www.adobe.com/), with Reserved Font Name ‘Source’. All Rights Reserved. Source is a trademark of Adobe in the United States and/or other countries.
This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at: http://scripts.sil.org/OFL
-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------
PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.
The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.
DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.
"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).
"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).
"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.
"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.
PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:
1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.
2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.
3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.
4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.
5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.
TERMINATION
This license becomes null and void if any of the above conditions are
not met.
DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE. |
fp-lean/book/static/fonts/noto-sans-mono/OFL.txt | Copyright 2022 The Noto Project Authors (https://github.com/notofonts/latin-greek-cyrillic)
This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org
-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------
PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.
The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.
DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.
"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).
"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).
"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.
"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.
PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:
1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.
2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.
3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.
4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.
5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.
TERMINATION
This license becomes null and void if any of the above conditions are
not met.
DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE. |
fp-lean/book/static/fonts/source-code-pro/LICENSE.md | © 2023 Adobe (http://www.adobe.com/), with Reserved Font Name 'Source'. All Rights Reserved. Source is a trademark of Adobe in the United States and/or other countries.
This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at: http://scripts.sil.org/OFL
-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------
PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.
The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.
DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.
"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).
"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).
"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.
"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.
PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:
1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.
2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.
3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.
4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.
5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.
TERMINATION
This license becomes null and void if any of the above conditions are
not met.
DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE. |
fp-lean/book/static/fonts/source-sans/LICENSE.md | Copyright 2010-2024 Adobe (http://www.adobe.com/), with Reserved Font Name 'Source'. All Rights Reserved. Source is a trademark of Adobe in the United States and/or other countries.
This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at: http://scripts.sil.org/OFL
-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------
PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.
The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.
DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.
"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).
"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).
"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.
"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.
PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:
1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.
2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.
3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.
4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.
5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.
TERMINATION
This license becomes null and void if any of the above conditions are
not met.
DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE. |
reference-manual/TutorialMain.lean | import VersoTutorial
import VersoBlog
import Tutorial
import Tutorial.Meta.Theme
import VersoWeb.Theme
open Verso.Genre.Manual
open Verso.Genre.Tutorial
open Verso.Genre.Blog (Page)
open Verso.Output.Html in
def plausible := {{
<script defer="defer" data-domain="lean-lang.org" src="https://plausible.io/js/script.outbound-links.js"></script>
}}
def version (_content : Array (Verso.Doc.Inline g)) : Verso.Doc.Inline g := .text Lean.versionString
def manualLink (args : Array (Verso.Doc.Inline g)) : Verso.Doc.Inline g :=
Verso.Doc.Inline.link args Lean.manualRoot
open Verso.Doc Concrete in
def tutorials : Tutorials where
content :=
(verso (Page) "Tutorials"
:::::::
These tutorials cover version {version}[] of Lean.
While the {manualLink}[reference manual] describes the system and its features in detail, these tutorials provide focused introductions to specific tools.
:::::::).toPart
topics := #[
{ title := #[inlines!"Tactics"],
titleString := "Tactics"
description := #[blocks!"These tutorials demonstrate Lean's advanced proof automation."]
tutorials := #[%doc Tutorial.VCGen, %doc Tutorial.Grind.IndexMap]
}
]
open Verso.Genre.Blog Site.Syntax in
open Verso.Doc in
def leanSite : Site :=
site home /
"install" install
"learn" learn
"community" community
"use-cases" «use-cases»
"fro" fro
where
placeholder title : Part Page := { title := #[.text title], titleString := title, metadata := none, content := #[], subParts := #[] }
home : VersoDoc Page := { construct := fun _ => placeholder "Lean" }
install : VersoDoc Page := { construct := fun _ => placeholder "Install" }
learn : VersoDoc Page := { construct := fun _ => placeholder "Learn" }
community : VersoDoc Page := { construct := fun _ => placeholder "Community" }
«use-cases» : VersoDoc Page := { construct := fun _ => placeholder "Use Cases" }
«trademark-policy» : VersoDoc Page := { construct := fun _ => placeholder "Trademark Policy" }
privacy : VersoDoc Page := { construct := fun _ => placeholder "Privacy" }
terms : VersoDoc Page := { construct := fun _ => placeholder "Terms" }
logos : VersoDoc Page := { construct := fun _ => placeholder "Logos" }
fro : VersoDoc Page := { construct := fun _ => placeholder "FRO" }
documentation : VersoDoc Page := { construct := fun _ => placeholder "Documentation" }
examples : VersoDoc Page := { construct := fun _ => placeholder "Examples" }
publications : VersoDoc Page := { construct := fun _ => placeholder "Publications" }
links : VersoDoc Page := { construct := fun _ => placeholder "Links" }
people : VersoDoc Page := { construct := fun _ => placeholder "People" }
def codeColors := r#"
:root {
--verso-code-color: var(--color-text) !important;
}
"#
def localToCStyle := r#"
/*
nav.local-toc {
position: fixed;
top: calc(var(--nav-height) + 1rem + 60px);
right: calc((100vw - var(--container-width)) / 2 - 15rem - 2rem);
width: 15rem;
font-size: 0.875rem;
max-height: calc(100vh - 2rem - var(--nav-height) - 60px);
overflow-y: auto;
}
*/
nav.local-toc h1 {
display: none;
}
nav.local-toc ol {
list-style: none;
margin: 0;
padding: 0;
display: none;
}
nav.local-toc li {
margin: 0.5rem 0;
}
nav.local-toc a {
color: var(--verso-text-color);
text-decoration: none;
}
nav.local-toc a:hover {
color: var(--color-accent);
}
/* On smaller screens, display normally */
/*
@media (max-width: 90rem) {
nav.local-toc {
position: static;
right: auto;
width: auto;
max-height: none;
border-left: 3px solid var(--color-border);
padding-left: 1rem;
margin-bottom: 1rem;
}
}
*/
nav.local-toc ol ol {
padding-left: 1rem;
margin-top: 0.25rem;
}
nav.local-toc ol ol li {
margin: 0.25rem 0;
}
/*******/
nav.local-toc > div:first-child {
margin-bottom: 1rem;
}
nav.local-toc h1 {
font-size: 1rem;
margin: 0 0 0.5rem 0;
font-weight: 600;
}
nav.local-toc .code-links {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
}
nav.local-toc .code-link {
font-size: 0.8125rem;
padding: 0.25rem 0.5rem;
border: 1px solid var(--color-border);
border-radius: 6px;
color: var(--verso-text-color);
text-decoration: none;
display: inline-block;
}
nav.local-toc .code-link:hover {
background: var(--verso-selected-color);
border-color: var(--color-accent);
}
nav.local-toc .code-link code {
background: none;
padding: 0;
font-size: inherit;
}
nav.local-toc .code-link.live::before {
content: "↪ ";
margin-right: 0.25rem;
}
nav.local-toc .code-link.download::before {
content: "📦 ";
margin-right: 0.25rem;
}
"#
open Verso.Genre.Blog.Template in
open Verso.Output Html in
def theme :=
{ LeanLangOrg.theme with
pageTemplate := do
let tutorialNav : Option Html ← param? "tutorialNav"
let content : Html ← param "content"
let content := tutorialNav.getD .empty ++ content
withTheReader Context (fun ρ => { ρ with params := ρ.params.insert "content" content }) LeanLangOrg.theme.pageTemplate
cssFiles := LeanLangOrg.theme.cssFiles.push ("code-colors.css", codeColors) |>.push ("local-toc.css", localToCStyle)
}
def main :=
tutorialsMain tutorials (config := {destination := "_tutorial-out"}) (theme := theme) (navSite := leanSite) |
reference-manual/Tutorial.lean | import Tutorial.VCGen
import Tutorial.Grind.IndexMap |
reference-manual/README.md | # Lean Language Reference
The Lean Language Reference is intended as a comprehensive, precise
description of Lean. It is first and foremost a reference work in
which Lean users can look up detailed information, rather than a
tutorial for new users.
This new reference has been rebuilt from the ground up in Verso. This
means that all example code is type checked, the source code contains
tests to ensure that it stays up-to-date with respect to changes in
Lean, and we can add any features that we need to improve the
documentation. Verso also makes it easy to integrate tightly with
Lean, so we can show function docstrings directly, mechanically check
descriptions of syntax against the actual parser, and insert
cross-references automatically.
## Reading the Manual
The latest release of this reference manual can be read
[here](https://lean-lang.org/doc/reference/latest/).
For developers:
- The output of building the current state of the `nightly-testing`
branch can be read
[here](https://lean-reference-manual-review.netlify.app/).
- Each pull request in this repository causes two separate previews to
be generated; one of them includes extra information that's only
useful to those actively working on the text, such as TODO notes and
symbol-coverage progress bars. Links to both previews are posted by a
bot to the PR after the first successful build.
## Branches and Development
The two most important branches are:
- `main` tracks the latest Lean release or release candidate
- `nightly-testing` tracks the latest Lean nightlies
New content that addresses in-development features of Lean will be
written on `nightly-testing`, while updates to existing content may be
written either on `main` or `nightly-testing`, as appropriate. From
time to time, `main` will be merged into `nightly-testing`; when Lean
is released, the commits in `nightly-testing` are rebased onto `main`
to achieve a clean history.
See later in this README for details about how these branches are kept
up to date.
## Building the Reference Manual Locally
This reference manual contains figures that are built from LaTeX
sources. To build them, you'll need the following:
- A LaTeX installation, including LuaLaTeX and the following packages
from TeXLive:
- `scheme-minimal`
- `latex-bin`
- `fontspec`
- `standalone`
- `pgf`
- `pdftexcmds`
- `luatex85`
- `lualatex-math`
- `infwarerr`
- `ltxcmds`
- `xcolor`
- `fontawesome`
- `spath3`
- `inter`
- `epstopdf-pkg`
- `tex-gyre`
- `tex-gyre-math`
- `unicode-math`
- `amsmath`
- `sourcecodepro`
- `pdftocairo`, which can be found in the `poppler-utils` package on
Debian-derived systems and the `poppler` package in Homebrew
Additionally, to run the style checker locally, you'll need
[Vale](https://vale.sh/). It runs in CI, so this is not a necessary
step to contribute.
Part of the manual deployment process adds JavaScript to each version
that inserts metadata to both enable links to live.lean-lang.org and
arrange for them to target the correct versions of Lean. To test these
links locally, follow the instructions in a comment in the definition
of `staticJs` in `Main.lean`.
To build the manual, run the following command:
```
lake exe generate-manual --depth 2
```
Then run a local web server on its output:
```
python3 ./server.py 8880 &
```
Then open <http://localhost:8880> in your browser.
## Contributing
Please see [CONTRIBUTING.md](CONTRIBUTING.md) for more information.
# Deployment Infrastructure
TL;DR: push a tag of the form `vX.Y.Z` onto the commit that should be
released as the manual for that version, and the rest is automatic.
This repository contains the deployment infrastructure for both the
reference manual and the tutorials site. Deployment happens in GitHub
Actions, in response to certain tags being pushed. Because the latest
version of the GH action file will always be used, and we want to be
able to mutate tags to re-deploy old manual versions (e.g. to update
CSS for consistent look and feel while keeping content
version-accurate, or add a "THIS IS OBSOLETE" banner in a few years),
the steps of the workflow that might change are captured in scripts
that are versioned along with the code.
The files are:
- `prep.sh` is used to set up the build, installing OS-level
dependencies and Elan.
- `build.sh` is used to build the executable that generates the
manual.
- `generate.sh` builds both the reference manual and tutorials, saving
them in `/html/site/reference` and `/html/site/tutorials`.
- `release.py` puts the generated HTML in the right place on a new
commit on a deployment branch (`deploy` for the reference manual,
`deploy-tutorials` for tutorials).
Everything above is what needs to happen specifically to the single
version of the documentation that is being updated in the course of
the deploy. There is one further step, which is computing the desired
state of the final `postdeploy` branches from the state in the
`deploy` branches. This is done by the script `overlay.py`, which is
triggered by pushes to `deploy`, and therefore runs at branch `main`
rather than at the tag being pushed. It processes both the reference
manual (`deploy` → `postdeploy`) and tutorials (`deploy-tutorials` →
`postdeploy-tutorials`).
We might have named the two branches `predeploy` and `deploy`, but
chose instead `deploy` and `postdeploy` so that we could leave
unchanged the older tags for particular versions of the manual which
still have workflows that emit commits to `deploy`.
## Deployment Overview
The goal is to have versioned snapshots of both the reference manual
and tutorials, with a structure like:
- `https://lean-lang.org/doc/reference/latest/` - latest version
- `https://lean-lang.org/doc/reference/stable/` - latest stable
version
- `https://lean-lang.org/doc/reference/4.19.0/` - reference for
v4.19.0
- `https://lean-lang.org/doc/reference/4.20.0/` - reference for
v4.20.0
- `https://lean-lang.org/doc/tutorials/latest/` - latest tutorials
- `https://lean-lang.org/doc/tutorials/stable/` - latest stable
tutorials
- `https://lean-lang.org/doc/tutorials/4.19.0/` - tutorials for
v4.19.0
- `https://lean-lang.org/doc/tutorials/4.20.0/` - tutorials for
v4.20.0
and so forth. The base URLs should redirect to `latest`. It's
important to be able to edit past deployments as well.
Orphan branches `deploy` and `deploy-tutorials` contain the versioned
content for each site. For example, the `deploy` branch might contain:
- `/4.25.0-rc1/` - built HTML for 4.25.0-rc1
- `/4.24.0/` - built HTML for 4.24.0
- `/4.23.0/` - built HTML for 4.23.0
- `/latest/` - copy of `/4.25.0-rc1/` (the most recent version)
- `/stable/` - copy of `/4.24.0/` (the most recent non-RC version)
The `latest` and `stable` directories are full copies rather than
symlinks because Netlify deployment doesn't support symlinks.
The `deploy-tutorials` branch has the same structure for tutorials.
The `release.py` script is responsible for updating this structure. It
takes the generated HTML directory, the version number, and the
deployment branch name as arguments, and then does the following:
1. It copies the HTML to the branch (deleting an existing directory
first if needed).
2. It updates the `latest` directory to be a copy of the most recent
version, with all numbered releases being considered more recent
than any nightly and real releases being more recent than their
RCs.
3. It updates the `stable` directory to be a copy of the most recent
non-RC version.
4. It commits the changes to the deployment branch, then switches
back to the original branch.
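The ordering used in steps 2 and 3 can be sketched in a few lines of Python. This is an illustrative approximation, not the actual `release.py` code; the version-name formats are assumptions based on the directory listing above.

```python
import re

def release_key(name: str):
    """Sort key matching the ordering described above: every numbered
    release outranks every nightly, and a real release outranks its
    own release candidates. Name formats ('4.24.0', '4.25.0-rc1',
    'nightly-...') are assumed from the examples above."""
    if name.startswith("nightly-"):
        return (0, name)  # nightlies sort below all numbered releases
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)(?:-rc(\d+))?", name)
    major, minor, patch, rc = m.groups()
    # RC releases rank below the corresponding real release
    rc_rank = (1, int(rc)) if rc else (2, 0)
    return (1, (int(major), int(minor), int(patch)), rc_rank)

versions = ["nightly-2025-01-31", "4.23.0", "4.24.0", "4.24.0-rc2", "4.25.0-rc1"]
latest = max(versions, key=release_key)   # most recent overall
stable = max((v for v in versions
              if re.fullmatch(r"\d+\.\d+\.\d+", v)),
             key=release_key)             # most recent non-RC release
```

With the example list above, `latest` is `4.25.0-rc1` and `stable` is `4.24.0`.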
A successful push to `deploy` triggers a GH action that runs the
`overlay.py` script, which creates commits to `postdeploy` (based on
`deploy`) and `postdeploy-tutorials` (based on `deploy-tutorials`).
These commits include all desired overlays. At the time of writing, this
is just a single file `static/metadata.js` in each version that
contains information about whether the version is in fact stable or
latest.
A successful push to `postdeploy` or `postdeploy-tutorials` triggers a
GH Action which publishes the content to Netlify.
## Overlays
The script `overlay.py` computes `postdeploy` from `deploy` any time
`deploy` changes, and `postdeploy-tutorials` from `deploy-tutorials`
any time `deploy-tutorials` changes. Its purpose is to add metadata or
make in-place changes to deployed content that is best thought of as a
unified overlay on top of the data that exists at the historical
version tags.
Examples of the sorts of things we might like to achieve with this
overlay mechanism are:
- injecting version metadata so that a particular version of the
manual knows _that_ it is the current latest or latest-stable
version
- global CSS changes across all versions, for consistency
- banners appended to sufficiently old versions' manuals describing
how they are so old as to be deprecated and unsupported
Interactions between overlays created by `overlay.py` and reference
manual versions should be carefully considered to ensure
backwards-compatibility.
An overlay that simply injects a `<div>` inside old versions is
relatively safe, since the document being injected into doesn't need
to know about the injection. However, if a document depends rigidly on
the presence of data created by the overlay mechanism, a problem could
occur if the overlay changes to not produce that data in the future.
Therefore we can be careful on both sides:
- overlays should, ideally, only produce more data monotonically as
time goes on; e.g., an overlay should only add fields to injected
JavaScript values and avoid changing the contract of existing
fields.
- documents should, ideally, fail gracefully if injected data they
expect to exist is missing
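As a concrete, hypothetical illustration of both guidelines, an overlay might generate `static/metadata.js` like this (only ever adding fields over time), while consuming code supplies defaults for anything missing. The field names here are assumptions for the sake of the example, not the actual contract.

```python
import json

def metadata_js(version: str, latest: str, stable: str) -> str:
    """Render a hypothetical static/metadata.js payload. To stay
    backward compatible, future revisions should only ADD fields
    here, never rename or repurpose existing ones."""
    data = {
        "version": version,
        "isLatest": version == latest,
        "isStable": version == stable,
    }
    return "window.versionMetadata = " + json.dumps(data) + ";"

def read_flag(data: dict, field: str) -> bool:
    # A consuming document fails gracefully: a missing field is
    # treated as False rather than raising an error.
    return bool(data.get(field, False))
```

A version that was deployed before a new flag existed simply sees it as `False` instead of breaking.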
### Local Testing
To test `overlay.py` locally before pushing, do the following:
- Ensure the deployment branches exist locally.
- You'll probably want to do
```
git fetch
git checkout deploy
git reset --hard remotes/upstream/deploy
git checkout postdeploy
git reset --hard remotes/upstream/postdeploy
git checkout deploy-tutorials
git reset --hard remotes/upstream/deploy-tutorials
git checkout postdeploy-tutorials
git reset --hard remotes/upstream/postdeploy-tutorials
```
- From the `reference-manual` checkout directory, on branch `main`,
from a clean working directory (i.e. make sure to commit any changes
you've made) run
```shell
python3 -B deploy/overlay.py . deploy postdeploy --site-dir reference
python3 -B deploy/overlay.py . deploy-tutorials postdeploy-tutorials --site-dir tutorials
```
- Inspect whatever `postdeploy` results you're interested in, e.g.
```
git show postdeploy:4.25.0-rc2/Type-Classes/Basic-Classes/index.html
# Expect to see <meta name="robots" content="noindex">
```
```
git show postdeploy:latest/Type-Classes/Basic-Classes/index.html
# Expect to *not* see <meta name="robots" content="noindex">
```
# Supporting Nightlies
Everything described in this section has two goals:
1. We should always be able to produce a reference manual for a new
Lean release with a minimal delay. This means we need to discover
breaking changes and adapt to them on an ongoing basis.
2. Lean developers should always be able to document their changes to
the language.
## Keeping `nightly-testing` Updated
Keeping the `nightly-testing` branch up to date is key to supporting
both goals. It needs to be kept up to date in two ways: it needs to
support the latest Lean nightly, and it needs to include all changes
made on `main` so that they will also be adapted and ready to go for
releases.
### Maintaining `nightly-testing`
To the extent that it is feasible, the `nightly-testing` branch is
kept up to date with Lean nightlies. The process for doing so is
partially automated. The CI config file
[`update-nightly.yml`](.github/workflows/update-nightly.yml)
[runs](https://github.com/leanprover/reference-manual/actions/workflows/update-nightly.yml)
every six hours. If it detects a newer nightly than the one in
`nightly-testing`'s `lean-toolchain` file, then it attempts to change
the contents of `lean-toolchain` to the latest nightly and build the
HTML version of the manual. On success, it commits the result and
pushes it to `nightly-testing`, adding the tag
`nightly-testing-YYYY-MM-DD`.
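The detection step amounts to a date comparison on the toolchain string. A minimal sketch, assuming toolchains of the form `leanprover/lean4:nightly-YYYY-MM-DD`:

```python
from datetime import date

def nightly_date(toolchain: str) -> date:
    """Extract the date from a toolchain string such as
    'leanprover/lean4:nightly-2025-01-31' (format assumed)."""
    _, sep, stamp = toolchain.partition(":nightly-")
    if not sep:
        raise ValueError(f"not a nightly toolchain: {toolchain}")
    y, m, d = map(int, stamp.split("-"))
    return date(y, m, d)

def needs_update(current: str, latest: str) -> bool:
    # True when a newer nightly exists than the one in lean-toolchain.
    return nightly_date(latest) > nightly_date(current)
```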
When the automated process fails, a human-created PR to
`nightly-testing` is required.
1. Create a branch `bump-YYYY-MM-DD` off of the `nightly-testing`
branch
2. Update the `lean-toolchain` to the latest available
`leanprover/lean4:nightly-*` toolchain
3. Make any changes to the manual necessary to build the manual on top
of the new nightly. After these changes are added back to
`nightly-testing` via PR, the
[`tag-nightly-testing.yml`](.github/workflows/tag-nightly-testing.yml)
action
[runs](https://github.com/leanprover/reference-manual/actions/workflows/tag-nightly-testing.yml)
automatically and applies the `nightly-testing-YYYY-MM-DD` tag.
The
[`merge-main-nightly.yml`](.github/workflows/merge-main-nightly.yml)
workflow
[runs](https://github.com/leanprover/reference-manual/actions/workflows/merge-main-nightly.yml)
every six hours. It attempts to merge `main` into `nightly-testing`
and generate HTML. If this succeeds, then the result is pushed to
`nightly-testing`. If it fails, human intervention is required.
It's almost always desirable to maintain the invariant that
`nightly-testing` is using a more recent version of `lean` than
`main`, but new Lean releases can violate this property. In these
cases, it's helpful to update the toolchain for `nightly-testing`
_before_ trying to merge `main` into `nightly-testing`.
### Maintaining `nightly-with-manual` in Lean
The Lean repository contains a branch `nightly-with-manual` that
contains the most recent Lean nightly for which a reference manual
`nightly-testing` build succeeded. In particular, it should always
contain the Lean nightly indicated by this repository's most recent
`nightly-testing-YYYY-MM-DD` tag.
When a change is pushed to `nightly-testing` that modifies
`lean-toolchain`,
[`nightly-with-manual.yml`](.github/workflows/nightly-with-manual.yml)
[runs](https://github.com/leanprover/reference-manual/actions/workflows/nightly-with-manual.yml).
This workflow also runs twice per day on its own, and it is explicitly
invoked by `update-nightly.yml`. It finds the most recent
`nightly-testing-YYYY-MM-DD` tag in the `reference-manual` repository.
This tag also exists in the `lean4-nightly` repository, and the commit
it corresponds to also exists in the `lean4` repository. The workflow
resets the `lean4` repository's `nightly-with-manual` branch to this
commit.
This branch is important because it is used to provide feedback in
Lean PRs as to whether the manual succeeds in building their PR, or
whether documentation adaptation will also be necessary.
### PR Feedback in Lean
For every pull request, Lean's CI creates or updates a branch in this
repository to serve as the basis for adaptations. For PR #XXXXX, the
branch is named `lean-pr-testing-XXXXX`. This branch is based off the
most recent nightly. In it, the toolchain is set to the release that's
created for each Lean PR.
These branches are used to report status upstream to Lean. When this
repository's [`ci.yml`](.github/workflows/ci.yml) runs on them, the
status is reported upstream by
[`lean-pr-testing-comments.sh`](scripts/lean-pr-testing-comments.sh).
On each push to `nightly-testing`,
[`discover-lean-pr-testing.yml`](.github/workflows/discover-lean-pr-testing.yml)
runs. When it discovers that a Lean PR has been merged, it
automatically merges the PR's corresponding adaptation branch into
`nightly-testing`. |
reference-manual/CONTRIBUTING.md | # External Contribution Guidelines
Thank you for helping out with the Lean reference manual!
In the interest of getting as much documentation as possible written
quickly, while still maintaining a consistent voice and style and
keeping the technical quality high, all contributions will be
carefully reviewed. However, because review can be very time
consuming, we may decline to review some contributions. This means
that slow-to-review PRs may just be closed. Nobody wants this to
happen, so please get in touch to discuss your plans to contribute
ahead of time so we can agree on suitable parameters.
## Issues
Issues are a great way to communicate your priorities and needs for
documentation, and they are an important input into our planning and
prioritization process as we write the Lean reference manual. Please
upvote issues that are important to you. Pithy technical examples as
comments to documentation requests are also an incredibly useful
contribution.
## Small Fixes
Pull requests that fix typos and small mistakes are welcome. Please
don't group too many into one PR, as that makes them difficult to review.
## Substantial Content
Please remember to get in touch ahead of time to plan a larger
contribution. In general, text included in the reference manual should
live up to the following:
- Empirical claims about Lean should be tested, either via examples or
hidden test cases.
- Examples should be clearly marked as such, separating the
description of the system from the illustrative examples.
- Technical terms should be introduced using the `deftech` role and
referred to using the `tech` role.
- Write in US English, deferring to the Chicago Manual of Style 18
(CMS) when in doubt. Exceptions to this style may be added and
documented.
- One sentence per line, to make diffs easier to follow.
### Style
Automated tooling is not yet capable of implementing these rules
perfectly, so pull requests that bring text into compliance with this
guide are very welcome. If complying with style guidelines makes the
text more difficult to understand, prioritize the understandability of
the text.
#### Typographical Unicode
In English-language text, use the appropriate Unicode characters for
left and right quotation marks (both single and double) and em dashes.
#### Headings
Headings should be set in title case, rather than just capitalizing
the first word. This is defined in CMS rule 8.160, but a good first
approximation is to capitalize the first and last words, plus all
words other than the following:
- prepositions less than five letters when not used as adverbs or
adjectives
- "a", "an", "the", "to" (infinitive marker), "and", "but", "for",
"or", "nor"
- conventionally lower-case components of names, like "de" or "van"
The [Title Case Converter](https://titlecaseconverter.com/) is useful
if in doubt. Remember to select "Chicago" from the list of styles.
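This first approximation can be expressed as a short function. A sketch only: real CMS capitalization depends on how a word is used (e.g. prepositions acting as adverbs), which a fixed word list cannot capture, and the word list below is illustrative rather than exhaustive.

```python
SMALL_WORDS = {
    "a", "an", "the", "to", "and", "but", "for", "or", "nor",
    # prepositions under five letters (a sample, not exhaustive)
    "as", "at", "by", "in", "into", "of", "off", "on", "onto",
    "out", "over", "per", "up", "via", "with",
}

def approx_title_case(title: str) -> str:
    """Capitalize the first and last words plus every word not in
    SMALL_WORDS; real CMS rule 8.160 also considers part of speech."""
    words = title.split()
    out = []
    for i, w in enumerate(words):
        if i in (0, len(words) - 1) or w.lower() not in SMALL_WORDS:
            out.append(w[:1].upper() + w[1:])
        else:
            out.append(w.lower())
    return " ".join(out)
```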
#### Lists
Numbered or bulleted lists should be introduced by a
grammatically-complete sentence that is terminated with a colon,
and should follow one of two options:
- All list items contain one or more complete sentences that start
with a capital letter and are punctuated accordingly.
- All list items contain noun phrases or sentence fragments that begin
with a lower-case letter and do not end with punctuation.
That is to say, lists may consist of:
1. complete sentences, punctuated accordingly
2. non-sentences, punctuated accordingly
In Verso, the list and the sentence with the colon should be grouped
with the `paragraph` directive.
If necessary for emphasis, a sentence that contains a list may be
broken up into a vertical list element (cf Chicago rule 6.142). In
this case, punctuate the list items as though they were inline in the
sentence, without using a colon at the start. When using this style in
this document, remember to
- use the `paragraph` directive so the list is typeset together with
its sentence,
- punctuate with semicolons if the list items themselves contain
commas, and
- remember the trailing "and" and period in the penultimate and final
items.
## Markup
The reference manual is written in Verso's manual genre. In addition
to what Verso provides, there are a number of additional roles, code
block styles, and directives:
### Roles
Please use the following roles where they make sense:
- `` {lean}`TERM` `` - `TERM` is a Lean term, to be elaborated as such
and included in the rendered document with appropriate
highlighting. The optional named argument `type` specifies an
expected type, e.g. `` {lean (type := "Nat")}`.succ .zero` ``
- `` {name}`X` `` - `X` is a constant in the Lean environment. The
optional positional argument can be used to override name
resolution; if it is provided, then the positional argument is used
to resolve the name but the contents of the directive are rendered.
`` {name Lean.Elab.Command.CommandElabM}`CommandElabM` `` renders as
`CommandElabM` with the metadata from the full name.
- `` {keywordOf KIND}`KW` `` - `KW` is an atom from the syntax kind
`KIND`.
- `` {keyword}`KW` `` - `KW` is an atom from an unspecified syntax
kind.
- `` {tactic}`TAC` `` - `TAC` is a tactic name
- `` {option}`OPT` `` - `OPT` is the name of an option
- `{TODO}[...]` specifies a task to be rendered in draft versions of
the manual
### Code Blocks
- `lean` specifies that the code block contains Lean commands. The
named arguments are:
- `name` - names the code block for later reference in
`leanOutput`
- `keep` - whether to keep or discard changes made to the
environment (default: `true`)
- `error` - the code is expected to contain an error (default:
`false`)
- `leanTerm` specifies that the code block contains a Lean term. The
named arguments are:
- `name` - names the code block for later reference in
`leanOutput`
- `keep` - whether to keep or discard changes made to the
environment (default: `true`)
- `error` - the code is expected to contain an error (default:
`false`)
- `leanOutput NAME` specifies that the code block contains an output
from a prior `lean` block. The optional named argument `severity`
restricts the output to information, warning, or error output.
- `signature` specifies that the code block contains the signature of
an existing constant.
- `syntaxError NAME` specifies that the code block contains invalid
Lean syntax, and saves the message under `NAME` for `leanOutput`.
The optional named argument `category` specifies the syntactic
category (default: `term`).
### Directives
- `:::TODO` specifies a task to be rendered in draft versions of the
manual
- `:::example NAME` indicates an example. `NAME` is a string literal
that contains valid Verso inline markup. Unless the named argument
`keep` is `true`, changes made to the Lean environment in the
example are discarded. Within an `example`, `lean` blocks are
elaborated before paragraphs, so inline `lean` roles can refer to
names defined later in the example.
- `:::planned N` describes content that is not yet written, tracked at
issue `N` in this repository
- `:::syntax` describes the syntax of a Lean construct, using a custom
DSL based on Lean's quasiquotation mechanism. This allows the Lean
parser to validate the description, while at the same time
decoupling the specifics of the implementation from the structure of
the documentation.
## CI
The CI requires that various checks pass.
One of them is that the text must live up to a number of rules written
with Vale. The style implementation is still quite incomplete; just
because your prose passes the linter doesn't mean it will necessarily
be accepted!
To run the check, first install Vale. The next step is to preprocess
the generated HTML to remove features that Vale can't cope with.
Finally, Vale itself can be run.
To preprocess the HTML, use the script
`.vale/scripts/rewrite_html.py`. It requires BeautifulSoup, so here
are the overall steps to get it working the first time:
```
$ cd .vale/scripts
$ python3 -m venv venv
$ . ./venv/bin/activate # or the appropriate script for your shell, e.g. activate.fish
$ pip install beautifulsoup4
```
After that, just run
```
$ . .vale/scripts/venv/bin/activate
```
to set up the Python environment.
The next step is to run this on Verso's output. If it's in
`_out/html-multi`, do this via:
```
$ cd _out
$ python ../.vale/scripts/rewrite_html.py html-multi html-vale
```
Now, run `vale`:
```
$ vale html-vale
```
### Deployments from PRs
To enable contributions from external forks while allowing HTML
previews, the CI does the following:
1. `ci.yml` builds the HTML for the pull request and saves it to
artifact storage
2. `label-pr.yml` is triggered when `ci.yml` completes. It (re)labels
the PR with `HTML available` to indicate that the artifact was
built.
3. Whenever the label is added, `pr-deploy.yml` runs _in the context
of `main`_ with access to secrets. It can deploy the previews.
The last two steps run the CI code from `main`, not the config from
the PR. |
reference-manual/Manual.lean | import VersoManual
import Manual.Intro
import Manual.Elaboration
import Manual.Types
import Manual.SourceFiles
import Manual.Attributes
import Manual.Defs
import Manual.Classes
import Manual.Axioms
import Manual.Terms
import Manual.ErrorExplanations
import Manual.Tactics
import Manual.Simp
import Manual.Grind
import Manual.BasicTypes
import Manual.Iterators
import Manual.BasicProps
import Manual.NotationsMacros
import Manual.IO
import Manual.Interaction
import Manual.Monads
import Manual.BuildTools
import Manual.Releases
import Manual.Namespaces
import Manual.Runtime
import Manual.SupportedPlatforms
import Manual.VCGen
import Manual.Coercions
import Manual.ValidatingProofs
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
set_option pp.rawOnError true
set_option maxRecDepth 1024
#doc (Manual) "The Lean Language Reference" =>
%%%
tag := "lean-language-reference"
shortContextTitle := "Lean Reference"
%%%
This is the _Lean Language Reference_.
It is intended to be a comprehensive, precise description of Lean: a reference work in which Lean users can look up detailed information, rather than a tutorial intended for new users.
For other documentation, please refer to the [Lean documentation overview](https://lean-lang.org/documentation/).
This manual covers Lean version {versionString}[].
Lean is an *interactive theorem prover* based on dependent type theory, designed for use both in cutting-edge mathematics and in software verification.
Lean's core type theory is expressive enough to capture very complicated mathematical objects, but simple enough to admit independent implementations, reducing the risk of bugs that affect soundness.
The core type theory is implemented in a minimal {tech}[kernel] that does nothing other than check proof terms.
This core theory and kernel are supported by advanced automation, realized in {ref "tactics"}[an expressive tactic language].
Each tactic produces a term in the core type theory that is checked by the kernel, so bugs in tactics do not threaten the soundness of Lean as a whole.
Along with many other parts of Lean, the tactic language is user-extensible, so it can be built up to meet the needs of a given formalization project.
Tactics are written in Lean itself, and can be used immediately upon definition; rebuilding the prover or loading external modules is not required.
Lean is also a pure *functional programming language*, with features such as a run-time system based on reference counting that can efficiently work with packed array structures, multi-threading, and monadic {name}`IO`.
As befits a programming language, Lean is primarily implemented in itself, including the language server, build tool, {tech (key := "Lean elaborator") -normalize}[elaborator], and tactic system.
This very book is written in [Verso](https://github.com/leanprover/verso), a documentation authoring tool written in Lean.
Familiarity with Lean's programming features is valuable even for users whose primary interest is in writing proofs, because Lean programs are used to implement new tactics and proof automation.
Thus, this reference manual does not draw a barrier between the two aspects, but rather describes them together so they can shed light on one another.
{include 0 Manual.Intro}
{include 0 Manual.Elaboration}
{include 0 Manual.Interaction}
{include 0 Manual.Types}
{include 0 Manual.SourceFiles}
{include 0 Manual.Namespaces}
{include 0 Manual.Defs}
{include 0 Manual.Axioms}
{include 0 Manual.Attributes}
{include 0 Manual.Classes}
{include 0 Manual.Coercions}
{include 0 Manual.Runtime}
{include 0 Manual.Terms}
{include 0 Manual.Tactics}
{include 0 Manual.Simp}
{include 0 Manual.Grind}
{include 0 Manual.VCGen}
{include 0 Manual.Monads}
{include 0 Manual.BasicProps}
{include 0 Manual.BasicTypes}
{include 0 Manual.IO}
# Dynamic Typing
%%%
draft := true
%%%
{docstring TypeName}
{docstring Dynamic}
{docstring Dynamic.mk +allowMissing}
{docstring Dynamic.get?}
{include 0 Manual.Iterators}
# Standard Library
%%%
tag := "standard-library"
draft := true
%%%
:::planned 109
Overview of the standard library, including types from the prelude and those that require imports.
:::
{include 0 Manual.NotationsMacros}
{include 0 Manual.BuildTools}
{include 0 Manual.ValidatingProofs}
{include 0 Manual.ErrorExplanations}
{include 0 Manual.Releases}
{include 0 Manual.SupportedPlatforms}
# Index
%%%
number := false
file := some "the-index"
%%%
{theIndex}
# Progress
%%%
number := false
draft := true
file := some "the-index"
%%%
::::::draft
:::progress
```namespace
ByteArray
ByteArray.Iterator
ByteSlice
List
Int
IntCast
Empty
PEmpty
Function
Ord
Ordering
Functor
Applicative
Monad
Pure
Bind
Seq
SeqLeft
SeqRight
MonadState
MonadStateOf
StateT
StateM
MonadReader
MonadReaderOf
ReaderT
ReaderM
MonadExcept
MonadExceptOf
ExceptT
Except
MonadFunctor
MonadFunctorT
MonadControl
MonadControlT
MonadLift
MonadLiftT
OptionT
StateRefT'
StateCpsT
ExceptCpsT
LawfulFunctor
LawfulApplicative
LawfulMonad
LawfulBEq
ReflBEq
EquivBEq
LawfulHashable
Id
Thunk
ForM
ForIn
ForInStep
ForIn'
EStateM
EStateM.Result
EStateM.Backtrackable
String
Substring
String.Slice
String.Slice.Pos
String.Pattern
String.Pos.Raw
String.Pos
String.ValidPos
String.Iterator
Char
Nat
Lean.Elab.Tactic
Array
Subarray
IO
IO.FS
System
System.FilePath
IO.Process
IO.FS.Stream
ST
IO.Error
IO.FS.Stream.Buffer
IO.FS.Handle
IO.Process.SpawnArgs
IO.Process.Output
IO.Process.Child
IO.Process.StdioConfig
IO.Process.Stdio
IO.Ref
ST.Ref
IO.FS.Metadata
IO.FS.DirEntry
EIO
BaseIO
IO.FileRight
IO.AccessRight
IO.FS.Stream
Task
Task.Priority
IO.Promise
Std.Mutex
Std.Channel
Std.Channel.Sync
Std.CloseableChannel
Std.Condvar
Std.Format
Unit
PUnit
Bool
Decidable
System.Platform
PLift
ULift
Subtype
Option
List
List.IsSuffix
List.IsPrefix
List.IsInfix
List.Perm
List.Pairwise
List.Nodup
List.Lex
USize
UInt8
UInt16
UInt32
UInt64
ISize
Int8
Int16
Int32
Int64
Fin
Option
List
Prod
PProd
MProd
Sum
PSum
Sigma
Subtype
Repr
Thunk
_root_
BitVec
Float
Float32
Empty
Quotient
Quot
Setoid
Squash
Subsingleton
WellFoundedRelation
Equivalence
HasEquiv
Lake
Lake.RecBuildM
Lake.FetchM
Lake.ScriptM
MonadEval
True
False
And
Or
Not
Iff
Exists
Eq
HEq
Max
Min
Std.Do
Std.Do.PredTrans
Std.Do.SVal
Std.HashMap
Std.ExtHashMap
Std.DHashMap
Std.ExtDHashMap
Std.HashSet
Std.ExtHashSet
Std.TreeMap
Std.DTreeMap
Std.TreeSet
Std.Iterators
Std.Iterators.Iter
Std.Iterators.Iter.Equiv
Std.Iterators.Iter.TerminationMeasures
Std.Iterators.IterM
Std.Iterators.IterM.Equiv
Std.Iterators.IterM.TerminationMeasures
Std.Iterators.Iterator
Std.Iterators.IteratorAccess
Std.Iterators.IteratorLoop
Std.Iterators.IteratorLoopPartial
Std.Iterators.Finite
Std.Iterators.Productive
Std.Iterators.PostconditionT
Std.Iterators.HetT
Std.PRange
Std.PRange.UpwardEnumerable
Std.Rco
Std.Rcc
Std.Rci
Std.Roo
Std.Roc
Std.Roi
Std.Rio
Std.Ric
Std.Rii
```
```exceptions
Std.HashMap.all
Std.HashMap.equiv_emptyWithCapacity_iff_isEmpty.match_1_1
Std.HashMap.noConfusionType.withCtor
Std.HashMap.noConfusionType.withCtorType
Std.HashMap.«term_~m_»
```
```exceptions
Std.DHashMap.equiv_emptyWithCapacity_iff_isEmpty.match_1_1
Std.DHashMap.insertMany_ind.match_1_1
Std.DHashMap.isSetoid
Std.DHashMap.«term_~m_»
```
```exceptions
Std.ExtHashMap.noConfusionType.withCtor
Std.ExtHashMap.noConfusionType.withCtorType
```
```exceptions
Bool.toLBool
Bool.atLeastTwo
Bool.«term_^^_»
```
```exceptions
Decidable.or_not_self
```
```exceptions
Sum.repr
```
```exceptions
String.revFindAux String.extract.go₂ String.substrEq.loop String.casesOn
String.offsetOfPosAux String.extract.go₁ String.mapAux String.firstDiffPos.loop String.utf8SetAux String.revPosOfAux String.replace.loop
String.rec String.recOn String.posOfAux String.splitAux String.foldrAux
String.splitOnAux String.intercalate.go String.anyAux String.findAux
String.utf8GetAux? String.foldlAux String.utf8GetAux
String.utf8PrevAux String.noConfusionType String.noConfusion
String.utf8ByteSize.go String.validateUTF8.loop
String.crlfToLf.go
String.fromUTF8.loop
String.one_le_csize
String.mangle
```
```exceptions
String.codepointPosToUtf16Pos
String.codepointPosToUtf16PosFrom
String.codepointPosToUtf8PosFrom
String.fromExpr?
String.reduceBinPred
String.reduceBoolPred
String.toFileMap
String.utf16Length
String.utf16PosToCodepointPos
String.utf16PosToCodepointPosFrom
```
```exceptions
Substring.commonPrefix.loop
Substring.commonSuffix.loop
Substring.splitOn.loop
Substring.takeRightWhileAux
Substring.takeWhileAux
```
```exceptions
String.Pos.Raw.ctorIdx
String.Pos.Raw.extract.go₁
String.Pos.Raw.extract.go₂
String.Pos.Raw.mk.noConfusion
String.Pos.Raw.utf8GetAux
String.Pos.Raw.utf8GetAux?
String.Pos.Raw.utf8PrevAux
String.Pos.Raw.utf8SetAux
```
```exceptions
String.Slice.ctorIdx
String.Slice.hash
String.Slice.instDecidableEqPos.decEq
String.Slice.instInhabitedByteIterator.default
String.Slice.instInhabitedPosIterator.default
String.Slice.instInhabitedRevPosIterator.default
String.Slice.instInhabitedRevSplitIterator.default
String.Slice.instInhabitedSplitInclusiveIterator.default
String.Slice.instInhabitedSplitIterator.default
String.Slice.lines.lineMap
String.Slice.mk.noConfusion
```
```exceptions
String.Slice.Pos.ctorIdx
String.Slice.Pos.mk.noConfusion
String.Slice.Pos.prevAux
String.Slice.Pos.prevAux.go
```
```exceptions
String.ValidPos.ctorIdx
String.ValidPos.mk.noConfusion
```
```exceptions
Char.ofNatAux
Char.quoteCore
Char.quoteCore.smallCharToHex
Char.notLTTrans
Char.notLTTotal
Char.notLTAntisymm
Char.repr
```
```exceptions
Char.fromExpr?
Char.reduceBinPred
Char.reduceBoolPred
Char.reduceUnary
```
```exceptions
BitVec.fromExpr?
BitVec.negOverflow
BitVec.reduceBin
BitVec.reduceBinPred
BitVec.reduceBoolPred
BitVec.reduceExtend
BitVec.reduceGetBit
BitVec.reduceShift
BitVec.reduceShiftShift
BitVec.reduceShiftWithBitVecLit
BitVec.reduceUnary
```
```exceptions
ByteArray.ctorIdx
ByteArray.findFinIdx?.loop
ByteArray.findIdx?.loop
ByteArray.foldlM.loop
ByteArray.foldlMUnsafe
ByteArray.foldlMUnsafe.fold
ByteArray.forIn.loop
ByteArray.forInUnsafe
ByteArray.forInUnsafe.loop
ByteArray.hash
ByteArray.instBEq.beq
ByteArray.instInhabitedIterator.default
ByteArray.mk.noConfusion
ByteArray.mkIterator
ByteArray.toList.loop
ByteArray.utf8Decode?.go
ByteArray.utf8DecodeChar?.assemble₁
ByteArray.utf8DecodeChar?.assemble₂
ByteArray.utf8DecodeChar?.assemble₂Unchecked
ByteArray.utf8DecodeChar?.assemble₃
ByteArray.utf8DecodeChar?.assemble₃Unchecked
ByteArray.utf8DecodeChar?.assemble₄
ByteArray.utf8DecodeChar?.assemble₄Unchecked
ByteArray.utf8DecodeChar?.isInvalidContinuationByte
ByteArray.utf8DecodeChar?.parseFirstByte
```
```exceptions
Quot.indep
Quot.lcInv
```
```exceptions
String.Pos.isValid.go
```
```exceptions
String.sluggify
```
```exceptions
Ordering.ofNat
Ordering.toCtorIdx
```
```exceptions
Ord.arrayOrd
```
```exceptions
Nat.applyEqLemma
Nat.applySimprocConst
Nat.div.go
Nat.fromExpr?
Nat.imax
Nat.lt_wfRel
Nat.modCore.go
Nat.reduceBin
Nat.reduceBinPred
Nat.reduceBoolPred
Nat.reduceLTLE
Nat.reduceNatEqExpr
Nat.reduceUnary
Nat.toDigitsCore
Nat.toLevel
```
```exceptions
Nat.anyM.loop
Nat.nextPowerOfTwo.go
Nat.foldRevM.loop
Nat.foldM.loop
Nat.foldTR.loop
Nat.recAux
Nat.allTR.loop
Nat.allM.loop
Nat.anyTR.loop
Nat.anyM.loop
Nat.toSuperDigitsAux
Nat.casesAuxOn
Nat.forM.loop
Nat.repeatTR.loop
Nat.forRevM.loop
Nat.toSubDigitsAux
```
```exceptions
Nat.one_pos
Nat.not_lt_of_lt
Nat.sub_lt_self
Nat.lt_or_gt
Nat.pow_le_pow_left
Nat.not_lt_of_gt
Nat.le_or_le
Nat.le_or_ge
Nat.pred_lt'
Nat.pow_le_pow_right
Nat.lt_iff_le_and_not_ge
Nat.mul_pred_right
Nat.mul_pred_left
Nat.prod_dvd_and_dvd_of_dvd_prod
Nat.lt_iff_le_and_not_ge
Nat.mul_pred_right
```
```exceptions
Nat.binductionOn
Nat.le.rec
Nat.le.recOn
Nat.le.casesOn
Nat.le.below
Nat.le.below.step
Nat.le.below.rec
Nat.le.below.recOn
Nat.le.below.refl
Nat.le.below.casesOn
```
```exceptions
EStateM.dummySave
EStateM.dummyRestore
```
```exceptions
BitVec.rotateLeftAux
BitVec.rotateRightAux
BitVec.unexpandBitVecOfNat
BitVec.unexpandBitVecOfNatLt
```
```exceptions
Id.hasBind
```
```exceptions
Array.get?_size
Array.forIn'.loop
Array.mapM.map
Array.findIdx?.loop
Array.get_extract_loop_lt
Array.foldrM_eq_foldrM_data
Array.get?_push
Array.appendList_data
Array.insertAt.loop
Array.reverse.loop
Array.foldrM_eq_reverse_foldlM_data
Array.isPrefixOfAux
Array.takeWhile.go
Array.size_eq_length_data
Array.qpartition.loop
Array.insertionSort.swapLoop
Array.foldl_data_eq_bind
Array.foldl_toList_eq_bind
Array.foldrMUnsafe
Array.get_swap_left
Array.get_extract_loop_ge_aux
Array.data_swap
Array.get_extract_loop_lt_aux
Array.get?_swap
Array.get_swap'
Array.mapM_eq_mapM_data
Array.anyM.loop
Array.getElem_eq_data_getElem
Array.get_swap_right
Array.get_extract_loop_ge
Array.foldrM.fold
Array.foldlM.loop
Array.take.loop
Array.mapMUnsafe
Array.binSearchAux
Array.eq_push_pop_back_of_size_ne_zero
Array.get?_push_eq
Array.append_data
Array.indexOfAux
Array.reverse_toList
Array.ofFn.go
Array.get?_eq_data_get?
Array.filterMap_data
Array.empty_data
Array.foldrMUnsafe.fold
Array.toListImpl
Array.filter_data
Array.get_swap_of_ne
Array.get_append_right
Array.getElem?_eq_toList_getElem?
Array.foldl_eq_foldl_data
Array.sequenceMap.loop
Array.toList_eq
Array.findSomeRevM?.find
Array.data_range
Array.forIn'Unsafe.loop
Array.foldlM_eq_foldlM_data
Array.getElem_eq_toList_getElem
Array.getElem_mem_data
Array.get_extract
Array.extract.loop
Array.foldlMUnsafe.fold
Array.data_set
Array.forIn'Unsafe
Array.mapMUnsafe.map
Array.mapM'.go
Array.pop_data
Array.appendCore.loop
Array.get?_len_le
Array.back_push
Array.all_def
Array.get_push_lt
Array.foldl_data_eq_map
Array.get?_eq_toList_get?
Array.isEqvAux
Array.getElem?_mem
Array.getElem_fin_eq_toList_get
Array.getElem?_eq_data_get?
Array.foldr_eq_foldr_data
Array.data_length
Array.get_push
Array.push_data
Array.toArray_data
Array.get_append_left
Array.insertionSort.traverse
Array.getElem_fin_eq_data_get
Array.toListLitAux
Array.map_data
Array.get?_push_lt
Array.get_extract_aux
Array.foldlMUnsafe
Array.qsort.sort
Array.any_def
Array.anyMUnsafe
Array.data_toArray
Array.mem_data
Array.get_swap
Array.mapFinIdxM.map
Array.data_pop
Array.anyMUnsafe.any
Array.mkArray0
Array.mkArray1
Array.mkArray2
Array.mkArray3
Array.mkArray4
Array.mkArray5
Array.mkArray6
Array.mkArray7
Array.mkArray8
Array.mkEmpty
Array.get_push_eq
Array.appendCore
Array.modifyMUnsafe
Array.mapSepElems
Array.mapSepElemsM
Array.toArrayLit
Array.getSepElems
Array.zipWithAux
Array.casesOn
Array.rec
Array.recOn
Array.noConfusion
Array.noConfusionType
Array.tacticArray_get_dec
Array.back_eq_back?
Array.mkArray_data
Array.getLit
Array.zipWithAll.go
Array.shrink.loop
Array.idxOfAux
Array.firstM.go
Array.get!Internal
Array.getInternal
Array.findFinIdx?.loop
Array.insertIdx.loop
```
```exceptions
Array.qpartition
```
```exceptions
Option.toLOption
```
```exceptions
Subarray.forInUnsafe.loop
Subarray.as
Subarray.casesOn
Subarray.recOn
Subarray.rec
Subarray.noConfusion
Subarray.noConfusionType
Subarray.forInUnsafe
Subarray.findSomeRevM?.find
```
```exceptions
Lean.Elab.Tactic.evalUnfold.go
Lean.Elab.Tactic.dsimpLocation.go
Lean.Elab.Tactic.withCollectingNewGoalsFrom.go
Lean.Elab.Tactic.evalRunTac.unsafe_impl_1
Lean.Elab.Tactic.evalRunTac.unsafe_1
Lean.Elab.Tactic.evalTactic.handleEx
Lean.Elab.Tactic.simpLocation.go
Lean.Elab.Tactic.instToSnapshotTreeTacticParsedSnapshot.go
Lean.Elab.Tactic.dsimpLocation'.go
Lean.Elab.Tactic.withRWRulesSeq.go
Lean.Elab.Tactic.runTermElab.go
Lean.Elab.Tactic.getMainGoal.loop
Lean.Elab.Tactic.elabSimpArgs.isSimproc?
Lean.Elab.Tactic.elabSimpArgs.resolveSimpIdTheorem?
Lean.Elab.Tactic.tactic.dbg_cache
Lean.Elab.Tactic.tactic.simp.trace
Lean.Elab.Tactic.liftMetaTacticAux
```
```exceptions
Int.add_of_le
Int.fromExpr?
Int.reduceBin
Int.reduceBinIntNatOp
Int.reduceBinPred
Int.reduceBoolPred
Int.reduceNatCore
Int.reduceUnary
```
```exceptions
Int8.fromExpr
Int16.fromExpr
Int32.fromExpr
Int64.fromExpr
ISize.fromExpr
UInt8.fromExpr
UInt16.fromExpr
UInt32.fromExpr
UInt64.fromExpr
USize.fromExpr
```
```exceptions
System.Platform.getIsEmscripten
System.Platform.getIsOSX
System.Platform.getIsWindows
System.Platform.getNumBits
System.Platform.getTarget
```
```exceptions
Prod.repr
Prod.rprod
Prod.lex
Prod.Lex
```
```exceptions
Std.Iterators.Iter.instForIn'
Std.Iterators.Iter.step_filter
Std.Iterators.Iter.val_step_filter
```
```exceptions
Std.Iterators.IterM.instForIn'
Std.Iterators.IterM.toListRev.go
```
```exceptions
Lean.Elab.Tactic.elabSetOption
Lean.Elab.Tactic.evalSeq1
Lean.Elab.Tactic.evalSimp
Lean.Elab.Tactic.evalSpecialize
Lean.Elab.Tactic.evalTacticAt
Lean.Elab.Tactic.evalSimpAll
Lean.Elab.Tactic.evalIntro.introStep
Lean.Elab.Tactic.evalDone
Lean.Elab.Tactic.evalRevert
Lean.Elab.Tactic.evalRight
Lean.Elab.Tactic.evalUnfold
Lean.Elab.Tactic.evalConstructor
Lean.Elab.Tactic.evalTacticCDot
Lean.Elab.Tactic.evalTraceMessage
Lean.Elab.Tactic.evalClear
Lean.Elab.Tactic.evalIntroMatch
Lean.Elab.Tactic.evalInduction
Lean.Elab.Tactic.evalApply
Lean.Elab.Tactic.evalUnknown
Lean.Elab.Tactic.evalRefl
Lean.Elab.Tactic.evalTactic.throwExs
Lean.Elab.Tactic.evalDSimp
Lean.Elab.Tactic.evalSepTactics.goEven
Lean.Elab.Tactic.evalAllGoals
Lean.Elab.Tactic.evalSplit
Lean.Elab.Tactic.evalInjection
Lean.Elab.Tactic.evalParen
Lean.Elab.Tactic.evalFocus
Lean.Elab.Tactic.evalLeft
Lean.Elab.Tactic.evalRotateRight
Lean.Elab.Tactic.evalWithReducible
Lean.Elab.Tactic.evalTactic.expandEval
Lean.Elab.Tactic.evalTraceState
Lean.Elab.Tactic.evalCase'
Lean.Elab.Tactic.evalSepTactics.goOdd
Lean.Elab.Tactic.evalWithReducibleAndInstances
Lean.Elab.Tactic.evalTacticSeqBracketed
Lean.Elab.Tactic.evalTactic.eval
Lean.Elab.Tactic.evalAlt
Lean.Elab.Tactic.evalGeneralize
Lean.Elab.Tactic.evalRewriteSeq
Lean.Elab.Tactic.evalSleep
Lean.Elab.Tactic.evalDSimpTrace
Lean.Elab.Tactic.evalReplace
Lean.Elab.Tactic.evalOpen
Lean.Elab.Tactic.evalAssumption
Lean.Elab.Tactic.evalSepTactics
Lean.Elab.Tactic.evalWithUnfoldingAll
Lean.Elab.Tactic.evalMatch
Lean.Elab.Tactic.evalRepeat1'
Lean.Elab.Tactic.evalFailIfSuccess
Lean.Elab.Tactic.evalRename
Lean.Elab.Tactic.evalFirst.loop
Lean.Elab.Tactic.evalSimpTrace
Lean.Elab.Tactic.evalFirst
Lean.Elab.Tactic.evalSubstVars
Lean.Elab.Tactic.evalRunTac
Lean.Elab.Tactic.evalSymmSaturate
Lean.Elab.Tactic.evalWithAnnotateState
Lean.Elab.Tactic.evalTacticAtRaw
Lean.Elab.Tactic.evalDbgTrace
Lean.Elab.Tactic.evalSubst
Lean.Elab.Tactic.evalNativeDecide
Lean.Elab.Tactic.evalCalc
Lean.Elab.Tactic.evalCase
Lean.Elab.Tactic.evalRepeat'
Lean.Elab.Tactic.evalRefine
Lean.Elab.Tactic.evalContradiction
Lean.Elab.Tactic.evalSymm
Lean.Elab.Tactic.evalInjections
Lean.Elab.Tactic.evalExact
Lean.Elab.Tactic.evalRotateLeft
Lean.Elab.Tactic.evalFail
Lean.Elab.Tactic.evalTactic
Lean.Elab.Tactic.evalSimpAllTrace
Lean.Elab.Tactic.evalRefine'
Lean.Elab.Tactic.evalChoice
Lean.Elab.Tactic.evalInduction.checkTargets
Lean.Elab.Tactic.evalIntro
Lean.Elab.Tactic.evalAnyGoals
Lean.Elab.Tactic.evalCases
Lean.Elab.Tactic.evalDelta
Lean.Elab.Tactic.evalDecide
Lean.Elab.Tactic.evalChoiceAux
Lean.Elab.Tactic.evalTacticSeq
Lean.Elab.Tactic.evalCheckpoint
Lean.Elab.Tactic.evalRenameInaccessibles
Lean.Elab.Tactic.evalIntros
Lean.Elab.Tactic.evalApplyLikeTactic
Lean.Elab.Tactic.evalSkip
Lean.Elab.Tactic.evalCalc.throwFailed
Lean.Elab.Tactic.evalSubstEqs
Lean.Elab.Tactic.evalTacticSeq1Indented
```
```exceptions
IO.Error.fopenErrorToString
IO.Error.mkAlreadyExists
IO.Error.mkAlreadyExistsFile
IO.Error.mkEofError
IO.Error.mkHardwareFault
IO.Error.mkIllegalOperation
IO.Error.mkInappropriateType
IO.Error.mkInappropriateTypeFile
IO.Error.mkInterrupted
IO.Error.mkInvalidArgument
IO.Error.mkInvalidArgumentFile
IO.Error.mkNoFileOrDirectory
IO.Error.mkNoSuchThing
IO.Error.mkNoSuchThingFile
IO.Error.mkOtherError
IO.Error.mkPermissionDenied
IO.Error.mkPermissionDeniedFile
IO.Error.mkProtocolError
IO.Error.mkResourceBusy
IO.Error.mkResourceExhausted
IO.Error.mkResourceExhaustedFile
IO.Error.mkResourceVanished
IO.Error.mkTimeExpired
IO.Error.mkUnsatisfiedConstraints
IO.Error.mkUnsupportedOperation
IO.Error.otherErrorToString
```
```exceptions
IO.stdGenRef
IO.throwServerError
IO.initializing
```
```exceptions
IO.Process.StdioConfig.noConfusionType
IO.Process.StdioConfig.recOn
IO.Process.StdioConfig.rec
IO.Process.StdioConfig.noConfusion
IO.Process.StdioConfig.casesOn
```
```exceptions
IO.FS.lines.read
```
```exceptions
IO.FS.Handle.readBinToEndInto.loop
```
```exceptions
IO.FS.Stream.readLspNotificationAs
IO.FS.Stream.readNotificationAs
IO.FS.Stream.readResponseAs
IO.FS.Stream.writeLspNotification
IO.FS.Stream.readJson
IO.FS.Stream.readLspMessage
IO.FS.Stream.Buffer.casesOn
IO.FS.Stream.Buffer.noConfusion
IO.FS.Stream.Buffer.recOn
IO.FS.Stream.Buffer.noConfusionType
IO.FS.Stream.Buffer.rec
IO.FS.Stream.rec
IO.FS.Stream.writeLspRequest
IO.FS.Stream.writeResponseError
IO.FS.Stream.noConfusionType
IO.FS.Stream.writeLspResponseErrorWithData
IO.FS.Stream.readLspResponseAs
IO.FS.Stream.noConfusion
IO.FS.Stream.writeLspResponse
IO.FS.Stream.readLspRequestAs
IO.FS.Stream.casesOn
IO.FS.Stream.readMessage
IO.FS.Stream.writeLspMessage
IO.FS.Stream.writeResponseErrorWithData
IO.FS.Stream.recOn
IO.FS.Stream.writeRequest
IO.FS.Stream.writeJson
IO.FS.Stream.writeLspResponseError
IO.FS.Stream.chainLeft
IO.FS.Stream.readRequestAs
IO.FS.Stream.withPrefix
IO.FS.Stream.writeResponse
IO.FS.Stream.chainRight
IO.FS.Stream.writeNotification
IO.FS.Stream.writeMessage
```
```exceptions
System.FilePath.recOn
System.FilePath.noConfusion
System.FilePath.casesOn
System.FilePath.walkDir.go
System.FilePath.rec
System.FilePath.noConfusionType
```
```exceptions
List.tacticSizeOf_list_dec
Lean.Parser.Tactic.tacticRefine_lift_
Lean.Parser.Tactic.tacticRefine_lift'_
Array.tacticArray_mem_dec
Lean.Parser.Tactic.normCast0
tacticClean_wf
Lean.Parser.Tactic.nestedTactic
Lean.Parser.Tactic.unknown
Lean.Parser.Tactic.paren
tacticDecreasing_trivial_pre_omega
SubVerso.Compat.HashMap.Compat_simp_arith_all
Lean.Parser.Tactic.bvDecideMacro
Lean.Parser.Tactic.bvNormalizeMacro
Lean.Parser.Tactic.bvTraceMacro
Lean.Parser.Tactic.attemptAll
Lean.Parser.Tactic.tryResult
```
```exceptions
List.hasDecEq
List.zipIdxLE
List.toPArray'
List.maxNatAbs
List.minNatAbs
List.nonzeroMinimum
List.toAssocList'
List.countP.go
List.eraseDups.loop
List.eraseIdxTR.go
List.erasePTR.go
List.eraseReps.loop
List.eraseTR.go
List.filterAuxM
List.filterMapM.loop
List.filterMapTR.go
List.filterTR.loop
List.findFinIdx?.go
List.findIdx.go
List.findIdx?.go
List.flatMapM.loop
List.flatMapTR.go
List.forIn'.loop
List.insertIdxTR.go
List.intercalateTR.go
List.iotaTR.go
List.lengthTRAux
List.mapFinIdx.go
List.mapFinIdxM.go
List.mapIdx.go
List.mapIdxM.go
List.mapM.loop
List.mapTR.loop
List.modifyTR.go
List.partition.loop
List.partitionM.go
List.partitionMap.go
List.range'TR.go
List.range.loop
List.replaceTR.go
List.replicateTR.loop
List.reverseAux
List.setTR.go
List.span.loop
List.splitAt.go
List.splitBy.loop
List.takeTR.go
List.takeWhileTR.go
List.toArrayAux
List.toByteArray.loop
List.toFloatArray.loop
List.toPArray'.loop
List.toSMap
List.toSSet
List.zipWithTR.go
```
```exceptions
List.format
List.repr
List.repr'
List.«term_<+:_»
List.«term_<+_»
List.«term_<:+:_»
List.«term_<:+_»
List.«term_~_»
```
```exceptions
List.Perm.below
List.Lex.below
List.Pairwise.below
```
```exceptions
IO.Process.Stdio.toCtorIdx
```
```exceptions
BaseIO.mapTasks.go
```
```exceptions
Task.asServerTask
Task.mapList.go
```
```exceptions
Fin.foldrM.loop
Fin.induction.go
Fin.foldr.loop
Fin.foldlM.loop
Fin.foldl.loop
```
```exceptions
Fin.fromExpr?
Fin.reduceBin
Fin.reduceBinPred
Fin.reduceBoolPred
Fin.reduceNatOp
Fin.reduceOp
```
:::
:::::: |
reference-manual/lakefile.lean | import Lean.Elab.Import
import Lake
open Lake DSL
open System (FilePath)
require verso from git "https://github.com/leanprover/verso.git"@"v4.28.0"
require versowebcomponents from git "https://github.com/leanprover/verso-web-components"@"main"
package "verso-manual" where
-- building the C code costs much more than the optimizations save
moreLeancArgs := #["-O0"]
-- work around clang emitting invalid linker optimization hints that lld rejects
moreLinkArgs :=
if System.Platform.isOSX then
#["-Wl,-ignore_optimization_hints"]
else #[]
leanOptions := #[
⟨`weak.verso.code.warnLineLength, .ofNat 72⟩,
⟨`weak.linter.typography.dashes, true⟩,
⟨`weak.linter.typography.quotes, true⟩
]
-- Extended examples used in the grind chapter
@[default_target]
lean_lib IndexMap where
srcDir := "extended-examples"
@[default_target]
lean_lib IndexMapGrind where
srcDir := "extended-examples"
@[default_target]
lean_lib Manual where
def figureDir : FilePath := "figures"
def figureOutDir : FilePath := "static/figures"
def ensureDir (dir : System.FilePath) : IO Unit := do
if !(← dir.pathExists) then
IO.FS.createDirAll dir
if !(← dir.isDir) then
throw (↑ s!"Not a directory: {dir}")
/-- Ensure that the subverso-extract-mod executable is available -/
target subversoExtractMod : FilePath := do
let some pkg := ← findPackageByName? `subverso
| failure
let some exe := pkg.findLeanExe? `«subverso-extract-mod»
| failure
exe.fetch
target figures : Array FilePath := do
let files := (← figureDir.readDir).filterMap fun f => do
let some "tex" := f.path.extension | throw ()
let some fn := f.path.fileName | throw ()
-- Ignore backup files
if ".#".isPrefixOf fn then throw ()
return f.path
let files := files.qsort (toString · < toString ·)
let srcs := Job.collectArray (← liftM <| files.mapM inputTextFile)
let traceFile := figureDir.join "lake.trace"
srcs.mapM fun srcInfo => do
buildUnlessUpToDate traceFile (← getTrace) traceFile do
for src in srcInfo do
let some f := src.fileStem
| continue
proc { cmd := "lualatex", args := #[f], cwd := some figureDir} (quiet := true)
proc { cmd := "lualatex", args := #[f], cwd := some figureDir} (quiet := true)
proc { cmd := "lualatex", args := #[f], cwd := some figureDir} (quiet := true)
proc { cmd := "lualatex", args := #[f], cwd := some figureDir} (quiet := true)
proc { cmd := "pdftocairo", args := #["-svg", s!"{f}.pdf", s!"{f}.svg"], cwd := some figureDir} (quiet := true)
ensureDir "static"
ensureDir figureOutDir
for fmt in ["pdf", "svg"] do
let built := s!"{f}.{fmt}"
IO.println s!"Generated: {figureOutDir.join built}"
IO.FS.withFile (figureDir.join built) .read fun h =>
IO.FS.withFile (figureOutDir.join built) .write fun h' => do
let mut buf ← h.read 1024
while !buf.isEmpty do
h'.write buf
buf ← h.read 1024
pure srcInfo
@[default_target]
lean_exe "generate-manual" where
needs := #[`@/figures, `@/subversoExtractMod]
root := `Main
@[default_target]
lean_lib Tutorial where
@[default_target]
lean_exe "generate-tutorials" where
root := `TutorialMain
def lakeExe (prog : String) (args : Array String) : IO Unit := do
IO.println s!"Running {prog} with args {args}"
-- Using spawn and wait here causes the process to inherit stdio streams from Lake, so output is immediately visible
let code ← IO.Process.Child.wait <| (← IO.Process.spawn { cmd := "lake", args := #["--quiet", "exe", prog] ++ args })
if code ≠ 0 then
let code' := code.toUInt8
let code := if code' ≠ 0 then code' else 1
IO.eprintln s!"Failed to run {prog} with args {args}"
IO.Process.exit code |
reference-manual/test.txt | This is a test |
reference-manual/Main.lean | import Manual
import Manual.Meta
import VersoManual
import Manual.ExtractExamples
open Verso.Genre.Manual
open Verso.Genre.Manual.InlineLean
open Verso.Output.Html in
def plausible := {{
<script defer="defer" data-domain="lean-lang.org" src="https://plausible.io/js/script.outbound-links.js"></script>
}}
open Verso.Output.Html in
def staticJs := {{
<script src="static/metadata.js"></script>
<script src="static/print.js"></script>
{{ if false then -- Flip this bit to test live links locally
{{ <script>"window.metadata = {'latest': true};"</script> }}
else
.empty }}
}}
open Verso.Output.Html in
def staticCss := {{
<link rel="stylesheet" href="static/colors.css" />
<link rel="stylesheet" href="static/theme.css" />
<link rel="stylesheet" href="static/print.css" />
<link rel="stylesheet" href="static/fonts/source-serif/source-serif-text.css" />
<link rel="stylesheet" href="static/fonts/source-code-pro/source-code-pro.css" />
<link rel="stylesheet" href="static/fonts/source-sans/source-sans-3.css" />
<link rel="stylesheet" href="static/fonts/noto-sans-mono/noto-sans-mono.css" />
}}
def main :=
manualMain (%doc Manual) (config := config) (extraSteps := [extractExamples])
where
config := {
extraFiles := [("static", "static")],
extraHead := #[plausible, staticJs, staticCss],
emitTeX := false,
emitHtmlSingle := .no, -- for proofreading
logo := some "/static/lean_logo.svg",
sourceLink := some "https://github.com/leanprover/reference-manual",
issueLink := some "https://github.com/leanprover/reference-manual/issues",
} |
reference-manual/Tutorial/VCGen.lean | import VersoManual
import VersoTutorial
import Manual.Meta
import Manual.Papers
import Std.Tactic.Do
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
open Verso.Code.External (lit)
open Verso.Genre.Tutorial
set_option pp.rawOnError true
set_option verso.docstring.allowMissing true
set_option linter.unusedVariables false
set_option linter.typography.quotes true
set_option linter.typography.dashes true
set_option mvcgen.warning false
#doc (Tutorial) "Verifying Imperative Programs Using `mvcgen`" =>
%%%
tag := "mvcgen-tactic-tutorial"
slug := "mvcgen"
summary := inlines!"A demonstration of how to use Lean's verification condition generator to conveniently and compositionally prove properties of monadic programs."
exampleStyle := .inlineLean `MVCGenTutorial
%%%
This section is a tutorial that introduces the most important concepts of {tactic}`mvcgen` top-down.
Recall that you need to import {module}`Std.Tactic.Do` and open {namespace}`Std.Do` to run these examples:
:::codeOnly
```imports
import Std.Data.HashMap
import Std.Data.HashSet
```
:::
```imports
import Std.Tactic.Do
```
:::codeOnly
```lean
set_option mvcgen.warning false
```
:::
```lean
open Std.Do
```
# Preconditions and Postconditions
One style in which program specifications can be written is to provide a {tech (remote := "reference")}_precondition_ $`P`, which the caller of a program $`\mathit{prog}` is expected to ensure, and a {tech (remote := "reference")}_postcondition_ $`Q`, which $`\mathit{prog}` itself is expected to ensure.
The program $`\mathit{prog}` satisfies the specification if running it when the precondition $`P` holds always results in the postcondition $`Q` holding.
In general, many different preconditions might suffice for a program to ensure the postcondition.
After all, new preconditions can be generated by replacing a precondition $`P_1` with $`P_1 \wedge P_2`.
The {tech (remote := "reference")}_weakest precondition_ $`\textbf{wp}⟦\mathit{prog}⟧(Q)` of a program $`\mathit{prog}` and postcondition $`Q` is a precondition for which $`\mathit{prog}` ensures the postcondition $`Q` and is implied by all other such preconditions.
One way to prove something about the result of a program is to find the weakest precondition that guarantees the desired result, and then to show that this weakest precondition is simply true.
This means that the postcondition holds no matter what.
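As a minimal sketch of this pattern before any loops are involved (the function `double` below is a hypothetical example, not part of the book's example project), a fact about a trivial {name}`Id` program can be reduced to a weakest-precondition goal and then discharged:
```lean
-- Hypothetical sketch: `Id.of_wp_run_eq` reduces a statement about
-- `Id.run` to a weakest-precondition goal, which `mvcgen` then
-- turns into verification conditions. The closing tactics may need
-- adjustment; this mirrors the proofs in the sections that follow.
def double (n : Nat) : Nat := Id.run do
  let mut out := n
  out := out + n
  return out

example : double 3 = 6 := by
  generalize h : double 3 = x
  apply Id.of_wp_run_eq h
  mvcgen
  all_goals mleave
  all_goals grind
```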
# Loops and Invariants
:::leanFirst
As a first example of {tactic}`mvcgen`, the function {name}`mySum` computes the sum of an array using {ref "let-mut" (remote := "reference")}[local mutable state] and a {keywordOf Lean.Parser.Term.doFor}`for` loop:
```lean
def mySum (l : Array Nat) : Nat := Id.run do
let mut out := 0
for i in l do
out := out + i
return out
```
:::
If {name}`mySum` is correct, then it is equal to {name}`Array.sum`.
In {name}`mySum`, the use of {keywordOf Lean.Parser.Term.do}`do` is an internal implementation detail—the function's signature makes no mention of any monad.
Thus, the proof first manipulates the goal into a form that is amenable to the use of {tactic}`mvcgen`, using the lemma {name}`Id.of_wp_run_eq`.
This lemma states that facts about the result of running a computation in the {name}`Id` monad that terminates normally (`Id` computations never throw exceptions) can be proved by showing that the {tech (remote := "reference")}[weakest precondition] that ensures the desired result holds trivially.
Next, the proof uses {tactic}`mvcgen` to replace the formulation in terms of weakest preconditions with a set of {tech (remote := "reference")}[verification conditions].
While {tactic}`mvcgen` is mostly automatic, it does require an invariant for the loop.
A {tech (remote := "reference")}_loop invariant_ is a statement that is both assumed and guaranteed by the body of the loop; if it is true when the loop begins, then it will be true when the loop terminates.
```lean
theorem mySum_correct (l : Array Nat) : mySum l = l.sum := by
-- Focus on the part of the program with the `do` block (`Id.run ...`)
generalize h : mySum l = x
apply Id.of_wp_run_eq h
-- Break down into verification conditions
mvcgen
-- Specify the invariant which should hold throughout the loop
-- * `out` refers to the current value of the `let mut` variable
-- * `xs` is a `List.Cursor`, which is a data structure representing
-- a list that is split into `xs.prefix` and `xs.suffix`.
-- It tracks how far into the loop we have gotten.
-- Our invariant is that `out` holds the sum of the prefix.
-- The notation ⌜p⌝ embeds a `p : Prop` into the assertion language.
case inv1 => exact ⇓⟨xs, out⟩ => ⌜xs.prefix.sum = out⌝
-- After specifying the invariant, we can further simplify our goals
-- by "leaving the proof mode". `mleave` is just
-- `simp only [...] at *` with a stable simp subset.
all_goals mleave
-- Prove that our invariant is preserved at each step of the loop
case vc1 ih =>
-- The goal here mentions `pref`, which binds the `prefix` field of
-- the cursor passed to the invariant. Unpacking the
-- (dependently-typed) cursor makes it easier for `grind`.
grind
-- Prove that the invariant is true at the start
case vc2 =>
grind
-- Prove that the invariant at the end of the loop implies the
-- property we wanted
case vc3 h =>
grind
```
:::paragraph
Note that the case labels are actually unique prefixes of the full case labels.
Whenever referring to cases, only this prefix should be used; the suffix is merely a hint to the user of where that particular {tech (remote := "reference")}[VC] came from.
For example:
* `vc1.step` conveys that this {tech (remote := "reference")}[VC] proves the inductive step for the loop
* `vc2.a.pre` is meant to prove that the hypotheses of a goal imply the precondition of a specification (of {name}`forIn`).
* `vc3.a.post.success` is meant to prove that the postcondition of a specification (of {name}`forIn`) implies the desired property.
:::
:::paragraph
After specifying the loop invariant, the proof can be shortened to just {keyword}`all_goals mleave; grind` (where {tactic}`mleave` leaves the stateful proof mode, cleaning up the proof state).
```lean
theorem mySum_correct_short (l : Array Nat) : mySum l = l.sum := by
generalize h : mySum l = x
apply Id.of_wp_run_eq h
mvcgen
case inv1 => exact ⇓⟨xs, out⟩ => ⌜xs.prefix.sum = out⌝
all_goals mleave; grind
```
This pattern is so common that {tactic}`mvcgen` comes with special syntax for it:
```lean
theorem mySum_correct_shorter (l : Array Nat) : mySum l = l.sum := by
generalize h : mySum l = x
apply Id.of_wp_run_eq h
mvcgen
invariants
· ⇓⟨xs, out⟩ => ⌜xs.prefix.sum = out⌝
with grind
```
The {multiCode}[{keyword}`mvcgen invariants `{lit}`...`{keyword}` with `{lit}`...`] form is an abbreviation for the
tactic sequence {multiCode}[{keyword}`mvcgen; case`{lit}` inv1 => ...`{keyword}`; all_goals mleave; grind`]
used above. It is the form that we will use from now on.
:::
:::paragraph
It is helpful to compare the proof of {name}`mySum_correct_shorter` to a traditional correctness proof:
```lean
theorem mySum_correct_vanilla (l : Array Nat) : mySum l = l.sum := by
-- Turn the array into a list
cases l with | mk l =>
-- Unfold `mySum` and rewrite `forIn` to `foldl`
simp [mySum]
-- Generalize the inductive hypothesis
suffices h : ∀ out, List.foldl (· + ·) out l = out + l.sum by simp [h]
-- Grind away
induction l with grind
```
:::
:::paragraph
This proof is about as succinct as the proof in {name}`mySum_correct_shorter` that uses {tactic}`mvcgen`.
However, the traditional approach relies on important properties of the program:
* The {keywordOf Lean.Parser.Term.doFor}`for` loop does not {keywordOf Lean.Parser.Term.doBreak}`break` or {keywordOf Lean.Parser.Term.doReturn}`return` early. Otherwise, the {name}`forIn` could not be rewritten to a {name Array.foldl}`foldl`.
* The loop body {lean (type := "Nat → Nat → Nat")}`(· + ·)` is small enough to be repeated in the proof.
* The loop body does not carry out any effects in the underlying monad (that is, the only effects are those introduced by {keywordOf Lean.Parser.Term.do}`do`-notation).
The {name}`Id` monad has no effects, so all of its computations are pure.
While {name}`forIn` could still be rewritten to a {name Array.foldlM}`foldlM`, reasoning about the monadic loop body can be tough for {tactic}`grind`.
In the following sections, we will go through several examples to learn about {tactic}`mvcgen` and its support library, and also see where traditional proofs become difficult.
This is usually caused by:
* {keywordOf Lean.Parser.Term.do}`do` blocks using control flow constructs such as {keywordOf Lean.Parser.Term.doFor}`for` loops, {keywordOf Lean.Parser.Term.doBreak}`break`s and early {keywordOf Lean.Parser.Term.doReturn}`return`.
* The use of effects in non-{name}`Id` monads, which affects the implicit monadic context (state, exceptions) in ways that need to be reflected in loop invariants.
{tactic}`mvcgen` scales to these challenges with reasonable effort.
:::
# Control Flow
:::leanFirst
Let us consider another example that combines {keywordOf Lean.Parser.Term.doFor}`for` loops with an early return.
{name}`List.Nodup` is a predicate that asserts that a given list does not contain any duplicates.
The function {name}`nodup` below decides this predicate:
```lean
def nodup (l : List Int) : Bool := Id.run do
let mut seen : Std.HashSet Int := ∅
for x in l do
if x ∈ seen then
return false
seen := seen.insert x
return true
```
:::
:::paragraph
This function is correct if it returns {name}`true` for every list that satisfies {name}`List.Nodup` and {name}`false` for every list that does not.
Just as it was in {name}`mySum`, the use of {keywordOf Lean.Parser.Term.do}`do`-notation and the {name}`Id` monad is an internal implementation detail of {name}`nodup`.
Thus, the proof begins by using {name}`Id.of_wp_run_eq` to make the proof state amenable to {tactic}`mvcgen`:
```lean
theorem nodup_correct (l : List Int) : nodup l ↔ l.Nodup := by
generalize h : nodup l = r
apply Id.of_wp_run_eq h
mvcgen
invariants
· Invariant.withEarlyReturn
(onReturn := fun ret seen => ⌜ret = false ∧ ¬l.Nodup⌝)
(onContinue := fun xs seen =>
⌜(∀ x, x ∈ seen ↔ x ∈ xs.prefix) ∧ xs.prefix.Nodup⌝)
with grind
```
:::
:::paragraph
```lean -show
section
variable {l : List Int} {ret : Bool} {seen : Std.HashSet Int} {xs : l.Cursor}
axiom onReturn : Bool → Std.HashSet Int → SPred PostShape.pure.args
axiom onContinue : l.Cursor → Std.HashSet Int → SPred PostShape.pure.args
axiom onExcept : ExceptConds PostShape.pure
```
The proof has the same succinct structure as for the initial {name}`mySum` example, because we again offload all proofs to {tactic}`grind` and its existing automation around {name}`List.Nodup`.
Therefore, the only difference is in the {tech (remote := "reference")}[loop invariant].
Since our loop has an {ref "early-return" (remote := "reference")}[early return], we construct the invariant using the helper function {lean}`Invariant.withEarlyReturn`.
This function allows us to specify the invariant in three parts:
* {lean}`onReturn ret seen` holds after the loop was left through an early return with value {lean}`ret`.
In case of {name}`nodup`, the only value that is ever returned is {name}`false`, in which case {name}`nodup` has decided there _is_ a duplicate in the list.
* {lean}`onContinue xs seen` is the regular induction step that proves the invariant is preserved in each loop iteration.
The iteration state is captured by the cursor {lean}`xs`.
The given example asserts that the set {lean}`seen` contains all the elements of previous loop iterations and asserts that there were no duplicates so far.
* {lean}`onExcept` must hold when the loop throws an exception.
There are no exceptions in {lean}`Id`, so we leave it unspecified to use the default.
(Exceptions will be discussed at a later point.)
```lean -show
end
```
:::
:::paragraph
Note that the form `mvcgen invariants?` will suggest an initial invariant using {name}`Invariant.withEarlyReturn`, so there is no need to memorize the exact syntax for specifying invariants:
```lean (name := invariants?)
example (l : List Int) : nodup l ↔ l.Nodup := by
generalize h : nodup l = r
apply Id.of_wp_run_eq h
mvcgen invariants? <;> sorry
```
The tactic suggests a starting invariant.
This starting point will not allow the proof to succeed—after all, if the invariant can be inferred by the system, then there's no need to make the user specify it—but it does provide a reminder of the correct syntax to use for assertions in the current monad:
```leanOutput invariants?
Try this:
[apply] invariants
·
Invariant.withEarlyReturn (onReturn := fun r letMuts => ⌜l.Nodup ∧ (r = true ↔ l.Nodup)⌝) (onContinue :=
fun xs letMuts => ⌜xs.prefix = [] ∧ letMuts = ∅ ∨ xs.suffix = [] ∧ l.Nodup⌝)
```
:::
:::paragraph
Now consider the following direct (and excessively golfed) proof without {tactic}`mvcgen`:
```lean
theorem nodup_correct_directly (l : List Int) : nodup l ↔ l.Nodup := by
rw [nodup]
generalize hseen : (∅ : Std.HashSet Int) = seen
change ?lhs ↔ l.Nodup
suffices h : ?lhs ↔ l.Nodup ∧ ∀ x ∈ l, x ∉ seen by grind
clear hseen
induction l generalizing seen with grind [Id.run_pure, Id.run_bind]
```
:::
:::paragraph
Some observations:
* The proof is even shorter than the one with {tactic}`mvcgen`.
* The use of {tactic}`generalize` to generalize the accumulator relies on there being exactly one occurrence of {lean (type := "Std.HashSet Int")}`∅` to generalize. If that were not the case, we would have to copy parts of the program into the proof. This is a no-go for larger functions.
* {tactic}`grind` splits along the control flow of the function and reasons about {name}`Id`, given the right lemmas.
While this works for {name}`Id.run_pure` and {name}`Id.run_bind`, it would not work for {name}`Id.run_seq`, for example, because that lemma is not {tech (key := "E-matching") (remote := "reference")}[E-matchable].
If {tactic}`grind` were to fail, we would be forced to do all the control-flow splitting and monadic reasoning by hand until {tactic}`grind` could pick up again.
:::
The usual way to avoid replicating the control flow of a definition in a proof is to use the {tactic}`fun_cases` or {tactic}`fun_induction` tactics.
Unfortunately, {tactic}`fun_cases` does not help with control flow inside a {name}`forIn` application.
The {tactic}`mvcgen` tactic, on the other hand, ships with support for many {name}`forIn` implementations.
It can easily be extended (with {attrs}`@[spec]` annotations) to support custom {name}`forIn` implementations.
Furthermore, an {tactic}`mvcgen`-powered proof will never need to copy any part of the original program.
# Compositional Reasoning About Effectful Programs Using Hoare Triples
:::leanSection
```lean -show
variable (M : Type u → Type v) [Monad M] (α : Type u)
axiom M.run : M α → β → α
```
The previous examples reasoned about functions defined using {multiCode}[{lean}`Id.run`{lit}` `{keywordOf Lean.Parser.Term.do}`do`{lit}` <prog>`] to make use of local mutability and early return in {lit}`<prog>`.
However, real-world programs often use {keywordOf Lean.Parser.Term.do}`do` notation and monads {lean}`M` to hide away state and failure conditions as implicit “effects”.
In this use case, functions usually omit the {name}`M.run`.
Instead they have a monadic return type {lean}`M α` and compose well with other functions of that return type.
In other words, the monad is part of the function's _interface_, not merely its implementation.
:::
:::leanFirst
Here is an example involving a stateful function {name}`mkFresh` that returns auto-incremented counter values:
```lean
structure Supply where
counter : Nat
def mkFresh : StateM Supply Nat := do
let n ← (·.counter) <$> get
modify fun s => { s with counter := s.counter + 1 }
pure n
def mkFreshN (n : Nat) : StateM Supply (List Nat) := do
let mut acc := #[]
for _ in [:n] do
acc := acc.push (← mkFresh)
pure acc.toList
```
:::
::::leanFirst
:::leanSection
```lean -show
variable (n : Nat)
```
{lean}`mkFreshN n` returns {lean}`n` “fresh” numbers, modifying the internal {name}`Supply` state through {name}`mkFresh`.
Here, “fresh” refers to all previously generated numbers being distinct from the next generated number.
We can formulate and prove a correctness property {name}`mkFreshN_correct` in terms of {name}`List.Nodup`: the returned list of numbers should contain no duplicates.
In this proof, {name}`StateM.of_wp_run'_eq` serves the same role that {name}`Id.of_wp_run_eq` did in the preceding examples.
:::
```lean
theorem mkFreshN_correct (n : Nat) : ((mkFreshN n).run' s).Nodup := by
-- Focus on `(mkFreshN n).run' s`.
generalize h : (mkFreshN n).run' s = x
apply StateM.of_wp_run'_eq h
-- Show something about monadic program `mkFresh n`.
-- The `mkFreshN` and `mkFresh` arguments to `mvcgen` add to an
-- internal `simp` set and make `mvcgen` unfold these definitions.
mvcgen [mkFreshN, mkFresh]
invariants
-- Invariant: The counter is larger than any accumulated number,
-- and all accumulated numbers are distinct.
-- Note that the invariant may refer to the state through function
-- argument `state : Supply`. Since the next number to accumulate is
-- the counter, it is distinct from all accumulated numbers.
· ⇓⟨xs, acc⟩ state =>
⌜(∀ x ∈ acc, x < state.counter) ∧ acc.toList.Nodup⌝
with grind
```
::::
## Hoare Triples
::::::leanSection
```lean -show
universe u v
variable {m : Type u → Type v} {ps : PostShape.{u}} [Monad m] [WP m ps] {α σ ε : Type u} {P : Assertion ps} {Q : PostCond α ps} {prog : m α} {c : Nat}
```
A {tech (remote := "reference")}_Hoare triple_ consists of a precondition, a statement, and a postcondition; it asserts that if the precondition holds, then the postcondition holds after running the statement.
In Lean syntax, this is written {lean}`⦃ P ⦄ prog ⦃ Q ⦄`, where {lean}`P` is the precondition, {typed}`prog : m α` is the statement, and {lean}`Q` is the postcondition.
{lean}`P` and {lean}`Q` are written in an assertion language that is determined by the specific monad {lean}`m`.{margin}[In particular, the monad's instance of the type class {name}`WP` specifies the ways in which assertions may refer to the monad's state or the exceptions it may throw.]
:::leanSection
```lean -show
variable {stmt1 stmt2 : m PUnit} {ps : PostShape.{0}} {P : Assertion ps} {Q : PostCond Unit ps} {P' : Assertion ps} {Q' : PostCond Unit ps}
```
Specifications as Hoare triples are compositional because they allow statements to be sequenced.
Given {lean}`⦃P⦄ stmt1 ⦃Q⦄` and {lean}`⦃P'⦄ stmt2 ⦃Q'⦄`, if {lean}`Q` implies {lean}`P'` then {lean}`⦃P⦄ (do stmt1; stmt2) ⦃Q'⦄`.
Just as proofs about ordinary functions can rely on lemmas about the functions that they call, proofs about monadic programs can use lemmas that are specified in terms of Hoare triples.
:::
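The sequencing principle just described can be written down as a Lean statement. The following is only a sketch: the corresponding lemma in `Std.Do` may have a different name and form, and the proof is omitted here.
```lean
-- Hypothetical statement of the sequencing rule: if the success
-- postcondition of `stmt1` entails the precondition of `stmt2`,
-- the two triples compose. (Sketch only; proof omitted.)
example {m : Type → Type} {ps : PostShape} [Monad m] [WP m ps]
    {stmt1 stmt2 : m Unit} {P P' : Assertion ps}
    {Q Q' : PostCond Unit ps}
    (h1 : ⦃P⦄ stmt1 ⦃Q⦄) (h2 : ⦃P'⦄ stmt2 ⦃Q'⦄)
    (himp : ∀ r, Q.1 r ⊢ₛ P') :
    ⦃P⦄ (do stmt1; stmt2) ⦃Q'⦄ := sorry
```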
:::::paragraph
One suitable specification for {name}`mkFresh` as a Hoare triple is this translation of {name}`mkFreshN_correct`:
::::displayOnly
:::leanSection
```lean -show
variable {n : Nat}
```
```leanTerm
⦃⌜True⌝⦄ mkFreshN n ⦃⇓ r => ⌜r.Nodup⌝⦄
```
:::
::::
```lean -show
variable {p : Prop}
```
Corner brackets embed propositions into the monadic assertion language, so {lean}`⌜p⌝` is the assertion of the proposition {lean}`p`.
The precondition {lean}`⌜True⌝` asserts that {lean}`True` is true; this trivial precondition is used to state that the specification imposes no requirements on the state in which it is called.
The postcondition states that the result value is a list with no duplicate elements.
:::::
:::::paragraph
A specification for the single-step {name}`mkFresh` describes its effects on the monad's state:
::::displayOnly
:::leanSection
```lean -show
variable {n : Nat}
```
```leanTerm
∀ (c : Nat),
⦃fun state => ⌜state.counter = c⌝⦄
mkFresh
⦃⇓ r state => ⌜r = c ∧ c < state.counter⌝⦄
```
When working in a state monad, preconditions may be parameterized over the value of the state prior to running the code.
Here, the universally quantified {name}`Nat` is used to _relate_ the initial state to the final state; the precondition connects it to the initial state.
Similarly, the postcondition may also accept the final state as a parameter.
This Hoare triple states:
> If {lean}`c` refers to the {name}`Supply.counter` field of the {name}`Supply` prestate, then running {name}`mkFresh` returns {lean}`c` and modifies the {name}`Supply.counter` of the poststate to be larger than {lean}`c`.
Note that this specification is lossy: {name}`mkFresh` could increment its state by an arbitrary positive amount and still satisfy the specification.
This is good, because specifications may _abstract over_ uninteresting implementation details, ensuring resilient and small proofs.
:::
::::
:::::
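To see that the specification is indeed lossy, consider this hypothetical variant (not part of the original development) that advances the counter by two. It satisfies the very same pre- and postcondition, since {lean}`c < state.counter` does not pin down the size of the increment:
```lean
-- Hypothetical variant: increments the counter by 2, yet still
-- satisfies the same specification as `mkFresh`, because the
-- postcondition only requires `c < state.counter`.
def mkFresh2 : StateM Supply Nat := do
  let n ← (·.counter) <$> get
  modify fun s => { s with counter := s.counter + 2 }
  pure n
```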
:::paragraph
Hoare triples are defined in terms of a logic of stateful predicates plus a {tech (remote := "reference")}[weakest precondition] semantics {lean}`wp⟦prog⟧` that translates monadic programs into this logic.
A weakest precondition semantics is an interpretation of programs as mappings from postconditions to the weakest precondition that the program would require to ensure the postcondition; in this interpretation, programs are understood as {tech (key := "predicate transformer semantics") (remote := "reference")}_predicate transformers_.
The Hoare triple syntax is notation for {name}`Std.Do.Triple`:
```lean -keep
-- This is the definition of Std.Do.Triple:
def Triple [WP m ps] {α : Type u} (prog : m α)
(P : Assertion ps) (Q : PostCond α ps) : Prop :=
P ⊢ₛ wp⟦prog⟧ Q
```
:::
```lean -show
variable {σ : Type u}
```
:::paragraph
The {name}`WP` type class maps a monad {lean}`m` to its {name}`PostShape` {lean}`ps`, and this {name}`PostShape` governs the exact shape of the {name}`Std.Do.Triple`.
Many of the standard monad transformers such as {name}`StateT`, {name}`ReaderT` and {name}`ExceptT` come with a canonical {name}`WP` instance.
For example, {lean}`StateT σ` comes with a {name}`WP` instance that adds a {lean}`σ` argument to every {name}`Assertion`.
Stateful entailment `⊢ₛ` eta-expands through these additional {lean}`σ` arguments.
For {name}`StateM` programs, the following type is definitionally equivalent to {name}`Std.Do.Triple`:
```lean
def StateMTriple {α σ : Type u} (prog : StateM σ α)
(P : σ → ULift Prop) (Q : (α → σ → ULift Prop) × PUnit) : Prop :=
∀ s, (P s).down → let (a, s') := prog.run s; (Q.1 a s').down
```
```lean -show
example : @StateMTriple α σ = Std.Do.Triple (m := StateM σ) := rfl
```
:::
```lean -show
variable {p : Prop}
```
The common postcondition notation `⇓ r => ...` injects an assertion of type {lean}`α → Assertion ps` into
{lean}`PostCond α ps` (the `⇓` is meant to be parsed like `fun`); in the case of {name}`StateM`, by adjoining it with an empty tuple {name}`PUnit.unit`.
The shape of postconditions becomes more interesting once exceptions enter the picture.
The notation {lean}`⌜p⌝` embeds a pure hypothesis {lean}`p` into a stateful assertion.
Conversely, any stateful hypothesis {lean}`P` is called _pure_ if it is equivalent to {lean}`⌜p⌝`
for some {lean}`p`.
Pure, stateful hypotheses may be freely moved into the regular Lean context and back.
(This can be done manually with the {tactic}`mpure` tactic.)
::::::
## Composing Specifications
Nested unfolding of definitions as in {multiCode}[{tactic}`mvcgen`{lit}` [`{name}`mkFreshN`{lit}`, `{name}`mkFresh`{lit}`]`] is quite blunt but effective for small programs.
A more compositional way is to develop individual {tech (remote := "reference")}_specification lemmas_ for each monadic function.
A specification lemma is a Hoare triple that is automatically used during {tech (remote := "reference")}[verification condition] generation to obtain the pre- and postconditions of each statement in a {keywordOf Lean.Parser.Term.do}`do`-block.
When the system cannot automatically prove that the postcondition of one statement implies the precondition of the next, then this missing reasoning step becomes a verification condition.
:::paragraph
Specification lemmas can either be passed as arguments to {tactic}`mvcgen` or registered in a global (or {keyword}`scoped`, or {keyword}`local`) database of specifications using the {attrs}`@[spec]` attribute:
```lean
@[spec]
theorem mkFresh_spec (c : Nat) :
⦃fun state => ⌜state.counter = c⌝⦄
mkFresh
⦃⇓ r state => ⌜r = c ∧ c < state.counter⌝⦄ := by
-- Unfold `mkFresh` and blast away:
mvcgen [mkFresh] with grind
@[spec]
theorem mkFreshN_spec (n : Nat) :
⦃⌜True⌝⦄ mkFreshN n ⦃⇓ r => ⌜r.Nodup⌝⦄ := by
-- `mvcgen [mkFreshN, mkFresh_spec]` if `mkFresh_spec` were not
-- registered with `@[spec]`
mvcgen [mkFreshN]
invariants
-- As before:
· ⇓⟨xs, acc⟩ state =>
⌜(∀ x ∈ acc, x < state.counter) ∧ acc.toList.Nodup⌝
with grind
```
:::
:::paragraph
The original correctness theorem can now be proved using {tactic}`mvcgen` alone:
```lean
theorem mkFreshN_correct_compositional (n : Nat) :
((mkFreshN n).run' s).Nodup := by
generalize h : (mkFreshN n).run' s = x
apply StateM.of_wp_run'_eq h
mvcgen
```
The specification lemma {name}`mkFreshN_spec` is automatically used by {tactic}`mvcgen`.
:::
## An Advanced Note About Pure Preconditions and a Notion of Frame Rule
This subsection is a bit of a digression and can be skipped on first reading.
::::leanSection
:::codeOnly
```lean
axiom M : Type → Type
variable {x y : UInt8} [Monad M] [WP M .pure]
def addQ (x y : UInt8) : M UInt8 := pure (x + y)
local infix:1023 " +? " => addQ
```
```lean -show
axiom dots {α} : α
local notation "…" => dots
```
:::
Say the specification for some [`Aeneas`](https://github.com/AeneasVerif/aeneas)-inspired monadic addition function {typed}`x +? y : M UInt8` has the
requirement that the addition won't overflow, that is, `h : x.toNat + y.toNat < UInt8.size`.
Should this requirement be encoded as a regular Lean hypothesis of the specification (`add_spec_hyp`), or should it be encoded as a pure precondition of the Hoare triple, using `⌜·⌝` notation (`add_spec_pre`)?
:::displayOnly
```lean
theorem add_spec_hyp (x y : UInt8)
(h : x.toNat + y.toNat < UInt8.size) :
⦃⌜True⌝⦄ x +? y ⦃⇓ r => ⌜r.toNat = x.toNat + y.toNat⌝⦄ := …
theorem add_spec_pre (x y : UInt8) :
⦃⌜x.toNat + y.toNat < UInt8.size⌝⦄
x +? y
⦃⇓ r => ⌜r.toNat = x.toNat + y.toNat⌝⦄ := …
```
:::
::::
The first approach is advisable, although it should not make a difference in practice.
The VC generator will move pure hypotheses from the stateful context into the regular Lean context, so the second form turns effectively into the first form.
This is referred to as {deftech}_framing_ hypotheses (cf. the {tactic}`mpure` and {tactic}`mframe` tactics).
Hypotheses in the Lean context are part of the immutable {deftech}_frame_ of the stateful logic, because in contrast to stateful hypotheses they survive the rule of consequence.
# Monad Transformers and Lifting
Real-world programs often use monads that are built from multiple {tech (remote := "reference")}[monad transformers], with operations being frequently {ref "lifting-monads" (remote := "reference")}[lifted] from one monad to another.
Verification of these programs requires taking this into account.
We can tweak the previous example to demonstrate this.
:::codeOnly
```lean
namespace Transformers
```
```lean -show
variable {m : Type → Type} {α : Type} {ps : PostShape.{0}}
attribute [-instance] Lake.instMonadLiftTOfMonadLift_lake
```
:::
::::paragraph
:::leanFirst
Now, there is an application with two separate monads, both built using transformers:
```lean
abbrev CounterM := StateT Supply (ReaderM String)
abbrev AppM := StateT Bool CounterM
```
Instead of using {lean}`StateM Supply`, {name}`mkFresh` uses {lean}`CounterM`:
```lean
def mkFresh : CounterM Nat := do
let n ← (·.counter) <$> get
modify fun s => { s with counter := s.counter + 1 }
pure n
```
{name}`mkFreshN` is defined in terms of {name}`AppM`, which includes multiple states and a reader effect.
The definition of {name}`mkFreshN` lifts {name}`mkFresh` into {name}`AppM`:
```lean
def mkFreshN (n : Nat) : AppM (List Nat) := do
let mut acc := #[]
for _ in [:n] do
let n ← mkFresh
acc := acc.push n
return acc.toList
```
:::
::::
::::paragraph
Then the {tactic}`mvcgen`-based proof goes through unchanged:
```lean
@[spec]
theorem mkFresh_spec (c : Nat) :
⦃fun state => ⌜state.counter = c⌝⦄
mkFresh
⦃⇓ r state => ⌜r = c ∧ c < state.counter⌝⦄ := by
mvcgen [mkFresh] with grind
@[spec]
theorem mkFreshN_spec (n : Nat) :
⦃⌜True⌝⦄ mkFreshN n ⦃⇓ r => ⌜r.Nodup⌝⦄ := by
-- Unfold `mkFreshN`; the lifted `mkFresh` call is handled by `mkFresh_spec`
mvcgen [mkFreshN]
invariants
· ⇓⟨xs, acc⟩ _ state =>
⌜(∀ n ∈ acc, n < state.counter) ∧ acc.toList.Nodup⌝
with grind
```
::::
:::leanSection
```lean -show
universe u v
variable {m : Type u → Type v} {ps : PostShape.{u}} [WP m ps] {α : Type u} {prog : m α}
```
The {name}`WPMonad` type class asserts that {lean}`wp⟦prog⟧` distributes over the {name}`Monad` operations (“monad morphism”).
This proof does not look much different from the one in which only a single monad was involved.
Behind the scenes, however, it builds on a cascade of specifications for {name}`MonadLift` instances.
:::
:::codeOnly
```lean
end Transformers
```
:::
# Exceptions
::::leanSection
```lean -show
universe u v
variable {m : Type u → Type v} {ps : PostShape.{u}} [WP m ps] {P : Assertion ps} {α : Type u} {prog : m α} {Q' : α → Assertion ps}
```
If {keyword}`let mut` is the {keywordOf Lean.Parser.Term.do}`do`-equivalent of {name}`StateT`, then early {keywordOf Lean.Parser.Term.doReturn}`return` is the equivalent of {name}`ExceptT`.
We have seen how {tactic}`mvcgen` copes with {name}`StateT`; here we will look at the program logic's support for {name}`ExceptT`.
Exceptions are the reason why the type of postconditions {lean}`PostCond α ps` is not simply a single condition of type {lean}`α → Assertion ps` for the success case.
To see why, suppose the latter were the case, and suppose that program {lean}`prog` throws an exception in a prestate satisfying {lean}`P`.
Should we be able to prove {lean}`⦃P⦄ prog ⦃⇓ r => Q' r⦄`?
(Recall that `⇓` is grammatically similar to `fun`.)
There is no result `r`, so it is unclear what this proof means for {lean}`Q'`!
So there are two reasonable options, inspired by non-termination in traditional program logics:
: The {tech (remote := "reference")}_total correctness interpretation_
{lean}`⦃P⦄ prog ⦃⇓ r => Q' r⦄` asserts that, if {lean}`P` holds, then {lean}`prog` terminates _and_ {lean}`Q'` holds for the result.
: The {tech (remote := "reference")}_partial correctness interpretation_
{lean}`⦃P⦄ prog ⦃⇓ r => Q' r⦄` asserts that, if {lean}`P` holds and {lean}`prog` terminates, then {lean}`Q'` holds for the result.
The notation {lean}`⇓ r => Q' r` has the total interpretation, while {lean}`⇓? r => Q' r` has the partial interpretation.
In the running example, {lean}`⦃P⦄ prog ⦃⇓ r => Q' r⦄` is unprovable, but {lean}`⦃P⦄ prog ⦃⇓? r => Q' r⦄` is trivially provable.
However, the binary choice suggests that there is actually a _spectrum_ of correctness properties to express.
The notion of postconditions {name}`PostCond` in `Std.Do` supports this spectrum.
::::
:::codeOnly
```lean
namespace Exceptions
```
:::
For example, suppose that our {name}`Supply` of fresh numbers is bounded and we want to throw an exception if the supply is exhausted.
Then {name}`mkFresh` should throw an exception _only if_ the supply is indeed exhausted, as in this implementation:
```lean
structure Supply where
counter : Nat
limit : Nat
property : counter ≤ limit
def mkFresh : EStateM String Supply Nat := do
let supply ← get
if h : supply.counter = supply.limit then
throw s!"Supply exhausted: {supply.counter} = {supply.limit}"
else
let n := supply.counter
have := supply.property
set { supply with counter := n + 1, property := by grind }
pure n
```
The following correctness property expresses this:
```lean
@[spec]
theorem mkFresh_spec (c : Nat) :
⦃fun state => ⌜state.counter = c⌝⦄
mkFresh
⦃post⟨fun r state => ⌜r = c ∧ c < state.counter⌝,
fun _ state => ⌜c = state.counter ∧ c = state.limit⌝⟩⦄ := by
mvcgen [mkFresh] with grind
```
In this property, the postcondition has two branches: the first covers successful termination, and the second applies when an exception is thrown.
The monad's {name}`WP` instance determines both how many branches the postcondition may have and the number of parameters in each branch: each exception that might be triggered gives rise to an extra branch, and each state gives an extra parameter.
:::leanFirst
In this new monad, {name}`mkFreshN`'s implementation is unchanged, except for the type signature:
```lean
def mkFreshN (n : Nat) : EStateM String Supply (List Nat) := do
let mut acc := #[]
for _ in [:n] do
acc := acc.push (← mkFresh)
pure acc.toList
```
:::
:::paragraph
However, the specification lemma must account for both successful termination and exceptions being thrown, in both the postcondition and the loop invariant:
```lean
@[spec]
theorem mkFreshN_spec (n : Nat) :
⦃⌜True⌝⦄
mkFreshN n
⦃post⟨fun r => ⌜r.Nodup⌝,
fun _msg state => ⌜state.counter = state.limit⌝⟩⦄ := by
mvcgen [mkFreshN]
invariants
· post⟨fun ⟨xs, acc⟩ state =>
⌜(∀ n ∈ acc, n < state.counter) ∧ acc.toList.Nodup⌝,
fun _msg state => ⌜state.counter = state.limit⌝⟩
with grind
```
:::
:::paragraph
The final proof uses the specification lemmas and {tactic}`mvcgen`, just as before:
```lean
theorem mkFreshN_correct (n : Nat) :
match (mkFreshN n).run s with
| .ok l _ => l.Nodup
| .error _ s' => s'.counter = s'.limit := by
generalize h : (mkFreshN n).run s = x
apply EStateM.of_wp_run_eq h
mvcgen
```
:::
:::codeOnly
```lean
end Exceptions
```
:::
:::leanSection
```lean -show
universe u v
variable {m : Type u → Type v} {ps : PostShape.{u}} [WP m ps] {P : Assertion ps} {α σ ε : Type u} {prog : m α} {Q' : α → Assertion ps}
```
Just as any {lean}`StateT σ`-like monad transformer gives rise to a {lean}`PostShape.arg σ` layer in the {lean}`ps` that {name}`WP` maps into, any {lean}`ExceptT ε`-like layer gives rise to a {lean}`PostShape.except ε` layer.
Every {lean}`PostShape.arg σ` adds another `σ → ...` layer to the language of {lean}`Assertion`s.
Every {lean}`PostShape.except ε` leaves the {lean}`Assertion` language unchanged, but adds another exception
condition to the postcondition.
Hence the {name}`WP` instance for {lean}`EStateM ε σ` maps to the {name}`PostShape` {lean}`PostShape.except ε (.arg σ .pure)`, just
as for {lean}`ExceptT ε (StateM σ)`.
:::
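The layering described above can be spelled out on a small example. The following sketch assumes that {name}`Assertion` and {name}`Std.Do.SPred` reduce definitionally, as in the {name}`StateMTriple` example earlier:
```lean
-- For one exception layer and one state layer, assertions are
-- predicates over the single state component; the exception layer
-- contributes nothing to the assertion language itself.
example {ε σ : Type} :
    Assertion (.except ε (.arg σ .pure)) = (σ → ULift Prop) := rfl
```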
# Extending `mvcgen` With Support for Custom Monads
The {tactic}`mvcgen` framework is designed to be extensible.
None of the monads presented so far have in any way been hard-coded into {tactic}`mvcgen`.
Rather, {tactic}`mvcgen` relies on instances of the {name}`WP` and {name}`WPMonad` type class and user-provided specifications to generate {tech (remote := "reference")}[verification conditions].
:::leanSection
```lean -show
variable {m : Type u → Type v} [Monad m] {ps : PostShape.{u}}
```
The {name}`WP` instance defines the weakest precondition interpretation of a monad {lean}`m` into a predicate transformer {lean}`PredTrans ps`,
and the matching {name}`WPMonad` instance asserts that this translation distributes over the {name}`Monad` operations.
:::
:::::paragraph
::::leanFirst
Suppose one wants to use `mvcgen` to generate verification conditions for programs produced by [`Aeneas`](https://github.com/AeneasVerif/aeneas).
`Aeneas` translates Rust programs into Lean programs in the following {name}`Result` monad:
```lean
inductive Error where
| integerOverflow: Error
-- ... more error kinds ...
inductive Result (α : Type u) where
| ok (v: α): Result α
| fail (e: Error): Result α
| div
```
:::codeOnly
```lean
instance Result.instMonad : Monad Result where
pure x := .ok x
bind x f := match x with
| .ok v => f v
| .fail e => .fail e
| .div => .div
instance Result.instLawfulMonad : LawfulMonad Result := by
apply LawfulMonad.mk' <;> (simp only [Result.instMonad]; grind)
```
:::
::::
:::::
:::paragraph
There are both {inst}`Monad Result` and {inst}`LawfulMonad Result` instances.
Supporting this monad in {tactic}`mvcgen` is a matter of:
1. Adding {name}`WP` and {name}`WPMonad` instances for {name}`Result`
2. Registering specification lemmas for the translation of basic Rust primitives, such as addition
:::
::::paragraph
:::leanSection
```lean -show
universe u v
variable {m : Type u → Type v} {ps : PostShape.{u}} [WP m ps] {P : Assertion ps} {α σ ε : Type u} {prog : m α} {Q' : α → Assertion ps}
```
The {name}`WP` instance for {name}`Result` specifies a postcondition shape {lean (type := "PostShape.{0}")}`.except Error .pure` because there are no state-like effects, but there is a single exception of type {lean}`Error`.
The {name}`WP` instance translates programs in {lean}`Result α` to predicate transformers in {lean}`PredTrans ps α`,
that is, to functions in {lean}`PostCond α ps → Assertion ps` that map a postcondition to its weakest precondition.
The implementation of {name}`WP.wp` reuses the implementation for {lean}`Except Error` for two of its cases, and maps diverging programs to {lean}`False`.
The instance is named so that it can be more easily unfolded in proofs about it.
:::
```lean
instance Result.instWP : WP Result (.except Error .pure) where
wp
| .ok v => wp (pure v : Except Error _)
| .fail e => wp (throw e : Except Error _)
| .div => PredTrans.const ⌜False⌝
```
::::
:::paragraph
The implementation of {name}`WP.wp` must distribute over the basic monad operations; the {name}`WPMonad` instance establishes this:
```lean
instance : WPMonad Result (.except Error .pure) where
wp_pure := by
intros
ext Q
simp [wp, PredTrans.pure, pure, Except.pure, Id.run]
wp_bind x f := by
simp only [Result.instWP, bind]
ext Q
cases x <;> simp [PredTrans.bind, PredTrans.const]
```
:::
```lean
theorem Result.of_wp {α} {x : Result α} (P : Result α → Prop) :
(⊢ₛ wp⟦x⟧ post⟨fun a => ⌜P (.ok a)⌝,
fun e => ⌜P (.fail e)⌝⟩) → P x := by
intro hspec
simp only [instWP] at hspec
split at hspec <;> simp_all
```
:::leanSection
```lean -show
universe u v
variable {m : Type u → Type v} {ps : PostShape.{u}} [WP m ps] {P : Assertion ps} {α σ ε : Type u} {prog : m α} {Q' : α → Assertion ps}
```
The definition of the {name}`WP` instance determines what properties can be derived from proved specifications via {lean}`Result.of_wp`.
In effect, this lemma pins down what “weakest precondition” means for {name}`Result`.
:::
:::paragraph
To illustrate the second step, here is an example definition of {name}`UInt32` addition in {name}`Result` that models integer overflow:
```lean
instance : MonadExcept Error Result where
throw e := .fail e
tryCatch x h := match x with
| .ok v => pure v
| .fail e => h e
| .div => .div
def addOp (x y : UInt32) : Result UInt32 :=
if x.toNat + y.toNat ≥ UInt32.size then
throw .integerOverflow
else
pure (x + y)
```
:::
:::paragraph
There are two relevant specification lemmas to register:
```lean
@[spec]
theorem Result.throw_spec {α Q} (e : Error) :
⦃Q.2.1 e⦄ throw (m := Result) (α := α) e ⦃Q⦄ := id
@[spec]
theorem addOp_ok_spec {x y} (h : x.toNat + y.toNat < UInt32.size) :
⦃⌜True⌝⦄
addOp x y
⦃⇓ r => ⌜r = x + y ∧ (x + y).toNat = x.toNat + y.toNat⌝⦄ := by
mvcgen [addOp] with (simp_all; try grind)
```
:::
:::paragraph
This is already enough to prove the following example:
```lean
example :
⦃⌜True⌝⦄
do let mut x ← addOp 1 3
for _ in [:4] do
x ← addOp x 5
return x
⦃⇓ r => ⌜r.toNat = 24⌝⦄ := by
mvcgen
invariants
· ⇓⟨xs, x⟩ => ⌜x.toNat = 4 + 5 * xs.prefix.length⌝
with (simp_all [UInt32.size]; try grind)
```
:::
# Proof Mode for Stateful Goals
```lean -show
variable {σs : List (Type u)} {H T : SPred σs}
```
It is a priority of {tactic}`mvcgen` to break down monadic programs into {tech (remote := "reference")}[verification conditions] that are straightforward to understand.
For example, when the monad is monomorphic and all loop invariants have been instantiated, an invocation of {multiCode}[{tactic}`all_goals`{lit}` `{tactic}`mleave`] should simplify away any {name}`Std.Do.SPred`-specific constructs and leave behind a goal that is easily understood by humans and {tactic}`grind`.
This {multiCode}[{tactic}`all_goals`{lit}` `{tactic}`mleave`] step is carried out automatically by {tactic}`mvcgen` after loop invariants have been instantiated.
However, there are times when {tactic}`mleave` will be unable to remove all {name}`Std.Do.SPred` constructs.
In this case, verification conditions of the form {lean}`H ⊢ₛ T` will be left behind.
The assertion language {name}`Assertion` translates into an {name}`Std.Do.SPred` as follows:
```lean -keep
abbrev PostShape.args : PostShape.{u} → List (Type u)
| .pure => []
| .arg σ s => σ :: PostShape.args s
| .except _ s => PostShape.args s
abbrev Assertion (ps : PostShape.{u}) : Type u :=
SPred (PostShape.args ps)
```
:::leanSection
```lean -show
universe u v
variable {m : Type u → Type v} {ps : PostShape.{u}} [WP m ps] {P : Assertion ps} {α σ ε : Type u} {prog : m α} {Q' : α → Assertion ps}
```
A common case in which a VC of the form {lean}`H ⊢ₛ T` is left behind is when the base monad {lean}`m` is polymorphic.
In this case, the proof depends on a {lean}`WP m ps` instance that governs the translation into the {name}`Assertion` language, but the exact correspondence to `σs : List (Type u)` is not yet known.
To successfully discharge such a VC, `mvcgen` comes with an entire proof mode that is inspired by that of the Iris concurrent separation logic.
(In fact, the proof mode was adapted in large part from its Lean clone, [`iris-lean`](https://github.com/leanprover-community/iris-lean).)
The {ref "tactic-ref-spred" (remote := "reference")}[tactic reference] contains a list of all proof mode tactics.
::: |
reference-manual/Tutorial/Meta/Theme.lean | import VersoBlog
import VersoWeb.Theme
import VersoWeb.Components.Footer
import VersoWeb.Components.NavBar
open Verso Genre Blog Output Html Multi
open Web Components Theme
namespace LeanLangOrg
/--
Default footer configuration with all standard links
-/
def footer : FooterConfig := {
columns := #[
-- Get Started column
{
heading := "Get Started"
headingId := some "get-started"
ariaLabel := some "LEAN"
items := #[
{ title := "Install", url := "/install" },
{ title := "Learn", url := "/learn" },
{ title := "Community", url := "/community" },
{ title := "Reservoir", url := "https://reservoir.lean-lang.org/", blank := true }
]
},
-- Documentation column
{
heading := "Documentation"
headingId := some "documentation"
ariaLabel := some "Documentation"
items := #[
{ title := "Language reference", url := "/doc/reference/latest/" },
{ title := "Lean API", url := "/doc/api/" },
{ title := "Use cases", url := "/use-cases" },
{ title := "Cite Lean", url := "/learn#how-to-cite-lean" }
]
},
-- Resources column
{
heading := "Resources"
headingId := some "resources"
ariaLabel := some "Resources"
items := #[
{ title := "Lean playground", url := "https://live.lean-lang.org/?from=lean", blank := true },
{ title := "VS Code extension", url := "https://marketplace.visualstudio.com/items?itemName=leanprover.lean4", blank := true },
{ title := "Loogle", url := "https://loogle.lean-lang.org/", blank := true },
{ title := "Mathlib", url := "https://github.com/leanprover-community/mathlib4", blank := true }
]
},
-- FRO column
{
heading := "FRO"
headingId := some "fro"
ariaLabel := some "FRO"
items := #[
{ title := "Vision", url := "/fro" },
{ title := "Team", url := "/fro/team" },
{ title := "Roadmap", url := "/fro/roadmap/y3" },
{ title := "Contact", url := "/fro/contact" }
]
},
-- Policies column
{
heading := "Policies"
headingId := some "policies"
ariaLabel := some "Policies"
items := #[
{ title := "Privacy Policy", url := "/privacy" },
{ title := "Terms of Use", url := "/terms" },
{ title := "Lean Trademark Policy", url := "/trademark-policy" }
]
}
]
socialLinks := #[
{ url := "https://bsky.app/profile/lean-lang.org", icon := Icon.blueskyLogo, ariaLabel := some "Bluesky" },
{ url := "https://www.linkedin.com/company/lean-fro", icon := Icon.linkedinLogo, ariaLabel := some "LinkedIn" },
{ url := "https://functional.cafe/@leanprover", icon := Icon.mastodonLogo, ariaLabel := some "Mastodon" },
{ url := "https://x.com/leanprover", icon := Icon.xLogo, ariaLabel := some "X (Twitter)" },
{ url := "https://leanprover.zulipchat.com/", icon := Icon.zulipLogo, ariaLabel := some "Zulip" },
{ url := "https://github.com/leanprover/", icon := Icon.githubLogo, ariaLabel := some "GitHub" }
]
copyrightText := "© 2025 Lean FRO. All rights reserved."
showThemeSwitcher := true
}
/--
Helper to create FRO home navigation item
-/
def navFroItem (path : Path) : NavBarItem :=
{ title := .text false "Home"
, url := some "/fro"
, active := path == #["fro"] }
/--
Function to get all the items that redirect to pages.
-/
def getPageItems : TemplateM (Array NavBarItem) := do
let links ← Verso.Web.Util.getDirLinks
return links.map fun (active, url, title) => { title, url, active }
def isFro (path : Path) : Bool := path[0]?.isEqSome "fro"
/--
Build NavBarConfig for FRO section
-/
def buildFroNavBarConfig : TemplateM NavBarConfig := do
let leftItems ← getPageItems
let path ← currentPath
let froPathItems (path : Path) : Array NavBarItem := #[
{ title := .text false "About", url := some "/fro/about", active := path == #["fro", "about"] },
{ title := .text false "Team", url := some "/fro/team", active := path == #["fro", "team"] },
{ title := .text false "Roadmap", url := some "/fro/roadmap", active := path == #["fro", "roadmap"] },
{ title := .text false "Contact", url := some "/fro/contact", active := path == #["fro", "contact"] }
]
let externalLinks : Array NavBarItem := #[
{ title := .text false "Playground", url := some "https://live.lean-lang.org/?from=lean", blank := true },
{ title := .text false "Reservoir", url := some "https://reservoir.lean-lang.org/", blank := true }
]
let rightItems : Array NavBarItem := #[
{ title := Icon.moon, alt := some "Change Theme", classes := some "change-theme" },
{ title := Icon.github, alt := some "GitHub", url := some "https://github.com/leanprover/lean4", blank := true }
]
let menuItems := #[navFroItem path] ++ froPathItems path
return {
leftItems := leftItems
rightItems := rightItems
menuItems := menuItems
externalLinks := externalLinks
subNavBar := if isFro path then some (SubNavBarConfig.mk (froPathItems path)) else none
}
def socialMeta : SocialMeta :=
{ title := "Lean Programming Language",
description := "Lean is an open-source programming language and proof assistant that enables correct, maintainable, and formally verified code.",
image := "https://lean-lang.org/static/png/banner.png",
url := "https://lean-lang.org",
siteName := "Lean Language",
alt := "Lean Programming Language",
articleCreator := "@leanprover",
}
def headConfig : HeadConfig :=
{ description := socialMeta.description,
faviconWhite := "https://lean-lang.org/static/favicon-light.ico",
faviconDark := "https://lean-lang.org/static/favicon-dark.ico",
appleTouchIcon := "https://lean-lang.org/static/apple-touch-icon.png",
color := "#3D6AC9"
}
/--
Default theme configuration with all design tokens
-/
def colorTheme : ThemeConfig := {
variables := [
-- Typography
{ name := "font-primary", value := "'Open Sans', Arial, sans-serif" },
{ name := "font-secondary", value := "'Oranienbaum', serif" },
{ name := "fs-xs", value := "0.75rem" },
{ name := "fs-sm", value := "0.875rem" },
{ name := "fs-base", value := "1rem" },
{ name := "fs-md", value := "17px" },
{ name := "fs-lg", value := "1.25rem" },
{ name := "fs-xl", value := "2rem" },
{ name := "fs-2xl", value := "3.3rem" },
-- Spacing
{ name := "space-1", value := "0.25rem" },
{ name := "space-2", value := "0.5rem" },
{ name := "space-3", value := "0.75rem" },
{ name := "space-4", value := "1rem" },
{ name := "space-5", value := "1.25rem" },
{ name := "space-6", value := "1.5rem" },
{ name := "space-8", value := "2rem" },
{ name := "space-10", value := "2.5rem" },
{ name := "space-12", value := "3rem" },
{ name := "space-13", value := "3.5rem" },
{ name := "space-14", value := "4rem" },
{ name := "space-16", value := "5rem" },
-- Semantic spacing
{ name := "gap-sm", value := "var(--space-2)" },
{ name := "gap-md", value := "10px" },
{ name := "gap-lg", value := "30px" },
{ name := "gap-xl", value := "100px" },
-- Section padding
{ name := "section-padding", value := "var(--space-10)" },
{ name := "section-padding-top", value := "var(--space-16)" },
-- Border Radius
{ name := "radius-sm", value := "0.25rem" },
{ name := "radius-md", value := "0.5rem" },
{ name := "radius-lg", value := "1rem" },
{ name := "radius-pill", value := "9999px" },
-- Sizes
{ name := "container-width", value := "1240px" },
{ name := "logo-size", value := "1.25rem" },
{ name := "logo-footer-size", value := "60px" },
{ name := "icon-size", value := "64px" },
-- Layout
{ name := "nav-padding-y", value := "var(--space-6)" },
{ name := "nav-padding-x", value := "10vw" },
{ name := "nav-height", value := "calc(var(--nav-padding-y) * 2 + 2em)" },
-- Transitions
{ name := "transition-fast", value := "0.2s" },
{ name := "transition-base", value := "0.3s" },
{ name := "transition-slow", value := "0.6s" },
{ name := "transition-delay-none", value := "0s" },
{ name := "transition-delay-small", value := "0.05s" },
{ name := "transition-delay-medium", value := "0.1s" },
{ name := "transition-delay-large", value := "0.15s" },
-- Animation
{ name := "animation-delay", value := "10000ms" },
-- Z-Index
{ name := "z-below", value := "-1" },
{ name := "z-normal", value := "0" },
{ name := "z-above", value := "1" },
{ name := "z-header", value := "1000" },
-- Colors
{ name := "color-surface", value := "#fff" },
{ name := "color-primary", value := "#386EE0" },
{ name := "color-primary-focus", value := "#1D4ED8" },
{ name := "color-primary-light", value := "#4a90e2" },
{ name := "color-secondary", value := "#607D8B" },
{ name := "color-text", value := "#333" },
{ name := "color-text-contrast", value := "white" },
{ name := "color-text-light", value := "#64748b" },
{ name := "color-muted", value := "#607D8B" },
{ name := "color-bg", value := "#F9FBFD" },
{ name := "color-bg-translucent", value := "rgba(249, 251, 253, 0.81)" },
{ name := "color-white", value := "#fff" },
{ name := "color-border", value := "#E4EBF3" },
{ name := "color-border-nav", value := "#E4EBF3" },
{ name := "color-border-light", value := "#D1D9E2" },
{ name := "color-hover", value := "rgba(56, 110, 224, 0.08)" },
{ name := "color-link-hover", value := "#0073e6" },
{ name := "color-shadow", value := "rgba(35, 55, 139, 0.1)" },
-- Components
{ name := "btn-bg", value := "var(--color-primary)" },
{ name := "btn-text", value := "var(--color-white)" },
{ name := "btn-font", value := "var(--font-primary)" },
{ name := "btn-radius", value := "var(--radius-md)" },
-- Card specific
{ name := "card-bg", value := "var(--color-white)" },
{ name := "card-border", value := "var(--color-border-light)" },
-- Testimonial specific
{ name := "testimonial-bg", value := "var(--color-primary)" },
{ name := "testimonial-text", value := "var(--color-white)" }
],
darkVariables := [
-- Dark theme color overrides
{ name := "color-surface", value := "#121212" },
{ name := "color-primary", value := "#3b94ff" },
{ name := "color-primary-focus", value := "#669df6" },
{ name := "color-primary-light", value := "#6aadfe" },
{ name := "color-secondary", value := "#aabfc9" },
{ name := "color-text", value := "#eee" },
{ name := "color-text-light", value := "#bbb" },
{ name := "color-text-contrast", value := "white" },
{ name := "color-muted", value := "#90a4ae" },
{ name := "color-bg", value := "#181818" },
{ name := "color-bg-translucent", value := "rgba(24, 24, 24, 0.85)" },
{ name := "color-white", value := "#1e1e1e" },
{ name := "color-border", value := "#333" },
{ name := "color-border-nav", value := "#333" },
{ name := "color-border-light", value := "#444" },
{ name := "color-hover", value := "rgba(255, 255, 255, 0.08)" },
{ name := "color-link-hover", value := "#4d9efc" },
{ name := "color-shadow", value := "rgba(0, 0, 0, 0.5)" },
-- Component overrides
{ name := "btn-bg", value := "var(--color-primary)" },
{ name := "btn-text", value := "var(--color-white)" },
{ name := "card-bg", value := "#1f1f1f" },
{ name := "card-border", value := "#2a2a2a" },
{ name := "testimonial-bg", value := "#2e3a59" },
{ name := "testimonial-text", value := "#fff" }
]
}
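Each `name`/`value` pair above presumably ends up as a CSS custom property on the rendered page. Here is a minimal sketch of that mapping; the `CssVar` structure and `renderVar` function are hypothetical stand-ins for illustration, not the actual `ThemeConfig` machinery:

```lean
structure CssVar where
  name : String
  value : String

-- Render one variable as a CSS custom-property declaration.
def renderVar (v : CssVar) : String :=
  s!"--{v.name}: {v.value};"

#eval renderVar { name := "gap-sm", value := "var(--space-2)" }
-- "--gap-sm: var(--space-2);"
```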
/--
Lean-specific page type detection functions.
-/
def isMarkdownPage : Path → Bool
| _ => true
def indexPage : Path → Bool
| _ => false
def needsTitle : Path → Bool
| #["learn"] | #["install"] | #["404"] => false
| _ => true
def isInstallPage (path : Path) : Bool :=
path[0]?.isEqSome "install"
def isUseCases : Path → Bool
| #["use-cases"] => true
| _ => false
def isRoadmap : Path → Bool
| #["fro", "roadmap"] => true
| _ => false
def isPagePost : Path → Bool
| #["use-cases", _] | #["fro", "roadmap", _] => true
| _ => false
/--
Lean-specific post configuration.
-/
def postConfig : PostConfig :=
{ hasSpecialStyling := fun path => if isFro path then some "fro" else none }
/--
Lean website layout configuration.
-/
def layoutConfig : LayoutConfig :=
{ isMarkdownPage := isMarkdownPage,
isIndexPage := indexPage,
needsTitle := needsTitle,
isPagePost := isPagePost,
postConfig := postConfig,
hasSpecialStyling := fun path => if isFro path then some "fro" else if path.isEmpty then some "tutorials" else none,
renderPostList := fun path html =>
if isUseCases path then
{{ <div class="use-cases-grid"> {{ html }} </div> }}
else
html
}
def theme : Theme :=
Verso.Web.theme
{ siteName := "Lean Lang", rootTitle := "Lean enables correct, maintainable, and formally verified code", socialMeta, headConfig, variables := colorTheme }
layoutConfig
buildFroNavBarConfig
{{
<script async src="https://plausible.io/js/pa-RTua_4FfKHhfAvAc3liZd.js"></script>
<script>
"window.plausible=window.plausible||function(){(plausible.q=plausible.q||[]).push(arguments)},plausible.init=plausible.init||function(i){plausible.o=i||{}};
plausible.init()"
</script>
}}
(pure footer) |
reference-manual/Tutorial/Grind/IndexMap.lean | import VersoManual
import VersoTutorial
import Lean.Parser.Term
import Manual.Meta
open Verso.Genre Manual Tutorial
open Verso.Genre.Manual.InlineLean hiding module
open Verso.Doc.Elab (CodeBlockExpander)
open Verso.Code.External
open Lean.Elab.Tactic.GuardMsgs.WhitespaceMode
open Lean.Grind
set_option maxHeartbeats 1000000 -- Needed for elaboration of the `IndexMap` example.
set_option maxRecDepth 20000 -- Needed for compilation of the `IndexMap` example.
set_option pp.rawOnError true
set_option verso.exampleProject "."
set_option verso.exampleModule "IndexMapGrind"
#doc (Tutorial) "Using `grind` for Ordered Maps" =>
%%%
slug := "grind-index-map"
tag := "grind-index-map"
summary := inlines!"A demonstration of how to use {tactic}`grind` to automate essentially all proofs in a new data structure. The resulting API finds proofs automatically, allowing code that is both safe and convenient."
exampleStyle := .inlineLean `IndexMap
%%%
In this section we'll build an example of a new data structure and basic API for it, illustrating the use of {tactic}`grind`.
The example will be derived from Rust's [`indexmap`](https://docs.rs/indexmap/latest/indexmap/) data structure.
{anchorName IndexMap}`IndexMap` is intended as a replacement for {name Std.HashMap}`HashMap` (in particular, it has fast hash-based lookup), but allowing the user to maintain control of the order of the elements.
We won't give a complete API, just set up some basic functions and theorems about them.
The two main functions we'll implement for now are {anchorName insert}`insert` and {anchorName eraseSwap}`eraseSwap`:
* `insert k v` checks if `k` is already in the map. If so, it replaces the value with `v`, keeping `k` in the same position in the ordering.
If it is not already in the map, `insert` adds `(k, v)` to the end of the map.
* `eraseSwap k` removes the element with key `k` from the map, and swaps it with the last element of the map (or does nothing if `k` is not in the map).
(This behavior may be surprising: this function exists because it is an efficient way to erase an element when you don't care about the order of the remaining elements.
Another function, not implemented here, would preserve the order of the remaining elements, but at the cost of running in time proportional to the number of elements after the erased element.)
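As a concrete sketch of the intended ordering semantics (using hypothetical string keys, shown informally; not part of the API below):

```
-- Starting from the map [("a", 1), ("b", 2), ("c", 3)], in insertion order:
-- insert "b" 20   gives [("a", 1), ("b", 20), ("c", 3)]            -- key keeps its position
-- insert "d" 4    gives [("a", 1), ("b", 2), ("c", 3), ("d", 4)]   -- appended at the end
-- eraseSwap "a"   gives [("c", 3), ("b", 2)]                       -- last pair swapped into the erased slot
```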
Our goals will be:
: Complete encapsulation
The implementation of {anchorName IndexMap}`IndexMap` is hidden from the users, *and* the theorems about the implementation details are private.
: Use {tactic}`grind` as much as possible
We'll prefer adding a private theorem and annotating it with {attrs}`@[grind]` over writing a longer proof whenever practical.
: Use auto-parameters as much as possible
Ideally, we don't even need to see the proofs; they should mostly be handled invisibly by {tactic}`grind`.
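As a reminder of how auto-parameters interact with structure fields, here is a minimal standalone sketch; the `NonemptyList` structure is hypothetical and unrelated to the data structure below:

```lean
structure NonemptyList (α : Type u) where
  items : List α
  nonempty : items ≠ [] := by grind  -- auto-parameter: run when the field is omitted

-- The proof field can be left out whenever `grind` can discharge it:
example : NonemptyList Nat := { items := [1, 2, 3] }
```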
:::paragraph
The first step is to import the necessary data structures:
```anchor imports
import Std.Data.HashMap
```
:::
# Skeleton
:::displayOnly
To begin with, we'll write out a skeleton of what we want to achieve, liberally using {lean}`sorry` as a placeholder for all proofs.
In particular, this version makes no use of {tactic}`grind`.
```module (module := IndexMap)
import Std.Data.HashMap
open Std
structure IndexMap
(α : Type u) (β : Type v) [BEq α] [Hashable α] where
indices : HashMap α Nat
keys : Array α
values : Array β
size_keys : keys.size = values.size
WF : ∀ (i : Nat) (a : α),
keys[i]? = some a ↔ indices[a]? = some i
namespace IndexMap
variable {α : Type u} {β : Type v}
[BEq α] [LawfulBEq α] [Hashable α] [LawfulHashable α]
variable {m : IndexMap α β} {a : α} {b : β} {i : Nat}
@[inline] def size (m : IndexMap α β) : Nat :=
m.values.size
def emptyWithCapacity (capacity := 8) : IndexMap α β where
indices := HashMap.emptyWithCapacity capacity
keys := Array.emptyWithCapacity capacity
values := Array.emptyWithCapacity capacity
size_keys := sorry
WF := sorry
instance : EmptyCollection (IndexMap α β) where
emptyCollection := emptyWithCapacity
instance : Inhabited (IndexMap α β) where
default := ∅
@[inline] def contains (m : IndexMap α β)
(a : α) : Bool :=
m.indices.contains a
instance : Membership α (IndexMap α β) where
mem m a := a ∈ m.indices
instance {m : IndexMap α β} {a : α} : Decidable (a ∈ m) :=
inferInstanceAs (Decidable (a ∈ m.indices))
@[inline] def findIdx? (m : IndexMap α β) (a : α) : Option Nat :=
m.indices[a]?
@[inline] def findIdx (m : IndexMap α β) (a : α) (h : a ∈ m) : Nat :=
m.indices[a]
@[inline] def getIdx? (m : IndexMap α β) (i : Nat) : Option β :=
m.values[i]?
@[inline] def getIdx (m : IndexMap α β) (i : Nat)
(h : i < m.size := by get_elem_tactic) : β :=
m.values[i]
instance :
GetElem? (IndexMap α β) α β (fun m a => a ∈ m) where
getElem m a h :=
m.values[m.indices[a]]'(by sorry)
getElem? m a :=
m.indices[a]?.bind (m.values[·]?)
getElem! m a :=
m.indices[a]?.bind (m.values[·]?) |>.getD default
instance : LawfulGetElem (IndexMap α β) α β (fun m a => a ∈ m) where
getElem?_def := sorry
getElem!_def := sorry
@[inline] def insert (m : IndexMap α β) (a : α) (b : β) :
IndexMap α β :=
match h : m.indices[a]? with
| some i =>
{ indices := m.indices
keys := m.keys.set i a sorry
values := m.values.set i b sorry
size_keys := sorry
WF := sorry }
| none =>
{ indices := m.indices.insert a m.size
keys := m.keys.push a
values := m.values.push b
size_keys := sorry
WF := sorry }
instance : Singleton (α × β) (IndexMap α β) :=
⟨fun ⟨a, b⟩ => (∅ : IndexMap α β).insert a b⟩
instance : Insert (α × β) (IndexMap α β) :=
⟨fun ⟨a, b⟩ s => s.insert a b⟩
instance : LawfulSingleton (α × β) (IndexMap α β) :=
⟨fun _ => rfl⟩
/--
Erase the key-value pair with the given key,
moving the last pair into its place in the order.
If the key is not present, the map is unchanged.
-/
@[inline] def eraseSwap (m : IndexMap α β) (a : α) :
IndexMap α β :=
match h : m.indices[a]? with
| some i =>
if w : i = m.size - 1 then
{ indices := m.indices.erase a
keys := m.keys.pop
values := m.values.pop
size_keys := sorry
WF := sorry }
else
let lastKey := m.keys.back sorry
let lastValue := m.values.back sorry
{ indices := (m.indices.erase a).insert lastKey i
keys := m.keys.pop.set i lastKey sorry
values := m.values.pop.set i lastValue sorry
size_keys := sorry
WF := sorry }
| none => m
/-! ### Verification theorems -/
theorem getIdx_findIdx (m : IndexMap α β) (a : α)
(h : a ∈ m) :
m.getIdx (m.findIdx a h) sorry = m[a] :=
sorry
theorem mem_insert (m : IndexMap α β) (a a' : α) (b : β) :
a' ∈ m.insert a b ↔ a' = a ∨ a' ∈ m := by
sorry
theorem getElem_insert
(m : IndexMap α β) (a a' : α) (b : β)
(h : a' ∈ m.insert a b) :
(m.insert a b)[a']'h =
if h' : a' == a then b else m[a']'sorry := by
sorry
theorem findIdx_insert_self
(m : IndexMap α β) (a : α) (b : β) :
(m.insert a b).findIdx a sorry =
if h : a ∈ m then m.findIdx a h else m.size := by
sorry
end IndexMap
```
:::
# Getting Started
Let's get started.
We'll aspire to never writing a proof by hand, and the first step of that is to install auto-parameters for the `size_keys` and `WF` fields,
so we can omit these fields whenever `grind` can prove them.
While we're modifying the definition of `IndexMap` itself, let's make all the fields private, since we're planning on having complete encapsulation.
```anchor IndexMap
open Std
structure IndexMap
(α : Type u) (β : Type v) [BEq α] [Hashable α] where
private indices : HashMap α Nat
private keys : Array α
private values : Array β
private size_keys' : keys.size = values.size := by grind
private WF : ∀ (i : Nat) (a : α),
keys[i]? = some a ↔ indices[a]? = some i := by grind
```
For the rest of this tutorial, the following namespace and variable declarations are in effect:
```anchor variables
namespace IndexMap
variable {α : Type u} {β : Type v} [BEq α] [Hashable α]
variable {m : IndexMap α β} {a : α} {b : β} {i : Nat}
```
Let's give {tactic}`grind` access to the definition of `size` and to the private `size_keys` field:
```anchor size
@[inline] def size (m : IndexMap α β) : Nat :=
m.values.size
@[local grind =] private theorem size_keys : m.keys.size = m.size :=
m.size_keys'
@[local grind =] private theorem size_values : m.values.size = m.size := rfl
```
:::paragraph
Our first {lean}`sorry`s in the draft version are the {anchorTerm size}`size_keys` and {anchorTerm IndexMap}`WF` fields in our construction of {anchorTerm emptyWithCapacity}`def emptyWithCapacity`.
Surely these are trivial, and solvable by {tactic}`grind`, so we simply delete those fields:
```anchor emptyWithCapacity
def emptyWithCapacity (capacity := 8) : IndexMap α β where
indices := HashMap.emptyWithCapacity capacity
keys := Array.emptyWithCapacity capacity
values := Array.emptyWithCapacity capacity
```
:::
:::codeOnly
```anchor Membership
@[inline] def contains (m : IndexMap α β)
(a : α) : Bool :=
m.indices.contains a
instance : Membership α (IndexMap α β) where
mem m a := a ∈ m.indices
instance {m : IndexMap α β} {a : α} : Decidable (a ∈ m) :=
inferInstanceAs (Decidable (a ∈ m.indices))
```
:::
:::displayOnly
Our next task is to deal with the {lean}`sorry` in our construction of the original {anchorTerm GetElem?}`GetElem?` instance:
```anchor GetElem? (module := IndexMap)
instance :
GetElem? (IndexMap α β) α β (fun m a => a ∈ m) where
getElem m a h :=
m.values[m.indices[a]]'(by sorry)
getElem? m a :=
m.indices[a]?.bind (m.values[·]?)
getElem! m a :=
m.indices[a]?.bind (m.values[·]?) |>.getD default
```
:::
The goal at this sorry is
```
m : IndexMap α β
a : α
h : a ∈ m
⊢ m.indices[a] < m.values.size
```
:::comment
FIXME (Q3): @david-christiansen:
We need to keep the goal display above in sync with the `sorry` in the code block before it.
The solution is to add support for term goals to the SubVerso extraction mechanism, along the lines of the existing support for saving ordinary goals.
:::
Let's try proving this as a stand-alone theorem, via {tactic}`grind`, and see where {tactic}`grind` gets stuck.
Because we've added {tactic}`grind` annotations for {anchorTerm size}`size` and {anchorTerm size}`size_keys` already, we can safely reformulate the goal as:
```anchor getElem_indices_lt_init
theorem getElem_indices_lt (m : IndexMap α β) (a : α) (h : a ∈ m) :
m.indices[a] < m.size := by
grind
```
This fails, and looking at the `Goal diagnostics` section of the message from {tactic}`grind` we see that it hasn't done much:
```anchorError getElem_indices_lt_init (expandTrace := facts)
`grind` failed
case grind
α : Type u
β : Type v
inst : BEq α
inst_1 : Hashable α
m : IndexMap α β
a : α
h : a ∈ m
h_1 : m.size ≤ m.indices[a]
⊢ False
[grind] Goal diagnostics
[facts] Asserted facts
[prop] a ∈ m
[prop] m.size ≤ m.indices[a]
[eqc] True propositions
[eqc] Equivalence classes
[ematch] E-matching patterns
[cutsat] Assignment satisfying linear constraints
```
An immediate problem we can see here is that
{tactic}`grind` does not yet know that `a ∈ m` is the same as `a ∈ m.indices`.
Let's add this fact:
```anchor mem_indices
@[local grind _=_] private theorem mem_indices
{m : IndexMap α β} {a : α} :
a ∈ m.indices ↔ a ∈ m := Iff.rfl
```
::::leanSection
```lean -show
variable {α : Type u} [BEq α] [Hashable α]
```
:::paragraph
However this proof is going to work, we know the following:
* It must use the well-formedness condition of the map.
* It can't do so without relating `m.indices[a]` and `m.indices[a]?` (because the latter is what appears in the well-formedness condition).
* The expected relationship there doesn't even hold unless the map `m.indices` satisfies {lean}`LawfulGetElem`,
for which we need {tech (remote:="reference")}[instances] of {lean}`LawfulBEq α` and {lean}`LawfulHashable α`.
:::
:::TODO
TODO: I'd like to ensure there's a link to the `LawfulGetElem` instance for `HashMap`, so we can see these requirements!
:::
::::
:::paragraph
Let's configure things so that those are available:
```anchor Lawfuls
variable [LawfulBEq α] [LawfulHashable α]
attribute [local grind _=_] IndexMap.WF
```
and then give {tactic}`grind` one manual hint, to relate {anchorTerm getElem_indices_lt}`m.indices[a]` and {anchorTerm getElem_indices_lt}`m.indices[a]?`:
```anchor getElem_indices_lt
private theorem getElem_indices_lt {h : a ∈ m} : m.indices[a] < m.size := by
have : m.indices[a]? = some m.indices[a] := by grind
grind
```
:::
With that theorem proved, we want to make it accessible to {tactic}`grind`.
We could either add {attrs}`@[local grind]` before the theorem statement,
or write `attribute [local grind] getElem_indices_lt` after the theorem statement.
Either way, {tactic}`grind`'s built-in heuristics choose a pattern on which to match the theorem.
:::paragraph
In this case, let's see which patterns the {attr}`grind` attribute generates:
```anchor getElem_indices_lt_attr
attribute [local grind] getElem_indices_lt
```
```anchorInfo getElem_indices_lt_attr
Try these:
[apply] [grind
.] for pattern: [@LE.le `[Nat] `[instLENat] ((@getElem (HashMap #8 `[Nat] #6 #5) _ `[Nat] _ _ (@indices _ #7 _ _ #4) #3 #0) + 1) (@size _ _ _ _ #4)]
[apply] [grind →] for pattern: [LawfulBEq #8 #6, LawfulHashable _ _ #5, @Membership.mem _ (IndexMap _ #7 _ _) _ #4 #3]
```
These patterns are not useful.
The first is matching on the entire conclusion of the theorem (in fact, a normalized version of it, in which `x < y` has been replaced by `x + 1 ≤ y`).
The second is too general: it will match any term that includes the theorem's assumptions, ignoring the conclusion.
:::
:::paragraph
We want something more general than the entire conclusion, but unlike the second pattern, the conclusion should not be ignored.
We'd like this theorem to fire whenever {tactic}`grind` sees {anchorTerm getElem_indices_lt_pattern}`m.indices[a]`, and so instead of using the attribute we write a custom pattern:
```anchor getElem_indices_lt_pattern
grind_pattern getElem_indices_lt => m.indices[a]
```
:::
:::paragraph
The Lean standard library uses the {tactic}`get_elem_tactic` tactic as an auto-parameter for the `xs[i]` notation
(which desugars to `GetElem.getElem xs i h`, with the proof `h` generated by {tactic}`get_elem_tactic`).
We'd like {tactic}`grind` not only to fill in these proofs, but to let us omit them entirely.
To achieve this, we add the line
```anchor get_elem_grind
macro_rules | `(tactic| get_elem_tactic_extensible) => `(tactic| grind)
```
(In later versions of Lean this may be part of the built-in behavior.)
:::
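To see the effect, here is a hedged standalone example (with hypothetical array and hypothesis names): an index access whose bound is provable from the context elaborates without an explicit proof term. This particular bound is simple enough that the default {tactic}`get_elem_tactic` behavior would also find it; the extension matters for the harder obligations below.

```lean
example (xs : Array Nat) (h : 3 < xs.size) : Nat :=
  xs[3]  -- the side condition `3 < xs.size` is discharged by `get_elem_tactic`
```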
:::paragraph
We can now return to constructing our {anchorName GetElem?}`GetElem?` instance.
In order to use the well-formedness condition, {tactic}`grind` must be able to unfold {anchorName size}`size`:
```anchor local_grind_size
attribute [local grind] size
```
The {anchorTerm local_grind_size}`local` modifier restricts this unfolding to the current file.
With this in place, we can simply write:
```anchor GetElem?
instance : GetElem? (IndexMap α β) α β (fun m a => a ∈ m) where
getElem m a h :=
m.values[m.indices[a]]
getElem? m a :=
m.indices[a]?.bind (fun i => (m.values[i]?))
getElem! m a :=
m.indices[a]?.bind (fun i => (m.values[i]?)) |>.getD default
```
with neither any {lean}`sorry`s, nor any explicitly written proofs.
:::
:::paragraph
Next, we want to expose the content of these definitions, but only locally in this file:
```anchor getElem_local
@[local grind =] private theorem getElem_def
(m : IndexMap α β) (a : α) (h : a ∈ m) :
m[a] = m.values[m.indices[a]'h] :=
rfl
@[local grind =] private theorem getElem?_def
(m : IndexMap α β) (a : α) :
m[a]? = m.indices[a]?.bind (fun i => (m.values[i]?)) :=
rfl
@[local grind =] private theorem getElem!_def
[Inhabited β] (m : IndexMap α β) (a : α) :
m[a]! = (m.indices[a]?.bind (m.values[·]?)).getD default :=
rfl
```
Again we're using the {anchorTerm getElem_local}`@[local grind =] private theorem` pattern to hide these implementation details,
but allow {tactic}`grind` to see these facts locally.
:::
:::paragraph
Next, we want to prove the {anchorName LawfulGetElem}`LawfulGetElem` instance, and hope that {tactic}`grind` can fill in the proofs:
```anchor LawfulGetElem
instance : LawfulGetElem (IndexMap α β) α β (fun m a => a ∈ m) where
getElem?_def := by grind
getElem!_def := by grind
```
Success!
:::
:::paragraph
Let's press onward, and see if we can define {anchorName insert}`insert` without having to write any proofs:
```anchor insert
@[inline] def insert (m : IndexMap α β) (a : α) (b : β) : IndexMap α β :=
match h : m.indices[a]? with
| some i =>
{ indices := m.indices
keys := m.keys.set i a
values := m.values.set i b }
| none =>
{ indices := m.indices.insert a m.size
keys := m.keys.push a
values := m.values.push b }
```
In both branches, {tactic}`grind` is automatically proving both the {anchorTerm IndexMap}`size_keys'` and {anchorTerm IndexMap}`WF` fields!
Note also that in the first branch, the {anchorTerm insert}`set` calls {anchorTerm insert}`m.keys.set i a` and {anchorTerm insert}`m.values.set i b`
have their “in-bounds” obligations filled in automatically by {tactic}`grind` via the {tactic}`get_elem_tactic` auto-parameter.
:::
:::paragraph
Next let's try `eraseSwap`:
```anchor eraseSwap_init
@[inline] def eraseSwap (m : IndexMap α β) (a : α) : IndexMap α β :=
match h : m.indices[a]? with
| some i =>
if w : i = m.size - 1 then
{ indices := m.indices.erase a
keys := m.keys.pop
values := m.values.pop }
else
let lastKey := m.keys.back
let lastValue := m.values.back
{ indices := (m.indices.erase a).insert lastKey i
keys := m.keys.pop.set i lastKey
values := m.values.pop.set i lastValue }
| none => m
```
```anchorError eraseSwap_init
could not synthesize default value for field 'WF' of 'IndexMap' using tactics
```
```anchorError eraseSwap_init
`grind` failed
case grind.1.1.2.2.1.1.1
α : Type u
β : Type v
inst : BEq α
inst_1 : Hashable α
m_1 : IndexMap α β
a_1 : α
b : β
i_1 : Nat
inst_2 : LawfulBEq α
inst_3 : LawfulHashable α
m : IndexMap α β
a : α
i : Nat
h : m.indices[a]? = some i
w : ¬i = m.size - 1
lastKey : α := m.keys.back ⋯
lastValue : β := m.values.back ⋯
i_2 : Nat
a_2 : α
h_1 : ((m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = some a_2) =
¬((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? = some i_2
h_2 : -1 * ↑(m.keys.set i (m.keys.back ⋯) ⋯).size + 1 ≤ 0
left : (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = some a_2
right : ¬((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? = some i_2
h_4 : ¬i = i_2
left_1 : ¬m.keys[i_2]? = some a
right_1 : ¬m.indices[a]? = some i_2
h_6 : (m.keys.back ⋯ == a_2) = true
h_7 : i + 1 ≤ m.keys.pop.size
left_2 : (m.indices.erase a).contains a_2 = true
right_2 : a_2 ∈ m.indices.erase a
⊢ False
[grind] Goal diagnostics
[facts] Asserted facts
[eqc] True propositions
[eqc] False propositions
[eqc] Equivalence classes
[cases] Case analyses
[ematch] E-matching patterns
[cutsat] Assignment satisfying linear constraints
[ring] Rings
[grind] Diagnostics
```
This fails while attempting to prove the {anchorName IndexMap}`WF` field in the second branch.
As usual, there is detailed information from {tactic}`grind` about its failure state, but almost too much to be helpful!
Let's look at the model produced by `cutsat` and see if we can see what's going on:
```anchorError eraseSwap_init (onlyTrace := "Assignment satisfying linear constraints") (expandTrace := cutsat)
[cutsat] Assignment satisfying linear constraints
[assign] i_1 := 4
[assign] i := 0
[assign] i_2 := 1
[assign] m.keys.pop.size := 2
[assign] m.keys.size := 3
[assign] m.size := 3
[assign] (m.keys.pop.set i (m.keys.back ⋯) ⋯).size := 2
[assign] m.values.size := 3
[assign] m.indices[a] := 0
[assign] ((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2] := 0
[assign] (m.keys.set i (m.keys.back ⋯) ⋯).pop.size := 2
[assign] (m.keys.set i (m.keys.back ⋯) ⋯).size := 3
[assign] m.indices[a] := 0
[assign] m.indices[a_2] := 1
[assign] m.indices[m.keys[i_2]] := 1
[assign] m.indices[m.keys[i_2]] := 1
```
```comment
FIXME (@kim-em / @leodemoura): there is some repeated output here.
```
This model consists of an {anchorName IndexMap}`IndexMap` of size {lean}`3`,
with keys `a_1`, `a_2` and the otherwise unnamed `(keys m_1).back ⋯`.
:::
:::paragraph
Everything looks fine, *except* the line:
```
(((indices m_1).erase a_1).insert ((keys m_1).back ⋯) i_1)[a_2] := 0
```
This shouldn't be possible! Since the three keys are distinct,
we should have
```
(((indices m_1).erase a_1).insert ((keys m_1).back ⋯) i_1)[a_2] =
((indices m_1).erase a_1)[a_2] =
(indices m_1)[a_2] =
1
```
Now that we've found something suspicious, we can look through the equivalence classes identified by `grind`.
(In the future we'll be providing search tools for inspecting equivalence classes, but for now you need to read through manually.)
We find amongst many others:
```
{a_2,
(keys m_1).back ⋯,
(keys m_1)[(keys m_1).size - 1],
(keys m_1)[i_2], ...}
```
This should imply, by the injectivity of {anchorName IndexMap}`keys`, that `i_2 = (keys m_1).size - 1`.
Since this identity *wasn't* reflected by the `cutsat` model,
we suspect that {tactic}`grind` is not managing to use the injectivity of {anchorName IndexMap}`keys`.
:::
Thinking about the way that we've provided the well-formedness condition, as
`∀ (i : Nat) (a : α), keys[i]? = some a ↔ indices[a]? = some i`, this perhaps isn't surprising:
it's expressed in terms of `keys[i]?` and `indices[a]?`.
Let's add a variant version of the well-formedness condition using {name GetElem.getElem}`getElem` instead of {name GetElem?.getElem?}`getElem?`:
```anchor WF'
@[local grind .]
private theorem WF' (i : Nat) (a : α) (h₁ : i < m.keys.size) (h₂ : a ∈ m) :
m.keys[i] = a ↔ m.indices[a] = i := by
have := m.WF i a
grind
```
We can verify that with this available, {tactic}`grind` can now prove:
```anchor WF'ex
example {m : IndexMap α β} {a : α} {h : a ∈ m} :
m.keys[m.indices[a]'h] = a := by grind
```
Trying again with {anchorName eraseSwap}`eraseSwap`, everything goes through cleanly now, with no manual proofs:
```anchor eraseSwap
@[inline] def eraseSwap (m : IndexMap α β) (a : α) : IndexMap α β :=
match h : m.indices[a]? with
| some i =>
if w : i = m.size - 1 then
{ indices := m.indices.erase a
keys := m.keys.pop
values := m.values.pop }
else
let lastKey := m.keys.back
let lastValue := m.values.back
{ indices := (m.indices.erase a).insert lastKey i
keys := m.keys.pop.set i lastKey
values := m.values.pop.set i lastValue }
| none => m
```
:::codeOnly
```anchor getFindIdx
@[inline] def findIdx? (m : IndexMap α β) (a : α) : Option Nat :=
m.indices[a]?
@[inline] def findIdx (m : IndexMap α β) (a : α)
(h : a ∈ m := by get_elem_tactic) : Nat :=
m.indices[a]
@[inline] def getIdx? (m : IndexMap α β) (i : Nat) : Option β :=
m.values[i]?
@[inline] def getIdx (m : IndexMap α β) (i : Nat)
(h : i < m.size := by get_elem_tactic) : β :=
m.values[i]
```
:::
Finally we turn to the verification theorems about the basic operations, relating {anchorName Verification}`getIdx`, {anchorName Verification}`findIdx`, and {anchorName Verification}`insert`.
The proofs all go through effortlessly using {tactic}`grind` with the `+locals` modifier (which tells {tactic}`grind` to unfold local definitions):
```anchor Verification
/-! ### Verification theorems (not exhaustive) -/
@[grind =]
theorem mem_insert (m : IndexMap α β) (a a' : α) (b : β) :
a' ∈ m.insert a b ↔ a' = a ∨ a' ∈ m := by
grind +locals
@[grind =]
theorem getElem_insert (m : IndexMap α β) (a a' : α) (b : β) (h : a' ∈ m.insert a b) :
(m.insert a b)[a'] = if h' : a' == a then b else m[a'] := by
grind +locals
theorem findIdx_lt (m : IndexMap α β) (a : α) (h : a ∈ m) :
m.findIdx a h < m.size := by
grind +locals
grind_pattern findIdx_lt => m.findIdx a h
@[grind =]
theorem findIdx_insert_self (m : IndexMap α β) (a : α) (b : β) :
(m.insert a b).findIdx a = if h : a ∈ m then m.findIdx a else m.size := by
grind +locals
@[grind =]
theorem findIdx?_eq (m : IndexMap α β) (a : α) :
m.findIdx? a = if h : a ∈ m then some (m.findIdx a h) else none := by
grind +locals
@[grind =]
theorem getIdx_findIdx (m : IndexMap α β) (a : α) (h : a ∈ m) :
m.getIdx (m.findIdx a) = m[a] := by grind +locals
omit [LawfulBEq α] [LawfulHashable α] in
@[grind =]
theorem getIdx?_eq (m : IndexMap α β) (i : Nat) :
m.getIdx? i = if h : i < m.size then some (m.getIdx i h) else none := by
grind +locals
private theorem getElem_keys_mem {m : IndexMap α β} {i : Nat} (h : i < m.size) :
m.keys[i] ∈ m := by
have : m.indices[m.keys[i]]? = some i := by grind
grind
local grind_pattern getElem_keys_mem => m.keys[i]
theorem getElem?_eraseSwap (m : IndexMap α β) (a a' : α) :
(m.eraseSwap a)[a']? = if a' == a then none else m[a']? := by
grind +locals
@[grind =]
theorem mem_eraseSwap (m : IndexMap α β) (a a' : α) :
a' ∈ m.eraseSwap a ↔ a' ≠ a ∧ a' ∈ m := by
grind +locals
theorem getElem_eraseSwap (m : IndexMap α β) (a a' : α) (h : a' ∈ m.eraseSwap a) :
(m.eraseSwap a)[a'] = m[a'] := by
grind +locals
```
Note that these are part of the public API of {anchorName Verification}`IndexMap`, so we need to mark them as {attrs}`@[grind]`:
users without our internal {keyword}`local grind` annotations can then still use them in {tactic}`grind` proofs.
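For instance, a downstream user (outside this file, without our {keyword}`local grind` annotations) might hope to write proofs like the following purely from the public lemmas; this is a hypothetical sketch, not part of the tutorial's source:

```lean
example {α : Type u} {β : Type v}
    [BEq α] [LawfulBEq α] [Hashable α] [LawfulHashable α]
    (m : IndexMap α β) (a a' : α) (b : β) (h : a' ∈ m) :
    a' ∈ m.insert a b := by
  grind  -- closed via the public `@[grind =]` lemma `mem_insert`
```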
Putting this all together, our prototype API has reached the following state:
:::TODO
Construct this version from the source module using annotations that cause unwanted content to be discarded, so we keep them in sync
:::
```lean
local macro_rules | `(tactic| get_elem_tactic_extensible) => `(tactic| grind)
open Std
structure IndexMap
(α : Type u) (β : Type v) [BEq α] [Hashable α] where
private indices : HashMap α Nat
private keys : Array α
private values : Array β
private size_keys' : keys.size = values.size := by grind
private WF : ∀ (i : Nat) (a : α),
keys[i]? = some a ↔ indices[a]? = some i := by grind
namespace IndexMap
variable {α : Type u} {β : Type v} [BEq α] [Hashable α]
variable {m : IndexMap α β} {a : α} {b : β} {i : Nat}
@[inline] def size (m : IndexMap α β) : Nat :=
m.values.size
@[local grind =] private theorem size_keys : m.keys.size = m.size :=
m.size_keys'
@[local grind =] private theorem size_values : m.values.size = m.size := rfl
def emptyWithCapacity (capacity := 8) : IndexMap α β where
indices := HashMap.emptyWithCapacity capacity
keys := Array.emptyWithCapacity capacity
values := Array.emptyWithCapacity capacity
instance : EmptyCollection (IndexMap α β) where
emptyCollection := emptyWithCapacity
instance : Inhabited (IndexMap α β) where
default := ∅
@[inline] def contains (m : IndexMap α β) (a : α) : Bool :=
m.indices.contains a
instance : Membership α (IndexMap α β) where
mem m a := a ∈ m.indices
instance {m : IndexMap α β} {a : α} : Decidable (a ∈ m) :=
inferInstanceAs (Decidable (a ∈ m.indices))
@[local grind _=_] private theorem mem_indices
{m : IndexMap α β} {a : α} :
a ∈ m.indices ↔ a ∈ m := Iff.rfl
@[inline] def findIdx? (m : IndexMap α β) (a : α) : Option Nat :=
m.indices[a]?
@[inline] def findIdx (m : IndexMap α β) (a : α)
(h : a ∈ m := by get_elem_tactic) : Nat :=
m.indices[a]
@[inline] def getIdx? (m : IndexMap α β) (i : Nat) : Option β :=
m.values[i]?
@[inline] def getIdx (m : IndexMap α β) (i : Nat)
(h : i < m.size := by get_elem_tactic) : β :=
m.values[i]
variable [LawfulBEq α] [LawfulHashable α]
attribute [local grind _=_] IndexMap.WF
private theorem getElem_indices_lt
{h : a ∈ m} : m.indices[a] < m.size := by
have : m.indices[a]? = some m.indices[a] := by grind
grind
grind_pattern getElem_indices_lt => m.indices[a]
instance : GetElem? (IndexMap α β) α β (fun m a => a ∈ m) where
getElem m a h :=
m.values[m.indices[a]]
getElem? m a :=
m.indices[a]?.bind (fun i => (m.values[i]?))
getElem! m a :=
m.indices[a]?.bind (fun i => (m.values[i]?)) |>.getD default
@[local grind =] private theorem getElem_def
(m : IndexMap α β) (a : α) (h : a ∈ m) :
m[a] = m.values[m.indices[a]'h] :=
rfl
@[local grind =] private theorem getElem?_def
(m : IndexMap α β) (a : α) :
m[a]? = m.indices[a]?.bind (fun i => (m.values[i]?)) :=
rfl
@[local grind =] private theorem getElem!_def
[Inhabited β] (m : IndexMap α β) (a : α) :
m[a]! = (m.indices[a]?.bind (m.values[·]?)).getD default :=
rfl
instance : LawfulGetElem (IndexMap α β) α β (fun m a => a ∈ m) where
getElem?_def := by grind
getElem!_def := by grind
@[inline] def insert (m : IndexMap α β) (a : α) (b : β) : IndexMap α β :=
match h : m.indices[a]? with
| some i =>
{ indices := m.indices
keys := m.keys.set i a
values := m.values.set i b }
| none =>
{ indices := m.indices.insert a m.size
keys := m.keys.push a
values := m.values.push b }
instance : Singleton (α × β) (IndexMap α β) :=
⟨fun ⟨a, b⟩ => (∅ : IndexMap α β).insert a b⟩
instance : Insert (α × β) (IndexMap α β) :=
⟨fun ⟨a, b⟩ s => s.insert a b⟩
instance : LawfulSingleton (α × β) (IndexMap α β) :=
⟨fun _ => rfl⟩
@[local grind .]
private theorem WF' (i : Nat) (a : α) (h₁ : i < m.keys.size) (h₂ : a ∈ m) :
m.keys[i] = a ↔ m.indices[a] = i := by
have := m.WF i a
grind
/--
Erase the key-value pair with the given key,
moving the last pair into its place in the order.
If the key is not present, the map is unchanged.
-/
@[inline] def eraseSwap (m : IndexMap α β) (a : α) : IndexMap α β :=
match h : m.indices[a]? with
| some i =>
if w : i = m.size - 1 then
{ indices := m.indices.erase a
keys := m.keys.pop
values := m.values.pop }
else
let lastKey := m.keys.back
let lastValue := m.values.back
{ indices := (m.indices.erase a).insert lastKey i
keys := m.keys.pop.set i lastKey
values := m.values.pop.set i lastValue }
| none => m
/-! ### Verification theorems (not exhaustive) -/
@[grind =]
theorem mem_insert (m : IndexMap α β) (a a' : α) (b : β) :
a' ∈ m.insert a b ↔ a' = a ∨ a' ∈ m := by
grind +locals
@[grind =]
theorem getElem_insert (m : IndexMap α β) (a a' : α) (b : β) (h : a' ∈ m.insert a b) :
(m.insert a b)[a'] = if h' : a' == a then b else m[a'] := by
grind +locals
theorem findIdx_lt (m : IndexMap α β) (a : α) (h : a ∈ m) :
m.findIdx a h < m.size := by
grind +locals
grind_pattern findIdx_lt => m.findIdx a h
@[grind =]
theorem findIdx_insert_self (m : IndexMap α β) (a : α) (b : β) :
(m.insert a b).findIdx a = if h : a ∈ m then m.findIdx a else m.size := by
grind +locals
@[grind =]
theorem findIdx?_eq (m : IndexMap α β) (a : α) :
m.findIdx? a = if h : a ∈ m then some (m.findIdx a h) else none := by
grind +locals
@[grind =]
theorem getIdx_findIdx (m : IndexMap α β) (a : α) (h : a ∈ m) :
m.getIdx (m.findIdx a) = m[a] := by grind +locals
omit [LawfulBEq α] [LawfulHashable α] in
@[grind =]
theorem getIdx?_eq (m : IndexMap α β) (i : Nat) :
m.getIdx? i = if h : i < m.size then some (m.getIdx i h) else none := by
grind +locals
end IndexMap
```
We've now also added verification theorems for the {anchorName eraseSwap}`eraseSwap` operation; the interested reader is encouraged to explore further,
and perhaps even to release a complete {anchorName IndexMap}`IndexMap` library!
To summarize the encapsulation design principles discussed above:
* the fields of {anchorName IndexMap}`IndexMap` are all private, as they are implementation details.
* the theorems about these fields are all private, and marked as {attrs}`@[local grind]`, rather than {attrs}`@[grind]`, as they won't be needed after we've set up the API.
* the verification theorems are both marked as {attrs}`@[grind]` and proved by {tactic}`grind`:
the annotation is necessary because we want {tactic}`grind` to be able to prove these facts even outside the current module, where the {attrs}`@[local grind]` theorems are no longer available. |
reference-manual/extended-examples/IndexMapGrind.lean | --ANCHOR: imports
import Std.Data.HashMap
--ANCHOR_END: imports
import IndexMapGrind.CheckMsgs
open Std in
-- This block is here as a guard: when/if the global `get_elem_tactic` uses grind, this will fail,
-- prompting us to update the sentence about "later versions of Lean" in the chapter.
example (m : HashMap Nat Nat) : (m.insert 1 2).size ≤ m.size + 1 := by
fail_if_success get_elem_tactic
grind
-- ANCHOR: get_elem_grind
macro_rules | `(tactic| get_elem_tactic_extensible) => `(tactic| grind)
-- ANCHOR_END: get_elem_grind
open Std in
-- This code is also here as a guard: it makes sure `grind` is now running as part of `get_elem_tactic`.
example (m : HashMap Nat Nat) : (m.insert 1 2).size ≤ m.size + 1 := by
get_elem_tactic
-- ANCHOR: IndexMap
open Std
structure IndexMap
(α : Type u) (β : Type v) [BEq α] [Hashable α] where
private indices : HashMap α Nat
private keys : Array α
private values : Array β
private size_keys' : keys.size = values.size := by grind
private WF : ∀ (i : Nat) (a : α),
keys[i]? = some a ↔ indices[a]? = some i := by grind
-- ANCHOR_END: IndexMap
-- ANCHOR: variables
namespace IndexMap
variable {α : Type u} {β : Type v} [BEq α] [Hashable α]
variable {m : IndexMap α β} {a : α} {b : β} {i : Nat}
-- ANCHOR_END: variables
-- ANCHOR: size
@[inline] def size (m : IndexMap α β) : Nat :=
m.values.size
@[local grind =] private theorem size_keys : m.keys.size = m.size :=
m.size_keys'
@[local grind =] private theorem size_values : m.values.size = m.size := rfl
-- ANCHOR_END: size
-- ANCHOR: emptyWithCapacity
def emptyWithCapacity (capacity := 8) : IndexMap α β where
indices := HashMap.emptyWithCapacity capacity
keys := Array.emptyWithCapacity capacity
values := Array.emptyWithCapacity capacity
-- ANCHOR_END: emptyWithCapacity
instance : EmptyCollection (IndexMap α β) where
emptyCollection := emptyWithCapacity
instance : Inhabited (IndexMap α β) where
default := ∅
-- ANCHOR: Membership
@[inline] def contains (m : IndexMap α β)
(a : α) : Bool :=
m.indices.contains a
instance : Membership α (IndexMap α β) where
mem m a := a ∈ m.indices
instance {m : IndexMap α β} {a : α} : Decidable (a ∈ m) :=
inferInstanceAs (Decidable (a ∈ m.indices))
-- ANCHOR_END: Membership
discarding
/--
error: `grind` failed
case grind
α : Type u
β : Type v
inst : BEq α
inst_1 : Hashable α
m : IndexMap α β
a : α
h : a ∈ m
h_1 : m.size ≤ m.indices[a]
⊢ False
[grind] Goal diagnostics
[facts] Asserted facts
[prop] a ∈ m
[prop] m.size ≤ m.indices[a]
[eqc] True propositions
[prop] m.size ≤ m.indices[a]
[prop] a ∈ m
[eqc] Equivalence classes
[eqc] {Membership.mem, fun m a => a ∈ m}
[ematch] E-matching patterns
[thm] HashMap.contains_iff_mem: [@Membership.mem #5 (HashMap _ #4 #3 #2) _ #1 #0]
[cutsat] Assignment satisfying linear constraints
[assign] m.size := 0
[assign] m.indices[a] := 0
-/
#check_msgs in
-- ANCHOR: getElem_indices_lt_init
theorem getElem_indices_lt (m : IndexMap α β) (a : α) (h : a ∈ m) :
m.indices[a] < m.size := by
grind
-- ANCHOR_END: getElem_indices_lt_init
stop discarding
-- ANCHOR: mem_indices
@[local grind _=_] private theorem mem_indices
{m : IndexMap α β} {a : α} :
a ∈ m.indices ↔ a ∈ m := Iff.rfl
-- ANCHOR_END: mem_indices
-- ANCHOR: getFindIdx
@[inline] def findIdx? (m : IndexMap α β) (a : α) : Option Nat :=
m.indices[a]?
@[inline] def findIdx (m : IndexMap α β) (a : α)
(h : a ∈ m := by get_elem_tactic) : Nat :=
m.indices[a]
@[inline] def getIdx? (m : IndexMap α β) (i : Nat) : Option β :=
m.values[i]?
@[inline] def getIdx (m : IndexMap α β) (i : Nat)
(h : i < m.size := by get_elem_tactic) : β :=
m.values[i]
-- ANCHOR_END: getFindIdx
-- ANCHOR: Lawfuls
variable [LawfulBEq α] [LawfulHashable α]
attribute [local grind _=_] IndexMap.WF
-- ANCHOR_END: Lawfuls
-- ANCHOR: getElem_indices_lt
private theorem getElem_indices_lt {h : a ∈ m} : m.indices[a] < m.size := by
have : m.indices[a]? = some m.indices[a] := by grind
grind
-- ANCHOR_END: getElem_indices_lt
section
/--
info: Try these:
[apply] [grind
.] for pattern: [@LE.le `[Nat] `[instLENat] ((@getElem (HashMap #8 `[Nat] #6 #5) _ `[Nat] _ _ (@indices _ #7 _ _ #4) #3 #0) + 1) (@size _ _ _ _ #4)]
[apply] [grind →] for pattern: [LawfulBEq #8 #6, LawfulHashable _ _ #5, @Membership.mem _ (IndexMap _ #7 _ _) _ #4 #3]
-/
#check_msgs in
-- ANCHOR: getElem_indices_lt_attr
attribute [local grind] getElem_indices_lt
-- ANCHOR_END: getElem_indices_lt_attr
end
-- ANCHOR: getElem_indices_lt_pattern
grind_pattern getElem_indices_lt => m.indices[a]
-- ANCHOR_END: getElem_indices_lt_pattern
-- ANCHOR: local_grind_size
attribute [local grind] size
-- ANCHOR_END: local_grind_size
-- ANCHOR: GetElem?
instance : GetElem? (IndexMap α β) α β (fun m a => a ∈ m) where
getElem m a h :=
m.values[m.indices[a]]
getElem? m a :=
m.indices[a]?.bind (fun i => (m.values[i]?))
getElem! m a :=
m.indices[a]?.bind (fun i => (m.values[i]?)) |>.getD default
-- ANCHOR_END: GetElem?
-- ANCHOR: getElem_local
@[local grind =] private theorem getElem_def
(m : IndexMap α β) (a : α) (h : a ∈ m) :
m[a] = m.values[m.indices[a]'h] :=
rfl
@[local grind =] private theorem getElem?_def
(m : IndexMap α β) (a : α) :
m[a]? = m.indices[a]?.bind (fun i => (m.values[i]?)) :=
rfl
@[local grind =] private theorem getElem!_def
[Inhabited β] (m : IndexMap α β) (a : α) :
m[a]! = (m.indices[a]?.bind (m.values[·]?)).getD default :=
rfl
-- ANCHOR_END: getElem_local
-- ANCHOR: LawfulGetElem
instance : LawfulGetElem (IndexMap α β) α β (fun m a => a ∈ m) where
getElem?_def := by grind
getElem!_def := by grind
-- ANCHOR_END: LawfulGetElem
-- ANCHOR: insert
@[inline] def insert (m : IndexMap α β) (a : α) (b : β) : IndexMap α β :=
match h : m.indices[a]? with
| some i =>
{ indices := m.indices
keys := m.keys.set i a
values := m.values.set i b }
| none =>
{ indices := m.indices.insert a m.size
keys := m.keys.push a
values := m.values.push b }
-- ANCHOR_END: insert
discarding
/--
error: could not synthesize default value for field 'WF' of 'IndexMap' using tactics
---
error: `grind` failed
case grind.1.1.2.2.1.1.1
α : Type u
β : Type v
inst : BEq α
inst_1 : Hashable α
m_1 : IndexMap α β
a_1 : α
b : β
i_1 : Nat
inst_2 : LawfulBEq α
inst_3 : LawfulHashable α
m : IndexMap α β
a : α
i : Nat
h : m.indices[a]? = some i
w : ¬i = m.size - 1
lastKey : α := m.keys.back ⋯
lastValue : β := m.values.back ⋯
i_2 : Nat
a_2 : α
h_1 :
((m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = some a_2) =
¬((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? = some i_2
h_2 : -1 * ↑(m.keys.set i (m.keys.back ⋯) ⋯).size + 1 ≤ 0
left : (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = some a_2
right : ¬((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? = some i_2
h_4 : ¬i = i_2
left_1 : ¬m.keys[i_2]? = some a
right_1 : ¬m.indices[a]? = some i_2
h_6 : (m.keys.back ⋯ == a_2) = true
h_7 : i + 1 ≤ m.keys.pop.size
left_2 : a_2 ∈ m.indices.erase a
left_3 : (a == a_2) = false
right_3 : a_2 ∈ m.indices
⊢ False
[grind] Goal diagnostics
[facts] Asserted facts
[prop] LawfulBEq α
[prop] LawfulHashable α
[prop] m.indices[a]? = some i
[prop] ¬i = m.size - 1
[prop] ↑(m.size - 1) = if -1 * ↑m.size + 1 ≤ 0 then ↑m.size + -1 else 0
[prop] ((m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = some a_2) =
¬((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? = some i_2
[prop] ¬a_2 ∈ (m.indices.erase a).insert (m.keys.back ⋯) i →
((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? = none
[prop] (m.keys.pop.set i (m.keys.back ⋯) ⋯).size ≤ i_2 → (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = none
[prop] ¬a ∈ m.indices → m.indices[a]? = none
[prop] ∀ (h_9 : a_2 ∈ (m.indices.erase a).insert (m.keys.back ⋯) i),
((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? =
some ((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]
[prop] ∀ (h_9 : i_2 + 1 ≤ (m.keys.pop.set i (m.keys.back ⋯) ⋯).size),
(m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = some (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]
[prop] ∀ (h : a ∈ m.indices), m.indices[a]? = some m.indices[a]
[prop] (m.keys[i]? = some a) = (m.indices[a]? = some i)
[prop] (m.keys[i_2]? = some a) = (m.indices[a]? = some i_2)
[prop] m.size = m.values.size
[prop] (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = if i = i_2 then some (m.keys.back ⋯) else m.keys.pop[i_2]?
[prop] m.keys.pop.set i (m.keys.back ⋯) ⋯ = (m.keys.set i (m.keys.back ⋯) ⋯).pop
[prop] m.keys.back ⋯ = m.keys[m.keys.size - 1]
[prop] ↑(m.keys.size - 1) = if -1 * ↑m.keys.size + 1 ≤ 0 then ↑m.keys.size + -1 else 0
[prop] m.keys.size = m.size
[prop] m.keys.pop.size = m.keys.size - 1
[prop] ((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? =
if (m.keys.back ⋯ == a_2) = true then some i else (m.indices.erase a)[a_2]?
[prop] m.keys.size ≤ i_2 → m.keys[i_2]? = none
[prop] m.keys.size ≤ i → m.keys[i]? = none
[prop] ∀ (h : i_2 + 1 ≤ m.keys.size), m.keys[i_2]? = some m.keys[i_2]
[prop] ∀ (h : i + 1 ≤ m.keys.size), m.keys[i]? = some m.keys[i]
[prop] (m.indices.contains a = true) = (a ∈ m.indices)
[prop] (((m.indices.erase a).insert (m.keys.back ⋯) i).contains a_2 = true) =
(a_2 ∈ (m.indices.erase a).insert (m.keys.back ⋯) i)
[prop] (m.keys[m.indices[a]]? = some a) = (m.indices[a]? = some m.indices[a])
[prop] (m.keys.set i (m.keys.back ⋯) ⋯).pop[i_2]? =
if i_2 + 1 ≤ (m.keys.set i (m.keys.back ⋯) ⋯).size - 1 then (m.keys.set i (m.keys.back ⋯) ⋯)[i_2]? else none
[prop] ↑((m.keys.set i (m.keys.back ⋯) ⋯).size - 1) =
if -1 * ↑(m.keys.set i (m.keys.back ⋯) ⋯).size + 1 ≤ 0 then ↑(m.keys.set i (m.keys.back ⋯) ⋯).size + -1 else 0
[prop] ∀ (h_9 : i + 1 ≤ m.keys.pop.size), m.keys.pop.set i (m.keys.back ⋯) ⋯ = (m.keys.set i (m.keys.back ⋯) ⋯).pop
[prop] (m.keys[i_2]? = some a_2) = (m.indices[a_2]? = some i_2)
[prop] (m.keys[i]? = some a_2) = (m.indices[a_2]? = some i)
[prop] (m.keys.pop.set i (m.keys.back ⋯) ⋯).size ≤ i_2 → (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = none
[prop] m.keys.size ≤ i_2 → m.keys[i_2]? = none
[prop] m.keys.size ≤ i → m.keys[i]? = none
[prop] (m.keys.set i (m.keys.back ⋯) ⋯).pop.size = (m.keys.set i (m.keys.back ⋯) ⋯).size - 1
[prop] (m.keys.pop.set i (m.keys.back ⋯) ⋯).size = m.keys.pop.size
[prop] (a_2 ∈ (m.indices.erase a).insert (m.keys.back ⋯) i) = (m.keys.back ⋯ = a_2 ∨ a_2 ∈ m.indices.erase a)
[prop] ∀ (h : a ∈ m), m.indices[a] + 1 ≤ m.size
[prop] ¬a_2 ∈ m.indices → m.indices[a_2]? = none
[prop] ∀ (h : a_2 ∈ m.indices), m.indices[a_2]? = some m.indices[a_2]
[prop] ((m.indices.erase a).contains a_2 = true) = (a_2 ∈ m.indices.erase a)
[prop] (m.keys[m.indices[a]]? = some a_2) = (m.indices[a_2]? = some m.indices[a])
[prop] (m.keys.set i (m.keys.back ⋯) ⋯).size = m.keys.size
[prop] (a_2 ∈ m.indices.erase a) = ((a == a_2) = false ∧ a_2 ∈ m.indices)
[prop] (((m.indices.erase a).insert (m.keys.back ⋯) i).contains a_2 = true) =
(m.keys.back ⋯ = a_2 ∨ (m.indices.erase a).contains a_2 = true)
[prop] (a ∈ m) = (a ∈ m.indices)
[prop] (m.indices.contains a_2 = true) = (a_2 ∈ m.indices)
[prop] (m.keys[i]? = some m.keys[i]) = (m.indices[m.keys[i]]? = some i)
[prop] (m.keys[i_2]? = some m.keys[i]) = (m.indices[m.keys[i]]? = some i_2)
[prop] ((m.indices.erase a).contains a_2 = true) = ((!a == a_2) = true ∧ m.indices.contains a_2 = true)
[prop] -1 * ↑(m.keys.set i (m.keys.back ⋯) ⋯).size + 1 ≤ 0
[prop] (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = some a_2
[prop] ¬((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? = some i_2
[prop] (m.keys.set i (m.keys.back ⋯) ⋯).size ≤ i_2 → (m.keys.set i (m.keys.back ⋯) ⋯)[i_2]? = none
[prop] ∀ (h_9 : i_2 + 1 ≤ (m.keys.set i (m.keys.back ⋯) ⋯).size),
(m.keys.set i (m.keys.back ⋯) ⋯)[i_2]? = some (m.keys.set i (m.keys.back ⋯) ⋯)[i_2]
[prop] (m.keys.set i (m.keys.back ⋯) ⋯)[i_2]? = if i = i_2 then some (m.keys.back ⋯) else m.keys[i_2]?
[prop] (m.keys[i]? = some (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]) =
(m.indices[(m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]]? = some i)
[prop] (m.keys[i]? = some m.keys[i_2]) = (m.indices[m.keys[i_2]]? = some i)
[prop] (m.keys[i_2]? = some m.keys[i_2]) = (m.indices[m.keys[i_2]]? = some i_2)
[prop] (m.keys[i_2]? = some (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]) =
(m.indices[(m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]]? = some i_2)
[prop] (m.keys.set i (m.keys.back ⋯) ⋯).size ≤ i_2 → (m.keys.set i (m.keys.back ⋯) ⋯)[i_2]? = none
[prop] (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2] = if i = i_2 then m.keys.back ⋯ else m.keys.pop[i_2]
[prop] (m.keys.set i (m.keys.back ⋯) ⋯).pop[i_2] = (m.keys.set i (m.keys.back ⋯) ⋯)[i_2]
[prop] ¬m.keys[i_2] ∈ m.indices → m.indices[m.keys[i_2]]? = none
[prop] ∀ (h_9 : m.keys[i_2] ∈ m.indices), m.indices[m.keys[i_2]]? = some m.indices[m.keys[i_2]]
[prop] (m.keys[m.indices[a]]? = some m.keys[i_2]) = (m.indices[m.keys[i_2]]? = some m.indices[a])
[prop] (m.keys.set i (m.keys.back ⋯) ⋯)[i_2] = if i = i_2 then m.keys.back ⋯ else m.keys[i_2]
[prop] (m.indices.contains m.keys[i_2] = true) = (m.keys[i_2] ∈ m.indices)
[prop] (m.keys[m.indices[m.keys[i_2]]]? = some m.keys[i_2]) =
(m.indices[m.keys[i_2]]? = some m.indices[m.keys[i_2]])
[prop] (m.keys[m.indices[m.keys[i_2]]]? = some a_2) = (m.indices[a_2]? = some m.indices[m.keys[i_2]])
[prop] (m.keys[m.indices[m.keys[i_2]]]? = some a) = (m.indices[a]? = some m.indices[m.keys[i_2]])
[prop] ∀ (h_9 : m.keys[i_2] ∈ m), m.indices[m.keys[i_2]] + 1 ≤ m.size
[prop] (m.keys[i_2] ∈ m) = (m.keys[i_2] ∈ m.indices)
[prop] ¬i = i_2
[prop] m.keys.pop.size ≤ i_2 → m.keys.pop[i_2]? = none
[prop] ∀ (h : i_2 + 1 ≤ m.keys.pop.size), m.keys.pop[i_2]? = some m.keys.pop[i_2]
[prop] m.keys.pop[i_2]? = if i_2 + 1 ≤ m.keys.size - 1 then m.keys[i_2]? else none
[prop] m.keys.pop.size ≤ i_2 → m.keys.pop[i_2]? = none
[prop] m.keys.pop[i_2] = m.keys[i_2]
[prop] ¬m.keys[i_2]? = some a
[prop] ¬m.indices[a]? = some i_2
[prop] (m.keys.back ⋯ == a_2) = true
[prop] ((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2] =
if h₂ : (m.keys.back ⋯ == a_2) = true then i else (m.indices.erase a)[a_2]
[prop] i + 1 ≤ m.keys.pop.size
[prop] a_2 ∈ m.indices.erase a
[prop] (a == a_2) = false
[prop] a_2 ∈ m.indices
[eqc] True propositions
[prop] ¬((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? = some i_2
[prop] LawfulBEq α
[prop] ((m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = some a_2) =
¬((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? = some i_2
[prop] (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = some a_2
[prop] LawfulHashable α
[prop] -1 * ↑(m.keys.set i (m.keys.back ⋯) ⋯).size + 1 ≤ 0
[prop] -1 * ↑m.keys.size + 1 ≤ 0
[prop] -1 * ↑m.size + 1 ≤ 0
[prop] i < m.keys.pop.size
[prop] 0 < m.keys.size
[prop] (m.keys[i]? = some a) = (m.indices[a]? = some i)
[prop] (m.keys[i_2]? = some a) = (m.indices[a]? = some i_2)
[prop] (m.keys.back ⋯ == a_2) = true
[prop] m.indices[a]? = some i
[prop] m.indices[a]? = some m.indices[a]
[prop] ((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? =
some ((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]
[prop] (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = some (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]
[prop] m.keys[i]? = some a
[prop] i_2 + 1 ≤ (m.keys.pop.set i (m.keys.back ⋯) ⋯).size
[prop] i < m.keys.size
[prop] i_2 < (m.keys.pop.set i (m.keys.back ⋯) ⋯).size
[prop] m.keys.size - 1 < m.keys.size
[prop] a ∈ m.indices
[prop] a_2 ∈ (m.indices.erase a).insert (m.keys.back ⋯) i
[prop] ¬a ∈ m.indices → m.indices[a]? = none
[prop] ¬a_2 ∈ (m.indices.erase a).insert (m.keys.back ⋯) i →
((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? = none
[prop] (m.keys.pop.set i (m.keys.back ⋯) ⋯).size ≤ i_2 → (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = none
[prop] ∀ (h_9 : i_2 + 1 ≤ (m.keys.pop.set i (m.keys.back ⋯) ⋯).size),
(m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = some (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]
[prop] ∀ (h : a ∈ m.indices), m.indices[a]? = some m.indices[a]
[prop] ∀ (h_9 : a_2 ∈ (m.indices.erase a).insert (m.keys.back ⋯) i),
((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? =
some ((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]
[prop] m.keys.back ⋯ = a_2 ∨ a_2 ∈ m.indices.erase a
[prop] (m.indices.contains a = true) = (a ∈ m.indices)
[prop] (((m.indices.erase a).insert (m.keys.back ⋯) i).contains a_2 = true) =
(a_2 ∈ (m.indices.erase a).insert (m.keys.back ⋯) i)
[prop] (m.keys[i]? = some a_2) = (m.indices[a_2]? = some i)
[prop] (m.keys[i]? = some (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]) =
(m.indices[(m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]]? = some i)
[prop] (m.keys[i_2]? = some a_2) = (m.indices[a_2]? = some i_2)
[prop] (m.keys[i_2]? = some (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]) =
(m.indices[(m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]]? = some i_2)
[prop] (m.keys[m.indices[a]]? = some a) = (m.indices[a]? = some m.indices[a])
[prop] (a_2 ∈ (m.indices.erase a).insert (m.keys.back ⋯) i) = (m.keys.back ⋯ = a_2 ∨ a_2 ∈ m.indices.erase a)
[prop] m.indices.contains a = true
[prop] ((m.indices.erase a).insert (m.keys.back ⋯) i).contains a_2 = true
[prop] m.indices[a_2]? = some i_2
[prop] m.indices[(m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]]? = some i_2
[prop] m.keys.back ⋯ = a_2
[prop] m.keys.pop.set i (m.keys.back ⋯) ⋯ = (m.keys.set i (m.keys.back ⋯) ⋯).pop
[prop] m.keys.pop[i_2]? = some m.keys.pop[i_2]
[prop] m.keys[i]? = some m.keys[i]
[prop] m.keys[i_2]? = some a_2
[prop] m.keys[i_2]? = some (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]
[prop] m.keys[i_2]? = some m.keys[i_2]
[prop] m.keys[m.indices[a]]? = some a
[prop] i + 1 ≤ m.keys.pop.size
[prop] i + 1 ≤ m.keys.size
[prop] i_2 + 1 ≤ m.keys.pop.size
[prop] i_2 + 1 ≤ m.keys.size
[prop] i_2 + 1 ≤ (m.keys.set i (m.keys.back ⋯) ⋯).size - 1
[prop] i_2 + 1 ≤ m.keys.size - 1
[prop] m.indices[a] + 1 ≤ m.size
[prop] i_2 < (m.keys.set i (m.keys.back ⋯) ⋯).pop.size
[prop] i_2 < m.keys.pop.size
[prop] i_2 < m.keys.size
[prop] a_2 ∈ m.indices.erase a
[prop] a ∈ m
[prop] m.keys.pop.size ≤ i_2 → m.keys.pop[i_2]? = none
[prop] m.keys.size ≤ i → m.keys[i]? = none
[prop] m.keys.size ≤ i_2 → m.keys[i_2]? = none
[prop] ∀ (h_9 : i + 1 ≤ m.keys.pop.size), m.keys.pop.set i (m.keys.back ⋯) ⋯ = (m.keys.set i (m.keys.back ⋯) ⋯).pop
[prop] ∀ (h : i + 1 ≤ m.keys.size), m.keys[i]? = some m.keys[i]
[prop] ∀ (h : i_2 + 1 ≤ m.keys.pop.size), m.keys.pop[i_2]? = some m.keys.pop[i_2]
[prop] ∀ (h : i_2 + 1 ≤ m.keys.size), m.keys[i_2]? = some m.keys[i_2]
[prop] ∀ (h : a ∈ m), m.indices[a] + 1 ≤ m.size
[prop] (a == a_2) = false ∧ a_2 ∈ m.indices
[prop] m.keys.back ⋯ = a_2 ∨ (m.indices.erase a).contains a_2 = true
[prop] ((m.indices.erase a).contains a_2 = true) = (a_2 ∈ m.indices.erase a)
[prop] (((m.indices.erase a).insert (m.keys.back ⋯) i).contains a_2 = true) =
(m.keys.back ⋯ = a_2 ∨ (m.indices.erase a).contains a_2 = true)
[prop] (m.keys[i]? = some m.keys[i]) = (m.indices[m.keys[i]]? = some i)
[prop] (m.keys[i]? = some m.keys[i_2]) = (m.indices[m.keys[i_2]]? = some i)
[prop] (m.keys[i_2]? = some m.keys[i]) = (m.indices[m.keys[i]]? = some i_2)
[prop] (m.keys[i_2]? = some m.keys[i_2]) = (m.indices[m.keys[i_2]]? = some i_2)
[prop] (m.keys[m.indices[a]]? = some a_2) = (m.indices[a_2]? = some m.indices[a])
[prop] (a_2 ∈ m.indices.erase a) = ((a == a_2) = false ∧ a_2 ∈ m.indices)
[prop] (a ∈ m) = (a ∈ m.indices)
[prop] (a == a_2) = false
[prop] (m.indices.erase a).contains a_2 = true
[prop] m.indices[a_2]? = some m.indices[a_2]
[prop] m.indices[m.keys[i]]? = some i
[prop] m.indices[m.keys[i_2]]? = some i_2
[prop] (m.keys.set i (m.keys.back ⋯) ⋯)[i_2]? = some (m.keys.set i (m.keys.back ⋯) ⋯)[i_2]
[prop] i_2 + 1 ≤ (m.keys.set i (m.keys.back ⋯) ⋯).size
[prop] i_2 < (m.keys.set i (m.keys.back ⋯) ⋯).size
[prop] a_2 ∈ m.indices
[prop] ¬a_2 ∈ m.indices → m.indices[a_2]? = none
[prop] (m.keys.set i (m.keys.back ⋯) ⋯).size ≤ i_2 → (m.keys.set i (m.keys.back ⋯) ⋯)[i_2]? = none
[prop] ∀ (h_9 : i_2 + 1 ≤ (m.keys.set i (m.keys.back ⋯) ⋯).size),
(m.keys.set i (m.keys.back ⋯) ⋯)[i_2]? = some (m.keys.set i (m.keys.back ⋯) ⋯)[i_2]
[prop] ∀ (h : a_2 ∈ m.indices), m.indices[a_2]? = some m.indices[a_2]
[prop] (!a == a_2) = true ∧ m.indices.contains a_2 = true
[prop] (m.indices.contains a_2 = true) = (a_2 ∈ m.indices)
[prop] ((m.indices.erase a).contains a_2 = true) = ((!a == a_2) = true ∧ m.indices.contains a_2 = true)
[prop] (m.keys[m.indices[a]]? = some m.keys[i_2]) = (m.indices[m.keys[i_2]]? = some m.indices[a])
[prop] (!a == a_2) = true
[prop] m.indices.contains a_2 = true
[prop] m.indices[m.keys[i_2]]? = some m.indices[m.keys[i_2]]
[prop] m.keys[i_2] ∈ m.indices
[prop] ¬m.keys[i_2] ∈ m.indices → m.indices[m.keys[i_2]]? = none
[prop] ∀ (h_9 : m.keys[i_2] ∈ m.indices), m.indices[m.keys[i_2]]? = some m.indices[m.keys[i_2]]
[prop] (m.indices.contains m.keys[i_2] = true) = (m.keys[i_2] ∈ m.indices)
[prop] (m.keys[m.indices[m.keys[i_2]]]? = some a) = (m.indices[a]? = some m.indices[m.keys[i_2]])
[prop] (m.keys[m.indices[m.keys[i_2]]]? = some a_2) = (m.indices[a_2]? = some m.indices[m.keys[i_2]])
[prop] (m.keys[m.indices[m.keys[i_2]]]? = some m.keys[i_2]) =
(m.indices[m.keys[i_2]]? = some m.indices[m.keys[i_2]])
[prop] m.indices.contains m.keys[i_2] = true
[prop] m.indices[a_2]? = some m.indices[m.keys[i_2]]
[prop] m.keys[m.indices[m.keys[i_2]]]? = some a_2
[prop] m.keys[m.indices[m.keys[i_2]]]? = some m.keys[i_2]
[prop] m.indices[m.keys[i_2]] + 1 ≤ m.size
[prop] m.keys[i_2] ∈ m
[prop] ∀ (h_9 : m.keys[i_2] ∈ m), m.indices[m.keys[i_2]] + 1 ≤ m.size
[prop] (m.keys[i_2] ∈ m) = (m.keys[i_2] ∈ m.indices)
[eqc] False propositions
[prop] i = m.size - 1
[prop] ((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? = some i_2
[prop] a = a_2
[prop] ¬a ∈ m.indices
[prop] ¬a_2 ∈ (m.indices.erase a).insert (m.keys.back ⋯) i
[prop] i = i_2
[prop] m.indices[a]? = none
[prop] m.indices[a]? = some i_2
[prop] ((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? = none
[prop] (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = none
[prop] m.keys[i_2]? = some a
[prop] (m.keys.pop.set i (m.keys.back ⋯) ⋯).size ≤ i_2
[prop] m.indices[a_2]? = some i
[prop] m.indices[(m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]]? = some i
[prop] m.keys.pop[i_2]? = none
[prop] m.keys[i]? = none
[prop] m.keys[i]? = some a_2
[prop] m.keys[i]? = some (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]
[prop] m.keys[i_2]? = none
[prop] m.keys.pop.size ≤ i_2
[prop] m.keys.size ≤ i
[prop] m.keys.size ≤ i_2
[prop] ¬a_2 ∈ m.indices
[prop] m.indices[a_2]? = none
[prop] m.indices[a_2]? = some m.indices[a]
[prop] m.indices[m.keys[i]]? = some i_2
[prop] m.indices[m.keys[i_2]]? = some i
[prop] (m.keys.set i (m.keys.back ⋯) ⋯)[i_2]? = none
[prop] m.keys[i]? = some m.keys[i_2]
[prop] m.keys[i_2]? = some m.keys[i]
[prop] m.keys[m.indices[a]]? = some a_2
[prop] (m.keys.set i (m.keys.back ⋯) ⋯).size ≤ i_2
[prop] ¬m.keys[i_2] ∈ m.indices
[prop] m.indices[m.keys[i_2]]? = none
[prop] m.indices[m.keys[i_2]]? = some m.indices[a]
[prop] m.keys[m.indices[a]]? = some m.keys[i_2]
[prop] m.indices[a]? = some m.indices[m.keys[i_2]]
[prop] m.keys[m.indices[m.keys[i_2]]]? = some a
[eqc] Equivalence classes
[eqc] {a, m.keys[i]}
[eqc] {i, m.indices[a], ((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2], m.indices[a]}
[eqc] {if h₂ : (m.keys.back ⋯ == a_2) = true then i else (m.indices.erase a)[a_2]}
[eqc] {i_2, m.indices[a_2], m.indices[m.keys[i_2]], m.indices[m.keys[i_2]]}
[eqc] {a_2,
m.keys.back ⋯,
(m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2],
m.keys[m.keys.size - 1],
(m.keys.set i (m.keys.back ⋯) ⋯).pop[i_2],
m.keys.pop[i_2],
m.keys[i_2],
(m.keys.set i (m.keys.back ⋯) ⋯)[i_2]}
[eqc] {if i = i_2 then m.keys.back ⋯ else m.keys.pop[i_2], if i = i_2 then m.keys.back ⋯ else m.keys[i_2]}
[eqc] {false, a == a_2}
[eqc] {true,
m.keys.back ⋯ == a_2,
m.indices.contains a,
((m.indices.erase a).insert (m.keys.back ⋯) i).contains a_2,
(m.indices.erase a).contains a_2,
!a == a_2,
m.indices.contains a_2,
m.indices.contains m.keys[i_2]}
[eqc] {m.keys.pop.size,
m.size - 1,
(m.keys.pop.set i (m.keys.back ⋯) ⋯).size,
m.keys.size - 1,
(m.keys.set i (m.keys.back ⋯) ⋯).pop.size,
(m.keys.set i (m.keys.back ⋯) ⋯).size - 1}
[eqc] {m.keys.size, m.size, m.values.size, (m.keys.set i (m.keys.back ⋯) ⋯).size}
[eqc] {some i,
m.indices[a]?,
((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]?,
some m.indices[a],
some ((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2],
m.indices[m.keys[i]]?}
[eqc] {if (m.keys.back ⋯ == a_2) = true then some i else (m.indices.erase a)[a_2]?}
[eqc] {some i_2,
m.indices[a_2]?,
m.indices[(m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]]?,
some m.indices[a_2],
m.indices[m.keys[i_2]]?,
some m.indices[m.keys[i_2]]}
[eqc] {some a_2,
(m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]?,
some (m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2],
m.keys.pop[i_2]?,
m.keys[i_2]?,
some m.keys.pop[i_2],
some m.keys[i_2],
(m.keys.set i (m.keys.back ⋯) ⋯).pop[i_2]?,
(m.keys.set i (m.keys.back ⋯) ⋯)[i_2]?,
some (m.keys.set i (m.keys.back ⋯) ⋯)[i_2],
m.keys[m.indices[m.keys[i_2]]]?}
[eqc] {if i = i_2 then some (m.keys.back ⋯) else m.keys.pop[i_2]?,
if i_2 + 1 ≤ (m.keys.set i (m.keys.back ⋯) ⋯).size - 1 then (m.keys.set i (m.keys.back ⋯) ⋯)[i_2]? else none,
if i_2 + 1 ≤ m.keys.size - 1 then m.keys[i_2]? else none,
if i = i_2 then some (m.keys.back ⋯) else m.keys[i_2]?}
[eqc] {Membership.mem, fun m a => a ∈ m}
[eqc] {m.keys.pop.set i (m.keys.back ⋯) ⋯, (m.keys.set i (m.keys.back ⋯) ⋯).pop}
[eqc] {some a, m.keys[i]?, some m.keys[i], m.keys[m.indices[a]]?}
[eqc] {i_2 + 1, m.indices[m.keys[i_2]] + 1}
[eqc] {i + 1, m.indices[a] + 1}
[eqc] others
[eqc] {↑(m.size - 1), ↑(m.keys.size - 1), ↑((m.keys.set i (m.keys.back ⋯) ⋯).size - 1)}
[eqc] {↑i, ↑m.indices[a]}
[eqc] {↑i_2, ↑m.indices[m.keys[i_2]]}
[eqc] {↑m.keys.pop.size,
↑(m.keys.pop.set i (m.keys.back ⋯) ⋯).size,
↑(m.size - 1),
if -1 * ↑(m.keys.set i (m.keys.back ⋯) ⋯).size + 1 ≤ 0 then ↑(m.keys.set i (m.keys.back ⋯) ⋯).size + -1
else 0,
if -1 * ↑m.keys.size + 1 ≤ 0 then ↑m.keys.size + -1 else 0,
if -1 * ↑m.size + 1 ≤ 0 then ↑m.size + -1 else 0,
↑(m.keys.set i (m.keys.back ⋯) ⋯).size + -1,
↑m.keys.size + -1,
↑m.size + -1,
↑(m.keys.size - 1),
↑(m.keys.set i (m.keys.back ⋯) ⋯).pop.size,
↑((m.keys.set i (m.keys.back ⋯) ⋯).size - 1)}
[eqc] {↑(m.keys.set i (m.keys.back ⋯) ⋯).size, ↑m.keys.size, ↑m.size}
[eqc] {-1 * ↑(m.keys.set i (m.keys.back ⋯) ⋯).size + 1, -1 * ↑m.keys.size + 1, -1 * ↑m.size + 1}
[eqc] {-1 * ↑(m.keys.set i (m.keys.back ⋯) ⋯).size, -1 * ↑m.keys.size, -1 * ↑m.size}
[cases] Case analyses
[cases] [1/2]: if -1 * ↑(m.keys.set i (m.keys.back ⋯) ⋯).size + 1 ≤ 0 then
↑(m.keys.set i (m.keys.back ⋯) ⋯).size + -1
else 0
[cases] source: E-matching Array.getElem?_pop
[cases] [1/2]: ((m.keys.pop.set i (m.keys.back ⋯) ⋯)[i_2]? = some a_2) =
¬((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2]? = some i_2
[cases] source: Initial goal
[cases] [2/2]: if i = i_2 then some (m.keys.back ⋯) else m.keys.pop[i_2]?
[cases] source: E-matching Array.getElem?_set
[cases] [2/2]: (m.keys[i_2]? = some a) = (m.indices[a]? = some i_2)
[cases] source: E-matching WF
[cases] [1/2]: if (m.keys.back ⋯ == a_2) = true then some i else (m.indices.erase a)[a_2]?
[cases] source: E-matching HashMap.getElem?_insert
[cases] [1/2]: i + 1 ≤ m.keys.pop.size
[cases] source: E-matching Array.set_pop
[cases] [1/2]: (a_2 ∈ m.indices.erase a) = ((a == a_2) = false ∧ a_2 ∈ m.indices)
[cases] source: E-matching HashMap.mem_erase
[ematch] E-matching patterns
[thm] getElem?_neg: [@getElem? #8 #7 #6 #5 #4 #2 #1]
[thm] getElem?_pos: [@getElem? #8 #7 #6 #5 #4 #2 #1]
[thm] HashMap.contains_iff_mem: [@Membership.mem #5 (HashMap _ #4 #3 #2) _ #1 #0]
[thm] WF: [@getElem? (HashMap #6 `[Nat] #4 #3) _ `[Nat] _ _ (@indices _ #5 _ _ #2) #0, @some `[Nat] #1]
[thm] size.eq_1: [@size #4 #3 #2 #1 #0]
[thm] Option.some_le_some: [@LE.le (Option #3) _ (@some _ #1) (@some _ #0)]
[thm] Option.mem_some: [@Membership.mem #2 (Option _) _ (@some _ #0) #1]
[thm] Array.getElem?_set: [@getElem? (Array #5) `[Nat] _ _ _ (@Array.set _ #4 #3 #1 #2) #0]
[thm] Array.mem_or_eq_of_mem_set: [@Membership.mem #6 (Array _) _ (@Array.set _ #5 #4 #2 _) #3]
[thm] Array.set_pop: [@Array.set #4 (@Array.pop _ #3) #1 #2 #0]
[thm] Array.getElem?_pop: [@getElem? (Array #2) `[Nat] _ _ _ (@Array.pop _ #1) #0]
[thm] Array.set_pop: [@Array.pop #4 (@Array.set _ #3 #1 #2 _)]
[thm] WF: [@getElem? (Array #6) `[Nat] _ _ _ (@keys _ #5 #4 #3 #2) #1, @some _ #0]
[thm] Array.back_eq_getElem: [@Array.back #2 #1 #0]
[thm] Option.some_lt_some: [@LT.lt (Option #3) _ (@some _ #1) (@some _ #0)]
[thm] Array.size_pos_of_mem: [@Membership.mem #3 (Array _) _ #1 #2, @Array.size _ #1]
[thm] size_keys: [@Array.size #4 (@keys _ #3 #2 #1 #0)]
[thm] Array.getElem?_eq_none: [@Array.size #3 #1, @getElem? (Array _) `[Nat] _ _ _ #1 #2]
[thm] Array.size_pop: [@Array.size #1 (@Array.pop _ #0)]
[thm] Array.size_set: [@Array.size #4 (@Array.set _ #3 #2 #1 #0)]
[thm] HashMap.mem_insert: [@Membership.mem #9 (HashMap _ #8 #7 #6) _ (@HashMap.insert _ _ #7 #6 #5 #2 #0) #1]
[thm] HashMap.getElem?_insert: [@getElem? (HashMap #9 #8 #7 #6) _ _ _ _ (@HashMap.insert _ _ #7 #6 #5 #2 #0) #1]
[thm] HashMap.mem_erase: [@Membership.mem #8 (HashMap _ #7 #6 #5) _ (@HashMap.erase _ _ #6 #5 #4 #1) #0]
[thm] HashMap.getElem?_erase: [@getElem? (HashMap #8 #7 #6 #5) _ _ _ _ (@HashMap.erase _ _ #6 #5 #4 #1) #0]
[thm] Option.not_lt_none: [@LT.lt (Option #2) _ #0 (@none _)]
[thm] Option.none_lt_some: [@LT.lt (Option #2) _ (@none _) (@some _ #0)]
[thm] Option.not_mem_none: [@Membership.mem #1 (Option _) _ (@none _) #0]
[thm] Option.not_some_le_none: [@LE.le (Option #2) _ (@some _ #0) (@none _)]
[thm] Option.none_le: [@LE.le (Option #2) _ (@none _) #0]
[thm] Array.getElem_mem: [@Membership.mem #3 (Array _) _ #2 (@getElem (Array _) `[Nat] _ _ _ #2 #1 _)]
[thm] getElem_indices_lt: [@getElem (HashMap #8 `[Nat] #6 #5) _ `[Nat] _ _ (@indices _ #7 _ _ #4) #3 _]
[thm] HashMap.getElem_erase: [@getElem (HashMap #9 #8 #7 #6) _ _ _ _ (@HashMap.erase _ _ #7 #6 #5 #2) #1 #0]
[thm] HashMap.getElem_insert: [@getElem (HashMap #10 #9 #8 #7) _ _ _ _ (@HashMap.insert _ _ #8 #7 #6 #3 #1) #2 #0]
[thm] Array.getElem_set: [@getElem (Array #6) `[Nat] _ _ _ (@Array.set _ #5 #4 #2 #3) #1 #0]
[thm] Array.getElem_pop: [@getElem (Array #3) `[Nat] _ _ _ (@Array.pop _ #2) #1 #0]
[thm] Option.some_beq_some: [@BEq.beq (Option #3) _ (@some _ #1) (@some _ #0)]
[thm] Option.some_beq_none: [@BEq.beq (Option #2) _ (@some _ #0) (@none _)]
[thm] Option.none_beq_some: [@BEq.beq (Option #2) _ (@none _) (@some _ #0)]
[thm] Option.none_beq_none: [@BEq.beq (Option #1) _ (@none _) (@none _)]
[thm] HashMap.contains_erase: [@HashMap.contains #8 #7 #6 #5 (@HashMap.erase _ _ #6 #5 #4 #1) #0]
[thm] HashMap.contains_insert: [@HashMap.contains #9 #8 #7 #6 (@HashMap.insert _ _ #7 #6 #5 #2 #0) #1]
[thm] getElem_def: [@getElem (IndexMap #8 #7 #6 #5) _ _ _ _ #2 #1 #0]
[thm] mem_indices: [@Membership.mem #5 (IndexMap _ #4 #3 #2) _ #1 #0]
[thm] getElem?_def: [@getElem? (IndexMap #7 #6 #5 #4) _ _ _ _ #1 #0]
[cutsat] Assignment satisfying linear constraints
[assign] i_1 := 4
[assign] i := 0
[assign] i_2 := 1
[assign] m.keys.pop.size := 2
[assign] m.keys.size := 3
[assign] m.size := 3
[assign] (m.keys.pop.set i (m.keys.back ⋯) ⋯).size := 2
[assign] m.values.size := 3
[assign] m.indices[a] := 0
[assign] ((m.indices.erase a).insert (m.keys.back ⋯) i)[a_2] := 0
[assign] (m.keys.set i (m.keys.back ⋯) ⋯).pop.size := 2
[assign] (m.keys.set i (m.keys.back ⋯) ⋯).size := 3
[assign] m.indices[a] := 0
[assign] m.indices[a_2] := 1
[assign] m.indices[m.keys[i_2]] := 1
[assign] m.indices[m.keys[i_2]] := 1
[ring] Rings
[ring] Ring `Int`
[basis] Basis
[_] ↑m.size + -1 * ↑m.keys.size = 0
[_] ↑m.keys.size + -1 * ↑(m.keys.set i (m.keys.back ⋯) ⋯).size = 0
[_] ↑i + -1 * ↑m.indices[a] = 0
[_] ↑m.keys.pop.size + -1 * ↑(m.keys.size - 1) = 0
[_] ↑(m.keys.pop.set i (m.keys.back ⋯) ⋯).size + -1 * ↑((m.keys.set i (m.keys.back ⋯) ⋯).size - 1) = 0
[_] ↑i_2 + -1 * ↑m.indices[m.keys[i_2]] = 0
[_] ↑(m.keys.size - 1) + -1 * ↑((m.keys.set i (m.keys.back ⋯) ⋯).size - 1) = 0
[_] ↑((m.keys.set i (m.keys.back ⋯) ⋯).size - 1) + -1 * ↑(m.keys.set i (m.keys.back ⋯) ⋯).size + 1 = 0
[_] ↑(m.keys.set i (m.keys.back ⋯) ⋯).size + -1 * ↑(m.keys.set i (m.keys.back ⋯) ⋯).pop.size + -1 = 0
[ring] Ring `Lean.Grind.Ring.OfSemiring.Q Nat`
[basis] Basis
[_] ↑(m.keys.size - 1) + -1 * ↑(m.size - 1) = 0
[_] ↑(m.size - 1) + -1 * ↑((m.keys.set i (m.keys.back ⋯) ⋯).size - 1) = 0
[grind] Diagnostics
[thm] E-Matching instances
[thm] WF ↦ 16
[thm] getElem?_neg ↦ 9
[thm] getElem?_pos ↦ 9
[thm] Array.getElem?_eq_none ↦ 5
[thm] HashMap.contains_iff_mem ↦ 5
[thm] Array.getElem?_pop ↦ 2
[thm] Array.getElem?_set ↦ 2
[thm] Array.getElem_pop ↦ 2
[thm] Array.getElem_set ↦ 2
[thm] Array.set_pop ↦ 2
[thm] Array.size_pop ↦ 2
[thm] Array.size_set ↦ 2
[thm] getElem_indices_lt ↦ 2
[thm] mem_indices ↦ 2
[thm] Array.back_eq_getElem ↦ 1
[thm] size_keys ↦ 1
[thm] size.eq_1 ↦ 1
[thm] HashMap.contains_erase ↦ 1
[thm] HashMap.contains_insert ↦ 1
[thm] HashMap.getElem?_insert ↦ 1
[thm] HashMap.getElem_insert ↦ 1
[thm] HashMap.mem_erase ↦ 1
[thm] HashMap.mem_insert ↦ 1
-/
#check_msgs (maxDiff := 20%) in
-- ANCHOR: eraseSwap_init
@[inline] def eraseSwap (m : IndexMap α β) (a : α) : IndexMap α β :=
match h : m.indices[a]? with
| some i =>
if w : i = m.size - 1 then
{ indices := m.indices.erase a
keys := m.keys.pop
values := m.values.pop }
else
let lastKey := m.keys.back
let lastValue := m.values.back
{ indices := (m.indices.erase a).insert lastKey i
keys := m.keys.pop.set i lastKey
values := m.values.pop.set i lastValue }
| none => m
-- ANCHOR_END: eraseSwap_init
stop discarding
instance : Singleton (α × β) (IndexMap α β) :=
⟨fun ⟨a, b⟩ => (∅ : IndexMap α β).insert a b⟩
instance : Insert (α × β) (IndexMap α β) :=
⟨fun ⟨a, b⟩ s => s.insert a b⟩
instance : LawfulSingleton (α × β) (IndexMap α β) :=
⟨fun _ => rfl⟩
-- ANCHOR: WF'
@[local grind .]
private theorem WF' (i : Nat) (a : α) (h₁ : i < m.keys.size) (h₂ : a ∈ m) :
m.keys[i] = a ↔ m.indices[a] = i := by
have := m.WF i a
grind
-- ANCHOR_END: WF'
-- ANCHOR: WF'ex
example {m : IndexMap α β} {a : α} {h : a ∈ m} :
m.keys[m.indices[a]'h] = a := by grind
-- ANCHOR_END: WF'ex
/--
Erase the key-value pair with the given key,
moving the last pair into its place in the order.
If the key is not present, the map is unchanged.
-/
-- ANCHOR: eraseSwap
@[inline] def eraseSwap (m : IndexMap α β) (a : α) : IndexMap α β :=
match h : m.indices[a]? with
| some i =>
if w : i = m.size - 1 then
{ indices := m.indices.erase a
keys := m.keys.pop
values := m.values.pop }
else
let lastKey := m.keys.back
let lastValue := m.values.back
{ indices := (m.indices.erase a).insert lastKey i
keys := m.keys.pop.set i lastKey
values := m.values.pop.set i lastValue }
| none => m
-- ANCHOR_END: eraseSwap
-- TODO: similarly define `eraseShift`, etc.
-- ANCHOR: Verification
/-! ### Verification theorems (not exhaustive) -/
@[grind =]
theorem mem_insert (m : IndexMap α β) (a a' : α) (b : β) :
a' ∈ m.insert a b ↔ a' = a ∨ a' ∈ m := by
grind +locals
@[grind =]
theorem getElem_insert (m : IndexMap α β) (a a' : α) (b : β) (h : a' ∈ m.insert a b) :
(m.insert a b)[a'] = if h' : a' == a then b else m[a'] := by
grind +locals
theorem findIdx_lt (m : IndexMap α β) (a : α) (h : a ∈ m) :
m.findIdx a h < m.size := by
grind +locals
grind_pattern findIdx_lt => m.findIdx a h
@[grind =]
theorem findIdx_insert_self (m : IndexMap α β) (a : α) (b : β) :
(m.insert a b).findIdx a = if h : a ∈ m then m.findIdx a else m.size := by
grind +locals
@[grind =]
theorem findIdx?_eq (m : IndexMap α β) (a : α) :
m.findIdx? a = if h : a ∈ m then some (m.findIdx a h) else none := by
grind +locals
@[grind =]
theorem getIdx_findIdx (m : IndexMap α β) (a : α) (h : a ∈ m) :
m.getIdx (m.findIdx a) = m[a] := by grind +locals
omit [LawfulBEq α] [LawfulHashable α] in
@[grind =]
theorem getIdx?_eq (m : IndexMap α β) (i : Nat) :
m.getIdx? i = if h : i < m.size then some (m.getIdx i h) else none := by
grind +locals
private theorem getElem_keys_mem {m : IndexMap α β} {i : Nat} (h : i < m.size) :
m.keys[i] ∈ m := by
have : m.indices[m.keys[i]]? = some i := by grind
grind
local grind_pattern getElem_keys_mem => m.keys[i]
theorem getElem?_eraseSwap (m : IndexMap α β) (a a' : α) :
(m.eraseSwap a)[a']? = if a' == a then none else m[a']? := by
grind +locals
@[grind =]
theorem mem_eraseSwap (m : IndexMap α β) (a a' : α) :
a' ∈ m.eraseSwap a ↔ a' ≠ a ∧ a' ∈ m := by
grind +locals
theorem getElem_eraseSwap (m : IndexMap α β) (a a' : α) (h : a' ∈ m.eraseSwap a) :
(m.eraseSwap a)[a'] = m[a'] := by
grind +locals
-- ANCHOR_END: Verification
end IndexMap |
reference-manual/extended-examples/IndexMap.lean | import Std.Data.HashMap
open Std
structure IndexMap
(α : Type u) (β : Type v) [BEq α] [Hashable α] where
indices : HashMap α Nat
keys : Array α
values : Array β
size_keys : keys.size = values.size
WF : ∀ (i : Nat) (a : α),
keys[i]? = some a ↔ indices[a]? = some i
namespace IndexMap
variable {α : Type u} {β : Type v}
[BEq α] [LawfulBEq α] [Hashable α] [LawfulHashable α]
variable {m : IndexMap α β} {a : α} {b : β} {i : Nat}
@[inline] def size (m : IndexMap α β) : Nat :=
m.values.size
def emptyWithCapacity (capacity := 8) : IndexMap α β where
indices := HashMap.emptyWithCapacity capacity
keys := Array.emptyWithCapacity capacity
values := Array.emptyWithCapacity capacity
size_keys := sorry
WF := sorry
instance : EmptyCollection (IndexMap α β) where
emptyCollection := emptyWithCapacity
instance : Inhabited (IndexMap α β) where
default := ∅
@[inline] def contains (m : IndexMap α β)
(a : α) : Bool :=
m.indices.contains a
instance : Membership α (IndexMap α β) where
mem m a := a ∈ m.indices
instance {m : IndexMap α β} {a : α} : Decidable (a ∈ m) :=
inferInstanceAs (Decidable (a ∈ m.indices))
@[inline] def findIdx? (m : IndexMap α β) (a : α) : Option Nat :=
m.indices[a]?
@[inline] def findIdx (m : IndexMap α β) (a : α) (h : a ∈ m) : Nat :=
m.indices[a]
@[inline] def getIdx? (m : IndexMap α β) (i : Nat) : Option β :=
m.values[i]?
@[inline] def getIdx (m : IndexMap α β) (i : Nat)
(h : i < m.size := by get_elem_tactic) : β :=
m.values[i]
-- ANCHOR: GetElem?
instance :
GetElem? (IndexMap α β) α β (fun m a => a ∈ m) where
getElem m a h :=
m.values[m.indices[a]]'(by sorry)
getElem? m a :=
m.indices[a]?.bind (m.values[·]?)
getElem! m a :=
m.indices[a]?.bind (m.values[·]?) |>.getD default
-- ANCHOR_END: GetElem?
instance : LawfulGetElem (IndexMap α β) α β (fun m a => a ∈ m) where
getElem?_def := sorry
getElem!_def := sorry
@[inline] def insert (m : IndexMap α β) (a : α) (b : β) :
IndexMap α β :=
match h : m.indices[a]? with
| some i =>
{ indices := m.indices
keys := m.keys.set i a sorry
values := m.values.set i b sorry
size_keys := sorry
WF := sorry }
| none =>
{ indices := m.indices.insert a m.size
keys := m.keys.push a
values := m.values.push b
size_keys := sorry
WF := sorry }
instance : Singleton (α × β) (IndexMap α β) :=
⟨fun ⟨a, b⟩ => (∅ : IndexMap α β).insert a b⟩
instance : Insert (α × β) (IndexMap α β) :=
⟨fun ⟨a, b⟩ s => s.insert a b⟩
instance : LawfulSingleton (α × β) (IndexMap α β) :=
⟨fun _ => rfl⟩
/--
Erase the key-value pair with the given key,
moving the last pair into its place in the order.
If the key is not present, the map is unchanged.
-/
@[inline] def eraseSwap (m : IndexMap α β) (a : α) :
IndexMap α β :=
match h : m.indices[a]? with
| some i =>
if w : i = m.size - 1 then
{ indices := m.indices.erase a
keys := m.keys.pop
values := m.values.pop
size_keys := sorry
WF := sorry }
else
let lastKey := m.keys.back sorry
let lastValue := m.values.back sorry
{ indices := (m.indices.erase a).insert lastKey i
keys := m.keys.pop.set i lastKey sorry
values := m.values.pop.set i lastValue sorry
size_keys := sorry
WF := sorry }
| none => m
/-! ### Verification theorems -/
theorem getIdx_findIdx (m : IndexMap α β) (a : α)
(h : a ∈ m) :
m.getIdx (m.findIdx a h) sorry = m[a] :=
sorry
theorem mem_insert (m : IndexMap α β) (a a' : α) (b : β) :
a' ∈ m.insert a b ↔ a' = a ∨ a' ∈ m := by
sorry
theorem getElem_insert
(m : IndexMap α β) (a a' : α) (b : β)
(h : a' ∈ m.insert a b) :
(m.insert a b)[a']'h =
if h' : a' == a then b else m[a']'sorry := by
sorry
theorem findIdx_insert_self
(m : IndexMap α β) (a : α) (b : β) :
(m.insert a b).findIdx a sorry =
if h : a ∈ m then m.findIdx a h else m.size := by
sorry
end IndexMap |
reference-manual/extended-examples/IndexMapGrind/CheckMsgs.lean | import Lean.Message
import Lean.DocString
import Lean.Elab.Command
import Lean.Elab.GuardMsgs
import SubVerso.Examples.Messages
import SubVerso.Highlighting
open Lean Elab Command Tactic GuardMsgs
open SubVerso.Examples.Messages
/--
A version of `#guard_msgs` that leaves the messages in the log for extraction.
The passthrough parts of the spec are ignored.
-/
syntax (name := checkMsgsCmd)
(docComment)? "#check_msgs" (" (" &"maxDiff" " := " num "%" ")")? (ppSpace guardMsgsSpec)? " in" ppLine command : command
/-- Gives a string representation of a message without source position information.
Ensures the message ends with a '\n'. -/
private def messageToStringWithoutPos (msg : Message) : BaseIO String := do
let mut str ← msg.data.toString
unless msg.caption == "" do
str := msg.caption ++ ":\n" ++ str
if !("\n".isPrefixOf str) then str := " " ++ str
match msg.severity with
| MessageSeverity.information => str := "info:" ++ str
| MessageSeverity.warning => str := "warning:" ++ str
| MessageSeverity.error => str := "error:" ++ str
if str.isEmpty || str.back != '\n' then
str := str ++ "\n"
return str
def messagesEq (maxDiff? : Option Nat) (whitespace : WhitespaceMode) (msg1 msg2 : String) : Bool × Option String := Id.run do
let msg1 := normalizeLineNums <| normalizeMetavars msg1
let msg2 := normalizeLineNums <| normalizeMetavars msg2
if let some maxDiff := maxDiff? then
let lines1 := msg1.splitToList (· == '\n') |>.map (·.trimAsciiEnd.copy |> whitespace.apply) |>.reverse |>.dropWhile String.isEmpty |>.reverse
let lines2 := msg2.splitToList (· == '\n') |>.map (·.trimAsciiEnd.copy |> whitespace.apply) |>.reverse |>.dropWhile String.isEmpty |>.reverse
let maxPercent := maxDiff.toFloat / 100.0
let lines1 := lines1.toArray
let lines2 := lines2.toArray
let maxDiff := (min lines1.size lines2.size).toFloat * maxPercent |>.floor |>.toUInt64
let mut ins : UInt64 := 0
let mut del : UInt64 := 0
let mut d : UInt64 := 0
for (act, _) in Diff.diff lines1 lines2 do
match act with
| .insert => ins := ins + 1
| .delete => del := del + 1
| .skip =>
d := d + max ins del
ins := 0
del := 0
d := d + max ins del
return (d ≤ maxDiff, some s!"{d}/{maxDiff} lines differ")
else
return (whitespace.apply msg1 == whitespace.apply msg2, none)
open Tactic.GuardMsgs in
@[command_elab checkMsgsCmd]
def elabCheckMsgs : CommandElab
| `(command| $[$dc?:docComment]? #check_msgs%$tk $[(maxDiff := $maxDiff % )]? $(spec?)? in $cmd) => do
let expected : String := (← dc?.mapM (getDocStringText ·)).getD ""
|>.trimAscii.copy |> removeTrailingWhitespaceMarker
let {whitespace, ordering, filterFn, .. } ← parseGuardMsgsSpec spec?
let maxDiff? := maxDiff.map (·.getNat)
let initMsgs ← modifyGet fun st => (st.messages, { st with messages := {} })
-- do not forward snapshot as we don't want messages assigned to it to leak outside
withReader ({ · with snap? := none }) do
-- The `#guard_msgs` command is special-cased in `elabCommandTopLevel` to ensure linters only run once.
elabCommandTopLevel cmd
-- collect sync and async messages
let msgs := (← get).messages ++
(← get).snapshotTasks.foldl (· ++ ·.get.getAll.foldl (· ++ ·.diagnostics.msgLog) {}) {}
-- clear async messages as we don't want them to leak outside
modify ({ · with snapshotTasks := #[] })
let mut toCheck : MessageLog := .empty
let mut toPassthrough : MessageLog := .empty
for msg in msgs.toList do
match filterFn msg with
| .check => toCheck := toCheck.add msg
| .drop => pure ()
| .pass => toPassthrough := toPassthrough.add msg
let strings ← toCheck.toList.mapM (messageToStringWithoutPos ·)
let strings := ordering.apply strings
let res := "---\n".intercalate strings |>.trimAscii.copy
let (same, msg?) := messagesEq maxDiff? whitespace expected res
let text ← getFileMap
let msg? : Option Message ← msg?.bindM fun s => OptionT.run do
let ⟨pos, endPos⟩ ← OptionT.mk <| pure tk.getRange?
return {
fileName := (← getFileName),
pos := text.toPosition pos,
endPos := text.toPosition endPos,
data := s,
isSilent := true
severity := .information
}
let msg := msg?.map (MessageLog.empty.add ·) |>.getD .empty
if same then
-- Passed. Put messages back on the log, downgrading errors to warnings while recording their original status
modify fun st => { st with messages := initMsgs ++ SubVerso.Highlighting.Messages.errorsToWarnings msgs ++ msg }
else
-- Failed. Put all the messages back on the message log and add an error
modify fun st => { st with messages := initMsgs ++ msgs ++ msg }
let feedback :=
let diff := Diff.diff (expected.splitToList (· == '\n')).toArray (res.splitToList (· == '\n')).toArray
Diff.linesToString diff
logErrorAt tk m!"❌️ Docstring on `#check_msgs` does not match generated message:\n\n{feedback}"
pushInfoLeaf (.ofCustomInfo { stx := ← getRef, value := Dynamic.mk (GuardMsgFailure.mk res) })
| _ => throwUnsupportedSyntax
attribute [command_code_action checkMsgsCmd] Tactic.GuardMsgs.guardMsgsCodeAction
syntax withPosition("discarding" (colGe command)* "stop " "discarding") : command
open Lean Elab Command in
elab_rules : command
| `(discarding $cmds* stop discarding) => do
withoutModifyingEnv do
for c in cmds do
elabCommand c |
reference-manual/.vale/styles/config/vocabularies/Lean/accept.txt | [Ee]xtensible
[Dd]iscriminant
[Aa]pplicative |
reference-manual/.vale/styles/config/ignore/names.txt | Bhavik
Blott
Bruijn
Carneiro
Collatz
Himmel's
Hoare
Lua
Madelaine
Markus
Mathlib
Mehta
Merkin
Moura
Peano
Rish
Selsam
Simons
Streicher
Streicher's
Ullrich
Wadler
Wojciech
Nawrocki
Nawrocki's
Rustan
Leino
Leino's
Vaishnav's |
reference-manual/.vale/styles/config/ignore/terms.txt | Abelian
Ackermann
antiquotation
APIs
CaDiCaL
Goguen
Kissat
Kleisli
Mathlib's
Noncomputable
Nullability
Packrat
antiquotation's
antiquotations
antiquotations'
bitvector
bitvectors
bitwise
booleans
callout
checkpointing
codomain
codomain's
codomains
combinator
combinators
constructorless
conv
cumulative
cumulativity
cutsat
deallocate
deallocated
deallocates
declaratively
definitionally
delaborate
delaborated
delaborates
delaborating
delaboration
delaborator
delaborators
destructure
destructured
destructures
desugar
desugared
desugaring
desugarings
desugars
discriminant
discriminant's
disequality
disequalities
disjointness
disjunct
disjuncts
downloader
effectful
elaborator
elaborator's
elaborators
enum
equational
executable's
extensional
extensionality
fixpoint
fixpoints
functor
functor's
functors
guillemet
guillemets
hoc
impredicative
impredicativity
indexable
initializer
initializers
injective
injectivity
inlines
inlining
interoperate
interoperates
interrobang
invalidField
iterator
iterator's
iterators
letterlike
linearization
linearize
linearizing
logics
lookup
lookups
lossy
macro_rules
matcher
matchers
memoization
metaprogram
metaprogramming
metaprograms
metatheoretic
metavariable
metavariable's
metavariables
mixfix
modus
monad's
monoid
monomorphic
monomorphism
morphism
morphisms
multipattern
multipatterns
multiset
multisets
mvcgen
namespace
namespace's
namespaces
noncomputable
nonterminal
nonterminals
nullability
nullable
nullary
parameters'
parenthesization
parenthesizer
parenthesizers
parser's
polymorphic
polymorphically
ponens
popcount
postcondition
postconditions
postfix
poststate
predicative
predicativity
prepending
preprocesses
propositionally
prestate
quasiquotation
quasiquotations
quasiquote
quasiquoted
quotiented
recursor
recursor's
recursors
Repr
sandboxed
satisfiability
scrutinee
scrutinees
se
semigroup
semireducible
semiring
setoid
setoid's
setoids
severities
simp
simproc
simprocs
simps
subcommand
subcommand's
subcommands
subexpression
subexpressions
subgoal
subgoals
subproblem
subproblems
subprocess
subprocess's
subprocesses
subsingleton
subsingletons
substring
substrings
subterm
subterms
subtree
subtrees
subtype
subtyping
syntaxes
thunk's
toolchain
toolchain's
toolchains
unary
underdetermined
unescaped
unexpander
unexpanders
unexpansion
uninstantiated
unknownIdentifier
unparenthesized
uploader
upvote
VC
VCs
walkthrough
workspace's
zulip |
reference-manual/.vale/styles/proselint/README.md | Copyright © 2014–2015, Jordan Suchow, Michael Pacer, and Lara A. Ross
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. |
reference-manual/Manual/BasicProps.lean | import VersoManual
import Manual.Meta
import Manual.Papers
open Manual
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
set_option pp.rawOnError true
#doc (Manual) "Basic Propositions" =>
%%%
tag := "basic-props"
%%%
With the exception of implication and universal quantification, logical connectives and quantifiers are implemented as {tech}[inductive types] in the {lean}`Prop` universe.
In some sense, the connectives described in this chapter are not special—they could be implemented by any user.
However, these basic connectives are used pervasively in the standard library and built-in proof automation tools.
# Truth
%%%
tag := "true-false"
%%%
Fundamentally, there are only two propositions in Lean: {lean}`True` and {lean}`False`.
The axiom of propositional extensionality ({name}`propext`) allows propositions to be considered equal when they are logically equivalent, and every true proposition is logically equivalent to {lean}`True`.
Similarly, every false proposition is logically equivalent to {lean}`False`.
{lean}`True` is an inductively defined proposition with a single constructor that takes no parameters.
It is always possible to prove {lean}`True`.
{lean}`False`, on the other hand, is an inductively defined proposition with no constructors.
Proving it requires finding an inconsistency in the current context.
Both {lean}`True` and {lean}`False` are {ref "subsingleton-elimination"}[subsingletons]; this means that they can be used to compute inhabitants of non-propositional types.
For {lean}`True`, this amounts to ignoring the proof, which is not informative.
For {lean}`False`, this amounts to a demonstration that the current code is unreachable and does not need to be completed.
{docstring True}
{docstring False}
{docstring False.elim}
:::example "Dead Code and Subsingleton Elimination"
The fourth branch in the definition of {lean}`f` is unreachable, so no concrete {lean}`String` value needs to be provided:
```lean
def f (n : Nat) : String :=
if h1 : n < 11 then
"Small"
else if h2 : n > 13 then
"Large"
else if h3 : n % 2 = 1 then
"Odd"
else if h4 : n ≠ 12 then
False.elim (by omega)
else "Twelve"
```
In this example, {name}`False.elim` indicates to Lean that the current local context is logically inconsistent: proving {name}`False` suffices to abandon the branch.
Similarly, the definition of {name}`g` appears to have the potential to be non-terminating.
However, the recursive call occurs on an unreachable path through the program.
The proof automation used for producing termination proofs can detect that the local assumptions are inconsistent.
```lean
def g (n : Nat) : String :=
if n < 11 then
"Small"
else if n > 13 then
"Large"
else if n % 2 = 1 then
"Odd"
else if n ≠ 12 then
g (n + 1)
else "Twelve"
termination_by n
```
:::
# Logical Connectives
Conjunction is implemented as the inductively defined proposition {name}`And`.
The constructor {name}`And.intro` represents the introduction rule for conjunction: to prove a conjunction, it suffices to prove both conjuncts.
Similarly, {name}`And.elim` represents the elimination rule: given a proof of a conjunction and a proof of some other statement that assumes both conjuncts, the other statement can be proven.
Because {name}`And` is a {tech}[subsingleton], {name}`And.elim` can also be used as part of computing data.
However, it should not be confused with {name}`PProd`: using non-computable reasoning principles such as the Axiom of Choice to define data (including {lean}`Prod`) causes Lean to be unable to compile and run the resulting program, while using them in a proof of a proposition causes no such issue.
In a {ref "tactics"}[tactic] proof, conjunctions can be proved using {name}`And.intro` explicitly via {tactic}`apply`, but {tactic}`constructor` is more common.
When multiple conjunctions are nested in a proof goal, {tactic}`and_intros` can be used to apply {name}`And.intro` in each relevant location.
Assumptions of conjunctions in the context can be simplified using {tactic}`cases`, pattern matching with {tactic}`let` or {tactic (show := "match")}`Lean.Parser.Tactic.match`, or {tactic}`rcases`.
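As a small illustration (not drawn from the standard library), the following proof splits a conjunction in the goal with {tactic}`constructor` and takes apart a conjunction hypothesis with {tactic}`cases`:
```lean
example {p q : Prop} (h : p ∧ q) : q ∧ p := by
  cases h with
  | intro hp hq =>
    constructor
    . exact hq
    . exact hp
```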
{docstring And}
{docstring And.elim}
Disjunction is implemented as the inductively defined proposition {name}`Or`.
It has two constructors, one for each introduction rule: a proof of either disjunct is sufficient to prove the disjunction.
While the definition of {lean}`Or` is similar to that of {lean}`Sum`, it is quite different in practice.
Because {lean}`Sum` is a type, it is possible to check _which_ constructor was used to create any given value.
{lean}`Or`, on the other hand, forms propositions: terms that prove a disjunction cannot be interrogated to check which disjunct was true.
In other words, because {lean}`Or` is not a {tech}[subsingleton], its proofs cannot be used as part of a computation.
In a {ref "tactics"}[tactic] proof, disjunctions can be proved using either constructor ({name}`Or.inl` or {name}`Or.inr`) explicitly via {tactic}`apply`.
Assumptions of disjunctions in the context can be simplified using {tactic}`cases`, pattern matching with {tactic (show := "match")}`Lean.Parser.Tactic.match`, or {tactic}`rcases`.
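For instance, this illustrative proof (not from the library) analyzes a disjunction hypothesis with {tactic}`cases` and proves the goal with the appropriate constructor in each branch:
```lean
example {p q : Prop} (h : p ∨ q) : q ∨ p := by
  cases h with
  | inl hp => exact Or.inr hp
  | inr hq => exact Or.inl hq
```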
{docstring Or}
When either disjunct is {tech}[decidable], it becomes possible to use {lean}`Or` to compute data.
This is because the decision procedure's result provides a suitable branch condition.
{docstring Or.by_cases}
{docstring Or.by_cases'}
```lean -show
section
variable {P : Prop}
```
Rather than encoding negation as an inductive type, {lean}`¬P` is defined to mean {lean}`P → False`.
In other words, to prove a negation, it suffices to assume the negated statement and derive a contradiction.
This also means that {lean}`False` can be derived immediately from a proof of a proposition and its negation, and then used to prove any proposition or inhabit any type.
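As a small example, {name}`absurd` combines a proof and a proof of its negation to establish an arbitrary goal, even an obviously false one:
```lean
example {p : Prop} (hp : p) (hnp : ¬p) : 2 + 2 = 5 :=
  absurd hp hnp
```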
```lean -show
end
```
{docstring Not}
{docstring absurd}
{docstring Not.elim}
```lean -show
section
variable {A B : Prop}
```
Implication is represented using {ref "function-types"}[function types] in the {tech}[universe] of {tech}[propositions].
To prove {lean}`A → B`, it is enough to prove {lean}`B` after assuming {lean}`A`.
This corresponds to the typing rule for {keywordOf Lean.Parser.Term.fun}`fun`.
Similarly, the typing rule for function application corresponds to {deftech}_modus ponens_: given a proof of {lean}`A → B` and a proof of {lean}`A`, {lean}`B` can be proved.
:::example "Truth-Functional Implication"
The representation of implication as functions in the universe of propositions is equivalent to the traditional definition in which {lean}`A → B` is defined as {lean}`(¬A) ∨ B`.
This can be proved using {tech}[propositional extensionality] and the law of the excluded middle:
```lean
theorem truth_functional_imp {A B : Prop} :
((¬ A) ∨ B) = (A → B) := by
apply propext
constructor
. rintro (h | h) a <;> trivial
. intro h
by_cases A
. apply Or.inr; solve_by_elim
. apply Or.inl; trivial
```
:::
```lean -show
end
```
Logical equivalence, or “if and only if”, is represented using a structure that is equivalent to the conjunction of both directions of the implication.
{docstring Iff}
{docstring Iff.elim}
:::syntax term (title := "Propositional Connectives")
The logical connectives other than implication are typically referred to using dedicated syntax, rather than via their defined names:
```grammar
$_ ∧ $_
```
```grammar
$_ ∨ $_
```
```grammar
¬ $_
```
```grammar
$_ ↔ $_
```
:::
# Quantifiers
Just as implication is implemented as ordinary function types in {lean}`Prop`, universal quantification is implemented as dependent function types in {lean}`Prop`.
Because {lean}`Prop` is {tech}[impredicative], any function type in which the {tech}[codomain] is a {lean}`Prop` is itself a {lean}`Prop`, even if the {tech}[domain] is a {lean}`Type`.
The typing rules for dependent functions precisely match the introduction and elimination rules for universal quantification: if a predicate holds for any arbitrarily chosen element of a type, then it holds universally.
If a predicate holds universally, then it can be instantiated to a proof for any individual.
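For example, instantiating a universally quantified hypothesis is ordinary function application (an illustrative proof, not from the library):
```lean
example (h : ∀ n : Nat, n ≤ n + 1) : 3 ≤ 4 := h 3
```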
:::syntax term (title := "Universal Quantification")
```grammar
∀ $x:ident $[$_:ident]* $[: $t]?, $_
```
```grammar
forall $x:ident $[$_:ident]* $[: $t]?, $_
```
```grammar
∀ $_ $[$_]*, $_
```
```grammar
forall $_ $[$_]*, $_
```
Universal quantifiers bind one or more variables, which are then in scope in the final term.
The identifiers may also be `_`.
With parenthesized type annotations, multiple bound variables may have different types, while the unparenthesized variant requires that all have the same type.
:::
Even though universal quantifiers are represented by functions, their proofs should not be thought of as computations.
Because of proof irrelevance and the elimination restriction for propositions, there's no way to actually compute data using these proofs.
As a result, they are free to use reasoning principles that are not readily computed, such as the classical Axiom of Choice.
Existential quantification is implemented as a structure that is similar to {name}`Subtype` and {name}`Sigma`: it contains a {deftech}_witness_, which is a value that satisfies the predicate, along with a proof that the witness does in fact satisfy the predicate.
In other words, it is a form of dependent pair type.
Unlike both {name}`Subtype` and {name}`Sigma`, it is a {tech}[proposition]; this means that programs cannot in general use a proof of an existential statement to obtain a value that satisfies the predicate.
When writing a proof, the {tactic}`exists` tactic allows one or more witnesses to be specified for a (potentially nested) existential statement.
The {tactic}`constructor` tactic, on the other hand, creates a {tech}[metavariable] for the witness; providing a proof of the predicate may solve the metavariable as well.
The components of an existential assumption can be made available individually by pattern matching with {tactic}`let` or {tactic (show := "match")}`Lean.Parser.Tactic.match`, as well as by using {tactic}`cases` or {tactic}`rcases`.
:::example "Proving Existential Statements"
When proving that there exists some natural number that is the sum of four and five, the {tactic}`exists` tactic expects the sum to be provided, constructing the equality proof using {tactic}`trivial`:
```lean
theorem ex_four_plus_five : ∃ n, 4 + 5 = n := by
exists 9
```
The {tactic}`constructor` tactic, on the other hand, expects a proof.
The {tactic}`rfl` tactic causes the sum to be determined as a side effect of checking definitional equality.
```lean
theorem ex_four_plus_five' : ∃ n, 4 + 5 = n := by
constructor
rfl
```
:::
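:::example "Destructuring Existential Assumptions"
As a sketch (the statement is chosen for this example), {tactic}`rcases` makes the witness and the proof of the predicate available separately:
```lean
example (h : ∃ n : Nat, n > 3) : ∃ m : Nat, m > 2 := by
  rcases h with ⟨n, hn⟩
  exact ⟨n, by omega⟩
```
:::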
{docstring Exists}
:::syntax term (title := "Existential Quantification")
```grammar
∃ $x:ident $[$_:ident]* $[: $t]?, $_
```
```grammar
exists $x:ident $[$_:ident]* $[: $t]?, $_
```
```grammar
∃ $_ $[$_]*, $_
```
```grammar
exists $_ $[$_]*, $_
```
Existential quantifiers bind one or more variables, which are then in scope in the final term.
The identifiers may also be `_`.
With parenthesized type annotations, multiple bound variables may have different types, while the unparenthesized variant requires that all have the same type.
If more than one variable is bound, then the result is multiple instances of {name}`Exists`, nested to the right.
:::
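:::example "Nested Existential Quantifiers"
As an illustration (the statement is chosen for this sketch), the anonymous constructor can provide all the witnesses for a right-nested existential at once:
```lean
example : ∃ n k : Nat, n + k = 5 := ⟨2, 3, rfl⟩
```
:::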
{docstring Exists.choose}
# Propositional Equality
%%%
tag := "propositional-equality"
%%%
{deftech}_Propositional equality_ is the operator that allows the equality of two terms to be stated as a proposition.
{tech}[Definitional equality] is checked automatically where necessary.
As a result, its expressive power is limited in order to keep the algorithm that checks it fast and understandable.
Propositional equality, on the other hand, must be explicitly proved and explicitly used—Lean checks the validity of the proofs, rather than determining whether the statement is true.
In exchange, it is much more expressive: many terms are propositionally equal without being definitionally equal.
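:::example "Definitional vs Propositional Equality"
As an illustrative sketch (the statements are chosen for this example), `n + 0 = n` holds definitionally because addition of natural numbers recurs on its second argument, while `0 + n = n` holds only propositionally and requires an appeal to a proof:
```lean
-- Definitional: `rfl` suffices
example (n : Nat) : n + 0 = n := rfl

-- Propositional only: a proof by induction is needed
example (n : Nat) : 0 + n = n := Nat.zero_add n
```
:::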
Propositional equality is defined as an inductive type.
Its sole constructor {name}`Eq.refl` requires that both of the equated values be the same; this is implicitly an appeal to {tech}[definitional equality].
Propositional equality can also be thought of as the least reflexive relation modulo definitional equality.
In addition to {name}`Eq.refl`, equality proofs are generated by the {name}`propext` and {name}`Quot.sound` axioms.
{docstring Eq}
:::syntax term (title := "Propositional Equality")
```grammar
$_ = $_
```
Propositional equality is typically denoted by the infix `=` operator.
:::
{docstring rfl}
{docstring Eq.symm}
{docstring Eq.trans}
{docstring Eq.subst}
{docstring cast}
{docstring congr}
{docstring congrFun}
{docstring congrArg}
{docstring Eq.mp}
{docstring Eq.mpr}
:::syntax term (title := "Casting")
```grammar
$_ ▸ $_
```
When a term's type includes one side of an equality as a sub-term, it can be rewritten using the `▸` operator.
If both sides of the equality occur in the term's type, then the left side is rewritten to the right.
:::
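:::example "Rewriting with an Equality"
As a sketch (the statement is chosen for this example), a proof whose type mentions the left side of an equality can be converted into one that mentions the right side:
```lean
example (n : Nat) (h : n = 5) (p : n > 2) : 5 > 2 := h ▸ p
```
:::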
## Uniqueness of Equality Proofs
%%%
tag := "UIP"
%%%
:::keepEnv
Because of definitional proof irrelevance, propositional equality proofs are _unique_: two mathematical objects cannot be equal in different ways.
```lean
theorem Eq.unique {α : Sort u}
(x y : α)
(p1 p2 : x = y) :
p1 = p2 := by
rfl
```
Streicher's axiom K{citep streicher1993}[] is also a consequence of definitional proof irrelevance, as is its computation rule.
Axiom K is a principle that's logically equivalent to {name}`Eq.unique`, implemented as an alternative {tech}[recursor] for propositional equality.
```lean
def K {α : Sort u}
{motive : {x : α} → x = x → Sort v}
(d : {x : α} → motive (Eq.refl x))
(x : α) (z : x = x) :
motive z :=
d
example {α : Sort u} {a : α}
{motive : {x : α} → x = x → Sort u}
{d : {x : α} → motive (Eq.refl x)}
{v : motive (Eq.refl a)} :
K (motive := motive) d a rfl = d := by
rfl
```
:::
## Heterogeneous Equality
%%%
tag := "HEq"
%%%
{deftech}_Heterogeneous equality_ is a version of {tech}[propositional equality] that does not require that the two equated terms have the same type.
However, _proving_ that the terms are equal using its version of {name}`rfl` requires that both the types and the terms are definitionally equal.
In other words, it allows more statements to be formulated, but it does not make any additional equalities provable.
Heterogeneous equality is typically less convenient in practice than ordinary propositional equality.
The greater flexibility afforded by not requiring both sides of the equality to have the same type means that it has fewer useful properties.
It is often encountered as a result of dependent pattern matching: the {tactic}`split` tactic and functional induction{TODO}[xref] add heterogeneous equality assumptions to the context when the ordinary equality assumptions that are needed to accurately reflect the corresponding control flow would not be type correct.
In these cases, the built-in automation has no choice but to use heterogeneous equality.
{docstring HEq}
:::syntax term (title := "Heterogeneous Equality")
```grammar
$_ ≍ $_
```
```lean -show
section
variable (x : α) (y : β)
```
Heterogeneous equality {lean}`HEq x y` can be written {lean}`x ≍ y`.
```lean -show
end
```
:::
{docstring HEq.rfl}
:::::leanSection
::::example "Heterogeneous Equality"
```lean -show
variable {α : Type u} {n k l₁ l₂ l₃ : Nat}
```
The type {lean}`Vector α n` is a wrapper around an {lean}`Array α` that includes a proof that the array has size {lean}`n`.
Appending {name}`Vector`s is associative, but this fact cannot be straightforwardly stated using ordinary propositional equality:
```lean
variable
{xs : Vector α l₁} {ys : Vector α l₂} {zs : Vector α l₃}
set_option linter.unusedVariables false
```
```lean (name := assocFail) +error -keep
theorem Vector.append_associative :
xs ++ (ys ++ zs) = (xs ++ ys) ++ zs := by sorry
```
The problem is that the associativity of addition of natural numbers holds propositionally, but not definitionally:
```leanOutput assocFail
Type mismatch
xs ++ ys ++ zs
has type
Vector α (l₁ + l₂ + l₃)
but is expected to have type
Vector α (l₁ + (l₂ + l₃))
```
:::paragraph
One solution to this problem is to use the associativity of natural number addition in the statement:
```lean
theorem Vector.append_associative' :
xs ++ (ys ++ zs) =
Nat.add_assoc _ _ _ ▸ ((xs ++ ys) ++ zs) := by
sorry
```
However, such proof statements can be difficult to work with in certain circumstances.
:::
:::paragraph
Another is to use heterogeneous equality:
```lean -keep
theorem Vector.append_associative :
HEq (xs ++ (ys ++ zs)) ((xs ++ ys) ++ zs) := by sorry
```
:::
In this case, {ref "the-simplifier"}[the simplifier] can rewrite both sides of the equation without having to preserve their types.
However, completing the proof eventually requires showing that the lengths do match.
```lean -keep
theorem Vector.append_associative :
HEq (xs ++ (ys ++ zs)) ((xs ++ ys) ++ zs) := by
cases xs; cases ys; cases zs
simp
congr 1
. omega
. apply heq_of_eqRec_eq
. rfl
. apply propext
constructor <;> intro h <;> simp_all +arith
```
::::
:::::
{docstring HEq.elim}
{docstring HEq.ndrec}
{docstring HEq.ndrecOn}
{docstring HEq.subst}
{docstring eq_of_heq}
{docstring heq_of_eq}
{docstring heq_of_eqRec_eq}
{docstring eqRec_heq}
{docstring cast_heq}
{docstring heq_of_heq_of_eq}
{docstring type_eq_of_heq} |
reference-manual/Manual/Interaction.lean | import VersoManual
import Manual.Meta
import Manual.Interaction.FormatRepr
open Lean.MessageSeverity
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
set_option pp.rawOnError true
#doc (Manual) "Interacting with Lean" =>
%%%
htmlSplit := .never
tag := "interaction"
%%%
Lean is designed for interactive use, rather than as a batch-mode system in which whole files are fed in and then translated to either object code or error messages.
Many programming languages designed for interactive use provide a {deftech}[REPL],{margin}[Short for {noVale "Vale can't handle partly-bolded words"}[“*R*ead-*E*val-*P*rint *L*oop”], because code is parsed (“read”), evaluated, and the result displayed, with this process repeated as many times as desired.] at which code can be input and tested, along with commands for loading source files, type checking terms, or querying the environment.
Lean's interactive features are based on a different paradigm.
Rather than a separate command prompt outside of the program, Lean provides {tech}[commands] for accomplishing the same tasks in the context of a source file.
By convention, commands that are intended for interactive use rather than as part of a durable code artifact are prefixed with {keyword}`#`.
Information from Lean commands is available in the {deftech}_message log_, which accumulates output from the {tech (key := "Lean elaborator")}[elaborator].
Each entry in the message log is associated with a specific source range and has a {deftech}_severity_.
There are three severities: {lean (type := "Lean.MessageSeverity")}`information` is used for messages that do not indicate a problem, {lean (type := "Lean.MessageSeverity")}`warning` indicates a potential problem, and {lean (type := "Lean.MessageSeverity")}`error` indicates a definite problem.
For interactive commands, results are typically returned as informational messages that are associated with the command's leading keyword.
# Evaluating Terms
%%%
tag := "hash-eval"
%%%
The {keywordOf Lean.Parser.Command.eval}`#eval` command is used to run code as a program.
In particular, it is capable of executing {lean}`IO` actions, it uses a call-by-value evaluation strategy, {ref "partial-unsafe"}[{keyword}`partial` functions are executed], and both types and proofs are erased.
Use {keywordOf Lean.reduceCmd}`#reduce` to instead reduce terms using the reduction rules that are part of {tech}[definitional equality].
:::syntax command (title := "Evaluating Terms")
```grammar
#eval $t
```
```grammar
#eval! $t
```
{includeDocstring Lean.Parser.Command.eval}
:::
{keywordOf Lean.Parser.Command.eval}`#eval` always {tech (key := "Lean elaborator")}[elaborates] and compiles the provided term.
It then checks whether the term transitively depends on any uses of {lean}`sorry`, in which case evaluation is terminated unless the command was invoked as {keywordOf Lean.Parser.Command.eval}`#eval!`.
This is because compiled code may rely on compile-time invariants (such as array lookups being in-bounds) that are ensured by proofs of suitable statements, and running code that contains incomplete proofs (or uses of {lean}`sorry` that “prove” incorrect statements) can cause Lean itself to crash.
```lean -show
section
variable (m : Type → Type)
open Lean.Elab.Command (CommandElabM)
```
:::paragraph
The way the code is run depends on its type:
* If the type is in the {lean}`IO` monad, then it is executed in a context where {tech}[standard output] and {tech}[standard error] are captured and redirected to the Lean {tech}[message log].
If the returned value's type is not {lean}`Unit`, then it is displayed as if it were the result of a non-monadic expression.
* If the type is in one of the internal Lean metaprogramming monads ({name Lean.Elab.Command.CommandElabM}`CommandElabM`, {name Lean.Elab.Term.TermElabM}`TermElabM`, {name Lean.MetaM}`MetaM`, or {name Lean.CoreM}`CoreM`), then it is run in the current context.
For example, the environment will contain the definitions that are in scope where {keywordOf Lean.Parser.Command.eval}`#eval` is invoked.
As with {name}`IO`, the resulting value is displayed as if it were the result of a non-monadic expression.
When Lean is running under {ref "lake"}[Lake], its working directory (and thus the working directory for {name}`IO` actions) is the current {tech}`workspace`.
* If the type is in some other monad {lean}`m`, and there is a {lean}`MonadLiftT m CommandElabM` or {lean}`MonadEvalT m CommandElabM` instance, then {name}`MonadLiftT.monadLift` or {name}`MonadEvalT.monadEval` is used to transform the monad into one that may be run with {keywordOf Lean.Parser.Command.eval}`#eval`, after which it is run as usual.
* If the term's type is not in any of the supported monads, then it is treated as a pure value.
The compiled code is run, and the result is displayed.
Auxiliary definitions or other environment modifications that result from elaborating the term in {keywordOf Lean.Parser.Command.eval}`#eval` are discarded.
If the term is an action in a metaprogramming monad, then changes made to the environment by running the monadic action are preserved.
:::
```lean -show
end
```
:::::TODO
When used in a {tech}`module`, {keywordOf Lean.Parser.Command.eval}`#eval` reveals a difference between the way the Lean language server and the Lean compiler process files.
Because it runs code at compile time, {keywordOf Lean.Parser.Command.eval}`#eval` requires that its code is available in the {tech}[meta phase].
To make it easier to experiment with a module, the language server makes all imported modules available in the meta phase, while the compiler strictly adheres to the {keywordOf Lean.Parser.Module.import}`meta` declarations.
As a result, modules that use {keywordOf Lean.guardMsgsCmd}`#guard_msgs` together with {keywordOf Lean.Parser.Command.eval}`#eval` to embed lightweight tests may elaborate successfully in the language server but fail during a build.
To fix this, the definitions can be imported with {keywordOf Lean.Parser.Module.import}`meta import` in the module that contains the test:
::::example "Evaluation and Meta"
:::leanModules -server +error
```leanModule (moduleName := Eval.Even)
module
public section
def isEven (n : Nat) : Bool :=
n % 2 = 0
```
```leanModule (moduleName := Eval) (name := noMetaEval)
module
import Eval.Even
/-- info: [true, false] -/
#guard_msgs in
#eval [isEven 4, isEven 5]
```
```leanOutput noMetaEval
❌️ Docstring on `#guard_msgs` does not match generated message:
- info: [true, false]
+ error: Could not find native implementation of external declaration 'isEven' (symbols 'lp_example_isEven___boxed' or 'lp_example_isEven').
+ For declarations from `Init`, `Std`, or `Lean`, you need to set `supportInterpreter := true` in the relevant `lean_exe` statement in your `lakefile.lean`.
```
:::
:::leanModules
Importing {name}`isEven` to the meta phase fixes the problem:
```leanModule (moduleName := Eval.Even)
module
public section
def isEven (n : Nat) : Bool :=
n % 2 = 0
```
```leanModule (moduleName := Eval) (name := metaEval)
module
meta import Eval.Even
/-- info: [true, false] -/
#guard_msgs in
#eval [isEven 4, isEven 5]
```
:::
::::
:::::
Results are displayed using a {name Lean.ToExpr}`ToExpr`, {name}`ToString`, or {name}`Repr` instance, if one exists.
If not, and {option}`eval.derive.repr` is {lean}`true`, Lean attempts to derive a suitable {name}`Repr` instance.
It is an error if no suitable instance can be found or derived.
Setting {option}`eval.pp` to {lean}`false` disables the use of {name Lean.ToExpr}`ToExpr` instances by {keywordOf Lean.Parser.Command.eval}`#eval`.
:::example "Displaying Output"
{keywordOf Lean.Parser.Command.eval}`#eval` cannot display functions:
```lean (name := funEval) +error
#eval fun x => x + 1
```
```leanOutput funEval
could not synthesize a `ToExpr`, `Repr`, or `ToString` instance for type
Nat → Nat
```
It is capable of deriving instances to display output that has no {name}`ToString` or {name}`Repr` instance:
```lean (name := quadEval)
inductive Quadrant where
| nw | sw | se | ne
#eval Quadrant.nw
```
```leanOutput quadEval
Quadrant.nw
```
The derived instance is not saved.
Disabling {option}`eval.derive.repr` causes {keywordOf Lean.Parser.Command.eval}`#eval` to fail:
```lean (name := quadEval2) +error
set_option eval.derive.repr false
#eval Quadrant.nw
```
```leanOutput quadEval2
could not synthesize a `ToExpr`, `Repr`, or `ToString` instance for type
Quadrant
```
:::
{optionDocs eval.pp}
{optionDocs eval.type}
{optionDocs eval.derive.repr}
Monads can be given the ability to execute in {keywordOf Lean.Parser.Command.eval}`#eval` by defining a suitable {lean}`MonadLift`{margin}[{lean}`MonadLift` is described in {ref "lifting-monads"}[the section on lifting monads.]] or {lean}`MonadEval` instance.
Just as {name}`MonadLiftT` is the transitive closure of {name}`MonadLift` instances, {name}`MonadEvalT` is the transitive closure of {name}`MonadEval` instances.
As with {name}`MonadLiftT`, users should not define additional instances of {name}`MonadEvalT` directly.
{docstring MonadEval}
{docstring MonadEvalT}
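:::example "A `MonadEval` Instance"
As a hedged sketch (the monad `MyM` and its configuration type are hypothetical, and this assumes that `MonadEval` mirrors the shape of `MonadLift`), a reader-like monad over `IO` could be made runnable by `#eval` as follows:
```lean
structure Config where
  verbose : Bool := false

-- A hypothetical monad: an `IO` action that reads a configuration
def MyM (α : Type) : Type := Config → IO α

instance : MonadEval MyM IO where
  monadEval act := act {} -- run the action with a default configuration
```
:::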
# Reducing Terms
%%%
tag := "hash-reduce"
%%%
The {keywordOf Lean.reduceCmd}`#reduce` command repeatedly applies reductions to a term until no further reductions are possible.
Reductions are performed under binders, but to avoid unexpected slowdowns, proofs and types are skipped unless the corresponding options to {keywordOf Lean.reduceCmd}`#reduce` are enabled.
Unlike the {keywordOf Lean.Parser.Command.eval}`#eval` command, reduction cannot have side effects, and the result is displayed as a term rather than via a {name}`ToString` or {name}`Repr` instance.
Generally speaking, {keywordOf Lean.reduceCmd}`#reduce` is primarily useful for diagnosing issues with definitional equality and proof terms, while {keywordOf Lean.Parser.Command.eval}`#eval` is more suitable for computing the value of a term.
In particular, functions defined using {tech}[well-founded recursion] or as {tech}[partial fixpoints] are either very slow to compute with the reduction engine or do not reduce at all.
:::syntax command (title := "Reducing Terms")
```grammar
#reduce $[(proofs := true)]? $[(types := true)]? $t
```
{includeDocstring Lean.reduceCmd}
:::
:::example "Reducing Functions"
Reducing a term results in its normal form in Lean's logic.
Because the underlying term is reduced and then displayed, there is no need for a {name}`ToString` or {name}`Repr` instance.
Functions can be displayed just as well as any other term.
In some cases, this normal form is short and resembles a term that a person might write:
```lean (name := plusOne)
#reduce (fun x => x + 1)
```
```leanOutput plusOne
fun x => x.succ
```
In other cases, the details of {ref "elab-as-course-of-values"}[the elaboration of functions] such as addition to Lean's core logic are exposed:
```lean (name := onePlus)
#reduce (fun x => 1 + x)
```
```leanOutput onePlus
fun x => (Nat.rec ⟨fun x => x, PUnit.unit⟩ (fun n n_ih => ⟨fun x => (n_ih.1 x).succ, n_ih⟩) x).1 1
```
:::
# Checking Types
%%%
tag := "hash-check"
%%%
:::syntax command (title := "Checking Types")
{keyword}`#check` can be used to elaborate a term and check its type.
```grammar
#check $t
```
If the provided term is an identifier that is the name of a global constant, then {keyword}`#check` prints its signature.
Otherwise, the term is elaborated as a Lean term and its type is printed.
:::
Elaboration of the term in {keywordOf Lean.Parser.Command.check}`#check` does not require that the term is fully elaborated; it may contain metavariables.
If the term as written _could_ have a type, elaboration succeeds.
If a required instance could never be synthesized, then elaboration fails; synthesis problems that are due to metavariables do not block elaboration.
:::example "{keyword}`#check` and Underdetermined Types"
In this example, the type of the list's elements is not determined, so the type contains a metavariable:
```lean (name := singletonList)
#check fun x => [x]
```
```leanOutput singletonList
fun x => [x] : ?m.9 → List ?m.9
```
In this example, both the type of the terms being added and the result type of the addition are unknown, because {name}`HAdd` allows terms of different types to be added.
Behind the scenes, a metavariable represents the unknown {name}`HAdd` instance.
```lean (name := polyPlus)
#check fun x => x + x
```
```leanOutput polyPlus
fun x => x + x : (x : ?m.12) → ?m.19 x
```
:::
:::syntax command (title := "Testing Type Errors")
```grammar
#check_failure $t
```
This variant of {keywordOf Lean.Parser.Command.check}`#check` elaborates the term using the same process as {keywordOf Lean.Parser.Command.check}`#check`.
If elaboration succeeds, it is an error; if it fails, there is no error.
The partially-elaborated term and any type information that was discovered are added to the {tech}[message log].
:::
:::example "Checking for Type Errors"
Attempting to add a string to a natural number fails, as expected:
```lean (name := oneOne)
#check_failure "one" + 1
```
```leanOutput oneOne
failed to synthesize instance of type class
HAdd String Nat ?m.5
Hint: Type class instance resolution failures can be inspected with the `set_option trace.Meta.synthInstance true` command.
```
Nonetheless, a partially-elaborated term is available:
```leanOutput oneOne
"one" + 1 : ?m.32
```
:::
# Synthesizing Instances
%%%
tag := "hash-synth"
%%%
:::syntax command (title := "Synthesizing Instances")
```grammar
#synth $t
```
:::
The {keywordOf Lean.Parser.Command.synth}`#synth` command invokes Lean's {tech}[type class] resolution machinery and attempts to perform {ref "instance-synth"}[instance synthesis] to find an instance for the given type class.
If it succeeds, then the resulting instance term is output.
::::example "Synthesizing a Type Class Instance"
:::paragraph
Lean uses type classes to overload operations like addition.
The `+` operator is notation for a call to {name}`HAdd.hAdd`, which is the single method in the {name}`HAdd` type class.
This example shows that Lean will let us add two integers, and the result will be an integer:
```lean (name := synthInstHAddNat)
#synth HAdd Int Int Int
```
```leanOutput synthInstHAddNat
instHAdd
```
:::
:::paragraph
By default, Lean does not show the implicit arguments in the output term.
Instance arguments are implicit, however, which decreases the usefulness of this output for understanding instance synthesis.
Setting the option {option}`pp.explicit` to {name}`true` causes Lean to display implicit arguments, including instances:
```lean (name := synthInstHAddNat2)
set_option pp.explicit true in
#synth HAdd Int Int Int
```
```leanOutput synthInstHAddNat2
@instHAdd Int Int.instAdd
```
:::
:::paragraph
Lean does not allow the addition of integers and strings, as demonstrated by this failure of type class instance synthesis:
```lean (name := synthInstHAddNatInt) +error
#synth HAdd Int String String
```
```leanOutput synthInstHAddNatInt
failed to synthesize
HAdd Int String String
Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
:::
::::
# Querying the Context
%%%
tag := "hash-print"
%%%
The {keyword}`#print` family of commands are used to query Lean for information about definitions.
:::syntax command (title := "Printing Definitions")
```grammar
#print $t:ident
```
Prints the definition of a constant.
:::
Printing a definition with {keywordOf Lean.Parser.Command.print}`#print` prints the definition as a term.
Theorems that were proved using {ref "tactics"}[tactics] may be very large when printed as terms.
:::syntax command (title := "Printing Strings")
```grammar
#print $s:str
```
Adds the string literal to Lean's {tech}[message log].
:::
:::syntax command (title := "Printing Axioms")
```grammar
#print axioms $t
```
Lists all axioms that the constant transitively relies on. See {ref "print-axioms"}[the documentation for axioms] for more information.
:::
:::example "Printing Axioms"
```imports -show
import Std.Tactic.BVDecide
```
These two functions each swap the elements in a pair of bitvectors:
```lean
def swap (x y : BitVec 32) : BitVec 32 × BitVec 32 :=
(y, x)
def swap' (x y : BitVec 32) : BitVec 32 × BitVec 32 :=
let x := x ^^^ y
let y := x ^^^ y
let x := x ^^^ y
(x, y)
```
They can be proven equal using {ref "function-extensionality"}[function extensionality], the {ref "the-simplifier"}[simplifier], and {tactic}`bv_decide`:
```lean
theorem swap_eq_swap' : swap = swap' := by
funext x y
simp only [swap, swap', Prod.mk.injEq]
bv_decide
```
The resulting proof makes use of a number of axioms:
```lean (name := axioms)
#print axioms swap_eq_swap'
```
```leanOutput axioms
'swap_eq_swap'' depends on axioms: [propext, Classical.choice, Lean.ofReduceBool, Lean.trustCompiler, Quot.sound]
```
:::
:::syntax command (title := "Printing Equations")
The command {keywordOf Lean.Parser.Command.printEqns}`#print equations`, which can be abbreviated {keywordOf Lean.Parser.Command.printEqns}`#print eqns`, displays the {tech}[equational lemmas] for a function.
```grammar
#print equations $t
```
```grammar
#print eqns $t
```
:::
:::example "Printing Equations"
```lean (name := intersperse_eqns)
def intersperse (x : α) : List α → List α
| y :: z :: zs => y :: x :: intersperse x (z :: zs)
| xs => xs
#print equations intersperse
```
```leanOutput intersperse_eqns
equations:
@[defeq] theorem intersperse.eq_1.{u_1} : ∀ {α : Type u_1} (x y z : α) (zs : List α),
intersperse x (y :: z :: zs) = y :: x :: intersperse x (z :: zs)
theorem intersperse.eq_2.{u_1} : ∀ {α : Type u_1} (x : α) (x_1 : List α),
(∀ (y z : α) (zs : List α), x_1 = y :: z :: zs → False) → intersperse x x_1 = x_1
```
It does not print the defining equation, nor the unfolding equation:
```lean (name := intersperse_eq_def)
#check intersperse.eq_def
```
```leanOutput intersperse_eq_def
intersperse.eq_def.{u_1} {α : Type u_1} (x : α) (x✝ : List α) :
intersperse x x✝ =
match x✝ with
| y :: z :: zs => y :: x :: intersperse x (z :: zs)
| xs => xs
```
```lean (name := intersperse_eq_unfold)
#check intersperse.eq_unfold
```
```leanOutput intersperse_eq_unfold
intersperse.eq_unfold.{u_1} :
@intersperse = fun {α} x x_1 =>
match x_1 with
| y :: z :: zs => y :: x :: intersperse x (z :: zs)
| xs => xs
```
:::
:::syntax command (title := "Scope Information")
{includeDocstring Lean.Parser.Command.where}
```grammar
#where
```
:::
:::example "Scope Information"
The {keywordOf Lean.Parser.Command.where}`#where` command displays all the modifications made to the current {tech}[section scope], both in the current scope and in the scopes in which it is nested.
```lean +fresh (name := scopeInfo)
section
open Nat
namespace A
variable (n : Nat)
namespace B
open List
set_option pp.funBinderTypes true
#where
end A.B
end
```
```leanOutput scopeInfo (allowDiff := 1)
namespace A.B
open Nat List
variable (n : Nat)
set_option pp.funBinderTypes true
```
:::
:::syntax command (title := "Checking the Lean Version")
{includeDocstring Lean.Parser.Command.version}
```grammar
#version
```
:::
# Testing Output with {keyword}`#guard_msgs`
%%%
tag := "hash-guard_msgs"
%%%
The {keywordOf Lean.guardMsgsCmd}`#guard_msgs` command can be used to ensure that the messages output by a command are as expected.
Together with the interaction commands in this section, it can be used to construct a file that will only elaborate if the output is as expected; such a file can be used as a {tech}[test driver] in {ref "lake"}[Lake].
:::syntax command (title := "Documenting Expected Output")
```grammar
$[$_:docComment]?
#guard_msgs $[($_,*)]? in
$c:command
```
{includeDocstring Lean.guardMsgsCmd}
:::
:::example "Testing Return Values"
The {keywordOf Lean.guardMsgsCmd}`#guard_msgs` command can ensure that a set of test cases pass:
```lean
def reverse : List α → List α := helper []
where
helper acc
| [] => acc
| x :: xs => helper (x :: acc) xs
/-- info: [] -/
#guard_msgs in
#eval reverse ([] : List Nat)
/-- info: ['c', 'b', 'a'] -/
#guard_msgs in
#eval reverse "abc".toList
```
:::
:::paragraph
The behavior of the {keywordOf Lean.guardMsgsCmd}`#guard_msgs` command can be specified in three ways:
1. Providing a filter that selects a subset of messages to be checked
2. Specifying a whitespace comparison strategy
3. Deciding to sort messages by their content or by the order in which they were produced
These configuration options are provided in parentheses, separated by commas.
:::
::::syntax Lean.guardMsgsSpecElt (title := "Specifying {keyword}`#guard_msgs` Behavior") -open
```grammar
$_:guardMsgsFilter
```
```grammar
whitespace := $_
```
```grammar
ordering := $_
```
There are three kinds of options for {keywordOf Lean.guardMsgsCmd}`#guard_msgs`: filters, whitespace comparison strategies, and orderings.
::::
:::syntax Lean.guardMsgsFilter (title := "Output Filters for {keyword}`#guard_msgs`") -open
```grammar
$[drop]? all
```
```grammar
$[drop]? info
```
```grammar
$[drop]? warning
```
```grammar
$[drop]? error
```
{includeDocstring Lean.guardMsgsFilter}
:::
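:::example "Dropping Messages"
As a sketch (the command is chosen for this example), the `drop` filter ignores an entire category of messages, so no expected output needs to be written for them:
```lean
#guard_msgs (drop info) in
#eval 2 + 2
```
:::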
:::syntax Lean.guardMsgsWhitespaceArg (title := "Whitespace Comparison for `#guard_msgs`") -open
```grammar
exact
```
```grammar
lax
```
```grammar
normalized
```
Leading and trailing whitespace is always ignored when comparing messages. On top of that, the following settings are available:
* `whitespace := exact` requires an exact whitespace match.
* `whitespace := normalized` converts all newline characters to a space before matching (the default). This allows breaking long lines.
* `whitespace := lax` collapses whitespace to a single space before matching.
:::
The option {option}`guard_msgs.diff` controls the content of the error message that {keywordOf Lean.guardMsgsCmd}`#guard_msgs` produces when the expected message doesn't match the produced message.
By default, {keywordOf Lean.guardMsgsCmd}`#guard_msgs` shows a line-by-line difference, with a leading `+` used to indicate lines from the produced message and a leading `-` used to indicate lines from the expected message.
When messages are large and only differ by a small amount, this can make it easier to notice where they differ.
Setting {option}`guard_msgs.diff` to `false` causes {keywordOf Lean.guardMsgsCmd}`#guard_msgs` to instead show just the produced message, which can be compared with the expected message in the source file.
This can be convenient if the difference between the message is confusing or overwhelming.
{optionDocs guard_msgs.diff}
:::example "Displaying Differences"
The {keywordOf Lean.guardMsgsCmd}`#guard_msgs` command can be used to test the definition of a rose tree {lean}`Tree` and a function {lean}`Tree.big` that creates them:
```lean
inductive Tree (α : Type u) : Type u where
| val : α → Tree α
| branches : List (Tree α) → Tree α
def Tree.big (n : Nat) : Tree Nat :=
if n < 5 then .branches [.val n, .val (n - 1), .val n, .val (n - 2)]
else .branches [.big (n / 2), .big (n / 3)]
```
However, it can be difficult to spot where test failures come from when the output is large:
```lean +error (name := bigMsg)
set_option guard_msgs.diff false
/--
info: Tree.branches
[Tree.branches
[Tree.branches
[Tree.branches [Tree.val 2, Tree.val 1, Tree.val 2, Tree.val 0],
Tree.branches [Tree.val 1, Tree.val 0, Tree.val 1, Tree.val 0],
Tree.branches [Tree.val 3, Tree.val 2, Tree.val 3, Tree.val 1]],
Tree.branches
[Tree.branches [Tree.val 3, Tree.val 2, Tree.val 3, Tree.val 1],
Tree.branches [Tree.val 2, Tree.val 1, Tree.val 2, Tree.val 0]]]
-/
#guard_msgs in
#eval Tree.big 20
```
The evaluation produces:
```leanOutput bigMsg (severity := information)
Tree.branches
[Tree.branches
[Tree.branches
[Tree.branches [Tree.val 2, Tree.val 1, Tree.val 2, Tree.val 0],
Tree.branches [Tree.val 1, Tree.val 0, Tree.val 1, Tree.val 0]],
Tree.branches [Tree.val 3, Tree.val 2, Tree.val 3, Tree.val 1]],
Tree.branches
[Tree.branches [Tree.val 3, Tree.val 2, Tree.val 3, Tree.val 1],
Tree.branches [Tree.val 2, Tree.val 1, Tree.val 2, Tree.val 0]]]
```
Without {option}`guard_msgs.diff`, the {keywordOf Lean.guardMsgsCmd}`#guard_msgs` command reports this error:
```leanOutput bigMsg (severity := error)
❌️ Docstring on `#guard_msgs` does not match generated message:
info: Tree.branches
[Tree.branches
[Tree.branches
[Tree.branches [Tree.val 2, Tree.val 1, Tree.val 2, Tree.val 0],
Tree.branches [Tree.val 1, Tree.val 0, Tree.val 1, Tree.val 0]],
Tree.branches [Tree.val 3, Tree.val 2, Tree.val 3, Tree.val 1]],
Tree.branches
[Tree.branches [Tree.val 3, Tree.val 2, Tree.val 3, Tree.val 1],
Tree.branches [Tree.val 2, Tree.val 1, Tree.val 2, Tree.val 0]]]
```
Enabling {option}`guard_msgs.diff` highlights the differences instead, making the error more apparent:
```lean +error (name := bigMsg')
set_option guard_msgs.diff true in
/--
info: Tree.branches
[Tree.branches
[Tree.branches
[Tree.branches [Tree.val 2, Tree.val 1, Tree.val 2, Tree.val 0],
Tree.branches [Tree.val 1, Tree.val 0, Tree.val 1, Tree.val 0,
Tree.branches [Tree.val 3, Tree.val 2, Tree.val 3, Tree.val 1]],
Tree.branches
[Tree.branches [Tree.val 3, Tree.val 2, Tree.val 3, Tree.val 1],
Tree.branches [Tree.val 2, Tree.val 1, Tree.val 2, Tree.val 0]]]
-/
#guard_msgs in
#eval Tree.big 20
```
```leanOutput bigMsg' (severity := error)
❌️ Docstring on `#guard_msgs` does not match generated message:
info: Tree.branches
[Tree.branches
[Tree.branches
[Tree.branches [Tree.val 2, Tree.val 1, Tree.val 2, Tree.val 0],
- Tree.branches [Tree.val 1, Tree.val 0, Tree.val 1, Tree.val 0,
+ Tree.branches [Tree.val 1, Tree.val 0, Tree.val 1, Tree.val 0]],
Tree.branches [Tree.val 3, Tree.val 2, Tree.val 3, Tree.val 1]],
Tree.branches
[Tree.branches [Tree.val 3, Tree.val 2, Tree.val 3, Tree.val 1],
Tree.branches [Tree.val 2, Tree.val 1, Tree.val 2, Tree.val 0]]]
```
:::
{include 1 Manual.Interaction.FormatRepr} |
reference-manual/Manual/SupportedPlatforms.lean | import VersoManual
open Verso.Genre Manual
#doc (Manual) "Supported Platforms" =>
%%%
tag := "platforms"
file := "platforms"
number := false
htmlSplit := .never
%%%
# Tier 1
:::paragraph
Tier 1 platforms are those for which Lean is built and tested by our CI infrastructure.
Binary releases of Lean are available for these platforms via {ref "elan"}[`elan`].
The Tier 1 platforms are:
* `x86-64` Linux with glibc 2.26+
* `aarch64` Linux with glibc 2.27+
* `aarch64` (Apple Silicon) macOS 10.15+
* `x86-64` Windows 11 (any version), Windows 10 (version 1903 or higher), Windows Server 2022, Windows Server 2025
:::
# Tier 2
Tier 2 platforms are those for which Lean is cross-compiled but not tested by our CI.
Binary releases are available for these platforms.
Releases may be silently broken due to the lack of automated testing.
Issue reports and fixes are welcome.
:::paragraph
The Tier 2 platforms are:
* `x86-64` macOS 10.15+
* Emscripten WebAssembly
::: |
reference-manual/Manual/Intro.lean | import VersoManual
import Manual.Meta
open Lean.MessageSeverity
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
set_option pp.rawOnError true
#doc (Manual) "Introduction" =>
%%%
htmlSplit := .never
tag := "introduction"
%%%
The _Lean Language Reference_ is intended as a comprehensive, precise description of Lean.
It is a reference work in which Lean users can look up detailed information, rather than a tutorial for new users.
At the moment, this reference manual is a public preview.
For tutorials and learning materials, please visit [the Lean documentation page](https://lean-lang.org/documentation/).
This document describes version {versionString}[] of Lean.
# History
%%%
tag := "history-of-lean"
%%%
Leonardo de Moura launched the Lean project when he was at Microsoft Research in 2013, and Lean 0.1 was officially released on June 16, 2014.
The goal of the Lean project is to combine the high level of trust provided by a small, independently-implementable logical kernel with the convenience and automation of tools like SMT solvers, while scaling to large problems.
This vision still guides the development of Lean, as we invest in improved automation, improved performance, and user-friendliness; the trusted core proof checker is still minimal and independent implementations exist.
The initial versions of Lean were primarily configured as C++ libraries in which client code could carry out trustworthy proofs that were independently checkable.
In these early years, the design of Lean rapidly evolved towards traditional interactive provers, first with tactics written in Lua, and later with a dedicated front-end syntax.
January 20, 2017 saw the first release of the Lean 3.0 series.
Lean 3 achieved widespread adoption by mathematicians, and pioneered self-extensibility: tactics, notations, and top-level commands could all be defined in Lean itself.
The mathematics community built Mathlib, which at the end of Lean 3 had over one million lines of formalized mathematics, with all proofs mechanically checked.
The system itself, however, was still implemented in C++, which imposed limits on Lean's flexibility and made it more difficult to develop due to the diverse skills required.
Development of Lean 4 began in 2018, culminating in the 4.0 release on September 8, 2023.
Lean 4 represents an important milestone: as of version 4, Lean is self-hosted—approximately 90% of the code that implements Lean is itself written in Lean.
Lean 4's rich extension API provides users with the ability to adapt it to their needs, rather than relying on the core developers to add necessary features.
Additionally, self-hosting makes the development process much faster, so features and performance can be delivered more quickly; Lean 4 is faster and scales to larger problems than Lean 3.
Mathlib was successfully ported to Lean 4 in 2023 through a community effort supported by the Lean developers, and it has now grown to over 1.5 million lines.
Even though Mathlib has grown by 50%, Lean 4 checks it faster than Lean 3 could check its smaller library.
The development process for Lean 4 was approximately as long as that of all prior versions combined, and we are now delighted with its design—no further rewrites are planned.
Leonardo de Moura and his co-founder, Sebastian Ullrich, launched the Lean Focused Research Organization (FRO) nonprofit in July of 2023 within Convergent Research, with philanthropic support from the Simons Foundation International, the Alfred P. Sloan Foundation, and Richard Merkin.
The FRO currently has more than ten employees working to support the growth and scalability of Lean and the broader Lean community.
# Typographical Conventions
%%%
tag := "typographical-conventions"
%%%
This document makes use of a number of typographical and layout conventions to indicate various aspects of the information being presented.
## Lean Code
%%%
tag := "code-samples"
%%%
This document contains many Lean code examples.
They are formatted as follows:
```lean
def hello : IO Unit := IO.println "Hello, world!"
```
Compiler output (which may be errors, warnings, or just information) is shown both in the code and separately:
```lean (name := output) +error
#eval s!"The answer is {2 + 2}"
theorem bogus : False := by sorry
example := Nat.succ "two"
```
Informative output, such as the result of {keywordOf Lean.Parser.Command.eval}`#eval`, is shown like this:
```leanOutput output (severity := information)
"The answer is 4"
```
Warnings are shown like this:
```leanOutput output (severity := warning)
declaration uses `sorry`
```
Error messages are shown like this:
```leanOutput output (severity := error)
Application type mismatch: The argument
"two"
has type
String
but is expected to have type
Nat
in the application
Nat.succ "two"
```
The presence of tactic proof states is indicated by small lozenges that can be clicked to show the proof state, such as after {tactic}`rfl` below:
```lean
example : 2 + 2 = 4 := by rfl
```
:::tacticExample
Proof states may also be shown on their own.
When attempting to prove that {goal}`2 + 2 = 4`, the initial proof state is:
```pre
⊢ 2 + 2 = 4
```
After using {tacticStep}`rfl`, the resulting state is:
```post
```
```setup
skip
```
:::
Identifiers in code examples are hyperlinked to their documentation.
Examples of code with syntax errors are shown with an indicator of where the parser error occurred, along with the error message:
```syntaxError intro
def f : Option Nat → Type
| some 0 => Unit
| => Option (f t)
| none => Empty
```
```leanOutput intro
<example>:3:3-3:6: unexpected token '=>'; expected term
```
## Examples
%%%
tag := "example-boxes"
%%%
Illustrative examples are in callout boxes, as below:
::::keepEnv
:::example "Even Numbers"
This is an example of an example.
One way to define even numbers is via an inductive predicate:
```lean
inductive Even : Nat → Prop where
| zero : Even 0
| plusTwo : Even n → Even (n + 2)
```
:::
::::
## Technical Terminology
%%%
tag := "technical-terms"
%%%
{deftech}_Technical terminology_ refers to terms used in a very specific sense when writing technical material, such as this reference.
Uses of {tech}[technical terminology] are frequently hyperlinked to their definition sites, using links like this one.
## Constant, Syntax, and Tactic References
%%%
tag := "reference-boxes"
%%%
Definitions, inductive types, syntax formers, and tactics have specific descriptions.
These descriptions are marked as follows:
::::keepEnv
```lean
/--
Evenness: a number is even if it can be evenly divided by two.
-/
inductive Even : Nat → Prop where
| /-- 0 is considered even here -/
zero : Even 0
| /-- If `n` is even, then so is `n + 2`. -/
plusTwo : Even n → Even (n + 2)
```
{docstring Even}
::::
# How to Cite This Work
In formal citations, please cite this work as _The Lean Language Reference_ by The Lean Developers.
Additionally, please include the corresponding version of Lean in the citation, which is {versionString}[].
# Open-Source Licenses
%%%
tag := "dependency-licenses"
number := false
%%%
{licenseInfo} |
reference-manual/Manual/Namespaces.lean | import VersoManual
import Manual.Meta
import Manual.Language.Namespaces
import Manual.Coercions
import Lean.Parser.Command
open Manual
open Verso.Genre
open Verso.Genre.Manual
open Verso.Genre.Manual.InlineLean
open Lean.Elab.Tactic.GuardMsgs.WhitespaceMode
set_option pp.rawOnError true
set_option maxRecDepth 3000
set_option linter.unusedVariables false
#doc (Manual) "Namespaces and Sections" =>
%%%
tag := "namespaces-sections"
htmlSplit := .never
%%%
Names are organized into hierarchical {deftech}_namespaces_, which are collections of names.
Namespaces are the primary means of organizing APIs in Lean: they provide an ontology of operations, grouping related items.
Additionally, while this is not done by giving them names in the namespace, the effects of features such as {ref "language-extension"}[syntax extensions], {tech}[instances], and {tech}[attributes] can be attached to a namespace.
Sorting operations into namespaces organizes libraries conceptually, from a global perspective.
Any given Lean file will, however, typically not use all names equally.
{tech}[Sections] provide a means of ordering a local view of the globally-available collection of names, as well as a way to precisely control the scope of compiler options along with language extensions, instances, and attributes.
They also allow parameters shared by many declarations to be declared centrally and propagated as needed using the {keywordOf Lean.Parser.Command.variable}`variable` command.
{include 1 Manual.Language.Namespaces}
# Section Scopes
%%%
tag := "scopes"
%%%
Many commands have an effect for the current {deftech}[_section scope_] (sometimes just called “scope” when clear).
Every Lean module has a section scope.
Nested scopes are created via the {keywordOf Lean.Parser.Command.namespace}`namespace` and {keywordOf Lean.Parser.Command.section}`section` commands, as well as the {keywordOf Lean.Parser.Command.in}`in` command combinator.
The following data are tracked in section scopes:
: The Current Namespace
The {deftech}_current namespace_ is the namespace into which new declarations will be defined.
Additionally, {tech (key:="resolve")}[name resolution] includes all prefixes of the current namespace in the scope for global names.
: Opened Namespaces
When a namespace is {deftech}_opened_, its names become available without an explicit prefix in the current scope.
Additionally, scoped attributes and {ref "syntax-rules"}[scoped syntax extensions] in namespaces that have been opened are active in the current section scope.
: Options
Compiler options are reverted to their original values at the end of the scope in which they were modified.
: Section Variables
{tech}[Section variables] are names (or {tech}[instance implicit] parameters) that are automatically added as parameters to definitions.
They are also added as universally-quantified assumptions to theorems when they occur in the theorem's statement.
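As a sketch of how options are tracked (using the real option {option}`pp.explicit` purely as an illustration), a value set inside a section reverts when the section ends:
```lean
section
set_option pp.explicit true
-- Within this section, `pp.explicit` is `true`.
end
-- After the section ends, `pp.explicit` reverts to its prior value.
```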
## Controlling Section Scopes
%%%
tag := "scope-commands"
%%%
The {keywordOf Lean.Parser.Command.section}`section` command creates a new {deftech}[section] scope, but does not modify the current namespace, opened namespaces, or section variables.
Changes made to the section scope are reverted when the section ends.
Additionally, a section may cause a set of modifiers to be applied by default to all declarations in the section.
Sections may optionally be named; the {keywordOf Lean.Parser.Command.end}`end` command that closes a named section must use the same name.
If section names have multiple components (that is, if they contain `.`-separated names), then multiple nested sections are introduced.
Section names have no other effect, and are a readability aid.
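For instance, a multi-component section name opens nested sections, all of which are closed by a single matching {keywordOf Lean.Parser.Command.end}`end` (the names here are illustrative):
```lean
section Util.Strings
-- Two nested sections, `Util` and `Strings`, are now open.
end Util.Strings
```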
:::syntax command (title := "Sections")
The {keywordOf Lean.Parser.Command.section}`section` command creates a section scope that lasts either until an `end` command or the end of the file.
The section header, if present, modifies the declarations in the section.
```grammar
$hdr:sectionHeader section $[$id:ident]?
```
:::
:::syntax Lean.Parser.Command.sectionHeader (title := "Section Headers")
A section header, if present, modifies the declarations in the section.
```grammar
$[@[expose]]?
$[public]? $[noncomputable]? $[meta]?
```
If the header includes {keyword}`noncomputable`, then the definitions in the section are all considered to be noncomputable, and no compiled code is generated for them.
This is needed for definitions that rely on noncomputational reasoning principles such as the Axiom of Choice.
The remaining modifiers are only useful in {tech}[modules].
If the header includes {attrs}`@[expose]`, then all definitions in the section are {tech}[exposed].
If it includes {keyword}`public`, then the declarations in such a {deftech}[public section] are public, rather than private, by default.
If it includes {keyword}`meta`, then the section's declarations are all placed in the {tech}[meta phase].
:::
:::example "Named Section"
The name {name Greetings.english}`english` is defined in the `Greetings` namespace.
```lean
def Greetings.english := "Hello"
```
Outside its namespace, it cannot be evaluated.
```lean +error (name := english1)
#eval english
```
```leanOutput english1
Unknown identifier `english`
```
Opening a section allows modifications to the global scope to be contained.
This section is named `Greetings`.
```lean
section Greetings
```
Even though the section name matches the definition's namespace, the name is not in scope because section names are purely for readability and ease of refactoring.
```lean +error (name := english2)
#eval english
```
```leanOutput english2
Unknown identifier `english`
```
Opening the namespace `Greetings` brings {name}`Greetings.english` into scope as {name Greetings.english}`english`:
```lean (name := english3)
open Greetings
#eval english
```
```leanOutput english3
"Hello"
```
The section's name must be used to close it.
```lean +error (name := english4) -keep
end
```
```leanOutput english4
Missing name after `end`: Expected the current scope name `Greetings`
Hint: To end the current scope `Greetings`, specify its name:
end ̲G̲r̲e̲e̲t̲i̲n̲g̲s̲
```
```lean
end Greetings
```
When the section is closed, the effects of the {keywordOf Lean.Parser.Command.open}`open` command are reverted.
```lean +error (name := english5)
#eval english
```
```leanOutput english5
Unknown identifier `english`
```
:::
The {keywordOf Lean.Parser.Command.namespace}`namespace` command creates a new section scope.
Within this section scope, the current namespace is the name provided in the command, interpreted relative to the current namespace in the surrounding section scope.
Like sections, changes made to the section scope are reverted when the namespace's scope ends.
To close a namespace, the {keywordOf Lean.Parser.Command.end}`end` command requires a suffix of the current namespace, which is removed.
All section scopes introduced by the {keywordOf Lean.Parser.Command.namespace}`namespace` command that introduced part of that suffix are closed.
:::syntax command (title := "Namespace Declarations")
The `namespace` command modifies the current namespace by appending the provided identifier.
It creates a section scope that lasts either until an {keywordOf Lean.Parser.Command.end}`end` command or the end of the file.
```grammar
namespace $id:ident
```
:::
:::syntax command (title := "Section and Namespace Terminators")
Without an identifier, {keywordOf Lean.Parser.Command.end}`end` closes the most recently opened section, which must be anonymous.
```grammar
end
```
With an identifier, it closes the most recently opened section or namespace.
If it is a section, the identifier must be a suffix of the concatenated names of the sections opened since the most recent {keywordOf Lean.Parser.Command.namespace}`namespace` command.
If it is a namespace, then the identifier must be a suffix of the current namespace's extensions since the most recent {keywordOf Lean.Parser.Command.section}`section` that is still open; afterwards, the current namespace will have had this suffix removed.
```grammar
end $id:ident
```
:::
The {keywordOf Lean.Parser.Command.mutual}`end` that closes a {keywordOf Lean.Parser.Command.mutual}`mutual` block is part of the syntax of {keywordOf Lean.Parser.Command.mutual}`mutual`, rather than the {keywordOf Lean.Parser.Command.end}`end` command.
:::example "Nesting Namespaces and Sections"
Namespaces and sections may be nested.
A single {keywordOf Lean.Parser.Command.end}`end` command may close one or more namespaces or one or more sections, but not a mix of the two.
After setting the current namespace to `A.B.C` with two separate commands, `B.C` may be removed with a single {keywordOf Lean.Parser.Command.end}`end`:
```lean
namespace A.B
namespace C
end B.C
```
At this point, the current namespace is `A`.
Next, an anonymous section and the namespace `D.E` are opened:
```lean
section
namespace D.E
```
At this point, the current namespace is `A.D.E`.
An {keywordOf Lean.Parser.Command.end}`end` command cannot close all three due to the intervening section:
```lean +error (name := endADE) -keep
end A.D.E
```
```leanOutput endADE
Invalid name after `end`: Expected `D.E`, but found `A.D.E`
```
Instead, namespaces and sections must be ended separately.
```lean
end D.E
end
end A
```
:::
Rather than opening a section for a single command, the {keywordOf Lean.Parser.Command.in}`in` combinator can be used to create a single-command section scope.
The {keywordOf Lean.Parser.Command.in}`in` combinator is right-associative, allowing multiple scope modifications to be stacked.
:::syntax command (title := "Local Section Scopes")
The `in` command combinator introduces a section scope for a single command.
```grammar
$c:command in
$c:command
```
:::
:::example "Using {keywordOf Lean.Parser.Command.in}`in` for Local Scopes"
The contents of a namespace can be made available for a single command using {keywordOf Lean.Parser.Command.in}`in`.
```lean
def Dessert.cupcake := "delicious"
open Dessert in
#eval cupcake
```
After the single command, the effects of {keywordOf Lean.Parser.Command.open}`open` are reverted.
```lean +error (name := noCake)
#eval cupcake
```
```leanOutput noCake
Unknown identifier `cupcake`
```
:::
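Because {keywordOf Lean.Parser.Command.in}`in` is right-associative, multiple scope modifications can be stacked before a single command; a sketch with illustrative names:
```lean
def Fruit.apple := "crisp"

open Fruit in
set_option pp.explicit false in
#eval apple
```
Both the opened namespace and the modified option are in effect only for the final {keywordOf Lean.Parser.Command.eval}`#eval` command.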
## Section Variables
%%%
tag := "section-variables"
%%%
{deftech}_Section variables_ are parameters that are automatically added to declarations that mention them.
This occurs whether or not the option {option}`autoImplicit` is {lean}`true`.
Section variables may be implicit, strict implicit, or explicit; instance implicit section variables are treated specially.
When the name of a section variable is encountered in a non-theorem declaration, it is added as a parameter.
Any instance implicit section variables that mention the variable are also added.
If any of the variables that were added depend on other variables, then those variables are added as well; this process is iterated until no more dependencies remain.
All section variables are added in the order in which they are declared, before all other parameters.
Section variables are added only when they occur in the _statement_ of a theorem.
Otherwise, modifying the proof of a theorem could change its statement if the proof term made use of a section variable.
Variables are declared using the {keywordOf Lean.Parser.Command.variable}`variable` command.
:::syntax command (title := "Variable Declarations")
```grammar
variable $b:bracketedBinder $b:bracketedBinder*
```
:::
The bracketed binders allowed after `variable` match the {ref "bracketed-parameter-syntax"}[syntax used in definition headers].
::::example "Section Variables"
In this section, automatic implicit parameters are disabled, but a number of section variables are defined.
```lean
section
set_option autoImplicit false
universe u
variable {α : Type u} (xs : List α) [Zero α] [Add α]
```
Because automatic implicit parameters are disabled and `β` is neither a section variable nor bound as a parameter of the function, the following definition fails:
```lean +error (name := secvars) -keep
def addAll (lst : List β) : β :=
lst.foldr (init := 0) (· + ·)
```
```leanOutput secvars
Unknown identifier `β`
Note: It is not possible to treat `β` as an implicitly bound variable here because the `autoImplicit` option is set to `false`.
```
:::paragraph
On the other hand, not even {lean}`xs` needs to be written directly in the definition when it uses the section variables:
```lean
def addAll :=
xs.foldr (init := 0) (· + ·)
```
:::
::::
To add a section variable to a theorem even if it is not explicitly mentioned in the statement, mark the variable with the {keywordOf Lean.Parser.Command.include}`include` command.
All variables marked for inclusion are added to all theorems.
The {keywordOf Lean.Parser.Command.omit}`omit` command removes the inclusion mark from a variable; it's typically a good idea to use it with {keywordOf Lean.Parser.Command.in}`in`.
```lean -show
section
variable {p : Nat → Prop}
variable (pFifteen : p 15)
```
:::::example "Included and Omitted Section Variables"
This section's variables include a predicate as well as everything needed to prove that it holds universally, along with a useless extra assumption.
```lean
section
variable {p : Nat → Prop}
variable (pZero : p 0) (pStep : ∀ n, p n → p (n + 1))
variable (pFifteen : p 15)
```
However, only {lean}`p` is added to this theorem's assumptions, so it cannot be proved.
```lean +error -keep
theorem p_all : ∀ n, p n := by
intro n
induction n
```
The {keywordOf Lean.Parser.Command.include}`include` command causes the additional assumptions to be added unconditionally:
```lean -keep (name := lint)
include pZero pStep pFifteen
theorem p_all : ∀ n, p n := by
intro n
induction n <;> simp [*]
```
Because the spurious assumption {lean}`pFifteen` was inserted, Lean issues a warning:
```leanOutput lint
automatically included section variable(s) unused in theorem `p_all`:
pFifteen
consider restructuring your `variable` declarations so that the variables are not in scope or explicitly omit them:
omit pFifteen in theorem ...
Note: This linter can be disabled with `set_option linter.unusedSectionVars false`
```
This can be avoided by using {keywordOf Lean.Parser.Command.omit}`omit` to remove {lean}`pFifteen`:
```lean -keep
include pZero pStep pFifteen
omit pFifteen in
theorem p_all : ∀ n, p n := by
intro n
induction n <;> simp [*]
```
```lean
end
```
:::::
```lean -show
end
``` |
reference-manual/Manual/SourceFiles.lean | import VersoManual
import Manual.Meta
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
#doc (Manual) "Source Files and Modules" =>
%%%
tag := "files"
htmlSplit := .never
%%%
The smallest unit of compilation in Lean is a single {tech}[source file].
Source files may import other source files based on their file names.
In other words, the names and folder structures of files are significant in Lean code.
Each source file has an {deftech}_import name_ that is derived from a combination of its filename and the way in which Lean was invoked: Lean has a set of _root directories_ in which it expects to find code, and the source file's import name is the names of the directories from the root to the filename, with dots (`.`) interspersed and `.lean` removed.
For example, if Lean is invoked with `Projects/MyLib/src` as its root, the file `Projects/MyLib/src/Literature/Novel/SciFi.lean` could be imported as `Literature.Novel.SciFi`.
::: TODO
Describe case sensitivity/preservation for filenames here
:::
# Encoding and Representation
%%%
tag := "module-encoding"
%%%
Lean {deftech}[source files] are Unicode text files encoded in UTF-8. {TODO}[Figure out the status of BOM and Lean]
Lines may end either with newline characters (`"\n"`, Unicode `'LINE FEED (LF)' (U+000A)`) or with a carriage return and newline sequence (`"\r\n"`, Unicode `'CARRIAGE RETURN (CR)' (U+000D)` followed by `'LINE FEED (LF)' (U+000A)`).
However, Lean normalizes line endings when parsing or comparing files, so all files are compared as if all their line endings are `"\n"`.
::: TODO
Marginal note: this is to make cached files and `#guard_msgs` and the like work even when git changes line endings. Also keeps offsets stored in parsed syntax objects consistent.
:::
# Concrete Syntax
%%%
tag := "module-syntax"
%%%
Lean's concrete syntax is {ref "language-extension"}[extensible].
In a language like Lean, it's not possible to completely describe the syntax once and for all, because libraries may define syntax in addition to new constants or {tech}[inductive types].
Rather than completely describing the language here, the overall framework is described, while the syntax of each language construct is documented in the section to which it belongs.
## Whitespace
%%%
tag := "whitespace"
%%%
Tokens in Lean may be separated by any number of {deftech}[_whitespace_] character sequences.
Whitespace may be a space (`" "`, Unicode `'SPACE (SP)' (U+0020)`), a valid newline sequence, or a comment. {TODO}[xref]
Neither tab characters nor carriage returns that are not followed by newlines are valid whitespace sequences.
## Comments
%%%
tag := "comments"
%%%
Comments are stretches of the file that, despite not being whitespace, are treated as such.
Lean has two syntaxes for comments:
: Line comments
A `--` that does not occur as part of a token begins a _line comment_. All characters from the initial `-` to the newline are treated as whitespace.{index (subterm := "line")}[comment]
: Block comments
A `/-` that does not occur as part of a token and is not immediately followed by a `-` character begins a _block comment_.{index (subterm := "block")}[comment]
The block comment continues until a terminating `-/` is found.
Block comments may be nested; a `-/` only terminates the comment if prior nested block comment openers `/-` have been terminated by a matching `-/`.
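For instance, in the following sketch the inner `/- -/` pair does not terminate the outer comment:
```lean
/- This block comment contains /- a nested comment -/ and continues here. -/
def one : Nat := 1
```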
`/--` and `/-!` begin {deftech}_documentation_ {TODO}[xref] rather than comments, which are also terminated with `-/` and may contain nested block comments.
Even though documentation resembles comments, it is its own syntactic category; its valid placement is determined by Lean's grammar.
## Keywords and Identifiers
%%%
tag := "keywords-and-identifiers"
%%%
An {tech}[identifier] consists of one or more identifier components, separated by `'.'`.{index}[identifier]
{deftech}[Identifier components] consist of a letter or letter-like character or an underscore (`'_'`), followed by zero or more identifier continuation characters.
Letters are English letters, upper- or lowercase, and the letter-like characters include a range of non-English alphabetic scripts, including the Greek script which is widely used in Lean, the Coptic script, the members of the Unicode letter-like symbol block, which contains a number of double-struck characters (including `ℕ` and `ℤ`) and abbreviations, the Latin-1 supplemental letters (with the exception of `×` and `÷`), and the Latin Extended-A block.
Identifier continuation characters consist of letters, letter-like characters, underscores (`'_'`), exclamation marks (`!`), question marks (`?`), subscripts, and single quotes (`'`).
As an exception, underscore alone is not a valid identifier.
```lean -show
def validIdentifier (str : String) : IO String :=
Lean.Parser.identFn.test str
/-- info: "Success! Final stack:\n `ℕ\nAll input consumed." -/
#check_msgs in
#eval validIdentifier "ℕ"
/-- info: "Failure @0 (⟨1, 0⟩): expected identifier\nFinal stack:\n <missing>\nRemaining: \"?\"" -/
#check_msgs in
#eval validIdentifier "?"
/-- info: "Success! Final stack:\n `ℕ?\nAll input consumed." -/
#check_msgs in
#eval validIdentifier "ℕ?"
/-- info: "Failure @0 (⟨1, 0⟩): expected identifier\nFinal stack:\n <missing>\nRemaining: \"_\"" -/
#check_msgs in
#eval validIdentifier "_"
/-- info: "Success! Final stack:\n `_3\nAll input consumed." -/
#check_msgs in
#eval validIdentifier "_3"
/-- info: "Success! Final stack:\n `_.a\nAll input consumed." -/
#check_msgs in
#eval validIdentifier "_.a"
/-- info: "Success! Final stack:\n `αποδεικνύοντας\nAll input consumed." -/
#check_msgs in
#eval validIdentifier "αποδεικνύοντας"
/-- info: "Success! Final stack:\n `κύκ\nRemaining:\n\"λος\"" -/
#check_msgs in
#eval validIdentifier "κύκλος"
/-- info: "Success! Final stack:\n `øvelse\nAll input consumed." -/
#check_msgs in
#eval validIdentifier "øvelse"
/-- info: "Success! Final stack:\n `Übersetzung\nAll input consumed." -/
#check_msgs in
#eval validIdentifier "Übersetzung"
/- Here's some things that probably should be identifiers but aren't at the time of writing -/
/--
info: "Failure @0 (⟨1, 0⟩): expected token\nFinal stack:\n <missing>\nRemaining: \"переклад\""
-/
#check_msgs in
#eval validIdentifier "переклад"
/-- info: "Failure @0 (⟨1, 0⟩): expected token\nFinal stack:\n <missing>\nRemaining: \"汉语\"" -/
#check_msgs in
#eval validIdentifier "汉语"
```
Identifier components may also be surrounded by double {deftech}[guillemets] (`'«'` and `'»'`).
Such identifier components may contain any character at all aside from `'»'`, even `'«'`, `'.'`, and newlines.
The guillemets are not part of the resulting identifier component, so `«x»` and `x` denote the same identifier.
`«Nat.add»`, on the other hand, is an identifier with a single component, while `Nat.add` has two.
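For instance (a minimal sketch; the definitions are hypothetical and serve only to illustrate the quoting rules):
```lean
-- Guillemets permit components that would otherwise be invalid:
def «two words» : Nat := 2

def y3 : Nat := 1

-- The guillemets are not part of the identifier, so `«y3»` and `y3` coincide:
example : «y3» + «two words» = 3 := rfl
```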
```lean -show
/-- info: "Success! Final stack:\n `«\n »\nAll input consumed." -/
#check_msgs in
#eval validIdentifier "«\n»"
/-- info: "Success! Final stack:\n `««one line\n and another»\nAll input consumed." -/
#check_msgs in
#eval validIdentifier "««one line\nand another»"
/-- info: "Success! Final stack:\n `«one line\x00and another»\nAll input consumed." -/
#check_msgs in
#eval validIdentifier "«one line\x00and another»"
/-- info: "Success! Final stack:\n `«one line\x0band another»\nAll input consumed." -/
#check_msgs in
#eval validIdentifier "«one line\x0Band another»"
```
Some potential identifier components may be reserved keywords.
The specific set of reserved keywords depends on the set of active syntax extensions, which may depend on the set of imported files and the currently-opened {TODO}[xref/deftech for namespace] namespaces; it is impossible to enumerate for Lean as a whole.
These keywords must also be quoted with guillemets to be used as identifier components in most syntactic contexts.
Contexts in which keywords may be used as identifiers without guillemets, such as constructor names in inductive types, are {deftech}_raw identifier_ contexts.{index (subterm:="raw")}[identifier]
Identifiers that contain one or more `'.'` characters, and thus consist of more than one identifier component, are called {deftech}[hierarchical identifiers].
Hierarchical identifiers are used to represent both import names and names in a namespace.
# Structure
%%%
tag := "module-structure"
%%%
:::syntax Lean.Parser.Module.module -open (title := "Modules")
```grammar
$hdr:header $cmd:command*
```
A source file consists of a {deftech}_file header_ followed by a sequence of {deftech}_commands_.
:::
If a source file's header begins with {keywordOf Lean.Parser.Module.header}`module`, then it is referred to as a {tech}_module_.
Modules provide greater control over what information is exposed to clients.
Modules are an experimental feature in Lean.
To use modules, the {option}`experimental.module` option must be set to {lean}`true` in the project's Lake configuration file.
{optionDocs experimental.module}
## Headers
%%%
tag := "module-headers"
%%%
Module headers list the modules that should be elaborated prior to the current module.
Their declarations are visible in the current module.
:::syntax Lean.Parser.Module.header -open (title := "Module Headers")
The module header consists of an optional {keywordOf Lean.Parser.Module.header}`module` keyword followed by a sequence of {deftech}[`import` statements]:
```grammar
$[module]?
$i:import*
```
The optional {keyword}`prelude` keyword should only be used in Lean's source code:
```grammar
$[module]?
prelude
$i:import*
```
:::
If present, the {keyword}`prelude` keyword indicates that the file is part of the implementation of the Lean {deftech}_prelude_, which is the code that is available without any explicit imports—it should not be used outside of Lean's implementation.
:::syntax Lean.Parser.Module.prelude -open (title := "Prelude Modules")
```grammar
prelude
```
:::
::::syntax Lean.Parser.Module.import (title := "Imports")
All {tech}[source files] may use plain imports:
```grammar
import $mod:ident
```
In source files that are not modules, this imports the specified Lean file.
Importing a file makes its contents available in the current source file, as well as those from source files transitively imported by its imports.
Source file names do not necessarily correspond to namespaces.
Source files may add names to any namespace, and importing a source file has no effect on the set of currently open namespaces.
The {tech}[import name] is translated to a filename by replacing dots (`'.'`) in its name with directory separators and appending `.lean` or `.olean`.
Lean searches its include path for the corresponding intermediate build product or importable module file.
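For example (the module name here is hypothetical), an import statement is resolved against the search path as follows:
```
import MyLib.Data.Trees
-- looked up as MyLib/Data/Trees.lean (source) or MyLib/Data/Trees.olean (build product)
```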
{tech}[Modules] may use the following import syntax:
```grammar
$[public]? $[meta]? import $[all]? $mod:ident
```
:::paragraph
All imports to a module must themselves be modules.
Without modifiers, the imported module's public scope is added to the current module's private scope. The imported module is not made available to modules that import the current module.
The modifiers have the following meanings:
: {keyword}`public`
The imported module's public scope is added to the current module's public scope and made available to the current module's importers.
: {keyword}`meta`
The contents of the imported module are made available at the {tech}[meta phase] in the current module.
: {keyword}`all`
The imported module's private scope is added to the current module's {tech}[private scope].
:::
::::
## Commands
%%%
tag := "commands"
%%%
{tech}[Commands] are top-level statements in Lean.
Some examples are inductive type declarations, theorems, function definitions, commands that modify the current scope such as `open` or `variable`, and interactive queries such as `#check`.
The syntax of commands is user-extensible, and commands may even {ref "language-extension"}[add new syntax that is used to parse subsequent commands].
Specific Lean commands are documented in the corresponding chapters of this manual, rather than being listed here.
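A few representative commands, shown here as a minimal sketch (the names are hypothetical):
```lean
def answer : Nat := 42                          -- a definition command

theorem answer_pos : 0 < answer := by decide    -- a theorem command

open Nat                                        -- a command affecting subsequent commands
```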
::: TODO
Make the index include links to all commands, then xref from here
:::
# Modules and Visibility
%%%
tag := "module-scopes"
%%%
:::paragraph
A {deftech}[module] is a source file that has opted in to a distinction between public and private information.
Lean ensures that private information can change without affecting clients that import only its public information.
This discipline brings a number of benefits:
: Much-improved average build times
Changes to files that affect only non-exported information (e.g. proofs, comments, and docstrings) will not trigger rebuilds outside of these files.
Even when dependent files have to be rebuilt, those files that cannot be affected (as determined by their {keywordOf Lean.Parser.Module.import}`import` annotations) can be skipped.
: Control over API evolution
Library authors can trust that changes to non-exported information will not affect downstream users of their library.
If only a function's signature is exposed, then downstream users cannot rely on definitional equalities that involve its unfolding; this means that the library's author is free to adopt a more efficient algorithm without unintentionally breaking client code.
: Avoiding accidental unfolding
Limiting the scope in which definitions can be unfolded avoids both reductions that should instead be handled by application of more specific theorems and unproductive reductions that were not in fact necessary.
This improves the speed of proof elaboration.
: Smaller executables
Separating compile-time and run-time code allows for more aggressive dead code elimination, guaranteeing that metaprograms such as tactics do not make it into the final binary.
: Reduced memory usage
Excluding private information such as proofs from importing can improve Lean's memory use both while building and editing a project.
Porting mathlib4 to the module system has shown savings close to 50% from this even before imports are further minimized.{TODO}[link and format of mathlib name consistent with rest of manual]
:::
:::paragraph
Modules contain two separate scopes: the {deftech}_public scope_ consists of information that is visible in modules that import a module, while the {deftech}_private scope_ consists of information that is generally visible only within the module.
Some examples of information that can be private or public include:
: Names
Constants (such as definitions, inductive types, or constructors) may be private or public.
A public constant's type may only refer to public names.
: Definitions
A public definition may be {deftech}[exposed] or not.
If a public definition is not exposed, then it cannot be unfolded in contexts that only have access to the public scope.
Instead, clients must rely on the theorems about the definition that are provided in the public scope.
:::
Each declaration has default visibility rules.
Generally speaking, all names are private by default, unless defined in a {tech}[public section].
Even public names usually place the bodies of definitions in the private scope, and even proofs in exposed definitions are kept private.
The specific visibility rules for each declaration command are documented together with the declaration itself.
::::example "Private and Public Definitions"
:::leanModules +error
The module {module}`Greet.Create` defines a function {name}`greeting`.
Because there are no visibility modifiers, this function defaults to the {tech}[private scope]:
```leanModule (moduleName := Greet.Create)
module
def greeting (name : String) : String :=
s!"Hello, {name}"
```
The definition of {name}`greeting` is not visible in the module {module}`Greet`, even though it imports {module}`Greet.Create`:
```leanModule (moduleName := Greet) (name := noRef)
module
import Greet.Create
def greetTwice (name1 name2 : String) : String :=
greeting name1 ++ "\n" ++ greeting name2
```
```leanOutput noRef
Unknown identifier `greeting`
```
:::
:::leanModules
If {name}`greeting` is made public, then {name}`greetTwice` can refer to it:
```leanModule (moduleName := Greet.Create)
module
public def greeting (name : String) : String :=
s!"Hello, {name}"
```
```leanModule (moduleName := Greet)
module
import Greet.Create
def greetTwice (name1 name2 : String) : String :=
greeting name1 ++ "\n" ++ greeting name2
```
:::
::::
::::example "Exposed and Unexposed Definitions"
:::leanModules +error
The module {module}`Greet.Create` defines a public function {name}`greeting`.
```leanModule (moduleName := Greet.Create)
module
public def greeting (name : String) : String :=
s!"Hello, {name}"
```
Although the definition of {name}`greeting` is visible in the module {module}`Greet`, it cannot be unfolded in a proof because the definition's body is in the {tech}[private scope] of {module}`Greet`:
```leanModule (moduleName := Greet) (name := nonExp)
module
import Greet.Create
def greetTwice (name1 name2 : String) : String :=
greeting name1 ++ "\n" ++ greeting name2
theorem greetTwice_is_greet_twice {name1 name2 : String} :
greetTwice name1 name2 = "Hello, " ++ name1 ++ "\n" ++ "Hello, " ++ name2 := by
simp [greetTwice, greeting]
```
```leanOutput nonExp
Invalid simp theorem `greeting`: Expected a definition with an exposed body
```
:::
:::leanModules
Adding the {attrs}`@[expose]` attribute exposes the definition so that downstream modules can unfold {name}`greeting`:
```leanModule (moduleName := Greet.Create)
module
@[expose]
public def greeting (name : String) : String :=
s!"Hello, {name}"
```
Now, the proof can proceed:
```leanModule (moduleName := Greet)
module
import Greet.Create
def greetTwice (name1 name2 : String) : String :=
greeting name1 ++ "\n" ++ greeting name2
theorem greetTwice_is_greet_twice {name1 name2 : String} :
greetTwice name1 name2 = "Hello, " ++ name1 ++ "\n" ++ "Hello, " ++ name2 := by
simp [greetTwice, greeting, toString]
grind [String.append_assoc]
```
:::
::::
:::::example "Proofs are Private"
::::leanModules
:::paragraph
In this module, the function {name}`incr` is public, but its implementation is not exposed:
```leanModule (moduleName := Main)
module
public def incr : Nat → Nat
| 0 => 1
| n + 1 => incr n + 1
public theorem incr_eq_plus1 : incr = (· + 1) := by
funext n
induction n <;> simp [incr, *]
```
:::
Nonetheless, the proof of the theorem {name}`incr_eq_plus1` can unfold its definition.
This is because proofs of theorems are in the private scope.
This is the case both for public and private theorems.
::::
:::::
The option {option}`backward.privateInPublic` can be used while transitioning from ordinary source files to modules.
When it is set to {lean}`true`, private definitions are exported, though their names are not accessible in importing modules.
However, references to them in the public part of their defining module are allowed.
Such references result in a warning unless the option {option}`backward.privateInPublic.warn` is set to {lean}`false`.
These warnings can be used to locate and eventually eliminate these references, allowing {option}`backward.privateInPublic` to be disabled.
Similarly, {option}`backward.proofsInPublic` causes proofs created with {keywordOf Lean.Parser.Term.by}`by` to be public, rather than private; this can enable {keywordOf Lean.Parser.Term.by}`by` to fill in metavariables in its expected type.
Most use cases for {option}`backward.proofsInPublic` also require that {option}`backward.privateInPublic` is enabled.
{optionDocs backward.privateInPublic}
{optionDocs backward.privateInPublic.warn}
{optionDocs backward.proofsInPublic}
::::example "Exporting Private Definitions"
:::leanModules
In the module {module}`L.Defs`, the public definition of {name}`f` refers to the private definition {name}`drop2` in its signature.
Because {option}`backward.privateInPublic` is {lean}`true`, this is allowed, resulting in a warning:
```leanModule (moduleName := L.Defs) (name := warnPub)
module
set_option backward.privateInPublic true
def drop2 (xs : List α) : List α := xs.drop 2
public def f (xs : List α) (transform : List α → List α := drop2) : List α :=
transform xs
```
```leanOutput warnPub
Private declaration `drop2` accessed publicly; this is allowed only because the `backward.privateInPublic` option is enabled.
Disable `backward.privateInPublic.warn` to silence this warning.
```
When the module is imported, references to {name}`f` use {name}`drop2` as a default argument value; however, its name is inaccessible in the module {module}`L`:
```leanModule (moduleName := L) (name := withPrivateInTerm)
module
import L.Defs
def xs := [1, 2, 3]
set_option pp.explicit true in
#check f xs
```
```leanOutput withPrivateInTerm
@f Nat xs (@drop2✝ Nat) : List Nat
```
:::
::::
::::example "Proofs in Public"
:::leanModules
In the plain source file {module}`NotMod`, the definition of {name}`two` uses the content of the proof to fill out the numeric value in the definition by solving a {tech}`metavariable`:
```leanModule (moduleName := NotMod)
structure Half (n : Nat) where
val : Nat
ok : val + val = n
abbrev two := Half.mk _ <| by
show 2 + 2 = 4
rfl
```
:::
:::leanModules +error
Converting this file to a module results in an error, because the body of the definition is exposed in the public part but the proof is private and thus cannot change the public type:
```leanModule (moduleName := Mod) (name := proofMeta)
module
public section
structure Half (n : Nat) where
val : Nat
ok : val + val = n
abbrev two := Half.mk _ <| by
show 2 + 2 = 4
rfl
```
```leanOutput proofMeta
tactic execution is stuck, goal contains metavariables
?m.3 + ?m.3 = ?m.5
```
:::
:::leanModules
Setting the option {option}`backward.proofsInPublic` causes the proof to be in the public part of the module so it can solve the metavariable:
```leanModule (moduleName := Mod)
module
public section
structure Half (n : Nat) where
val : Nat
ok : val + val = n
set_option backward.proofsInPublic true in
abbrev two := Half.mk _ <| by
show 2 + 2 = 4
rfl
```
:::
:::leanModules
However, it is typically better style to reformulate the definition so that the proof has a complete goal:
```leanModule (moduleName := Mod)
module
public section
structure Half (n : Nat) where
val : Nat
ok : val + val = n
abbrev two : Half 4 := Half.mk 2 <| by
rfl
```
:::
::::
The private scope of a module may be imported into another module using the {keywordOf Lean.Parser.Module.import}`all` modifier.
By default, this is only allowed if the imported module and the current module are from the same Lake {tech}[package], as its main purpose is to allow for separating definitions and proofs into separate modules for internal organization of a library.
The Lake package or library option {ref "Lake.PackageConfig allowImportAll" (domain := Manual.lakeTomlField)}`allowImportAll` can be set to allow other packages to access the current package's private scopes via {keywordOf Lean.Parser.Module.import}`import all`.
The imported private scope includes private imports of the imported module, including nested {keywordOf Lean.Parser.Module.import}`import all`s.
As a consequence, the set of private scopes accessible to the current module is the transitive closure of {keywordOf Lean.Parser.Module.import}`import all` declarations.
The module system's {keywordOf Lean.Parser.Module.import}`import all` is more powerful than {keywordOf Lean.Parser.Module.import}`import` without the module system.
It makes imported private definitions accessible directly by name, as if they were defined in the current module.
A secondary use case for {keywordOf Lean.Parser.Module.import}`import all` is to access code in multiple modules within a library that should nonetheless not be provided to downstream consumers, as well as to allow tests to access information that is not part of the public API.
::::example "Importing Private Information"
:::leanModules (moduleRoot := Tree) +error
This library separates a module of definitions from a module of lemmas.
This is a common pattern in Lean code.
```leanModule (moduleName := Tree.Basic)
module
public inductive Tree (α : Type u) : Type u where
| leaf
| branch (left : Tree α) (val : α) (right : Tree α)
public def Tree.count : Tree α → Nat
| .leaf => 0
| .branch left _ right => left.count + 1 + right.count
```
However, because {name}`Tree.count` is not exposed, the proof in the lemma file cannot unfold it:
```leanModule (moduleName := Tree.Lemmas) (name := lemmasNoAll)
module
public import Tree.Basic
theorem Tree.count_leaf_eq_zero : count (.leaf : Tree α) = 0 := by
simp [count]
```
```leanOutput lemmasNoAll
Invalid simp theorem `count`: Expected a definition with an exposed body
```
:::
:::leanModules (moduleRoot := Tree)
Importing the private scope from {module}`Tree.Basic` into the lemma module allows the definition to be unfolded in the proof.
```leanModule (moduleName := Tree.Basic)
module
public inductive Tree (α : Type u) : Type u where
| leaf
| branch (left : Tree α) (val : α) (right : Tree α)
public def Tree.count : Tree α → Nat
| .leaf => 0
| .branch left _ right => left.count + 1 + right.count
```
```leanModule (moduleName := Tree.Lemmas)
module
import all Tree.Basic
public import Tree.Basic
theorem Tree.count_leaf_eq_zero : count (.leaf : Tree α) = 0 := by
simp [count]
```
:::
::::
## The Meta Phase
%%%
tag := "meta-phase"
%%%
Definitions in Lean result in both a representation in the type theory that is designed for formal reasoning and a compiled representation that is designed for execution.
This compiled representation is used to generate machine code, but it can also be executed directly using an interpreter.
The code that runs during {tech -normalize}[elaboration], such as {ref "tactics"}[tactics] or {ref "macros"}[macros], is the compiled form of definitions.
If this compiled representation changes, then any code created by it may no longer be up to date, and it must be re-run.
Because the compiler performs non-trivial optimizations, changes to any definition in the transitive dependency chain of a function could in principle invalidate its compiled representation.
This means that metaprograms exported by modules induce a much stronger coupling than ordinary definitions.
Furthermore, metaprograms run _during_ the construction of ordinary terms; thus, they must be fully defined and compiled before use.
After all, a function definition without a body cannot be run.
The time at which metaprograms are run is referred to as the {deftech}_metaprogramming phase_, frequently just called the {deftech}_meta phase_.
Just as they distinguish between public and private information, modules additionally distinguish code that is available in the meta phase from ordinary code.
Any declaration used as an entry point to compile-time execution has to be tagged with the {keywordOf Lean.Parser.Module.import}`meta` modifier, which indicates that the declaration is available for use as a metaprogram.
This is automatically done in built-in metaprogramming syntax such as {keywordOf Lean.Parser.Command.syntax}`syntax`, {keywordOf Lean.Parser.Command.macro}`macro`, and {keywordOf Lean.Parser.Command.elab}`elab` but may need to be done explicitly when manually applying metaprogramming attributes such as {keyword}`app_delab` or when defining helper declarations.
A {keywordOf Parser.Command.declModifiers}`meta` definition may only access (and thus invoke) other {keywordOf Parser.Command.declModifiers}`meta` definitions in execution-relevant positions; a non-{keywordOf Parser.Command.declModifiers}`meta` definition likewise may only access other non-{keywordOf Parser.Command.declModifiers}`meta` definitions.
::::example "Meta Definitions"
:::leanModules +error
In this module, the helper function {name}`revArrays` reverses the order of the elements in each array literal in a term.
This is called by the macro {keyword}`rev!`.
```leanModule (moduleName := Main) (name := nonMeta)
module
open Lean
variable [Monad m] [MonadRef m] [MonadQuotation m]
partial def revArrays : Syntax → m Term
| `(#[$xs,*]) => `(#[$((xs : Array Term).reverse),*])
| other => do
match other with
| .node k i args =>
pure ⟨.node k i (← args.mapM revArrays)⟩
| _ => pure ⟨other⟩
macro "rev!" e:term : term => do
revArrays e
```
The error message indicates that {name}`revArrays` cannot be used from the macro because it is not defined in the module's {tech}[metaprogramming phase]:
```leanOutput nonMeta
Invalid `meta` definition `_aux___macroRules_termRev!__1`, `revArrays` not marked `meta`
```
:::
:::leanModules
Marking {name}`revArrays` with the {keywordOf Lean.Parser.Command.declModifiers}`meta` modifier allows the macro definition to call it:
```leanModule (moduleName := Main) (name := withMeta)
module
open Lean
variable [Monad m] [MonadRef m] [MonadQuotation m]
meta partial def revArrays : Syntax → m Term
| `(#[$xs,*]) => `(#[$((xs : Array Term).reverse),*])
| other => do
match other with
| .node k i args =>
pure ⟨.node k i (← args.mapM revArrays)⟩
| _ => pure ⟨other⟩
macro "rev!" e:term : term => do
revArrays e
#eval rev! #[1, 2, 3]
```
```leanOutput withMeta
#[3, 2, 1]
```
:::
::::
Libraries that were not originally part of the meta phase can be brought into it by importing a module with {keywordOf Parser.Module.import}`meta import`.
When a module is imported at the meta phase, all of its definitions are made available at that phase, whether or not they were marked {keywordOf Parser.Command.declModifiers}`meta`.
There is no meta-meta phase.
In addition to making the imported module's public contents available at the meta phase, {keywordOf Parser.Module.import}`meta import` indicates that the current module should be rebuilt if the compiled representation of the imported module changes, ensuring that modified metaprograms are re-run.
If a definition should be usable in both phases, then it must be defined in a separate module and imported at both phases.
::::example "Cross-Phase Code Reuse"
:::leanModules +error
In this module, the function {name}`toPalindrome` is defined in the meta phase, which allows it to be used in a macro but not in an ordinary definition:
```leanModule (moduleName := Phases) (name := bothPhases)
module
open Lean
variable [Monad m] [MonadRef m] [MonadQuotation m]
meta def toPalindrome (xs : Array α) : Array α := xs ++ xs.reverse
meta partial def palArrays : Syntax → m Term
| `(#[$xs,*]) => `(#[$(toPalindrome (xs : Array Term)),*])
| other => do
match other with
| .node k i args =>
pure ⟨.node k i (← args.mapM palArrays)⟩
| _ => pure ⟨other⟩
macro "pal!" e:term : term => do
palArrays e
#check pal! (#[1, 2, 3] ++ [6, 7, 8])
public def colors := toPalindrome #["red", "green", "blue"]
```
```leanOutput bothPhases
Invalid definition `colors`, may not access declaration `toPalindrome` marked as `meta`
```
:::
:::leanModules
Moving {name}`toPalindrome` to its own module, {module}`Phases.Pal`, allows this module to be imported at both phases:
```leanModule (moduleName := Phases.Pal)
module
public def toPalindrome (xs : Array α) : Array α := xs ++ xs.reverse
```
```leanModule (moduleName := Phases)
module
meta import Phases.Pal
import Phases.Pal
open Lean
variable [Monad m] [MonadRef m] [MonadQuotation m]
meta partial def palArrays : Syntax → m Term
| `(#[$xs,*]) => `(#[$(toPalindrome (xs : Array Term)),*])
| other => do
match other with
| .node k i args =>
pure ⟨.node k i (← args.mapM palArrays)⟩
| _ => pure ⟨other⟩
local macro "pal!" e:term : term => do
palArrays e
#check pal! (#[1, 2, 3] ++ [6, 7, 8])
public def colors := toPalindrome #["red", "green", "blue"]
```
If the macro {keyword}`pal!` were public (that is, if it was not declared with the {keyword}`local` modifier) then the {keywordOf Lean.Parser.Module.import}`meta import` of {module}`Phases.Pal` would need to be declared {keywordOf Lean.Parser.Module.import}`public` as well.
:::
::::
In addition, the import must be public if the imported definition may be executed at compile time outside the current module, that is, if it is reachable from some public {keywordOf Parser.Command.declModifiers}`meta` definition in the current module; in this case, use {keywordOf Parser.Module.import}`public meta import`.
If the declaration is already declared {keywordOf Parser.Command.declModifiers}`meta`, then {keywordOf Parser.Module.import}`public import` is sufficient.
Unlike definitions, most metaprograms are public by default.
Thus, most {keywordOf Lean.Parser.Module.import}`meta import`s are also {keywordOf Parser.Module.import}`public` in practice.
The exception is when a definition is imported solely for use in local metaprograms, such as those declared with {keywordOf Parser.Command.syntax}`local syntax`, {keywordOf Parser.Command.macro}`local macro`, or {keywordOf Parser.Command.elab}`local elab`.
As a guideline, it is usually preferable to keep the amount of {keywordOf Lean.Parser.Command.declModifiers}`meta` annotations as small as possible.
This avoids locking otherwise-reusable declarations into the {tech}[meta phase] and it helps the build system avoid more rebuilds.
Thus, when a metaprogram depends on other code that does not itself need to be marked {keywordOf Lean.Parser.Command.declModifiers}`meta`, this other code should be placed in a separate module and not marked {keywordOf Lean.Parser.Command.declModifiers}`meta`.
Only the final module that actually registers a metaprogram needs the helpers to be in the meta phase.
This module should use {keywordOf Lean.Parser.Module.import}`public meta import` to import those helpers and then define its metaprograms using built-in syntax like {keywordOf Parser.Command.elab}`elab`, using {keywordOf Lean.Parser.Command.declaration}`meta def`, or using {keywordOf Lean.Parser.Command.section}`meta section`.
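The recommended layout can be sketched as follows (module and definition names are hypothetical, and the code is illustrative rather than checked):
```
-- MyLib/Helpers.lean: reusable code, with no `meta` annotations
module

public def shout (s : String) : String := s.map Char.toUpper

-- MyLib/Tactic.lean: the only module that registers a metaprogram
module

public meta import MyLib.Helpers

macro "shout!" s:str : term => `($(Lean.quote (shout s.getString)))
```
Here, {lean}`shout` remains usable by ordinary code, while the macro in the second module brings it into the meta phase via the import.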
# Elaborated Modules
%%%
tag := "module-contents"
%%%
When Lean elaborates a source file, the result is an {tech}[environment].
The environment includes the constants, {tech}[inductive types], {tech}[theorems], {tech (key := "type class")}[type classes], {tech}[instances], and everything else declared in the file, along with side tables that track data as diverse as {tech}[simp sets], namespace aliases, and {tech}[documentation comments].
If the file contains a module, then the environment additionally tracks which information is public and private, and the phase at which definitions are available.
As the source file is processed by Lean, commands add content to the environment.
After elaboration, the environment is serialized to a {deftech (key:="olean")}[`.olean` file], which contains both the environment and a compacted heap region with the run-time objects needed by the environment.
This means that an imported source file can be loaded without re-executing all of its commands.
Environments that result from elaborating modules are serialized into three {tech (key:="olean")}[`.olean` files], containing the private, public, and server information in the environment.
The server information consists of data such as API documentation and source positions of definitions that is only needed when using the Lean language server and does not need to be loaded along with the public information in other contexts.
# Module System Errors and Patterns
:::paragraph
The following list contains common errors one might encounter when using the module system and especially porting existing files to the module system:
: Unknown constant errors
Check whether a private definition is being accessed in the {tech}[public scope].
If so, the problem can be solved by making the current declaration private as well, or by placing the reference into the private scope using the {keywordOf Lean.Parser.Term.structInstFieldDef}`private` modifier on a field or {keywordOf Lean.Parser.Term.by}`by` for a proof.
: Definitional equality errors, especially after porting
Failures of expected definitional equalities are usually due to a missing {attr}`expose` attribute on a definition or alternatively, if imported, an {keywordOf Lean.Parser.Module.import}`import all`.
Prefer the former if anyone outside your library might feasibly require the same access.
The error message should list non-exposed definitions that could not be unfolded.
This may also appear as a kernel error when a tactic directly emits proof terms that reference specific declarations without going through the elaborator, such as for proof by reflection.
In this case, there is no readily available trace for debugging; consider using {attrs}`@[expose]` {keywordOf Parser.Command.section}`section`s generously on the closure of relevant modules.
:::
## Recipe for Porting Existing Files
:::paragraph
To gain the benefits of the module system, source files must be made into modules.
Start by enabling the module system throughout all files with minimal breaking changes:
1. Prefix all files with {keywordOf Lean.Parser.Module.header}`module`.
2. Make all existing imports {keywordOf Lean.Parser.Command.declModifiers}`public` unless they will be used only in proofs.
* Add {keywordOf Lean.Parser.Module.import}`import all` when errors that mention references to private data occur.
* Add {keywordOf Lean.Parser.Module.import}`public meta import` when errors that mention “must be {keywordOf Lean.Parser.Module.import}`meta`” occur.
The {keywordOf Lean.Parser.Module.import}`public` may be omitted when defining local-only metaprograms.
3. Prefix the remainder of the file with `@[expose] public section` or, for programming-focused files, with {keywordOf Lean.Parser.Command.section}`public section`.
The latter should be used for programs that will be run but not reasoned about.
:::
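Applied to an existing file, these steps might look like this sketch (the module names and definitions are hypothetical):
```
module

public import MyLib.Basic
import MyLib.ProofLemmas   -- used only in proofs, so kept private

@[expose] public section

def twice (n : Nat) : Nat := n + n

theorem twice_eq (n : Nat) : twice n = 2 * n := by omega
```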
After an initial build under the module system succeeds, the dependencies between modules can be iteratively minimized.
In particular, removing uses of {keywordOf Lean.Parser.Command.declModifiers}`public` and {attrs}`@[expose]` will help avoid unnecessary rebuilds.
# Packages, Libraries, and Targets
%%%
tag := "code-distribution"
%%%
Lean modules are organized into {tech}_packages_, which are units of code distribution.
A {tech}[package] may contain multiple libraries or executables.
Code in a package that is intended for use by other Lean packages is organized into {deftech (key:="library")}[libraries].
Code that is intended to be compiled and run as independent programs is organized into {deftech (key:="executable")}[executables].
Packages, libraries, and executables are described in detail in the section on {ref "lake"}[Lake, the standard Lean build tool]. |
reference-manual/Manual/IO.lean |
import VersoManual
import Manual.Meta
import Manual.Papers
import Lean.Parser.Command
import Manual.IO.Console
import Manual.IO.Files
import Manual.IO.Threads
import Manual.IO.Ref
open Manual
open Verso.Genre
open Verso.Genre.Manual
open Verso.Genre.Manual.InlineLean
set_option pp.rawOnError true
set_option linter.unusedVariables false
#doc (Manual) "IO" =>
%%%
tag := "io"
%%%
Lean is a pure functional programming language.
While Lean code is strictly evaluated at run time, the order of evaluation that is used during type checking, especially while checking {tech}[definitional equality], is formally unspecified and makes use of a number of heuristics that improve performance but are subject to change.
This means that simply adding operations that perform side effects (such as file I/O, exceptions, or mutable references) would lead to programs in which the order of effects is unspecified.
During type checking, even terms with free variables are reduced; this would make side effects even more difficult to predict.
Finally, a basic principle of Lean's logic is that functions are _functions_ that map each element of the domain to a unique element of the range.
Including side effects such as console I/O, arbitrary mutable state, or random number generation would violate this principle.
:::::keepEnv
```lean -show
/-- A type -/
axiom α : Type
```
Programs that may have side effects have a type (typically {lean}`IO α`) that distinguishes them from pure functions.
Logically speaking, {lean}`IO` describes the sequencing and data dependencies of side effects.
Many of the basic side effects, such as reading from files, are opaque constants from the perspective of Lean's logic.
Others are specified by code that is logically equivalent to the run-time version.
At run time, the compiler produces ordinary code.
:::::
# Logical Model
:::::keepEnv
```lean -show
/-- A type -/
axiom α : Type
```
Conceptually, Lean distinguishes evaluation or reduction of terms from _execution_ of side effects.
Term reduction is specified by rules such as {tech}[β] and {tech}[δ], which may occur anywhere at any time.
Side effects, which must be executed in the correct order, are abstractly described in Lean's logic.
When programs are run, the Lean runtime system is responsible for actually carrying out the described effects.
The type {lean}`IO α` is a description of a process that, by performing side effects, should either return a value of type {lean}`α` or throw an error.
It can be thought of as a {tech}[state monad] in which the state is the entire world.
Just as a value of type {lean}`StateM Nat Bool` computes a {lean}`Bool` while having the ability to mutate a natural number, a value of type {lean}`IO Bool` computes a {lean}`Bool` while potentially changing the world.
Error handling is accomplished by layering an appropriate exception monad transformer on top of this.
:::::
Because the entire world can't be represented in memory, the actual implementation uses an abstract token that stands for its state.
The Lean runtime system is responsible for providing the initial token when the program is run, and each primitive action accepts a token that represents the world and returns another when finished.
This ensures that effects occur in the proper order, and it clearly separates the execution of side effects from the reduction semantics of Lean terms.
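The token-passing approach can be illustrated with a simplified model. This is a sketch for building intuition, not Lean's actual definitions, which are built from {name}`EStateM` and an abstract world type:
```lean
-- A stand-in for the real world; the actual token carries no data.
def World : Type := Unit

-- A simplified `IO`-like type: given the world, produce a result or
-- an error, along with the world that results from the effects.
def MyIO (α : Type) : Type := World → Except String α × World

def myPure (x : α) : MyIO α := fun w => (.ok x, w)

-- Passing the output token of the first action to the second fixes
-- the order in which effects are executed.
def myBind (m : MyIO α) (f : α → MyIO β) : MyIO β := fun w =>
  match m w with
  | (.ok x, w') => f x w'
  | (.error e, w') => (.error e, w')
```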
Non-termination via general recursion is treated separately from the effects described by {name}`IO`.
Programs that may not terminate due to infinite loops must be defined as {ref "partial-unsafe"}[`partial`] functions.
From the logical perspective, they are treated as arbitrary constants; {name}`IO` is not needed.
A very important property of {lean}`IO` is that there is no way for values to “escape”.
Without using one of a few clearly-marked unsafe operators, programs have no way to extract a pure {lean}`Nat` from an {lean}`IO Nat`.
This ensures that the correct ordering of side effects is preserved, and it ensures that programs that have side effects are clearly marked as such.
## The `IO`, `EIO`, and `BaseIO` Monads
%%%
tag := "io-monad"
%%%
There are two monads that are typically used for programs that interact with the real world:
* Actions in {lean}`IO` may throw exceptions of type {lean}`IO.Error` or modify the world.
* Actions in {lean}`BaseIO` can't throw exceptions, but they can modify the world.
This distinction makes it possible to determine from an action's type signature whether it can throw exceptions.
{lean}`BaseIO` actions are automatically promoted to {lean}`IO` as necessary.
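For example, {name}`IO.mkRef` is a {lean}`BaseIO` action, because allocating a mutable reference cannot fail; it can nonetheless be used directly in an {lean}`IO` action:
```lean
def countUp : IO Unit := do
  -- A `BaseIO` action, automatically promoted to `IO`:
  let r ← IO.mkRef 0
  r.set ((← r.get) + 1)
  IO.println s!"Counter: {← r.get}"
```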
{docstring BaseIO}
{docstring IO}
{lean}`IO` is a special case of {lean}`EIO`, in which the type of errors is a parameter.
In particular, {lean}`IO` is defined as {lean}`EIO IO.Error`.
In some circumstances, such as bindings to non-Lean libraries, it can be convenient to use {lean}`EIO` with a custom error type, which ensures that errors are handled at the boundaries between these and other {lean}`IO` actions.
```lean -show
-- Check claim in preceding paragraph
example : IO = EIO IO.Error := rfl
```
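As a sketch, a hypothetical binding to a database library might use a custom error type, converting to {lean}`IO.Error` only at the boundary. The names here are illustrative and are not part of any real library:
```lean
inductive DbError where
  | connectionFailed
  | queryFailed (msg : String)

def DbError.toIOError : DbError → IO.Error
  | .connectionFailed => .userError "database connection failed"
  | .queryFailed msg => .userError s!"query failed: {msg}"

-- A stand-in for a real low-level binding:
def rawQuery (q : String) : EIO DbError String :=
  throw (.queryFailed q)

-- At the boundary, every `DbError` must be translated:
def query (q : String) : IO String :=
  (rawQuery q).toIO DbError.toIOError
```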
{docstring EIO}
{docstring IO.lazyPure}
{docstring BaseIO.toIO}
{docstring BaseIO.toEIO}
{docstring EIO.toBaseIO}
{docstring EIO.toIO}
{docstring EIO.toIO'}
{docstring IO.toEIO}
## Errors and Error Handling in `IO`
%%%
tag := "io-monad-errors"
%%%
Error handling in the {lean}`IO` monad uses the same facilities as any other {tech}[exception monad].
In particular, throwing and catching exceptions uses the methods of the {name}`MonadExceptOf` {tech}[type class].
The exceptions thrown in {lean}`IO` have the type {lean}`IO.Error`.
The constructors of this type represent the low-level errors that occur on most operating systems, such as files not existing.
The most commonly used constructor is {name IO.Error.userError}`userError`, which covers all other cases and includes a string that describes the problem.
{docstring IO.Error}
{docstring IO.Error.toString}
{docstring IO.ofExcept}
{docstring EIO.catchExceptions}
{docstring IO.userError}
::::example "Throwing and Catching Errors"
:::ioExample
This program repeatedly demands a password, using exceptions for control flow.
The syntax used for exceptions is available in all exception monads, not just {lean}`IO`.
When an incorrect password is provided, an exception is thrown, which is caught by the loop that repeats the password check.
A correct password allows control to proceed past the check, terminating the loop, and any other exceptions are re-thrown.
```ioLean
def accessControl : IO Unit := do
IO.println "What is the password?"
let password ← (← IO.getStdin).getLine
if password.trimAscii.copy != "secret" then
throw (.userError "Incorrect password")
else return
def repeatAccessControl : IO Unit := do
repeat
try
accessControl
break
catch
| .userError "Incorrect password" =>
continue
| other =>
throw other
def main : IO Unit := do
repeatAccessControl
IO.println "Access granted!"
```
When run with this input:
```stdin
publicinfo
secondtry
secret
```
the program emits:
```stdout
What is the password?
What is the password?
What is the password?
Access granted!
```
:::
::::
# Control Structures
%%%
tag := "io-monad-control"
%%%
Normally, programs written in {lean}`IO` use {ref "monads-and-do"}[the same control structures as those written in other monads].
One additional looping helper is specific to {lean}`IO`:
{docstring IO.iterate}
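As a small illustration of {name}`IO.iterate`, this action repeatedly halves a number until it reaches zero, returning the number of steps taken; {name}`Sum.inl` continues the loop with a new state, while {name}`Sum.inr` terminates it with a result:
```lean
def stepsToZero (start : Nat) : IO Nat :=
  IO.iterate (start, 0) fun (n, steps) => do
    if n == 0 then
      return .inr steps
    else
      IO.println s!"n = {n}"
      return .inl (n / 2, steps + 1)
```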
{include 0 Manual.IO.Console}
{include 0 Manual.IO.Ref}
{include 0 Manual.IO.Files}
# System and Platform Information
%%%
tag := "platform-info"
%%%
{docstring System.Platform.numBits}
{docstring System.Platform.target}
{docstring System.Platform.isWindows}
{docstring System.Platform.isOSX}
{docstring System.Platform.isEmscripten}
# Environment Variables
%%%
tag := "io-monad-getenv"
%%%
{docstring IO.getEnv}
# Timing
%%%
tag := "io-timing"
%%%
{docstring IO.sleep}
{docstring IO.monoNanosNow}
{docstring IO.monoMsNow}
{docstring IO.getNumHeartbeats}
{docstring IO.addHeartbeats}
# Processes
%%%
tag := "io-processes"
%%%
## Current Process
{docstring IO.Process.getCurrentDir}
{docstring IO.Process.setCurrentDir}
{docstring IO.Process.exit}
{docstring IO.Process.getPID}
## Running Processes
There are three primary ways to run other programs from Lean:
1. {lean}`IO.Process.run` synchronously executes another program, returning its standard output as a string. It throws an error if the process exits with an error code other than `0`.
2. {lean}`IO.Process.output` synchronously executes another program with an empty standard input, capturing its standard output, standard error, and exit code. No error is thrown if the process terminates unsuccessfully.
3. {lean}`IO.Process.spawn` starts another program asynchronously and returns a data structure that can be used to access the process's standard input, output, and error streams.
{docstring IO.Process.run}
::::example "Running a Program"
When run, this program concatenates its own source code with itself twice using the Unix tool `cat`.
:::ioExample
```ioLean
-- Main.lean begins here
def main : IO Unit := do
let src2 ← IO.Process.run {cmd := "cat", args := #["Main.lean", "Main.lean"]}
IO.println src2
-- Main.lean ends here
```
Its output is:
```stdout
-- Main.lean begins here
def main : IO Unit := do
let src2 ← IO.Process.run {cmd := "cat", args := #["Main.lean", "Main.lean"]}
IO.println src2
-- Main.lean ends here
-- Main.lean begins here
def main : IO Unit := do
let src2 ← IO.Process.run {cmd := "cat", args := #["Main.lean", "Main.lean"]}
IO.println src2
-- Main.lean ends here
```
:::
::::
::::example "Running a Program on a File"
This program uses the Unix utility `grep` as a filter to find four-digit palindromes.
It creates a file that contains all numbers from {lean}`0` through {lean}`9999`, and then invokes `grep` on it, reading the result from its standard output.
:::ioExample
```ioLean
def main : IO Unit := do
-- Feed the input to the subprocess
IO.FS.withFile "numbers.txt" .write fun h =>
for i in [0:10000] do
h.putStrLn (toString i)
let palindromes ← IO.Process.run {
cmd := "grep",
args := #[r#"^\([0-9]\)\([0-9]\)\2\1$"#, "numbers.txt"]
}
let count := palindromes.trimAscii.split "\n" |>.count
IO.println s!"There are {count} four-digit palindromes."
```
Its output is:
```stdout
There are 90 four-digit palindromes.
```
:::
::::
{docstring IO.Process.output}
::::example "Checking Exit Codes"
When run, this program first invokes `cat` on a nonexistent file and displays the resulting error code.
It then concatenates its own source code with itself twice using the Unix tool `cat`.
:::ioExample
```ioLean
-- Main.lean begins here
def main : IO UInt32 := do
let src1 ← IO.Process.output {cmd := "cat", args := #["Nonexistent.lean"]}
IO.println s!"Exit code from failed process: {src1.exitCode}"
let src2 ← IO.Process.output {cmd := "cat", args := #["Main.lean", "Main.lean"]}
if src2.exitCode == 0 then
IO.println src2.stdout
else
IO.eprintln "Concatenation failed"
return 1
return 0
-- Main.lean ends here
```
Its output is:
```stdout
Exit code from failed process: 1
-- Main.lean begins here
def main : IO UInt32 := do
let src1 ← IO.Process.output {cmd := "cat", args := #["Nonexistent.lean"]}
IO.println s!"Exit code from failed process: {src1.exitCode}"
let src2 ← IO.Process.output {cmd := "cat", args := #["Main.lean", "Main.lean"]}
if src2.exitCode == 0 then
IO.println src2.stdout
else
IO.eprintln "Concatenation failed"
return 1
return 0
-- Main.lean ends here
-- Main.lean begins here
def main : IO UInt32 := do
let src1 ← IO.Process.output {cmd := "cat", args := #["Nonexistent.lean"]}
IO.println s!"Exit code from failed process: {src1.exitCode}"
let src2 ← IO.Process.output {cmd := "cat", args := #["Main.lean", "Main.lean"]}
if src2.exitCode == 0 then
IO.println src2.stdout
else
IO.eprintln "Concatenation failed"
return 1
return 0
-- Main.lean ends here
```
:::
::::
{docstring IO.Process.spawn}
::::example "Asynchronous Subprocesses"
This program uses the Unix utility `grep` as a filter to find four-digit palindromes.
It feeds all numbers from {lean}`0` through {lean}`9999` to the `grep` process and then reads its result.
This code is only correct when `grep` is sufficiently fast and when the output pipe is large enough to contain all 90 four-digit palindromes.
:::ioExample
```ioLean
def main : IO Unit := do
let grep ← IO.Process.spawn {
cmd := "grep",
args := #[r#"^\([0-9]\)\([0-9]\)\2\1$"#],
stdin := .piped,
stdout := .piped,
stderr := .null
}
-- Feed the input to the subprocess
for i in [0:10000] do
grep.stdin.putStrLn (toString i)
-- Consume its output, after waiting 100ms for grep to process the data.
IO.sleep 100
let count := (← grep.stdout.readToEnd).trimAscii.split "\n" |>.count
IO.println s!"There are {count} four-digit palindromes."
```
Its output is:
```stdout
There are 90 four-digit palindromes.
```
:::
::::
{docstring IO.Process.SpawnArgs}
{docstring IO.Process.StdioConfig}
{docstring IO.Process.Stdio}
{docstring IO.Process.Stdio.toHandleType}
{docstring IO.Process.Child}
{docstring IO.Process.Child.wait}
{docstring IO.Process.Child.tryWait}
{docstring IO.Process.Child.kill}
{docstring IO.Process.Child.takeStdin}
::::example "Closing a Subprocess's Standard Input"
This program uses the Unix utility `grep` as a filter to find four-digit palindromes, ensuring that the subprocess terminates successfully.
It feeds all numbers from {lean}`0` through {lean}`9999` to the `grep` process, then closes the process's standard input, which causes it to terminate.
After checking `grep`'s exit code, the program extracts its result.
:::ioExample
```ioLean
def main : IO UInt32 := do
let grep ← do
let (stdin, child) ← (← IO.Process.spawn {
cmd := "grep",
args := #[r#"^\([0-9]\)\([0-9]\)\2\1$"#],
stdin := .piped,
stdout := .piped,
stderr := .null
}).takeStdin
-- Feed the input to the subprocess
for i in [0:10000] do
stdin.putStrLn (toString i)
-- Return the child without its stdin handle.
-- This closes the handle, because there are
-- no more references to it.
pure child
-- Wait for grep to terminate
if (← grep.wait) != 0 then
IO.eprintln s!"grep terminated unsuccessfully"
return 1
-- Consume its output
let count := (← grep.stdout.readToEnd).trimAscii.split "\n" |>.count
IO.println s!"There are {count} four-digit palindromes."
return 0
```
Its output is:
```stdout
There are 90 four-digit palindromes.
```
:::
::::
{docstring IO.Process.Output}
# Random Numbers
{docstring IO.setRandSeed}
{docstring IO.rand}
{docstring randBool}
{docstring randNat}
## Random Generators
{docstring RandomGen}
{docstring StdGen +hideStructureConstructor +hideFields}
{docstring stdRange}
{docstring stdNext}
{docstring stdSplit}
{docstring mkStdGen}
## System Randomness
{docstring IO.getRandomBytes}
{include 0 Manual.IO.Threads} |
reference-manual/Manual/Grind.lean | import VersoManual
import Lean.Parser.Term
import Manual.Meta
import Manual.Papers
import Manual.Grind.ConstraintPropagation
import Manual.Grind.CongrClosure
import Manual.Grind.CaseAnalysis
import Manual.Grind.EMatching
import Manual.Grind.Cutsat
import Manual.Grind.Algebra
import Manual.Grind.Linarith
import Manual.Grind.Annotation
import Manual.Grind.ExtendedExamples
-- Needed for the if-then-else normalization example.
import Std.Data.TreeMap
import Std.Data.HashMap
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
open Verso.Doc.Elab (CodeBlockExpander)
open Lean.Elab.Tactic.GuardMsgs.WhitespaceMode
set_option pp.rawOnError true
-- TODO (@kim-em): `Lean.Grind.AddCommMonoid` and `Lean.Grind.AddCommGroup` are not yet documented.
set_option verso.docstring.allowMissing true
set_option linter.unusedVariables false
set_option linter.typography.quotes true
set_option linter.typography.dashes true
-- The verso default max line length is 60, which is very restrictive.
-- TODO: discuss with David.
set_option verso.code.warnLineLength 72
open Manual (comment)
#doc (Manual) "The `grind` tactic" =>
%%%
tag := "grind-tactic"
%%%
:::tutorials
* {ref "grind-index-map" (remote := "tutorials")}[Using `grind` for Ordered Maps]
:::
```lean -show
-- Open some namespaces for the examples.
open Lean Lean.Grind Lean.Meta.Grind
```
The {tactic}`grind` tactic uses techniques inspired by modern SMT solvers to automatically construct proofs.
It produces proofs by incrementally collecting sets of facts, deriving new facts from the existing ones using a set of cooperating techniques.
Behind the scenes, all proofs are by contradiction, so there is no operational distinction between the expected conclusion and the premises; {tactic}`grind` always attempts to derive a contradiction.
Picture a virtual whiteboard.
Every time {tactic}`grind` discovers a new equality, inequality, or Boolean literal, it writes that fact on the board, merges equivalent terms into buckets, and invites each engine to read from—and add back to—the shared whiteboard.
In particular, because all true propositions are equal to {lean}`True` and all false propositions are equal to {lean}`False`, {tactic}`grind` tracks a set of known facts as part of tracking equivalence classes.
:::paragraph
The cooperating engines are:
* {tech}[congruence closure],
* {tech}[constraint propagation],
* {tech}[E‑matching],
* guided {ref "grind-split"}[case analysis], and
* a suite of satellite theory solvers, including both {ref "cutsat"}[linear integer arithmetic] and {ref "grind-ring"}[commutative rings].
Like other tactics, {tactic}`grind` produces ordinary Lean proof terms for every fact it adds.
Lean’s standard library is already annotated with `@[grind]` attributes, so common lemmas are discovered automatically.
:::
{tactic}`grind` is *not* designed for goals whose search space explodes combinatorially—think large‑`n` pigeonhole instances, graph‑coloring reductions, high‑order N‑queens boards, or a 200‑variable Sudoku encoded as Boolean constraints.
Such encodings require thousands (or millions) of case‑splits that overwhelm {tactic}`grind`’s branching search.
For bit‑level or pure Boolean combinatorial problems, use {tactic}`bv_decide`. The {tactic}`bv_decide` tactic calls a state‑of‑the‑art SAT solver (e.g. CaDiCaL or Kissat) and then returns a compact, machine‑checkable certificate.
All heavy search happens outside Lean; the certificate is replayed and verified inside Lean, so trust is preserved (verification time scales with certificate size).
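For instance, a fixed-width bit-vector goal like the following is a better fit for {tactic}`bv_decide` than for {tactic}`grind` (checking it requires a SAT solver, such as CaDiCaL, to be available):
```lean
-- The absorption law for bitwise operations on 8-bit vectors:
example (x y : BitVec 8) :
    x &&& (x ||| y) = x := by
  bv_decide
```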
:::TODO
Include this when it's available:
* *Full SMT problems that need substantial case analysis across multiple theories* (arrays, bit‑vectors, rich arithmetic, quantifiers, …) → use the forthcoming *`lean‑smt`* tactic—a tight Lean front‑end for CVC5 that replays unsat cores or models inside Lean.
:::
:::example "Congruence Closure" (open := true)
This proof succeeds instantly using {tech}[congruence closure], which discovers sets of equal terms.
```lean
example (a b c : Nat) (h₁ : a = b) (h₂ : b = c) :
a = c := by
grind
```
:::
:::example "Algebraic Reasoning" (open := true)
This proof uses {tactic}`grind`'s commutative ring solver.
```lean -show
open Lean.Grind
```
```lean
example [CommRing α] [NoNatZeroDivisors α] (a b c : α) :
a + b + c = 3 →
a ^ 2 + b ^ 2 + c ^ 2 = 5 →
a ^ 3 + b ^ 3 + c ^ 3 = 7 →
a ^ 4 + b ^ 4 = 9 - c ^ 4 := by
grind
```
:::
:::example "Finite-Field Reasoning" (open := true)
Arithmetic operations on {name}`Fin` overflow, wrapping around modulo the bound when the result would otherwise be out of range.
{tactic}`grind` can use this fact to prove theorems such as this:
```lean
example (x y : Fin 11) :
x ^ 2 * y = 1 →
x * y ^ 2 = y →
y * x = 1 := by
grind
```
:::
:::example "Linear Integer Arithmetic with Case Analysis" (open := true)
```lean
example (x y : Int) :
27 ≤ 11 * x + 13 * y →
11 * x + 13 * y ≤ 45 →
-10 ≤ 7 * x - 9 * y →
7 * x - 9 * y ≤ 4 →
False := by
grind
```
:::
# Error Messages
%%%
tag := "grind-errors"
%%%
When {tactic}`grind` fails, it prints the remaining subgoal followed by all the information returned by its subsystems—the contents of the “shared whiteboard.”
In particular, it presents equivalence classes of terms that it has determined to be equal.
The two largest classes are shown as `True propositions` and `False propositions`, listing every literal currently known to be provable or refutable.
Inspect these lists to spot missing facts or contradictory assumptions.
# Minimizing `grind` Calls
The `grind only [...]` tactic invokes {tactic}`grind` with a limited set of theorems, which can improve performance.
Calls to `grind only` can be conveniently constructed using {tactic}`grind?`, which automatically records the theorems used by {tactic}`grind` and suggests a suitable `grind only`.
These theorems will typically include a symbol prefix such as `=`, `←`, or `→`, indicating the
pattern that triggered the instantiation. See the {ref "e-matching"}[section on E-matching] for details.
Some theorems may be labelled with a `usr` prefix, which indicates that a custom pattern was used.
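As an illustrative sketch, a `grind only` call produced from a `grind?` suggestion might look like this; the exact lemma list and prefix symbols depend on `grind?`'s output for the goal at hand:
```lean
example (n : Nat) : n + 0 = n := by
  grind only [= Nat.add_zero]
```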
{include 1 Manual.Grind.CongrClosure}
{include 1 Manual.Grind.ConstraintPropagation}
{include 1 Manual.Grind.CaseAnalysis}
{include 1 Manual.Grind.EMatching}
{include 1 Manual.Grind.Cutsat}
{include 1 Manual.Grind.Algebra}
{include 1 Manual.Grind.Linarith}
{include 1 Manual.Grind.Annotation}
# Reducibility
{tech}[Reducible] definitions in terms are eagerly unfolded by {tactic}`grind`.
This enables more efficient definitional equality comparisons and indexing.
:::example "Reducibility and Congruence Closure"
The definition of {name}`one` is not {tech}[reducible]:
```lean
def one := 1
```
This means that {tactic}`grind` does not unfold it:
```lean +error (name := noUnfold)
example : one = 1 := by grind
```
```leanOutput noUnfold
`grind` failed
case grind
h : ¬one = 1
⊢ False
[grind] Goal diagnostics
[facts] Asserted facts
[eqc] False propositions
[cutsat] Assignment satisfying linear constraints
```
{name}`two`, on the other hand, is an abbreviation and thus reducible:
```lean
abbrev two := 2
```
{tactic}`grind` unfolds {name}`two` before adding it to the “whiteboard”, allowing the proof to be completed immediately:
```lean
example : two = 2 := by grind
```
:::
E-matching patterns also unfold reducible definitions.
The patterns generated for theorems about abbreviations are expressed in terms of the unfolded abbreviations.
Abbreviations should not generally be recursive; in particular, when using {tactic}`grind`, recursive abbreviations can result in poor indexing performance and unpredictable patterns.
:::example "E-matching and Unfolding Abbreviations"
When adding {attr}`grind` annotations to theorems, E-matching patterns are generated based on the theorem statement.
These patterns determine when the theorem is instantiated.
The theorem {name}`one_eq_1` mentions the {tech}[semireducible] definition {name}`one`, and the resulting pattern is also {name}`one`:
```lean (name := one_eq_1)
def one := 1
@[grind? =]
theorem one_eq_1 : one = 1 := by rfl
```
```leanOutput one_eq_1
one_eq_1: [one]
```
Applying the same annotation to a theorem about the {tech}`reducible` abbreviation {name}`two` results in a pattern in which {name}`two` is unfolded:
```lean (name := two_eq_2)
abbrev two := 2
@[grind? =]
theorem two_eq_2: two = 2 := by grind
```
```leanOutput two_eq_2
two_eq_2: [@OfNat.ofNat `[Nat] `[2] `[instOfNatNat 2]]
```
:::
:::example "Recursive Abbreviations and `grind`"
Using the {attr}`grind` attribute to add E-matching patterns for a recursive abbreviation's {tech}[equational lemmas] does not result in useful patterns for recursive abbreviations.
The {attrs}`@[grind?]` attribute on this definition of the Fibonacci function results in three patterns, each corresponding to one of the three possibilities:
```lean (name := fib1) -keep
@[grind?]
def fib : Nat → Nat
| 0 => 0
| 1 => 1
| n + 2 => fib n + fib (n + 1)
```
```leanOutput fib1
fib.eq_1: [fib `[0]]
```
```leanOutput fib1
fib.eq_2: [fib `[1]]
```
```leanOutput fib1
fib.eq_3: [fib (#0 + 2)]
```
Replacing the definition with an abbreviation results in patterns in which occurrences of the function are unfolded.
These patterns are not particularly useful:
```lean (name := fib2) -keep
@[grind?]
abbrev fib : Nat → Nat
| 0 => 0
| 1 => 1
| n + 2 => fib n + fib (n + 1)
```
```leanOutput fib2
fib.eq_1: [@OfNat.ofNat `[Nat] `[0] `[instOfNatNat 0]]
```
```leanOutput fib2
fib.eq_2: [@OfNat.ofNat `[Nat] `[1] `[instOfNatNat 1]]
```
```leanOutput fib2
fib.eq_3: [@HAdd.hAdd `[Nat] `[Nat] `[Nat] `[instHAdd] (fib #0) (fib (#0 + 1))]
```
:::
```comment
# Diagnostics
TBD
Threshold notices, learned equivalence classes, integer assignments, algebraic basis, performed splits, instance statistics.
# Troubleshooting & FAQ
TBD
```
{include 1 Manual.Grind.ExtendedExamples} |
reference-manual/Manual/ErrorExplanations.lean | import Manual.Meta.ErrorExplanation
import Manual.ErrorExplanations.CtorResultingTypeMismatch
import Manual.ErrorExplanations.DependsOnNoncomputable
import Manual.ErrorExplanations.InductionWithNoAlts
import Manual.ErrorExplanations.InductiveParamMismatch
import Manual.ErrorExplanations.InductiveParamMissing
import Manual.ErrorExplanations.InferBinderTypeFailed
import Manual.ErrorExplanations.InferDefTypeFailed
import Manual.ErrorExplanations.InvalidDottedIdent
import Manual.ErrorExplanations.InvalidField
import Manual.ErrorExplanations.ProjNonPropFromProp
import Manual.ErrorExplanations.PropRecLargeElim
import Manual.ErrorExplanations.RedundantMatchAlt
import Manual.ErrorExplanations.SynthInstanceFailed
import Manual.ErrorExplanations.UnknownIdentifier
open Lean
open Verso.Doc Elab
open Verso.Genre Manual
/- Renders the suffix of an error explanation, allowing line breaks before capital letters. -/
inline_extension Inline.errorExplanationShortName (errorName : Name) where
data := toJson (getBreakableSuffix errorName)
traverse := fun _ _ _ => pure none
extraCss := [".error-explanation-short-name { hyphenate-character: ''; }"]
toTeX := none
toHtml := some fun _go _id info _content =>
open Verso.Output Html in do
let .ok (some errorName) := fromJson? (α := Option String) info
| HtmlT.logError "Invalid data for explanation name element"
pure .empty
let html := {{ <code class="error-explanation-short-name">{{errorName}}</code> }}
return html
/--
Renders a table-of-contents like summary of the error explanations defined by the current Lean
implementation.
-/
@[block_command]
def error_explanation_table : BlockCommandOf Unit
| () => do
let entries ← getErrorExplanations
let columns := 4
let header := true
let name := "error-explanation-table"
let alignment : Option TableConfig.Alignment := none
let headers ← #["Name", "Summary", "Severity", "Since"]
|>.mapM fun s => ``(Verso.Doc.Block.para #[Inline.text $(quote s)])
let vals ← entries.flatMapM fun (name, explan) => do
let sev := quote <| if explan.metadata.severity == .warning then "Warning" else "Error"
let sev ← ``(Inline.text $sev)
let nameLink ←
``(Inline.other (Inline.ref $(quote name.toString) $(quote errorExplanationDomain) Option.none)
#[Inline.other (Inline.errorExplanationShortName $(quote name)) #[]])
let summary ← ``(Inline.text $(quote explan.metadata.summary))
let since ← ``(Inline.text $(quote explan.metadata.sinceVersion))
#[nameLink, summary, sev, since]
|>.mapM fun s => ``(Verso.Doc.Block.para #[$s])
let blocks := (headers ++ vals).map fun c => Syntax.TSepArray.mk #[c]
``(Block.other (Block.table $(quote columns) $(quote header) $(quote name) $(quote alignment)) #[Block.ul #[$[Verso.Doc.ListItem.mk #[$blocks,*]],*]])
#doc (Manual) "Error Explanations" =>
%%%
number := false
htmlToc := false
%%%
This section provides explanations of errors and warnings that may be generated
by Lean when processing a source file. All error names listed below have the
`lean` package prefix.
{error_explanation_table}
{include 0 Manual.ErrorExplanations.CtorResultingTypeMismatch}
{include 0 Manual.ErrorExplanations.DependsOnNoncomputable}
{include 0 Manual.ErrorExplanations.InductionWithNoAlts}
{include 0 Manual.ErrorExplanations.InductiveParamMismatch}
{include 0 Manual.ErrorExplanations.InductiveParamMissing}
{include 0 Manual.ErrorExplanations.InferBinderTypeFailed}
{include 0 Manual.ErrorExplanations.InferDefTypeFailed}
{include 0 Manual.ErrorExplanations.InvalidDottedIdent}
{include 0 Manual.ErrorExplanations.InvalidField}
{include 0 Manual.ErrorExplanations.ProjNonPropFromProp}
{include 0 Manual.ErrorExplanations.PropRecLargeElim}
{include 0 Manual.ErrorExplanations.RedundantMatchAlt}
{include 0 Manual.ErrorExplanations.SynthInstanceFailed}
{include 0 Manual.ErrorExplanations.UnknownIdentifier} |
reference-manual/Manual/Terms.lean | import VersoManual
import Manual.Meta
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
set_option pp.rawOnError true
set_option linter.unusedVariables false
open Lean.Elab.Tactic.GuardMsgs.WhitespaceMode
set_option linter.constructorNameAsVariable false
set_option guard_msgs.diff true
#doc (Manual) "Terms" =>
%%%
tag := "terms"
%%%
{deftech}_Terms_ are the principal means of writing mathematics and programs in Lean.
The {deftech (key := "Lean elaborator")}[elaborator] translates them to Lean's minimal core language, which is then checked by the kernel and compiled for execution.
The syntax of terms is {ref "syntax-ext"}[arbitrarily extensible]; this chapter documents the term syntax that Lean provides out-of-the-box.
# Identifiers
%%%
tag := "identifiers-and-resolution"
%%%
:::syntax term (title := "Identifiers")
```
$x:ident
```
:::
An identifier term is a reference to a name.{margin}[The specific lexical syntax of identifiers is described {ref "keywords-and-identifiers"}[in the section on Lean's concrete syntax].]
Identifiers also occur in contexts where they bind names, such as {keywordOf Lean.Parser.Term.let}`let` and {keywordOf Lean.Parser.Term.fun}`fun`; however, these binding occurrences are not complete terms in and of themselves.
The mapping from identifiers to names is not trivial: at any point in a {tech}[module], some number of {tech}[namespaces] will be open, there may be {tech}[section variables], and there may be local bindings.
Furthermore, identifiers may contain multiple dot-separated atomic identifiers; the dot both separates namespaces from their contents and variables from fields or functions that use {tech}[field notation].
This creates ambiguity, because an identifier `A.B.C.D.e.f` could refer to any of the following:
* A name `f` in the namespace `A.B.C.D.e` (for instance, a function defined in `e`'s {keywordOf Lean.Parser.Command.declaration}`where` block).
* An application of `T.f` to `A.B.C.D.e`, if `A.B.C.D.e` has type `T`.
* A projection of field `f` from a structure named `A.B.C.D.e`.
* A series of field projections `B.C.D.e` from structure value `A`, followed by an application of `f` using field notation.
* If namespace `Q` is opened, any of the above with a `Q` prefix, such as a name `f` in the namespace `Q.A.B.C.D.e`.
This list is not exhaustive.
Given an identifier, the elaborator must discover which name or names an identifier refers to, and whether any of the trailing components are fields or functions applied via field notation.
This is called {deftech (key := "resolve")}_resolving_ the name.
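A small example (with illustrative names) shows the two most common readings of a dotted identifier:
```lean
structure Config where
  size : Nat

def Config.double (c : Config) : Nat := c.size * 2

def conf : Config := ⟨21⟩

-- Field notation: `conf` is not a namespace, so `double` is resolved
-- in the namespace of `conf`'s type, `Config`.
#eval conf.double

-- The same constant, named directly through its namespace:
#eval Config.double conf
```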
Some declarations in the global environment are lazily created the first time they are referenced.
Resolving an identifier in a way that both creates one of these declarations and results in a reference to it is called {deftech}_realizing_ the name.
The rules for resolving and realizing a name are the same, so even though this section refers only to resolving names, it applies to both.
Name resolution is affected by the following:
* {tech (key := "pre-resolved identifier")}[Pre-resolved names] attached to the identifier
* The {tech}[macro scopes] attached to the identifier
* The local bindings in scope, including auxiliary definitions created as part of the elaboration of {keywordOf Lean.Parser.Term.letrec}`let rec`.
* Aliases created with {keywordOf Lean.Parser.Command.export}`export` in modules transitively imported by the current module
* The current {tech}[section scope], in particular the {tech}[current namespace], opened namespaces, and section variables
Any prefix of an identifier can resolve to a set of names.
The suffix that was not included in the resolution process is then treated as field projections or field notation.
Resolutions of longer prefixes take precedence over resolutions of shorter prefixes; in other words, as few components of the identifier as possible are treated as field notation.
An identifier prefix may refer to any of the following, with earlier items taking precedence over later ones:
1. A locally-bound variable whose name is identical to the identifier prefix, including macro scopes, with closer local bindings taking precedence over outer local bindings.
2. A local auxiliary definition whose name is identical to the identifier prefix
3. A {tech}[section variable] whose name is identical to the identifier prefix
4. A global name that is identical to a prefix of the {tech}[current namespace] appended to the identifier prefix, or for which an alias exists in a prefix of the current namespace, with longer prefixes of the current namespace taking precedence over shorter ones
5. A global name that has been brought into scope via {keywordOf Lean.Parser.Command.open}`open` commands that is identical to the identifier prefix
If an identifier resolves to multiple names, then the elaborator attempts to use all of them.
If exactly one of them succeeds, then it is used as the meaning of the identifier.
It is an error if more than one succeed or if all fail.
::::keepEnv
:::example "Local Names Take Precedence"
Local bindings take precedence over global bindings:
```lean (name := localOverGlobal)
def x := "global"
#eval
  let x := "local"
  x
```
```leanOutput localOverGlobal
"local"
```
The innermost local binding of a name takes precedence over others:
```lean (name := innermostLocal)
#eval
  let x := "outer"
  let x := "inner"
  x
```
```leanOutput innermostLocal
"inner"
```
:::
::::
::::keepEnv
:::example "Longer Prefixes of Current Namespace Take Precedence"
The namespaces `A`, `B`, and `C` are nested.
Both `A` and `C` contain a definition of `x`.
```lean (name := NS)
namespace A
def x := "A.x"
namespace B
namespace C
def x := "A.B.C.x"
```
When the current namespace is `A.B.C`, {lean}`x` resolves to {lean}`A.B.C.x`.
```lean (name := NSC)
#eval x
```
```leanOutput NSC
"A.B.C.x"
```
When the current namespace is `A.B`, {lean}`x` resolves to {lean}`A.x`.
```lean (name := NSB)
end C
#eval x
```
```leanOutput NSB
"A.x"
```
:::
::::
::::keepEnv
:::example "Longer Identifier Prefixes Take Precedence"
When an identifier could refer to different projections from names, the one with the longest name takes precedence:
```lean
structure A where
  y : String
deriving Repr

structure B where
  y : A
deriving Repr
def y : B := ⟨⟨"shorter"⟩⟩
def y.y : A := ⟨"longer"⟩
```
Given the above declarations, {lean}`y.y.y` could in principle refer either to the {name A.y}`y` field of the {name B.y}`y` field of {name}`y`, or to the {name A.y}`y` field of {name}`y.y`.
It refers to the {name A.y}`y` field of {name}`y.y`, because the name {name}`y.y` is a longer prefix of `y.y.y` than the name {name}`y`:
```lean (name := yyy)
#eval y.y.y
```
```leanOutput yyy
"longer"
```
:::
::::
::::keepEnv
:::example "Current Namespace Contents Take Precedence Over Opened Namespaces"
When an identifier could refer either to a name defined in a prefix of the current namespace or to an opened namespace, the former takes precedence.
```lean
namespace A
def x := "A.x"
end A
namespace B
def x := "B.x"
namespace C
open A
#eval x
```
Even though `A` was opened more recently than the declaration of {name}`B.x`, the identifier `x` resolves to {name}`B.x` rather than {name}`A.x` because `B` is a prefix of the current namespace `B.C`.
```lean (name := nestedVsOpen)
#eval x
```
```leanOutput nestedVsOpen
"B.x"
```
:::
::::
:::example "Ambiguous Identifiers"
In this example, `x` could refer either to {name}`A.x` or {name}`B.x`, and neither takes precedence.
Because both have the same type, it is an error.
```lean (name := ambi) +error
def A.x := "A.x"
def B.x := "B.x"
open A
open B
#eval x
```
```leanOutput ambi (whitespace := lax)
Ambiguous term
x
Possible interpretations:
B.x : String
A.x : String
```
:::
:::example "Disambiguation via Typing"
When otherwise-ambiguous names have different types, the types are used to disambiguate:
```lean (name := ambiNo)
def C.x := "C.x"
def D.x := 3
open C
open D
#eval (x : String)
```
```leanOutput ambiNo
"C.x"
```
:::
## Leading `.`
When an identifier begins with a dot (`.`), the type that the elaborator expects for the expression is used to resolve it, rather than the current namespace and set of open namespaces.
{tech}[Generalized field notation] is related: this {deftech}_leading dot notation_ uses the expected type of the identifier to resolve it to a name, while field notation uses the inferred type of the term immediately prior to the dot.
Identifiers with a leading `.` are to be looked up in the {deftech}_expected type's namespace_.
If the type expected for a term is a constant applied to zero or more arguments, then its namespace is the constant's name.
If the type is not an application of a constant (e.g., a function, a metavariable, or a universe) then it doesn't have a namespace.
If the name is not found in the expected type's namespace, but the constant can be unfolded to yield another constant, then its namespace is consulted.
This process is repeated until something other than an application of a constant is encountered, or until the constant can't be unfolded.
::::keepEnv
:::example "Leading `.`"
The expected type for {name List.replicate}`.replicate` is `List Unit`.
This type's namespace is `List`, so {name List.replicate}`.replicate` resolves to {name List.replicate}`List.replicate`.
```lean (name := dotRep)
#eval show List Unit from .replicate 3 ()
```
```leanOutput dotRep
[(), (), ()]
```
:::
:::example "Leading `.` and Unfolding Definitions"
The expected type for {name List.replicate}`.replicate` is `MyList Unit`.
This type's namespace is `MyList`, but there is no definition `MyList.replicate`.
Unfolding {lean}`MyList Unit` yields {lean}`List Unit`, so {name List.replicate}`.replicate` resolves to {name List.replicate}`List.replicate`.
```lean (name := dotRep2)
def MyList α := List α

#eval show MyList Unit from .replicate 3 ()
```
```leanOutput dotRep2
[(), (), ()]
```
:::
::::
# Function Types
%%%
tag := "function-types"
%%%
Lean's function types describe more than just the function's domain and codomain.
They also provide instructions for elaborating application sites by indicating that some parameters are to be discovered automatically via unification or {ref "instance-synth"}[type class synthesis], that others are optional with default values, and that yet others should be synthesized using a custom tactic script.
Furthermore, their syntax contains support for abbreviating {tech (key := "currying")}[curried] functions.
:::syntax term (title := "Function types")
Dependent function types include an explicit name:
```grammar
($x:ident : $t) → $t2
```
Non-dependent function types do not:
```grammar
$t1:term → $t2
```
:::
:::syntax term (title := "Curried Function Types")
Dependent function types may include multiple parameters that have the same type in a single set of parentheses:
```grammar
($x:ident* : $t) → $t
```
This is equivalent to repeating the type annotation for each parameter name in a nested function type.
:::
:::syntax term (title := "Implicit, Optional, and Auto Parameters")
Function types can describe functions that take implicit, instance implicit, optional, and automatic parameters.
All but instance implicit parameters require one or more names.
```grammar
($x:ident* : $t := $e) → $t
```
```grammar
($x:ident* : $t := by $tacs) → $t
```
```grammar
{$x:ident* : $t} → $t
```
```grammar
[$t] → $t
```
```grammar
[$x:ident : $t] → $t
```
```grammar
⦃$x:ident* : $t⦄ → $t
```
:::
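As a sketch of how these binder kinds are used together, the hypothetical function `describe` below takes an optional parameter and an automatic parameter whose argument, when omitted, is constructed by running the `decide` tactic:

```lean
-- `n` is optional with default `0`; `ok` is synthesized by
-- `decide` when no argument is supplied at the call site.
def describe (n : Nat := 0) (ok : 2 + 2 = 4 := by decide) : Nat := n

#eval describe      -- both defaults are used
#eval describe 5    -- `ok` is still filled in by `decide`
```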
:::example "Multiple Parameters, Same Type"
The type of {name}`Nat.add` can be written in the following ways:
* {lean}`Nat → Nat → Nat`
* {lean}`(a : Nat) → (b : Nat) → Nat`
* {lean}`(a b : Nat) → Nat`
The last two types allow the function to be used with {tech}[named arguments]; aside from this, all three are equivalent.
:::
# Functions
%%%
tag := "function-terms"
%%%
Terms with function types can be created via abstractions, introduced with the {keywordOf Lean.Parser.Term.fun}`fun` keyword.{margin}[In various communities, function abstractions are also known as _lambdas_, due to Alonzo Church's notation for them, or _anonymous functions_ because they don't need to be defined with a name in the global environment.]
While abstractions in the core type theory only allow a single variable to be bound, function terms are quite flexible in the high-level Lean syntax.
:::syntax term (title := "Function Abstraction")
The most basic function abstraction introduces a variable to stand for the function's parameter:
```grammar
fun $x:ident => $t
```
At elaboration time, Lean must be able to determine the function's domain.
A type ascription is one way to provide this information:
```grammar
fun $x:ident : term => $t
```
:::
Function definitions defined with keywords such as {keywordOf Lean.Parser.Command.declaration (parser := Lean.Parser.Command.definition)}`def` desugar to {keywordOf Lean.Parser.Term.fun}`fun`.
Inductive type declarations, on the other hand, introduce new values with function types (constructors and type constructors) that cannot themselves be implemented using just {keywordOf Lean.Parser.Term.fun}`fun`.
:::syntax term (title := "Curried Functions")
Multiple parameter names are accepted after {keywordOf Lean.Parser.Term.fun}`fun`:
```grammar
fun $x:ident $x:ident* => $t
```
```grammar
fun $x:ident $x:ident* : $t:term => $t
```
Different type annotations for multiple parameters require parentheses:
```grammar
free{"fun " "(" (ident)* " : " term ")" " => " term}
```
These are equivalent to writing nested {keywordOf Lean.Parser.Term.fun}`fun` terms.
:::
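As a quick check of this equivalence, the curried form and the explicitly nested form are the same term:

```lean
example :
    (fun (x y : Nat) => x + y) =
    (fun (x : Nat) => fun (y : Nat) => x + y) :=
  rfl
```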
The {keywordOf Lean.Parser.Term.fun}`=>` may be replaced by {keywordOf Lean.Parser.Term.fun}`↦` in all of the syntax described in this section.
Function abstractions may also use pattern matching syntax as part of their parameter specification, avoiding the need to introduce a local variable that is immediately destructured.
This syntax is described in the {ref "pattern-fun"}[section on pattern matching].
## Implicit Parameters
%%%
tag := "implicit-functions"
%%%
Lean supports implicit parameters to functions.
This means that Lean itself can supply arguments to functions, rather than requiring users to supply all needed arguments.
Implicit parameters come in three varieties:
: Ordinary implicit parameters
Ordinary {deftech}[implicit] parameters are function parameters that Lean should determine values for via unification.
In other words, each call site should have exactly one potential argument value that would cause the function call as a whole to be well-typed.
The Lean elaborator attempts to find values for all implicit arguments at each occurrence of a function.
Ordinary implicit parameters are written in curly braces (`{` and `}`).
: Strict implicit parameters
{deftech}_Strict implicit_ parameters are identical to ordinary implicit parameters, except Lean will only attempt to find argument values when subsequent explicit arguments are provided at a call site.
Strict implicit parameters are written in double curly braces (`⦃` and `⦄`, or `{{` and `}}`).
: Instance implicit parameters
  Arguments for {deftech}_instance implicit_ parameters are found via {ref "instance-synth"}[type class synthesis].
Instance implicit parameters are written in square brackets (`[` and `]`).
Unlike the other kinds of implicit parameter, instance implicit parameters that are written without a `:` specify the parameter's type rather than providing a name.
Furthermore, only a single name is allowed.
Most instance implicit parameters omit the parameter name because instances synthesized as parameters to functions are already available in the functions' bodies, even without being named explicitly.
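For instance, in this sketch (both functions are hypothetical), the instance parameter may be anonymous or named; naming it lets the body project the instance's fields directly:

```lean
-- The anonymous instance is used implicitly by `toString`:
def twice {α : Type} [ToString α] (x : α) : String :=
  toString x ++ toString x

-- Naming the instance allows referring to it explicitly:
def twice' {α : Type} [inst : ToString α] (x : α) : String :=
  inst.toString x ++ inst.toString x
```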
::::keepEnv
:::example "Ordinary vs Strict Implicit Parameters"
The difference between the functions {lean}`f` and {lean}`g` is that `α` is strictly implicit in {lean}`f`:
```lean
def f ⦃α : Type⦄ : α → α := fun x => x
def g {α : Type} : α → α := fun x => x
```
These functions are elaborated identically when applied to concrete arguments:
```lean
example : f 2 = g 2 := rfl
```
However, when the explicit argument is not provided, uses of {lean}`f` do not require the implicit `α` to be solved:
```lean
example := f
```
Uses of `g`, on the other hand, do require it to be solved, and they fail to elaborate when there is insufficient information available:
```lean +error (name := noAlpha)
example := g
```
```leanOutput noAlpha
don't know how to synthesize implicit argument `α`
@g ?m.3
context:
⊢ Type
```
:::
::::
:::syntax term (title := "Functions with Varying Binders")
The most general syntax for {keywordOf Lean.Parser.Term.fun}`fun` accepts a sequence of binders:
```grammar
fun $p:funBinder $p:funBinder* => $t
```
:::
:::syntax Lean.Parser.Term.funBinder (title := "Function Binders")
Function binders may be identifiers:
```grammar
$x:ident
```
parenthesized sequences of identifiers:
```grammar
($x:ident $y:ident*)
```
sequences of identifiers with a type ascription:
```grammar
($x:ident $y:ident* : $t)
```
implicit parameters, with or without a type ascription:
```grammar
{$x:ident $x:ident*}
```
```grammar
{$x:ident $x:ident* : $t}
```
instance implicits, anonymous or named:
```grammar
[$t:term]
```
```grammar
[$x:ident : $t]
```
or strict implicit parameters, with or without a type ascription:
```grammar
⦃$x:ident $x:ident*⦄
```
```grammar
⦃$x:ident* : $t⦄
```
As usual, an `_` may be used instead of an identifier to create an anonymous parameter, and `⦃` and `⦄` may alternatively be written using `{{` and `}}`, respectively.
:::
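For example, a single {keywordOf Lean.Parser.Term.fun}`fun` can mix these binder kinds (this sketch is purely illustrative):

```lean
-- An implicit type, an instance implicit, and an explicit parameter
-- in one function abstraction:
#check fun {α : Type} [ToString α] (x : α) => toString x
```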
Lean's core language does not distinguish between implicit, instance, and explicit parameters: the various kinds of function and function type are definitionally equal.
The differences can be observed only during elaboration.
```lean -show
-- Evidence of claims in prior paragraph
example : ({x : Nat} → Nat) = (Nat → Nat) := rfl
example : (fun {x} => 2 : {x : Nat} → Nat) = (fun x => 2 : Nat → Nat) := rfl
example : ([x : Repr Nat] → Nat) = (Repr Nat → Nat) := rfl
example : (⦃x : Nat⦄ → Nat) = (Nat → Nat) := rfl
```
If the expected type of a function includes implicit parameters, but its binders do not, then the resulting function may end up with more parameters than the binders indicated in the code.
This is because the implicit parameters are added automatically.
:::example "Implicit Parameters from Types"
The identity function can be written with a single explicit parameter.
As long as its type is known, the implicit type parameter is added automatically.
```lean (name := funImplAdd)
#check (fun x => x : {α : Type} → α → α)
```
```leanOutput funImplAdd
fun {α} x => x : {α : Type} → α → α
```
The following are all equivalent:
```lean (name := funImplThere)
#check (fun {α} x => x : {α : Type} → α → α)
```
```leanOutput funImplThere
fun {α} x => x : {α : Type} → α → α
```
```lean (name := funImplAnn)
#check (fun {α} (x : α) => x : {α : Type} → α → α)
```
```leanOutput funImplAnn
fun {α} x => x : {α : Type} → α → α
```
```lean (name := funImplAnn2)
#check (fun {α : Type} (x : α) => x : {α : Type} → α → α)
```
```leanOutput funImplAnn2
fun {α} x => x : {α : Type} → α → α
```
:::
# Function Application
%%%
tag := "function-application"
%%%
Ordinarily, function application is written using juxtaposition: the argument is placed after the function, with at least one space between them.
In Lean's type theory, all functions take exactly one argument and produce exactly one value.
All function applications combine a single function with a single argument.
Multiple arguments are represented via currying.
The high-level term language treats a function together with one or more arguments as a single unit, and supports additional features such as implicit, optional, and by-name arguments along with ordinary positional arguments.
The elaborator converts these to the simpler model of the core type theory.
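A small sketch confirms that a two-argument application is just iterated single application:

```lean
-- `f a b` and `(f a) b` are the same core-language term:
example (f : Nat → Nat → Nat) (a b : Nat) : f a b = (f a) b := rfl
```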
:::freeSyntax term (title := "Function Application")
A function application consists of a term followed by one or more arguments, or by zero or more arguments and a final {deftech}[ellipsis].
```grammar
$e:term $e:argument+
***************
$e:term $e:argument* ".."
```
:::
{TODO}[Annotate with syntax kinds for incoming hyperlinks during traversal pass]
:::freeSyntax Lean.Parser.Term.argument (title := "Arguments")
Function arguments are either terms or {deftech}[named arguments].
```grammar
$e:term
***********
"("$x:ident ":=" $e:term")"
```
:::
The function's core-language type determines the placement of the arguments in the final expression.
Function types include names for their expected parameters.
In Lean's core language, non-dependent function types are encoded as dependent function types in which the parameter name does not occur in the body.
Furthermore, these internal names are chosen such that they cannot be written as the name of a named argument; this is important to prevent accidental capture.
Recursing over the function's parameter types, the elaborator selects arguments from the provided sequence as follows:
* If the parameter's name matches the name provided for a named argument, then that argument is selected.
* If the parameter is {tech}[implicit], a fresh metavariable is created with the parameter's type and selected.
* If the parameter is {tech}[instance implicit], a fresh instance metavariable is created with the parameter's type and inserted. Instance metavariables are scheduled for later synthesis.
* If the parameter is a {tech}[strict implicit] parameter and there are any named or positional arguments that have not yet been selected, a fresh metavariable is created with the parameter's type and selected.
* If the parameter is explicit, then the next positional argument is selected and elaborated. If there are no positional arguments:
* If the parameter is declared as an {tech}[optional parameter], then its default value is selected as the argument.
* If the parameter is an {tech}[automatic parameter] then its associated tactic script is executed to construct the argument.
* If the parameter is neither optional nor automatic, and no ellipsis is present, then a fresh variable is selected as the argument. If there is an ellipsis, a fresh metavariable is selected as if the argument were implicit.
As a special case, when the function application occurs in a {ref "pattern-matching"}[pattern] and there is an ellipsis, optional and automatic arguments become universal patterns (`_`) instead of being inserted.
It is an error if the type is not a function type and arguments remain.
If an ellipsis is present, then after all provided arguments have been inserted, the missing arguments are all set to fresh metavariables, just as if they were implicit arguments.
If any fresh variables were created for missing explicit positional arguments, the entire application is wrapped in a {keywordOf Lean.Parser.Term.fun}`fun` term that binds them.
Finally, instance synthesis is invoked and as many metavariables as possible are solved:
1. A type is inferred for the entire function application. This may cause some metavariables to be solved due to unification that occurs during type inference.
2. The instance metavariables are synthesized. {tech}[Default instances] are only used if the inferred type is a metavariable that is the output parameter of one of the instances.
3. If there is an expected type, it is unified with the inferred type; however, errors resulting from this unification are discarded. If the expected and inferred types can be equal, unification can solve leftover implicit argument metavariables. If they can't be equal, an error is not thrown because a surrounding elaborator may be able to insert {tech}[coercions] or {tech (key := "lift")}[monad lifts].
::::keepEnv
:::example "Named Arguments"
```lean -show
set_option linter.unusedVariables false
```
The {keywordOf Lean.Parser.Command.check}`#check` command can be used to inspect the arguments that were inserted for a function call.
The function {name}`sum3` takes three explicit {lean}`Nat` parameters, named `x`, `y`, and `z`.
```lean
def sum3 (x y z : Nat) : Nat := x + y + z
```
All three arguments can be provided positionally.
```lean (name := sum31)
#check sum3 1 3 8
```
```leanOutput sum31
sum3 1 3 8 : Nat
```
They can also be provided by name.
```lean (name := sum32)
#check sum3 (x := 1) (y := 3) (z := 8)
```
```leanOutput sum32
sum3 1 3 8 : Nat
```
When arguments are provided by name, it can be in any order.
```lean (name := sum33)
#check sum3 (y := 3) (z := 8) (x := 1)
```
```leanOutput sum33
sum3 1 3 8 : Nat
```
Named and positional arguments may be freely intermixed.
```lean (name := sum34)
#check sum3 1 (z := 8) (y := 3)
```
```leanOutput sum34
sum3 1 3 8 : Nat
```
If an argument is provided by name, it is used, even if it occurs after a positional argument that could have been used.
```lean (name := sum342)
#check sum3 1 (x := 8) (y := 3)
```
```leanOutput sum342
sum3 8 3 1 : Nat
```
If a named argument is to be inserted after arguments that aren't provided, a function is created in which the provided argument is filled out.
```lean (name := sum35)
#check sum3 (z := 8)
```
```leanOutput sum35
fun x y => sum3 x y 8 : Nat → Nat → Nat
```
Behind the scenes, the names of the arguments are preserved in the function type.
This means that the remaining arguments can again be passed by name.
```lean (name := sum36)
#check (sum3 (z := 8)) (y := 1)
```
```leanOutput sum36
fun x => (fun x y => sum3 x y 8) x 1 : Nat → Nat
```
Parameter names are taken from the function's _type_, and the names used for function parameters don't need to match the names used in the type.
This means that local bindings that conflict with a parameter's name don't prevent the use of named arguments, because Lean avoids such conflicts by renaming the function's parameter while leaving the name intact in the type.
```lean (name := sum15)
#check let x := 15; sum3 (z := x)
```
Here, the `x` that names {name}`sum3`'s first parameter has been renamed so as not to conflict with the surrounding {keywordOf Parser.Term.let}`let`:
```leanOutput sum15
let x := 15;
fun x_1 y => sum3 x_1 y x : Nat → Nat → Nat
```
Even though `x` was renamed, it can still be passed by name:
```lean (name := xNoCapture)
#check (let x := 15; sum3 (z := x)) (x := 4)
```
```leanOutput xNoCapture
(let x := 15;
fun x_1 y => sum3 x_1 y x)
4 : Nat → Nat
```
This is because the name `x` is still used in the type.
Enabling the option {option}`pp.piBinderNames` shows the parameter names in the type:
```lean (name := xRenamed)
set_option pp.piBinderNames true in
#check let x := 15; sum3 (z := x)
```
```leanOutput xRenamed
let x := 15;
fun x_1 y => sum3 x_1 y x : (x y : Nat) → Nat
```
:::
::::
Optional and automatic parameters are not part of Lean's core type theory.
They are encoded using the {name}`optParam` and {name}`autoParam` {tech}[gadgets].
{docstring optParam}
{docstring autoParam}
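As a sketch of this encoding (the function `greet` is hypothetical), a default-valued parameter shows up as {name}`optParam` in the underlying type:

```lean
def greet (name : String := "world") : String := "Hello, " ++ name

-- `optParam String "world"` is definitionally `String`, so this
-- type ascription is accepted:
#check (greet : optParam String "world" → String)
```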
## Generalized Field Notation
%%%
tag := "generalized-field-notation"
%%%
The {ref "structure-fields"}[section on structure fields] describes the notation for projecting a field from a term whose type is a structure.
Generalized field notation consists of a term followed by a dot (`.`) and an identifier, not separated by spaces.
:::syntax term (title := "Field Notation")
```grammar
$e:term.$f:ident
```
:::
If a term's type is a constant applied to zero or more arguments, then {deftech}[field notation] can be used to apply a function to it, regardless of whether the term is a structure or type class instance that has fields.
The use of field notation to apply other functions is called {deftech}_generalized field notation_.
The identifier after the dot is looked up in the namespace of the term's type, which is the constant's name.
If the type is not an application of a constant (e.g. a metavariable or a universe) then it doesn't have a namespace and generalized field notation cannot be used.
As a special case, if an expression's type is a function type, generalized field notation will look in the `Function` namespace. Therefore, {lean}`Nat.add.uncurry` is a use of generalized field notation that is equivalent to {lean}`Function.uncurry Nat.add`.
If the field is not found, but the constant can be unfolded to yield a further type which is a constant or application of a constant, then the process is repeated with the new constant.
When a function is found, the term before the dot becomes an argument to the function.
Specifically, it becomes the first explicit argument that would not be a type error.
Aside from that, the application is elaborated as usual.
:::example "Generalized Field Notation"
The type {lean}`Username` is a constant, so functions in the {name}`Username` namespace can be applied to terms with type {lean}`Username` with generalized field notation.
```lean
def Username := String
```
One such function is {name}`Username.validate`, which checks that a username contains no leading whitespace and that only a small set of acceptable characters are used.
In its definition, generalized field notation is used to call the functions {lean}`String.isPrefixOf`, {name}`String.any`, {lean}`Char.isAlpha`, and {lean}`Char.isDigit`.
In the case of {lean}`String.isPrefixOf`, which takes two {lean}`String` arguments, {lean}`" "` is used as the first because it's the term before the dot.
{name}`String.any` can be called on {lean}`name` using generalized field notation even though it has type {lean}`Username` because `Username.any` is not defined and {lean}`Username` unfolds to {lean}`String`.
```lean
def Username.validate (name : Username) : Except String Unit := do
  if " ".isPrefixOf name then
    throw "Unexpected leading whitespace"
  if name.any notOk then
    throw "Unexpected character"
  return ()
where
  notOk (c : Char) : Bool :=
    !c.isAlpha &&
    !c.isDigit &&
    !c ∈ ['_', ' ']
def adminUser : Username := "admin"
```
However, {lean}`Username.validate` can't be called on {lean}`"admin"` using field notation, because the type {lean}`String` does not unfold to {lean}`Username`.
```lean +error (name := notString)
#eval "admin".validate
```
```leanOutput notString
Invalid field `validate`: The environment does not contain `String.validate`, so it is not possible to project the field `validate` from an expression
"admin"
of type `String`
```
{lean}`adminUser`, on the other hand, has type {lean}`Username`, so the {lean}`Username.validate` function can be invoked with generalized field notation:
```lean (name := isUsername)
#eval adminUser.validate
```
```leanOutput isUsername
Except.ok ()
```
Going in the other direction, {name}`String.any` *can* be called on the {lean}`Username` value {lean}`adminUser` with generalized field notation, because the type {lean}`Username` unfolds to {lean}`String`.
```lean (name := isString1)
#eval adminUser.any (· == 'm')
```
```leanOutput isString1
true
```
:::
{optionDocs pp.fieldNotation}
:::syntax attr (title := "Controlling Field Notation")
The {attr}`pp_nodot` attribute causes Lean's pretty printer to not use field notation when printing a function.
```grammar
pp_nodot
```
:::
::::keepEnv
:::example "Turning Off Field Notation"
{lean}`Nat.half` is printed using field notation by default.
```lean
def Nat.half : Nat → Nat
  | 0 | 1 => 0
  | n + 2 => n.half + 1
```
```lean (name := succ1)
#check Nat.half Nat.zero
```
```leanOutput succ1
Nat.zero.half : Nat
```
Adding {attr}`pp_nodot` to {name}`Nat.half` causes ordinary function application syntax to be used instead when displaying the term.
```lean (name := succ2)
attribute [pp_nodot] Nat.half
#check Nat.half Nat.zero
```
```leanOutput succ2
Nat.half Nat.zero : Nat
```
:::
::::
## Pipeline Syntax
Pipeline syntax provides alternative ways to write function applications.
Repeated pipelines use parsing precedence instead of nested parentheses to nest applications of functions to positional arguments.
:::syntax term (title := "Pipelines")
Right pipe notation applies the term to the right of the pipe to the one on its left.
```grammar
$e |> $e
```
Left pipe notation applies the term on the left of the pipe to the one on its right.
```grammar
$e <| $e
```
:::
The intuition behind right pipeline notation is that the value on the left is fed to the first function; its result is fed to the second one, and so forth.
In left pipeline notation, values on the right are fed leftwards.
:::example "Right pipeline notation"
Right pipelines can be used to call a series of functions on a term.
For readers, they tend to emphasize the data that's being transformed.
```lean (name := rightPipe)
#eval "Hello!" |> String.toList |> List.reverse |> List.head!
```
```leanOutput rightPipe
'!'
```
:::
:::example "Left pipeline notation"
Left pipelines can be used to call a series of functions on a term.
They tend to emphasize the functions over the data.
```lean (name := lPipe)
#eval List.head! <| List.reverse <| String.toList <| "Hello!"
```
```leanOutput lPipe
'!'
```
:::
:::syntax term (title := "Pipeline Fields")
There is a version of pipeline notation that's used for {tech}[generalized field notation].
```grammar
$e |>.$_:ident
```
```grammar
$e |>.$_:fieldIdx
```
:::
::::keepEnv
```lean -show
section
universe u
axiom T : Nat → Type u
variable {e : T 3} {arg : Char}
axiom T.f : {n : Nat} → Char → T n → String
```
{lean}`e |>.f arg` is an alternative syntax for {lean}`(e).f arg`.
:::example "Pipeline Fields"
Some functions are inconvenient to use with pipelines because the value being transformed is not their last explicit parameter.
For example, {name}`Array.push` takes the array as its first argument rather than its last, so the numeral `4` ends up in the position where the array is expected, leading to this error:
```lean (name := arrPush) +error
#eval #[1, 2, 3] |> Array.push 4
```
```leanOutput arrPush
failed to synthesize instance of type class
OfNat (Array ?m.2) 4
numerals are polymorphic in Lean, but the numeral `4` cannot be used in a context where the expected type is
Array ?m.2
due to the absence of the instance above
Hint: Type class instance resolution failures can be inspected with the `set_option trace.Meta.synthInstance true` command.
```
Using pipeline field notation causes the array to be inserted at the first type-correct position:
```lean (name := arrPush2)
#eval #[1, 2, 3] |>.push 4
```
```leanOutput arrPush2
#[1, 2, 3, 4]
```
This process can be iterated:
```lean (name := arrPush3)
#eval #[1, 2, 3] |>.push 4 |>.reverse |>.push 0 |>.reverse
```
```leanOutput arrPush3
#[0, 1, 2, 3, 4]
```
:::
```lean -show
end
```
::::
# Numeric Literals
There are two kinds of numeric literal: natural number literals and {deftech}[scientific literals].
Both are overloaded via {tech (key := "type class")}[type classes].
## Natural Numbers
%%%
tag := "nat-literals"
%%%
```lean -show
section
variable {n : Nat}
```
Natural numbers can be specified in several forms:
- A sequence of digits 0 through 9 is a decimal literal
- `0b` or `0B` followed by a sequence of one or more 0s and 1s is a binary literal
- `0o` or `0O` followed by a sequence of one or more digits 0 through 7 is an octal literal
- `0x` or `0X` followed by a sequence of one or more hex digits (0 through 9 and A through F, case-insensitive) is a hexadecimal literal
All numeric literals may contain internal underscores, except between the first two characters of a binary, octal, or hexadecimal literal.
These are intended to help group digits in natural ways, for instance {lean}`1_000_000` or {lean}`0x_c0de_cafe`.
(While it is possible to write the number 123 as {lean}`1_2__3`, this is not recommended.)
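Underscores have no effect on the literal's value, as this sketch checks:

```lean
-- Underscores are stripped when the literal is interpreted:
example : 1_2__3 = 123 := rfl
example : 0x_c0de_cafe = 0xc0decafe := rfl
```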
When Lean encounters a natural number literal {lean}`n`, it interprets it via the overloaded method {lean}`OfNat.ofNat n`.
A {tech}[default instance] of {lean}`OfNat Nat n` ensures that the type {lean}`Nat` can be inferred when no other type information is present.
{docstring OfNat}
```lean -show
end
```
:::example "Custom Natural Number Literals"
The structure {lean}`NatInterval` represents an interval of natural numbers.
```lean
structure NatInterval where
low : Nat
high : Nat
low_le_high : low ≤ high
instance : Add NatInterval where
add
| ⟨lo1, hi1, le1⟩, ⟨lo2, hi2, le2⟩ =>
⟨lo1 + lo2, hi1 + hi2, by grind⟩
```
An {name}`OfNat` instance allows natural number literals to be used to represent intervals:
```lean
instance : OfNat NatInterval n where
ofNat := ⟨n, n, by omega⟩
```
```lean (name := eval8Interval)
#eval (8 : NatInterval)
```
```leanOutput eval8Interval
{ low := 8, high := 8, low_le_high := _ }
```
```lean (name := eval7Interval)
#eval (0b111 : NatInterval)
```
```leanOutput eval7Interval
{ low := 7, high := 7, low_le_high := _ }
```
:::
There are no separate integer literals.
Terms such as {lean}`-5` consist of a prefix negation (which can be overloaded via the {name}`Neg` type class) applied to a natural number literal.
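This desugaring can be observed directly; in this sketch, the negation is written out using {name}`Neg.neg`:

```lean
example : (-5 : Int) = Neg.neg (5 : Int) := rfl
```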
## Scientific Numbers
Scientific number literals consist of a sequence of decimal digits followed (without intervening whitespace) by an optional decimal part (a period followed by zero or more decimal digits) and an optional exponent part (the letter `e` followed by an optional `+` or `-` and then followed by one or more decimal digits).
Scientific numbers are overloaded via the {name}`OfScientific` type class.
{docstring OfScientific}
There are {lean}`OfScientific` instances for {name}`Float` and {name}`Float32`, but no separate floating-point literals.
## Strings
String literals are described in the {ref "string-syntax"}[chapter on strings.]
## Lists and Arrays
List and array literals contain comma-separated sequences of elements inside square brackets, with array literals additionally prefixed by a hash mark (`#`).
Array literals are interpreted as list literals wrapped in a call to a conversion.
For performance reasons, very large list and array literals are converted to sequences of local definitions, rather than just iterated applications of the list constructor.
:::syntax term (title := "List Literals")
```grammar
[$_,*]
```
:::
:::syntax term (title := "Array Literals")
```grammar
#[$_,*]
```
:::
:::example "Long List Literals"
This list contains 32 elements.
The generated code is an iterated application of {name}`List.cons`:
```lean (name := almostLong)
#check
[1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1]
```
```leanOutput almostLong
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] : List Nat
```
With 33 elements, the list literal becomes a sequence of local definitions:
```lean (name := indeedLong)
#check
[1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,
1]
```
```leanOutput indeedLong
let y :=
let y :=
let y := [1, 1, 1, 1, 1];
1 :: 1 :: 1 :: 1 :: y;
let y := 1 :: 1 :: 1 :: 1 :: y;
1 :: 1 :: 1 :: 1 :: y;
let y :=
let y := 1 :: 1 :: 1 :: 1 :: y;
1 :: 1 :: 1 :: 1 :: y;
let y := 1 :: 1 :: 1 :: 1 :: y;
1 :: 1 :: 1 :: 1 :: y : List Nat
```
:::
# Structures and Constructors
{ref "anonymous-constructor-syntax"}[Anonymous constructors] and {ref "structure-constructors"}[structure instance syntax] are described in their respective sections.
# Conditionals
%%%
tag := "if-then-else"
%%%
The conditional expression is used to check whether a proposition is true or false.{margin}[Despite their syntactic similarity, the {keywordOf Lean.Parser.Tactic.tacIfThenElse}`if` used {ref "tactic-language-branching"}[in the tactic language] and the {keywordOf Lean.Parser.Term.doIf}`if` used {ref "tactic-language-branching"}[in `do`-notation] are separate syntactic forms, documented in their own sections.]
This requires that the proposition has a {name}`Decidable` instance, because it's not possible to check whether _arbitrary_ propositions are true or false.
There is also a {tech}[coercion] from {name}`Bool` to {lean}`Prop` that results in a decidable proposition (namely, that the {name}`Bool` in question is equal to {name}`true`), described in the {ref "decidable-propositions"}[section on decidability].
There are two versions of the conditional expression: one simply performs a case distinction, while the other additionally adds an assumption about the proposition's truth or falsity to the local context.
This allows run-time checks to generate compile-time evidence that can be used to statically rule out errors.
:::syntax term (title := "Conditionals")
Without a name annotation, the conditional expression expresses only control flow.
```grammar
if $e then
$e
else
$e
```
With the name annotation, the branches of the {keywordOf termDepIfThenElse}`if` have access to a local assumption that the proposition is respectively true or false.
```grammar
if $h : $e then
$e
else
$e
```
:::
::::keepEnv
:::example "Checking Array Bounds"
Array indexing requires evidence that the index in question is within the bounds of the array, so {name}`getThird` does not elaborate.
```lean +error -keep (name := getThird1)
def getThird (xs : Array α) : α := xs[2]
```
```leanOutput getThird1
failed to prove index is valid, possible solutions:
- Use `have`-expressions to prove the index is valid
- Use `a[i]!` notation instead, runtime check is performed, and 'Panic' error message is produced if index is not valid
- Use `a[i]?` notation instead, result is an `Option` type
- Use `a[i]'h` notation instead, where `h` is a proof that index is valid
α : Type ?u.7
xs : Array α
⊢ 2 < xs.size
```
Relaxing the return type to {name}`Option` and adding a bounds check results in the same error.
This is because the proof that the index is in bounds was not added to the local context.
```lean +error -keep (name := getThird2)
def getThird (xs : Array α) : Option α :=
if xs.size ≤ 2 then none
else xs[2]
```
```leanOutput getThird2
failed to prove index is valid, possible solutions:
- Use `have`-expressions to prove the index is valid
- Use `a[i]!` notation instead, runtime check is performed, and 'Panic' error message is produced if index is not valid
- Use `a[i]?` notation instead, result is an `Option` type
- Use `a[i]'h` notation instead, where `h` is a proof that index is valid
α : Type ?u.7
xs : Array α
⊢ 2 < xs.size
```
Naming the proof `h` is sufficient to enable the tactics that perform bounds checking to succeed, even though it does not occur explicitly in the text of the program.
```lean
def getThird (xs : Array α) : Option α :=
if h : xs.size ≤ 2 then none
else xs[2]
```
:::
::::
There is also a pattern-matching version of {keywordOf termIfLet}`if`.
If the pattern matches, then it takes the first branch, binding the pattern variables.
If the pattern does not match, then it takes the second branch.
:::syntax term (title := "Pattern-Matching Conditionals")
```grammar
if let $p := $e then
$e
else
$e
```
:::
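As an illustration (the function name `describeHead` is hypothetical), a pattern-matching conditional can extract the head of a list when one exists:

```lean
def describeHead (xs : List Nat) : String :=
  if let x :: _ := xs then
    s!"the head is {x}"
  else
    "the list is empty"

#eval describeHead [1, 2, 3]  -- "the head is 1"
```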
If a {name}`Bool`-only conditional statement is ever needed, the {keywordOf boolIfThenElse}`bif` variant can be used.
:::syntax term (title := "Boolean-Only Conditional")
```grammar
bif $e then
$e
else
$e
```
:::
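Because {keywordOf boolIfThenElse}`bif` branches directly on a {name}`Bool`, no {name}`Decidable` instance is involved (a small sketch):

```lean
#eval bif true then "yes" else "no"  -- "yes"
```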
# Pattern Matching
%%%
tag := "pattern-matching"
%%%
{deftech}_Pattern matching_ is a way to recognize and destructure values using a syntax of {deftech}_patterns_ that are a subset of the terms.
A pattern that recognizes and destructures a value is similar to the syntax that would be used to construct the value.
One or more {deftech}_match discriminants_ are simultaneously compared to a series of {deftech}_match alternatives_.
Discriminants may be named.
Each alternative contains one or more comma-separated sequences of patterns; all pattern sequences must contain the same number of patterns as there are discriminants.
When a pattern sequence matches all of the discriminants, the term following the corresponding {keywordOf Lean.Parser.Term.match}`=>` is evaluated in an environment extended with values for each {tech}[pattern variable] as well as an equality hypothesis for each named discriminant.
This term is called the {deftech}_right-hand side_ of the match alternative.
:::syntax term (title := "Pattern Matching")
```grammar
match
$[(generalizing := $e)]?
$[(motive := $e)]?
$[$d:matchDiscr],*
with
$[| $[$e,*]|* => $e]*
```
:::
:::syntax matchDiscr (title := "Match Discriminants") -open
```grammar
$e:term
```
```grammar
$h:ident : $e:term
```
:::
Pattern matching expressions may alternatively use {tech}[quasiquotations] as patterns, matching the corresponding {name}`Lean.Syntax` values and treating the contents of {tech}[antiquotations] as ordinary patterns.
Quotation patterns are compiled differently than other patterns, so if one pattern in a {keywordOf Lean.Parser.Term.match}`match` is syntax, then all of them must be.
Quotation patterns are described in {ref "quote-patterns"}[the section on quotations].
Patterns are a subset of the terms.
They consist of the following:
: Catch-All Patterns
The hole syntax {lean}`_` is a pattern that matches any value and binds no pattern variables.
Catch-all patterns are not entirely equivalent to unused pattern variables.
They can be used in positions where the pattern's typing would otherwise require a more specific {tech}[inaccessible pattern], while variables cannot be used in these positions.
: Identifiers
If an identifier is not bound in the current scope and is not applied to arguments, then it represents a pattern variable.
{deftech}_Pattern variables_ match any value, and the values thus matched are bound to the pattern variable in the local environment in which the {tech}[right-hand side] is evaluated.
If the identifier is bound, it is a pattern if it is bound to the {tech}[constructor] of an {tech}[inductive type] or if its definition has the {attr}`match_pattern` attribute.
: Applications
Function applications are patterns if the function being applied is an identifier that is bound to a constructor or that has the {attr}`match_pattern` attribute and if all arguments are also patterns.
If the identifier is a constructor, the pattern matches values built with that constructor if the argument patterns match the constructor's arguments.
If it is a function with the {attr}`match_pattern` attribute, then the function application is unfolded and the resulting term's {tech}[normal form] is used as the pattern.
Default arguments are inserted as usual, and their normal forms are used as patterns.
{tech (key := "ellipsis")}[Ellipses], however, result in all further arguments being treated as universal patterns, even those with associated default values or tactics.
: Literals
{ref "char-syntax"}[Character literals] and {ref "string-syntax"}[string literals] are patterns that match the corresponding character or string.
{ref "raw-string-literals"}[Raw string literals] are allowed as patterns, but {ref "string-interpolation"}[interpolated strings] are not.
{ref "nat-syntax"}[Natural number literals] in patterns are interpreted by synthesizing the corresponding {name}`OfNat` instance and reducing the resulting term to {tech}[normal form], which must be a pattern.
Similarly, {tech}[scientific literals] are interpreted via the corresponding {name}`OfScientific` instance.
While {lean}`Float` has such an instance, {lean}`Float`s cannot be used as patterns because the instance relies on an opaque function that can't be reduced to a valid pattern.
: Structure Instances
{tech}[Structure instances] may be used as patterns.
They are interpreted as the corresponding structure constructor.
: Quoted names
Quoted names, such as {lean}`` `x `` and {lean}``` ``none ```, match the corresponding {name}`Lean.Name` value.
: Macros
Macros in patterns are expanded.
They are patterns if the resulting expansions are patterns.
: Inaccessible patterns
{deftech}[Inaccessible patterns] are patterns that are forced to have a particular value by later typing constraints.
  Any term may be used as an inaccessible pattern.
  Inaccessible patterns are written as parenthesized terms preceded by a period (`.`).
:::syntax term (title := "Inaccessible Patterns")
```grammar
.($e)
```
:::
:::example "Inaccessible Patterns"
A number's _parity_ is whether it's even or odd:
```lean
inductive Parity : Nat → Type where
| even (h : Nat) : Parity (h + h)
| odd (h : Nat) : Parity ((h + h) + 1)
def Nat.parity (n : Nat) : Parity n :=
match n with
| 0 => .even 0
| n' + 1 =>
match n'.parity with
| .even h => .odd h
| .odd h =>
have eq : (h + 1) + (h + 1) = (h + h + 1 + 1) :=
by omega
eq ▸ .even (h + 1)
```
Because a value of type {lean}`Parity` contains half of a number (rounded down) as part of its representation of evenness or oddness, division by two can be implemented (in an unconventional manner) by finding a parity and then extracting the number.
```lean
def half (n : Nat) : Nat :=
match n, n.parity with
| .(h + h), .even h => h
| .(h + h + 1), .odd h => h
```
Because the index structure of {name}`Parity.even` and {name}`Parity.odd` forces the number to have a certain form that is not otherwise a valid pattern, patterns that match on it must use inaccessible patterns for the number being divided.
:::
Patterns may additionally be named.
{deftech}[Named patterns] associate a name with a pattern; in subsequent patterns and on the right-hand side of the match alternative, the name refers to the part of the value that was matched by the given pattern.
Named patterns are written with an `@` between the name and the pattern.
Just like discriminants, named patterns may also be provided with names for equality assumptions.
:::syntax term (title := "Named Patterns")
```grammar
$x:ident@$e
```
```grammar
$x:ident@$h:ident:$e
```
:::
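For example (the function name `headTwin?` is illustrative), a named pattern lets the right-hand side refer to the entire matched value:

```lean
def headTwin? : List Nat → Option (List Nat)
  | xs@(x :: y :: _) => if x == y then some xs else none
  | _ => none

#eval headTwin? [3, 3, 1]  -- some [3, 3, 1]
```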
```lean -show -keep
-- Check claims about patterns
-- Literals
/-- error: Invalid pattern: Expected a constructor or constant marked with `[match_pattern]` -/
#guard_msgs in
def foo (x : String) : String :=
match x with
| "abc" => ""
| r#"hey"# => ""
| s!"a{x}y" => _
| _ => default
structure Blah where
n : Nat
deriving Inhabited
instance : OfNat Blah n where
ofNat := ⟨n + 1⟩
/--
error: Missing cases:
(Blah.mk (Nat.succ (Nat.succ _)))
(Blah.mk Nat.zero)
-/
#check_msgs in
def abc (n : Blah) : Bool :=
match n with
| 0 => true
partial instance : OfNat Blah n where
ofNat :=
let rec f (x : Nat) : Blah :=
match x with
| 0 => f 99
| n + 1 => f n
f n
-- This shows that the partial instance was not unfolded
/--
error: Dependent elimination failed: Type mismatch when solving this alternative: it has type
motive (instOfNatBlah_1.f 0)
but is expected to have type
motive n✝
-/
#check_msgs in
def defg (n : Blah) : Bool :=
match n with
| 0 => true
/--
error: Dependent elimination failed: Type mismatch when solving this alternative: it has type
motive (Float.ofScientific 25 true 1)
but is expected to have type
motive x✝
-/
#check_msgs in
def twoPointFive? : Float → Option Float
| 2.5 => some 2.5
| _ => none
/--
info: @Neg.neg.{0} Float instNegFloat
(@OfScientific.ofScientific.{0} Float instOfScientificFloat (nat_lit 320) Bool.true (nat_lit 1)) : Float
-/
#check_msgs in
set_option pp.all true in
#check -32.0
structure OnlyThreeOrFive where
val : Nat
val2 := false
ok : val = 3 ∨ val = 5 := by rfl
-- Default args are synthesized in patterns too!
/--
error: Tactic `rfl` failed: The left-hand side
n = 3
is not definitionally equal to the right-hand side
n = 5
x✝ : OnlyThreeOrFive
n : Nat
⊢ n = 3 ∨ n = 5
-/
#check_msgs in
def ggg : OnlyThreeOrFive → Nat
| {val := n} => n
/--
error: Missing cases:
(OnlyThreeOrFive.mk _ true (Or.inr Eq.refl))
(OnlyThreeOrFive.mk _ true (Or.inl Eq.refl))
-/
#check_msgs in
def hhh : OnlyThreeOrFive → Nat
| {val := n, ok := p} => n
-- Ellipses don't synth default args in patterns
def ggg' : OnlyThreeOrFive → Nat
| .mk n .. => n
-- Ellipses do synth default args via tactics, but not exprs, otherwise
/--
error: could not synthesize default value for parameter 'ok' using tactics
---
error: Tactic `rfl` failed: The left-hand side
3 = 3
is not definitionally equal to the right-hand side
3 = 5
⊢ 3 = 3 ∨ 3 = 5
---
info: { val := 3, val2 := ?m.2647, ok := ⋯ } : OnlyThreeOrFive
-/
#check_msgs in
#check OnlyThreeOrFive.mk 3 ..
/-- info: { val := 3, val2 := ?_, ok := ⋯ } : OnlyThreeOrFive -/
#check_msgs in
set_option pp.mvars.anonymous false in
#check OnlyThreeOrFive.mk 3 (ok := .inl rfl) ..
/--
info: fun y =>
match
have this := ⟨y * 3, ⋯⟩;
this with
| ⟨x, z⟩ =>
match x, z with
| .(y * 3), ⋯ => () : Nat → Unit
-/
#check_msgs in
#check fun (y : Nat) => match show {n : Nat// n = y * 3} from ⟨y*3, rfl⟩ with
| ⟨x, z⟩ =>
match x, z with
| .(y * 3), rfl => ()
```
## Types
Each discriminant must be well typed.
Because patterns are a subset of terms, their types can also be checked.
Each pattern that matches a given discriminant must have the same type as the corresponding discriminant.
The {tech}[right-hand side] of each match alternative should have the same type as the overall {keywordOf Lean.Parser.Term.match}`match` term.
To support dependent types, matching a discriminant against a pattern refines the types that are expected within the scope of the pattern.
In both subsequent patterns in the same match alternative and the right-hand side's type, occurrences of the discriminant are replaced by the pattern that it was matched against.
::::keepEnv
```lean -show
variable {α : Type u}
```
:::example "Type Refinement"
This {tech}[indexed family] describes mostly-balanced trees, with the depth encoded in the type.
```lean
inductive BalancedTree (α : Type u) : Nat → Type u where
| empty : BalancedTree α 0
| branch
(left : BalancedTree α n)
(val : α)
(right : BalancedTree α n) :
BalancedTree α (n + 1)
| lbranch
(left : BalancedTree α (n + 1))
(val : α)
(right : BalancedTree α n) :
BalancedTree α (n + 2)
| rbranch
(left : BalancedTree α n)
(val : α)
(right : BalancedTree α (n + 1)) :
BalancedTree α (n + 2)
```
To begin the implementation of a function to construct a perfectly balanced tree with some initial element and a given depth, a {tech}[hole] can be used for the definition.
```lean -keep (name := fill1) +error
def BalancedTree.filledWith
(x : α) (depth : Nat) :
BalancedTree α depth :=
_
```
The error message demonstrates that the tree should have the indicated depth.
```leanOutput fill1
don't know how to synthesize placeholder
context:
α : Type u
x : α
depth : Nat
⊢ BalancedTree α depth
```
Matching on the expected depth and inserting holes results in an error message for each hole.
These messages demonstrate that the expected type has been refined, with `depth` replaced by the matched values.
```lean +error (name := fill2)
def BalancedTree.filledWith
(x : α) (depth : Nat) :
BalancedTree α depth :=
match depth with
| 0 => _
| n + 1 => _
```
The first hole yields the following message:
```leanOutput fill2
don't know how to synthesize placeholder
context:
α : Type u
x : α
depth : Nat
⊢ BalancedTree α 0
```
The second hole yields the following message:
```leanOutput fill2
don't know how to synthesize placeholder
context:
α : Type u
x : α
depth n : Nat
⊢ BalancedTree α (n + 1)
```
Matching on the depth of a tree and the tree itself leads to a refinement of the tree's type according to the depth's pattern.
This means that certain combinations are not well-typed, such as {lean}`0` and {name BalancedTree.branch}`branch`, because refining the second discriminant's type yields {lean}`BalancedTree α 0` which does not match the constructor's type.
```lean (name := patfail) +error
def BalancedTree.isPerfectlyBalanced
(n : Nat) (t : BalancedTree α n) : Bool :=
match n, t with
| 0, .empty => true
| 0, .branch left val right =>
isPerfectlyBalanced left &&
isPerfectlyBalanced right
| _, _ => false
```
```leanOutput patfail
Type mismatch
left.branch val right
has type
BalancedTree ?m.54 (?m.51 + 1)
but is expected to have type
BalancedTree α 0
```
:::
::::
### Pattern Equality Proofs
When a discriminant is named, {keywordOf Lean.Parser.Term.match}`match` generates a proof that the pattern and discriminant are equal, binding it to the provided name in the {tech}[right-hand side].
This is useful to bridge the gap between dependent pattern matching on indexed families and APIs that expect explicit propositional arguments, and it can help tactics that make use of assumptions to succeed.
:::example "Pattern Equality Proofs"
The function {lean}`last?`, which either throws an exception or returns the last element of its argument, uses the standard library function {lean}`List.getLast`.
This function expects a proof that the list in question is nonempty.
Naming the match on `xs` ensures that there's an assumption in scope that states that `xs` is equal to `_ :: _`, which {tactic}`simp_all` uses to discharge the goal.
```lean
def last? (xs : List α) : Except String α :=
match h : xs with
| [] =>
.error "Can't take first element of empty list"
| _ :: _ =>
.ok <| xs.getLast (show xs ≠ [] by intro h'; simp_all)
```
Without the name, {tactic}`simp_all` is unable to find the contradiction.
```lean +error (name := namedHyp)
def last?' (xs : List α) : Except String α :=
match xs with
| [] =>
.error "Can't take first element of empty list"
| _ :: _ =>
.ok <| xs.getLast (show xs ≠ [] by intro h'; simp_all)
```
```leanOutput namedHyp
simp_all made no progress
```
:::
### Explicit Motives
Pattern matching is not a built-in primitive of Lean.
Instead, it is translated to applications of {tech}[recursors] via {tech}[auxiliary matching functions].
Both require a {tech}_motive_ that explains the relationship between the discriminant and the resulting type.
Generally, the {keywordOf Lean.Parser.Term.match}`match` elaborator is capable of synthesizing an appropriate motive, and the refinement of types that occurs during pattern matching is a result of the motive that was selected.
In some specialized circumstances, a different motive may be needed and may be provided explicitly using the `(motive := …)` syntax of {keywordOf Lean.Parser.Term.match}`match`.
This motive should be a function type that expects at least as many parameters as there are discriminants.
The type that results from applying a function with this type to the discriminants in order is the type of the entire {keywordOf Lean.Parser.Term.match}`match` term, and the type that results from applying a function with this type to all patterns in each alternative is the type of that alternative's {tech}[right-hand side].
:::example "Matching with an Explicit Motive"
An explicit motive can be used to provide type information that is otherwise unavailable from the surrounding context.
Attempting to match on a number and a proof that it is in fact {lean}`5` is an error, because there's no reason to connect the number to the proof:
```lean +error (name := noMotive)
#eval
match 5, rfl with
| 5, rfl => "ok"
```
```leanOutput noMotive
Invalid match expression: This pattern contains metavariables:
Eq.refl ?m.76
```
An explicit motive explains the relationship between the discriminants:
```lean (name := withMotive)
#eval
match (motive := (n : Nat) → n = 5 → String) 5, rfl with
| 5, rfl => "ok"
```
```leanOutput withMotive
"ok"
```
:::
### Discriminant Refinement
When matching on an indexed family, the indices must also be discriminants.
Otherwise, the pattern would not be well typed: it is a type error if an index is just a variable but the type of a constructor requires a more specific value.
However, a process called {deftech}[discriminant refinement] automatically adds indices as additional discriminants.
::::keepEnv
:::example "Discriminant Refinement"
In the definition of {lean}`f`, the equality proof is the only discriminant.
However, equality is an indexed family, and the match is only valid when `n` is an additional discriminant.
```lean
def f (n : Nat) (p : n = 3) : String :=
match p with
| rfl => "ok"
```
Using {keywordOf Lean.Parser.Command.print}`#print` demonstrates that the additional discriminant was added automatically.
```lean (name := fDef)
#print f
```
```leanOutput fDef
def f : (n : Nat) → n = 3 → String :=
fun n p =>
match 3, p with
| .(n), ⋯ => "ok"
```
:::
::::
### Generalization
%%%
tag := "match-generalization"
%%%
The pattern match elaborator automatically determines the motive by finding occurrences of the discriminants in the expected type, generalizing them in the types of subsequent discriminants so that the appropriate pattern can be substituted.
Additionally, occurrences of the discriminants in the types of variables in the context are generalized and substituted by default.
This latter behavior can be turned off by passing the `(generalizing := false)` flag to {keywordOf Lean.Parser.Term.match}`match`.
:::::keepEnv
::::example "Matching, With and Without Generalization"
```lean -show
variable {α : Type u} (b : Bool) (ifTrue : b = true → α) (ifFalse : b = false → α)
```
In this definition of {lean}`boolCases`, the assumption {lean}`b` is generalized in the type of `h` and then replaced with the actual pattern.
This means that {lean}`ifTrue` and {lean}`ifFalse` have the types {lean}`true = true → α` and {lean}`false = false → α` in their respective cases, but `h`'s type mentions the original discriminant.
```lean +error (name := boolCases1) -keep
def boolCases (b : Bool)
(ifTrue : b = true → α)
(ifFalse : b = false → α) :
α :=
match h : b with
| true => ifTrue h
| false => ifFalse h
```
The error for the first case is typical of both:
```leanOutput boolCases1
Application type mismatch: The argument
h
has type
b = true
but is expected to have type
true = true
in the application
ifTrue h
```
Turning off generalization allows type checking to succeed, because {lean}`b` remains in the types of {lean}`ifTrue` and {lean}`ifFalse`.
```lean
def boolCases (b : Bool)
(ifTrue : b = true → α)
(ifFalse : b = false → α) :
α :=
match (generalizing := false) h : b with
| true => ifTrue h
| false => ifFalse h
```
In the generalized version, {name}`rfl` could alternatively have been used for the proof arguments.
::::
:::::
## Custom Pattern Functions
%%%
tag := "match_pattern-functions"
%%%
```lean -show
section
variable {n : Nat}
```
In patterns, defined constants with the {attr}`match_pattern` attribute are unfolded and normalized rather than rejected.
This allows a more convenient syntax to be used for many patterns.
In the standard library, {name}`Nat.add`, {name}`HAdd.hAdd`, {name}`Add.add`, and {name}`Neg.neg` all have this attribute, which allows patterns like {lean}`n + 1` instead of {lean}`Nat.succ n`.
Similarly, {name}`Unit` and {name}`Unit.unit` are definitions that set the respective {tech}[universe parameters] of {name}`PUnit` and {name}`PUnit.unit` to 0; the {attr}`match_pattern` attribute on {name}`Unit.unit` allows it to be used in patterns, where it expands to {lean}`PUnit.unit.{0}`.
:::syntax attr (title := "Attribute for Match Patterns")
The {attr}`match_pattern` attribute indicates that a definition should be unfolded, rather than rejected, in a pattern.
```grammar
match_pattern
```
:::
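As a sketch (the names `oneMore` and `pred?` are illustrative), a definition with this attribute can appear in patterns, where it is unfolded to {name}`Nat.succ`:

```lean
@[match_pattern]
def oneMore (n : Nat) : Nat := n + 1

def pred? : Nat → Option Nat
  | oneMore n => some n  -- unfolds to `Nat.succ n`
  | _ => none

#eval pred? 5  -- some 4
```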
::::keepEnv
```lean -show
section
variable {k : Nat}
```
:::example "Match Patterns Follow Reduction"
The following function can't be compiled:
```lean +error (name := nonPat)
def nonzero (n : Nat) : Bool :=
match n with
| 0 => false
| 1 + k => true
```
The error message on the pattern `1 + k` is:
```leanOutput nonPat
Invalid pattern(s): `k` is an explicit pattern variable, but it only occurs in positions that are inaccessible to pattern matching:
.(Nat.add 1 k)
```
This is because {name}`Nat.add` is defined by recursion on its second parameter, equivalent to the following:
```lean
def add : Nat → Nat → Nat
| a, Nat.zero => a
| a, Nat.succ b => Nat.succ (Nat.add a b)
```
No {tech}[ι-reduction] is possible, because the value being matched is a variable, not a constructor.
{lean}`1 + k` gets stuck as {lean}`Nat.add 1 k`, which is not a valid pattern.
In the case of {lean}`k + 1`, that is, {lean}`Nat.add k (.succ .zero)`, the second pattern matches, so it reduces to {lean}`Nat.succ (Nat.add k .zero)`.
The first pattern now matches, yielding {lean}`Nat.succ k`, which is a valid pattern.
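This chain of reductions can be confirmed with {name}`rfl`:

```lean
example (k : Nat) : k + 1 = Nat.succ k := rfl
```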
:::
```lean -show
end
```
::::
```lean -show
end
```
## Pattern Matching Functions
%%%
tag := "pattern-fun"
%%%
:::syntax term (title := "Pattern-Matching Functions")
Functions may be specified via pattern matching by writing a sequence of patterns after {keywordOf Lean.Parser.Term.fun}`fun`, each preceded by a vertical bar (`|`).
```grammar
fun
$[| $pat,* => $term]*
```
This desugars to a function that immediately pattern-matches on its arguments.
:::
::::keepEnv
:::example "Pattern-Matching Functions"
{lean}`isZero` is defined using a pattern-matching function abstraction, while {lean}`isZero'` is defined using a pattern match expression:
```lean
def isZero : Nat → Bool :=
fun
| 0 => true
| _ => false
def isZero' : Nat → Bool :=
fun n =>
match n with
| 0 => true
| _ => false
```
Because the former is syntactic sugar for the latter, they are definitionally equal:
```lean
example : isZero = isZero' := rfl
```
The desugaring is visible in the output of {keywordOf Lean.Parser.Command.print}`#print`:
```lean (name := isZero)
#print isZero
```
outputs
```leanOutput isZero
def isZero : Nat → Bool :=
fun x =>
match x with
| 0 => true
| x => false
```
while
```lean (name := isZero')
#print isZero'
```
outputs
```leanOutput isZero'
def isZero' : Nat → Bool :=
fun n =>
match n with
| 0 => true
| x => false
```
:::
::::
## Other Pattern Matching Operators
In addition to {keywordOf Lean.Parser.Term.match}`match` and {keywordOf termIfLet}`if let`, there are a few other operators that perform pattern matching.
:::syntax term (title := "The {keyword}`matches` Operator")
The {keywordOf Lean.«term_Matches_|»}`matches` operator returns {lean}`true` if the term on the left matches the pattern on the right.
```grammar
$e matches $e
```
:::
When branching on the result of {keywordOf Lean.«term_Matches_|»}`matches`, it's usually better to use {keywordOf termIfLet}`if let`, which can bind pattern variables in addition to checking whether a pattern matches.
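A quick sketch contrasts both styles:

```lean
-- `matches` yields a `Bool`, but binds no variables
#eval [1, 2, 3] matches (_ :: _)  -- true
-- `if let` both checks the pattern and binds `x`
#eval if let x :: _ := [1, 2, 3] then x else 0  -- 1
```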
```lean -show
/--
info: match 4 with
| n.succ => true
| x => false : Bool
-/
#check_msgs in
#check 4 matches (n + 1)
```
If there are no constructor patterns that could match a discriminant or sequence of discriminants, then the code in question is unreachable, as there must be a false assumption in the local context.
The {keywordOf Lean.Parser.Term.nomatch}`nomatch` expression is a match with zero cases that can have any type whatsoever, so long as there are no possible cases that could match the discriminants.
:::syntax term (title := "Caseless Pattern Matches")
```grammar
nomatch $e,*
```
:::
::::keepEnv
:::example "Inconsistent Indices"
There are no constructor patterns that can match both proofs in this example:
```lean
example (p1 : x = "Hello") (p2 : x = "world") : False :=
nomatch p1, p2
```
This is because they separately refine the value of `x` to unequal strings.
Thus, the {keywordOf Lean.Parser.Term.nomatch}`nomatch` operator allows the example's body to prove {lean}`False` (or any other proposition or type).
:::
::::
When the expected type is a function type, {keywordOf Lean.Parser.Term.nofun}`nofun` is shorthand for a function that takes as many parameters as the type indicates and whose body is a {keywordOf Lean.Parser.Term.nomatch}`nomatch` applied to all of the parameters.
:::syntax term (title := "Caseless Functions")
```grammar
nofun
```
:::
::::keepEnv
:::example "Impossible Functions"
Instead of introducing arguments for both equality proofs and then using both in a {keywordOf Lean.Parser.Term.nomatch}`nomatch`, it is possible to use {keywordOf Lean.Parser.Term.nofun}`nofun`.
```lean
example : x = "Hello" → x = "world" → False := nofun
```
:::
::::
## Elaborating Pattern Matching
%%%
tag := "pattern-match-elaboration"
draft := true
%%%
:::planned 209
Specify the elaboration of pattern matching to {deftech}[auxiliary match functions].
:::
# Holes
A {deftech}_hole_ or {deftech}_placeholder term_ is a term that indicates the absence of instructions to the elaborator.{index}[placeholder term]{index (subterm := "placeholder")}[term]
In terms, holes can be automatically filled when the surrounding context would only allow one type-correct term to be written where the hole is.
Otherwise, a hole is an error.
In patterns, holes represent universal patterns that can match anything.
:::syntax term (title := "Holes")
Holes are written with underscores.
```grammar
_
```
:::
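In a pattern position, a hole matches any value without binding a name. For instance (a minimal sketch):
```lean
def isZero : Nat → Bool
  | 0 => true
  | _ => false
```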
::::keepEnv
:::example "Filling Holes with Unification"
The function {lean}`the` can be used similarly to {keywordOf Lean.Parser.Term.show}`show` or a {tech}[type ascription].
```lean
def the (α : Sort u) (x : α) : α := x
```
If the second parameter's type can be inferred, then the first parameter can be a hole.
Both of these commands are equivalent:
```lean
#check the String "Hello!"
#check the _ "Hello!"
```
:::
::::
When writing proofs, it can be convenient to explicitly introduce unknown values.
This is done via {deftech}_synthetic holes_, which are never solved by unification and may occur in multiple positions.
They are primarily useful in tactic proofs, and are described in {ref "metavariables-in-proofs"}[the section on metavariables in proofs].
:::syntax term (title := "Synthetic Holes")
```grammar
?$x:ident
```
```grammar
?_
```
:::
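As a brief sketch of their use (a hypothetical example), named synthetic holes introduced by {tactic}`refine` become goals that can be solved by name:
```lean
example : 1 + 1 = 2 ∧ 2 + 2 = 4 := by
  refine ⟨?first, ?second⟩
  case first => rfl
  case second => rfl
```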
# Type Ascription
{deftech}_Type ascriptions_ explicitly annotate terms with their types.
They are a way to provide Lean with the expected type for a term.
This type must be definitionally equal to the type that is expected based on the term's context.
Type ascriptions are useful for more than just documenting a program:
* There may not be sufficient information in the program text to derive a type for a term. Ascriptions are one way to provide the type.
* An inferred type may not be the one that was desired for a term.
* The expected type of a term is used to drive the insertion of {tech}[coercions], and ascriptions are one way to control where coercions are inserted.
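For instance (a minimal sketch using built-in numeric types), an ascription can determine how a numeric literal elaborates:
```lean
-- Without an ascription, the literal defaults to `Nat`:
#check 255
-- The ascription supplies an expected type, selecting `UInt8` instead:
#check (255 : UInt8)
```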
:::syntax term (title := "Postfix Type Ascriptions")
Type ascriptions must be surrounded by parentheses.
They indicate that the term in the first position has the type given in the second position.
```grammar
($_ : $_)
```
:::
In cases where the term that requires a type ascription is long, such as a tactic proof or a {keywordOf Lean.Parser.Term.do}`do` block, the postfix type ascription with its mandatory parentheses can be difficult to read.
Additionally, for both proofs and {keywordOf Lean.Parser.Term.do}`do` blocks, the term's type is essential to its interpretation.
In these cases, the prefix versions can be easier to read.
:::syntax term (title := "Prefix Type Ascriptions")
```grammar
show $_ from $_
```
When the term in the body of {keywordOf Lean.Parser.Term.show}`show` is a tactic proof, the keyword {keywordOf Lean.Parser.Term.show}`from` may be omitted.
```grammar
show $_ by $_
```
:::
:::example "Ascribing Statements to Proofs"
In this example, the tactic proof cannot be checked as intended because the desired proposition is not known up front.
As the earlier tactics run, the unknown proposition is automatically refined into one that those tactics could prove.
However, the defaults chosen during this refinement fill it out incorrectly, leading to a proof that fails.
```lean (name := byBusted) +error
example (n : Nat) := by
induction n
next => rfl
next n' ih =>
simp only [HAdd.hAdd, Add.add, Nat.add] at *
rewrite [ih]
rfl
```
```leanOutput byBusted
Invalid rewrite argument: Expected an equality or iff proof or definition name, but `ih` is a proof of
0 ≍ n'
```
A prefix type ascription with {keywordOf Lean.Parser.Term.show}`show` can be used to provide the proposition being proved.
This can be useful in syntactic contexts where adding it as a local definition would be inconvenient.
```lean
example (n : Nat) := show 0 + n = n by
induction n
next => rfl
next n' ih =>
simp only [HAdd.hAdd, Add.add, Nat.add] at *
rewrite [ih]
rfl
```
:::
:::example "Ascribing Types to {keywordOf Lean.Parser.Term.do}`do` Blocks"
This example lacks sufficient type information to synthesize the {name}`Pure` instance.
```lean (name := doBusted) +error
example := do
return 5
```
```leanOutput doBusted
typeclass instance problem is stuck
Pure ?m.12
Note: Lean will not try to resolve this typeclass instance problem because the type argument to `Pure` is a metavariable. This argument must be fully determined before Lean will try to resolve the typeclass.
Hint: Adding type annotations and supplying implicit arguments to functions can give Lean more information for typeclass resolution. For example, if you have a variable `x` that you intend to be a `Nat`, but Lean reports it as having an unresolved type like `?m`, replacing `x` with `(x : Nat)` can get typeclass resolution un-stuck.
```
A prefix type ascription with {keywordOf Lean.Parser.Term.show}`show`, together with a {tech}[hole], can be used to indicate the monad.
The {tech (key := "default instance")}[default] {lean}`OfNat _ 5` instance provides enough type information to fill the hole with {lean}`Nat`.
```lean
example := show StateM String _ from do
return 5
```
:::
There is an important difference between postfix type ascriptions and {keywordOf Lean.Parser.Term.show}`show`.
Ordinary postfix type ascriptions change the type that is expected for the term, which can change the way that the term elaborates.
After elaboration, however, Lean infers the type of the resulting term and uses that inferred type for further elaboration tasks.
On the other hand, {keywordOf Lean.Parser.Term.show}`show` elaborates to a term whose inferred type is the ascribed type.
The difference can be observed when using {tech}[generalized field notation], where the ascribed type is only guaranteed to be used to resolve fields when using {keywordOf Lean.Parser.Term.show}`show`.
::::example "Postfix Ascription vs `show`"
:::paragraph
This definition establishes an alternative name for {lean}`List String`:
```lean
def Colors := List String
```
:::
:::paragraph
A postfix type ascription provides the type information that's needed to determine the implicit argument {name}`String` to {name}`List.nil`, but the resulting type is still {lean}`List String`:
```lean (name := nil)
#check ([] : Colors)
```
```leanOutput nil
[] : List String
```
:::
:::paragraph
When using {keywordOf Lean.Parser.Term.show}`show`, on the other hand, the elaborated term is constructed in such a way that the inferred type is {lean}`Colors`:
```lean (name := nil2)
#check (show Colors from [])
```
```leanOutput nil2
have this := [];
this : Colors
```
:::
:::paragraph
This function is designed to be invoked using {tech}[generalized field notation]:
```lean
def Colors.hasYellow (cs : Colors) : Bool :=
cs.any (·.toLower == "yellow")
```
:::
:::paragraph
Due to the differences in their inferred types, it can be used with {keywordOf Lean.Parser.Term.show}`show`, but not with the postfix type ascription:
```lean (name := nil3) +error
#eval ([] : Colors).hasYellow
```
```leanOutput nil3
Invalid field `hasYellow`: The environment does not contain `List.hasYellow`, so it is not possible to project the field `hasYellow` from an expression
[]
of type `List String`
```
```lean (name := nil4)
#eval (show Colors from []).hasYellow
```
```leanOutput nil4
false
```
:::
::::
# Quotation and Antiquotation
Quotation terms are described in the {ref "quotation"}[section on quotation].
# `do`-Notation
{keywordOf Lean.Parser.Term.do}`do`-notation is described {ref "do-notation"}[in the chapter on monads.]
# Proofs
The syntax for invoking tactics ({keywordOf Lean.Parser.Term.byTactic}`by`) is described in {ref "by"}[the section on proofs]. |
reference-manual/Manual/Releases.lean | import VersoManual
import Manual.Releases.«v4_28_0»
import Manual.Releases.«v4_27_0»
import Manual.Releases.«v4_26_0»
import Manual.Releases.«v4_25_1»
import Manual.Releases.«v4_25_0»
import Manual.Releases.«v4_24_0»
import Manual.Releases.«v4_23_0»
import Manual.Releases.«v4_22_0»
import Manual.Releases.«v4_21_0»
import Manual.Releases.«v4_20_0»
import Manual.Releases.«v4_19_0»
import Manual.Releases.«v4_18_0»
import Manual.Releases.«v4_17_0»
import Manual.Releases.«v4_16_0»
import Manual.Releases.«v4_15_0»
import Manual.Releases.«v4_14_0»
import Manual.Releases.«v4_13_0»
import Manual.Releases.«v4_12_0»
import Manual.Releases.«v4_11_0»
import Manual.Releases.«v4_10_0»
import Manual.Releases.«v4_9_0»
import Manual.Releases.«v4_8_0»
import Manual.Releases.«v4_7_0»
import Manual.Releases.«v4_6_0»
import Manual.Releases.«v4_5_0»
import Manual.Releases.«v4_4_0»
import Manual.Releases.«v4_3_0»
import Manual.Releases.«v4_2_0»
import Manual.Releases.«v4_1_0»
import Manual.Releases.«v4_0_0»
import Manual.Releases.«v4_0_0-m5»
import Manual.Releases.«v4_0_0-m4»
import Manual.Releases.«v4_0_0-m3»
import Manual.Releases.«v4_0_0-m2»
import Manual.Releases.«v4_0_0-m1»
open Manual
open Verso.Genre
#doc (Manual) "Release Notes" =>
%%%
tag := "release-notes"
file := "releases"
number := false
%%%
This section provides release notes about recent versions of Lean. When updating to a new version, please
read the corresponding release notes. They may contain advice that will help you understand
the differences with the previous version and upgrade your projects.
{include 0 Manual.Releases.«v4_28_0»}
{include 0 Manual.Releases.«v4_27_0»}
{include 0 Manual.Releases.«v4_26_0»}
{include 0 Manual.Releases.«v4_25_1»}
{include 0 Manual.Releases.«v4_25_0»}
{include 0 Manual.Releases.«v4_24_0»}
{include 0 Manual.Releases.«v4_23_0»}
{include 0 Manual.Releases.«v4_22_0»}
{include 0 Manual.Releases.«v4_21_0»}
{include 0 Manual.Releases.«v4_20_0»}
{include 0 Manual.Releases.«v4_19_0»}
{include 0 Manual.Releases.«v4_18_0»}
{include 0 Manual.Releases.«v4_17_0»}
{include 0 Manual.Releases.«v4_16_0»}
{include 0 Manual.Releases.«v4_15_0»}
{include 0 Manual.Releases.«v4_14_0»}
{include 0 Manual.Releases.«v4_13_0»}
{include 0 Manual.Releases.«v4_12_0»}
{include 0 Manual.Releases.«v4_11_0»}
{include 0 Manual.Releases.«v4_10_0»}
{include 0 Manual.Releases.«v4_9_0»}
{include 0 Manual.Releases.«v4_8_0»}
{include 0 Manual.Releases.«v4_7_0»}
{include 0 Manual.Releases.«v4_6_0»}
{include 0 Manual.Releases.«v4_5_0»}
{include 0 Manual.Releases.«v4_4_0»}
{include 0 Manual.Releases.«v4_3_0»}
{include 0 Manual.Releases.«v4_2_0»}
{include 0 Manual.Releases.«v4_1_0»}
{include 0 Manual.Releases.«v4_0_0»}
{include 0 Manual.Releases.«v4_0_0-m5»}
{include 0 Manual.Releases.«v4_0_0-m4»}
{include 0 Manual.Releases.«v4_0_0-m3»}
{include 0 Manual.Releases.«v4_0_0-m2»}
{include 0 Manual.Releases.«v4_0_0-m1»} |
reference-manual/Manual/Monads.lean | import VersoManual
import Manual.Meta
import Manual.Papers
import Manual.Monads.Syntax
import Manual.Monads.Zoo
import Manual.Monads.Lift
import Manual.Monads.API
import Manual.Monads.Laws
import Lean.Parser.Command
open Manual
open Verso.Genre
open Verso.Genre.Manual
open Verso.Genre.Manual.InlineLean
set_option pp.rawOnError true
set_option linter.unusedVariables false
set_option maxRecDepth 1024
#doc (Manual) "Functors, Monads and `do`-Notation" =>
%%%
tag := "monads-and-do"
%%%
The type classes {name}`Functor`, {name}`Applicative`, and {name}`Monad` provide fundamental tools for functional programming.{margin}[An introduction to programming with these abstractions is available in [_Functional Programming in Lean_](https://lean-lang.org/functional_programming_in_lean/functor-applicative-monad.html).]
While they are inspired by the concepts of functors and monads in category theory, the versions used for programming are more limited.
The type classes in Lean's standard library represent the concepts as used for programming, rather than the general mathematical definition.
Instances of {deftech}[{name}`Functor`] allow an operation to be applied consistently throughout some polymorphic context.
Examples include transforming each element of a list by applying a function and creating new {lean}`IO` actions by arranging for a pure function to be applied to the result of an existing {lean}`IO` action.
Instances of {deftech}[{name}`Monad`] allow side effects with data dependencies to be encoded; examples include using a tuple to simulate mutable state, a sum type to simulate exceptions, and representing actual side effects with {lean}`IO`.
{deftech}[{name}`Applicative` functors] occupy a middle ground: like monads, they allow functions computed with effects to be applied to arguments that are computed with effects, but they do not allow sequential data dependencies where the output of an effect forms an input into another effectful operation.
The additional type classes {name}`Pure`, {name}`Bind`, {name}`SeqLeft`, {name}`SeqRight`, and {name}`Seq` capture individual operations from {name}`Applicative` and {name}`Monad`, allowing them to be overloaded and used with types that are not necessarily {name}`Applicative` functors or {name}`Monad`s.
The {name}`Alternative` type class describes applicative functors that additionally have some notion of failure and recovery.
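As a brief illustration (a sketch using standard library instances), each of these classes contributes an operator:
```lean
#eval (· + 1) <$> [1, 2, 3]           -- `Functor.map`, yields [2, 3, 4]
#eval some (· + 1) <*> some 4         -- `Seq.seq`, yields some 5
#eval [1, 2] >>= fun x => [x, x]      -- `Bind.bind`, yields [1, 1, 2, 2]
#eval (none <|> some 3 : Option Nat)  -- `Alternative.orElse`, yields some 3
```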
{docstring Functor}
{docstring Pure}
{docstring Seq}
{docstring SeqLeft}
{docstring SeqRight}
{docstring Applicative}
:::::keepEnv
```lean -show
section
variable {α : Type u} {β : Type u}
```
::::example "Lists with Lengths as Applicative Functors"
The structure {name}`LenList` pairs a list with a proof that it has the desired length.
As a consequence, its `zipWith` operator doesn't require a fallback in case the lengths of its inputs differ.
```lean
structure LenList (length : Nat) (α : Type u) where
list : List α
lengthOk : list.length = length
def LenList.head (xs : LenList (n + 1) α) : α :=
xs.list.head <| by
intro h
cases xs
simp_all
subst_eqs
def LenList.tail (xs : LenList (n + 1) α) : LenList n α :=
match xs with
| ⟨_ :: xs', _⟩ => ⟨xs', by simp_all⟩
def LenList.map (f : α → β) (xs : LenList n α) : LenList n β where
list := xs.list.map f
lengthOk := by
cases xs
simp [List.length_map, *]
def LenList.zipWith (f : α → β → γ)
(xs : LenList n α) (ys : LenList n β) :
LenList n γ where
list := xs.list.zipWith f ys.list
lengthOk := by
cases xs; cases ys
simp [List.length_zipWith, *]
```
The well-behaved {name}`Applicative` instance applies functions to arguments element-wise.
Because {name}`Applicative` extends {name}`Functor`, a separate {name}`Functor` instance is not necessary, and {name Functor.map}`map` can be defined as part of the {name}`Applicative` instance.
```lean
instance : Applicative (LenList n) where
map := LenList.map
pure x := {
list := List.replicate n x
lengthOk := List.length_replicate
}
seq {α β} fs xs := fs.zipWith (· ·) (xs ())
```
The well-behaved {name}`Monad` instance takes the diagonal of the results of applying the function:
```lean
@[simp]
theorem LenList.list_length_eq (xs : LenList n α) :
xs.list.length = n := by
cases xs
simp [*]
def LenList.diagonal (square : LenList n (LenList n α)) : LenList n α :=
match n with
| 0 => ⟨[], rfl⟩
| n' + 1 => {
list :=
square.head.head :: (square.tail.map (·.tail)).diagonal.list
lengthOk := by simp
}
```
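Assuming the definitions above, the instance itself can be stated as follows (a sketch; {name}`Monad`'s default method implementations derive the remaining operations from `pure` and `bind`):
```lean
instance : Monad (LenList n) where
  pure x := ⟨List.replicate n x, List.length_replicate⟩
  bind xs f := (xs.map f).diagonal
```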
::::
```lean -show
end
```
:::::
{docstring Alternative}
{docstring Bind}
{docstring Monad}
{include 0 Manual.Monads.Laws}
{include 0 Manual.Monads.Lift}
{include 0 Manual.Monads.Syntax}
{include 0 Manual.Monads.API}
{include 0 Manual.Monads.Zoo} |
reference-manual/Manual/ExtractExamples.lean | import Manual.Meta.Example
import VersoManual
open Lean
open Verso Doc Multi
open Verso.Genre Manual
open Std (HashMap)
/--
A monad for traversing all example directives in a document, and
extracting them into individual files in a directory structured as the
document is structured. In the `Array (Array String × String)`, the
array of strings in each pair holds the path components of the example
file being written, and the string is the body of the example.
-/
abbrev ExtractM := ReaderT (Array String) (StateT (Array (Array String × String)) IO)
partial def extractExamples (_mode : Mode) (logError : String → IO Unit) (cfg : Manual.Config) (_state : TraverseState) (text : Part Manual) : IO Unit := do
let code := (← part text |>.run #[] |>.run #[]).snd
let dest := cfg.destination / "extracted-examples"
for (ctx, content) in code do
let filename := ctx.map (Slug.toString ∘ String.sluggify)
|>.foldl (init := dest) (· / ·)
|>.withExtension "lean"
filename.parent.forM IO.FS.createDirAll
IO.FS.writeFile filename content
where
part : Part Manual → ExtractM Unit
| .mk _ titleString _ intro subParts => withReader (·.push titleString) do
for b in intro do block b
for p in subParts do part p
block : Block Manual → ExtractM Unit
| .other which contents => do
if which.name == `Manual.example then
match FromJson.fromJson? which.data (α := Manual.ExampleBlockJson) with
| .error e => logError s!"Error deserializing example data: {e}"
| .ok (descrString, _, _, _, liveText) =>
let context ← read
let some txt := liveText
| return ()
modify fun saved =>
saved.push (context.push descrString, txt)
for b in contents do block b
| .concat bs | .blockquote bs =>
for b in bs do block b
| .ol _ lis | .ul lis =>
for li in lis do
for b in li.contents do block b
| .dl dis =>
for di in dis do
for b in di.desc do block b
| .para .. | .code .. => pure () |
reference-manual/Manual/Quotients.lean | import VersoManual
import Manual.Meta
import Manual.Language.Functions
import Manual.Language.InductiveTypes
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
#doc (Manual) "Quotients" =>
%%%
tag := "quotients"
%%%
{deftech}_Quotient types_ allow a new type to be formed by decreasing the granularity of an existing type's {tech}[propositional equality].
In particular, given a type $`A` and an equivalence relation $`\sim`, the quotient $`A / \sim` contains the same elements as $`A`, but any two elements that are related by $`\sim` are considered equal.
Equality is respected universally; nothing in Lean's logic can observe any difference between two equal terms.
Thus, quotient types provide a way to build an impenetrable abstraction barrier.
In particular, all functions from a quotient type must be shown to respect the equivalence relation.
{docstring Quotient}
A proof that two elements of the underlying type are related by the equivalence relation is sufficient to prove that they are equal in the {name}`Quotient`.
However, {tech}[definitional equality] is unaffected by the use of {lean}`Quotient`: two elements in the quotient are definitionally equal if and only if they are definitionally equal in the underlying type.
:::paragraph
Quotient types are not widely used in programming.
However, they occur regularly in mathematics:
: Integers
The integers are traditionally defined as pairs of natural numbers $`(n, k)`, where each pair encodes the integer $`n - k`.
In this encoding, two integers $`(n_1, k_1)` and $`(n_2, k_2)` are equal if $`n_1 + k_2 = n_2 + k_1`.
: Rational Numbers
The number $`\frac{n}{d}` can be encoded as the pair $`(n, d)`, where $`d \neq 0`.
Two rational numbers $`\frac{n_1}{d_1}` and $`\frac{n_2}{d_2}` are equal if $`n_1 d_2 = n_2 d_1`.
: Real Numbers
The real numbers can be represented as Cauchy sequences, but this encoding is not unique.
Using a quotient type, two Cauchy sequences can be made equal when their difference converges to zero.
: Finite Sets
Finite sets can be represented as lists of elements.
With a quotient type, two finite sets can be made equal if they contain the same elements; this definition does not impose any requirements (such as decidable equality or an ordering relation) on the type of elements.
:::
One alternative to quotient types would be to reason directly about the equivalence classes introduced by the relation.
The downside of this approach is that it does not allow _computation_: in addition to knowing _that_ there is an integer that is the sum of 5 and 8, it is useful for $`5 + 8 = 13` to not be a theorem that requires proof.
Defining functions out of sets of equivalence classes relies on non-computational classical reasoning principles, while functions from quotient types are ordinary computational functions that additionally respect an equivalence relation.
# Alternatives to Quotient Types
%%%
tag := "quotient-alternatives"
%%%
While {name}`Quotient` is a convenient way to form quotients with reasonable computational properties, it is often possible to define quotients in other ways.
In general, a type $`Q` is said to be the quotient of $`A` by an equivalence relation $`\sim` if it respects the universal property of quotients: there is a function $`q:A\to Q` with the property that $`q(a)=q(b)` if and only if $`a\sim b` for all $`a` and $`b` in $`A`.
Quotients formed with {name}`Quotient` have this property up to {tech}[propositional equality]: elements of $`A` that are related by $`\sim` are equal, so they cannot be distinguished.
However, members of the same equivalence class are not necessarily {tech (key := "definitional equality")}[definitionally equal] in the quotient.
Quotients may also be implemented by designating a single representative of each equivalence class in $`A` itself, and then defining $`Q` as the type of elements of $`A` paired with proofs that they are such canonical representatives.
Together with a function that maps each $`a` in $`A` to its canonical representative, $`Q` is a quotient of $`A`.
Due to {tech}[proof irrelevance], representatives in $`Q` of the same equivalence class are {tech (key := "definitional equality")}[definitionally equal].
Such a manually implemented quotient $`Q` can be easier to work with than {name}`Quotient`.
In particular, because each equivalence class is represented by its single canonical representative, there's no need to prove that functions from the quotient respect the equivalence relation.
It can also have better computational properties due to the fact that the computations give normalized values (in contrast, elements of {name}`Quotient` can be represented in multiple ways).
Finally, because the manually implemented quotient is an {tech}[inductive type], it can be used in contexts where other kinds of types cannot, such as when defining a {ref "nested-inductive-types"}[nested inductive type].
However, not all quotients can be manually implemented.
:::example "Manually Quotiented Integers"
When implemented as pairs of {lean}`Nat`s, each equivalence class according to the desired equality for integers has a canonical representative in which at least one of the {lean}`Nat`s is zero.
This can be represented as a Lean structure:
```lean
structure Z where
a : Nat
b : Nat
canonical : a = 0 ∨ b = 0
```
Due to {tech}[proof irrelevance], every value of this structure type that represents the same integer is _already_ equal.
Constructing a {lean}`Z` can be made more convenient with a wrapper that uses the fact that subtraction of natural numbers truncates at zero to automate the construction of the proof:
```lean
def Z.mk' (n k : Nat) : Z where
a := n - k
b := k - n
canonical := by omega
```
This construction respects the equality demanded of integers:
```lean
theorem Z_mk'_respects_eq :
(Z.mk' n k = Z.mk' n' k') ↔ (n + k' = n' + k) := by
simp [Z.mk']
omega
```
To use this type in examples, it's convenient to have {name}`Neg`, {name}`OfNat`, and {name}`ToString` instances, which make the examples easier to read and write.
```lean
instance : Neg Z where
neg n := Z.mk' n.b n.a
instance : OfNat Z n where
ofNat := Z.mk' n 0
instance : ToString Z where
toString n :=
if n.a = 0 then
if n.b = 0 then "0"
else s!"-{n.b}"
else toString n.a
```
```lean (name := intFive)
#eval (5 : Z)
```
```leanOutput intFive
5
```
```lean (name := intMinusFive)
#eval (-5 : Z)
```
```leanOutput intMinusFive
-5
```
Addition is addition of the underlying {lean}`Nat`s:
```lean
instance : Add Z where
add n k := Z.mk' (n.a + k.a) (n.b + k.b)
```
```lean (name := addInt)
#eval (-5 + 22: Z)
```
```leanOutput addInt
17
```
Because each equivalence class is uniquely represented, there's no need to write a proof that these functions from {lean}`Z` respect the equivalence relation.
However, in practice, the {ref "quotient-api"}[API for quotients] should be implemented for manually-constructed quotients and proved to respect the universal property.
:::
:::example "Built-In Integers as Quotients"
Lean's built-in integer type {lean}`Int` satisfies the universal property of quotients, and can thus be thought of as a quotient of pairs of {lean}`Nat`s.
The canonical representative of each equivalence class can be computed via comparison and subtraction:{margin}[This {lean}`toInt` function is called {name}`Int.subNatNat` in the standard library.]
```lean
def toInt (n k : Nat) : Int :=
if n < k then - (k - n : Nat)
else if n = k then 0
else (n - k : Nat)
```
It satisfies the universal property.
Two pairs of {lean}`Nat`s represent the same integer if and only if {lean}`toInt` computes the same {lean}`Int` for both pairs:
```lean
theorem toInt_sound :
n + k' = k + n' ↔
toInt n k = toInt n' k' := by
simp only [toInt]
split <;> split <;> omega
```
:::
# Setoids
%%%
tag := "setoids"
%%%
Quotient types are built on setoids.
A {deftech}_setoid_ is a type paired with a distinguished equivalence relation.
Unlike with a quotient type, the abstraction barrier is not enforced, and proof automation designed around equality cannot be used with the setoid's equivalence relation.
Setoids are useful on their own, in addition to being a building block for quotient types.
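As a minimal illustration (a hypothetical definition, not part of the standard library), a setoid can relate natural numbers that have the same parity:
```lean
def sameParity : Setoid Nat where
  r a b := a % 2 = b % 2
  iseqv := ⟨fun _ => rfl, Eq.symm, Eq.trans⟩
```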
{docstring Setoid}
{docstring Setoid.refl}
{docstring Setoid.symm}
{docstring Setoid.trans}
# Equivalence Relations
%%%
tag := "equivalence-relations"
%%%
An {deftech}_equivalence relation_ is a relation that is reflexive, symmetric, and transitive.
:::syntax term (title := "Equivalence Relations")
Equivalence according to some canonical equivalence relation for a type is written using `≈`, which is overloaded using the {tech}[type class] {name}`HasEquiv`.
```grammar
$_ ≈ $_
```
:::
{docstring HasEquiv}
```lean -show
section
variable (r : α → α → Prop)
```
The fact that a relation {lean}`r` is actually an equivalence relation is stated as {lean}`Equivalence r`.
{docstring Equivalence}
```lean -show
end
```
Every {name}`Setoid` instance leads to a corresponding {name}`HasEquiv` instance.
```lean -show
-- Check preceding para
section
variable {α : Sort u} [Setoid α]
/-- info: instHasEquivOfSetoid -/
#check_msgs in
#synth HasEquiv α
end
```
# Quotient API
%%%
tag := "quotient-api"
%%%
The quotient API relies on a pre-existing {name}`Setoid` instance.
## Introducing Quotients
%%%
tag := "quotient-intro"
%%%
The type {lean}`Quotient` expects an instance of {lean}`Setoid` as an ordinary parameter, rather than as an {tech}[instance implicit] parameter.
This helps ensure that the quotient uses the intended equivalence relation.
The instance can be provided either by naming the instance or by using {name}`inferInstance`.
A value in the quotient is a value from the setoid's underlying type, wrapped in {lean}`Quotient.mk`.
{docstring Quotient.mk}
{docstring Quotient.mk'}
:::example "The Integers as a Quotient Type"
The integers, defined as pairs of natural numbers where the represented integer is the difference of the two numbers, can be represented via a quotient type.
This representation is not unique: both {lean}`(4, 7)` and {lean}`(1, 4)` represent {lean (type := "Int")}`-3`.
Two encoded integers should be considered equal when they are related by {name}`Z.eq`:
```lean
def Z' : Type := Nat × Nat
def Z.eq (n k : Z') : Prop :=
n.1 + k.2 = n.2 + k.1
```
This relation is an equivalence relation:
```lean
def Z.eq.eqv : Equivalence Z.eq where
refl := by
intro (x, y)
simp +arith [eq]
symm := by
intro (x, y) (x', y') heq
simp_all only [eq]
omega
trans := by
intro (x, y) (x', y') (x'', y'')
intro heq1 heq2
simp_all only [eq]
omega
```
Thus, it can be used as a {name}`Setoid`:
```lean
instance Z.instSetoid : Setoid Z' where
r := Z.eq
iseqv := Z.eq.eqv
```
The type {lean}`Z` of integers is then the quotient of {lean}`Z'` by the {name}`Setoid` instance:
```lean
def Z : Type := Quotient Z.instSetoid
```
The helper {lean}`Z.mk` makes it simpler to create integers without worrying about the choice of {name}`Setoid` instance:
```lean
def Z.mk (n : Z') : Z := Quotient.mk _ n
```
However, numeric literals are even more convenient.
An {name}`OfNat` instance allows numeric literals to be used for integers:
```lean
instance : OfNat Z n where
ofNat := Z.mk (n, 0)
```
:::
## Eliminating Quotients
%%%
tag := "quotient-elim"
%%%
Functions from quotients can be defined by proving that a function from the underlying type respects the quotient's equivalence relation.
This is accomplished using {lean}`Quotient.lift` or its binary counterpart {lean}`Quotient.lift₂`.
The variants {lean}`Quotient.liftOn` and {lean}`Quotient.liftOn₂` place the quotient parameter first rather than last in the parameter list.
{docstring Quotient.lift}
{docstring Quotient.liftOn}
{docstring Quotient.lift₂}
{docstring Quotient.liftOn₂}
:::example "Integer Negation and Addition"
```lean -show
def Z' : Type := Nat × Nat
def Z.eq (n k : Z') : Prop :=
n.1 + k.2 = n.2 + k.1
def Z.eq.eqv : Equivalence Z.eq where
refl := by
intro (x, y)
simp +arith [eq]
symm := by
intro (x, y) (x', y') heq
simp_all only [eq]
omega
trans := by
intro (x, y) (x', y') (x'', y'')
intro heq1 heq2
simp_all only [eq]
omega
instance Z.instSetoid : Setoid Z' where
r := Z.eq
iseqv := Z.eq.eqv
def Z : Type := Quotient Z.instSetoid
def Z.mk (n : Z') : Z := Quotient.mk _ n
```
Given the encoding {lean}`Z` of integers as a quotient of pairs of natural numbers, negation can be implemented by swapping the first and second projections:
```lean
def neg' : Z' → Z
| (x, y) => .mk (y, x)
```
This can be transformed into a function from {lean}`Z` to {lean}`Z` by proving that negation respects the equivalence relation:
```lean
instance : Neg Z where
neg :=
Quotient.lift neg' <| by
intro n k equiv
apply Quotient.sound
simp only [· ≈ ·, Setoid.r, Z.eq] at *
omega
```
Similarly, {lean}`Quotient.lift₂` is useful for defining binary functions from a quotient type.
Addition is defined point-wise:
```lean
def add' (n k : Nat × Nat) : Z :=
.mk (n.1 + k.1, n.2 + k.2)
```
Lifting it to the quotient requires a proof that addition respects the equivalence relation:
```lean
instance : Add Z where
add (n : Z) :=
n.lift₂ add' <| by
intro n k n' k'
intro heq heq'
apply Quotient.sound
cases n; cases k; cases n'; cases k'
simp_all only [· ≈ ·, Setoid.r, Z.eq]
omega
```
:::
When the function's result type is a {tech}[subsingleton], {name}`Quotient.recOnSubsingleton` or {name}`Quotient.recOnSubsingleton₂` can be used to define the function.
Because all elements of a subsingleton are equal, such a function automatically respects the equivalence relation, so there is no proof obligation.
{docstring Quotient.recOnSubsingleton}
{docstring Quotient.recOnSubsingleton₂}
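For instance (a hypothetical sketch, with its own setoid definition for self-containment), equality of parity classes of natural numbers can be decided; because the {name}`Decidable` instances of a proposition form a subsingleton, no compatibility proof is required:
```lean
def parity : Setoid Nat where
  r a b := a % 2 = b % 2
  iseqv := ⟨fun _ => rfl, Eq.symm, Eq.trans⟩

instance : DecidableEq (Quotient parity) := fun p q =>
  Quotient.recOnSubsingleton₂ p q fun a b =>
    if h : a % 2 = b % 2 then
      .isTrue (Quotient.sound h)
    else
      .isFalse fun heq => h (Quotient.exact heq)
```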
## Proofs About Quotients
%%%
tag := "quotient-proofs"
%%%
The fundamental tools for proving properties of elements of quotient types are the soundness axiom and the induction principle.
The soundness axiom states that if two elements of the underlying type are related by the quotient's equivalence relation, then they are equal in the quotient type.
The induction principle follows the structure of recursors for inductive types: in order to prove that a predicate holds for all elements of a quotient type, it suffices to prove that it holds for an application of {name}`Quotient.mk` to each element of the underlying type.
Because {name}`Quotient` is not an {tech}[inductive type], tactics such as {tactic}`cases` and {tactic}`induction` require that {name}`Quotient.ind` be specified explicitly with the {keyword}`using` modifier.
{docstring Quotient.sound}
{docstring Quotient.ind}
:::example "Proofs About Quotients"
```lean -show
def Z' : Type := Nat × Nat
def Z.eq (n k : Z') : Prop :=
n.1 + k.2 = n.2 + k.1
def Z.eq.eqv : Equivalence Z.eq where
refl := by
intro (x, y)
simp +arith [eq]
symm := by
intro (x, y) (x', y') heq
simp_all only [eq]
omega
trans := by
intro (x, y) (x', y') (x'', y'')
intro heq1 heq2
simp_all only [eq]
omega
instance Z.instSetoid : Setoid Z' where
r := Z.eq
iseqv := Z.eq.eqv
def Z : Type := Quotient Z.instSetoid
def Z.mk (n : Z') : Z := Quotient.mk _ n
def neg' : Z' → Z
| (x, y) => .mk (y, x)
instance : Neg Z where
neg :=
Quotient.lift neg' <| by
intro n k equiv
apply Quotient.sound
simp only [· ≈ ·, Setoid.r, Z.eq] at *
omega
def add' (n k : Nat × Nat) : Z :=
.mk (n.1 + k.1, n.2 + k.2)
instance : Add Z where
add (n : Z) :=
n.lift₂ add' <| by
intro n k n' k'
intro heq heq'
apply Quotient.sound
cases n; cases k; cases n'; cases k'
simp_all only [· ≈ ·, Setoid.r, Z.eq]
omega
instance : OfNat Z n where
ofNat := Z.mk (n, 0)
```
Given the definition of integers as a quotient type from the prior examples, {name}`Quotient.ind` and {name}`Quotient.sound` can be used to prove that negation is an additive inverse.
First, {lean}`Quotient.ind` is used to replace instances of `n` with applications of {name}`Quotient.mk`.
Having done so, the left side of the equality becomes definitionally equal to a single application of {name}`Quotient.mk`, via unfolding definitions and the computation rule for {name}`Quotient.lift`.
This makes {name}`Quotient.sound` applicable, which yields a new goal: to show that both sides are related by the equivalence relation.
This is provable using {tactic}`simp` with the `+arith` option.
```lean
theorem Z.add_neg_inverse (n : Z) : n + (-n) = 0 := by
cases n using Quotient.ind
apply Quotient.sound
simp +arith [· ≈ ·, Setoid.r, eq]
```
:::
For more specialized use cases, {name}`Quotient.rec`, {name}`Quotient.recOn`, and {name}`Quotient.hrecOn` can be used to define dependent functions from a quotient type to a type in any other universe.
Stating that a dependent function respects the quotient's equivalence relation requires a means of dealing with the fact that the dependent result type is instantiated with different values from the quotient on each side of the equality.
{name}`Quotient.rec` and {name}`Quotient.recOn` use {name}`Quotient.sound` to equate the related elements, inserting the appropriate cast into the statement of equality, while {name}`Quotient.hrecOn` uses heterogeneous equality.
{docstring Quotient.rec}
{docstring Quotient.recOn}
{docstring Quotient.hrecOn}
If two elements of a type are equal in a quotient, then they are related by the setoid's equivalence relation.
This property is called {name}`Quotient.exact`.
{docstring Quotient.exact}
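As a small sketch (the `modEq5` setoid is hypothetical), {name}`Quotient.exact` recovers the underlying relation from an equality of quotient elements:

```lean
-- Hypothetical setoid relating numbers that are congruent modulo 5:
instance modEq5 : Setoid Nat where
  r a b := a % 5 = b % 5
  iseqv := ⟨fun _ => rfl, fun h => h.symm, fun h₁ h₂ => h₁.trans h₂⟩

-- From an equality in the quotient, the underlying relation is recovered:
example (h : Quotient.mk modEq5 2 = Quotient.mk modEq5 7) :
    2 % 5 = 7 % 5 :=
  Quotient.exact h
```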
# Logical Model
%%%
tag := "quotient-model"
%%%
Like functions and universes, quotient types are a built-in feature of Lean's type system.
However, the underlying primitives are based on the somewhat simpler {name}`Quot` type rather than on {name}`Quotient`, and {name}`Quotient` is defined in terms of {name}`Quot`.
The primary difference is that {name}`Quot` is based on an arbitrary relation, rather than a {name}`Setoid` instance.
The provided relation need not be an equivalence relation; the rules that govern {name}`Quot` and {name}`Eq` automatically extend the provided relation into its reflexive, symmetric, transitive closure.
When the relation is already an equivalence relation, {name}`Quotient` should be used instead of {name}`Quot` so Lean can make use of the fact that the relation is an equivalence relation.
The fundamental quotient type API consists of {lean}`Quot`, {name}`Quot.mk`, {name}`Quot.lift`, {name}`Quot.ind`, and {name}`Quot.sound`.
These are used in the same way as their {name}`Quotient`-based counterparts.
{docstring Quot}
{docstring Quot.mk}
{docstring Quot.lift}
{docstring Quot.ind}
{docstring Quot.sound}
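The following sketch (names hypothetical) uses {name}`Quot` with a relation that is not an equivalence relation: relating each number only to the number two greater nonetheless identifies all numbers of equal parity, because the closure rules fill in reflexivity, symmetry, and transitivity. Functions defined with {name}`Quot.lift` need only respect the bare relation:

```lean
-- A bare relation: each number is related to the number two greater.
def Step (a b : Nat) : Prop := b = a + 2

-- Functions out of the quotient need only respect `Step` itself:
def parityBit : Quot Step → Bool :=
  Quot.lift (fun n => n % 2 == 0) fun a b h => by
    simp [h, Nat.add_mod_right]

example : Quot.mk Step 0 = Quot.mk Step 2 :=
  Quot.sound rfl
```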
## Quotient Reduction
%%%
tag := "quotient-reduction"
%%%
```lean -show
section
variable
(α β : Sort u)
(r : α → α → Prop)
(f : α → β)
(resp : ∀ x y, r x y → f x = f y)
(x : α)
```
In addition to the above constants, Lean's kernel contains a reduction rule for {name}`Quot.lift` that causes it to reduce when used with {name}`Quot.mk`, analogous to {tech}[ι-reduction] for inductive types.
Given a relation {lean}`r` over {lean}`α`, a function {lean}`f` from {lean}`α` to {lean}`β`, and a proof {lean}`resp` that {lean}`f` respects {lean}`r`, the term {lean}`Quot.lift f resp (Quot.mk r x)` is {tech (key := "definitional equality")}[definitionally equal] to {lean}`f x`.
```lean -show
end
```
```lean -show
section
```
```lean
variable
(r : α → α → Prop)
(f : α → β)
(ok : ∀ x y, r x y → f x = f y)
(x : α)
example : Quot.lift f ok (Quot.mk r x) = f x := rfl
```
```lean -show
end
```
## Quotients and Inductive Types
%%%
tag := "quotients-nested-inductives"
%%%
Because {name}`Quot` is not an inductive type, types implemented as quotients may not be used for {ref "nested-inductive-types"}[nested occurrences] in inductive type declarations.
These type declarations must be rewritten to remove the nested quotient, which can often be done by defining a quotient-free version of the type and then separately defining an equivalence relation that implements the desired equality relation.
:::example "Nested Inductive Types and Quotients"
The nested inductive type of rose trees nests the recursive occurrence of {lean}`RoseTree` under {lean}`List`:
```lean
inductive RoseTree (α : Type u) where
| leaf : α → RoseTree α
| branch : List (RoseTree α) → RoseTree α
```
However, taking a quotient of the {name}`List` that identifies all elements in the style of {ref "squash-types"}[squash types] causes Lean to reject the declaration:
```lean +error (name := nestedquot)
inductive SetTree (α : Type u) where
| leaf : α → SetTree α
| branch :
Quot (fun (xs ys : List (SetTree α)) => True) →
SetTree α
```
```leanOutput nestedquot
(kernel) arg #2 of 'SetTree.branch' contains a non valid occurrence of the datatypes being declared
```
:::
## Low-Level Quotient API
{name}`Quot.liftOn` is a version of {name}`Quot.lift` that takes the quotient type's value first, by analogy to {name}`Quotient.liftOn`.
{docstring Quot.liftOn}
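A sketch of {name}`Quot.liftOn` in use (the relation and names are hypothetical); taking the quotient value first reads naturally with dot notation:

```lean
-- Numbers quotiented by a same-parity relation, via `Quot` directly:
def isEven (p : Quot (fun a b : Nat => a % 2 = b % 2)) : Bool :=
  p.liftOn (fun n => n % 2 == 0) fun _ _ h => by
    -- `h` is the parity equation, so rewriting with it closes the goal:
    simp [h]
```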
Lean also provides convenient elimination from {name}`Quot` into any subsingleton without further proof obligations, along with dependent elimination principles that correspond to those used for {name}`Quotient`.
{docstring Quot.recOnSubsingleton}
{docstring Quot.rec}
{docstring Quot.recOn}
{docstring Quot.hrecOn}
# Quotients and Function Extensionality
%%%
tag := "quotient-funext"
%%%
:::::keepEnv
Because Lean's definitional equality includes a computational reduction rule for {lean}`Quot.lift`, quotient types are used in the standard library to prove function extensionality, which would need to be an {ref "axioms"}[axiom] otherwise.
This is done by first defining a type of functions quotiented by extensional equality, for which extensional equality holds by definition.
```lean
variable {α : Sort u} {β : α → Sort v}
def extEq (f g : (x : α) → β x) : Prop :=
∀ x, f x = g x
def ExtFun (α : Sort u) (β : α → Sort v) :=
Quot (@extEq α β)
```
Extensional functions can be applied just like ordinary functions.
Application respects extensional equality by definition: if two functions give equal results for every argument, then applying them to any particular argument gives equal results.
```lean
def extApp
(f : ExtFun α β)
(x : α) :
β x :=
f.lift (· x) fun g g' h => by
exact h x
```
```lean -show
section
variable (f : (x : α) → β x)
```
To show that two functions that are extensionally equal are in fact equal, it suffices to show that the functions that result from extensionally applying the corresponding extensional functions are equal.
This is because
```leanTerm
extApp (Quot.mk _ f)
```
is definitionally equal to
```leanTerm
fun x => (Quot.mk extEq f).lift (· x) (fun _ _ h => h x)
```
which is definitionally equal to {lean}`fun x => f x`, which is definitionally equal (by {tech}[η-equivalence]) to {lean}`f`.
A propositional version of the computation rule for {name}`Quot.lift` would not suffice, because the reducible expression occurs in the body of a function and rewriting by an equality in a function would already require function extensionality.
```lean -show
end
```
From here, it is enough to show that the extensional versions of the two functions are equal.
This is true due to {name}`Quot.sound`: the fact that they are related by the quotient's equivalence relation is an assumption.
This proof is a much more explicit version of the one in the standard library:
```lean
theorem funext'
{f g : (x : α) → β x}
(h : ∀ x, f x = g x) :
f = g := by
suffices extApp (Quot.mk _ f) = extApp (Quot.mk _ g) by
unfold extApp at this
dsimp at this
exact this
suffices Quot.mk extEq f = Quot.mk extEq g by
apply congrArg
exact this
apply Quot.sound
exact h
```
:::::
# Squash Types
%%%
tag := "squash-types"
%%%
```lean -show
section
variable {α : Sort u}
```
A squash type is the quotient of a type by the relation that relates all elements, transforming it into a {tech}[subsingleton].
In other words, if {lean}`α` is inhabited, then {lean}`Squash α` has a single element, and if {lean}`α` is uninhabited, then {lean}`Squash α` is also uninhabited.
Unlike {lean}`Nonempty α`, which is a proposition stating that {lean}`α` is inhabited and is thus represented by a dummy value at runtime, {lean}`Squash α` is a type that is represented identically to {lean}`α`.
Because {lean}`Squash α` is in the same universe as {lean}`α`, it is not subject to the restrictions on computing data from propositions.
```lean -show
end
```
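As a small sketch, any two elements of a squash type can be proved equal with {name}`Quot.sound`, since the underlying relation relates everything:

```lean
-- All elements of `Squash Nat` are identified:
example : Squash.mk 3 = Squash.mk 4 :=
  Quot.sound trivial

-- `Squash.ind` suffices for proofs, since the goal is a proposition:
example (s : Squash Nat) : s = Squash.mk 0 := by
  induction s using Squash.ind
  exact Quot.sound trivial
```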
{docstring Squash}
{docstring Squash.mk}
{docstring Squash.lift}
{docstring Squash.ind}
import VersoManual
import Manual.Meta
import Manual.Papers
import Manual.ValidatingProofs
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
set_option pp.rawOnError true
set_option guard_msgs.diff true
open Lean (Syntax SourceInfo)
#doc (Manual) "Elaboration and Compilation" =>
%%%
htmlSplit := .never
%%%
Roughly speaking, Lean's processing of a source file can be divided into the following stages:
: Parsing
The parser transforms sequences of characters into syntax trees of type {lean}`Syntax`.
Lean's parser is extensible, so the {lean}`Syntax` type is very general.
: Macro Expansion
Macros are transformations that replace syntactic sugar with more basic syntax.
Both the input and output of macro expansion have type {lean}`Syntax`.
: Elaboration
{deftech (key := "Lean elaborator") -normalize}[Elaboration] is the process of transforming Lean's user-facing syntax into its core type theory.
This core theory is much simpler, enabling the trusted kernel to be very small.
Elaboration additionally produces metadata, such as proof states or the types of expressions, used for Lean's interactive features, storing them in a side table.
: Kernel Checking
Lean's trusted kernel checks the output of the elaborator to ensure that it follows the rules of the type theory.
: Compilation
The compiler transforms elaborated Lean code into executables that can be run.
:::figure "The Lean Pipeline" (tag := "pipeline-overview")

:::
In reality, the stages described above do not strictly occur one after the other.
Lean parses a single {tech}[command] (top-level declaration), elaborates it, and performs any necessary kernel checks.
Macro expansion is part of elaboration; before translating a piece of syntax, the elaborator first expands any macros present at the outermost layer.
Macro syntax may remain at deeper layers, but it will be expanded when the elaborator reaches those layers.
There are multiple kinds of elaboration: command elaboration implements the effects of each top-level command (e.g. declaring {tech}[inductive types], saving definitions, evaluating expressions), while term elaboration is responsible for constructing the terms that occur in many commands (e.g. types in signatures, the right-hand sides of definitions, or expressions to be evaluated).
Tactic execution is a specialization of term elaboration.
When a command is elaborated, the state of Lean changes.
New definitions or types may have been saved for future use, the syntax may be extended, or the set of names that can be referred to without explicit qualification may have changed.
The next command is parsed and elaborated in this updated state, and itself updates the state for subsequent commands.
# Parsing
%%%
tag := "parser"
%%%
Lean's parser is a recursive-descent parser that uses dynamic tables based on Pratt parsing{citep pratt73}[] to resolve operator precedence and associativity.
When grammars are unambiguous, the parser does not need to backtrack; in the case of ambiguous grammars, a memoization table similar to that used in Packrat parsing avoids exponential blowup.
Parsers are highly extensible: users may define new syntax in any command, and that syntax becomes available in the next command.
The open namespaces in the current {tech}[section scope] also influence which parsing rules are used, because parser extensions may be set to be active only when a given namespace is open.
When ambiguity is encountered, the longest matching parse is selected.
If there is no unique longest match, then both matching parses are saved in the syntax tree in a {deftech}[choice node] to be resolved later by the elaborator.
When the parser fails, it returns a {lean}`Syntax.missing` node, allowing for error recovery.
When successful, the parser saves sufficient information to reconstruct the original source file.
Unsuccessful parses may miss some information for the regions of the file that cannot be parsed.
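This extensibility can be seen in a short sketch (the `⋄` operator is hypothetical): a notation defined by one command is available to the parser in the next command.

```lean
-- A new left-associative infix operator at precedence 65:
infixl:65 " ⋄ " => Nat.add

-- The parser accepts the new syntax in subsequent commands:
#eval 2 ⋄ 3  -- 5
```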
The {lean}`SourceInfo` record type records information about the origin of a piece of syntax, including its source location and the surrounding whitespace.
Based on the {lean}`SourceInfo` field, there are three relationships that {lean}`Syntax` can have to a source file:
* {lean}`SourceInfo.original` indicates that the syntax value was produced directly by the parser.
* {lean}`SourceInfo.synthetic` indicates that the syntax value was produced programmatically, e.g. by the macro expander. Synthetic syntax may nonetheless be marked _canonical_, in which case the Lean user interface treats it as if the user had written it. Synthetic syntax is annotated with positions in the original file, but does not include leading or trailing whitespace.
* {lean}`SourceInfo.none` indicates no relationship to a file.
The parser maintains a token table that tracks the reserved words that are currently part of the language.
Defining new syntax or opening namespaces can cause a formerly-valid identifier to become a keyword.
Each production in Lean's grammar is named.
The name of a production is called its {deftech}_kind_.
These syntax kinds are important, because they are the key used to look up the interpretation of the syntax in the elaborator's tables.
Syntax extensions are described in more detail in {ref "language-extension"}[a dedicated chapter].
# Macro Expansion and Elaboration
%%%
tag := "macro-and-elab"
%%%
Having parsed a command, the next step is to elaborate it.
The precise meaning of {deftech -normalize}_elaboration_ depends on what is being elaborated: elaborating a command effects a change in the state of Lean, while elaborating a term results in a term in Lean's core type theory.
Elaboration of both commands and terms may be recursive, both because of command combinators such as {keywordOf Lean.Parser.Command.in}`in` and because terms may contain other terms.
Command and term elaboration have different capabilities.
Command elaboration may have side effects on an environment, and it can run arbitrary computations in {lean}`IO`.
Lean {deftech}[environments] contain the usual mapping from names to definitions along with additional data defined in {deftech}[environment extensions], which are additional tables associated with an environment; environment extensions are used to track most other information about Lean code, including {tactic}`simp` lemmas, custom pretty printers, and internals such as the compiler's intermediate representations.
Command elaboration also maintains a message log with the contents of the compiler's informational output, warnings, and errors, a set of {tech}[info trees] that associate metadata with the original syntax (used for interactive features such as displaying proof states, identifier completion, and showing documentation), accumulated debugging traces, the open {tech}[section scopes], and some internal state related to macro expansion.
Term elaboration may modify all of these fields except the open scopes.
Additionally, it has access to all the machinery needed to create fully-explicit terms in the core language from Lean's terse, friendly syntax, including unification, type class instance synthesis, and type checking.
The first step in both term and command elaboration is macro expansion.
There is a table that maps syntax kinds to macro implementations; macro implementations are monadic functions that transform the macro syntax into new syntax.
Macros are saved in the same table and execute in the same monad for terms, commands, tactics, and any other macro-extensible part of Lean.
If the syntax returned by the macro is itself a macro, then that syntax is again expanded—this process is repeated until either a syntax whose kind is not a macro is produced, or until a maximum number of iterations is reached, at which point Lean produces an error.
Typical macros process some outer layer of their syntax, leaving some subterms untouched.
This means that even when macro expansion has been completed, there still may be macro invocations remaining in the syntax below the top level.
New macros may be added to the macro table.
Defining new macros is described in detail in {ref "macros"}[the section on macros].
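As a small sketch (the `twice!` macro is hypothetical), a macro is a function from syntax to syntax, and its output is expanded and elaborated in turn:

```lean
-- A macro that duplicates its argument under addition:
macro "twice!" e:term : term => `($e + $e)

-- Expands to `(2 + 3) + (2 + 3)` before elaboration:
#eval twice! (2 + 3)  -- 10
```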
After macro expansion, both the term and command elaborators consult tables that map syntax kinds to elaboration procedures.
Term elaborators map syntax and an optional expected type to a core language expression using the very powerful monad mentioned above.
Command elaborators accept syntax and return no value, but may have monadic side effects on the global command state.
While both term and command elaborators have access to {lean}`IO`, it is unusual for them to perform side effects; exceptions include interactions with external tools or solvers.
The elaborator tables may be extended, enabling the use of new syntax for both terms and commands.
See {ref "elaborators"}[the section on elaborators] for a description of how to add additional elaborators to Lean.
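A sketch of extending the command elaborator table (the `#greet` command is hypothetical): the {keyword}`elab` command registers an elaboration procedure for new command syntax.

```lean
-- A hypothetical command elaborator that logs a message:
open Lean Elab Command in
elab "#greet" name:str : command => do
  logInfo m!"Hello, {name.getString}!"

#greet "Lean"
```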
When commands or terms contain further commands or terms, they recursively invoke the appropriate elaborator on the nested syntax.
This elaborator will then expand macros before invoking elaborators from the table.
While macro expansion occurs prior to elaboration for a given “layer” of the syntax, macro expansion and elaboration are interleaved in general.
## Info Trees
When interacting with Lean code, much more information is needed than when simply importing it as a dependency.
For example, Lean's interactive environment can be used to view the types of selected expressions, to step through all the intermediate states of a proof, to view documentation, and highlight all occurrences of a bound variable.
The information necessary to use Lean interactively is stored in a side table of {deftech}_info trees_ during elaboration.
```lean -show
open Lean.Elab (Info)
```
Info trees relate metadata to the user's original syntax.
Their tree structure corresponds closely to the tree structure of the syntax, although a given node in the syntax tree may have many corresponding info tree nodes that document different aspects of it.
This metadata includes the elaborator's output in Lean's core language, the proof state active at a given point, suggestions for interactive identifier completion, and much more.
The metadata can also be arbitrarily extended; the constructor {lean}`Info.ofCustomInfo` accepts a {lean}`Dynamic` type.
This can be used to add information to be used by custom code actions or other user interface extensions.
# The Kernel
Lean's trusted {deftech}_kernel_ is a small, robust implementation of a type checker for the core type theory.
It does not include a syntactic termination checker, nor does it perform unification; termination is guaranteed by elaborating all recursive functions into uses of primitive {tech}[recursors], and unification is expected to have already been carried out by the elaborator.
Before new inductive types or definitions are added to the environment by the command or term elaborators, they must be checked by the kernel to guard against potential bugs in elaboration.
Lean's kernel is written in C++.
There are independent re-implementations in [Rust](https://github.com/ammkrn/nanoda_lib) and [Lean](https://github.com/digama0/lean4lean), and the Lean project is interested in having as many implementations as possible so that they can be cross-checked against each other.
The language implemented by the kernel is a version of the Calculus of Constructions, a dependent type theory with the following features:
* Full dependent types
* Inductively-defined types that may be mutually inductive or include recursion nested under other inductive types
* An {tech}[impredicative], definitionally proof-irrelevant, extensional {tech}[universe] of {tech}[propositions]
* A {tech}[predicative], non-cumulative hierarchy of universes of data
* {ref "quotients"}[Quotient types] with a definitional computation rule
* Propositional function extensionality{margin}[Function extensionality is a theorem that can be proved using quotient types, but it is such an important consequence that it's worth listing separately.]
* Definitional {tech (key := "η-equivalence")}[η-equality] for functions and products
* Universe-polymorphic definitions
* Consistency: there is no axiom-free closed term of type {lean}`False`
```lean -show -keep
-- Test definitional eta for structures
structure A where
x : Nat
y : Int
example (a : A) : ⟨a.x, a.y⟩ = a := rfl
set_option linter.unusedVariables false in
inductive B where
| mk (x : Nat) (y : Int) : B
example (b : B) : ⟨b.1, b.2⟩ = b := rfl
/--
error: Type mismatch
rfl
has type
?m.836 = ?m.836
but is expected to have type
e1 = e2
-/
#check_msgs in
example (e1 e2 : Empty) : e1 = e2 := rfl
```
This theory is rich enough to express leading-edge research mathematics, and yet simple enough to admit a small, efficient implementation.
The presence of explicit proof terms makes it feasible to implement independent proof checkers, increasing our confidence.
It is described in detail by {citet carneiro19}[] and {citet ullrich23}[].
Lean's type theory does not feature subject reduction, the definitional equality is not necessarily transitive, and it is possible to make the type checker fail to terminate.
None of these metatheoretic properties cause problems in practice—failures of transitivity are exceedingly rare, and as far as we know, non-termination has not occurred except when crafting code specifically to exercise it.
Most importantly, logical soundness is not affected.
In practice, apparent non-termination is indistinguishable from sufficiently slow programs; the latter are the causes observed in the wild.
These metatheoretic properties are a result of having impredicativity, quotient types that compute, definitional proof irrelevance, and propositional extensionality; these features are immensely valuable both to support ordinary mathematical practice and to enable automation.
# Elaboration Results
%%%
tag := "elaboration-results"
%%%
Lean's core type theory does not include pattern matching or recursive definitions.
Instead, it provides low-level {tech}[recursors] that can be used to implement both case distinction and primitive recursion.
Thus, the elaborator must translate definitions that use pattern matching and recursion into definitions that use recursors.{margin}[More details on the elaboration of recursive definitions are available in the {ref "recursive-definitions"}[dedicated section] on the topic.]
This translation is additionally a proof that the function terminates for all potential arguments, because all functions that can be translated to recursors also terminate.
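For instance, a structurally recursive function over lists corresponds to a direct use of {name}`List.rec`. The second definition below is a hand-written sketch of this correspondence, not the elaborator's actual output:

```lean
def length' : List α → Nat
  | [] => 0
  | _ :: xs => length' xs + 1

-- Roughly what the kernel receives: recursion expressed via the recursor,
-- with one argument for the nil case and one for the cons case.
def length'' (xs : List α) : Nat :=
  xs.rec 0 fun _ _ ih => ih + 1
```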
The translation to recursors happens in two phases: during term elaboration, uses of pattern matching are replaced by appeals to {deftech}_auxiliary matching functions_ (also referred to as {deftech}_matcher functions_) that implement the particular case distinction that occurs in the code.
These auxiliary functions are themselves defined using recursors, though they do not make use of the recursors' ability to actually implement recursive behavior.{margin}[They use variants of the `casesOn` construction that is described in the {ref "recursor-elaboration-helpers"}[section on recursors and elaboration], specialized to reduce code size.]
The term elaborator thus returns core-language terms in which pattern matching has been replaced with the use of special functions that implement case distinction, but these terms may still contain recursive occurrences of the function being defined.
A definition that still includes recursion, but has otherwise been elaborated to the core language, is called a {deftech}[pre-definition].
To see auxiliary pattern matching functions in Lean's output, set the option {option}`pp.match` to {lean}`false`.
{optionDocs pp.match}
```lean -show -keep
def third_of_five : List α → Option α
| [_, _, x, _, _] => some x
| _ => none
set_option pp.match false
/--
info: @[reducible] def third_of_five._sparseCasesOn_1.{u_1, u} : {α : Type u} →
{motive : List α → Sort u_1} →
(t : List α) →
((head : α) → (tail : List α) → motive (head :: tail)) → (Nat.hasNotBit 2 t.ctorIdx → motive t) → motive t :=
fun {α} {motive} t cons =>
List.rec (motive := fun t => (Nat.hasNotBit 2 t.ctorIdx → motive t) → motive t) (fun «else» => «else» ⋯)
(fun head tail tail_ih «else» => cons head tail) t
-/
#check_msgs in
#print third_of_five._sparseCasesOn_1
/--
info: third_of_five.eq_def.{u_1} {α : Type u_1} (x✝ : List α) :
third_of_five x✝ =
third_of_five.match_1 (fun x => Option α) x✝ (fun head head_1 x head_2 head_3 => some x) fun x => none
-/
#check_msgs in
#check third_of_five.eq_def
/--
info: def third_of_five.match_1.{u_1, u_2} : {α : Type u_1} →
(motive : List α → Sort u_2) →
(x : List α) →
((head head_1 x head_2 head_3 : α) → motive [head, head_1, x, head_2, head_3]) →
((x : List α) → motive x) → motive x :=
fun {α} motive x h_1 h_2 =>
third_of_five._sparseCasesOn_1 x
(fun head tail =>
third_of_five._sparseCasesOn_1 tail
(fun head_1 tail =>
third_of_five._sparseCasesOn_1 tail
(fun head_2 tail =>
third_of_five._sparseCasesOn_1 tail
(fun head_3 tail =>
third_of_five._sparseCasesOn_1 tail
(fun head_4 tail =>
third_of_five._sparseCasesOn_2 tail (h_1 head head_1 head_2 head_3 head_4) fun h =>
h_2 (head :: head_1 :: head_2 :: head_3 :: head_4 :: tail))
fun h => h_2 (head :: head_1 :: head_2 :: head_3 :: tail))
fun h => h_2 (head :: head_1 :: head_2 :: tail))
fun h => h_2 (head :: head_1 :: tail))
fun h => h_2 (head :: tail))
fun h => h_2 x
-/
#check_msgs in
#print third_of_five.match_1
```
:::paragraph
The pre-definition is then sent to the compiler and to the kernel.
The compiler receives the pre-definition as-is, with recursion intact.
The version sent to the kernel, on the other hand, undergoes a second transformation that replaces explicit recursion with {ref "structural-recursion"}[uses of recursors], {ref "well-founded-recursion"}[well-founded recursion], or {ref "partial-fixpoint"}[partial fixpoint recursion].
This split is for three reasons:
* The compiler can compile {ref "partial-unsafe"}[`partial` functions] that the kernel treats as opaque constants for the purposes of reasoning.
* The compiler can also compile {ref "partial-unsafe"}[`unsafe` functions] that bypass the kernel entirely.
* Translation to recursors does not necessarily preserve the cost model expected by programmers, in particular laziness vs strictness, but compiled code must have predictable performance.
The other strategies used to justify recursive definitions result in internal terms that are even further from the program as it was written.
The compiler stores an intermediate representation in an environment extension.
:::
For straightforwardly structurally recursive functions, the translation will use the type's recursor.
These functions tend to be relatively efficient when run in the kernel, their defining equations hold definitionally, and they are easy to understand.
Functions that use other patterns of recursion that cannot be captured by the type's recursor are translated using {tech}[well-founded recursion], which is structural recursion on a proof that some {tech}[measure] decreases at each recursive call, or using {ref "partial-fixpoint"}[partial fixpoints], which logically capture at least part of a function's specification by appealing to domain-theoretic constructions.
Lean can automatically derive many of these termination proofs, but some require manual proofs.
Well-founded recursion is more flexible, but the resulting functions are often slower to execute in the kernel due to the proof terms that show that a measure decreases, and their defining equations may hold only propositionally.
To provide a uniform interface to functions defined via structural and well-founded recursion and to check its own correctness, the elaborator proves {deftech}[equational lemmas] that relate the function to its original definition.
In the function's namespace, `eq_unfold` relates the function directly to its definition, `eq_def` relates it to the definition after instantiating implicit parameters, and $`N` lemmas `eq_N` relate each case of its pattern-matching to the corresponding right-hand side, including sufficient assumptions to indicate that earlier branches were not taken.
::::keepEnv
:::example "Equational Lemmas"
Given the definition of {lean}`thirdOfFive`:
```lean
def thirdOfFive : List α → Option α
| [_, _, x, _, _] => some x
| _ => none
```
equational lemmas are generated that relate {lean}`thirdOfFive` to its definition.
{lean}`thirdOfFive.eq_unfold` states that it can be unfolded to its original definition when no arguments are provided:
```signature
thirdOfFive.eq_unfold.{u_1} :
@thirdOfFive.{u_1} = fun {α : Type u_1} x =>
match x with
| [head, head_1, x, head_2, head_3] => some x
| x => none
```
{lean}`thirdOfFive.eq_def` states that it matches its definition when applied to arguments:
```signature
thirdOfFive.eq_def.{u_1} {α : Type u_1} :
∀ (x : List α),
thirdOfFive x =
match x with
| [head, head_1, x, head_2, head_3] => some x
| x => none
```
{lean}`thirdOfFive.eq_1` shows that its first defining equation holds:
```signature
thirdOfFive.eq_1.{u} {α : Type u}
(head head_1 x head_2 head_3 : α) :
thirdOfFive [head, head_1, x, head_2, head_3] = some x
```
{lean}`thirdOfFive.eq_2` shows that its second defining equation holds:
```signature
thirdOfFive.eq_2.{u_1} {α : Type u_1} :
∀ (x : List α),
(∀ (head head_1 x_1 head_2 head_3 : α),
x = [head, head_1, x_1, head_2, head_3] → False) →
thirdOfFive x = none
```
The final lemma {lean}`thirdOfFive.eq_2` includes a premise that the first branch could not have matched (that is, that the list does not have exactly five elements).
:::
::::
::::keepEnv
:::example "Recursive Equational Lemmas"
Given the definition of {lean}`everyOther`:
```lean
def everyOther : List α → List α
| [] => []
| [x] => [x]
| x :: _ :: xs => x :: everyOther xs
```
equational lemmas are generated that relate {lean}`everyOther`'s recursor-based implementation to its original recursive definition.
{lean}`everyOther.eq_unfold` states that `everyOther` with no arguments is equal to its unfolding:
```signature
everyOther.eq_unfold.{u} :
@everyOther.{u} = fun {α} x =>
match x with
| [] => []
| [x] => [x]
| x :: _ :: xs => x :: everyOther xs
```
{lean}`everyOther.eq_def` states that {lean}`everyOther` is equal to its definition when applied to arguments:
```signature
everyOther.eq_def.{u} {α : Type u} :
∀ (x : List α),
everyOther x =
match x with
| [] => []
| [x] => [x]
| x :: _ :: xs => x :: everyOther xs
```
{lean}`everyOther.eq_1` demonstrates its first pattern:
```signature
everyOther.eq_1.{u} {α : Type u} : everyOther [] = ([] : List α)
```
{lean}`everyOther.eq_2` demonstrates its second pattern:
```signature
everyOther.eq_2.{u} {α : Type u} (x : α) : everyOther [x] = [x]
```
{lean}`everyOther.eq_3` demonstrates its final pattern:
```signature
everyOther.eq_3.{u} {α : Type u} (x y : α) (xs : List α) :
everyOther (x :: y :: xs) = x :: everyOther xs
```
Because the patterns do not overlap, no assumptions about prior patterns not having matched are necessary for the equational lemmas.
:::
::::
After elaborating a module, having checked each addition to the environment with the kernel, the changes that the module made to the global environment (including extensions) are serialized to a {deftech}[`.olean` file].
In these files, Lean terms and values are represented just as they are in memory; thus the file can be directly memory-mapped.
All code paths that lead to Lean adding to the environment involve the new type or definition first being checked by the kernel.
However, Lean is a very open, flexible system.
To guard against the possibility of poorly-written metaprograms adding unchecked values to the environment, a separate tool, `lean4checker`, can be used to validate that the entire environment in a `.olean` file satisfies the kernel.
In addition to the `.olean` file, the elaborator produces a `.ilean` file, which is an index used by the language server.
This file contains information needed to work interactively with the module without fully loading it, such as the source positions of definitions.
The contents of `.ilean` files are an implementation detail and may change in any release.
Finally, the compiler is invoked to translate the intermediate representation of functions stored in its environment extension into C code.
A C file is produced for each Lean module; these are then compiled to native code using a bundled C compiler.
If the `precompileModules` option is set in the build configuration, then this native code can be dynamically loaded and invoked by Lean; otherwise, an interpreter is used.
For most workloads, the overhead of compilation is larger than the time saved by avoiding the interpreter, but some workloads can be sped up dramatically by pre-compiling tactics, language extensions, or other extensions to Lean.
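As a sketch, assuming a hypothetical package named `myPkg` with a library `MyPkg`, the option can be enabled in a `lakefile.lean` build configuration:
```lean
import Lake
open Lake DSL

package myPkg

-- Native code for this library is compiled ahead of time and
-- dynamically loaded by Lean instead of being interpreted:
lean_lib MyPkg where
  precompileModules := true
```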
# Initialization
%%%
tag := "initialization"
%%%
Before starting up, the elaborator must be correctly initialized.
Lean itself contains {deftech}[initialization] code that must be run in order to correctly construct the compiler's initial state; this code is run before loading any modules and before the elaborator is invoked.
Furthermore, each dependency may itself contribute initialization code, _e.g._ to set up environment extensions.
Internally, each environment extension is assigned a unique index into an array, and this array's size is equal to the number of registered environment extensions, so the number of extensions must be known in order to correctly allocate an environment.
After running Lean's own builtin initializers, the module's header is parsed and the dependencies' `.olean` files are loaded into memory.
A “pre-environment” is constructed that contains the union of the dependencies' environments.
Next, all initialization code specified by the dependencies is executed in the interpreter.
At this point, the number of environment extensions is known, so the pre-environment can be reallocated into an environment structure with a correctly-sized extensions array.
:::syntax command (title := "Initialization Blocks")
An {keywordOf Lean.Parser.Command.initialize}`initialize` block adds code to the module's initializers.
The contents of an {keywordOf Lean.Parser.Command.initialize}`initialize` block are treated as the contents of a {keywordOf Lean.Parser.Term.do}`do` block in the {lean}`IO` monad.
Sometimes, initialization only needs to extend internal data structures by side effects.
In that case the contents are expected to have type {lean}`IO Unit`:
```grammar
initialize
$cmd*
```
Initialization may also be used to construct values that contain references to internal state, such as attributes that are backed by an environment extension.
In this form of {keywordOf Lean.Parser.Command.initialize}`initialize`, initialization should return the specified type in the {lean}`IO` monad.
```grammar
initialize $x:ident : $t:term ←
$cmd*
```
:::
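As a minimal sketch, an initializer can allocate a mutable reference that persists for the lifetime of the process; the name `counterRef` here is hypothetical:
```lean
-- Runs when the module is loaded; the resulting reference can be
-- read and modified by later code in this module and its importers.
initialize counterRef : IO.Ref Nat ← IO.mkRef 0

-- The side-effect-only form is expected to have type `IO Unit`:
initialize IO.println "module initialized"
```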
:::syntax command (title := "Compiler-Internal Initializers")
Lean's internals also define code that must run during initialization.
However, because Lean is a bootstrapping compiler, special care must be taken with initializers defined as part of Lean itself, and Lean's own initializers must run prior to importing or loading _any_ modules.
These initializers are specified using {keywordOf Lean.Parser.Command.initialize}`builtin_initialize`, which should not be used outside the compiler's implementation.
```grammar
builtin_initialize
$cmd*
```
```grammar
builtin_initialize $x:ident : $t:term ←
$cmd*
```
::: |
reference-manual/Manual/Tactics.lean | import VersoManual
import Lean.Parser.Term
import Manual.Meta
import Manual.Tactics.Reference
import Manual.Tactics.Conv
import Manual.Tactics.Custom
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
set_option pp.rawOnError true
set_option linter.unusedVariables false
open Lean.Elab.Tactic
#doc (Manual) "Tactic Proofs" =>
%%%
tag := "tactics"
%%%
The tactic language is a special-purpose programming language for constructing proofs.
In Lean, {tech}[propositions] are represented by types, and proofs are terms that inhabit these types.
{margin}[The {ref "propositions"}[section on propositions] describes propositions in more detail.]
While terms are designed to make it convenient to indicate a specific inhabitant of a type, tactics are designed to make it convenient to demonstrate that a type is inhabited.
This distinction exists because it's important that definitions pick out the precise objects of interest and that programs return the intended results, but proof irrelevance means that there's no _technical_ reason to prefer one proof term over another.
For example, given two assumptions of a given type, a program must be carefully written to use the correct one, while a proof may use either without consequence.
Tactics are imperative programs that modify a {deftech}_proof state_.{index}[proof state]
A proof state consists of an ordered sequence of {deftech}_goals_, which are contexts of local assumptions together with types to be inhabited; a tactic may either _succeed_ with a possibly-empty sequence of further goals (called {deftech}_subgoals_) or _fail_ if it cannot make progress.
If a tactic succeeds with no subgoals, then the proof is complete.
If it succeeds with one or more subgoals, then its goal or goals will be proved when those subgoals have been proved.
The first goal in the proof state is called the {deftech}_main goal_.{index (subterm := "main")}[goal]{index}[main goal]
While most tactics affect only the main goal, operators such as {tactic}`<;>` and {tactic}`all_goals` can be used to apply a tactic to many goals, and operators such as bullets, {tactic}`next` or {tactic}`case` can narrow the focus of subsequent tactics to only a single goal in the proof state.
Behind the scenes, tactics construct {deftech}[proof terms].
Proof terms are independently checkable evidence of a theorem's truth, written in Lean's type theory.
Each proof is checked in the {tech}[kernel], and can be verified with independently-implemented external checkers, so the worst outcome from a bug in a tactic is a confusing error message, rather than an incorrect proof.
Each goal in a tactic proof corresponds to an incomplete portion of a proof term.
# Running Tactics
%%%
tag := "by"
%%%
:::TODO
The syntax of `by` is shown with commas instead of semicolons below
:::
:::syntax Lean.Parser.Term.byTactic (title := "Tactic Proofs with {keyword}`by`")
Tactics are included in terms using {keywordOf Lean.Parser.Term.byTactic}`by`, which is followed by a sequence of tactics in which each has the same indentation:
```grammar
by
$t
```
Alternatively, explicit braces and semicolons may be used:
```grammar
by { $t* }
```
:::
Tactics are invoked using the {keywordOf Lean.Parser.Term.byTactic}`by` term.
When the elaborator encounters {keywordOf Lean.Parser.Term.byTactic}`by`, it invokes the tactic interpreter to construct the resulting term.
Tactic proofs may be embedded via {keywordOf Lean.Parser.Term.byTactic}`by` in any context in which a term can occur.
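For instance, a small tactic proof can fill a term-level hole wherever a term is expected:
```lean
-- `by` produces a term whose type is the expected proposition:
example : 2 + 2 = 4 := by rfl

-- A `by` block may also appear inside a larger term, here
-- supplying the proof component of a subtype value:
def positiveFive : { n : Nat // n > 0 } := ⟨5, by decide⟩
```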
# Reading Proof States
%%%
tag := "proof-states"
%%%
The goals in a proof state are displayed in order, with the main goal on top.
Goals may be either named or anonymous.
Named goals are indicated with `case` at the top (called a {deftech}_case label_), while anonymous goals have no such indicator.
Tactics assign goal names, typically on the basis of constructor names, parameter names, structure field names, or the nature of the reasoning step implemented by the tactic.
::::example "Named goals"
```customCSS
#lawful-option-cases .goal-name { background-color: var(--lean-compl-yellow); }
```
This proof state contains four goals, all of which are named.
This is part of a proof that the {lean}`Monad Option` instance is lawful (that is, to provide the {lean}`LawfulMonad Option` instance), and the case names (highlighted below) come from the names of the fields of {name}`LawfulMonad`.
```proofState (tag := "lawful-option-cases")
LawfulMonad Option := by
constructor
intro α β f x
rotate_right
intro α β γ x f g
rotate_right
intro α β x f
rotate_right
intro α β f x
rotate_right
```
::::
::::example "Anonymous Goals"
This proof state contains a single anonymous goal.
```proofState
∀ (n k : Nat), n + k = k + n := by
intro n k
```
::::
The {tactic}`case` and {tactic}`case'` tactics can be used to select a new main goal using the desired goal's name.
When names are assigned in the context of a goal which itself has a name, the new goals' names are appended to the main goal's name with a dot (`'.', Unicode FULL STOP (0x2e)`) between them.
::::example "Hierarchical Goal Names"
:::tacticExample
```setup
intro n k
induction n
```
In the course of an attempt to prove {goal}`∀ (n k : Nat), n + k = k + n`, this proof state can occur:
```pre
case zero
k : Nat
⊢ 0 + k = k + 0
case succ
k n✝ : Nat
a✝ : n✝ + k = k + n✝
⊢ n✝ + 1 + k = k + (n✝ + 1)
```
After {tacticStep}`induction k`, the two new cases' names have `zero` as a prefix, because they were created in a goal named `zero`:
```customCSS
#hierarchical-case-names .goal:not(:last-child) .goal-name { background-color: var(--lean-compl-yellow); }
```
```post (tag := "hierarchical-case-names")
case zero.zero
⊢ 0 + 0 = 0 + 0
case zero.succ
n✝ : Nat
a✝ : 0 + n✝ = n✝ + 0
⊢ 0 + (n✝ + 1) = n✝ + 1 + 0
case succ
k n✝ : Nat
a✝ : n✝ + k = k + n✝
⊢ n✝ + 1 + k = k + (n✝ + 1)
```
:::
::::
Each goal consists of a sequence of assumptions and a desired conclusion.
Each assumption has a name and a type; the conclusion is a type.
Assumptions are either arbitrary elements of some type or statements that are presumed true.
::::example "Assumption Names and Conclusion"
```customCSS
#ex-assumption-names .hypothesis .name { background-color: var(--lean-compl-yellow); }
```
This goal has four assumptions:
```proofState (tag := "ex-assumption-names")
∀ (α) (xs : List α), xs ++ [] = xs := by
intro α xs
induction xs
sorry
rename_i x xs ih
```
:::keepEnv
```lean -show
axiom α : Type
axiom x : α
axiom xs : List α
axiom ih : xs ++ [] = xs
```
They are:
* {lean}`α`, an arbitrary type
* {lean}`x`, an arbitrary {lean}`α`
* {lean}`xs`, an arbitrary {lean}`List α`
* {lean}`ih`, an induction hypothesis that asserts that appending the empty list to {lean}`xs` is equal to {lean}`xs`.
The conclusion is the statement that prepending `x` to both sides of the equality in the induction hypothesis results in equal lists.
:::
::::
Some assumptions are {deftech}_inaccessible_, {index}[inaccessible] {index (subterm := "inaccessible")}[assumption] which means that they cannot be referred to explicitly by name.
Inaccessible assumptions occur when an assumption is created without a specified name or when the assumption's name is shadowed by a later assumption.
Inaccessible assumptions should be regarded as anonymous; they are presented as if they had names because they may be referred to in later assumptions or in the conclusion, and displaying a name allows these references to be distinguished from one another.
In particular, inaccessible assumptions are presented with daggers (`†`) after their names.
::::example "Accessible Assumption Names"
```customCSS
#option-cases-accessible .hypothesis .name { background-color: var(--lean-compl-yellow); }
```
In this proof state, all assumptions are accessible.
```proofState (tag := "option-cases-accessible")
LawfulMonad Option := by
constructor
intro α β f x
rotate_right
sorry
rotate_right
sorry
rotate_right
sorry
rotate_right
```
::::
::::example "Inaccessible Assumption Names"
```customCSS
#option-cases-inaccessible .hypotheses .hypothesis:nth-child(even) .name { background-color: var(--lean-compl-yellow); }
```
In this proof state, only the first and third assumptions are accessible.
The second and fourth are inaccessible, and their names include a dagger to indicate that they cannot be referenced.
```proofState (tag := "option-cases-inaccessible")
LawfulMonad Option := by
constructor
intro α _ f _
rotate_right
sorry
rotate_right
sorry
rotate_right
sorry
rotate_right
```
::::
Inaccessible assumptions can still be used.
Tactics such as {tactic}`assumption` or {tactic}`simp` can scan the entire list of assumptions, finding one that is useful, and {tactic}`contradiction` can eliminate the current goal by finding an impossible assumption without naming it.
Other tactics, such as {tactic}`rename_i` and {tactic}`next`, can be used to name inaccessible assumptions, making them accessible.
Additionally, assumptions can be referred to by their type, by writing the type in single guillemets.
::::syntax term (title := "Assumptions by Type")
Single guillemets around a term represent a reference to some term in scope with that type.
```grammar
‹$t›
```
This can be used to refer to local lemmas by their theorem statement rather than by name, or to refer to assumptions regardless of whether they have explicit names.
::::
::::example "Assumptions by Type"
:::keepEnv
```lean -show
variable (n : Nat)
```
In the following proof, {tactic}`cases` is repeatedly used to analyze a number.
At the beginning of the proof, the number is named `x`, but {tactic}`cases` generates an inaccessible name for subsequent numbers.
Rather than providing names, the proof takes advantage of the fact that there is a single assumption of type {lean}`Nat` at any given time and uses {lean}`‹Nat›` to refer to it.
After the iteration, there is an assumption that `n + 3 < 3`, which {tactic}`contradiction` can use to remove the goal from consideration.
:::
```lean
example : x < 3 → x ∈ [0, 1, 2] := by
intros
iterate 3
cases ‹Nat›
. decide
contradiction
```
::::
::::example "Assumptions by Type, Outside Proofs"
Single-guillemet syntax also works outside of proofs:
```lean (name := evalGuillemets)
#eval
let x := 1
let y := 2
‹Nat›
```
```leanOutput evalGuillemets
2
```
This is generally not a good idea for non-propositions, however—when it matters _which_ element of a type is selected, it's better to select it explicitly.
::::
## Hiding Proofs and Large Terms
%%%
tag := "hiding-terms-in-proof-states"
%%%
Terms in proof states can be quite big, and there may be many assumptions.
Because of definitional proof irrelevance, proof terms typically give little useful information.
By default, they are not shown in goals in proof states unless they are {deftech}_atomic_, meaning that they contain no subterms.
Hiding proofs is controlled by two options: {option}`pp.proofs` turns the feature on and off, while {option}`pp.proofs.threshold` determines a size threshold for proof hiding.
:::example "Hiding Proof Terms"
In this proof state, the proof that `0 < n` is hidden.
```proofState
∀ (n : Nat) (i : Fin n), i.val > 5 → (⟨0, by cases i; omega⟩ : Fin n) < i := by
intro n i gt
/--
n : Nat
i : Fin n
gt : ↑i > 5
⊢ ⟨0, ⋯⟩ < i
-/
```
:::
{optionDocs pp.proofs}
{optionDocs pp.proofs.threshold}
Additionally, non-proof terms may be hidden when they are too large.
In particular, Lean will hide terms that are below a configurable depth threshold, and it will hide the remainder of a term once a certain amount in total has been printed.
Showing deep terms can be enabled or disabled with the option {option}`pp.deepTerms`, and the depth threshold can be configured with the option {option}`pp.deepTerms.threshold`.
The maximum number of pretty printer steps can be configured with the option {option}`pp.maxSteps`.
Printing very large terms can lead to slowdowns or even stack overflows in tooling; please be conservative when adjusting these options' values.
{optionDocs pp.deepTerms}
{optionDocs pp.deepTerms.threshold}
{optionDocs pp.maxSteps}
## Metavariables
%%%
tag := "metavariables-in-proofs"
%%%
Terms that begin with a question mark are {deftech}_metavariables_ that correspond to an unknown value.
They may stand for either {tech}[universe] levels or for terms.
Some metavariables arise as part of Lean's elaboration process, when not enough information is yet available to determine a value.
These metavariables' names have a numeric component at the end, such as `?m.392` or `?u.498`.
Other metavariables come into existence as a result of tactics or {tech}[synthetic holes].
These metavariables' names do not have a numeric component.
Metavariables that result from tactics frequently appear as goals whose {tech}[case labels] match the name of the metavariable.
::::example "Universe Level Metavariables"
In this proof state, the universe level of `α` is unknown:
```proofState
∀ (α : _) (x : α) (xs : List α), x ∈ xs → xs.length > 0 := by
intros α x xs elem
/--
α : Type ?u.912
x : α
xs : List α
elem : x ∈ xs
⊢ xs.length > 0
-/
```
::::
::::example "Type Metavariables"
In this proof state, the type of list elements is unknown.
The metavariable is repeated because the unknown type must be the same in both positions.
```proofState
∀ (x : _) (xs : List _), x ∈ xs → xs.length > 0 := by
intros x xs elem
/--
x : ?m.1035
xs : List ?m.1035
elem : x ∈ xs
⊢ xs.length > 0
-/
```
::::
::::example "Metavariables in Proofs"
:::tacticExample
{goal -show}`∀ (i j k : Nat), i < j → j < k → i < k`
```setup
intros i j k h1 h2
```
In this proof state,
```pre
i j k : Nat
h1 : i < j
h2 : j < k
⊢ i < k
```
applying the tactic {tacticStep}`apply Nat.lt_trans` results in the following proof state, in which the middle value of the transitivity step `?m` is unknown:
```post
case h₁
i j k : Nat
h1 : i < j
h2 : j < k
⊢ i < ?m
case a
i j k : Nat
h1 : i < j
h2 : j < k
⊢ ?m < k
case m
i j k : Nat
h1 : i < j
h2 : j < k
⊢ Nat
```
:::
::::
::::example "Explicitly-Created Metavariables"
:::tacticExample
{goal -show}`∀ (i j k : Nat), i < j → j < k → i < k`
```setup
intros i j k h1 h2
```
Explicit named holes are represented by metavariables, and additionally give rise to proof goals.
In this proof state,
```pre
i j k : Nat
h1 : i < j
h2 : j < k
⊢ i < k
```
applying the tactic {tacticStep}`apply @Nat.lt_trans i ?middle k ?p1 ?p2` results in the following proof state, in which the middle value of the transitivity step `?middle` is unknown and goals have been created for each of the named holes in the term:
```post
case middle
i j k : Nat
h1 : i < j
h2 : j < k
⊢ Nat
case p1
i j k : Nat
h1 : i < j
h2 : j < k
⊢ i < ?middle
case p2
i j k : Nat
h1 : i < j
h2 : j < k
⊢ ?middle < k
```
:::
::::
The display of metavariable numbers can be disabled using the {option}`pp.mvars` option.
This can be useful with features such as {keywordOf Lean.guardMsgsCmd}`#guard_msgs` that match Lean's output against a desired string, which is especially important when writing tests for custom tactics.
{optionDocs pp.mvars}
::::draft
:::planned 68
Demonstrate and explain diff labels that show the difference between the steps of a proof state.
:::
::::
# The Tactic Language
%%%
tag := "tactic-language"
%%%
A tactic script consists of a sequence of tactics, separated either by semicolons or newlines.
When separated by newlines, tactics must be indented to the same level.
Explicit curly braces and semicolons may be used instead of indentation.
Tactic sequences may be grouped by parentheses.
This allows a sequence of tactics to be used in a position where a single tactic would otherwise be grammatically expected.
Generally, execution proceeds from top to bottom, with each tactic running in the proof state left behind by the prior tactic.
The tactic language contains a number of control structures that can modify this flow.
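As a sketch, the following proofs express the same tactic sequence in the three forms described above:
```lean
-- Newline-separated tactics at the same indentation:
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  constructor
  exact hp
  exact hq

-- Explicit braces and semicolons:
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  { constructor; exact hp; exact hq }

-- Parentheses group a sequence where a single tactic is expected:
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  constructor <;> (first | exact hp | exact hq)
```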
Each tactic is a syntax extension in the `tactic` category.
This means that tactics are free to define their own concrete syntax and parsing rules.
However, most tactics can be identified by a leading keyword; the exceptions are typically frequently-used built-in control structures such as {tactic}`<;>`.
## Control Structures
%%%
tag := "tactic-language-control"
%%%
Strictly speaking, there is no fundamental distinction between control structures and other tactics.
Any tactic is free to take others as arguments and arrange for their execution in any context that it sees fit.
Even if a distinction is arbitrary, however, it can still be useful.
The tactics in this section are those that resemble traditional control structures from programming, or those that _only_ recombine other tactics rather than making progress themselves.
### Success and Failure
%%%
tag := "tactic-language-success-failure"
%%%
When run in a proof state, every tactic either succeeds or fails.
Tactic failure is akin to exceptions: failures typically “bubble up” until handled.
Unlike exceptions, there is no operator to distinguish between reasons for failure; {tactic}`first` simply takes the first branch that succeeds.
::: tactic "fail"
:::
:::tactic "fail_if_success"
:::
:::tactic "try"
:::
:::tactic "first"
:::
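As a small illustration of failure bubbling up and being handled:
```lean
-- `rfl` fails because the two sides are not definitionally equal,
-- so `first` falls through to `omega`:
example (n k : Nat) : n + k = k + n := by
  first
    | rfl
    | omega

-- `try` converts a failure into a success without making progress;
-- here `contradiction` fails, and `rfl` then closes the goal:
example (n : Nat) : n + 0 = n := by
  try contradiction
  rfl
```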
### Branching
%%%
tag := "tactic-language-branching"
%%%
Tactic proofs may use pattern matching and conditionals.
However, their meaning is not quite the same as it is in terms.
While terms are expected to be executed once the values of their variables are known, proofs are executed with their variables left abstract and should consider _all_ cases simultaneously.
Thus, when {keyword}`if` and {keyword}`match` are used in tactics, their meaning is reasoning by cases rather than selection of a concrete branch.
All of their branches are executed, and the condition or pattern match is used to refine the main goal with more information in each branch, rather than to select a single branch.
:::tactic "if"
:::
:::example "Reasoning by cases with `if`"
In each branch of the {keywordOf Lean.Parser.Tactic.tacIfThenElse}`if`, an assumption is added that reflects whether `n = 0`.
```lean
example (n : Nat) : if n = 0 then n < 1 else n > 0 := by
if n = 0 then
simp [*]
else
simp only [↓reduceIte, gt_iff_lt, *]
omega
```
:::
:::tactic Lean.Parser.Tactic.match (show := "match")
When pattern matching, instances of the {tech (key := "match discriminant")}[discriminant] in the goal are replaced with the patterns that match them in each branch.
Each branch must then prove the refined goal.
Compared to the `cases` tactic, using `match` can allow a greater degree of flexibility in the cases analysis being performed, but the requirement that each branch solve its goal completely makes it more difficult to incorporate into larger automation scripts.
:::
:::example "Reasoning by cases with `match`"
In each branch of the {keywordOf Lean.Parser.Tactic.match}`match`, the discriminant `n` has been replaced by either `0` or `k + 1`.
```lean
example (n : Nat) : if n = 0 then n < 1 else n > 0 := by
match n with
| 0 =>
simp
| k + 1 =>
simp
```
:::
### Goal Selection
%%%
tag := "tactic-language-goal-selection"
%%%
Most tactics affect the {tech}[main goal].
Goal selection tactics provide a way to treat a different goal as the main one, rearranging the sequence of goals in the proof state.
:::tactic "case"
:::
:::tactic "case'"
:::
:::tactic "rotate_left"
:::
:::tactic "rotate_right"
:::
#### Sequencing
%%%
tag := "tactic-language-sequencing"
%%%
In addition to running tactics one after the other, each being used to solve the main goal, the tactic language supports sequencing tactics according to the way in which goals are produced.
The {tactic}`<;>` tactic combinator allows a tactic to be applied to _every_ {tech}[subgoal] produced by some other tactic.
If no new goals are produced, then the second tactic is not run.
:::tactic "<;>"
If the tactic fails on any of the {tech}[subgoals], then the whole {tactic}`<;>` tactic fails.
:::
::::example "Subgoal Sequencing"
:::tacticExample
```setup
intro x h
```
{goal -show}`∀x, x = 1 ∨ x = 2 → x < 3`
In this proof state:
```pre
x : Nat
h : x = 1 ∨ x = 2
⊢ x < 3
```
the tactic {tacticStep}`cases h` yields the following two goals:
```post
case inl
x : Nat
h✝ : x = 1
⊢ x < 3
case inr
x : Nat
h✝ : x = 2
⊢ x < 3
```
:::
:::tacticExample
```setup
intro x h
```
{goal -show}`∀x, x = 1 ∨ x = 2 → x < 3`
```pre -show
x : Nat
h : x = 1 ∨ x = 2
⊢ x < 3
```
Running {tacticStep}`cases h ; simp [*]` causes {tactic}`simp` to solve the first goal, leaving the second behind:
```post
case inr
x : Nat
h✝ : x = 2
⊢ x < 3
```
:::
:::tacticExample
```setup
intro x h
```
{goal -show}`∀x, x = 1 ∨ x = 2 → x < 3`
```pre -show
x : Nat
h : x = 1 ∨ x = 2
⊢ x < 3
```
Replacing the `;` with {tactic}`<;>` and running {tacticStep}`cases h <;> simp [*]` solves *both* of the new goals with {tactic}`simp`:
```post
```
:::
::::
#### Working on Multiple Goals
%%%
tag := "tactic-language-multiple-goals"
%%%
The tactics {tactic}`all_goals` and {tactic}`any_goals` allow a tactic to be applied to every goal in the proof state.
The difference between them is that if the tactic fails in any of the goals, {tactic}`all_goals` itself fails, while {tactic}`any_goals` fails only if the tactic fails in all of the goals.
:::tactic "all_goals"
:::
:::tactic "any_goals"
:::
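As a sketch of the difference:
```lean
-- `all_goals` requires the tactic to succeed in every goal:
example : 1 + 1 = 2 ∧ 2 + 2 = 4 := by
  constructor
  all_goals decide

-- `any_goals` succeeds as long as the tactic closes at least one
-- goal; `decide` fails on the abstract proposition `p`, which is
-- then proved separately:
example (p : Prop) (hp : p) : (1 + 1 = 2) ∧ p := by
  constructor
  any_goals decide
  exact hp
```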
### Focusing
%%%
tag := "tactic-language-focusing"
%%%
Focusing tactics remove some subset of the proof goals (typically leaving only the main goal) from the consideration of some further tactics.
In addition to the tactics described here, the {tactic}`case` and {tactic}`case'` tactics focus on the selected goal.
:::tactic Lean.cdot (show := "·")
It is generally considered good Lean style to use bullets whenever a tactic line results in more than one new subgoal.
This makes it easier to read and maintain proofs, because the connections between steps of reasoning are more clear and any change in the number of subgoals while editing the proof will have a localized effect.
:::
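A sketch of the bullet style described above:
```lean
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  constructor
  -- Each bullet focuses on a single subgoal; tactics after the
  -- bullet cannot see or affect the other goals:
  · exact hp
  · exact hq
```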
:::tactic "next"
:::
:::tactic "focus"
:::
### Repetition and Iteration
%%%
tag := "tactic-language-iteration"
%%%
:::tactic "iterate"
:::
:::tactic "repeat"
:::
:::tactic "repeat'"
:::
:::tactic "repeat1'"
:::
## Names and Hygiene
%%%
tag := "tactic-language-hygiene"
%%%
Behind the scenes, tactics generate proof terms.
These proof terms exist in a local context, because assumptions in proof states correspond to local binders in terms.
Uses of assumptions correspond to variable references.
It is very important that the naming of assumptions be predictable; otherwise, small changes to the internal implementation of a tactic could either lead to variable capture or to a broken reference if they cause different names to be selected.
Lean's tactic language is _hygienic_. {index (subterm := "in tactics")}[hygiene]
This means that the tactic language respects lexical scope: names that occur in a tactic refer to the enclosing binding in the source code, rather than being determined by the generated code, and the tactic framework is responsible for maintaining this property.
Variable references in tactic scripts refer either to names that were in scope at the beginning of the script or to bindings that were explicitly introduced as part of the tactics, rather than to the names chosen for use in the proof term behind the scenes.
A consequence of hygienic tactics is that the only way to refer to an assumption is to explicitly name it.
Tactics cannot assign assumption names themselves, but must rather accept names from users; users are correspondingly obligated to provide names for assumptions that they wish to refer to.
When an assumption does not have a user-provided name, it is shown in the proof state with a dagger (`'†', DAGGER 0x2020`).
The dagger indicates that the name is _inaccessible_ and cannot be explicitly referred to.
Hygiene can be disabled by setting the option {option}`tactic.hygienic` to `false`.
This is not recommended, as many tactics rely on the hygiene system to prevent capture and thus do not incur the overhead of careful manual name selection.
{optionDocs tactic.hygienic}
::::example "Tactic hygiene: inaccessible assumptions"
:::tacticExample
```setup
skip
```
When proving that {goal}`∀ (n : Nat), 0 + n = n`, the initial proof state is:
```pre
⊢ ∀ (n : Nat), 0 + n = n
```
The tactic {tacticStep}`intro` results in a proof state with an inaccessible assumption:
```post
n✝ : Nat
⊢ 0 + n✝ = n✝
```
:::
::::
::::example "Tactic hygiene: accessible assumptions"
:::tacticExample
```setup
skip
```
When proving that {goal}`∀ (n : Nat), 0 + n = n`, the initial proof state is:
```pre
⊢ ∀ (n : Nat), 0 + n = n
```
The tactic {tacticStep}`intro n`, with the explicit name `n`, results in a proof state with an accessibly-named assumption:
```post
n : Nat
⊢ 0 + n = n
```
:::
::::
### Accessing Assumptions
%%%
tag := "tactic-language-assumptions"
%%%
Many tactics provide a means of specifying names for the assumptions that they introduce.
For example, {tactic}`intro` and {tactic}`intros` take assumption names as arguments, and {tactic}`induction`'s {keywordOf Lean.Parser.Tactic.induction}`with`-form allows simultaneous case selection, assumption naming, and focusing.
When an assumption does not have a name, one can be assigned using {tactic}`next`, {tactic}`case`, or {tactic}`rename_i`.
:::tactic "rename_i"
:::
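As a sketch, {tactic}`rename_i` assigns names to the most recent inaccessible assumptions, making them available for explicit reference:
```lean
example : ∀ (n : Nat), 0 + n = n := by
  intro            -- introduces an inaccessible assumption `n✝`
  rename_i n       -- the assumption can now be referred to as `n`
  exact Nat.zero_add n
```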
## Assumption Management
%%%
tag := "tactic-language-assumption-management"
%%%
Larger proofs can benefit from management of proof states, removing irrelevant assumptions and making their names easier to understand.
Along with these operators, {tactic}`rename_i` allows inaccessible assumptions to be renamed, and {tactic}`intro`, {tactic}`intros` and {tactic}`rintro` convert goals that are implications or universal quantification into goals with additional assumptions.
:::tactic "rename"
:::
:::tactic "revert"
:::
:::tactic "clear"
:::
## Local Definitions and Proofs
%%%
tag := "tactic-language-local-defs"
%%%
{tactic}`have` and {tactic}`let` both create local assumptions.
Generally speaking, {tactic}`have` should be used when proving an intermediate lemma; {tactic}`let` should be reserved for local definitions.
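A sketch of the distinction:
```lean
-- `have` introduces an intermediate lemma; only its statement,
-- not its proof term, is visible afterwards:
example (n : Nat) : 2 * n = n + n := by
  have h : 2 * n = n + n := Nat.two_mul n
  exact h

-- `let` introduces a local definition whose *value* remains
-- visible to the rest of the proof:
example : Nat := by
  let k := 20
  exact k + k
```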
:::tactic "have"
:::
:::tactic Lean.Parser.Tactic.tacticHave__
:::
:::tactic Lean.Parser.Tactic.tacticHave'
:::
:::tactic Lean.Parser.Tactic.tacticLet__ (show := "let")
:::
:::tactic Lean.Parser.Tactic.letrec (show := "let rec")
:::
:::tactic Lean.Parser.Tactic.tacticLetI__
:::
:::tactic Lean.Parser.Tactic.tacticLet'__
:::
## Configuration
%%%
tag := "tactic-config"
%%%
Many tactics are configurable.{index (subterm := "of tactics")}[configuration]
By convention, tactics share a configuration syntax, described using {syntaxKind}`optConfig`.
The specific options available to each tactic are described in the tactic's documentation.
:::syntax Lean.Parser.Tactic.optConfig -open (title := "Tactic Configuration")
A tactic configuration consists of zero or more {deftech}[configuration items]:
```grammar
$x:configItem*
```
:::
:::syntax Lean.Parser.Tactic.configItem -open (title := "Tactic Configuration Items")
Each configuration item has a name that corresponds to an underlying tactic option.
Boolean options may be enabled or disabled using prefix `+` and `-`:
```grammar
+$x
```
```grammar
-$x
```
Options may be assigned specific values using a syntax similar to that for named function arguments:
```grammar
($x:ident := $t)
```
Finally, the name `config` is reserved; it is used to pass an entire set of options as a data structure.
The specific type expected depends on the tactic.
```grammar
(config := $t)
```
:::
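As an illustration (the particular goal is hypothetical), {tactic}`simp` accepts configuration items using this syntax: `+arith` enables its Boolean `arith` option, and `(maxSteps := 10000)` assigns a specific value to an option:
```lean
example (xs : List Nat) : xs.length + 0 = xs.length := by
  simp +arith (maxSteps := 10000)
```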
## Namespace and Option Management
%%%
tag := "tactic-language-namespaces-options"
%%%
Namespaces and options can be adjusted in tactic scripts using the same syntax as in terms.
:::tactic Lean.Parser.Tactic.set_option (show := "set_option")
:::
:::tactic Lean.Parser.Tactic.open (show := "open")
:::
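For example (an illustrative sketch), both tactics can scope a change to the remainder of a tactic sequence using `in`:
```lean
-- Raise an option's value for the rest of this tactic sequence:
example : 2 ^ 8 = 256 := by
  set_option maxRecDepth 2048 in
  decide

-- Open a namespace so that `add_zero` refers to `Nat.add_zero`:
example (n : Nat) : n + 0 = n := by
  open Nat in
  exact add_zero n
```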
### Controlling Unfolding
%%%
tag := "tactic-language-unfolding"
%%%
By default, only definitions marked reducible are unfolded, except when checking definitional equality.
These operators allow this default to be adjusted for some part of a tactic script.
:::tactic Lean.Parser.Tactic.withReducibleAndInstances
:::
:::tactic Lean.Parser.Tactic.withReducible
:::
:::tactic Lean.Parser.Tactic.withUnfoldingAll
:::
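To sketch the idea (the definition `double` is hypothetical), {tactic}`with_unfolding_all` ensures that all definitions may be unfolded while checking the nested tactic, regardless of their reducibility settings:
```lean
def double (n : Nat) : Nat := n + n

example : double 2 = 4 := by
  -- `double` may be unfolded here even if it were marked `@[irreducible]`
  with_unfolding_all rfl
```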
# Options
%%%
tag := "tactic-language-options"
%%%
These options affect the meaning of tactics.
{optionDocs tactic.customEliminators}
{optionDocs tactic.skipAssignedInstances}
{optionDocs tactic.simp.trace}
{include 0 Manual.Tactics.Reference}
{include 0 Manual.Tactics.Conv}
# Naming Bound Variables
%%%
tag := "bound-variable-name-hints"
%%%
When the {ref "the-simplifier"}[simplifier] or the {tactic}`rw` tactic introduce new binding forms such as function parameters, they select a name for the bound variable based on the one in the statement of the rewrite rule being applied.
This name is made unique if necessary.
In some situations, such as {ref "well-founded-preprocessing"}[preprocessing definitions for termination proofs that use well-founded recursion], the names that appear in termination proof obligations should be the corresponding names written in the original function definition.
The {name}`binderNameHint` {tech}[gadget] can be used to indicate that a bound variable should be named according to the variables bound in some other term.
By convention, the term {lean}`()` is used to indicate that a name should _not_ be taken from the original definition.
{docstring binderNameHint}
{include 0 Manual.Tactics.Custom} |
reference-manual/Manual/Iterators.lean | import VersoManual
import Std.Data.Iterators
import Std.Data.TreeMap
import Manual.Meta
import Manual.Interaction.FormatRepr
open Lean.MessageSeverity
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
set_option pp.rawOnError true
open Std.Iterators Types
open Std (TreeMap Iter IterM IterStep Iterator PlausibleIterStep IteratorLoop IteratorAccess LawfulIteratorLoop)
#doc (Manual) "Iterators" =>
%%%
tag := "iterators"
%%%
An {deftech}_iterator_ provides sequential access to each element of some source of data.
Typical iterators allow the elements in a collection, such as a list, array, or {name Std.TreeMap}`TreeMap` to be accessed one by one, but they can also provide access to data by carrying out some {tech (key := "monad")}[monadic] effect, such as reading files.
Iterators provide a common interface to all of these operations.
Code that is written to the iterator API can be agnostic as to the source of the data.
Each iterator maintains an internal state that enables it to determine the next value.
Because Lean is a pure functional language, consuming an iterator does not invalidate it, but instead copies it with an updated state.
As usual, {tech (key := "reference count")}[reference counting] is used to optimize programs that use values only once into programs that destructively modify values.
To use iterators, import {module}`Std.Data.Iterators`.
:::example "Mixing Collections"
```imports -show
import Std.Data.Iterators
```
```lean -show
open Std
```
Combining a list and an array using {name}`List.zip` or {name}`Array.zip` would ordinarily require converting one of them into the other collection.
Using iterators, they can be processed without conversion:
```lean (name := zip)
def colors : Array String := #["purple", "gray", "blue"]
def codes : List String := ["aa27d1", "a0a0a0", "0000c5"]
#eval colors.iter.zip codes.iter |>.toArray
```
```leanOutput zip
#[("purple", "aa27d1"), ("gray", "a0a0a0"), ("blue", "0000c5")]
```
:::
::::example "Avoiding Intermediate Structures"
```imports -show
import Std.Data.Iterators
```
```lean -show
open Std
```
:::paragraph
In this example, an array of colors and a list of color codes are combined.
The program is organized into three intermediate stages:
1. The names and codes are combined into pairs.
2. The pairs are transformed into readable strings.
3. The strings are combined with newlines.
```lean (name := intermediate)
def colors : Array String := #["purple", "gray", "blue"]
def codes : List String := ["aa27d1", "a0a0a0", "0000c5"]
def go : IO Unit := do
let colorCodes := colors.iter.zip codes.iter
let colorCodes := colorCodes.map fun (name, code) =>
s!"{name} ↦ #{code}"
let colorCodes := colorCodes.fold (init := "") fun x y =>
if x.isEmpty then y else x ++ "\n" ++ y
IO.println colorCodes
#eval go
```
```leanOutput intermediate
purple ↦ #aa27d1
gray ↦ #a0a0a0
blue ↦ #0000c5
```
:::
The intermediate stages of the computation do not allocate new data structures.
Instead, all the steps of the transformation are fused into a single loop, with {name}`Iter.fold` carrying out one step at a time.
In each step, a single color and color code are combined into a pair, rewritten to a string, and added to the result string.
::::
The Lean standard library provides three kinds of iterator operations.
{deftech}_Producers_ create a new iterator from some source of data.
They determine which data is to be returned by an iterator, and how this data is to be computed, but they are not in control of _when_ the computations occur.
{deftech}_Consumers_ use the data in an iterator for some purpose.
Consumers request the iterator's data, and the iterator computes only enough data to satisfy a consumer's requests.
{deftech (key := "iterator combinator")}_Combinators_ are both consumers and producers: they create new iterators from existing iterators.
Examples include {name}`Iter.map` and {name}`Iter.filter`.
The resulting iterators produce data by consuming their underlying iterators, and do not actually iterate over the underlying collection until they themselves are consumed.
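As a small sketch (assuming `Std.Data.Iterators` has been imported), a chain of combinators performs no iteration until a consumer such as {name}`Iter.toList` runs:
```lean
-- `filter` and `map` merely wrap the list iterator; `toList` drives the
-- single fused loop that keeps the odd elements and multiplies them by 10.
#eval ([1, 2, 3, 4, 5].iter.filter (· % 2 == 1) |>.map (· * 10)).toList
```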
:::keepEnv
```lean -show
/-- A collection type. -/
structure Coll : Type u where
/-- The elements of the collection `Coll`. -/
structure Elem : Type u where
/-- Returns an iterator for `c`. -/
def Coll.iter (c : Coll) := (#[].iter : Iter Elem)
```
Each built-in collection for which it makes sense to do so can be iterated over.
In other words, the collection libraries include iterator {tech}[producers].
By convention, a collection type {name}`Coll` provides a function {name}`Coll.iter` that returns an iterator over the elements of a collection.
Examples include {name}`List.iter`, {name}`Array.iter`, and {name}`TreeMap.iter`.
Additionally, other built-in types such as ranges support iteration using the same convention.
:::
# Run-Time Considerations
For many use cases, using iterators can give a performance benefit by avoiding allocating intermediate data structures.
Without iterators, zipping a list with an array requires first converting one of them to the other type, allocating an intermediate structure, and then using the appropriate {name List.zip}`zip` function.
Using iterators, the intermediate structure can be avoided.
When an iterator is consumed, the resulting computation should be thought of as a single loop, even if the iterator itself is built using combinators from a number of underlying iterators.
One step of the loop may carry out multiple steps from the underlying iterators.
In many cases, the Lean compiler can optimize iterator computations, removing the intermediate overhead, but this is not guaranteed.
When profiling shows that significant time is taken by a tight loop that involves multiple sources of data, it can be necessary to inspect the compiler's IR to see whether the iterators' operations were fused.
In particular, if the IR contains many pattern matches over steps, then it can be a sign of a failure to inline or specialize.
If this is the case, it may be necessary to write a tail-recursive function by hand rather than using the higher-level API.
# Iterator Definitions
Iterators may be either monadic or pure, and they may be finite, productive, or potentially infinite.
{deftech (key:="monadic iterator")}_Monadic_ iterators use side effects in some {tech}[monad] to emit each value, and must therefore be used in the monad, while {deftech (key:="pure iterator")}_pure_ iterators do not require side effects.
For example, iterating over all files in a directory requires the {name}`IO` monad.
Pure iterators have type {name}`Iter`, while monadic iterators are represented by {name}`IterM`.
{docstring Iter}
{docstring IterM}
The types {name}`Iter` and {name}`IterM` are merely wrappers around an internal state.
This inner state type is the implicit parameter to the iterator types.
For basic producer iterators, like the one that results from {name}`List.iter`, this type is fairly simple; however, iterators that result from {tech (key := "iterator combinator")}[combinators] use polymorphic state types that can grow large.
Because Lean elaborates the specified return type of a function before elaborating its body, it may not be possible to automatically determine the internal state type of an iterator type returned by a function.
In these cases, it can be helpful to omit the return type from the signature and instead place a type annotation on the definition's body, which allows the specific iterator combinators invoked from the body to be used to determine the state type.
:::example "Iterator State Types"
```imports -show
import Std.Data.Iterators
```
```lean -show
open Std
open Iterators.Types (ListIterator ArrayIterator Map)
```
Writing the internal state type explicitly for list and array iterators is feasible:
```lean
def reds := ["red", "crimson"]
example : @Iter (ListIterator String) String := reds.iter
example : @Iter (ArrayIterator String) String := reds.toArray.iter
```
However, the internal state type of a use of the {name}`Iter.map` combinator is quite complicated:
```lean
example :
@Iter
(Map (ListIterator String) Id Id @id fun x : String =>
pure x.length)
Nat :=
reds.iter.map String.length
```
Omitting the state type leads to an error:
```lean +error (name := noStateType)
example : Iter Nat := reds.iter.map String.length
```
```leanOutput noStateType
don't know how to synthesize implicit argument `α`
@Iter ?m.1 Nat
context:
⊢ Type
Note: Because this declaration's type has been explicitly provided, all parameter types and holes (e.g., `_`) in its header are resolved before its body is processed; information from the declaration body cannot be used to infer what these values should be
```
Rather than writing the state type by hand, it can be convenient to omit the return type and instead provide the annotation around the term:
```lean
example := (reds.iter.map String.length : Iter Nat)
example :=
show Iter Nat from
reds.iter.map String.length
```
:::
The actual process of iteration consists of producing a sequence of iteration steps when requested.
Each step returns an updated iterator with a new internal state along with either a data value (in {name}`IterStep.yield`), an indicator that the caller should request a data value again ({name}`IterStep.skip`), or an indication that iteration is finished ({name}`IterStep.done`).
Without the ability to {name IterStep.skip}`skip`, it would be much more difficult to work with iterator combinators such as {name}`Iter.filter` that do not yield a value for every value yielded by the underlying iterator.
With {name IterStep.skip}`skip`, the implementation of {name Iter.filter}`filter` doesn't need to worry about whether the underlying iterator is {tech (key:="finite iterator")}[finite] in order to be a well-defined function, and reasoning about its finiteness can be carried out in separate proofs.
Additionally, without {name IterStep.skip}`skip`, {name Iter.filter}`filter` would require an inner loop, which is much more difficult for the compiler to inline.
{docstring IterStep}
Steps taken by {name}`Iter` and {name}`IterM` are respectively represented by the types {name}`Iter.Step` and {name}`IterM.Step`.
Both types of step are wrappers around {name}`IterStep` that include {ref "iterator-plausibility"}[additional proofs] that are used to track termination behavior.
{docstring Iter.Step}
{docstring IterM.Step}
Steps are produced from iterators using {name}`Iterator.step`, which is a method of the {name}`Iterator` type class.
{name}`Iterator` is used for both pure and monadic iterators; pure iterators can be completely polymorphic in the choice of monad, which allows callers to instantiate it with {name}`Id`.
{docstring Iterator +allowMissing}
## Plausibility
%%%
tag := "iterator-plausibility"
%%%
In addition to the step function, instances of {name}`Iterator` include a relation {name}`Iterator.IsPlausibleStep`.
This relation exists because most iterators both maintain invariants over their internal state and yield values in a predictable manner.
For example, array iterators track both an array and a current index into it.
Stepping an array iterator results in an iterator over the same underlying array; it yields a value when the index is small enough, or is done otherwise.
The {deftech}_plausible steps_ from an iterator state are those which are related to it via the iterator's implementation of {name Iterator.IsPlausibleStep}`IsPlausibleStep`.
Tracking plausibility at the logical level makes it feasible to reason about termination behavior for monadic iterators.
Both {name}`Iter.Step` and {name}`IterM.Step` are defined in terms of {name}`PlausibleIterStep`; thus, both types can be used with {tech}[leading dot notation] for the {name}`PlausibleIterStep` namespace.
An {name}`Iter.Step` or {name}`IterM.Step` can be analyzed using the three {ref "match_pattern-functions"}[match pattern functions] {name}`PlausibleIterStep.yield`, {name}`PlausibleIterStep.skip`, and {name}`PlausibleIterStep.done`.
These functions pair the information in the underlying {name}`IterStep` with the surrounding proof object.
{docstring PlausibleIterStep}
{docstring PlausibleIterStep.yield}
{docstring PlausibleIterStep.skip}
{docstring PlausibleIterStep.done}
## Finite and Productive Iterators
:::paragraph
Not all iterators are guaranteed to return a finite number of results; it is perfectly sensible to iterate over all of the natural numbers.
Similarly, not all iterators are guaranteed to yield a value or terminate within finitely many steps; iterators may be defined using arbitrary programs.
Thus, Lean divides iterators into three termination classes:
* {deftech (key:="finite iterator")}_Finite_ iterators are guaranteed to finish iterating after a finite number of steps. These iterators have a {name}`Finite` instance.
* {deftech (key:="productive iterator")}_Productive_ iterators are guaranteed to yield a value or terminate in finitely many steps, but they may yield infinitely many values. These iterators have a {name}`Productive` instance.
* All other iterators, whose termination behavior is unknown. These iterators have neither instance.
All finite iterators are necessarily productive.
:::
{docstring Finite}
{docstring Productive}
Lean's standard library provides many functions that consume an iterator.
These consumer functions usually do not make any assumptions about the underlying iterator; in particular, they may run forever for certain iterators.
Sometimes, it is of the utmost importance that a function terminates.
For these cases, the combinator {name}`Iter.ensureTermination` results in an iterator that provides variants of consumers that are guaranteed to terminate.
These variants usually require a proof that the underlying iterator is finite.
{docstring Iter.ensureTermination}
{docstring IterM.ensureTermination}
::::example "Iterating Over `Nat`"
```imports -show
import Std.Data.Iterators
```
```lean -show
open Std
open Iterators (Productive)
```
:::paragraph
To write an iterator that yields each natural number in turn, the first step is to implement its internal state.
This iterator only needs to remember the next natural number:
```lean
structure Nats where
next : Nat
```
:::
:::paragraph
This iterator will only ever yield the next natural number.
Thus, its step function will never return {name IterStep.skip}`skip` or {name IterStep.done}`done`.
Whenever it yields a value, the value will be the internal state's {name Nats.next}`next` field, and the successor iterator's {name Nats.next}`next` field will be one greater.
The {tactic}`grind` tactic suffices to show that the step is indeed plausible:
```lean
instance [Pure m] : Iterator Nats m Nat where
IsPlausibleStep it
| .yield it' n =>
n = it.internalState.next ∧
it'.internalState.next = n + 1
| _ => False
step it :=
let n := it.internalState.next
pure <| .deflate <|
.yield { it with internalState.next := n + 1 } n (by grind)
```
Whenever an iterator is defined, an {name}`IteratorLoop` instance should be provided.
Such instances are required by most iterator consumers, such as {name}`Iter.toList` and {keywordOf Lean.Parser.Term.doFor}`for` loops.
The default implementation can be used as follows:
```lean
instance [Pure m] [Monad n] : IteratorLoop Nats m n :=
.defaultImplementation
```
:::
:::paragraph
```lean -show
section
variable [Pure m] [inst : Iterator Nats m Nat] (it it' : IterM (α := Nats) m Nat)
```
This {name Iterator.step}`step` function is productive because it never returns {name IterStep.skip}`skip`.
Thus, the proof that each chain of {name IterStep.skip}`skip`s has finite length can rely on the fact that when {lean}`it` is a {name}`Nats` iterator, {lean}`Iterator.IsPlausibleStep it (.skip it') = False`:
```lean -show
end
```
```lean
instance [Pure m] : Productive Nats m where
wf := .intro <| fun _ => .intro _ nofun
```
Because there are infinitely many {name}`Nat`s, the iterator is not finite.
:::
:::paragraph
A {name}`Nats` iterator can be created using this function:
```lean
def Nats.iter : Iter (α := Nats) Nat :=
toIterM { next := 0 } Id Nat |>.toIter
```
:::
:::paragraph
One can print all natural numbers by running the following function:
```lean
def f : IO Unit := do
for x in Nats.iter do
IO.println s!"{x}"
```
This function never terminates, printing all natural numbers in increasing order, one after another.
:::
:::paragraph
This iterator is most useful with combinators such as {name}`Iter.zip`:
```lean (name := natzip)
#eval show IO Unit from do
let xs : List String := ["cat", "dog", "pachycephalosaurus"]
for (x, y) in Nats.iter.zip xs.iter do
IO.println s!"{x}: {y}"
```
```leanOutput natzip
0: cat
1: dog
2: pachycephalosaurus
```
:::
:::paragraph
In contrast to the previous example, this loop terminates because `xs.iter` is a finite iterator.
One can make sure that a loop actually terminates by checking that a {name}`Finite` instance exists:
```lean (name := natfin)
#check type_of% (Nats.iter.zip ["cat", "dog"].iter).internalState
#synth Finite (Zip Nats Id (ListIterator String) String) Id
```
```leanOutput natfin
Zip Nats Id (ListIterator String) String : Type
```
```leanOutput natfin
Zip.instFinite₂
```
In contrast, `Nats.iter` has no `Finite` instance because it yields infinitely many values:
```lean (name := natinf) +error
#synth Finite Nats Id
```
```leanOutput natinf
failed to synthesize
Finite Nats Id
Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
Because there are infinitely many {name}`Nat`s, using {name}`Iter.ensureTermination` results in an error:
```lean (name := natterm) +error
#eval show IO Unit from do
for x in Nats.iter.ensureTermination do
IO.println s!"{x}"
```
```leanOutput natterm
failed to synthesize instance for 'for_in%' notation
ForIn (EIO IO.Error) (Iter.Total Nat) ?m.12
```
:::
::::
::::example "Iterating Over Triples"
```imports -show
import Std.Data.Iterators
```
```lean -show
open Std
open Iterators (Finite)
```
The type {name}`Triple` contains three values of the same type:
```lean
structure Triple α where
fst : α
snd : α
thd : α
```
The internal state of an iterator over {name}`Triple` can consist of a triple paired with a current position.
This position may either be one of the fields or an indication that iteration is finished.
```lean
inductive TriplePos where
| fst | snd | thd | done
```
Positions can be used to look up elements:
```lean
def Triple.get? (xs : Triple α) (pos : TriplePos) : Option α :=
match pos with
| .fst => some xs.fst
| .snd => some xs.snd
| .thd => some xs.thd
| _ => none
```
Each field's position has a successor position:
```lean
@[grind, grind cases]
inductive TriplePos.Succ : TriplePos → TriplePos → Prop where
| fst : Succ .fst .snd
| snd : Succ .snd .thd
| thd : Succ .thd .done
```
The iterator itself pairs a triple with the position of the next element:
```lean
structure TripleIterator α where
triple : Triple α
pos : TriplePos
```
Iteration begins at {name TriplePos.fst}`fst`:
```lean
def Triple.iter (xs : Triple α) : Iter (α := TripleIterator α) α :=
IterM.mk {triple := xs, pos := .fst : TripleIterator α} Id α |>.toIter
```
There are two plausible steps: either the iterator's position has a successor, in which case the next iterator is one that points at the same triple with the successor position, or it does not, in which case iteration is complete.
```lean
@[grind]
inductive TripleIterator.IsPlausibleStep :
@IterM (TripleIterator α) m α →
IterStep (@IterM (TripleIterator α) m α) α →
Prop where
| yield :
it.internalState.triple = it'.internalState.triple →
it.internalState.pos.Succ it'.internalState.pos →
it.internalState.triple.get? it.internalState.pos = some out →
IsPlausibleStep it (.yield it' out)
| done :
it.internalState.pos = .done →
IsPlausibleStep it .done
```
The corresponding step function yields the iterator and value described by the relation:
```lean
instance [Pure m] : Iterator (TripleIterator α) m α where
IsPlausibleStep := TripleIterator.IsPlausibleStep
step
| ⟨xs, pos⟩ =>
pure <| .deflate <|
match pos with
| .fst => .yield ⟨xs, .snd⟩ xs.fst ?_
| .snd => .yield ⟨xs, .thd⟩ xs.snd ?_
| .thd => .yield ⟨xs, .done⟩ xs.thd ?_
| .done => .done <| ?_
where finally
all_goals grind [Triple.get?]
```
This iterator can now be converted to an array:
```lean
def abc : Triple Char := ⟨'a', 'b', 'c'⟩
```
```lean (name := abcToArray)
#eval abc.iter.toArray
```
```leanOutput abcToArray
#['a', 'b', 'c']
```
In general, `Iter.toArray` might run forever.
One can prove that `abc.iter` is finite, so that the example above terminates after finitely many steps, by constructing a `Finite (TripleIterator Char) Id` instance.
It's easiest to start at {name}`TriplePos.done` and work backwards toward {name}`TriplePos.fst`, showing that each position in turn has a finite chain of successors:
```lean
@[grind! .]
theorem acc_done [Pure m] :
Acc (IterM.IsPlausibleSuccessorOf (m := m))
⟨{ triple, pos := .done : TripleIterator α}⟩ :=
Acc.intro _ fun
| _, ⟨_, ⟨_, h⟩⟩ => by
cases h <;> grind [IterStep.successor_done]
@[grind! .]
theorem acc_thd [Pure m] :
Acc (IterM.IsPlausibleSuccessorOf (m := m))
⟨{ triple, pos := .thd : TripleIterator α}⟩ :=
Acc.intro _ fun
| ⟨{ triple, pos }⟩, ⟨h, h', h''⟩ => by
cases h'' <;> grind [IterStep.successor_yield]
@[grind! .]
theorem acc_snd [Pure m] :
Acc (IterM.IsPlausibleSuccessorOf (m := m))
⟨{ triple, pos := .snd : TripleIterator α}⟩ :=
Acc.intro _ fun
| ⟨{ triple, pos }⟩, ⟨h, h', h''⟩ => by
cases h'' <;> grind [IterStep.successor_yield]
@[grind! .]
theorem acc_fst [Pure m] :
Acc (IterM.IsPlausibleSuccessorOf (m := m))
⟨{ triple, pos := .fst : TripleIterator α}⟩ :=
Acc.intro _ fun
| ⟨{ triple, pos }⟩, ⟨h, h', h''⟩ => by
cases h'' <;> grind [IterStep.successor_yield]
instance [Pure m] : Finite (TripleIterator α) m where
wf := .intro <| fun
| { internalState := { triple, pos } } => by
cases pos <;> grind
```
To enable use of the iterator in {keywordOf Lean.Parser.Term.doFor}`for` loops, an instance of {name}`IteratorLoop` is needed:
```lean
instance [Monad m] [Monad n] :
IteratorLoop (TripleIterator α) m n :=
.defaultImplementation
```
```lean (name := abc)
#eval show IO Unit from do
for x in abc.iter do
IO.println x
```
```leanOutput abc
a
b
c
```
::::
::::example "Iterators and Effects"
```imports -show
import Std.Data.Iterators
```
```lean -show
open Std
```
One way to iterate over the contents of a file is to read a specified number of bytes from a {name IO.FS.Stream}`Stream` at each step.
When EOF is reached, the iterator can close the file by letting its reference count drop to zero:
```lean
structure FileIterator where
stream? : Option IO.FS.Stream
count : USize := 8192
```
An iterator can be created by opening a file and converting its handle to a stream:
```lean
def iterFile
(path : System.FilePath)
(count : USize := 8192) :
IO (IterM (α := FileIterator) IO ByteArray) := do
let h ← IO.FS.Handle.mk path .read
let stream? := some (IO.FS.Stream.ofHandle h)
return IterM.mk { stream?, count } IO ByteArray
```
For this iterator, a {name IterStep.yield}`yield` is plausible when the file is still open, and {name IterStep.done}`done` is plausible when the file is closed.
The actual step function performs a read and closes the file if no bytes were returned:
```lean
instance : Iterator FileIterator IO ByteArray where
IsPlausibleStep it
| .yield .. =>
it.internalState.stream?.isSome
| .skip .. => False
| .done => it.internalState.stream?.isNone
step it := do
let { stream?, count } := it.internalState
match stream? with
| none => return .deflate <| .done rfl
| some stream =>
let bytes ← stream.read count
let it' :=
{ it with internalState.stream? :=
if bytes.size == 0 then none else some stream
}
return .deflate <| .yield it' bytes (by grind)
```
To use it in loops, an {name}`IteratorLoop` instance will be necessary.
```lean
instance [Monad n] : IteratorLoop FileIterator IO n :=
.defaultImplementation
```
This is enough support code to use the iterator to calculate file sizes:
```lean
def fileSize (name : System.FilePath) : IO Nat := do
let mut size := 0
let f := (← iterFile name)
for bytes in f do
size := size + bytes.size
return size
```
::::
## Accessing Elements
Some iterators support efficient random access.
For example, an array iterator can skip any number of elements in constant time by incrementing the index that it maintains into the array.
{docstring IteratorAccess +allowMissing}
{docstring IterM.nextAtIdx?}
## Loops
{docstring IteratorLoop +allowMissing}
{docstring IteratorLoop.defaultImplementation}
{docstring LawfulIteratorLoop +allowMissing}
## Universe Levels
To make the {tech}[universe levels] of iterators more flexible, a wrapper type {name Std.Shrink}`Shrink` is applied around the result of {name}`Iterator.step`.
This type is currently a placeholder; it exists to reduce the scope of the breaking change when the full implementation becomes available.
{docstring Std.Shrink}
{docstring Std.Shrink.inflate}
{docstring Std.Shrink.deflate}
## Basic Iterators
In addition to the iterators provided by collection types, there are two basic iterators that are not connected to any underlying data structure.
{name}`Iter.empty` finishes iteration immediately, yielding no data, and {name}`Iter.repeat` yields the same element forever.
These iterators are primarily useful as parts of larger iterators built with combinators.
{docstring Iter.empty}
{docstring IterM.empty}
{docstring Iter.repeat}
# Consuming Iterators
:::paragraph
There are three primary ways to consume an iterator:
: Converting it to a sequential data structure
  The functions {name}`Iter.toList`, {name}`Iter.toArray`, and their monadic equivalents {name}`IterM.toList` and {name}`IterM.toArray` construct a list or array that contains the values from the iterator, in order.
Only {tech}[finite iterators] can be converted to sequential data structures.
: {keywordOf Lean.Parser.Term.doFor}`for` loops
A {keywordOf Lean.Parser.Term.doFor}`for` loop can consume an iterator, making each value available in its body.
This requires that the iterator have an instance of {name}`IteratorLoop` for the loop's monad.
: Stepping through iterators
Iterators can provide their values one-by-one, with client code explicitly requesting each new value in turn.
When stepped through, iterators perform only enough computation to yield the requested value.
:::
:::example "Converting Iterators to Lists"
```imports -show
import Std.Data.Iterators
```
```lean -show
open Std
```
In {name}`countdown`, an iterator over a range is transformed into an iterator over strings using {name}`Iter.map`.
This call to {name}`Iter.map` does not result in any iteration over the range until {name}`Iter.toList` is called, at which point each element of the range is produced and transformed into a string.
```lean (name := toListEx)
def countdown : String :=
let steps : Iter String := (0...10).iter.map (s!"{10 - ·}!\n")
String.join steps.toList
#eval IO.println countdown
```
```leanOutput toListEx
10!
9!
8!
7!
6!
5!
4!
3!
2!
1!
```
:::
:::example "Converting Infinite Iterators to Lists"
```imports -show
import Std.Data.Iterators
```
```lean -show
open Std
```
Attempting to construct a list of all the natural numbers from an iterator will produce an endless loop:
```lean (name := toListInf) -keep
def allNats : List Nat :=
let steps : Iter Nat := (0...*).iter
steps.toList
```
The combinator {lean}`Iter.ensureTermination` results in an iterator where non-termination is ruled out.
Consuming these iterators is guaranteed to terminate after finitely many steps; thus, they cannot be used when Lean cannot prove the underlying iterator finite.
```lean (name := toListInf) +error -keep
def allNats : List Nat :=
let steps := (0...*).iter.ensureTermination
steps.toList
```
The resulting error message states that there is no {name}`Finite` instance:
```leanOutput toListInf
failed to synthesize instance of type class
Finite (Rxi.Iterator Nat) Id
Hint: Type class instance resolution failures can be inspected with the `set_option trace.Meta.synthInstance true` command.
```
:::
:::example "Consuming Iterators in Loops"
```imports -show
import Std.Data.Iterators
```
```lean -show
open Std
```
This program creates an iterator of strings from a range, and then consumes the strings in a {keywordOf Lean.Parser.Term.doFor}`for` loop:
```lean (name := iterFor)
def countdown (n : Nat) : IO Unit := do
let steps : Iter String := (0...n).iter.map (s!"{n - ·}!")
for i in steps do
IO.println i
IO.println "Blastoff!"
#eval countdown 5
```
```leanOutput iterFor
5!
4!
3!
2!
1!
Blastoff!
```
:::
:::example "Consuming Iterators Directly"
```imports -show
import Std.Data.Iterators
```
```lean -show
open Std
```
The function {name}`countdown` calls the range iterator's {name Iter.step}`step` function directly, handling each of the three possible cases.
```lean
def countdown (n : Nat) : IO Unit := do
let steps : Iter Nat := (0...n).iter
go steps
where
go iter := do
match iter.step with
| .done _ => pure ()
| .skip iter' _ => go iter'
| .yield iter' i _ => do
IO.println s!"{i}!"
if i == 2 then
IO.println s!"Almost there..."
go iter'
termination_by iter.finitelyManySteps
```
:::
## Stepping Iterators
Iterators are manually stepped using {name}`Iter.step` or {name}`IterM.step`.
{docstring Iter.step}
{docstring IterM.step}
### Termination
When manually stepping a finite iterator, the termination measures {name Iter.finitelyManySteps}`finitelyManySteps` and {name Iter.finitelyManySkips}`finitelyManySkips` can be used to express that each step brings iteration closer to the end.
The proof automation for {ref "well-founded-recursion"}[well-founded recursion] is pre-configured to prove that recursive calls after steps reduce these measures.
:::example "Finitely Many Skips"
```imports -show
import Std.Data.Iterators
```
```lean -show
open Std
open Iterators (Productive)
```
This function returns the first element of an iterator, if there is one, or {name}`none` otherwise.
Because the iterator must be productive, it is guaranteed to return an element after at most a finite number of {name PlausibleIterStep.skip}`skip`s.
This function terminates even for infinite iterators.
```lean
def getFirst {α β} [Iterator α Id β] [Productive α Id]
(it : @Iter α β) : Option β :=
match it.step with
| .done .. => none
| .skip it' .. => getFirst it'
| .yield _ x .. => pure x
termination_by it.finitelyManySkips
```
:::
{docstring Iter.finitelyManySteps}
{docstring IterM.finitelyManySteps}
{docstring IterM.TerminationMeasures.Finite +allowMissing}
{docstring Iter.finitelyManySkips}
{docstring IterM.finitelyManySkips}
{docstring IterM.TerminationMeasures.Productive +allowMissing}
## Consuming Pure Iterators
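These functions reduce a pure iterator to a single value. As a small sketch (using the array producer shown elsewhere in this chapter, and assuming the fold's initial accumulator is a named `init` argument, following the standard library's convention for folds):
```lean
-- Sum the elements of an iterator, and test a predicate against them:
#eval #[1, 2, 3].iter.fold (init := 0) (· + ·)
#eval #[1, 2, 3].iter.any (· > 2)
```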
{docstring Iter.fold}
{docstring Iter.foldM}
{docstring Iter.count}
{docstring Iter.any}
{docstring Iter.anyM}
{docstring Iter.all}
{docstring Iter.allM}
{docstring Iter.find? +allowMissing}
{docstring Iter.findM? +allowMissing}
{docstring Iter.findSome? +allowMissing}
{docstring Iter.findSomeM? +allowMissing}
{docstring Iter.atIdx?}
{docstring Iter.atIdxSlow?}
## Consuming Monadic Iterators
{docstring IterM.drain}
{docstring IterM.fold}
{docstring IterM.foldM}
{docstring IterM.count}
{docstring IterM.any}
{docstring IterM.anyM}
{docstring IterM.all}
{docstring IterM.allM}
{docstring IterM.find? +allowMissing}
{docstring IterM.findM? +allowMissing}
{docstring IterM.findSome? +allowMissing}
{docstring IterM.findSomeM? +allowMissing}
{docstring IterM.atIdx?}
## Collectors
Collectors consume an iterator, returning all of its data in a list or array.
To be collected, an iterator must be finite.
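As a small sketch (using the array producer shown elsewhere in this chapter), a finite iterator can be collected after applying combinators:
```lean
-- Collect the elements of finite iterators into a list and an array:
#eval (#[1, 2, 3].iter.map (· * 10)).toList
#eval (#[1, 2, 3].iter.filter (· % 2 == 1)).toArray
```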
{docstring Iter.toArray}
{docstring IterM.toArray}
{docstring Iter.toList}
{docstring IterM.toList}
{docstring Iter.toListRev}
{docstring IterM.toListRev}
# Iterator Combinators
The documentation for iterator combinators often includes {deftech}_marble diagrams_ that show the relationship between the elements returned by the underlying iterators and the elements returned by the combinator's iterator.
Marble diagrams provide examples, not full specifications.
These diagrams consist of a number of rows.
Each row shows an example of an iterator's output, where `-` indicates a {name PlausibleIterStep.skip}`skip`, a term indicates a value returned with {name PlausibleIterStep.yield}`yield`, and `⊥` indicates the end of iteration.
Spaces indicate that iteration did not occur.
Unbound identifiers in the marble diagram stand for arbitrary values of the iterator's element type.
Vertical alignment in the marble diagram indicates a causal relationship: when two elements are aligned, it means that consuming the iterator in the lower row results in the upper rows being consumed.
In particular, consuming up to the $`n`th column of the lower iterator results in the consumption of the first $`n` columns from the upper iterator.
:::paragraph
A marble diagram for an identity iterator combinator that returns each element from the underlying iterator looks like this:
```
it ---a-----b---c----d⊥
it.id ---a-----b---c----d⊥
```
:::
:::paragraph
A marble diagram for an iterator combinator that duplicates each element of the underlying iterator looks like this:
```
it ---a ---b ---c ---d⊥
it.double ---a-a---b-b---c-c---d-d⊥
```
:::
:::paragraph
The marble diagram for {name}`Iter.filter` shows how some elements of the underlying iterator do not occur in the filtered iterator, but also that stepping the filtered iterator results in a {name PlausibleIterStep.skip}`skip` when the underlying iterator returns a value that doesn't satisfy the predicate:
```
it ---a--b--c--d-e--⊥
it.filter ---a-----c-------⊥
```
The diagram requires an explanatory note:
> (given that `f a = f c = true` and `f b = f d = f e = false`)
:::
:::paragraph
The diagram for {name}`Iter.zip` shows how consuming the combined iterator consumes the underlying iterators:
```
left --a ---b --c
right --x --y --⊥
left.zip right -----(a, x)------(b, y)-----⊥
```
The zipped iterator emits {name PlausibleIterStep.skip}`skip`s so long as `left` does.
When `left` emits `a`, the zipped iterator emits one more {name PlausibleIterStep.skip}`skip`.
After this, the zipped iterator switches to consuming `right`, and it emits {name PlausibleIterStep.skip}`skip`s so long as `right` does.
When `right` emits `x`, the zipped iterator emits the pair `(a, x)`.
This interleaving of `left` and `right` continues until one of them stops, at which point the zipped iterator stops.
Blank spaces in the upper rows of the marble diagram indicate that the iterator is not being consumed at that step.
:::
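As a usage sketch (assuming the array producer shown elsewhere in this chapter), zipping two finite iterators pairs their elements positionally:
```lean
-- Pairs elements positionally until the shorter iterator ends:
#eval (#[1, 2, 3].iter.zip #["a", "b"].iter).toList
```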
## Pure Combinators
{docstring IterM.mk}
{docstring Iter.toIterM}
{docstring Iter.take}
{docstring Iter.takeWhile}
{docstring Iter.toTake}
{docstring Iter.drop}
{docstring Iter.dropWhile}
{docstring Iter.stepSize}
{docstring Iter.map}
{docstring Iter.mapM}
{docstring Iter.mapWithPostcondition}
{docstring Iter.uLift}
{docstring Iter.flatMap}
{docstring Iter.flatMapM}
{docstring Iter.flatMapAfter}
{docstring Iter.flatMapAfterM}
{docstring Iter.filter}
{docstring Iter.filterM}
{docstring Iter.filterWithPostcondition}
{docstring Iter.filterMap}
{docstring Iter.filterMapM}
{docstring Iter.filterMapWithPostcondition}
{docstring Iter.zip}
{docstring Iter.attachWith}
## Monadic Combinators
{docstring IterM.toIter}
{docstring IterM.take}
{docstring IterM.takeWhile}
{docstring IterM.takeWhileM}
{docstring IterM.takeWhileWithPostcondition}
{docstring IterM.toTake}
{docstring IterM.drop}
{docstring IterM.dropWhile}
{docstring IterM.dropWhileM}
{docstring IterM.dropWhileWithPostcondition}
{docstring IterM.stepSize}
{docstring IterM.map}
{docstring IterM.mapM}
{docstring IterM.mapWithPostcondition}
{docstring IterM.uLift}
{docstring IterM.flatMap}
{docstring IterM.flatMapM}
{docstring IterM.flatMapAfter}
{docstring IterM.flatMapAfterM}
{docstring IterM.filter}
{docstring IterM.filterM}
{docstring IterM.filterWithPostcondition}
{docstring IterM.filterMap}
{docstring IterM.filterMapM}
{docstring IterM.filterMapWithPostcondition}
{docstring IterM.zip}
{docstring IterM.attachWith}
# Reasoning About Iterators
## Reasoning About Consumers
The iterator library provides a large number of useful lemmas.
Most theorems about finite iterators can be proven by rewriting the statement to one about lists, using the fact that the correspondence between iterator combinators and corresponding list operations has already been proved.
In practice, many of these theorems are already registered as {tactic}`simp` lemmas.
:::paragraph
The lemmas have a very predictable naming system, and many are in the {tech}[default simp set].
Some of the most important include:
* Consumer lemmas such as {name}`Iter.all_toList`, {name}`Iter.any_toList`, and {name}`Iter.foldl_toList` that introduce lists as a model.
* Simplification lemmas such as {name}`Iter.toList_map` and {name}`Iter.toList_filter` that push the list model “inwards” in the goal.
* Producer lemmas such as {name}`List.toList_iter` and {name}`Array.toList_iter` that replace a producer with a list model, removing iterators from the goal entirely.
The latter two categories are typically automatic with {tactic}`simp`.
:::
:::example "Reasoning via Lists"
```imports -show
import Std.Data.Iterators
```
```lean -show
open Std
```
Every element returned by an iterator that multiplies the numbers consumed from some other iterator by two is even.
To prove this statement, {name}`Iter.all_toList`, {name}`Iter.toList_map`, and {name}`Array.toList_iter` are used to replace the statement about iterators with one about lists, after which {tactic}`simp` discharges the goal:
```lean
example (l : Array Nat) :
(l.iter.map (· * 2)).all (· % 2 = 0) := by
rw [← Iter.all_toList]
rw [Iter.toList_map]
rw [Array.toList_iter]
simp
```
In fact, because most of the needed lemmas are in the {tech}[default simp set], the proof can be quite short:
```lean
example (l : Array Nat) :
(l.iter.map (· * 2)).all (· % 2 = 0) := by
simp [← Iter.all_toList]
```
:::
## Stepwise Reasoning
When there are not enough lemmas to prove a property by rewriting to a list model, it can be necessary to prove things about iterators by reasoning directly about their step functions.
The induction principles in this section are useful for stepwise reasoning.
{docstring Iter.inductSkips}
{docstring IterM.inductSkips}
{docstring Iter.inductSteps}
{docstring IterM.inductSteps}
The standard library also includes lemmas for the stepwise behavior of all the producers and combinators.
Examples include {name}`List.step_iter_nil`, {name}`List.step_iter_cons`, and {name}`IterM.step_map`.
## Monads for Reasoning
{docstring Std.Iterators.PostconditionT}
{docstring Std.Iterators.PostconditionT.run}
{docstring Std.Iterators.PostconditionT.lift}
{docstring Std.Iterators.PostconditionT.liftWithProperty}
{docstring Iter.IsPlausibleIndirectOutput +allowMissing}
{docstring HetT}
{docstring IterM.stepAsHetT}
{docstring HetT.lift}
{docstring HetT.prun}
{docstring HetT.pure}
{docstring HetT.map}
{docstring HetT.pmap}
{docstring HetT.bind}
{docstring HetT.pbind}
## Equivalence
Iterator equivalence is defined in terms of the observable behavior of iterators, rather than their implementations.
In particular, the internal state is ignored.
{docstring Iter.Equiv}
{docstring IterM.Equiv}
import VersoManual
import Manual.Meta
import Manual.Papers
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
set_option pp.rawOnError true
open Lean (Syntax SourceInfo)
#doc (Manual) "Coercions" =>
%%%
tag := "coercions"
%%%
```lean -show
section
open Lean (TSyntax Name)
variable {c1 c2 : Name} {α : Type u}
```
When the Lean elaborator is expecting one type but produces a term with a different type, it attempts to automatically insert a {deftech}_coercion_, which is a specially designated function from the term's type to the expected type.
Coercions make it possible to use specific types to represent data while interacting with APIs that expect less-informative types.
They also allow mathematical developments to follow the usual practice of “punning”, where the same symbol is used to stand for both an algebraic structure and its carrier set, with the precise meaning determined by context.
:::paragraph
Lean's standard library and metaprogramming APIs define many coercions.
Some examples include:
* A {name}`Nat` may be used where an {name}`Int` is expected.
* A {name}`Fin` may be used where a {name}`Nat` is expected.
* An {lean}`α` may be used where an {lean}`Option α` is expected. The coercion wraps the value in {name}`some`.
* An {lean}`α` may be used where a {lean}`Thunk α` is expected. The coercion wraps the term in a function to delay its evaluation.
* When one syntax category {lean}`c1` embeds into another category {lean}`c2`, a coercion from {lean}`TSyntax c1` to {lean}`TSyntax c2` performs any necessary wrapping to construct a valid syntax tree.
Coercions are found using type class {tech}[synthesis].
The set of coercions can be extended by adding further instances of the appropriate type classes.
:::
```lean -show
end
```
:::example "Coercions"
All of the following examples rely on coercions:
```lean
example (n : Nat) : Int := n
example (n : Fin k) : Nat := n
example (x : α) : Option α := x
def th (f : Int → String) (x : Nat) : Thunk String := f x
open Lean in
example (n : Ident) : Term := n
```
In the case of {name}`th`, using {keywordOf Lean.Parser.Command.print}`#print` demonstrates that evaluation of the function application is delayed until the thunk's value is requested:
```lean (name := thunkEval)
#print th
```
```leanOutput thunkEval
def th : (Int → String) → Nat → Thunk String :=
fun f x => { fn := fun x_1 => f ↑x }
```
:::
```lean -show
section
variable {α : Type u}
```
Coercions are not used to resolve {tech}[generalized field notation]: only the inferred type of the term is considered.
However, a {tech}[type ascription] can be used to trigger a coercion to the type that has the desired generalized field.
Coercions are also not used to resolve {name}`OfNat` instances: even though there is a default instance for {lean}`OfNat Nat`, a coercion from {lean}`Nat` to {lean}`α` does not allow natural number literals to be used for {lean}`α`.
```lean -show
end
```
```lean -show
-- Test comment about field notation
/-- error: Unknown constant `Nat.bdiv` -/
#check_msgs in
#check Nat.bdiv
/-- info: Int.bdiv (x : Int) (m : Nat) : Int -/
#check_msgs in
#check Int.bdiv
/--
error: Invalid field `bdiv`: The environment does not contain `Nat.bdiv`, so it is not possible to project the field `bdiv` from an expression
n
of type `Nat`
-/
#check_msgs in
example (n : Nat) := n.bdiv 2
#check_msgs in
example (n : Nat) := (n : Int).bdiv 2
```
:::example "Coercions and Generalized Field Notation"
The name {lean +error}`Nat.bdiv` is not defined, but {lean}`Int.bdiv` exists.
The coercion from {lean}`Nat` to {lean}`Int` is not considered when looking up the field `bdiv`:
```lean +error (name := natBdiv)
example (n : Nat) := n.bdiv 2
```
```leanOutput natBdiv
Invalid field `bdiv`: The environment does not contain `Nat.bdiv`, so it is not possible to project the field `bdiv` from an expression
n
of type `Nat`
```
This is because coercions are only inserted when there is an expected type that differs from an inferred type, and generalized fields are resolved based on the inferred type of the term before the dot.
Coercions can be triggered by adding a type ascription, which additionally causes the inferred type of the entire ascription term to be {lean}`Int`, allowing the function {name}`Int.bdiv` to be found.
```lean
example (n : Nat) := (n : Int).bdiv 2
```
:::
::::example "Coercions and `OfNat`"
{lean}`Bin` is an inductive type that represents binary numbers.
```lean
inductive Bin where
| done
| zero : Bin → Bin
| one : Bin → Bin
def Bin.toString : Bin → String
| .done => ""
| .one b => b.toString ++ "1"
| .zero b => b.toString ++ "0"
instance : ToString Bin where
toString
| .done => "0"
| b => Bin.toString b
```
Natural numbers can be converted to binary numbers by repeatedly applying {lean}`Bin.succ`:
```lean
def Bin.succ (b : Bin) : Bin :=
match b with
| .done => Bin.done.one
| .zero b => .one b
| .one b => .zero b.succ
def Bin.ofNat (n : Nat) : Bin :=
match n with
| 0 => .done
| n + 1 => (Bin.ofNat n).succ
```
```lean -show -keep
--- Internal tests
/-- info: [0, 1, 10, 11, 100, 101, 110, 111, 1000] -/
#check_msgs in
#eval [
Bin.done,
Bin.done.succ,
Bin.done.succ.succ,
Bin.done.succ.succ.succ,
Bin.done.succ.succ.succ.succ,
Bin.done.succ.succ.succ.succ.succ,
Bin.done.succ.succ.succ.succ.succ.succ,
Bin.done.succ.succ.succ.succ.succ.succ.succ,
Bin.done.succ.succ.succ.succ.succ.succ.succ.succ]
```
```lean -show
def Bin.toNat : Bin → Nat
| .done => 0
| .zero b => 2 * b.toNat
| .one b => 2 * b.toNat + 1
def Bin.double : Bin → Bin
| .done => .done
| other => .zero other
theorem Bin.toNat_succ_eq_succ {b : Bin} : b.toNat = n → b.succ.toNat = n + 1 := by
intro hEq
induction b generalizing n <;> simp_all +arith [Bin.toNat, Bin.succ]
theorem Bin.toNat_double_eq_double {b : Bin} : b.toNat = n → b.double.toNat = n * 2 := by
intro hEq
induction b generalizing n <;> simp_all +arith [Bin.toNat, Bin.double]
theorem Bin.ofNat_toNat_eq {n : Nat} : (Bin.ofNat n).toNat = n := by
induction n <;> simp_all [Bin.ofNat, Bin.toNat, Bin.toNat_succ_eq_succ]
```
Even if {lean}`Bin.ofNat` is registered as a coercion, natural number literals cannot be used for {lean}`Bin`:
```lean
attribute [coe] Bin.ofNat
instance : Coe Nat Bin where
coe := Bin.ofNat
```
```lean (name := nineFail) +error
#eval (9 : Bin)
```
```leanOutput nineFail
failed to synthesize instance of type class
OfNat Bin 9
numerals are polymorphic in Lean, but the numeral `9` cannot be used in a context where the expected type is
Bin
due to the absence of the instance above
Hint: Type class instance resolution failures can be inspected with the `set_option trace.Meta.synthInstance true` command.
```
This is because coercions are inserted in response to mismatched types, but a failure to synthesize an {name}`OfNat` instance is not a type mismatch.
The coercion can be used in the definition of the {lean}`OfNat Bin` instance:
```lean (name := ten)
instance : OfNat Bin n where
ofNat := n
#eval (10 : Bin)
```
```leanOutput ten
1010
```
::::
Most new coercions can be defined by declaring an instance of the {name}`Coe` {tech}[type class] and applying the {attr}`coe` attribute to the function that performs the coercion.
To enable more control over coercions or to enable them in more contexts, Lean provides further classes that can be implemented, described in the rest of this chapter.
:::example "Defining Coercions: Decimal Numbers"
Decimal numbers can be defined as arrays of digits.
```lean
structure Decimal where
digits : Array (Fin 10)
```
Adding a coercion allows them to be used in contexts that expect {lean}`Nat`, but also contexts that expect any type that {lean}`Nat` can be coerced to.
```lean
@[coe]
def Decimal.toNat (d : Decimal) : Nat :=
d.digits.foldl (init := 0) fun n d => n * 10 + d.val
instance : Coe Decimal Nat where
coe := Decimal.toNat
```
This can be demonstrated by treating a {lean}`Decimal` as an {lean}`Int` as well as a {lean}`Nat`:
```lean (name := digival)
def twoHundredThirteen : Decimal where
digits := #[2, 1, 3]
def one : Decimal where
digits := #[1]
#eval (one : Int) - (twoHundredThirteen : Nat)
```
```leanOutput digival
-212
```
:::
{docstring Coe}
# Coercion Insertion
%%%
tag := "coercion-insertion"
%%%
:::paragraph
The process of searching for a coercion from one type to another is called {deftech}_coercion insertion_.
Coercion insertion is attempted in the following situations where an error would otherwise occur:
* The expected type for a term is not equal to the type found for the term.
* A type or proposition is expected, but the term's type is not a {tech}[universe].
* A term is applied as though it were a function, but its type is not a function type.
Coercions are also inserted when they are explicitly requested.
Each situation in which coercions may be inserted has a corresponding prefix operator that triggers the appropriate insertion.
:::
```lean -show
section
variable {α : Type u} {α' : Type u'} {β : Type u} [Coe α α'] [Coe α' β] (e : α)
```
Because coercions are inserted automatically, nested {tech}[type ascriptions] provide a way to precisely control the types involved in a coercion.
If {lean}`α` and {lean}`β` are not the same type, {lean}`((e : α) : β)` arranges for {lean}`e` to have type {lean}`α` and then inserts a coercion from {lean}`α` to {lean}`β`.
```lean -show
end
```
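For instance, a nested ascription routes a coercion through an explicitly chosen intermediate type (a small sketch):
```lean
-- The inner ascription coerces the `Fin k` to `Nat`;
-- the outer one then coerces the `Nat` to `Int`:
example (n : Fin k) : Int := ((n : Nat) : Int)
```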
When a coercion is discovered, the instances used to find it are unfolded and removed from the resulting term.
To the extent possible, calls to {name}`Coe.coe` and related functions do not occur in the final term.
This process of unfolding makes terms more readable.
Even more importantly, it means that coercions can control the evaluation of the coerced terms by wrapping them in functions.
:::example "Controlling Evaluation with Coercions"
The structure {name}`Later` represents a term that can be evaluated in the future, by calling the contained function.
```lean
structure Later (α : Type u) where
get : Unit → α
```
A coercion from any value to a later value is performed by creating a function that wraps it.
```lean
instance : CoeTail α (Later α) where
coe x := { get := fun () => x }
```
However, if coercion insertion resulted in an application of {name}`CoeTail.coe`, then this coercion would not have the desired effect at runtime, because the coerced value would be evaluated and then saved in the function's closure.
Because coercion implementations are unfolded, this instance is nonetheless useful.
```lean
def tomorrow : Later String :=
(Nat.fold 10000
(init := "")
(fun _ _ s => s ++ "tomorrow") : String)
```
Printing the resulting definition shows that the computation is inside the function's body:
```lean (name := tomorrow)
#print tomorrow
```
```leanOutput tomorrow
def tomorrow : Later String :=
{ get := fun x => Nat.fold 10000 (fun x x_1 s => s ++ "tomorrow") "" }
```
:::
```lean -show
section
variable {α : Type u}
```
::::example "Duplicate Evaluation in Coercions"
Because the contents of {lean}`Coe` instances are unfolded during coercion insertion, coercions that use their argument more than once should be careful to ensure that evaluation occurs just once.
This can be done by using a helper function that is not part of the instance, or by using {keywordOf Lean.Parser.Term.let}`let` to evaluate the coerced term and then reuse its resulting value.
The structure {name}`Twice` requires that both fields have the same value:
```lean
structure Twice (α : Type u) where
first : α
second : α
first_eq_second : first = second
```
One way to define a coercion from {lean}`α` to {lean}`Twice α` is with a helper function {name}`twice`.
The {attr}`coe` attribute marks it as a coercion so it can be shown correctly in proof goals and error messages.
```lean
@[coe]
def twice (x : α) : Twice α where
first := x
second := x
first_eq_second := rfl
instance : Coe α (Twice α) := ⟨twice⟩
```
When the {name}`Coe` instance is unfolded, the call to {name}`twice` remains, which causes its argument to be evaluated before the body of the function is executed.
As a result, the {keywordOf Lean.Parser.Term.dbgTrace}`dbg_trace` is included in the resulting term just once:
```lean (name := eval1)
#eval ((dbg_trace "hello"; 5 : Nat) : Twice Nat)
```
This is used to demonstrate the effect:
```leanOutput eval1
hello
```
Inlining the helper into the {name}`Coe` instance results in a term that duplicates the {keywordOf Lean.Parser.Term.dbgTrace}`dbg_trace`:
```lean (name := eval2)
instance : Coe α (Twice α) where
coe x := ⟨x, x, rfl⟩
#eval ((dbg_trace "hello"; 5 : Nat) : Twice Nat)
```
```leanOutput eval2
hello
hello
```
Introducing an intermediate name for the result of the evaluation prevents the duplication of {keywordOf Lean.Parser.Term.dbgTrace}`dbg_trace`:
```lean (name := eval3)
instance : Coe α (Twice α) where
coe x := let y := x; ⟨y, y, rfl⟩
#eval ((dbg_trace "hello"; 5 : Nat) : Twice Nat)
```
```leanOutput eval3
hello
```
::::
```lean -show
end
```
# Coercing Between Types
%%%
tag := "ordinary-coercion"
%%%
:::paragraph
Coercions between types are inserted when the Lean elaborator successfully constructs a term, inferring its type, in a context where a term of some other type was expected.
Before signaling an error, the elaborator attempts to insert a coercion from the inferred type to the expected type by synthesizing an instance of {lean}`CoeT`.
There are two ways that this might succeed:
1. There could be a chain of coercions from the inferred type to the expected type through a number of intermediate types.
These chained coercions are selected based on the inferred type and the expected type, but not the term being coerced.
2. There could be a single dependent coercion from the inferred type to the expected type.
Dependent coercions take the term being coerced into account as well as the inferred and expected types, but they cannot be chained.
:::
The simplest way to define a non-dependent coercion is by implementing a {name}`Coe` instance, which is enough to synthesize a {name}`CoeT` instance.
This instance participates in chaining, and may be applied any number of times.
The expected type of the expression is used to drive synthesis of {name}`Coe` instances, rather than the inferred type.
For instances that can be used at most once, or instances in which the inferred type should drive synthesis, one of the other coercion classes may be needed.
:::example "Defining Coercions"
The type {lean}`Even` represents the even natural numbers.
```lean
structure Even where
number : Nat
isEven : number % 2 = 0
```
A coercion allows even numbers to be used where natural numbers are expected.
The {attr}`coe` attribute marks the projection as a coercion so that it can be shown accordingly in proof states and error messages, as described in the {ref "coercion-impl"}[section on implementing coercions].
```lean
attribute [coe] Even.number
instance : Coe Even Nat where
coe := Even.number
```
With this coercion in place, even numbers can be used where natural numbers are expected.
```lean (name := four)
def four : Even := ⟨4, by omega⟩
#eval (four : Nat) + 1
```
```leanOutput four
5
```
Due to coercion chaining, there is also a coercion from {name}`Even` to {name}`Int` formed by chaining the {inst}`Coe Even Nat` instance with the existing coercion from {name}`Nat` to {name}`Int`:
```lean (name := four')
#eval (four : Int) - 5
```
```leanOutput four'
-1
```
:::
{deftech}[Dependent coercions] are needed when the specific term being coerced is required in order to determine whether or how to coerce the term: for example, only decidable propositions can be coerced to {name}`Bool`, so the proposition in question must occur as part of the instance's type so that it can require the {name}`Decidable` instance.
Non-dependent coercions are used whenever all values of the inferred type can be coerced to the target type.
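The coercion from decidable propositions to {lean}`Bool` illustrates this: the proposition itself participates in instance search, so its {name}`Decidable` instance can be required (a small sketch):
```lean
-- The ascription triggers the dependent coercion, which uses
-- the `Decidable` instance for the proposition `2 < 3`:
#eval (2 < 3 : Bool)
```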
:::example "Defining Dependent Coercions"
The string {lean}`"four"` can be coerced into the natural number {lean (type := "Nat")}`4` with this instance declaration:
```lean (name := fourCoe)
instance : CoeDep String "four" Nat where
coe := 4
#eval ("four" : Nat)
```
```leanOutput fourCoe
4
```
Ordinary type errors are produced for other strings:
```lean +error (name := threeCoe)
#eval ("three" : Nat)
```
```leanOutput threeCoe
Type mismatch
"three"
has type
String
but is expected to have type
Nat
```
:::
```lean -show
section
variable {α α' α'' β β' «…» γ: Sort _}
macro "…":term => Lean.mkIdentFromRef `«…»
variable [CoeHead α α'] [CoeOut α' …] [CoeOut … α''] [Coe α'' …] [Coe … β'] [CoeTail β' γ]
```
:::paragraph
Non-dependent coercions may be chained: if there is a coercion from {lean}`α` to {lean}`β` and from {lean}`β` to {lean}`γ`, then there is also a coercion from {lean}`α` to {lean}`γ`.
{index (subterm:="of coercions")}[chain]
The chain should be in the form {name}`CoeHead`$`?`{name}`CoeOut`$`*`{name}`Coe`$`*`{name}`CoeTail`$`?`, which is to say it may consist of:
* An optional instance of {inst}`CoeHead α α'`, followed by
* Zero or more instances of {inst}`CoeOut α' …`, …, {inst}`CoeOut … α''`, followed by
* Zero or more instances of {inst}`Coe α'' …`, …, {inst}`Coe … β'`, followed by
* An optional instance of {inst}`CoeTail β' γ`
Most coercions can be implemented as instances of {name}`Coe`.
{name}`CoeHead`, {name}`CoeOut`, and {name}`CoeTail` are needed in certain special situations.
:::
{name}`CoeHead` and {name}`CoeOut` instances are chained from the inferred type towards the expected type.
In other words, information in the type found for the term is used to resolve a chain of instances.
{name}`Coe` and {name}`CoeTail` instances are chained from the expected type towards the inferred type, so information in the expected type is used to resolve a chain of instances.
If these chains meet in the middle, a coercion has been found.
This is reflected in their type signatures: {name}`CoeHead` and {name}`CoeOut` use {tech}[semi-output parameters] for the coercion's target, while {name}`Coe` and {name}`CoeTail` use {tech}[semi-output parameters] for the coercion's source.
When an instance provides a value for a {tech}[semi-output parameter], the value is used during instance synthesis.
However, if no value is provided, then a value may be assigned by the synthesis algorithm.
Consequently, every semi-output parameter should be assigned a type when an instance is selected.
This means that {name}`CoeOut` should be used when the variables that occur in the coercion's output are a subset of those in its input, and {name}`Coe` should be used when the variables in the input are a subset of those in the output.
:::example "`CoeOut` vs `Coe` instances"
A {name}`Truthy` value is a value paired with an indication of whether it should be considered to be true or false.
A {name}`Decision` is either {name Decision.yes}`yes`, {name Decision.no}`no`, or {name Decision.maybe}`maybe`, with the latter containing further data for consideration.
```lean
structure Truthy (α : Type) where
val : α
isTrue : Bool
inductive Decision (α : Type) where
| yes
| maybe (val : α)
| no
```
{noVale "Made-up word for example purposes"}[“Truthy”] values can be converted to {name}`Bool`s by forgetting the contained value.
{name}`Bool`s can be converted to {name}`Decision`s by discounting the {name Decision.maybe}`maybe` case.
```lean
@[coe]
def Truthy.toBool : Truthy α → Bool :=
Truthy.isTrue
@[coe]
def Decision.ofBool : Bool → Decision α
| true => .yes
| false => .no
```
{name}`Truthy.toBool` must be a {name}`CoeOut` instance, because the target of the coercion contains fewer unknown type variables than the source, while {name}`Decision.ofBool` must be a {name}`Coe` instance, because the source of the coercion contains fewer variables than the target:
```lean
instance : CoeOut (Truthy α) Bool := ⟨Truthy.isTrue⟩
instance : Coe Bool (Decision α) := ⟨Decision.ofBool⟩
```
With these instances, coercion chaining works:
```lean (name := chainTruthiness)
#eval ({ val := 1, isTrue := true : Truthy Nat } : Decision String)
```
```leanOutput chainTruthiness
Decision.yes
```
Attempting to use the wrong class leads to an error:
```lean (name := coeOutErr) +error
instance : Coe (Truthy α) Bool := ⟨Truthy.isTrue⟩
```
```leanOutput coeOutErr
instance does not provide concrete values for (semi-)out-params
Coe (Truthy ?α) Bool
```
:::
```lean -show
end
```
{docstring CoeHead}
{docstring CoeOut}
{docstring CoeTail}
Instances of {name}`CoeT` can be synthesized when an appropriate chain of instances exists, or when there is a single applicable {name}`CoeDep` instance.{margin}[When coercing from {lean}`Nat` to another type, a {name}`NatCast` instance also suffices.]
If both exist, then the {name}`CoeDep` instance takes priority.
{docstring CoeT}
```lean -show
section
variable {α β : Sort _} {e : α} [CoeDep α e β]
```
Dependent coercions may not be chained.
As an alternative to a chain of coercions, a term {lean}`e` of type {lean}`α` can be coerced to {lean}`β` using an instance of {inst}`CoeDep α e β`.
Dependent coercions are useful in situations where only some of the values can be coerced; this mechanism is used to coerce only decidable propositions to {lean}`Bool`.
They are also useful when the value itself occurs in the coercion's target type.
```lean -show
end
```
{docstring CoeDep}
:::example "Dependent Coercion"
```lean -show
universe u
```
A type of non-empty lists can be defined as a pair of a list and a proof that it is not empty.
This type can be coerced to ordinary lists by applying the projection:
```lean
structure NonEmptyList (α : Type u) : Type u where
contents : List α
non_empty : contents ≠ []
instance : Coe (NonEmptyList α) (List α) where
coe xs := xs.contents
```
The coercion works as expected:
```lean
def oneTwoThree : NonEmptyList Nat := ⟨[1, 2, 3], by simp⟩
#eval (oneTwoThree : List Nat) ++ [4]
```
Arbitrary lists cannot, however, be coerced to non-empty lists, because some arbitrarily-chosen lists may indeed be empty:
```lean +error (name := coeFail) -keep
instance : Coe (List α) (NonEmptyList α) where
coe xs := ⟨xs, _⟩
```
```leanOutput coeFail
don't know how to synthesize placeholder for argument `non_empty`
context:
α : Type u_1
xs : List α
⊢ xs ≠ []
```
A dependent coercion can restrict the domain of the coercion to only lists that are not empty:
```lean (name := coeOk)
instance : CoeDep (List α) (x :: xs) (NonEmptyList α) where
coe := ⟨x :: xs, by simp⟩
#eval ([1, 2, 3] : NonEmptyList Nat)
```
```leanOutput coeOk
{ contents := [1, 2, 3], non_empty := _ }
```
Dependent coercion insertion requires that the term to be coerced syntactically matches the term in the instance header.
Lists that are known to be non-empty, but which are not syntactically instances of {lean (type := "{α : Type u} → α → List α → List α")}`(· :: ·)`, cannot be coerced with this instance.
```lean +error (name := coeFailDep)
#check
fun (xs : List Nat) =>
let ys : List Nat := xs ++ [4]
(ys : NonEmptyList Nat)
```
When coercion insertion fails, the original type error is reported:
```leanOutput coeFailDep
Type mismatch
ys
has type
List Nat
but is expected to have type
NonEmptyList Nat
```
:::
:::syntax term (title := "Coercions")
```grammar
↑$_:term
```
Coercions can be explicitly placed using the prefix operator {keywordOf coeNotation}`↑`.
:::
Unlike using nested {tech}[type ascriptions], the {keywordOf coeNotation}`↑` syntax for placing coercions does not require the involved types to be written explicitly.
:::example "Controlling Coercion Insertion"
Instance synthesis and coercion insertion interact with one another.
Synthesizing an instance may make type information known that later triggers coercion insertion.
The specific placement of coercions may matter.
In this definition of {lean}`sub`, the {inst}`Sub Int` instance is synthesized based on the function's return type.
This instance requires that the two parameters also be {lean}`Int`s, but they are {lean}`Nat`s.
Coercions are inserted around each argument to the subtraction operator.
This can be seen in the output of {keywordOf Lean.Parser.Command.print}`#print`.
```lean (name := subThenCoe)
def sub (n k : Nat) : Int := n - k
#print sub
```
```leanOutput subThenCoe
def sub : Nat → Nat → Int :=
fun n k => ↑n - ↑k
```
Placing the coercion operator outside the subtraction causes the elaborator to attempt to infer a type for the subtraction and then insert a coercion.
Because the arguments are both {lean}`Nat`s, the {inst}`Sub Nat` instance is selected, leading to the difference being a {lean}`Nat`.
The difference is then coerced to an {lean}`Int`.
```lean (name:=coeThenSub)
def sub' (n k : Nat) : Int := ↑ (n - k)
#print sub'
```
These two functions are not equivalent because subtraction of natural numbers truncates at zero:
```lean (name := subRes)
#eval sub 4 8
```
```leanOutput subRes
-4
```
```lean (name := subMark)
#eval sub' 4 8
```
```leanOutput subMark
0
```
:::
## Implementing Coercions
%%%
tag := "coercion-impl"
%%%
The appropriate {name}`CoeHead`, {name}`CoeOut`, {name}`Coe`, or {name}`CoeTail` instance is sufficient to cause a desired coercion to be inserted.
However, the implementation of the coercion should be registered as a coercion using the {attr}`coe` attribute.
This causes Lean to display uses of the coercion with the {keywordOf coeNotation}`↑` operator.
It also causes the {tactic}`norm_cast` tactic to treat the coercion as a cast, rather than as an ordinary function.
:::syntax attr (title := "Coercion Declarations")
```grammar
coe
```
{includeDocstring Lean.Attr.coe}
:::
:::example "Implementing Coercions"
The {tech}[enum inductive] type {lean}`Weekday` represents the days of the week:
```lean
inductive Weekday where
| mo | tu | we | th | fr | sa | su
```
As a seven-element type, it contains the same information as {lean}`Fin 7`.
There is a bijection:
```lean
def Weekday.toFin : Weekday → Fin 7
| mo => 0
| tu => 1
| we => 2
| th => 3
| fr => 4
| sa => 5
| su => 6
def Weekday.fromFin : Fin 7 → Weekday
| 0 => mo
| 1 => tu
| 2 => we
| 3 => th
| 4 => fr
| 5 => sa
| 6 => su
```
```lean -show
theorem Weekday.toFin_fromFin_id : Weekday.toFin (Weekday.fromFin n) = n := by
repeat (cases ‹Fin (_ + 1)› using Fin.cases; case zero => rfl)
apply Fin.elim0; assumption
theorem Weekday.fromFin_toFin_id : Weekday.fromFin (Weekday.toFin w) = w := by
cases w <;> rfl
```
Each type can be coerced to the other:
```lean
instance : Coe Weekday (Fin 7) where
coe := Weekday.toFin
instance : Coe (Fin 7) Weekday where
coe := Weekday.fromFin
```
While this works, occurrences of these coercions in Lean's output are not rendered with the coercion operator, which is what Lean users expect.
Instead, the name {lean}`Weekday.fromFin` appears explicitly:
```lean (name := wednesday)
def wednesday : Weekday := (2 : Fin 7)
#print wednesday
```
```leanOutput wednesday
def wednesday : Weekday :=
Weekday.fromFin 2
```
Adding the {attr}`coe` attribute to the definition of a coercion causes it to be displayed using the coercion operator:
```lean (name := friday)
attribute [coe] Weekday.fromFin
attribute [coe] Weekday.toFin
def friday : Weekday := (5 : Fin 7)
#print friday
```
```leanOutput friday
def friday : Weekday :=
↑5
```
:::
## Coercions from Natural Numbers and Integers
%%%
tag := "nat-api-cast"
%%%
The type classes {name}`NatCast` and {name}`IntCast` are special cases of {name}`Coe` that are used to define a coercion from {lean}`Nat` or {lean}`Int` to some other type that is in some sense canonical.
They exist to enable better integration with large libraries of mathematics, such as [Mathlib](https://github.com/leanprover-community/mathlib4), that make heavy use of coercions to map from the natural numbers or integers to other structures (typically rings).
Ideally, the coercion of a natural number or integer into these structures is a {tech}[simp normal form], because it is a convenient way to denote them.
When the coercion application is expected to be the {tech}[simp normal form] for a type, it is important that _all_ such coercions are {tech (key := "definitional equality")}[definitionally equal] in practice.
Otherwise, the {tech}[simp normal form] would need to choose a single chained coercion path, but lemmas could accidentally be stated using a different path.
Because {tactic}`simp`'s internal index is based on the underlying structure of the term, rather than its presentation in the surface syntax, these differences would cause the lemmas to not be applied where expected.
{lean}`NatCast` and {lean}`IntCast` instances, on the other hand, should be defined such that they are always {tech (key := "definitional equality")}[definitionally equal], avoiding the problem.
The Lean standard library's instances are arranged such that {name}`NatCast` or {name}`IntCast` instances are chosen preferentially over chains of coercion instances during coercion insertion.
They can also be used as {name}`CoeOut` instances, allowing a graceful fallback to coercion chaining when needed.
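As a sketch of such an instance (the {lean}`Score` type here is purely illustrative, not part of the original text), a {name}`NatCast` instance provides the {name}`NatCast.natCast` method that maps each natural number into the target type:
```lean
structure Score where
  val : Nat

instance : NatCast Score where
  natCast n := ⟨n⟩

-- The literal is elaborated as a `Nat`, then coerced via `NatCast.natCast`.
#eval ((3 : Nat) : Score).val
```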
{docstring NatCast}
{docstring Nat.cast}
{docstring IntCast}
{docstring Int.cast}
# Coercing to Sorts
%%%
tag := "sort-coercion"
%%%
The Lean elaborator expects types in certain positions without necessarily being able to determine the type's {tech}[universe] ahead of time.
For example, the term following the colon in a definition header might be a proposition or a type.
The ordinary coercion mechanism is not applicable because it requires a specific expected type, and there's no way to express that the expected type could be _any_ universe in the {name}`Coe` class.
When a term is elaborated in a position where a proposition or type is expected, but the inferred type of the elaborated term is not a proposition or type, Lean attempts to recover from the error by synthesizing an instance of {name}`CoeSort`.
If such an instance is found, and the resulting type is itself a type, then the coercion is inserted and unfolded.
Not every situation in which the elaborator expects a universe requires {name}`CoeSort`.
In some cases, a particular universe is available as an expected type.
In these situations, ordinary coercion insertion using {name}`CoeT` is used.
Instances of {lean}`CoeSort` can be used to synthesize instances of {lean}`CoeOut`, so no separate instance is needed to support this use case.
In general, coercions to types should be implemented as {name}`CoeSort`.
{docstring CoeSort}
:::syntax term (title := "Explicit Coercion to Sorts")
```grammar
↥ $_:term
```
Coercions to sorts can be explicitly triggered using the {keyword}`↥` prefix operator.
:::
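For instance, the Lean standard library provides a {name}`CoeSort` instance from {lean}`Bool` to {lean}`Prop`, and the operator can force its insertion (a minimal sketch, not drawn from the surrounding examples):
```lean
-- The ↥ operator forces insertion of the Bool-to-Prop sort coercion,
-- so the type below elaborates to the proposition `true = true`.
example : ↥true := rfl
```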
:::example "Sort Coercions"
A monoid is a type equipped with an associative binary operation and an identity element.
While monoid structure can be defined as a type class, it can also be defined as a structure that “bundles up” the structure with the type:
```lean
structure Monoid where
Carrier : Type u
op : Carrier → Carrier → Carrier
id : Carrier
op_assoc :
∀ (x y z : Carrier), op x (op y z) = op (op x y) z
id_op_identity : ∀ (x : Carrier), op id x = x
op_id_identity : ∀ (x : Carrier), op x id = x
```
The type {lean (type := "Type 1")}`Monoid` does not indicate the carrier:
```lean
def StringMonoid : Monoid where
Carrier := String
op := (· ++ ·)
id := ""
op_assoc := by intros; simp [String.append_assoc]
id_op_identity := by intros; simp
op_id_identity := by intros; simp
```
However, a {name}`CoeSort` instance can be implemented that applies the {name}`Monoid.Carrier` projection when a monoid is used in a position where Lean would expect a type:
```lean
instance : CoeSort Monoid (Type u) where
coe m := m.Carrier
example : StringMonoid := "hello"
```
:::
:::example "Sort Coercions as Ordinary Coercions"
The {tech}[inductive type] {name}`NatOrBool` represents the types {name}`Nat` and {name}`Bool`.
Its values can be coerced to the actual types {name}`Nat` and {name}`Bool`:
```lean
inductive NatOrBool where
| nat | bool
@[coe]
abbrev NatOrBool.asType : NatOrBool → Type
| .nat => Nat
| .bool => Bool
instance : CoeSort NatOrBool Type where
coe := NatOrBool.asType
open NatOrBool
```
The {name}`CoeSort` instance is used when {lean}`nat` occurs to the right of a colon:
```lean
def x : nat := 5
```
When an expected type is available, ordinary coercion insertion is used.
In this case, the {name}`CoeSort` instance is used to synthesize a {lean}`CoeOut NatOrBool Type` instance, which chains with the {inst}`Coe Type (Option Type)` instance to recover from the type error.
```lean
def y : Option Type := bool
```
:::
# Coercing to Function Types
%%%
tag := "fun-coercion"
%%%
Another situation where an expected type is not generally available is the function position in a function application term.
Dependent function types are common; together with {tech}[implicit] parameters, they cause information to flow from the elaboration of one argument to the elaboration of the others.
Attempting to deduce the type required for the function from the expected type of the entire application term and individually-inferred types of arguments will often fail.
In these situations, Lean uses the {name}`CoeFun` type class to coerce a non-function in an application position into a function.
Like {name}`CoeSort`, {name}`CoeFun` instances do not chain with other coercions while inserting a function coercion, but they can be used as {name}`CoeOut` instances during ordinary coercion insertion.
The second parameter to {name}`CoeFun` is an output parameter that determines the resulting function type.
This output parameter is a function that computes the function type from the term being coerced, rather than the function type itself.
Unlike {name}`CoeDep`, the term itself is not taken into account during instance synthesis; this function can, however, be used to create dependently typed coercions in which the resulting function type is determined by the term.
{docstring CoeFun}
:::syntax term (title := "Explicit Coercion to Functions")
```grammar
⇑ $_:term
```
Coercions to functions can be explicitly triggered using the {keyword}`⇑` prefix operator.
:::
```lean -show
section
variable {α : Type u} {β : Type v}
```
:::example "Coercing Decorated Functions to Function Types"
The structure {lean}`NamedFun α β` pairs a function from {lean}`α` to {lean}`β` with a name.
```lean
structure NamedFun (α : Type u) (β : Type v) where
function : α → β
name : String
```
Existing functions can be named:
```lean
def succ : NamedFun Nat Nat where
function n := n + 1
name := "succ"
def asString [ToString α] : NamedFun α String where
function := ToString.toString
name := "asString"
def append : NamedFun (List α) (List α → List α) where
function := (· ++ ·)
name := "append"
```
Named functions can also be composed:
```lean
def NamedFun.comp
(f : NamedFun β γ)
(g : NamedFun α β) :
NamedFun α γ where
function := f.function ∘ g.function
name := f.name ++ " ∘ " ++ g.name
```
Unlike ordinary functions, named functions have a reasonable representation as a string:
```lean
instance : ToString (NamedFun α β) where
toString f := s!"#<{f.name}>"
```
```lean (name := compDemo)
#eval asString.comp succ
```
```leanOutput compDemo
#<asString ∘ succ>
```
A {name}`CoeFun` instance allows them to be applied just like ordinary functions:
```lean
instance : CoeFun (NamedFun α β) (fun _ => α → β) where
coe | ⟨f, _⟩ => f
```
```lean (name := appendDemo)
#eval append [1, 2, 3] [4, 5, 6]
```
```leanOutput appendDemo
[1, 2, 3, 4, 5, 6]
```
:::
```lean -show
end
```
:::example "Dependent Coercion to Functions"
Sometimes, the type of the resulting function depends on the specific value that is being coerced.
A {lean}`Writer` represents a means of appending a representation of some value to a string:
```lean
structure Writer where
Writes : Type u
write : Writes → String → String
def natWriter : Writer where
Writes := Nat
write n out := out ++ toString n
def stringWriter : Writer where
Writes := String
write s out := out ++ s
```
Because the type of the parameter expected by the inner function depends on the {lean}`Writer.Writes` field, the {name}`CoeFun` instance extracts the field:
```lean
instance :
CoeFun Writer (·.Writes → String → String) where
coe w := w.write
```
With this instance, concrete {name}`Writer`s can be used as functions:
```lean (name := writeTwice)
#eval "" |> natWriter (5 : Nat) |> stringWriter " hello"
```
```leanOutput writeTwice
"5 hello"
```
:::
:::example "Coercing to Function Types"
A well-typed interpreter is an interpreter for a programming language that uses indexed families to rule out run-time type errors.
Functions written in the interpreted language can be interpreted as Lean functions, but their underlying source code can also be inspected.
The first step in the well-typed interpreter is to select the subset of Lean types that can be used.
These types are represented by an {tech}[inductive type] of codes {name}`Ty` and a function that maps these codes to actual types.
```lean
inductive Ty where
| nat
| arr (dom cod : Ty)
abbrev Ty.interp : Ty → Type
| .nat => Nat
| .arr t t' => t.interp → t'.interp
```
The language itself is represented by an {tech}[indexed family] over variable contexts and result types.
Variables are represented by [de Bruijn indices](https://en.wikipedia.org/wiki/De_Bruijn_index).
```lean
inductive Tm : List Ty → Ty → Type where
| zero : Tm Γ .nat
| succ (n : Tm Γ .nat) : Tm Γ .nat
| rep (n : Tm Γ .nat)
(start : Tm Γ t)
(f : Tm Γ (.arr .nat (.arr t t))) :
Tm Γ t
| lam (body : Tm (t :: Γ) t') : Tm Γ (.arr t t')
| app (f : Tm Γ (.arr t t')) (arg : Tm Γ t) : Tm Γ t'
| var (i : Fin Γ.length) : Tm Γ Γ[i]
deriving Repr
```
Because the {name}`OfNat` instance for {name}`Fin` requires that the upper bound be non-zero, {name}`Tm.var` can be inconvenient to use with numeric literals.
The helper {name}`Tm.v` can be used to avoid the need for type annotations in these cases.
```lean
def Tm.v
(i : Fin (Γ.length + 1)) :
Tm (t :: Γ) (t :: Γ)[i] :=
.var (Γ := t :: Γ) i
```
A function that adds two natural numbers uses the {name Tm.rep}`rep` operation to apply the successor {name}`Tm.succ` repeatedly.
```lean
def plus : Tm [] (.arr .nat (.arr .nat .nat)) :=
.lam <| .lam <| .rep (.v 1) (.v 0) (.lam (.lam (.succ (.v 0))))
```
Each typing context can be interpreted as a type of run-time environments that provide a value for each variable in the context:
```lean
def Env : List Ty → Type
| [] => Unit
| t :: Γ => t.interp × Env Γ
def Env.empty : Env [] := ()
def Env.extend (ρ : Env Γ) (v : t.interp) : Env (t :: Γ) :=
(v, ρ)
def Env.get (i : Fin Γ.length) (ρ : Env Γ) : Γ[i].interp :=
match Γ, ρ, i with
| _::_, (v, _), ⟨0, _⟩ => v
| _::_, (_, ρ'), ⟨i+1, _⟩ => ρ'.get ⟨i, by simp_all⟩
```
Finally, the interpreter is a recursive function over the term:
```lean
def Tm.interp (ρ : Env Γ) : Tm Γ t → t.interp
| .zero => 0
| .succ n => n.interp ρ + 1
| .rep n start f =>
let f' := f.interp ρ
(n.interp ρ).fold (fun n _ x => f' n x) (start.interp ρ)
| .lam body => fun x => body.interp (ρ.extend x)
| .app f arg => f.interp ρ (arg.interp ρ)
| .var i => ρ.get i
```
Coercing a {name}`Tm` to a function consists of calling the interpreter.
```lean
instance : CoeFun (Tm [] t) (fun _ => t.interp) where
coe f := f.interp .empty
```
Because functions are represented by a first-order inductive type, their code can be inspected:
```lean (name := evalPlus)
#eval plus
```
```leanOutput evalPlus
Tm.lam (Tm.lam (Tm.rep (Tm.var 1) (Tm.var 0) (Tm.lam (Tm.lam (Tm.succ (Tm.var 0))))))
```
At the same time, due to the coercion, they can be applied just like native Lean functions:
```lean (name := eight)
#eval plus 3 5
```
```leanOutput eight
8
```
:::
# Implementation Details
%%%
tag := "coercion-impl-details"
%%%
Only ordinary coercion insertion uses chaining.
Inserting coercions to a {ref "sort-coercion"}[sort] or a {ref "fun-coercion"}[function] uses ordinary instance synthesis.
Similarly, {tech}[dependent coercions] are not chained.
## Unfolding Coercions
%%%
tag := "coercion-unfold-impl"
%%%
The coercion insertion mechanism unfolds applications of coercions, which allows them to control the specific shape of the resulting term.
This is important both to ensure readable proof goals and to control evaluation of the coerced term in compiled code.
Unfolding coercions is controlled by the {attr}`coe_decl` attribute, which is applied to each coercion method (e.g. {name}`Coe.coe`).
This attribute should be considered part of the internals of the coercion mechanism, rather than part of the public coercion API.
## Coercion Chaining
%%%
tag := "coercion-chain-impl"
%%%
:::paragraph
Coercion chaining is implemented through a collection of auxiliary type classes.
Users should not write instances of these classes directly, but knowledge of their structure can be useful when diagnosing the reason why a coercion was not inserted as expected.
The specific rules governing the ordering of instances in the chain (namely, that it should match {name}`CoeHead``?`{name}`CoeOut``*`{name}`Coe``*`{name}`CoeTail``?`) are implemented by the following type classes:
* {name}`CoeTC` is the transitive closure of {name}`Coe` instances.
* {name}`CoeOTC` is the middle of the chain, consisting of the transitive closure of {name}`CoeOut` instances followed by {name}`CoeTC`.
* {name}`CoeHTC` is the start of the chain, consisting of at most one {name}`CoeHead` instance followed by {name}`CoeOTC`.
* {name}`CoeHTCT` is the whole chain, consisting of {name}`CoeHTC` followed by at most one {name}`CoeTail` instance. Alternatively, it might be a {name}`NatCast` instance.
* {name}`CoeT` represents the entire chain: it is either a {name}`CoeHTCT` chain or a single {name}`CoeDep` instance.
:::
:::figure "Auxiliary Classes for Coercions" (tag := "coe-aux-classes")

:::
{docstring CoeHTCT}
{docstring CoeHTC}
{docstring CoeOTC}
{docstring CoeTC} |
reference-manual/Manual/Meta.lean | import Lean.Elab.Command
import Lean.Elab.InfoTree
import Verso
import Verso.Doc.ArgParse
import Verso.Doc.Elab.Monad
import VersoManual
import Verso.Code
import SubVerso.Highlighting
import SubVerso.Examples
import Manual.Meta.Attribute
import Manual.Meta.Basic
import Manual.Meta.CheckMessages
import Manual.Meta.CustomStyle
import Manual.Meta.Env
import Manual.Meta.Example
import Manual.Meta.Figure
import Manual.Meta.LakeCheck
import Manual.Meta.LakeCmd
import Manual.Meta.LakeOpt
import Manual.Meta.LakeToml
import Manual.Meta.Lean
import Manual.Meta.ListBullet
import Manual.Meta.ModuleExample
import Manual.Meta.ParserAlias
import Manual.Meta.Syntax
import Manual.Meta.Tactics
import Manual.Meta.SpliceContents
import Manual.Meta.Markdown
import Manual.Meta.Namespace
import Manual.Meta.SectionNotes
import Manual.Meta.ConfigFile
open Verso ArgParse Doc Elab Genre.Manual Html Code Highlighted.WebAssets
open SubVerso.Highlighting Highlighted
open Lean Elab
open Lean.Elab.Tactic.GuardMsgs
open scoped Lean.Doc.Syntax
namespace Manual
/--
Comments out some content.
-/
@[role_expander comment]
def comment : RoleExpander
| _, _ => pure #[]
@[code_block_expander comment]
def commentCode : CodeBlockExpander
| _, _ => pure #[]
@[directive_expander comment]
def commentDirective : DirectiveExpander
| _, _ => pure #[]
-- These are part commands rather than block expanders so that they can be used in contexts where
-- block content doesn't fit, like right after an include. However, the blocks are still needed
-- for contexts where part commands aren't run.
@[part_command Lean.Doc.Syntax.codeblock, part_command Lean.Doc.Syntax.directive]
def commentBlock : PartCommand
| `(block| ::: $commentId $_* { $_* } )
| `(block| ``` $commentId $_* | $_ ``` ) => do
try
let n ← realizeGlobalConstNoOverloadWithInfo commentId
if n == ``comment then
return ()
else
throwUnsupportedSyntax
catch | _ => throwUnsupportedSyntax
| _ => throwUnsupportedSyntax
def Block.TODO : Block where
name := `Manual.TODO
def Inline.TODO : Inline where
name := `Manual.TODO
@[directive_expander TODO]
def TODO : DirectiveExpander
| args, blocks => do
ArgParse.done.run args
PointOfInterest.save (← getRef) "TODO"
(kind := .null)
(detail? := some "Author's note")
let content ← blocks.mapM elabBlock
pure #[← `(Block.other Block.TODO #[$content,*])]
@[role_expander TODO]
def TODOinline : RoleExpander
| args, inlines => do
ArgParse.done.run args
PointOfInterest.save (← getRef) "TODO"
(kind := .null)
(detail? := some "Author's note")
let content ← inlines.mapM elabInline
pure #[← `(Inline.other Inline.TODO #[$content,*])]
@[block_extension TODO]
def TODO.descr : BlockDescr where
traverse _ _ _ := do
if ← isDraft then pure none else pure (some <| .concat #[])
toTeX := none
extraCss := [r#"
div.TODO {
border: 5px solid red;
position: relative;
}
div.TODO::before {
content: "TODO";
position: absolute;
top: 0;
right: 0;
color: red;
font-size: large;
font-weight: bold;
transform: rotate(-90deg) translate(-2em);
}
"#]
toHtml :=
open Verso.Output.Html in
some <| fun _ goB _ _ content => do
pure {{<div class="TODO">{{← content.mapM goB}}</div>}}
@[inline_extension TODO]
def TODO.inlineDescr : InlineDescr where
traverse _ _ _ := do
if ← isDraft then pure none else pure (some <| .concat #[])
toTeX := none
extraCss := [r#"
span.TODO {
border: 3px solid red;
display: inline;
position: relative;
float: right;
clear: right;
margin-top: 1rem;
width: 15vw;
margin-right: -17vw;
color: red;
font-size: large;
font-weight: bold;
}
"#]
toHtml :=
open Verso.Output.Html in
some <| fun go _ _ content => do
pure {{<span class="TODO">{{← content.mapM go}}</span>}}
def Inline.noVale : Inline where
name := `Manual.noVale
structure NoValeConfig where
why : String
def NoValeConfig.parse [Monad m] [MonadError m] : ArgParse m NoValeConfig :=
NoValeConfig.mk <$> .positional `why .string
/--
Skip the grammar and style check of this text.
The string parameter should contain an explanation of why the text should be skipped.
-/
@[role_expander noVale]
def noVale : RoleExpander
| args, contents => do
let {why := _} ← NoValeConfig.parse.run args
return #[← ``(Inline.other Inline.noVale #[$(← contents.mapM elabInline),*])]
@[inline_extension noVale]
def noVale.descr : InlineDescr where
traverse _ _ _ := pure none
toTeX := none
toHtml := some <| fun go _ _ content => open Verso.Output.Html in do
pure {{<span class="no-vale">{{← content.mapM go}}</span>}}
structure PlannedConfig where
issue : Option Nat
def PlannedConfig.parse [Monad m] [MonadError m] [MonadLiftT CoreM m] : ArgParse m PlannedConfig :=
PlannedConfig.mk <$> ((some <$> .positional `issue .nat) <|> pure none)
def Block.planned : Block where
name := `Manual.planned
@[directive_expander planned]
def planned : DirectiveExpander
| args, blocks => do
let {issue} ← PlannedConfig.parse.run args
PointOfInterest.save (← getRef) s!"Planned content ({issue})" (kind := .event)
let fileMap ← getFileMap
let fileName ← getFileName
let loc : Option (Nat × String) :=
((·.line, System.FilePath.normalize fileName |>.toString) ∘ fileMap.utf8PosToLspPos) <$> (← getRef).getPos?
let content ← blocks.mapM elabBlock
pure #[← `(Block.other {Block.planned with data := ToJson.toJson (α := Option Nat × Option (Nat × String)) ($(quote issue), $(quote loc))} #[$content,*])]
@[block_extension planned]
def planned.descr : BlockDescr where
traverse _ data _ := do
match FromJson.fromJson? (α := Option Nat × Option (Nat × String)) data with
| .ok (none, loc?) | .ok (some 0, loc?) =>
-- TODO add source locations to Verso ASTs upstream, then report here
if let some (line, file) := loc? then
logError s!"Missing issue number for planned content indicator at {file} line {line}"
else
logError s!"Missing issue number for planned content indicator"
| .ok (some n, loc?) =>
if !(← isDraft) then
let loc := loc?.map (fun (l, f) => s!" at {f} line {l}") |>.getD ""
logError s!"Planned content {n} in final rendering{loc}"
else
pure ()
| .error e =>
logError s!"Failed to deserialize issue number from {data} during traversal: {e}"
pure none
toTeX := none
extraCss := [r#"
div.planned {
font-style: italic;
}
div.planned .label {
font-size: large;
text-align: center;
font-family: var(--verso-structure-font-family);
}
"#]
toHtml :=
open Verso.Output.Html in
some <| fun _ goB _ data content => do
let issue : Option Nat ←
match FromJson.fromJson? (α := Option Nat × Option (Nat × String)) data with
| .ok v => pure v.1
| .error e =>
HtmlT.logError s!"Failed to deserialize issue number from {data}: {e}"
pure none
pure {{
<div class="planned">
<div class="label">"Planned Content"</div>
{{← content.mapM goB}}
{{if let some issue := issue then {{
<p>
"Tracked at issue "
<a href=s!"https://github.com/leanprover/reference-manual/issues/{issue}">
s!"#{issue}"
</a>
</p>
}} else .empty
}}
</div>
}}
@[role_expander versionString]
def versionString : RoleExpander
| #[], #[] => do pure #[← ``(Verso.Doc.Inline.code $(quote Lean.versionString))]
| _, _ => throwError "Unexpected arguments"
inductive FFIDocType where
| function
| type
deriving DecidableEq, Repr, ToJson, FromJson
open Syntax in
open FFIDocType in
instance : Quote FFIDocType where
quote
| .function => mkCApp ``function #[]
| .type => mkCApp ``type #[]
def FFIDocType.describe : FFIDocType → String
| .function => "function"
| .type => "type"
structure FFIConfig where
name : String
kind : FFIDocType := .function
open FFIDocType in
def FFIConfig.parse [Monad m] [MonadError m] [MonadLiftT CoreM m] : ArgParse m FFIConfig :=
FFIConfig.mk <$> .positional `name .string <*> ((·.getD .function) <$> .named `kind kind true)
where
kind : ValDesc m FFIDocType := {
description := doc!"{function} or {type}",
signature := .Ident
get := fun
| .name b => do
let b' ← liftM <| realizeGlobalConstNoOverloadWithInfo b
if b' == ``function then pure .function
else if b' == ``type then pure .type
else throwErrorAt b "Expected {``function} or {``type}"
| _ => throwError "Expected identifier"
}
/--
Indicates that an element is a C type.
Currently does nothing other than indicate this fact for future use.
-/
@[role_expander ctype]
def ctype : RoleExpander
| args, contents => do
ArgParse.done.run args
let #[x] := contents
| throwError "Expected exactly one parameter"
let `(inline|code($t)) := x
| throwError "Expected exactly one code item"
pure #[← ``(Inline.code $(quote t.getString))]
def Inline.ckw : Inline where
name := `Manual.ckw
/--
Indicates that an element is a C keyword.
-/
@[role_expander ckw]
def ckw : RoleExpander
| args, contents => do
ArgParse.done.run args
let #[x] := contents
| throwError "Expected exactly one parameter"
let `(inline|code($t)) := x
| throwError "Expected exactly one code item"
pure #[← ``(Inline.code $(quote t.getString))]
@[inline_extension ckw]
def ckw.descr : InlineDescr where
traverse _ _ _ := pure none
toTeX := none
toHtml := some fun goI _ _ content => open Verso.Output.Html in do
return {{<span class="c-keyword">{{← content.mapM goI}}</span>}}
extraCss :=
[".c-keyword code { font-weight: 600; }"]
def Block.ffi : Block where
name := `Manual.ffi
@[directive_expander ffi]
def ffi : DirectiveExpander
| args, blocks => do
let config : FFIConfig ← FFIConfig.parse.run args
if h : blocks.size = 0 then
throwError "Expected at least one block"
else
let firstBlock := blocks[0]
let moreBlocks := blocks.extract 1 blocks.size
let `(block|``` | $contents ```) := firstBlock
| throwErrorAt firstBlock "Expected code block"
let body ← moreBlocks.mapM elabBlock
pure #[← `(Block.other {Block.ffi with data := ToJson.toJson ($(quote config.name), $(quote config.kind), $(quote contents.getString))} #[$body,*])]
@[block_extension ffi]
def ffi.descr : BlockDescr where
traverse id info _ := do
let .ok (name, _declType, _signature) := FromJson.fromJson? (α := String × FFIDocType × String) info
| do logError "Failed to deserialize FFI doc data"; pure none
let path ← (·.path) <$> read
let _ ← Verso.Genre.Manual.externalTag id path name
Index.addEntry id {term := .code name}
pure none
toHtml := some <| fun _goI goB id info contents =>
open Verso.Doc.Html in
open Verso.Output Html in do
let .ok (_name, ffiType, signature) := FromJson.fromJson? (α := String × FFIDocType × String) info
| do Verso.Doc.Html.HtmlT.logError "Failed to deserialize FFI doc data"; pure .empty
let sig : Html := {{<pre>{{signature}}</pre>}}
let xref ← HtmlT.state
let idAttr := xref.htmlId id
return {{
<div class="namedocs" {{idAttr}}>
<span class="label">"FFI " {{ffiType.describe}}</span>
<pre class="signature">{{sig}}</pre>
<div class="text">
{{← contents.mapM goB}}
</div>
</div>
}}
toTeX := some <| fun _goI goB _ _ contents =>
contents.mapM goB -- TODO
open Verso.Output.Html in
inline_extension Inline.multiCode where
traverse _ _ _ := pure none
toHtml := some <| fun goI _id _data contents => do return {{<span class="multi-code">{{← contents.mapM goI}}</span>}}
toTeX := none
@[role]
def multiCode : RoleExpanderOf Unit
| (), contents => do ``(Inline.other Inline.multiCode #[$(← contents.mapM elabInline),*])
structure LeanSectionConfig where
«variables» : Option String
section
variable [Monad m] [MonadError m] [MonadLiftT CoreM m]
instance : FromArgs LeanSectionConfig m where
fromArgs :=
LeanSectionConfig.mk <$> .named `variables .string true
end
section
open Lean Elab Command
-- Taken from BuiltinCommands.lean
private def addScope (isNewNamespace : Bool) (header : String) (newNamespace : Name)
(isNoncomputable : Bool := false) (attrs : List (TSyntax ``Parser.Term.attrInstance) := []) :
CommandElabM Unit := do
modify fun s => { s with
env := s.env.registerNamespace newNamespace,
scopes := { s.scopes.head! with
header := header, currNamespace := newNamespace
isNoncomputable := s.scopes.head!.isNoncomputable || isNoncomputable
attrs := s.scopes.head!.attrs ++ attrs
} :: s.scopes
}
pushScope
if isNewNamespace then
activateScoped newNamespace
end |
reference-manual/Manual/Attributes.lean | import VersoManual
import Manual.Meta
import Lean.Parser.Command
open Manual
open Verso.Genre
open Verso.Genre.Manual
open Verso.Genre.Manual.InlineLean
open Lean.Elab.Tactic.GuardMsgs.WhitespaceMode
set_option pp.rawOnError true
set_option maxRecDepth 3000
set_option linter.unusedVariables false
#doc (Manual) "Attributes" =>
%%%
tag := "attributes"
htmlSplit := .never
%%%
{deftech}_Attributes_ are an extensible set of compile-time annotations on declarations.
They can be added as a {ref "declaration-modifiers"}[declaration modifier] or using the {keywordOf Lean.Parser.Command.attribute}`attribute` command.
Attributes can associate information with declarations in compile-time tables (including {tech}[custom simp sets], {tech}[macros], and {tech}[instances]), impose additional requirements on definitions (e.g. rejecting them if their type is not a type class), or generate additional code.
As with {tech}[macros] and custom {tech}[elaborators] for terms, commands, and tactics, the {tech}[syntax category] `attr` of attributes is designed to be extended, and there is a table that maps each extension to a compile-time program that interprets it.
Attributes are applied as {deftech}_attribute instances_ that pair a scope indicator with an attribute.
These may occur either in attribute lists used as declaration modifiers or in the stand-alone {keywordOf Lean.Parser.Command.attribute}`attribute` command.
:::syntax Lean.Parser.Term.attrInstance (title := "Attribute Instances")
```grammar
$_:attrKind $_:attr
```
An `attrKind` is an optional {ref "scoped-attributes"}[attribute scope] keyword, either {keyword}`local` or {keyword}`scoped`.
These control the visibility of the attribute's effects.
The attribute itself is anything from the extensible {tech}[syntax category] `attr`.
:::
The attribute system is very powerful: attributes can associate arbitrary information with declarations and generate any number of helpers.
This imposes some design trade-offs: storing this information takes space, and retrieving it takes time.
As a result, some attributes can only be applied to a declaration in the module where the declaration is defined.
This allows lookups to be much faster in large projects, because they don't need to examine data for all modules.
Each attribute determines how to store its own metadata and what the appropriate trade-off between flexibility and performance is for a given use case.
# Attributes as Modifiers
Attributes can be added to declarations as a {ref "declaration-modifiers"}[declaration modifier].
They are placed between the documentation comment and the visibility modifiers.
:::syntax Lean.Parser.Term.attributes -open (title := "Attributes")
```grammar
@[$_:attrInstance,*]
```
:::
# The {keyword}`attribute` Command
The {keywordOf Lean.Parser.Command.attribute}`attribute` command can be used to modify a declaration's attributes.
Some example uses include:
* registering a pre-existing declaration as an {tech}[instance] in the local scope by adding {attr}`instance`,
* marking a pre-existing theorem as a simp lemma or an extensionality lemma, using {attr}`simp` or {attr}`ext`, and
* temporarily removing a simp lemma from the default {tech}[simp set].
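As a sketch, using a hypothetical theorem name, an existing theorem can be added to and then removed from the default simp set:
```lean
-- Hypothetical theorem: register it as a simp lemma after the fact,
-- then remove it from the default simp set again.
theorem add_comm' (a b : Nat) : a + b = b + a := Nat.add_comm a b

attribute [simp] add_comm'
attribute [-simp] add_comm'
```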
:::syntax command (title := "Attribute Modification")
The {keywordOf Lean.Parser.Command.attribute}`attribute` command adds or removes attributes from an existing declaration.
The identifier is the name whose attributes are being modified.
```grammar
attribute [$_,*] $_
```
:::
In addition to attribute instances that add an attribute to an existing declaration, some attributes can be removed; this is called {deftech}_erasing_ the attribute.
Attributes can be erased by preceding their name with `-`.
Not all attributes support erasure, however.
:::syntax Lean.Parser.Command.eraseAttr (title := "Erasing Attributes")
Attributes are erased by preceding their name with a `-`.
```grammar
-$_:ident
```
:::
# Scoped Attributes
%%%
tag := "scoped-attributes"
%%%
Many attributes can be applied in a particular scope.
This determines whether the attribute's effect is visible only in the current section scope, in namespaces that open the current namespace, or everywhere.
These scope indications are also used to control {ref "syntax-rules"}[syntax extensions] and {ref "instance-attribute"}[type class instances].
Each attribute is responsible for defining precisely what these terms mean for its particular effect.
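As a sketch, a hypothetical simp lemma can be scoped to its namespace so that it takes effect only where that namespace is open:
```lean
namespace MyLib

-- Hypothetical rule: a simp lemma that is active only where
-- the `MyLib` namespace is open.
@[scoped simp]
theorem add_self_eq (n : Nat) : n + n = 2 * n := by omega

end MyLib

-- Opening the namespace brings the scoped simp rule into effect:
open MyLib in
example (n : Nat) : n + n = 2 * n := by simp
```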
:::syntax attrKind -open (title := "Attribute Scopes") (alias := Lean.Parser.Term.attrKind)
Globally-scoped declarations (the default) are in effect whenever the {tech}[module] in which they're established is transitively imported.
They are indicated by the absence of another scope modifier.
```grammar
```
Locally-scoped declarations are in effect only for the extent of the {tech}[section scope] in which they are established.
```grammar
local
```
Scoped declarations are in effect whenever the {tech (key := "current namespace")}[namespace] in which they are established is opened.
```grammar
scoped
```
::: |
reference-manual/Manual/Simp.lean | import VersoManual
import Lean.Parser.Term
import Manual.Meta
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
set_option pp.rawOnError true
set_option linter.unusedVariables false
#doc (Manual) "The Simplifier" =>
%%%
tag := "the-simplifier"
%%%
The simplifier is one of the most-used features of Lean.
It performs inside-out rewriting of terms based on a database of simplification rules.
The simplifier is highly configurable, and a number of tactics use it in different ways.
# Invoking the Simplifier
%%%
tag := "simp-tactic-naming"
%%%
Lean's simplifier can be invoked in a variety of ways.
The most common patterns are captured in a set of tactics.
The {ref "simp-tactics"}[tactic reference] contains a complete list of simplification tactics.
Simplification tactics all contain `simp` in their name.
Aside from that, they are named according to a system of prefixes and suffixes that describe their functionality:
: `-!` suffix
Sets the {name Lean.Meta.Simp.Config.autoUnfold}`autoUnfold` configuration option to `true`, causing the simplifier to unfold all definitions
: `-?` suffix
Causes the simplifier to keep track of which rules it employed during simplification and suggest a minimal {tech}[simp set] as an edit to the tactic script
: `-_arith` suffix
Enables the use of linear arithmetic simplification rules
: `d-` prefix
Causes the simplifier to simplify only with rewrites that hold definitionally
: `-_all` suffix
Causes the simplifier to repeatedly simplify all assumptions and the conclusion of the goal, taking as many hypotheses into account as possible, until no further simplification is possible
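As a sketch, a few of these names in action:
```lean
-- `simp_all` repeatedly simplifies every hypothesis and the goal:
example (n : Nat) (h : n = 2) : n + 1 = 3 := by simp_all
-- `dsimp` rewrites only with rules that hold definitionally:
example : (fun x : Nat => x) 2 = 2 := by dsimp
```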
There are two further simplification tactics, {tactic}`simpa` and {tactic}`simpa!`, which are used to simultaneously simplify a goal and either a proof term or an assumption before discharging the goal.
This simultaneous simplification makes proofs more robust to changes in the {tech}[simp set].
## Parameters
%%%
tag := "simp-tactic-params"
%%%
The simplification tactics have the following grammar:
:::syntax tactic (title := "Simplification Tactics")
```grammar
simp $_:optConfig $[only]? $[ [ $[$e],* ] ]? $[at $[$h]*]?
```
:::
In other words, an invocation of a simplification tactic takes the following modifiers, in order, all of which are optional:
* A set of {ref "tactic-config"}[configuration options], which should include the fields of {name}`Lean.Meta.Simp.Config` or {name}`Lean.Meta.DSimp.Config`, depending on whether the simplifier being invoked is a version of {tactic}`simp` or a version of {tactic}`dsimp`.
* The {keywordOf Lean.Parser.Tactic.simp}`only` modifier excludes the default simp set, instead beginning with an empty{margin}[Technically, the simp set always includes {name}`eq_self` and {name}`iff_self` in order to discharge reflexive cases.] simp set.
* The lemma list adds or removes lemmas from the simp set. There are three ways to specify lemmas in the lemma list:
* `*`, which adds all assumptions in the proof state to the simp set
* `-` followed by a lemma, which removes the lemma from the simp set
* A lemma specifier, consisting of the following in sequence:
* An optional `↓` or `↑`, which respectively cause the lemma to be applied before or after entering a subterm (`↑` is the default). The simplifier typically simplifies subterms before attempting to simplify parent terms, as simplified arguments often make more rules applicable; `↓` causes the parent term to be simplified with the rule prior to the simplification of subterms.
* An optional `←`, which causes equational lemmas to be used from right to left rather than from left to right.
* A mandatory lemma, which can be a simp set name, a lemma name, or a term. Terms are treated as if they were named lemmas with fresh names.
* A location specifier, preceded by {keywordOf Lean.Parser.Tactic.simp}`at`, which consists of a sequence of locations. Locations may be:
- The name of an assumption, indicating that its type should be simplified
- An asterisk `*`, indicating that all assumptions and the conclusion should be simplified
- A turnstile `⊢`, indicating that the conclusion should be simplified
By default, only the conclusion is simplified.
::::example "Location specifiers for {tactic}`simp`"
:::tacticExample
{goal -show}`∀ (p : Nat → Prop) (x : Nat) (h : p (x + 5 + 2)) (h' : p (3 + x + 9)), p (6 + x + 1)`
```setup
intro p x h h'
```
In this proof state,
```pre
p : Nat → Prop
x : Nat
h : p (x + 5 + 2)
h' : p (3 + x + 9)
⊢ p (6 + x + 1)
```
the tactic {tacticStep}`simp +arith` simplifies only the goal:
```post
p : Nat → Prop
x : Nat
h : p (x + 5 + 2)
h' : p (3 + x + 9)
⊢ p (x + 7)
```
:::
:::tacticExample
{goal -show}`∀ (p : Nat → Prop) (x : Nat) (h : p (x + 5 + 2)) (h' : p (3 + x + 9)), p (6 + x + 1)`
```setup
intro p x h h'
```
```pre -show
p : Nat → Prop
x : Nat
h : p (x + 5 + 2)
h' : p (3 + x + 9)
⊢ p (6 + x + 1)
```
Invoking {tacticStep}`simp +arith at h` yields a goal in which the hypothesis `h` has been simplified:
```post
p : Nat → Prop
x : Nat
h' : p (3 + x + 9)
h : p (x + 7)
⊢ p (6 + x + 1)
```
:::
:::tacticExample
{goal -show}`∀ (p : Nat → Prop) (x : Nat) (h : p (x + 5 + 2)) (h' : p (3 + x + 9)), p (6 + x + 1)`
```setup
intro p x h h'
```
```pre -show
p : Nat → Prop
x : Nat
h : p (x + 5 + 2)
h' : p (3 + x + 9)
⊢ p (6 + x + 1)
```
The conclusion can be additionally simplified by adding `⊢`, that is, {tacticStep}`simp +arith at h ⊢`:
```post
p : Nat → Prop
x : Nat
h' : p (3 + x + 9)
h : p (x + 7)
⊢ p (x + 7)
```
:::
:::tacticExample
{goal -show}`∀ (p : Nat → Prop) (x : Nat) (h : p (x + 5 + 2)) (h' : p (3 + x + 9)), p (6 + x + 1)`
```setup
intro p x h h'
```
```pre -show
p : Nat → Prop
x : Nat
h : p (x + 5 + 2)
h' : p (3 + x + 9)
⊢ p (6 + x + 1)
```
Using {tacticStep}`simp +arith at *` simplifies all assumptions together with the conclusion:
```post
p : Nat → Prop
x : Nat
h : p (x + 7)
h' : p (x + 12)
⊢ p (x + 7)
```
:::
::::
# Rewrite Rules
%%%
tag := "simp-rewrites"
%%%
The simplifier has three kinds of rewrite rules:
: Declarations to unfold
The simplifier will only unfold {tech}[reducible] definitions by default.
However, a rewrite rule can be added for any {tech}[semireducible] or {tech}[irreducible] definition that causes the simplifier to unfold it as well.
When the simplifier is running in definitional mode ({tactic}`dsimp` and its variants), definition unfolding only replaces the defined name with its value; otherwise, it also uses the equational lemmas produced by the equation compiler.
: Equational lemmas
The simplifier can treat equality proofs as rewrite rules, in which case the left side of the equality will be replaced with the right. These equational lemmas may have any number of parameters. The simplifier instantiates parameters to make the left side of the equality match the goal, and it performs a proof search to instantiate any additional parameters.
: Simplification procedures
The simplifier supports simplification procedures, known as {deftech}_simprocs_, that use Lean metaprogramming to perform rewrites that can't be efficiently specified using equations. Lean includes simprocs for the most important operations on built-in types.
:::keepEnv
```lean -show
-- Validate the above description of reducibility
@[irreducible]
def foo (x : α) := x
set_option allowUnsafeReducibility true in
@[semireducible]
def foo' (x : α) := x
@[reducible]
def foo'' (x : α) := x
/--
error: unsolved goals
α✝ : Type u_1
x y : α✝
⊢ x = y ∧ y = x
-/
#check_msgs in
example : foo (x, y) = (y, x) := by
simp [foo]
/-- error: `simp` made no progress -/
#check_msgs in
example : foo (x, y) = (y, x) := by
simp
/--
error: unsolved goals
α✝ : Type u_1
x y : α✝
⊢ x = y ∧ y = x
-/
#check_msgs in
example : foo' (x, y) = (y, x) := by
simp [foo']
/-- error: `simp` made no progress -/
#check_msgs in
example : foo' (x, y) = (y, x) := by
simp
/--
error: unsolved goals
α✝ : Type u_1
x y : α✝
⊢ x = y ∧ y = x
-/
#check_msgs in
example : foo'' (x, y) = (y, x) := by
simp [foo'']
/--
error: unsolved goals
α✝ : Type u_1
x y : α✝
⊢ x = y ∧ y = x
-/
#check_msgs in
example : foo'' (x, y) = (y, x) := by
simp
```
:::
Due to {tech}[propositional extensionality], equational lemmas can rewrite propositions to simpler, logically equivalent propositions.
When the simplifier rewrites a proof goal to {lean}`True`, it automatically closes it.
As a special case of equational lemmas, propositions other than equality can be tagged as rewrite rules.
They are preprocessed into rules that rewrite the proposition to {lean}`True`.
:::::example "Rewriting Propositions"
::::tacticExample
{goal -show}`∀(α β : Type) (w y : α) (x z : β), (w, x) = (y, z)`
```setup
intro α β w y x z
```
When asked to simplify an equality of pairs:
```pre
α β : Type
w y : α
x z : β
⊢ (w, x) = (y, z)
```
{tacticStep}`simp` yields a conjunction of equalities:
```post
α β : Type
w y : α
x z : β
⊢ w = y ∧ x = z
```
The default simp set contains {lean}`Prod.mk.injEq`, which shows the equivalence of the two statements:
```signature
Prod.mk.injEq.{u, v} {α : Type u} {β : Type v} (fst : α) (snd : β) :
∀ (fst_1 : α) (snd_1 : β),
((fst, snd) = (fst_1, snd_1)) = (fst = fst_1 ∧ snd = snd_1)
```
::::
:::::
In addition to rewrite rules, {tactic}`simp` has a number of built-in reduction rules, {ref "simp-config"}[controlled by the `config` parameter].
Even when the simp set is empty, {tactic}`simp` can replace `let`-bound variables with their values, reduce {keywordOf Lean.Parser.Term.match}`match` expressions whose {tech (key := "match discriminant")}[discriminants] are constructor applications, reduce structure projections applied to constructors, or apply lambdas to their arguments.
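These built-in reductions can be observed with an empty simp set:
```lean
-- With `simp only []`, no rewrite rules are available, but the
-- built-in reductions still apply:
example : (fun x : Nat => x) 3 = 3 := by simp only []  -- β-reduction
example : (let y := 3; y) = 3 := by simp only []       -- `let` expansion
example : (3, 4).1 = 3 := by simp only []              -- projection
```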
# Simp Sets
%%%
tag := "simp-sets"
%%%
A collection of rules used by the simplifier is called a {deftech}_simp set_.
A simp set is specified in terms of modifications from a {deftech}_default simp set_.
These modifications can include adding rules, removing rules, or adding a set of rules.
The `only` modifier to the {tactic}`simp` tactic causes it to start with an empty simp set, rather than the default one.
Rules are added to the default simp set using the {attr}`simp` attribute.
:::syntax attr (alias := Lean.Meta.simpExtension) (title := "Registering {keyword}`simp` Lemmas")
The {attr}`simp` attribute adds a declaration to the default simp set.
If the declaration is a definition, the definition is marked for unfolding; if it is a theorem, then the theorem is registered as a rewrite rule.
```grammar
simp
```
```grammar
simp ↑ $p?
```
```grammar
simp ↓ $p?
```
```grammar
simp $p:prio
```
```lean -show
-- Check above claim about default priority
/-- info: 1000 -/
#check_msgs in
#eval eval_prio default
```
:::
{deftech}_Custom simp sets_ are created with {name Lean.Meta.registerSimpAttr}`registerSimpAttr`, which must be run during {tech}[initialization] by placing it in an {keywordOf Lean.Parser.Command.initialize}`initialize` block.
As a side effect, it creates a new attribute with the same interface as {attr}`simp` that adds rules to the custom simp set.
The returned value is a {name Lean.Meta.SimpExtension}`SimpExtension`, which can be used to programmatically access the contents of the custom simp set.
The {tactic}`simp` tactics can be instructed to use the new simp set by including its attribute name in the rule list.
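A minimal sketch, assuming a hypothetical attribute name `my_simp`:
```lean
open Lean Meta in
initialize mySimpExt : SimpExtension ←
  registerSimpAttr `my_simp "simp rules for my hypothetical library"
```
After this initializer has run, `@[my_simp]` can be applied to declarations, and `simp [my_simp]` uses the resulting simp set.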
{docstring Lean.Meta.registerSimpAttr}
{docstring Lean.Meta.SimpExtension}
# Simp Normal Forms
%%%
tag := "simp-normal-forms"
%%%
The default {tech}[simp set] contains all the theorems and simplification procedures marked with the {attr}`simp` attribute.
The {deftech}_simp normal form_ of an expression is the result of applying the default simp set via the {tactic}`simp` tactic until no more rules can be applied.
When an expression is in simp normal form, it is as reduced as possible according to the default simp set, often making it easier to work with in proofs.
The {tactic}`simp` tactic *does not guarantee confluence*, which means that the simp normal form of an expression may depend on the order in which the elements of the default simp set are applied.
The order in which the rules are applied can be changed by assigning priorities when setting the {attr}`simp` attribute.
When designing a Lean library, it's important to think about what the appropriate simp normal form for the various combinations of the library's operators is.
This can serve as a guide when selecting which rules the library should add to the default simp set.
In particular, the right-hand side of simp lemmas should be in simp normal form; this helps ensure that simplification terminates.
Additionally, each concept in the library should be expressed through one simp normal form, even if there are multiple equivalent ways to state it.
If a concept is stated in two different ways in different simp lemmas, then some desired simplifications may not occur because the simplifier does not connect them.
Even though simplification doesn't need to be confluent, striving for confluence is helpful because it makes the library more predictable and tends to reveal missing or poorly chosen simp lemmas.
The default simp set is as much a part of a library's interface as the type signatures of the constants that it exports.
Libraries should not add rules to the default simp set that don't mention at least one constant defined in the library.
Otherwise, importing a library could change the behavior of {tactic}`simp` for some unrelated library.
If a library relies on additional simplification rules for definitions or declarations from other libraries, please create a custom simp set and either instruct users to use it or provide a dedicated tactic.
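As a sketch of these guidelines, a hypothetical library might register a rule whose left-hand side mentions its own constant and whose right-hand side is already in simp normal form:
```lean
structure Wrap (α : Type) where
  val : α

-- The right-hand side `x` cannot be simplified further, and the
-- rule mentions the library's own projection `Wrap.val`.
@[simp] theorem Wrap.val_mk (x : α) : (Wrap.mk x).val = x := rfl
```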
# Terminal vs Non-Terminal Positions
%%%
tag := "terminal-simp"
%%%
To write maintainable proofs, avoid using {tactic}`simp` without {keywordOf Lean.Parser.Tactic.simp}`only` unless it closes the goal.
Such uses of {tactic}`simp` that do not close a goal are referred to as {deftech}_non-terminal simps_.
This is because additions to the default simp set may make {tactic}`simp` more powerful or just cause it to select a different sequence of rewrites and arrive at a different simp normal form.
When {keywordOf Lean.Parser.Tactic.simp}`only` is specified, additional lemmas will not affect that invocation of the tactic.
In practice, terminal uses of {tactic}`simp` are not nearly as likely to be broken by the addition of new simp lemmas, and when they are, it's easier to understand the issue and fix it.
When working in non-terminal positions, {tactic}`simp?` (or one of the other simplification tactics with `?` in their names) can be used to generate an appropriate invocation with {keywordOf Lean.Parser.Tactic.simp}`only`.
Just as {tactic}`apply?` or {tactic}`rw?` suggest the use of relevant lemmas, {tactic}`simp?` suggests an invocation of {tactic}`simp` with a minimal simp set that was used to reach the normal form.
:::example "Using {tactic}`simp?`"
The non-terminal {tactic}`simp?` in this proof suggests a smaller {tactic}`simp` with {keywordOf Lean.Parser.Tactic.simp}`only`:
```lean (name:=simpHuhDemo)
example (xs : Array Unit) : xs.size = 2 → xs = #[(), ()] := by
intros
ext
simp?
assumption
```
The suggested rewrite is:
```leanOutput simpHuhDemo
Try this:
[apply] simp only [List.size_toArray, List.length_cons, List.length_nil, Nat.zero_add, Nat.reduceAdd]
```
which results in the more maintainable proof:
```lean
example (xs : Array Unit) : xs.size = 2 → xs = #[(), ()] := by
intros
ext
simp only [
List.size_toArray, List.length_cons, List.length_nil,
Nat.zero_add, Nat.reduceAdd
]
assumption
```
:::
# Configuring Simplification
%%%
tag := "simp-config"
%%%
{tactic}`simp` is primarily configured via a configuration parameter, passed as a named argument called `config`.
{docstring Lean.Meta.Simp.Config}
{docstring Lean.Meta.Simp.neutralConfig}
{docstring Lean.Meta.DSimp.Config}
## Options
%%%
tag := "simp-options"
%%%
Some global options affect {tactic}`simp`:
{optionDocs simprocs}
{optionDocs tactic.simp.trace}
{optionDocs linter.unnecessarySimpa}
{optionDocs trace.Meta.Tactic.simp.rewrite}
{optionDocs trace.Meta.Tactic.simp.discharge}
# Simplification vs Rewriting
%%%
tag := "simp-vs-rw"
%%%
Both {tactic}`simp` and {tactic}`rw`/{tactic}`rewrite` use equational lemmas to replace parts of terms with equivalent alternatives.
Their intended uses and their rewriting strategies differ, however.
Tactics in the {tactic}`simp` family are primarily used to reformulate a problem in a standardized way, making it more amenable to both human understanding and further automation.
In particular, simplification should never render an otherwise-provable goal impossible.
Tactics in the {tactic}`rw` family are primarily used to apply hand-selected transformations that do not necessarily preserve provability or place terms in standardized forms.
These different emphases are reflected in the differences of behavior between the two families of tactics.
The {tactic}`simp` tactics primarily rewrite from the inside out.
The smallest possible expressions are simplified first so that they can unlock further simplification opportunities for the surrounding expressions.
The {tactic}`rw` tactics select the leftmost outermost subterm that matches the pattern, rewriting it a single time.
Both tactics allow their strategy to be overridden: when adding a lemma to a simp set, the `↓` modifier causes it to be applied prior to the simplification of subterms, and the {name Lean.Meta.Rewrite.Config.occs}`occs` field of {tactic}`rw`'s configuration parameter allows a different occurrence to be selected, either via a whitelist or a blacklist. |
reference-manual/Manual/BasicTypes.lean | import VersoManual
import Manual.Meta
import Manual.BasicTypes.Nat
import Manual.BasicTypes.Int
import Manual.BasicTypes.String
import Manual.BasicTypes.Array
import Manual.BasicTypes.ByteArray
import Manual.BasicTypes.Fin
import Manual.BasicTypes.UInt
import Manual.BasicTypes.BitVec
import Manual.BasicTypes.Float
import Manual.BasicTypes.Char
import Manual.BasicTypes.Option
import Manual.BasicTypes.Empty
import Manual.BasicTypes.Products
import Manual.BasicTypes.Sum
import Manual.BasicTypes.List
import Manual.BasicTypes.Maps
import Manual.BasicTypes.Subtype
import Manual.BasicTypes.Thunk
import Manual.BasicTypes.Range
open Manual.FFIDocType
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
set_option pp.rawOnError true
#doc (Manual) "Basic Types" =>
%%%
tag := "basic-types"
%%%
Lean includes a number of built-in types that are specially supported by the compiler.
Some, such as {lean}`Nat`, additionally have special support in the kernel.
Other types don't have special compiler support _per se_, but rely in important ways on the internal representation of types for performance reasons.
{include 0 Manual.BasicTypes.Nat}
{include 0 Manual.BasicTypes.Int}
{include 0 Manual.BasicTypes.Fin}
{include 0 Manual.BasicTypes.UInt}
{include 0 Manual.BasicTypes.BitVec}
{include 0 Manual.BasicTypes.Float}
{include 0 Manual.BasicTypes.Char}
{include 0 Manual.BasicTypes.String}
# The Unit Type
The unit type is the canonical type with exactly one element, named {name Unit.unit}`unit` and represented by the empty tuple {lean}`()`.
It describes only a single value, which consists of this constructor applied to no arguments.
{lean}`Unit` is analogous to `void` in languages derived from C: even though `void` has no elements that can be named, it represents the return of control flow from a function with no additional information.
In functional programming, {lean}`Unit` is the return type of things that “return nothing”.
Mathematically, this is represented by a single completely uninformative value, as opposed to an empty type such as {lean}`Empty`, which represents unreachable code.
:::leanSection
```lean -show
variable {m : Type → Type} [Monad m] {α : Type}
```
When programming with {ref "monads-and-do"}[monads], {lean}`Unit` is especially useful.
For any type {lean}`α`, {lean}`m α` represents an action that has side effects and returns a value of type {lean}`α`.
The type {lean}`m Unit` represents an action that has some side effects but does not return a value.
:::
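For example, an action that only prints has no meaningful return value:
```lean
-- A sketch: printing returns no interesting data, so the
-- result type is `Unit`.
def greet : IO Unit := do
  IO.println "Hello"
```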
There are two variants of the unit type:
* {lean}`Unit` is a {lean}`Type` that exists in the smallest non-propositional {tech}[universe].
* {lean}`PUnit` is {tech (key := "universe polymorphism")}[universe polymorphic] and can be used in any non-propositional {tech}[universe].
Behind the scenes, {lean}`Unit` is actually defined as {lean}`PUnit.{1}`.
{lean}`Unit` should be preferred over {name}`PUnit` when possible to avoid unnecessary universe parameters.
If in doubt, use {lean}`Unit` until universe errors occur.
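This definitional relationship can be checked directly:
```lean
-- `Unit` unfolds to `PUnit.{1}`, so `rfl` proves the equality.
example : Unit = PUnit.{1} := rfl
```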
{docstring Unit}
{docstring Unit.unit}
{docstring PUnit}
## Definitional Equality
{deftech}_Unit-like types_ are inductive types that have a single constructor which takes no non-proof parameters.
{lean}`PUnit` is one such type.
All elements of unit-like types are {tech (key := "definitional equality")}[definitionally equal] to all other elements.
:::example "Definitional Equality of {lean}`Unit`"
Every term with type {lean}`Unit` is definitionally equal to every other term with type {lean}`Unit`:
```lean
example (e1 e2 : Unit) : e1 = e2 := rfl
```
:::
::::keepEnv
:::example "Definitional Equality of Unit-Like Types"
Both {lean}`CustomUnit` and {lean}`AlsoUnit` are unit-like types, with a single constructor that takes no parameters.
Every pair of terms with either type is definitionally equal.
```lean
inductive CustomUnit where
| customUnit
example (e1 e2 : CustomUnit) : e1 = e2 := rfl
structure AlsoUnit where
example (e1 e2 : AlsoUnit) : e1 = e2 := rfl
```
Types with parameters, such as {lean}`WithParam`, are also unit-like if they have a single constructor that does not take parameters.
```lean
inductive WithParam (n : Nat) where
| mk
example (x y : WithParam 3) : x = y := rfl
```
Constructors with non-proof parameters are not unit-like, even if the parameters are all unit-like types.
```lean
inductive NotUnitLike where
| mk (u : Unit)
```
```lean +error (name := NotUnitLike)
example (e1 e2 : NotUnitLike) : e1 = e2 := rfl
```
```leanOutput NotUnitLike
Type mismatch
rfl
has type
?m.13 = ?m.13
but is expected to have type
e1 = e2
```
Constructors of unit-like types may take parameters that are proofs.
```lean
inductive ProofUnitLike where
| mk : 2 = 2 → ProofUnitLike
example (e1 e2 : ProofUnitLike) : e1 = e2 := rfl
```
:::
::::
{include 0 Manual.BasicTypes.Empty}
# Booleans
{docstring Bool}
The constructors {lean}`Bool.true` and {lean}`Bool.false` are exported from the {lean}`Bool` namespace, so they can be written {lean}`true` and {lean}`false`.
## Run-Time Representation
Because {lean}`Bool` is an {tech}[enum inductive] type, it is represented by a single byte in compiled code.
## Booleans and Propositions
Both {lean}`Bool` and {lean}`Prop` represent notions of truth.
From a purely logical perspective, they are equivalent: {tech}[propositional extensionality] means that there are fundamentally only two propositions, namely {lean}`True` and {lean}`False`.
However, there is an important pragmatic difference: {lean}`Bool` classifies _values_ that can be computed by programs, while {lean}`Prop` classifies statements for which code generation doesn't make sense.
In other words, {lean}`Bool` is the notion of truth and falsehood that's appropriate for programs, while {lean}`Prop` is the notion that's appropriate for mathematics.
Because proofs are erased from compiled programs, keeping {lean}`Bool` and {lean}`Prop` distinct makes it clear which parts of a Lean file are intended for computation.
```lean -show
section BoolProp
axiom b : Bool
/-- info: b = true : Prop -/
#check_msgs in
#check (b : Prop)
example : (true = true) = True := by simp
#check decide
```
A {lean}`Bool` can be used wherever a {lean}`Prop` is expected.
There is a {tech}[coercion] from every {lean}`Bool` {lean}`b` to the proposition {lean}`b = true`.
By {lean}`propext`, {lean}`true = true` is equal to {lean}`True`, and {lean}`false = true` is equal to {lean}`False`.
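A sketch of the coercion in action:
```lean
-- The hypothesis `h : b` is elaborated as `h : b = true`,
-- so it proves the conclusion directly.
example (b : Bool) (h : b) : b = true := h
```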
Not every proposition can be used by programs to make run-time decisions.
Otherwise, a program could branch on whether the Collatz conjecture is true or false!
Many propositions, however, can be checked algorithmically.
These propositions are called {tech}_decidable_ propositions, and have instances of the {lean}`Decidable` type class.
The function {name}`Decidable.decide` converts a proof-carrying {lean}`Decidable` result into a {lean}`Bool`.
This function is also a coercion from decidable propositions to {lean}`Bool`, so {lean}`(2 = 2 : Bool)` evaluates to {lean}`true`.
```lean -show
/-- info: true -/
#check_msgs in
#eval (2 = 2 : Bool)
end BoolProp
```
## Syntax
:::syntax term (title := "Boolean Infix Operators")
The infix operators `&&`, `||`, and `^^` are notations for {lean}`Bool.and`, {lean}`Bool.or`, and {lean}`Bool.xor`, respectively.
```grammar
$_:term && $_:term
```
```grammar
$_:term || $_:term
```
```grammar
$_:term ^^ $_:term
```
:::
:::syntax term (title := "Boolean Negation")
The prefix operator `!` is notation for {lean}`Bool.not`.
```grammar
!$_:term
```
:::
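A sketch of these operators in action:
```lean
#eval true && false  -- false
#eval true || false  -- true
#eval true ^^ false  -- true
#eval !true          -- false
```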
## API Reference
### Logical Operations
```lean -show
section ShortCircuit
axiom BIG_EXPENSIVE_COMPUTATION : Bool
```
The functions {name}`cond`, {name Bool.and}`and`, and {name Bool.or}`or` are short-circuiting.
In other words, {lean}`false && BIG_EXPENSIVE_COMPUTATION` does not need to execute {lean}`BIG_EXPENSIVE_COMPUTATION` before returning `false`.
These functions are defined using the {attr}`macro_inline` attribute, which causes the compiler to replace calls to them with their definitions while generating code, and the definitions use nested pattern matching to achieve the short-circuiting behavior.
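The short-circuiting behavior can be observed with `dbg_trace`, a sketch assuming the usual compilation of {attr}`macro_inline` definitions:
```lean
def expensive : Bool :=
  dbg_trace "expensive was evaluated!"
  true

-- Because `&&` short-circuits on `false`, `expensive` is never
-- evaluated here, so no trace message is printed.
#eval false && expensive
```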
```lean -show
end ShortCircuit
```
{docstring cond}
{docstring Bool.dcond}
{docstring Bool.not}
{docstring Bool.and}
{docstring Bool.or}
{docstring Bool.xor}
### Comparisons
Most comparisons on Booleans should be performed using the {inst}`DecidableEq Bool`, {inst}`LT Bool`, {inst}`LE Bool` instances.
{docstring Bool.decEq}
### Conversions
{docstring Bool.toISize}
{docstring Bool.toUInt8}
{docstring Bool.toUInt16}
{docstring Bool.toUInt32}
{docstring Bool.toUInt64}
{docstring Bool.toUSize}
{docstring Bool.toInt8}
{docstring Bool.toInt16}
{docstring Bool.toInt32}
{docstring Bool.toInt64}
{docstring Bool.toNat}
{docstring Bool.toInt}
{include 0 Manual.BasicTypes.Option}
{include 0 Manual.BasicTypes.Products}
{include 0 Manual.BasicTypes.Sum}
{include 0 Manual.BasicTypes.List}
{include 0 Manual.BasicTypes.Array}
{include 0 Manual.BasicTypes.ByteArray}
{include 0 Manual.BasicTypes.Range}
{include 0 Manual.BasicTypes.Maps}
{include 0 Manual.BasicTypes.Subtype}
{include 0 Manual.BasicTypes.Thunk} |
reference-manual/Manual/Defs.lean | import VersoManual
import Manual.Meta
import Manual.RecursiveDefs
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
open Lean.Elab.Tactic.GuardMsgs.WhitespaceMode
set_option maxRecDepth 1500
#doc (Manual) "Definitions" =>
%%%
tag := "definitions"
%%%
The following commands in Lean are definition-like: {TODO}[Render commands as their name (a la tactic index)]
* {keyword}`def`
* {keyword}`abbrev`
* {keyword}`example`
* {keyword}`theorem`
* {keyword}`opaque`
All of these commands cause Lean to {tech (key := "elaborator") -normalize}[elaborate] a term based on a {tech}[signature].
With the exception of {keywordOf Lean.Parser.Command.example}`example`, which discards the result, the resulting expression in Lean's core language is saved for future use in the environment.
The {keywordOf Lean.Parser.Command.declaration}`instance` command is described in the {ref "instance-declarations"}[section on instance declarations].
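As a sketch, using hypothetical names:
```lean
def five : Nat := 5                      -- saved in the environment
theorem five_eq_five : five = 5 := rfl   -- also saved
example : five + five = 10 := rfl        -- elaborated, then discarded
```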
# Modifiers
%%%
tag := "declaration-modifiers"
%%%
Declarations accept a consistent set of {deftech}_modifiers_, all of which are optional.
Modifiers change some aspect of the declaration's interpretation; for example, they can add documentation or change its scope.
The order of modifiers is fixed, but not every kind of declaration accepts every kind of modifier.
:::syntax declModifiers -open (alias:=Lean.Parser.Command.declModifiers) (title := "Declaration Modifiers")
Modifiers consist of the following, in order, all of which are optional:
1. a documentation comment,
2. a list of {tech}[attributes],
3. namespace control, specifying whether the resulting name is {tech}[private] or {tech}[protected],
4. the {keyword}`noncomputable` keyword, which exempts a definition from compilation,
5. the {keyword}`unsafe` keyword, and
6. a recursion modifier, either {keyword}`partial` or {keyword}`nonrec`, which respectively disables termination proofs or disallows recursion entirely.
```grammar
$[$_:docComment]?
$[$_:attributes]?
$[$_]?
$[noncomputable]?
$[unsafe]?
$[$_]?
```
:::
{deftech}_Documentation comments_ are used to provide in-source API documentation for the declaration that they modify.
Documentation comments are not, in fact, comments: it is a syntax error to put a documentation comment in a position where it is not processed as documentation.
They also occur in positions where some kind of text is required, but string escaping would be onerous, such as the desired messages on the {keywordOf Lean.guardMsgsCmd}`#guard_msgs` command.
:::syntax docComment -open (title := "Documentation Comments")
Documentation comments are like ordinary block comments, but they begin with the sequence `/--` rather than `/-`; just like ordinary comments, they are terminated with `-/`.
```grammar
/--
...
-/
```
:::
Attributes are an extensible collection of modifiers that associate additional information with declarations.
They are described in a {ref "attributes"}[dedicated section].
If a declaration is marked {deftech (key := "private")}[{keyword}`private`], then it is not accessible outside the module in which it is defined.
If it is {keyword}`protected`, then opening its namespace does not bring it into scope.
Functions marked {keyword}`noncomputable` are not compiled and cannot be executed.
Functions must be marked noncomputable if they use noncomputable reasoning principles, such as the axiom of choice or excluded middle, to produce data that is relevant to the answer that they return, or if they use features of Lean that are exempted from code generation for efficiency reasons, such as {tech}[recursors].
Noncomputable functions are very useful for specification and reasoning, even if they cannot be compiled and executed.
The {keyword}`unsafe` marker exempts a definition from kernel checking and enables it to access features that may undermine Lean's guarantees.
It should be used with great care, and only with a thorough understanding of Lean's internals.
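As a sketch of how several modifiers are used (the names `double`, `Nat.half`, and `someNat` here are hypothetical):
```lean
/-- A doubling helper that is private to this module. -/
private def double (n : Nat) : Nat := n + n

/-- Opening the `Nat` namespace does not bring `half` into scope. -/
protected def Nat.half (n : Nat) : Nat := n / 2

-- Uses `Classical.choice` to produce data, so it cannot be compiled.
noncomputable def someNat : Nat := Classical.choice ⟨0⟩
```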
# Headers and Signatures
%%%
tag := "signature-syntax"
%%%
The {deftech}[_header_] of a definition or declaration consists of the constant being declared or defined, if relevant, together with its signature.
The {deftech}_signature_ of a constant specifies how it can be used.
The signature specifies more than just the type: it also includes information such as {tech (key := "universe parameter")}[universe level parameters] and the default values of its optional parameters.
In Lean, signatures are written in a consistent format in different kinds of declarations.
## Declaration Names
Most headers begin with a {deftech}_declaration name_, which is followed by the signature proper: its parameters and the resulting type.
A declaration name is a name that may optionally include universe parameters.
:::syntax declId -open (title := "Declaration Names")
Declaration names without universe parameters consist of an identifier:
```grammar
$_:ident
```
Declaration names with universe parameters consist of an identifier followed by a period and one or more universe parameter names in braces:
```grammar
$_.{$_, $_,*}
```
These universe parameter names are binding occurrences.
:::
Examples do not include declaration names, and names are optional for instance declarations.
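For instance (a sketch, with a hypothetical name), a declaration name may bind a universe parameter explicitly:
```lean
-- The declaration name `listOfTypes.{u}` binds the universe parameter `u`.
def listOfTypes.{u} : List (Type u) := []
```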
## Parameters and Types
%%%
tag := "parameter-syntax"
%%%
After the name, if present, is the header's signature.
The signature specifies the declaration's parameters and type.
:::syntax declSig -open (title := "Declaration Signatures")
A signature consists of zero or more parameters, followed by a colon and a type.
```grammar
$_* : $_
```
:::
:::syntax optDeclSig -open (title := "Optional Signatures")
Signatures are often optional.
In these cases, parameters may be supplied even if the type is omitted.
```grammar
$_* $[: $_]?
```
:::
Parameters may have three forms:
* An identifier, which names a parameter but does not provide a type.
These parameters' types must be inferred during elaboration.
* An underscore (`_`), which indicates a parameter that is not accessible by name in the local scope.
These parameters' types must also be inferred during elaboration.
* A bracketed binder, which may specify every aspect of one or more parameters, including their names, their types, default values, and whether they are explicit, implicit, strictly implicit, or instance-implicit.
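The three forms may be mixed in a single signature, as in this hypothetical sketch:
```lean
-- `n` is a bare identifier whose type is inferred from the body,
-- `_` is an anonymous parameter, and `(k : Nat)` is a bracketed binder.
def addIgnoring n (_ : String) (k : Nat) : Nat := n + k
```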
## Bracketed Parameter Bindings
%%%
tag := "bracketed-parameter-syntax"
%%%
Parameters other than identifiers or underscores are collectively referred to as {deftech}_bracketed binders_ because every syntactic form for specifying them has some kind of brackets, braces, or parentheses.
All bracketed binders specify the type of a parameter, and most include parameter names.
The name is optional for instance implicit parameters.
Using an underscore (`_`) instead of a parameter name indicates an anonymous parameter.
:::syntax bracketedBinder -open (title := "Explicit Parameters")
Parenthesized parameters indicate explicit parameters.
If more than one identifier or underscore is provided, then all of them become parameters with the same type.
```grammar
($x $x* : $t)
```
:::
:::syntax bracketedBinder (title := "Optional and Automatic Parameters")
Parenthesized parameters with a `:=` assign default values to parameters.
Parameters with default values are called {deftech}_optional parameters_.
At a call site, if the parameter is not provided, then the provided term is used to fill it in.
Prior parameters in the signature are in scope for the default value, and their values at the call site are substituted into the default value term.
If a {ref "tactics"}[tactic script] is provided, then the tactics are executed at the call site to synthesize a parameter value; parameters that are filled in via tactics are called {deftech}_automatic parameters_.
```grammar
($x $x* : $t := $e)
```
:::
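As a sketch (the names `greet` and `pred` are hypothetical), optional and automatic parameters look like this:
```lean
-- `name` is an optional parameter with default value `"world"`.
def greet (name : String := "world") : String := s!"Hello, {name}!"

-- `ok` is an automatic parameter: `omega` runs at each call site.
def pred (n : Nat) (ok : n ≥ 1 := by omega) : Nat := n - 1

#eval greet "Lean"  -- the default value is overridden
#eval pred 5        -- `omega` proves `5 ≥ 1` at the call site
```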
:::syntax bracketedBinder (title := "Implicit Parameters")
Parameters in curly braces indicate {tech}[implicit] parameters.
Unless provided by name at a call site, these parameters are expected to be synthesized via unification at call sites.
Implicit parameters are synthesized at all call sites.
```grammar
{$x $x* : $t}
```
:::
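For instance (a sketch with a hypothetical `len`), an implicit type parameter is solved by unification, but may also be provided by name:
```lean
def len {α : Type u} : List α → Nat
  | [] => 0
  | _ :: xs => 1 + len xs

#eval len [1, 2, 3]         -- `α` is solved by unification
#eval len (α := String) []  -- implicit arguments may be given by name
```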
:::syntax bracketedBinder (title := "Strict Implicit Parameters")
Parameters in double curly braces indicate {tech}[strict implicit] parameters.
`⦃ … ⦄` and `{{ … }}` are equivalent.
Like implicit parameters, these parameters are expected to be synthesized via unification at call sites when they are not provided by name.
Strict implicit parameters are only synthesized at call sites when subsequent parameters in the signature are also provided.
```grammar
⦃$x $x* : $t⦄
```
```grammar
{{$x $x* : $t}}
```
:::
:::syntax bracketedBinder (title := "Instance Implicit Parameters")
Parameters in square brackets indicate {tech}[instance implicit] parameters, which are synthesized at call sites using {tech (key := "synthesis")}[instance synthesis].
```grammar
[$[$x :]? $t]
```
:::
The parameters are always in scope in the signature's type, which occurs after the colon.
They are also in scope in the declaration's body, while names bound in the type itself are only in scope in the type.
Thus, parameter names are used twice:
* As names in the declaration's function type, bound as part of a {tech (key := "dependent")}[dependent function type].
* As names in the declaration's body.
In function definitions, they are bound by a {keywordOf Lean.Parser.Term.fun}`fun`.
:::example "Parameter Scope"
The signature of {lean}`add` contains one parameter, `n`.
Additionally, the signature's type is {lean}`(k : Nat) → Nat`, which is a function type that includes `k`.
The parameter `n` is in scope in the function's body, but `k` is not.
```lean
def add (n : Nat) : (k : Nat) → Nat
| 0 => n
| k' + 1 => 1 + add n k'
```
Like {lean}`add`, the signature of {lean}`mustBeEqual` contains one parameter, `n`.
It is in scope both in the type, where it occurs in a proposition, and in the body, where it occurs as part of the message.
```lean
def mustBeEqual (n : Nat) : (k : Nat) → n = k → String :=
fun _ =>
fun
| rfl => s!"Equal - both are {n}!"
```
:::
The section on {ref "function-application"}[function application] describes the interpretation of {tech (key := "optional parameter")}[optional], {tech (key := "automatic parameter")}[automatic], {tech}[implicit], and {tech}[instance implicit] parameters in detail.
## Automatic Implicit Parameters
%%%
tag := "automatic-implicit-parameters"
%%%
By default, otherwise-unbound names that occur in signatures are converted into implicit parameters when possible.
These parameters are called {deftech}_automatic implicit parameters_.
This is possible when they are not in the function position of an application and when there is sufficient information available in the signature to infer their type and any ordering constraints on them.
This process is iterated: if the inferred type for the freshly-inserted implicit parameter has dependencies that are not uniquely determined, then these dependencies are replaced with further implicit parameters.
Implicit parameters that don't correspond to names written in signatures are assigned names akin to those of {tech}[inaccessible] hypotheses in proofs, which cannot be referred to.
They show up in signatures with a trailing dagger (`✝`).
This prevents an arbitrary choice of name by Lean from becoming part of the API by being usable as a {tech}[named argument].
::::leanSection
```lean -show
variable {α : Type u} {β : Type v}
```
:::example "Automatic Implicit Parameters"
In this definition of {lean}`map`, {lean}`α` and {lean}`β` are not explicitly bound.
Rather than this being an error, they are converted into implicit parameters.
Because they must be types, but nothing constrains their universes, the universe parameters `u` and `v` are also inserted.
```lean
def map (f : α → β) : (xs : List α) → List β
| [] => []
| x :: xs => f x :: map f xs
```
The full signature of {lean}`map` is:
```signature
map.{u, v} {α : Type u} {β : Type v}
(f : α → β) (xs : List α) :
List β
```
:::
::::
::::example "No Automatic Implicit Parameters"
:::leanSection
```lean -show
universe u v
variable {α : Type u} {β : Type v}
```
In this definition, {lean}`α` and {lean}`β` are not explicitly bound.
Because {option}`autoImplicit` is disabled, this is an error:
:::
:::keepEnv
```lean +error (name := noAuto)
set_option autoImplicit false
def map (f : α → β) : (xs : List α) → List β
| [] => []
| x :: xs => f x :: map f xs
```
```leanOutput noAuto
Unknown identifier `α`
Note: It is not possible to treat `α` as an implicitly bound variable here because the `autoImplicit` option is set to `false`.
```
```leanOutput noAuto
Unknown identifier `β`
Note: It is not possible to treat `β` as an implicitly bound variable here because the `autoImplicit` option is set to `false`.
```
:::
The full signature allows the definition to be accepted:
```lean -keep
set_option autoImplicit false
def map.{u, v} {α : Type u} {β : Type v}
(f : α → β) :
(xs : List α) → List β
| [] => []
| x :: xs => f x :: map f xs
```
Universe parameters are inserted automatically for parameters without explicit type annotations.
The type parameters' universes can be inferred, and the appropriate universe parameters inserted, even when {option}`autoImplicit` is disabled:
```lean -keep
set_option autoImplicit false
def map {α β} (f : α → β) :
(xs : List α) → List β
| [] => []
| x :: xs => f x :: map f xs
```
::::
:::::example "Iterated Automatic Implicit Parameters"
:::leanSection
```lean -show
variable (i : Fin n)
```
Given a number bounded by {lean}`n`, represented by the type `Fin n`, an {lean}`AtLeast i` is a natural number paired with a proof that it is at least as large as `i`.
:::
```lean
structure AtLeast (i : Fin n) where
val : Nat
val_gt_i : val ≥ i.val
```
These numbers can be added:
```lean
def AtLeast.add (x y : AtLeast i) : AtLeast i :=
AtLeast.mk (x.val + y.val) <| by
cases x
cases y
dsimp only
omega
```
::::paragraph
:::leanSection
```lean -show
variable (i : Fin n)
```
The signature of {lean}`AtLeast.add` requires multiple rounds of automatic implicit parameter insertion.
First, {lean}`i` is inserted; but its type depends on the upper bound {lean}`n` of {lean}`Fin n`.
In the second round, {lean}`n` is inserted, using a machine-chosen name.
Because {lean}`n`'s type is {lean}`Nat`, which has no dependencies, the process terminates.
The final signature can be seen with {keywordOf Lean.Parser.Command.check}`#check`:
:::
```lean (name := checkAdd)
#check AtLeast.add
```
```leanOutput checkAdd
AtLeast.add {n✝ : Nat} {i : Fin n✝} (x y : AtLeast i) : AtLeast i
```
::::
:::::
Automatic implicit parameter insertion takes place after the insertion of parameters due to {tech}[section variables].
Parameters that correspond to section variables have the same name as the corresponding variable, even when they do not correspond to a name written directly in the signature, and disabling automatic implicit parameters has no effect on the parameters that correspond to section variables.
However, when automatic implicit parameters are enabled, section variable declarations that contain otherwise-unbound variables receive additional section variables that follow the same rules as those for implicit parameters.
Automatic implicit parameter insertion is controlled by two options.
By default, automatic implicit parameter insertion is _relaxed_, which means that any unbound identifier may be a candidate for automatic insertion.
Setting the option {option}`relaxedAutoImplicit` to {lean}`false` disables relaxed mode and causes only identifiers that consist of a single character followed by zero or more digits to be considered for automatic insertion.
{optionDocs relaxedAutoImplicit}
{optionDocs autoImplicit}
::::example "Relaxed vs Non-Relaxed Automatic Implicit Parameters"
Misspelled identifiers or missing imports can end up as unwanted implicit parameters, as in this example:
```lean
inductive Answer where
| yes
| maybe
| no
```
:::keepEnv
```lean (name := asnwer) +error
def select (choices : α × α × α) : Asnwer → α
| .yes => choices.1
| .maybe => choices.2.1
| .no => choices.2.2
```
The resulting error message states that the argument's type is not a constant, so dot notation cannot be used in the pattern:
```leanOutput asnwer
Invalid dotted identifier notation: The expected type of `.yes`
Asnwer
is not of the form `C ...` or `... → C ...` where C is a constant
```
This is because the signature is:
```signature
select.{u_1, u_2}
{α : Type u_1}
{Asnwer : Sort u_2}
(choices : α × α × α) :
Asnwer → α
```
:::
Disabling relaxed automatic implicit parameters makes the error more clear, while still allowing the type to be inserted automatically:
:::keepEnv
```lean (name := asnwer2) +error
set_option relaxedAutoImplicit false
def select (choices : α × α × α) : Asnwer → α
| .yes => choices.1
| .maybe => choices.2.1
| .no => choices.2.2
```
```leanOutput asnwer2
Unknown identifier `Asnwer`
Note: It is not possible to treat `Asnwer` as an implicitly bound variable here because it has multiple characters while the `relaxedAutoImplicit` option is set to `false`.
```
:::
Correcting the error allows the definition to be accepted.
:::keepEnv
```lean
set_option relaxedAutoImplicit false
def select (choices : α × α × α) : Answer → α
| .yes => choices.1
| .maybe => choices.2.1
| .no => choices.2.2
```
:::
Turning off automatic implicit parameters entirely leads to the definition being rejected:
:::keepEnv
```lean +error (name := noauto)
set_option autoImplicit false
def select (choices : α × α × α) : Answer → α
| .yes => choices.1
| .maybe => choices.2.1
| .no => choices.2.2
```
```leanOutput noauto
Unknown identifier `α`
Note: It is not possible to treat `α` as an implicitly bound variable here because the `autoImplicit` option is set to `false`.
```
:::
::::
# Definitions
Definitions add a new constant to the global environment as a name that stands for a term.
As part of the kernel's definitional equality, this new constant may be replaced via {tech (key := "δ")}[δ-reduction] with the term that it stands for.
In the elaborator, this replacement is governed by the constant's {tech}[reducibility].
The new constant may be {tech (key := "universe polymorphism")}[universe polymorphic], in which case occurrences may instantiate it with different universe level parameters.
Function definitions may be recursive.
To preserve the consistency of Lean's type theory as a logic, recursive functions must either be opaque to the kernel (e.g. by {ref "partial-functions"}[declaring them {keyword}`partial`]) or proven to terminate with one of the strategies described in {ref "recursive-definitions"}[the section on recursive definitions].
The headers and bodies of definitions are elaborated together.
If the header is incompletely specified (e.g. a parameter's type or the codomain is missing), then the body may provide sufficient information for the elaborator to reconstruct the missing parts.
However, {tech}[instance implicit] parameters must be specified in the header or as {tech}[section variables].
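For instance (a sketch, with a hypothetical name), the codomain below is omitted from the header and inferred from the body:
```lean
def describe (b : Bool) :=
  if b then "yes" else "no"

-- The inferred signature is `describe (b : Bool) : String`.
```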
:::syntax Lean.Parser.Command.declaration (alias := Lean.Parser.Command.definition) (title := "Definitions")
Definitions that use `:=` associate the term on the right-hand side with the constant's name.
The term is wrapped in a {keywordOf Lean.Parser.Term.fun}`fun` for each parameter, and the type is found by binding the parameters in a function type.
Definitions with {keyword}`def` are {tech}[semireducible].
```grammar
$_:declModifiers
def $_ $_ := $_
```
Definitions may use pattern matching.
These definitions are desugared to uses of {keywordOf Lean.Parser.Term.match}`match`.
```grammar
$_:declModifiers
def $_ $_
$[| $_ => $_]*
```
Values of structure types, or functions that return them, may be defined by providing values for their fields, following {keyword}`where`:
```grammar
$_:declModifiers
def $_ $_ where
$_*
```
In {tech}[modules], the bodies of definitions defined with {keyword}`def` are not exposed by default.
:::
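As a sketch of the {keyword}`where` form (the names `Point` and `origin` are hypothetical):
```lean
structure Point where
  x : Nat
  y : Nat

-- A structure value defined by listing its fields after `where`:
def origin : Point where
  x := 0
  y := 0
```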
:::syntax Lean.Parser.Command.declaration (alias := Lean.Parser.Command.abbrev) (title := "Abbreviations")
{deftech}[Abbreviations] are identical to definitions with {keyword}`def`, except they are {tech}[reducible].
```grammar
$_:declModifiers
abbrev $_ $_ := $_
```
```grammar
$_:declModifiers
abbrev $_ $_
$[| $_ => $_]*
```
```grammar
$_:declModifiers
abbrev $_ $_ where
$_*
```
In {tech}[modules], the bodies of definitions defined with {keyword}`abbrev` are exposed by default.
:::
{deftech}_Opaque constants_ are defined constants that are not subject to {tech (key := "δ")}[δ-reduction] in the kernel.
They are useful for specifying the existence of some function.
Unlike {tech}[axioms], opaque declarations can only be used for types that are inhabited, so they do not risk introducing inconsistency.
Also unlike axioms, an opaque constant's inhabitant is used in compiled code.
The {attr}`implemented_by` attribute can be used to instruct the compiler to emit a call to some other function as the compilation of an opaque constant.
:::syntax Lean.Parser.Command.declaration (alias := Lean.Parser.Command.opaque) (title := "Opaque Constants")
Opaque definitions with right-hand sides are elaborated like other definitions.
This demonstrates that the type is inhabited; the inhabitant plays no further role.
```grammar
$_:declModifiers
opaque $_ $_ := $_
```
Opaque constants may also be specified without right-hand sides.
The elaborator fills in the right-hand side by synthesizing an instance of {name}`Inhabited`, or {name}`Nonempty` if that fails.
```grammar
$_:declModifiers
opaque $_ $_
```
:::
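For instance (a sketch, with hypothetical names), both forms of opaque constant can be declared, and neither is subject to δ-reduction:
```lean
-- The right-hand side shows that `Nat` is inhabited, but is not unfolded.
opaque secret : Nat := 42

-- With no right-hand side, the `Inhabited Nat` instance supplies a witness.
opaque mystery : Nat

example : secret = secret := rfl
-- `example : secret = 42 := rfl` would fail: `secret` is never unfolded.
```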
# Theorems
:::paragraph
Because {tech}[propositions] are types whose inhabitants count as proofs, {deftech}[theorems] and definitions are technically very similar.
However, because their use cases are quite different, they differ in many details:
* The theorem statement must be a proposition.
The types of definitions may inhabit any {tech}[universe].
* A theorem's header (that is, the theorem statement) is completely elaborated before the body is elaborated.
Section variables only become parameters to the theorem if they (or their dependents) are mentioned in the header.
This prevents changes to a proof from unintentionally changing the theorem statement.
* Theorems are {tech}[irreducible] by default.
Because all proofs of the same proposition are {tech (key := "definitional equality")}[definitionally equal], there are few reasons to unfold a theorem.
:::
Theorems may be recursive, subject to the same conditions as {ref "recursive-definitions"}[recursive function definitions].
However, it is more common to use tactics such as {tactic}`induction` or {tactic}`fun_induction` instead.
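As a sketch (the name `zero_add_example` is hypothetical), a theorem may be proved by structural recursion using the same equation syntax as a recursive definition:
```lean
theorem zero_add_example : (n : Nat) → 0 + n = n
  | 0 => rfl
  | n + 1 => congrArg (· + 1) (zero_add_example n)
```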
:::syntax Lean.Parser.Command.declaration (alias := Lean.Parser.Command.theorem) (title := "Theorems")
The syntax of theorems is like that of definitions, except the codomain (that is, the theorem statement) in the signature is mandatory.
```grammar
$_:declModifiers
theorem $_ $_ := $_
```
```grammar
$_:declModifiers
theorem $_ $_
$[| $_ => $_]*
```
```grammar
$_:declModifiers
theorem $_ $_ where
$_*
```
In {tech}[modules], proofs of theorems are not exposed by default.
:::
# Example Declarations
An {deftech}[example] is an anonymous definition that is elaborated and then discarded.
Examples are useful for incremental testing during development and to make it easier to understand a file.
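For instance, these sketches are elaborated and checked, but add no names to the environment:
```lean
example : 2 + 2 = 4 := rfl
example (xs : List Nat) : xs ++ [] = xs := List.append_nil xs
```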
:::syntax Lean.Parser.Command.declaration (alias := Lean.Parser.Command.example) (title := "Examples")
```grammar
$_:declModifiers
example $_:optDeclSig := $_
```
```grammar
$_:declModifiers
example $_:optDeclSig
$[| $_ => $_]*
```
```grammar
$_:declModifiers
example $_:optDeclSig where
$_*
```
:::
{include 0 Manual.RecursiveDefs} |
reference-manual/Manual/Runtime.lean | import VersoManual
import Manual.Meta
import Manual.Meta.LexedText
import Manual.Papers
import Std.Internal.Async.Process
open Manual
open Verso.Genre
open Verso.Genre.Manual
open Verso.Genre.Manual.InlineLean
set_option pp.rawOnError true
#doc (Manual) "Run-Time Code" =>
%%%
tag := "runtime"
%%%
Compiled Lean code uses services provided by the Lean runtime.
The runtime contains efficient, low-level primitives that bridge the gap between the Lean language and the supported platforms.
These services include:
: Memory management
Lean does not require programmers to manually manage memory.
Space is allocated when needed to store a value, and values that can no longer be reached (and are thus irrelevant) are deallocated.
In particular, Lean uses {tech (key := "reference count")}[reference counting], where each allocated object maintains a count of incoming references.
The compiler emits calls to memory management routines that allocate memory and modify reference counts, and these routines are provided by the runtime, along with the data structures that represent Lean values in compiled code.
: Multiple Threads
The {name}`Task` API provides the ability to write parallel and concurrent code.
The runtime is responsible for scheduling Lean tasks across operating-system threads.
: Primitive operators
Many built-in types, including {lean}`Nat`, {lean}`Array`, {lean}`String`, and fixed-width integers, have special representations for reasons of efficiency.
The runtime provides implementations of these types' primitive operators that take advantage of these optimized representations.
There are many primitive operators.
They are described in their respective sections under {ref "basic-types"}[Basic Types].
# Boxing
%%%
tag := "boxing"
%%%
:::paragraph
Lean values may be represented at runtime in two ways:
* {deftech}_Boxed_ values may be pointers to heap values or require shifting and masking.
* {deftech}_Unboxed_ values are immediately available.
:::
Boxed values are either a pointer to an object, in which case the lowest-order bit is 0, or an immediate value, in which case the lowest-order bit is 1 and the value is found by shifting the representation to the right by one bit.
Types with an unboxed representation, such as {name}`UInt8` and {tech}[enum inductive] types, are represented as the corresponding C types in contexts where the compiler can be sure that the value has said type.
In some contexts, such as generic container types like {name}`Array`, otherwise-unboxed values must be boxed prior to storage.
For example, {name}`Bool.not` is called with and returns unboxed `uint8_t` values because the {tech}[enum inductive] type {name}`Bool` has an unboxed representation, but the individual {name}`Bool` values in an {lean}`Array Bool` are boxed.
A field of type {lean}`Bool` in an inductive type's constructor is represented unboxed, while {lean}`Bool`s stored in polymorphic fields that are instantiated as {lean}`Bool` are boxed.
# Reference Counting
%%%
tag := "reference-counting"
%%%
Lean uses {deftech (key := "reference count")}_reference counting_ for memory management.
Each allocated object maintains a count of how many other objects refer to it.
When a new reference is added, the count is incremented, and when a reference is dropped, the count is decremented.
When a reference count reaches zero, the object is no longer reachable and can play no part in the further execution of the program.
It is deallocated and all of its references to other objects are dropped, which may trigger further deallocations.
:::paragraph
Reference counting provides a number of benefits:
: Reuse of Memory
If an object's reference count drops to zero just as another of the same size is to be allocated, then the original object's memory can be safely reused for the new object.
As a result, many common data-structure traversals (such as {name}`List.map`) do not need to allocate memory when there is exactly one reference to the data structure to be traversed.
: Opportunistic In-Place Updates
Primitive types, such as {ref "String"}[strings] and {ref "Array"}[arrays], may provide operations that copy shared data but modify unshared data in-place.
As long as they hold the only reference to the value being modified, many operations on these primitive types will modify it rather than copy it.
This can lead to substantial performance benefits.
Carefully-written {lean}`Array` code avoids the performance overhead of immutable data structures while maintaining the ease of reasoning provided by pure functions.
: Predictability
Reference counts are decremented at predictable times.
As a result, reference-counted objects can be used to manage other resources, such as file handles.
In Lean, a {name IO.FS.Handle}`Handle` does not need to be explicitly closed because it is closed immediately when it is no longer accessible.
: Simpler FFI
Objects managed with reference counting don't need to be relocated as part of reclaiming unused memory.
This greatly simplifies interaction with code written in other languages, such as C.
:::
The traditional drawbacks of reference counting include the performance overhead due to updating reference counts along with the inability to recognize and deallocate cyclic data.
The former drawback is minimized by an analysis based on _borrowing_ that allows many reference count updates to be elided.
Nevertheless, multi-threaded code requires that reference count updates be synchronized between threads, which also imposes a substantial overhead.
To reduce this overhead, Lean values are partitioned into those which are reachable from multiple threads and those which are not.
Single-threaded reference counts can be updated much faster than multi-threaded reference counts, and many values are accessed only on a single thread.
Together, these techniques greatly reduce the performance overhead of reference counting.
Because the verifiable fragment of Lean cannot create cyclic data, the Lean runtime does not have a technique to detect it.
{citet countingBeans}[] provide more details on the implementation of reference counting in Lean.
## Observing Uniqueness
Ensuring that arrays and strings are uniquely referenced is key to writing fast code in Lean.
The primitive {name}`dbgTraceIfShared` can be used to check whether a data structure is aliased.
When called, it returns its argument unchanged, printing the provided trace message if the argument's reference count is greater than one.
{docstring dbgTraceIfShared}
Due to the specifics of how {keywordOf Lean.Parser.Command.eval}`#eval` is implemented, using {name}`dbgTraceIfShared` with {keywordOf Lean.Parser.Command.eval}`#eval` can be misleading.
Instead, it should be used in code that's explicitly compiled and run.
::::example "Observing Uniqueness"
:::ioExample
This program reads a line of input from the user, printing it after replacing its first character with a space.
Replacing characters in a string uses an in-place update if the string is not shared and the characters are both contained in the 7-bit ASCII subset of Unicode.
The {name}`dbgTraceIfShared` call does nothing, indicating that the string will indeed be updated in place rather than copied.
```ioLean
def process (str : String) (h : str.startPos ≠ str.endPos) : IO Unit := do
IO.println ((dbgTraceIfShared "String update" str).startPos.set ' ' h)
def main : IO Unit := do
let line := (← (← IO.getStdin).getLine).trimAscii.copy
if h : line.startPos ≠ line.endPos then
process line h
```
When run with this input:
```stdin
Here is input.
```
the program emits:
```stdout
ere is input.
```
with an empty standard error output:
```stderr
```
:::
:::ioExample
This version of the program retains a reference to the original string, which necessitates copying the string in the call to {name}`String.set`.
This fact is visible in its standard error output.
```ioLean
def process (str : String) (h : str.startPos ≠ str.endPos) : IO Unit := do
IO.println ((dbgTraceIfShared "String update" str).startPos.set ' ' h)
def main : IO Unit := do
let line := (← (← IO.getStdin).getLine).trimAscii.copy
if h : line.startPos ≠ line.endPos then
process line h
IO.println "Original input:"
IO.println line
```
When run with this input:
```stdin
Here is input.
```
the program emits:
```stdout
ere is input.
Original input:
Here is input.
```
In its standard error, the message passed to {name}`dbgTraceIfShared` is visible.
```stderr
shared RC String update
```
:::
::::
## Compiler IR
The compiler option {option}`trace.compiler.ir.result` can be used to inspect the compiler's intermediate representation (IR) for a function.
In this intermediate representation, reference counting, allocation, and reuse are explicit:
* The `isShared` operator checks whether a reference count is `1`.
* `ctor_`$`n` allocates the $`n`th constructor of a type.
* `proj_`$`n` retrieves the $`n`th field from a constructor value.
* `set `$`x``[`$`n``]` mutates the $`n`th field of the constructor in $`x`.
* `ret `$`x` returns the value in $`x`.
The specifics of reference count manipulations can depend on the results of optimization passes such as inlining.
While the vast majority of Lean code doesn't require this kind of attention to achieve good performance, knowing how to diagnose unique reference issues can be very important when writing performance-critical code.
{optionDocs trace.compiler.ir.result}
:::example "Reference Counts in IR"
Compiler IR can be used to observe when reference counts are incremented, which can help diagnose situations when a value is expected to have a unique incoming reference, but is in fact shared.
Here, {lean}`process` and {lean}`process'` each take a string as a parameter and modify it with {name}`String.set`, returning a pair of strings.
While {lean}`process` returns a constant string as the second element of the pair, {lean}`process'` returns the original string.
```lean
set_option trace.compiler.ir.result true
```
```lean (name := p1)
def process (str : String) : String × String :=
(str.set 0 ' ', "")
```
```lean (name := p2)
def process' (str : String) : String × String :=
(str.set 0 ' ', str)
```
The IR for {lean}`process` includes no `inc` or `dec` instructions.
If the incoming string `x_1` is a unique reference, then it is still a unique reference when passed to {name}`String.set`, which can then use in-place modification:
```leanOutput p1 (allowDiff := 5)
[Compiler.IR] [result]
def process._closed_0 : obj :=
let x_1 : obj := "";
ret x_1
def process (x_1 : obj) : obj :=
let x_2 : tagged := 0;
let x_3 : u32 := 32;
let x_4 : obj := String.set x_1 x_2 x_3;
let x_5 : obj := process._closed_0;
let x_6 : obj := ctor_0[Prod.mk] x_4 x_5;
ret x_6
```
The IR for {lean}`process'`, on the other hand, increments the reference count of the string just before calling {name}`String.set`.
Thus, the modified string `x_4` is a copy, regardless of whether the original reference to `x_1` is unique:
```leanOutput p2
[Compiler.IR] [result]
def process' (x_1 : obj) : obj :=
let x_2 : tagged := 0;
let x_3 : u32 := 32;
inc x_1;
let x_4 : obj := String.set x_1 x_2 x_3;
let x_5 : obj := ctor_0[Prod.mk] x_4 x_1;
ret x_5
```
:::
:::example "Memory Reuse in IR"
The function {lean}`discardElems` is a simplified version of {name}`List.map` that replaces every element in a list with {lean}`()`.
Inspecting its intermediate representation demonstrates that it will reuse the list's memory when its reference is unique.
```lean (name := discardElems)
set_option trace.compiler.ir.result true
def discardElems : List α → List Unit
| [] => []
| x :: xs => () :: discardElems xs
```
This emits the following IR:
```leanOutput discardElems
[Compiler.IR] [result]
def discardElems._redArg (x_1 : tobj) : tobj :=
case x_1 : tobj of
List.nil →
let x_2 : tagged := ctor_0[List.nil];
ret x_2
List.cons →
let x_3 : u8 := isShared x_1;
case x_3 : u8 of
Bool.false →
let x_4 : tobj := proj[1] x_1;
let x_5 : tobj := proj[0] x_1;
dec x_5;
let x_6 : tagged := ctor_0[PUnit.unit];
let x_7 : tobj := discardElems._redArg x_4;
set x_1[1] := x_7;
set x_1[0] := x_6;
ret x_1
Bool.true →
let x_8 : tobj := proj[1] x_1;
inc x_8;
dec x_1;
let x_9 : tagged := ctor_0[PUnit.unit];
let x_10 : tobj := discardElems._redArg x_8;
let x_11 : obj := ctor_1[List.cons] x_9 x_10;
ret x_11
[Compiler.IR] [result]
def discardElems (x_1 : ◾) (x_2 : tobj) : tobj :=
let x_3 : tobj := discardElems._redArg x_2;
ret x_3
```
In the IR, the {name}`List.cons` case explicitly checks whether the argument value is shared (i.e. whether its reference count is greater than one).
If the reference is unique, the reference count of the discarded list element `x_5` is decremented and the constructor value is reused.
If it is shared, a new {name}`List.cons` is allocated in `x_11` for the result.
:::
### More Topics
%%%
draft := true
%%%
:::planned 208
* Compact regions
* When should C code increment or decrement reference counts?
* What is the meaning of the borrow annotation (`@&`)?
:::
# Multi-Threaded Execution
Lean includes primitives for parallel and concurrent programs, described using {tech}[tasks].
The Lean runtime system includes a task manager that assigns hardware resources to tasks.
Along with the API for defining tasks, this is described in detail in the {ref "concurrency"}[section on multi-threaded programs].
# Foreign Function Interface
%%%
tag := "ffi"
%%%
*The current interface was designed for internal use in Lean and should be considered unstable*.
It will be refined and extended in the future.
Lean offers efficient interoperability with any language that supports the C ABI.
This support is, however, currently limited to transferring Lean data types; in particular, it is not yet possible to pass or return compound data structures such as C {c}`struct`s by value from or to Lean.
There are two primary attributes for interoperating with other languages:
* `@[extern "sym"] opaque leanSym : ...`
{TODO}[It can also be used with `def` to provide an internal definition, but ensuring consistency of both definitions is up to the user.]
* `@[export sym] def leanSym : ...`
:::syntax attr (title := "External Symbols")
```grammar
extern $s:str
```
Binds a Lean declaration to the specified external symbol.
:::
:::syntax attr (title := "Exported Symbols")
```grammar
export $x:ident
```
Exports a Lean constant to C, using the provided identifier as its unmangled symbol name.
:::
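As a hedged sketch of how these attributes are used (the symbol names `c_add` and `lean_mul` are invented for this illustration, not part of any real library):

```lean
-- `c_add` is assumed to be provided by linked C code with the
-- prototype `uint32_t c_add(uint32_t, uint32_t);`.
@[extern "c_add"]
opaque cAdd (x y : UInt32) : UInt32

-- Expose a Lean definition to C under the unmangled symbol `lean_mul`:
@[export lean_mul]
def leanMul (x y : UInt32) : UInt32 := x * y
```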
For simple examples of how to call foreign code from Lean and vice versa, see [the FFI](https://github.com/leanprover/lean4/tree/master/tests/lake/examples/ffi) and [reverse FFI](https://github.com/leanprover/lean4/tree/master/tests/lake/examples/reverse-ffi) examples in the Lean source repository.
## The Lean ABI
:::leanSection
```lean -show
variable {α₁ αₙ β αᵢ}
private axiom «α₂→…→αₙ₋₁».{u} : Type u
local macro "..." : term => ``(«α₂→…→αₙ₋₁»)
```
The Lean {deftech}_Application Binary Interface_ (ABI) describes how the signature of a Lean declaration is encoded in the platform-native calling convention.
It is based on the standard C ABI and calling convention of the target platform.
Lean declarations can be marked for interaction with foreign functions using either the attribute {attr}`extern "sym"`, which causes compiled code to use the C declaration {c}`sym` as the implementation, or the attribute {attr}`export sym`, which makes the declaration available as {c}`sym` to C.
In both cases, the C declaration's type is derived from the Lean type of the declaration with the attribute.
Let {lean}`α₁ → ... → αₙ → β` be the declaration's {tech (key := "normal form")}[normalized] type.
If `n` is 0, the corresponding C declaration is
```c
extern s sym;
```
where {c}`s` is the C translation of {lean}`β` as specified in {ref "ffi-types"}[the next section].
In the case of a definition marked {attr}`extern`, the symbol's value is only guaranteed to be initialized after calling the Lean module's initializer or that of an importing module.
The section on {ref "ffi-initialization"}[initialization] describes initializers in greater detail.
If `n` is greater than 0, the corresponding C declaration is
```c
s sym(t₁, ..., tₙ);
```
where the parameter types `tᵢ` are the C translations of the types {lean}`αᵢ`.
In the case of {attr}`extern`, all {tech}[irrelevant] types are removed first.
:::
### Translating Types from Lean to C
%%%
tag := "ffi-types"
%%%
```lean -show
universe u
variable (p : Prop)
private axiom «...» : Sort u
local macro "..." : term => ``(«...»)
```
In the {tech (key := "application binary interface")}[ABI], Lean types are translated to C types as follows:
* The integer types {lean}`UInt8`, …, {lean}`UInt64`, {lean}`USize` are represented by the C types {c}`uint8_t`, ..., {c}`uint64_t`, {c}`size_t`, respectively.
If their {ref "fixed-int-runtime"}[run-time representation] requires {tech (key := "boxed")}[boxing], then they are unboxed at the FFI boundary.
* {lean}`Char` is represented by {c}`uint32_t`.
* {lean}`Float` is represented by {c}`double`.
* {name}`Nat` and {name}`Int` are represented by {c}`lean_object *`.
Their runtime value is either a pointer to an opaque bignum object or, if the lowest bit of the “pointer” is 1 ({c}`lean_is_scalar`), an encoded natural number or integer ({c}`lean_box`/{c}`lean_unbox`).
* A universe {lean}`Sort u`, type constructor {lean}`... → Sort u`, or proposition {lean}`p`` :`{lean}` Prop` is {tech}[irrelevant] and is either statically erased (see above) or represented as a {c}`lean_object *` with the runtime value {c}`lean_box(0)`.
* The ABI for other inductive types that don't have special compiler support depends on the specifics of the type.
It is the same as the {ref "run-time-inductives"}[run-time representation] of these types.
The runtime value is either a pointer to an object of a subtype of {c}`lean_object` (see the “Inductive types” section below) or it is the value {c}`lean_box(cidx)` for the {c}`cidx`th constructor of an inductive type if this constructor does not have any relevant parameters.
```lean -show
variable (u : Unit)
```
:::example "`Unit` in the ABI"
The runtime value of {lean}`u`` : `{lean}`Unit` is always `lean_box(0)`.
:::
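As an illustration of these translation rules, consider a hypothetical binding (the symbol `c_sum_to` is invented for this sketch):

```lean
-- By the rules above, `UInt64` maps to `uint64_t`, so the C
-- implementation is expected to have the prototype
--   uint64_t c_sum_to(uint64_t n);
@[extern "c_sum_to"]
opaque sumTo (n : UInt64) : UInt64
```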
### Borrowing
%%%
tag := "ffi-borrowing"
%%%
By default, all {c}`lean_object *` parameters of an {attr}`extern` function are considered {deftech}_owned_.
The external code is passed a “virtual RC token” and is responsible for passing this token along to another consuming function (exactly once) or freeing it via {c}`lean_dec`.
To reduce reference counting overhead, parameters can be marked as {deftech}_borrowed_ by prefixing their type with {keywordOf Lean.Parser.Term.borrowed}`@&`.
Borrowed objects must only be passed to other non-consuming functions (arbitrarily often) or converted to owned values using {c}`lean_inc`.
In `lean.h`, the {c}`lean_object *` aliases {c}`lean_obj_arg` and {c}`b_lean_obj_arg` are used to mark this difference on the C side.
Return values and `@[export]` parameters are always owned at the moment.
:::syntax term (title := "Borrowed Parameters")
```grammar
@& $_
```
Parameters may be marked as {tech}[borrowed] by prefixing their types with {keyword}`@&`.
:::
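For instance, a hypothetical binding (the symbol `my_count_matches` is invented here) might borrow one parameter while owning another:

```lean
-- The string is borrowed (`@&`): the C side receives a
-- `b_lean_obj_arg` and must not consume it. The array parameter is
-- owned (`lean_obj_arg`) and must be consumed exactly once.
@[extern "my_count_matches"]
opaque countMatches (pattern : @& String) (xs : Array String) : Nat
```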
## Initialization
%%%
tag := "ffi-initialization"
%%%
When including Lean code in a larger program, modules must be {deftech (key := "initialize")}_initialized_ before accessing any of their declarations.
Module initialization entails:
* initialization of all “constant definitions” (nullary functions), including closed terms lifted out of other functions,
* execution of all code marked with the {attr}`init` attribute, and
* execution of all code marked with the {attr}`builtin_init` attribute, if the `builtin` parameter of the module initializer has been set.
The module initializer is automatically run with the `builtin` flag for executables compiled from Lean code and for “plugins” loaded with `lean --plugin`.
For all other modules imported by `lean`, the initializer is run without `builtin`.
In other words, {attr}`init` functions are run if and only if their module is imported, regardless of whether they have native code available, while {attr}`builtin_init` functions are only run for native executables or plugins, regardless of whether their module is imported.
The Lean compiler uses built-in initializers for purposes such as registering basic parsers that should be available even without importing their module, which is necessary for bootstrapping.
The initializer for module `A.B` in a package `foo` is called {c}`initialize_foo_A_B`.
For modules in the Lean core (e.g., {module}`Init.Prelude`), the initializer is called {c}`initialize_Init_Prelude`.
Module initializers will automatically initialize any imported modules.
They are also idempotent (when run with the same `builtin` flag), but not thread-safe.
*Important for process-related functionality*: applications that use process-related functions from `libuv`, such as {name}`Std.Internal.IO.Process.getProcessTitle` and {name}`Std.Internal.IO.Process.setProcessTitle`, must call `lean_setup_args(argc, argv)` (which returns a potentially modified `argv` that must be used in place of the original) *before* calling `lean_initialize()` or `lean_initialize_runtime_module()`.
This sets up process handling capabilities correctly, which is essential for certain system-level operations that Lean's runtime may depend on.
Together with initialization of the Lean runtime, code like the following should be run exactly once before accessing any Lean declarations:
```c
void lean_initialize_runtime_module();
void lean_initialize();
char ** lean_setup_args(int argc, char ** argv);
lean_object * initialize_A_B(uint8_t builtin);
lean_object * initialize_C(uint8_t builtin);
...
argv = lean_setup_args(argc, argv); // if using process-related functionality
lean_initialize_runtime_module();
// necessary (and replaces `lean_initialize_runtime_module`) for code that (indirectly) accesses the `Lean` package:
//lean_initialize();
lean_object * res;
// use same default as for Lean executables
uint8_t builtin = 1;
res = initialize_foo_A_B(builtin);
if (lean_io_result_is_ok(res)) {
lean_dec_ref(res);
} else {
lean_io_result_show_error(res);
lean_dec(res);
return ...; // do not access Lean declarations if initialization failed
}
res = initialize_bar_C(builtin);
if (lean_io_result_is_ok(res)) {
...
//lean_init_task_manager(); // necessary for code that (indirectly) uses `Task`
lean_io_mark_end_initialization();
```
In addition, any other thread not spawned by the Lean runtime itself must be initialized for Lean use by calling
```c
void lean_initialize_thread();
```
and should be finalized in order to free all thread-local resources by calling
```c
void lean_finalize_thread();
```
## `@[extern]` in the Interpreter
The Lean interpreter can run Lean declarations for which symbols are available in loaded shared libraries, which includes declarations that are marked {attr}`extern`.
To run this code (e.g. with {keywordOf Lean.Parser.Command.eval}`#eval`), the following steps are necessary:
1. The module containing the declaration and its dependencies must be compiled into a shared library.
1. This shared library should be provided to `lean --load-dynlib=` to run code that imports the module.
It is not sufficient to load the foreign library containing the external symbol because the interpreter depends on code that is emitted for each {attr}`extern` declaration.
Thus it is not possible to interpret an {attr}`extern` declaration in the same file.
The Lean source repository contains an example of this usage in [`tests/compiler/foreign`](https://github.com/leanprover/lean4/tree/master/tests/compiler/foreign/). |
reference-manual/Manual/BuildTools.lean | import VersoManual
import Lean.Parser.Command
import Manual.Meta
import Manual.BuildTools.Lake
import Manual.BuildTools.Elan
open Manual
open Verso.Genre
open Verso.Genre.Manual
open Verso.Genre.Manual.InlineLean
open Lean.Elab.Tactic.GuardMsgs.WhitespaceMode
#doc (Manual) "Build Tools and Distribution" =>
%%%
tag := "build-tools-and-distribution"
shortContextTitle := "Build Tools"
%%%
:::paragraph
The Lean {deftech}_toolchain_ is the collection of command-line tools that are used to check proofs and compile programs in collections of Lean files.
Toolchains are managed by `elan`, which installs toolchains as needed.
Lean toolchains are designed to be self-contained, and most command-line users will never need to explicitly invoke any tool other than `lake` and `elan`.
They contain the following tools:
: `lean`
The Lean compiler, used to elaborate and compile a Lean source file.
: `lake`
The Lean build tool, used to incrementally invoke `lean` and other tools while tracking dependencies.
: `leanc`
The C compiler that ships with Lean, which is a version of [Clang](https://clang.llvm.org/).
: `leanmake`
An implementation of the `make` build tool, used for compiling C dependencies.
: `leanchecker`
A tool that replays elaboration results from {tech}[`.olean` files] through the Lean kernel, providing additional assurance that all terms were properly checked.
:::
In addition to these build tools, toolchains contain files that are needed to build Lean code.
This includes source code, {tech}[`.olean` files], compiled libraries, C header files, and the compiled Lean run-time system.
They also include external proof automation tools that are used by tactics included with Lean, such as `cadical` for {tactic}`bv_decide`.
{include 0 Manual.BuildTools.Lake}
{include 0 Manual.BuildTools.Elan}
# Reservoir
%%%
tag := "reservoir"
draft := true
%%%
::: planned 76
* Concepts
* Package and toolchain versions
* Tags and builds
::: |
reference-manual/Manual/Axioms.lean | import VersoManual
import Manual.Meta
import Manual.Papers
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
#doc (Manual) "Axioms" =>
%%%
tag := "axioms"
htmlSplit := .never
%%%
:::leanSection
```lean -show
universe u
```
{deftech}_Axioms_ are postulated constants.
While the axiom's type must itself be a type (that is, it must have type {lean}`Sort u`), there are no further requirements.
Axioms do not {tech (key := "reduction")}[reduce] to other terms.
:::
Axioms can be used to experiment with the consequences of an idea before investing the time required to construct a model or prove a theorem.
They can also be used to adopt reasoning principles that can't otherwise be accessed in Lean's type theory; Lean itself provides {ref "standard-axioms"}[three such axioms] that are known to be consistent.
However, axioms should be used with caution: axioms that are inconsistent with one another, or just false, undermine the very foundations of proofs.
Lean automatically tracks the axioms that each proof depends on so that they can be audited.
# Axiom Declarations
%%%
tag := "axiom-declarations"
%%%
Axiom declarations include a name and a type:
:::syntax Lean.Parser.Command.axiom (title := "Axiom Declarations")
```grammar
axiom $_ $_
```
:::
Axiom declarations may be modified with all possible {ref "declaration-modifiers"}[declaration modifiers].
Documentation comments, attributes, {keyword}`private`, and {keyword}`protected` have the same meaning as for other declarations.
The modifiers {keyword}`partial`, {keyword}`nonrec`, {keyword}`noncomputable` and {keyword}`unsafe` have no effect.
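For instance, the following sketch postulates a Boolean constant (`flipCoin` is an invented name, not a library definition):

```lean
-- Postulate a constant with no definition. `flipCoin` cannot be
-- evaluated, but it can be mentioned in terms and proofs like any
-- other constant.
axiom flipCoin : Bool

theorem coin_eq_self : flipCoin = flipCoin := rfl
```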
# Consistency
%%%
tag := "axiom-consistency"
%%%
Using axioms is risky.
Because they introduce a new constant of any type, and an inhabitant of a type that is a proposition counts as a proof of the proposition, axioms can be used to prove even false propositions.
Any proof that relies on an axiom can be trusted only to the extent that the axiom is both true and consistent with the other axioms used.
By their very nature, Lean cannot check whether new axioms are consistent; please exercise care when adding axioms.
:::example "Inconsistencies From Axioms"
Axioms may introduce inconsistency, either alone or in combination with other axioms.
Assuming a false statement allows any statement at all to be proved:
```lean
axiom false_is_true : False
theorem two_eq_five : 2 = 5 := false_is_true.elim
```
Inconsistency may also arise from axioms that are incompatible with other properties of Lean.
For example, parametricity is a powerful reasoning technique when used in languages that support it, but it is not compatible with Lean's standard axioms.
If parametricity held, then the “free theorem” from the introduction to Wadler's [_Theorems for Free_](https://dl.acm.org/doi/pdf/10.1145/99370.99404) (1989), which describes a technique for using parametricity to derive theorems about polymorphic functions, would be true.
As an axiom, it reads:
```lean
axiom List.free_theorem {α β}
(f : {α : _} → List α → List α) (g : α → β) :
f ∘ (List.map g) = (List.map g) ∘ f
```
However, a consequence of excluded middle is that all propositions are decidable; this means that a function can _check_ whether they are true or false.
This function can't be compiled, but it still exists.
This can be used to define polymorphic functions that are not parametric:
```lean
open Classical in
noncomputable def nonParametric
{α : _} (xs : List α) :
List α :=
if α = Nat then [] else xs
```
The existence of this function contradicts the “free theorem”:
```lean
theorem unit_not_nat : Unit ≠ Nat := by
intro eq
have ⟨allEq⟩ := eq ▸ inferInstanceAs (Subsingleton Unit)
specialize allEq 0 1
contradiction
example : False := by
have := List.free_theorem nonParametric (fun () => 42)
unfold nonParametric at this
simp [unit_not_nat] at this
have := congrFun this [()]
contradiction
```
:::
# Reduction
%%%
tag := "axiom-reduction"
%%%
Even consistent axioms can cause difficulties.
{tech}[Definitional equality] identifies terms modulo reduction rules.
The {tech}[ι-reduction] rule specifies the interaction of recursors and constructors; because axioms are not constructors, it does not apply to them.
Ordinarily, terms without free variables reduce to applications of constructors, but axioms can cause them to get “stuck,” resulting in large terms.
:::example "Axioms and Stuck Reduction"
Adding an additional `0` to {lean}`Nat` with an axiom results in some definitional reductions getting stuck.
In this example, two {name}`Nat.succ` constructors are successfully moved to the outside of the term by reduction, but {name}`Nat.rec` is unable to make further progress after encountering {lean}`Nat.otherZero`.
```lean (name := otherZero)
axiom Nat.otherZero : Nat
#reduce 4 + (Nat.otherZero + 2)
```
```leanOutput otherZero
((Nat.rec ⟨fun x => x, PUnit.unit⟩ (fun n n_ih => ⟨fun x => (n_ih.1 x).succ, n_ih⟩) Nat.otherZero).1 4).succ.succ
```
:::
Furthermore, the Lean compiler is not able to generate code for axioms.
At runtime, Lean values must be represented by concrete data in memory, but axioms do not have a concrete representation.
Definitions that contain non-proof code that relies on axioms must be marked {keyword}`noncomputable` and can't be compiled.
:::example "Axioms and Compilation"
Adding an additional `0` to {lean}`Nat` with an axiom makes it so functions that use it can't be compiled.
In particular, {name}`List.length'` returns the axiom {name}`Nat.otherZero` instead of {name}`Nat.zero` as the length of the empty list.
```lean (name := otherZero2) +error
axiom Nat.otherZero : Nat
def List.length' : List α → Nat
| [] => Nat.otherZero
| _ :: xs => xs.length' + 1
```
```leanOutput otherZero2
`Nat.otherZero` not supported by code generator; consider marking definition as `noncomputable`
```
Axioms used in proofs rather than programs do not prevent a function from being compiled.
The compiler does not generate code for proofs, so axioms in proofs are no problem.
{lean}`nextOdd` computes the next odd number from a {lean}`Nat`, which may be the number itself or one greater:
```lean
def nextOdd (k : Nat) :
{ n : Nat // n % 2 = 1 ∧ (n = k ∨ n = k + 1) } where
val := if k % 2 = 1 then k else k + 1
property := by
by_cases k % 2 = 1 <;>
simp [*] <;> omega
```
The tactic proof generates a term that transitively relies on three axioms:
```lean (name:=printAxNextOdd)
#print axioms nextOdd
```
```leanOutput printAxNextOdd
'nextOdd' depends on axioms: [propext, Classical.choice, Quot.sound]
```
Because they occur only in a proof, the compiler has no problem generating code:
```lean (name := evalNextOdd)
#eval (nextOdd 4, nextOdd 5)
```
```leanOutput evalNextOdd
(5, 5)
```
:::
# Standard Axioms
%%%
tag := "standard-axioms"
%%%
There are seven standard axioms in Lean. The first three axioms are important parts of how mathematics is done in Lean:
* ```signature
Classical.choice.{u} {α : Sort u} : Nonempty α → α
```
* ```signature
propext {a b : Prop} : (a ↔ b) → a = b
```
* ```signature
Quot.sound.{u} {α : Sort u}
{r : α → α → Prop} {a b : α} :
r a b → Quot.mk r a = Quot.mk r b
```
All three of these axioms are discussed in the book [Theorem Proving in Lean](https://lean-lang.org/theorem_proving_in_lean4/find/?domain=Verso.Genre.Manual.section&name=axioms-and-computation).
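As a small illustration of how {name}`propext` is used (this example is invented for the sketch), a biconditional can be converted into an equality of propositions, along which terms can then be rewritten:

```lean
-- `propext h : p = q`, so `▸` can rewrite the type `p` of `hp` to `q`.
example (p q : Prop) (h : p ↔ q) (hp : p) : q :=
  propext h ▸ hp
```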
The axiom {name}`sorryAx` is used as part of the implementation of the {tactic}`sorry` tactic and {lean}`sorry` term.
Uses of this axiom are not intended to occur in finished proofs, as it can be used to prove anything:
* ```signature
sorryAx {α : Sort u} (synthetic := true) : α
```
The final three axioms are not included for their _mathematical_ content; from a mathematical perspective, the statements that they prove are trivial:
* ```signature
Lean.trustCompiler : True
```
* ```signature
Lean.ofReduceBool (a b : Bool) : Lean.reduceBool a = b → a = b
```
* ```signature
Lean.ofReduceNat (a b : Nat) : Lean.reduceNat a = b → a = b
```
These axioms instead track proofs that depend on the correctness of the entire compiler, and not just on the much smaller {tech}`kernel`.
:::example "Creating and Tracking Proofs That Trust the Compiler"
The functions {name}`Lean.reduceBool` and {name}`Lean.reduceNat` can be invoked to have the compiler perform a calculation; this can greatly improve performance of implementations of proof by reflection.
```lean
def largeNumber : Nat := Lean.reduceNat (230_000 + 4_500 + 1_000_067)
```
The resulting term depends on the axiom {name}`Lean.trustCompiler` in order to track the fact that this calculation depends on the correctness of the compiler.
```lean (name := printAxExC1)
#print axioms largeNumber
```
```leanOutput printAxExC1
'largeNumber' depends on axioms: [Lean.trustCompiler]
```
The most common way that proofs end up trusting the compiler is through the {tactic}`native_decide` tactic:
```lean (name := printAxExC2)
def bigSum : (List.range 1_001).sum = 500_500 := by native_decide
#print axioms bigSum
```
```leanOutput printAxExC2
'bigSum' depends on axioms: [Lean.ofReduceBool, Lean.trustCompiler]
```
:::
# Displaying Axiom Dependencies
%%%
tag := "print-axioms"
%%%
The command {keywordOf Lean.Parser.Command.printAxioms}`#print axioms`, followed by a defined identifier, displays all the axioms that a definition transitively relies on.
In other words, if a proof uses another proof, which itself uses an axiom, then the axiom is reported by {keywordOf Lean.Parser.Command.printAxioms}`#print axioms` for both.
::::keepEnv
This can be used to audit the assumptions made by a proof, for instance detecting that a proof transitively depends on the {tactic}`sorry` tactic.
```lean
def lazy : 4 == 2 + 1 + 1 := by sorry
```
```lean (name := printAxEx4)
#print axioms lazy
```
```leanOutput printAxEx4
'lazy' depends on axioms: [sorryAx]
```
:::example "Printing Axioms of Simple Definitions" (keep := true)
Consider the following three constants:
```lean
def addThree (n : Nat) : Nat := 1 + n + 2
theorem excluded_middle (P : Prop) : P ∨ ¬ P := Classical.em P
theorem simple_equality (P : Prop) : (P ∨ False) = P := or_false P
```
Regular functions like {lean}`addThree` that we might want to actually evaluate typically do not depend on any axioms:
```lean (name := printAxEx2)
#print axioms addThree
```
```leanOutput printAxEx2
'addThree' does not depend on any axioms
```
The law of the excluded middle is provable only by classical reasoning, so the axioms that underlie classical reasoning show up in the report:
```lean (name := printAxEx1)
#print axioms excluded_middle
```
```leanOutput printAxEx1
'excluded_middle' depends on axioms: [propext, Classical.choice, Quot.sound]
```
Finally, the idea that two equivalent propositions are equal directly relies on {tech}[propositional extensionality].
```lean (name := printAxEx3)
#print axioms simple_equality
```
```leanOutput printAxEx3
'simple_equality' depends on axioms: [propext]
```
:::
:::example "Using {keywordOf Lean.Parser.Command.printAxioms}`#print axioms` with {keywordOf Lean.guardMsgsCmd}`#guard_msgs`"
You can use {keywordOf Lean.Parser.Command.printAxioms}`#print axioms`
together with {keywordOf Lean.guardMsgsCmd}`#guard_msgs` to ensure
that updates to libraries from other projects cannot silently
introduce unwanted dependencies on axioms.
For example, if the proof of {name}`double_neg_elim` below changed in such a way that it used more
axioms than those listed, then the {keywordOf Lean.guardMsgsCmd}`#guard_msgs` command would report an error.
```lean
theorem double_neg_elim (P : Prop) : (¬ ¬ P) = P :=
propext Classical.not_not
/--
info: 'double_neg_elim' depends on axioms:
[propext, Classical.choice, Quot.sound]
-/
#guard_msgs (whitespace := lax) in
#print axioms double_neg_elim
```
:::
:::: |
reference-manual/Manual/Classes.lean | import VersoManual
import Manual.Meta
import Manual.Papers
import Manual.Classes.InstanceDecls
import Manual.Classes.InstanceSynth
import Manual.Classes.DerivingHandlers
import Manual.Classes.BasicClasses
import Lean.Parser.Command
open Manual
open Verso.Genre
open Verso.Genre.Manual
open Verso.Genre.Manual.InlineLean
open Lean.Parser.Command (declModifiers)
set_option pp.rawOnError true
set_option linter.unusedVariables false
set_option maxRecDepth 100000
#doc (Manual) "Type Classes" =>
%%%
tag := "type-classes"
%%%
An operation is _polymorphic_ if it can be used with multiple types.
In Lean, polymorphism comes in three varieties:
1. {tech}[universe polymorphism], where the sorts in a definition can be instantiated in various ways,
2. functions that take types as (potentially implicit) parameters, allowing a single body of code to work with any type, and
3. {deftech}_ad-hoc polymorphism_, implemented with type classes, in which operations to be overloaded may have different implementations for different types.
Because Lean does not allow case analysis of types, polymorphic functions implement operations that are uniform for any choice of type argument; for example, {name}`List.map` does not suddenly compute differently depending on whether the input list contains {name}`String`s or {name}`Nat`s.
Ad-hoc polymorphic operations are useful when there is no “uniform” way to implement an operation; the canonical use case is for overloading arithmetic operators so that they work with {name}`Nat`, {name}`Int`, {name}`Float`, and any other type that has a sensible notion of addition.
Ad-hoc polymorphism may also involve multiple types; looking up a value at a given index in a collection involves the collection type, the index type, and the type of member elements to be extracted.
A {deftech}_type class_{margin}[Type classes were first described in {citehere wadlerBlott89}[]] describes a collection of overloaded operations (called {deftech}_methods_) together with the involved types.
Type classes are very flexible.
Overloading may involve multiple types; operations like indexing into a data structure can be overloaded for a specific choice of data structure, index type, element type, and even a predicate that asserts the presence of the key in the structure.
Due to Lean's expressive type system, overloading operations is not restricted only to types; type classes may be parameterized by ordinary values, by families of types, and even by predicates or propositions.
All of these possibilities are used in practice:
: Natural number literals
The {name}`OfNat` type class is used to interpret natural number literals.
Instances may depend not only on the type being instantiated, but also on the number literal itself.
: Computational effects
Type classes such as {name}`Monad`, whose parameter is a function from one type to another, are used to provide {ref "monads-and-do"}[special syntax for programs with side effects.]
The “type” for which operations are overloaded is actually a type-level function, such as {name}`Option`, {name}`IO`, or {name}`Except`.
: Predicates and propositions
The {name}`Decidable` type class allows a decision procedure for a proposition to be found automatically by Lean.
This is used as the basis for {keywordOf termIfThenElse}`if`-expressions, which may branch on any decidable proposition.
While ordinary polymorphic definitions simply expect instantiation with arbitrary parameters, the operators overloaded with type classes are to be instantiated with {deftech}_instances_ that define the overloaded operation for some specific set of parameters.
These {deftech}[instance-implicit] parameters are indicated in square brackets.
At invocation sites, Lean either {deftech (key := "synthesis")}_synthesizes_ {index}[instance synthesis] {index (subterm := "of type class instances")}[synthesis] a suitable instance from the available candidates or signals an error.
Because instances may themselves have instance parameters, this search process may be recursive and result in a final composite instance value that combines code from a variety of instances.
Thus, type class instance synthesis is also a means of constructing programs in a type-directed manner.
Here are some typical use cases for type classes:
* Type classes may represent overloaded operators, such as arithmetic that can be used with a variety of types of numbers or a membership predicate that can be used for a variety of data structures. There is often a single canonical choice of operator for a given type—after all, there is no sensible alternative definition of addition for {lean}`Nat`—but this is not an essential property, and libraries may provide alternative instances if needed.
* Type classes can represent an algebraic structure, providing both the extra structure and the axioms required by the structure. For example, a type class that represents an Abelian group may contain methods for a binary operator, a unary inverse operator, an identity element, as well as proofs that the binary operator is associative and commutative, that the identity is an identity, and that the inverse operator yields the identity element on both sides of the operator. Here, there may not be a canonical choice of structure, and a library may provide many ways to instantiate a given set of axioms; addition and multiplication, for example, give two equally canonical monoid structures over the integers.
* A type class can represent a relation between two types that allows them to be used together in some novel way by a library.
The {lean}`Coe` class represents automatically-inserted coercions from one type to another, and {lean}`MonadLift` represents a way to run operations with one kind of effect in a context that expects another kind.
* Type classes can represent a framework of type-driven code generation, where instances for polymorphic types each contribute some portion of a final program.
The {name}`Repr` class defines a canonical pretty printer for a type, and polymorphic types end up with polymorphic {name}`Repr` instances.
When pretty printing is finally invoked on an expression with a known concrete type, such as {lean}`List (Nat × (String ⊕ Int))`, the resulting pretty printer contains code assembled from the {name}`Repr` instances for {name}`List`, {name}`Prod`, {name}`Nat`, {name}`Sum`, {name}`String`, and {name}`Int`.
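The assembled instance can be observed directly. As a sketch, {keywordOf Lean.Parser.Command.synth}`#synth` displays the composite instance term that synthesis constructs for this type:
```lean
#synth Repr (List (Nat × (String ⊕ Int)))
```
The resulting term combines the {name}`Repr` instances for each type constructor that occurs in the query, nested to match the structure of the type.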
# Class Declarations
%%%
tag := "class"
%%%
Type classes are declared with the {keywordOf Lean.Parser.Command.declaration}`class` keyword.
:::syntax command (title := "Type Class Declarations")
```grammar
$_:declModifiers
class $d:declId $_:bracketedBinder* $[: $_]?
$[extends $[$[$_ : ]?$_],*]?
where
$[$_:declModifiers $_ ::]?
$_
$[deriving $[$x:ident],*]?
```
Declares a new type class.
:::
:::keepEnv
```lean -show
-- Just make sure that the `deriving` clause is legit
class A (n : Nat) where
k : Nat
eq : n = k
deriving DecidableEq
```
:::
The {keywordOf Lean.Parser.Command.declaration}`class` declaration creates a new single-constructor inductive type, just as if the {keywordOf Lean.Parser.Command.declaration}`structure` command had been used instead.
In fact, the results of the {keywordOf Lean.Parser.Command.declaration}`class` and {keywordOf Lean.Parser.Command.declaration}`structure` commands are almost identical, and features such as default values may be used the same way in both.
Please refer to {ref "structures"}[the documentation for structures] for more information about default values, inheritance, and other features of structures.
The differences between structure and class declarations are:
: Methods instead of fields
Instead of creating field projections that take a value of the structure type as an explicit parameter, {tech}[methods] are created. Each method takes the corresponding instance as an instance-implicit parameter.
: Instance-implicit parent classes
The constructor of a class that extends other classes takes its class parents' instances as instance-implicit parameters, rather than explicit parameters.
When instances of this class are defined, instance synthesis is used to find the values of inherited fields.
Parents that are not classes are still explicit parameters to the underlying constructor.
: Parent projections via instance synthesis
Structure field projections make use of {ref "structure-inheritance"}[inheritance information] to project parent structure fields from child structure values.
Classes instead use instance synthesis: given a child class instance, synthesis will construct the parent; thus, methods are not added to child classes in the same way that projections are added to child structures.
: Registered as class
The resulting inductive type is registered as a type class, for which instances may be defined and that may be used as the type of instance-implicit arguments.
: Out and semi-out parameters are considered
The {name}`outParam` and {name}`semiOutParam` {tech}[gadgets] have no meaning in structure definitions, but they are used in class definitions to control instance search.
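The first difference can be seen directly in the generated signatures. In this sketch (the names `S` and `C` are hypothetical), the structure field becomes a projection that takes its subject explicitly, while the class method takes its instance as an instance-implicit parameter:
```lean
structure S where
  val : Nat

class C (α : Type u) where
  val : α → Nat

#check @S.val  -- takes an `S` as an ordinary explicit parameter
#check @C.val  -- takes `[self : C α]` as an instance-implicit parameter
```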
While {keywordOf Lean.Parser.Command.declaration}`deriving` clauses are allowed for class definitions to maintain the parallel between class and structure elaboration, they are not frequently used and should be considered an advanced feature.
:::example "No Instances of Non-Classes"
Lean rejects instance-implicit parameters of types that are not classes:
```lean +error (name := notClass)
def f [n : Nat] : n = n := rfl
```
```leanOutput notClass
invalid binder annotation, type is not a class instance
Nat
Note: Use the command `set_option checkBinderAnnotations false` to disable the check
```
:::
::::example "Class vs Structure Constructors"
A very small algebraic hierarchy can be represented either as structures ({name}`S.Magma`, {name}`S.Semigroup`, and {name}`S.Monoid` below), a mix of structures and classes ({name}`C1.Monoid`), or only using classes ({name}`C2.Magma`, {name}`C2.Semigroup`, and {name}`C2.Monoid`):
```lean
namespace S
structure Magma (α : Type u) where
op : α → α → α
structure Semigroup (α : Type u) extends Magma α where
op_assoc : ∀ x y z, op (op x y) z = op x (op y z)
structure Monoid (α : Type u) extends Semigroup α where
ident : α
ident_left : ∀ x, op ident x = x
ident_right : ∀ x, op x ident = x
end S
namespace C1
class Monoid (α : Type u) extends S.Semigroup α where
ident : α
ident_left : ∀ x, op ident x = x
ident_right : ∀ x, op x ident = x
end C1
namespace C2
class Magma (α : Type u) where
op : α → α → α
class Semigroup (α : Type u) extends Magma α where
op_assoc : ∀ x y z, op (op x y) z = op x (op y z)
class Monoid (α : Type u) extends Semigroup α where
ident : α
ident_left : ∀ x, op ident x = x
ident_right : ∀ x, op x ident = x
end C2
```
{name}`S.Monoid.mk` and {name}`C1.Monoid.mk` have identical signatures, because the parent of the class {name}`C1.Monoid` is not itself a class:
```signature
S.Monoid.mk.{u} {α : Type u}
(toSemigroup : S.Semigroup α)
(ident : α)
(ident_left : ∀ (x : α), toSemigroup.op ident x = x)
(ident_right : ∀ (x : α), toSemigroup.op x ident = x) :
S.Monoid α
```
```signature
C1.Monoid.mk.{u} {α : Type u}
(toSemigroup : S.Semigroup α)
(ident : α)
(ident_left : ∀ (x : α), toSemigroup.op ident x = x)
(ident_right : ∀ (x : α), toSemigroup.op x ident = x) :
C1.Monoid α
```
Similarly, because neither `S.Magma` nor `C2.Magma` inherits from another structure or class, their constructors are identical:
```signature
S.Magma.mk.{u} {α : Type u} (op : α → α → α) : S.Magma α
```
```signature
C2.Magma.mk.{u} {α : Type u} (op : α → α → α) : C2.Magma α
```
{name}`S.Semigroup.mk`, however, takes its parent as an ordinary parameter, while {name}`C2.Semigroup.mk` takes its parent as an instance-implicit parameter:
```signature
S.Semigroup.mk.{u} {α : Type u}
(toMagma : S.Magma α)
(op_assoc : ∀ (x y z : α),
toMagma.op (toMagma.op x y) z = toMagma.op x (toMagma.op y z)) :
S.Semigroup α
```
```signature
C2.Semigroup.mk.{u} {α : Type u} [toMagma : C2.Magma α]
(op_assoc : ∀ (x y z : α),
toMagma.op (toMagma.op x y) z = toMagma.op x (toMagma.op y z)) :
C2.Semigroup α
```
Finally, {name}`C2.Monoid.mk` takes its semigroup parent as an instance-implicit parameter.
The references to `op` become references to the method {name}`C2.Magma.op`, relying on instance synthesis to recover the implementation from the {name}`C2.Semigroup` instance-implicit parameter via its parent projection:
```signature
C2.Monoid.mk.{u} {α : Type u}
[toSemigroup : C2.Semigroup α]
(ident : α)
(ident_left : ∀ (x : α), C2.Magma.op ident x = x)
(ident_right : ∀ (x : α), C2.Magma.op x ident = x) :
C2.Monoid α
```
::::
Parameters to type classes may be marked with {deftech}_gadgets_, which are special versions of the identity function that cause the elaborator to treat a value differently.
Gadgets never change the _meaning_ of a term, but they may cause it to be treated differently in elaboration-time search procedures.
The gadgets {name}`outParam` and {name}`semiOutParam` affect {ref "instance-synth"}[instance synthesis], so they are documented in that section.
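As a brief sketch of their effect, {name}`outParam` marks a class parameter that instance synthesis determines from the remaining parameters rather than requiring it to be known up front (the class `Content` below is hypothetical):
```lean
class Content (coll : Type u) (elem : outParam (Type v)) where
  first? : coll → Option elem

instance : Content (List α) α where
  first? xs := xs.head?

-- `elem` is discovered during synthesis from `coll`, not supplied by the caller:
#eval Content.first? [1, 2, 3] -- some 1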
Whether a type is a class or not has no effect on definitional equality.
Two instances of the same class with the same parameters are not necessarily identical and may in fact be very different.
::::example "Instances are Not Unique"
This implementation of binary heap insertion is buggy:
```lean
structure Heap (α : Type u) where
contents : Array α
deriving Repr
def Heap.bubbleUp [Ord α] (i : Nat) (xs : Heap α) : Heap α :=
if h : i = 0 then xs
else if h : i ≥ xs.contents.size then xs
else
let j := i / 2
if Ord.compare xs.contents[i] xs.contents[j] == .lt then
Heap.bubbleUp j { xs with contents := xs.contents.swap i j }
else xs
def Heap.insert [Ord α] (x : α) (xs : Heap α) : Heap α :=
let i := xs.contents.size
{xs with contents := xs.contents.push x}.bubbleUp i
```
The problem is that a heap constructed with one {name}`Ord` instance may later be used with another, violating the heap invariant.
One way to correct this is to make the heap type depend on the selected `Ord` instance:
```lean
structure Heap' (α : Type u) [Ord α] where
contents : Array α
def Heap'.bubbleUp [inst : Ord α]
(i : Nat) (xs : @Heap' α inst) :
@Heap' α inst :=
if h : i = 0 then xs
else if h : i ≥ xs.contents.size then xs
else
let j := i / 2
if inst.compare xs.contents[i] xs.contents[j] == .lt then
Heap'.bubbleUp j {xs with contents := xs.contents.swap i j}
else xs
def Heap'.insert [Ord α] (x : α) (xs : Heap' α) : Heap' α :=
let i := xs.contents.size
{xs with contents := xs.contents.push x}.bubbleUp i
```
In the improved definitions, {name}`Heap'.bubbleUp` names the instance more explicitly than strictly necessary; Lean would select the indicated instances even without the annotations, but naming them brings the correctness invariant front and center for readers.
::::
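A minimal sketch of this non-uniqueness: nothing prevents a program from constructing a second {lean}`Ord Nat` value that disagrees with the default instance, which is exactly the loophole that `Heap'` closes (the name `flippedOrd` is hypothetical):
```lean
-- A valid `Ord Nat` value that reverses the default comparison order
def flippedOrd : Ord Nat := ⟨fun x y => Ord.compare y x⟩

#eval Ord.compare 1 2        -- `Ordering.lt` under the default instance
#eval flippedOrd.compare 1 2 -- `Ordering.gt` under the flipped order
```
Both values have the same type and the same parameters, yet they are observably different.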
## Sum Types as Classes
%%%
tag := "class inductive"
%%%
Most type classes follow the paradigm of a set of overloaded methods from which clients may choose freely.
This is naturally modeled by a product type, from which the overloaded methods are projections.
Some classes, however, are sum types: they require that the recipient of the synthesized instance first check _which_ of the available instance constructors was provided.
To account for these classes, a class declaration may consist of an arbitrary {tech}[inductive type], not just an extended form of structure declaration.
:::syntax Lean.Parser.Command.declaration (title := "Class Inductive Type Declarations")
```grammar
$_:declModifiers
class inductive $d:declId $_:optDeclSig where
$[| $_ $c:ident $_]*
$[deriving $[$x:ident],*]?
```
:::
Class inductive types are just like other inductive types, except they may participate in instance synthesis.
The paradigmatic example of a class inductive is {name}`Decidable`: synthesizing an instance in a context with free variables amounts to synthesizing the decision procedure, but if there are no free variables, then the truth of the proposition can be established by instance synthesis alone (as is done by the {tactic (show:="decide")}`Lean.Parser.Tactic.decide` tactic).
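As a sketch, a consumer of a {name}`Decidable` instance can pattern-match to discover which constructor synthesis produced (the function `describe` is hypothetical):
```lean
def describe (p : Prop) [inst : Decidable p] : String :=
  match inst with
  | .isTrue _ => "provably true"
  | .isFalse _ => "provably false"

#eval describe (2 + 2 = 4) -- "provably true"
```
Because {name}`Decidable` is a sum type, the caller must branch on the constructor before it can make use of the embedded proof.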
## Class Abbreviations
%%%
tag := "class-abbrev"
%%%
In some cases, many related type classes may co-occur throughout a codebase.
Rather than writing all the names repeatedly, it would be possible to define a class that extends all the classes in question, contributing no new methods itself.
However, this new class has a disadvantage: its instances must be declared explicitly.
The {keywordOf Lean.Parser.Command.classAbbrev}`class abbrev` command allows the creation of {deftech}_class abbreviations_ in which one name is short for a number of other class parameters.
Behind the scenes, a class abbreviation is represented by a class that extends all the others.
Its constructor is additionally declared to be an instance so the new class can be constructed by instance synthesis alone.
::::keepEnv
:::example "Class Abbreviations"
Both {name}`plusTimes1` and {name}`plusTimes2` require that their parameters' type have {name}`Add` and {name}`Mul` instances:
```lean
class abbrev AddMul (α : Type u) := Add α, Mul α
def plusTimes1 [AddMul α] (x y z : α) := x + y * z
class AddMul' (α : Type u) extends Add α, Mul α
def plusTimes2 [AddMul' α] (x y z : α) := x + y * z
```
Because {name}`AddMul` is a {keywordOf Lean.Parser.Command.classAbbrev}`class abbrev`, no additional declarations are necessary to use {name}`plusTimes1` with {lean}`Nat`:
```lean (name := plusTimes1)
#eval plusTimes1 2 5 7
```
```leanOutput plusTimes1
37
```
However, {name}`plusTimes2` fails, because there is no {lean}`AddMul' Nat` instance—no instances whatsoever have yet been declared:
```lean (name := plusTimes2a) +error
#eval plusTimes2 2 5 7
```
```leanOutput plusTimes2a
failed to synthesize instance of type class
AddMul' ?m.8
Hint: Type class instance resolution failures can be inspected with the `set_option trace.Meta.synthInstance true` command.
```
Declaring a very general instance takes care of the problem for {lean}`Nat` and every other type:
```lean (name := plusTimes2b)
instance [Add α] [Mul α] : AddMul' α where
#eval plusTimes2 2 5 7
```
```leanOutput plusTimes2b
37
```
:::
::::
{include 0 Manual.Classes.InstanceDecls}
{include 0 Manual.Classes.InstanceSynth}
# Deriving Instances
%%%
tag := "deriving-instances"
%%%
Lean can automatically generate instances for many classes, a process known as {deftech}_deriving_ instances.
Instance deriving can be invoked either when defining a type or as a stand-alone command.
:::syntax Lean.Parser.Command.optDeriving -open (title := "Instance Deriving (Optional)")
As part of a command that creates a new inductive type, a {keywordOf Lean.Parser.Command.declaration}`deriving` clause specifies a comma-separated list of class names for which instances should be generated:
```grammar
$[deriving $[$_],*]?
```
:::
:::syntax Lean.Parser.Command.deriving (title := "Stand-Alone Deriving of Instances")
The stand-alone {keywordOf Lean.Parser.Command.deriving}`deriving` command specifies a number of class names and subject names.
Each of the specified classes is derived for each of the specified subjects.
```grammar
deriving instance $[$_],* for $_,*
```
:::
::::keepEnv
:::example "Deriving Multiple Classes"
After specifying multiple classes to derive for multiple types, as in this code:
```lean
structure A where
structure B where
deriving instance BEq, Repr for A, B
```
all the instances exist for all the types, so all four {keywordOf Lean.Parser.Command.synth}`#synth` commands succeed:
```lean
#synth BEq A
#synth BEq B
#synth Repr A
#synth Repr B
```
:::
::::
{include 2 Manual.Classes.DerivingHandlers}
{include 0 Manual.Classes.BasicClasses} |
reference-manual/Manual/Papers.lean | import VersoManual.Bibliography
open Verso.Genre.Manual
def beyondNotations : InProceedings where
title := inlines!"Beyond notations: Hygienic macro expansion for theorem proving languages"
authors := #[inlines!"Sebastian Ullrich", inlines!"Leonardo de Moura"]
year := 2020
booktitle := inlines!"Proceedings of the International Joint Conference on Automated Reasoning"
def carneiro19 : Thesis where
title := inlines!"The Type Theory of Lean"
author := inlines!"Mario Carneiro"
year := 2019
university := inlines!"Carnegie Mellon University"
url := some "https://github.com/digama0/lean-type-theory/releases/download/v1.0/main.pdf"
degree := inlines!"Masters thesis"
def castPaper : ArXiv where
title := inlines!"Simplifying Casts and Coercions"
authors := #[inlines!"Robert Y. Lewis", inlines!"Paul-Nicolas Madelaine"]
year := 2020
id := "2001.10594"
def constructionsOnConstructors : InProceedings where
title := inlines!"A Few Constructions on Constructors"
authors := #[inlines!"Conor McBride", inlines!"Healfdene Goguen", inlines!"James McKinna"]
year := 2004
booktitle := inlines!"Types for Proofs and Programs, International Workshop, TYPES 2004"
series := some (inlines!"LNCS 3839")
url := "https://doi.org/10.1007/11617990_12"
def cuttingToTheChase : InProceedings where
title := inlines!"Cutting to the Chase: Solving Linear Integer Arithmetic"
authors := #[inlines!"Dejan Jovanović", inlines!"Leonardo de Moura"]
year := 2011
booktitle := inlines!"Automated Deduction: CADE-23"
series := some (inlines!"LNCS 6803")
url := "https://link.springer.com/chapter/10.1007/978-3-642-22438-6_26"
def doUnchained : InProceedings where
title := inlines!"`do` Unchained: Embracing Local Imperativity in a Purely Functional Language"
authors := #[inlines!"Sebastian Ullrich", inlines!"Leonardo de Moura"]
url := some "https://dl.acm.org/doi/10.1145/3547640"
year := 2022
booktitle := inlines!"Proceedings of the ACM on Programming Languages: ICFP 2022"
def countingBeans : InProceedings where
title := inlines!"Counting Immutable Beans: Reference Counting Optimized for Purely Functional Programming"
authors := #[inlines!"Sebastian Ullrich", inlines!"Leonardo de Moura"]
url := some "https://arxiv.org/abs/1908.05647"
year := 2019
booktitle := inlines!"Proceedings of the 31st Symposium on Implementation and Application of Functional Languages (IFL 2019)"
def hoare69 : Article where
title := inlines!"An Axiomatic Basis for Computer Programming"
authors := #[inlines!"C. A. R. Hoare"]
journal := inlines!"Communications of the ACM"
year := 1969
volume := inlines!"12"
number := inlines!"10"
month := none
pages := some (576, 583)
def pratt73 : InProceedings where
title := inlines!"Top down operator precedence"
authors := #[inlines!"Vaughan Pratt"]
year := 1973
booktitle := inlines!"Proceedings of the 1st Annual ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages"
def tabledRes : ArXiv where
title := inlines!"Tabled typeclass resolution"
authors := #[inlines!"Daniel Selsam", inlines!"Sebastian Ullrich", inlines!"Leonardo de Moura"]
year := 2020
id := "2001.04301"
def ullrich23 : Thesis where
title := inlines!"An Extensible Theorem Proving Frontend"
author := inlines!"Sebastian Ullrich"
year := 2023
university := inlines!"Karlsruhe Institute of Technology"
url := some "https://www.lean-lang.org/papers/thesis-sebastian.pdf"
degree := inlines!"Dr. Ing. dissertation"
def launchbury94 : InProceedings where
title := inlines!"Lazy functional state threads"
authors := #[inlines!"John Launchbury", inlines!"Simon L Peyton Jones"]
year := 1994
booktitle := inlines!"Proceedings of the ACM SIGPLAN 1994 Conference on Programming Language Design and Implementation"
def manolios2006 : InProceedings where
title := inlines!"Termination Analysis with Calling Context Graphs"
authors := #[inlines!"Panagiotis Manolios", inlines!"Daron Vroon"]
year := 2006
booktitle := inlines!"Proceedings of the International Conference on Computer Aided Verification (CAV 2006)"
series := some <| inlines!"LNCS 4144"
url := "https://doi.org/10.1007/11817963_36"
def bulwahn2007 : InProceedings where
title := inlines!"Finding Lexicographic Orders for Termination Proofs in Isabelle/HOL"
authors := #[inlines!"Lukas Bulwahn", inlines!"Alexander Krauss", inlines!"Tobias Nipkow"]
year := 2007
booktitle := inlines!"Proceedings of the International Conference on Theorem Proving in Higher Order Logics (TPHOLS 2007)"
series := some <| inlines!"LNTCS 4732"
url := "https://doi.org/10.1007/978-3-540-74591-4_5"
def streicher1993 : Thesis where
title := inlines!"Investigations into Intensional Type Theory"
author := inlines!"Thomas Streicher"
year := 1993
university := inlines!"Ludwig-Maximilians-Universität München"
url := "https://www2.mathematik.tu-darmstadt.de/~streicher/HabilStreicher.pdf"
degree := inlines!"Habilitation"
def wadler1989 : InProceedings where
title := inlines!"Theorems for free!"
authors := #[inlines!"Philip Wadler"]
year := 1989
booktitle := inlines!"Proceedings of the Fourth International Conference on Functional Programming Languages and Computer Architecture"
url := "https://dl.acm.org/doi/pdf/10.1145/99370.99404"
def wadlerBlott89 : InProceedings where
title := inlines!"How to make ad-hoc polymorphism less ad hoc"
authors := #[inlines!"Philip Wadler", inlines!"Stephen Blott"]
year := 1989
booktitle := inlines!"Proceedings of the 16th Symposium on Principles of Programming Languages"
def wadler2003 : InProceedings where
title := inlines!"A Prettier Printer"
authors := #[inlines!"Philip Wadler"]
year := 2003
booktitle := inlines!"The Fun of Programming, A symposium in honour of Professor Richard Bird's 60th birthday"
url := "https://homepages.inf.ed.ac.uk/wadler/papers/prettier/prettier.pdf" |
reference-manual/Manual/Types.lean | import VersoManual
import Manual.Meta
import Manual.Language.Functions
import Manual.Language.InductiveTypes
import Manual.Quotients
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
set_option maxRecDepth 800
#doc (Manual) "The Type System" =>
%%%
tag := "type-system"
shortContextTitle := "Type System"
%%%
{deftech}_Terms_, also known as {deftech}_expressions_, are the fundamental units of meaning in Lean's core language.
They are produced from user-written syntax by the {tech (key := "Lean elaborator")}[elaborator].
Lean's type system relates terms to their _types_, which are also themselves terms.
Types can be thought of as denoting sets, while terms denote individual elements of these sets.
A term is {deftech}_well-typed_ if it has a type under the rules of Lean's type theory.
Only well-typed terms have a meaning.
The language of terms is a dependently typed λ-calculus: it includes function abstraction, application, variables, and `let`-bindings.
In addition to bound variables, variables in the term language may refer to {tech}[constructors], {tech}[type constructors], {tech}[recursors], {deftech}[defined constants], or opaque constants.
Constructors, type constructors, recursors, and opaque constants are not subject to substitution, while defined constants may be replaced with their definitions.
A {deftech}_derivation_ demonstrates the well-typedness of a term by explicitly indicating the precise inference rules that are used.
Implicitly, well-typed terms can stand in for the derivations that demonstrate their well-typedness.
Lean's type theory is explicit enough that derivations can be reconstructed from well-typed terms, which greatly reduces the overhead that would be incurred from storing a complete derivation, while still being expressive enough to represent modern research mathematics.
This means that proof terms are sufficient evidence of the truth of a theorem and are amenable to independent verification.
In addition to having types, terms are also related by {deftech}_definitional equality_.
This is the mechanically-checkable relation that syntactically equates terms modulo their computational behavior.
Definitional equality includes the following forms of {deftech}[reduction]:
: {deftech}[β] (beta)
Applying a function abstraction to an argument by substitution for the bound variable
: {deftech}[δ] (delta)
Replacing occurrences of {tech}[defined constants] by the definition's value
: {deftech}[ι] (iota)
Reduction of recursors whose targets are constructors (primitive recursion)
: {deftech}[ζ] (zeta)
Replacement of let-bound variables by their defined values
: Quotient reduction
{ref "quotient-model"}[Reduction of the quotient type's function lifting operator] when applied to an element of a quotient
Terms in which all possible reductions have been carried out are in {deftech}_normal form_.
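As a small sketch, several of these reductions cooperate when checking a proof by `rfl`: δ-reduction unfolds the defined constant `twice`, β-reduction applies the function arguments, and ι-reduction computes the additions on `Nat`:
```lean
def twice (f : Nat → Nat) (x : Nat) : Nat := f (f x)

-- Accepted because both sides reduce to the same normal form, `7`
example : twice (· + 2) 3 = 7 := rfl
```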
::::keepEnv
```lean -show
axiom α : Type
axiom β : Type
axiom f : α → β
structure S where
f1 : α
f2 : β
axiom x : S
-- test claims in next para
example : (fun x => f x) = f := by rfl
example : S.mk x.f1 x.f2 = x := by rfl
export S (f1 f2)
```
Definitional equality includes {deftech}[η-equivalence] of functions and single-constructor inductive types.
That is, {lean}`fun x => f x` is definitionally equal to {lean}`f`, and {lean}`S.mk x.f1 x.f2` is definitionally equal to {lean}`x`, if {lean}`S` is a structure with fields {lean}`f1` and {lean}`f2`.
It also features {deftech}_proof irrelevance_: any two proofs of the same proposition are definitionally equal.
It is reflexive and symmetric, but not transitive.
::::
Definitional equality is used by conversion: if two terms are definitionally equal, and a given term has one of them as its type, then it also has the other as its type.
Because definitional equality includes reduction, types can result from computations over data.
::::keepEnv
:::Manual.example "Computing types"
When passed a natural number, the function {lean}`LengthList` computes a type that corresponds to a list with precisely that many entries in it:
```lean
def LengthList (α : Type u) : Nat → Type u
| 0 => PUnit
| n + 1 => α × LengthList α n
```
Because Lean's tuples nest to the right, multiple nested parentheses are not needed:
```lean
example : LengthList Int 0 := ()
example : LengthList String 2 :=
("Hello", "there", ())
```
If the length does not match the number of entries, then the computed type will not match the term:
```lean +error (name := wrongNum)
example : LengthList String 5 :=
("Wrong", "number", ())
```
```leanOutput wrongNum
Application type mismatch: The argument
()
has type
Unit
but is expected to have type
LengthList String 3
in the application
("number", ())
```
:::
::::
The basic types in Lean are {tech}[universes], {tech}[function] types, the quotient former {name}`Quot`, and {tech}[type constructors] of {tech}[inductive types].
{tech}[Defined constants], applications of {tech}[recursors], function applications, {tech}[axioms], and {tech}[opaque constants] may additionally give rise to types, just as they can give rise to terms of any other type.
{include Manual.Language.Functions}
# Propositions
%%%
tag := "propositions"
%%%
{deftech}[Propositions] are meaningful statements that admit proof. {index}[proposition]
Nonsensical statements are not propositions, but false statements are.
All propositions are classified by {lean}`Prop`.
Propositions have the following properties:
: Definitional proof irrelevance
Any two proofs of the same proposition are completely interchangeable.
: Run-time irrelevance
Propositions are erased from compiled code.
: Impredicativity
Propositions may quantify over types from any universe whatsoever.
: Restricted Elimination
With the exception of {tech}[subsingletons], propositions cannot be eliminated into non-proposition types.
: {deftech (key := "propositional extensionality")}[Extensionality] {index (subterm := "of propositions")}[extensionality]
Any two logically equivalent propositions can be proven to be equal with the {lean}`propext` axiom.
{docstring propext}
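As a brief sketch, {lean}`propext` turns a logical equivalence into an equality of propositions, and definitional proof irrelevance makes any two proofs of the same proposition interchangeable:
```lean
-- Logically equivalent propositions are provably equal
example (P Q : Prop) (h : P ↔ Q) : P = Q := propext h

-- Any two proofs of the same proposition are definitionally equal
example (p q : 2 + 2 = 4) : p = q := rfl
```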
# Universes
Types are classified by {deftech}_universes_. {index}[universe]{margin}[Universes are also referred to as {deftech}_sorts_.]
Each universe has a {deftech (key:="universe level")}_level_, {index (subterm := "of universe")}[level] which is a natural number.
The {lean}`Sort` operator constructs a universe from a given level. {index}[`Sort`]
If the level of a universe is smaller than that of another, the universe itself is said to be smaller.
With the exception of propositions (described later in this chapter), types in a given universe may only quantify over types in smaller universes.
{lean}`Sort 0` is the type of propositions, while each `Sort (u + 1)` is a type that describes data.
Every universe is an element of the next larger universe, so {lean}`Sort 5` includes {lean}`Sort 4`.
This means that the following examples are accepted:
```lean
example : Sort 5 := Sort 4
example : Sort 2 := Sort 1
```
On the other hand, {lean}`Sort 3` is not an element of {lean}`Sort 5`:
```lean +error (name := sort3)
example : Sort 5 := Sort 3
```
```leanOutput sort3
Type mismatch
Type 2
has type
Type 3
of sort `Type 4` but is expected to have type
Type 4
of sort `Type 5`
```
Similarly, because {lean}`Unit` is in {lean}`Sort 1`, it is not in {lean}`Sort 2`:
```lean
example : Sort 1 := Unit
```
```lean +error (name := unit1)
example : Sort 2 := Unit
```
```leanOutput unit1
Type mismatch
Unit
has type
Type
of sort `Type 1` but is expected to have type
Type 1
of sort `Type 2`
```
Because propositions and data are used differently and are governed by different rules, the abbreviations {lean}`Type` and {lean}`Prop` are provided to make the distinction more convenient. {index}[`Type`] {index}[`Prop`]
`Type u` is an abbreviation for `Sort (u + 1)`, so {lean}`Type 0` is {lean}`Sort 1` and {lean}`Type 3` is {lean}`Sort 4`.
{lean}`Type 0` can also be abbreviated {lean}`Type`, so `Unit : Type` and `Type : Type 1`.
{lean}`Prop` is an abbreviation for {lean}`Sort 0`.
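These abbreviations can be checked directly; each of the following examples is accepted because the two sides denote the same universe:
```lean
example : Type 1 := Type       -- `Type` abbreviates `Type 0`
example : Prop = Sort 0 := rfl
example : Type = Sort 1 := rfl
example : Type 3 = Sort 4 := rfl
```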
## Predicativity
Each universe contains dependent function types, which additionally represent universal quantification and implication.
A function type's universe is determined by the universes of its argument and return types.
The specific rules depend on whether the return type of the function is a proposition.
Predicates, which are functions that return propositions (that is, where the result of the function is some type in `Prop`), may have argument types in any universe whatsoever, but the function type itself remains in `Prop`.
In other words, propositions feature {deftech}[_impredicative_] {index}[impredicative]{index (subterm := "impredicative")}[quantification] quantification, because propositions can themselves be statements about all propositions (and all other types).
:::Manual.example "Impredicativity"
Proof irrelevance can be written as a proposition that quantifies over all propositions:
```lean
example : Prop := ∀ (P : Prop) (p1 p2 : P), p1 = p2
```
A proposition may also quantify over all types, at any given level:
```lean
example : Prop := ∀ (α : Type), ∀ (x : α), x = x
example : Prop := ∀ (α : Type 5), ∀ (x : α), x = x
```
:::
For universes at {tech (key := "universe level")}[level] `1` and higher (that is, the `Type u` hierarchy), quantification is {deftech}[_predicative_]. {index}[predicative]{index (subterm := "predicative")}[quantification]
For these universes, the universe of a function type is the least upper bound of the argument and return types' universes.
:::Manual.example "Universe levels of function types"
Both of these types are in {lean}`Type 2`:
```lean
example (α : Type 1) (β : Type 2) : Type 2 := α → β
example (α : Type 2) (β : Type 1) : Type 2 := α → β
```
:::
:::Manual.example "Predicativity of {lean}`Type`"
This example is not accepted, because `α`'s level is greater than `1`. In other words, the annotated universe is smaller than the function type's universe:
```lean +error (name := toosmall)
example (α : Type 2) (β : Type 1) : Type 1 := α → β
```
```leanOutput toosmall
Type mismatch
α → β
has type
Type 2
of sort `Type 3` but is expected to have type
Type 1
of sort `Type 2`
```
:::
Lean's universes are not {deftech}[cumulative];{index}[cumulativity] a type in `Type u` is not automatically also in `Type (u + 1)`.
Each type inhabits precisely one universe.
:::Manual.example "No cumulativity"
This example is not accepted because the annotated universe is larger than the function type's universe:
```lean +error (name := toobig)
example (α : Type 2) (β : Type 1) : Type 3 := α → β
```
```leanOutput toobig
Type mismatch
α → β
has type
Type 2
of sort `Type 3` but is expected to have type
Type 3
of sort `Type 4`
```
:::
## Polymorphism
Lean supports {deftech}_universe polymorphism_, {index (subterm := "universe")}[polymorphism] {index}[universe polymorphism] which means that constants defined in the Lean environment can take {deftech}[universe parameters].
These parameters can then be instantiated with universe levels when the constant is used.
Universe parameters are written in curly braces following a dot after a constant name.
:::Manual.example "Universe-polymorphic identity function"
When fully explicit, the identity function takes a universe parameter `u`. Its signature is:
```signature
id.{u} {α : Sort u} (x : α) : α
```
:::
Universe variables may additionally occur in {ref "level-expressions"}[universe level expressions], which provide specific universe levels in definitions.
When the polymorphic definition is instantiated with concrete levels, these universe level expressions are also evaluated to yield concrete levels.
::::keepEnv
:::Manual.example "Universe level expressions"
In this example, {lean}`Codec` is in a universe that is one greater than the universe of the type it contains:
```lean
structure Codec.{u} : Type (u + 1) where
type : Type u
encode : Array UInt32 → type → Array UInt32
decode : Array UInt32 → Nat → Option (type × Nat)
```
Lean automatically infers most level parameters.
In the following example, it is not necessary to annotate the type as {lean}`Codec.{0}`, because {lean}`Char`'s type is {lean}`Type 0`, so `u` must be `0`:
```lean
def Codec.char : Codec where
type := Char
encode buf ch := buf.push ch.val
decode buf i := do
let v ← buf[i]?
if h : v.isValidChar then
let ch : Char := ⟨v, h⟩
return (ch, i + 1)
else
failure
```
:::
::::
Universe-polymorphic definitions in fact create a _schematic definition_ that can be instantiated at a variety of levels, and different instantiations of universes create incompatible values.
::::keepEnv
:::Manual.example "Universe polymorphism and definitional equality"
This can be seen in the following example, in which {lean}`T` is a gratuitously-universe-polymorphic function that always returns {lean}`true`.
Because it is marked {keywordOf Lean.Parser.Command.declaration}`opaque`, Lean can't check equality by unfolding the definitions.
Both instantiations of {lean}`T` have the same parameters and the same type, but their differing universe instantiations make them incompatible.
```lean +error (name := uniIncomp)
opaque T.{u} (_ : Nat) : Bool :=
(fun (α : Sort u) => true) PUnit.{u}
set_option pp.universes true
def test.{u, v} : T.{u} 0 = T.{v} 0 := rfl
```
```leanOutput uniIncomp
Type mismatch
rfl.{?u.46}
has type
Eq.{?u.46} ?m.48 ?m.48
but is expected to have type
Eq.{1} (T.{u} 0) (T.{v} 0)
```
:::
::::
Auto-bound implicit arguments are as universe-polymorphic as possible.
Defining the identity function as follows:
```lean
def id' (x : α) := x
```
results in the signature:
```signature
id'.{u} {α : Sort u} (x : α) : α
```
:::Manual.example "Universe monomorphism in auto-bound implicit parameters"
On the other hand, because {name}`Nat` is in universe {lean}`Type 0`, this function automatically ends up with a concrete universe level for `α`, because `m` is applied to both {name}`Nat` and `α`, so both must have the same type and thus be in the same universe:
```lean
partial def count [Monad m] (p : α → Bool) (act : m α) : m Nat := do
if p (← act) then
return 1 + (← count p act)
else
return 0
```
```lean -show -keep
/-- info: Nat : Type -/
#check_msgs in
#check Nat
/--
info: count.{u_1} {m : Type → Type u_1} {α : Type} [Monad m] (p : α → Bool) (act : m α) : m Nat
-/
#check_msgs in
#check count
```
:::
### Level Expressions
%%%
tag := "level-expressions"
%%%
Levels that occur in a definition are not restricted to just variables and addition of constants.
More complex relationships between universes can be defined using level expressions.
```
Level ::= 0 | 1 | 2 | ... -- Concrete levels
| u, v -- Variables
| Level + n -- Addition of constants
| max Level Level -- Least upper bound
| imax Level Level -- Impredicative LUB
```
Given an assignment of level variables to concrete numbers, evaluating these expressions follows the usual rules of arithmetic.
The `imax` operation is defined as follows:
$$`\mathtt{imax}\ u\ v = \begin{cases}0 & \mathrm{when\ }v = 0\\\mathtt{max}\ u\ v&\mathrm{otherwise}\end{cases}`
`imax` is used to implement {tech}[impredicative] quantification for {lean}`Prop`.
In particular, if `A : Sort u` and `B : Sort v`, then `(x : A) → B : Sort (imax u v)`.
If `B : Prop`, then the function type is itself a {lean}`Prop`; otherwise, the function type's level is the maximum of `u` and `v`.
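As a sketch of these rules (using only core Lean), the following examples illustrate `imax` at work: a function type whose codomain is a proposition is itself a proposition, no matter how large the domain's universe is, while a non-propositional codomain yields the ordinary maximum:
```lean
-- The codomain is a `Prop`, so the function type is a `Prop`,
-- even though the domain is in `Type 5` (imax 6 0 = 0):
example (α : Type 5) : Prop := (x : α) → x = x

-- With a non-propositional codomain, the maximum applies instead
-- (imax 6 1 = 6, i.e. `Type 5`):
example (α : Type 5) : Type 5 := α → Nat
```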
### Universe Variable Bindings
Universe-polymorphic definitions bind universe variables.
These bindings may be either explicit or implicit.
Explicit universe variable binding and instantiation occurs as a suffix to the definition's name.
Universe parameters are defined or provided by suffixing the name of a constant with a period (`.`) followed by a comma-separated sequence of universe variables between curly braces.
::::keepEnv
:::Manual.example "Universe-polymorphic `map`"
The following declaration of {lean}`map` declares two universe parameters (`u` and `v`) and instantiates the polymorphic {name}`List` with each in turn:
```lean
def map.{u, v} {α : Type u} {β : Type v}
(f : α → β) :
List.{u} α → List.{v} β
| [] => []
| x :: xs => f x :: map f xs
```
:::
::::
Just as Lean automatically instantiates implicit parameters, it also automatically instantiates universe parameters.
When {ref "automatic-implicit-parameters"}[automatic implicit parameter insertion] is enabled (i.e. the {option}`autoImplicit` option is set to {lean}`true`, which is the default), it is not necessary to explicitly bind universe variables; they are inserted automatically.
When it is set to {lean}`false`, then they must be added explicitly or declared using the `universe` command. {TODO}[xref]
:::Manual.example "Automatic Implicit Parameters and Universe Polymorphism"
When `autoImplicit` is {lean}`true` (which is the default setting), this definition is accepted even though it does not bind its universe parameters:
```lean -keep
set_option autoImplicit true
def map {α : Type u} {β : Type v} (f : α → β) : List α → List β
| [] => []
| x :: xs => f x :: map f xs
```
When `autoImplicit` is {lean}`false`, the definition is rejected because `u` and `v` are not in scope:
```lean +error (name := uv)
set_option autoImplicit false
def map {α : Type u} {β : Type v} (f : α → β) : List α → List β
| [] => []
| x :: xs => f x :: map f xs
```
```leanOutput uv
unknown universe level `u`
```
```leanOutput uv
unknown universe level `v`
```
:::
In addition to using `autoImplicit`, specific identifiers can be declared as universe variables in the current {tech}[section scope] using the `universe` command.
:::syntax Lean.Parser.Command.universe (title := "Universe Parameter Declarations")
```grammar
universe $x:ident $xs:ident*
```
Declares one or more universe variables for the extent of the current scope.
Just as the `variable` command causes a particular identifier to be treated as a parameter with a particular type, the `universe` command causes the subsequent identifiers to be implicitly quantified as universe parameters in declarations that mention them, even if the option `autoImplicit` is {lean}`false`.
:::
:::Manual.example "The `universe` command when `autoImplicit` is `false`"
```lean -keep
set_option autoImplicit false
universe u
def id₃ (α : Type u) (a : α) := a
```
:::
Because the automatic implicit parameter feature only inserts parameters that are used in the declaration's {tech}[header], universe variables that occur only on the right-hand side of a definition are not inserted as arguments, even when `autoImplicit` is `true`, unless they have been declared with `universe`.
:::Manual.example "Automatic universe parameters and the `universe` command"
This definition with an explicit universe parameter is accepted:
```lean -keep
def L.{u} := List (Type u)
```
Even with automatic implicit parameters, this definition is rejected, because `u` is not mentioned in the header, which precedes the `:=`:
```lean +error (name := unknownUni) -keep
set_option autoImplicit true
def L := List (Type u)
```
```leanOutput unknownUni
unknown universe level `u`
```
With a universe declaration, `u` is accepted as a parameter even on the right-hand side:
```lean -keep
universe u
def L := List (Type u)
```
The resulting definition of `L` is universe-polymorphic, with `u` inserted as a universe parameter.
Declarations in the scope of a `universe` command are not made polymorphic if the universe variables do not occur in them or in other automatically-inserted arguments.
```lean
universe u
def L := List (Type 0)
#check L
```
:::
### Universe Unification
%%%
draft := true
%%%
:::planned 99
* Rules for unification, properties of algorithm
* Lack of injectivity
* Universe inference for unannotated inductive types
:::
### Universe Lifting
When a type's universe is smaller than the one expected in some context, {deftech}_universe lifting_ operators can bridge the gap.
These are wrappers around terms of a given type that are in larger universes than the wrapped type.
There are two lifting operators:
* {name}`PLift` can lift any type, including {tech}[propositions], by one level. It can be used to include proofs in data structures such as lists.
* {name}`ULift` can lift any non-proposition type by any number of levels.
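As a sketch (using only the core definitions of {name}`PLift` and {name}`ULift`), the lifted values are constructed with the anonymous constructor and unwrapped with the `down` field:
```lean
-- `PLift` lifts a proposition into `Type`, so a proof can be
-- stored in ordinary data such as a list:
example : List (PLift (2 + 2 = 4)) := [⟨rfl⟩]

-- `ULift` lifts a type by an arbitrary number of levels:
example : Type 3 := ULift.{3} Nat
example (n : ULift.{3} Nat) : Nat := n.down
```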
{docstring PLift}
{docstring ULift}
{include 0 Language.InductiveTypes}
{include 0 Manual.Quotients}
import VersoManual
import Manual.Meta
import Manual.RecursiveDefs.Structural
import Manual.RecursiveDefs.WF
import Manual.RecursiveDefs.PartialFixpoint
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
open Lean.Elab.Tactic.GuardMsgs.WhitespaceMode
#doc (Manual) "Recursive Definitions" =>
%%%
tag := "recursive-definitions"
%%%
Allowing arbitrary recursive function definitions would make Lean's logic inconsistent.
General recursion makes it possible to write circular proofs: “{tech}[proposition] $`P` is true because proposition $`P` is true”.
Outside of proofs, an infinite loop could be assigned the type {name}`Empty`, which can be used with {keywordOf Lean.Parser.Term.nomatch}`nomatch` or {name Empty.rec}`Empty.rec` to prove any theorem.
Banning recursive function definitions outright would render Lean far less useful: {tech}[inductive types] are key to defining both predicates and data, and they have a recursive structure.
Furthermore, most useful recursive functions do not threaten soundness, and infinite loops usually indicate mistakes in definitions rather than intentional behavior.
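As a sketch, the kind of circular definition that must be ruled out fits in two lines; since Lean rejects it, it is shown here only in comments:
```lean
-- def loop : Empty := loop       -- rejected: the recursion never terminates
-- example : False := loop.elim   -- if accepted, this would prove `False`
```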
Instead of banning recursive functions, Lean requires that each recursive function is defined safely.
While elaborating recursive definitions, the Lean elaborator also produces a justification that the function being defined is safe.{margin}[The section on {ref "elaboration-results"}[the elaborator's output] in the overview of elaboration contextualizes the elaboration of recursive definitions in the overall context of the elaborator.]
There are five main kinds of recursive functions that can be defined:
: Structurally recursive functions
Structurally recursive functions take an argument such that the function makes recursive calls only on strict sub-components of said argument.{margin}[Strictly speaking, arguments whose types are {tech}[indexed families] are grouped together with their indices, with the whole collection considered as a unit.]
The elaborator translates the recursion into uses of the argument's {tech}[recursor].
Because every type-correct use of a recursor is guaranteed to avoid infinite regress, this translation is evidence that the function terminates.
Applications of functions defined via recursors are definitionally equal to the result of the recursion, and are typically relatively efficient inside the kernel.
: Recursion over well-founded relations
Many functions are also difficult to convert to structural recursion; for instance, a function may terminate because the difference between an array index and the size of the array decreases as the index increases, but {name}`Nat.rec` isn't applicable because the index that increases is the function's argument.
Here, there is a {tech}[measure] of termination that decreases at each recursive call, but the measure is not itself an argument to the function.
In these cases, {tech}[well-founded recursion] can be used to define the function.
Well-founded recursion is a technique for systematically transforming recursive functions with a decreasing measure into recursive functions over proofs that every sequence of reductions to the measure eventually terminates at a minimum.
Applications of functions defined via well-founded recursion are not necessarily definitionally equal to their return values, but this equality can be proved as a proposition.
Even when definitional equalities exist, these functions are frequently slow to compute with because they require reducing proof terms that are often very large.
: Recursive functions as partial fixpoints
The definition of a function can be understood as an equation that specifies its behavior.
In certain cases, the existence of a function that satisfies this specification can be proven even when the recursive function does not necessarily terminate for all inputs.
The resulting functions, which arise as fixed points of their defining equations, are called {tech}_partial fixpoints_.
In particular, any function whose return type is in certain monads (e.g. {name}`Option`) can be defined using this strategy.
Lean generates additional partial correctness theorems for these monadic functions.
As with well-founded recursion, applications of functions defined as partial fixpoints are not definitionally equal to their return values, but Lean generates theorems that propositionally equate the function to its unfolding and to the reduction behavior specified in its definition.
: Partial functions with nonempty codomains
For many applications, it's not important to reason about the implementation of certain functions.
A recursive function might be used only as part of the implementation of proof automation steps, or it might be an ordinary program that will never be formally proved correct.
In these cases, the Lean kernel does not need either definitional or propositional equalities to hold for the definition; it suffices that soundness is maintained.
Functions marked {keywordOf Lean.Parser.Command.declaration}`partial` are treated as opaque constants by the kernel and are neither unfolded nor reduced.
All that is required for soundness is that their return type is inhabited.
Partial functions may still be used in compiled code as usual, and they may appear in propositions and proofs; their equational theory in Lean's logic is simply very weak.
: Unsafe recursive definitions
Unsafe definitions have none of the restrictions of partial definitions.
They may freely make use of general recursion, and they may use features of Lean that break assumptions about its equational theory, such as primitives for casting ({name}`unsafeCast`), checking pointer equality ({name}`ptrAddrUnsafe`), and observing {tech}[reference counts] ({name}`isExclusiveUnsafe`).
However, any declaration that refers to an unsafe definition must itself be marked {keywordOf Lean.Parser.Command.declaration}`unsafe`, making it clear when logical soundness is not guaranteed.
Unsafe operations can be used to replace the implementations of other functions with more efficient variants in compiled code, while the kernel still uses the original definition.
The replaced function may be opaque, which results in the function name having a trivial equational theory in the logic, or it may be an ordinary function, in which case the function is used in the logic.
Use this feature with care: logical soundness is not at risk, but the behavior of programs written in Lean may diverge from their verified logical models if the unsafe implementation is incorrect.
:::TODO
Table providing an overview of all strategies and their properties
:::
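As an illustration of the partial-fixpoint strategy described above (a sketch that assumes the `partial_fixpoint` clause specified later in this section), a search function in the {name}`Option` monad can be defined even though it does not terminate when the predicate is never satisfied:
```lean
-- Defined as a partial fixpoint: `find` need not terminate for all
-- inputs, but its defining equation has a solution in `Option`.
def find (p : Nat → Bool) (start : Nat) : Option Nat :=
  if p start then some start else find p (start + 1)
partial_fixpoint
```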
As described in the {ref "elaboration-results"}[overview of the elaborator's output], elaboration of recursive functions proceeds in two phases:
1. The definition is elaborated as if Lean's core type theory had recursive definitions.
Aside from using recursion, this provisional definition is fully elaborated.
The compiler generates code from these provisional definitions.
2. A termination analysis attempts to use the five techniques to justify the function to Lean's kernel.
If the definition is marked {keywordOf Lean.Parser.Command.declaration}`unsafe` or {keywordOf Lean.Parser.Command.declaration}`partial`, then that technique is used.
If an explicit {keywordOf Lean.Parser.Command.declaration}`termination_by` or {keywordOf Lean.Parser.Command.declaration}`partial_fixpoint` clause is present, then the indicated technique is the only one attempted.
If there is no such clause, then the elaborator performs a search, testing each parameter to the function as a candidate for structural recursion, and attempting to find a measure with a well-founded relation that decreases at each recursive call.
This section describes the rules that govern recursive functions.
After a description of mutual recursion, each of the five kinds of recursive definitions is specified, along with the tradeoffs between reasoning power and flexibility that go along with each.
# Mutual Recursion
%%%
tag := "mutual-syntax"
%%%
Just as a recursive definition is one that mentions the name being defined in the body of the definition, {deftech}_mutually recursive_ definitions are definitions that may be recursive or mention one another.
To use mutual recursion between multiple declarations, they must be placed in a {deftech}[mutual block].
:::syntax command (title := "Mutual Declaration Blocks")
The general syntax for mutual recursion is:
```grammar
mutual
$[$declaration:declaration]*
end
```
where the declarations must be definitions or theorems.
:::
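As a minimal sketch, a typical mutual block defines two functions, each in terms of the other; here, both are structurally recursive on their argument:
```lean
mutual
  def isEven : Nat → Bool
    | 0 => true
    | n + 1 => isOdd n

  def isOdd : Nat → Bool
    | 0 => false
    | n + 1 => isEven n
end
```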
The declarations in a mutual block are not in scope in each others' signatures, but they are in scope in each others' bodies.
Even though the names are not in scope in signatures, they will not be inserted as auto-bound implicit parameters.
:::example "Mutual Block Scope"
Names defined in a mutual block are not in scope in each others' signatures.
```lean +error (name := mutScope) -keep
mutual
abbrev NaturalNum : Type := Nat
def n : NaturalNum := 5
end
```
```leanOutput mutScope
Unknown identifier `NaturalNum`
```
Without the mutual block, the definition succeeds:
```lean
abbrev NaturalNum : Type := Nat
def n : NaturalNum := 5
```
:::
:::example "Mutual Block Scope and Automatic Implicit Parameters"
Names defined in a mutual block are not in scope in each others' signatures.
Nonetheless, they cannot be used as automatic implicit parameters:
```lean +error (name := mutScopeTwo) -keep
mutual
abbrev α : Type := Nat
def identity (x : α) : α := x
end
```
```leanOutput mutScopeTwo
Unknown identifier `α`
```
With a different name, the implicit parameter is automatically added:
```lean
mutual
abbrev α : Type := Nat
def identity (x : β) : β := x
end
```
:::
Elaborating recursive definitions always occurs at the granularity of mutual blocks, as if there were a singleton mutual block around every declaration that is not itself part of such a block.
Local definitions introduced via {keywordOf Lean.Parser.Term.letrec}`let rec` and
{keywordOf Lean.Parser.Command.declaration}`where` are lifted out of their context, introducing parameters for captured free variables as necessary, and treated as if they were separate definitions within the {keywordOf Lean.Parser.Command.mutual}`mutual` block as well. {TODO}[Explain this mechanism in more detail, here or in the term section.]
Thus, helpers defined in a {keywordOf Lean.Parser.Command.declaration}`where` block may use mutual recursion both with one another and with the definition in which they occur, but they may not mention each other in their type signatures.
After the first step of elaboration, in which definitions are still recursive, and before translating recursion using the techniques above, Lean identifies the actually (mutually) recursive cliques{TODO}[define this term, it's useful] among the definitions in the mutual block and processes them separately and in dependency order.
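As a sketch of this lifting, a recursive helper in a {keywordOf Lean.Parser.Command.declaration}`where` block is treated as a separate definition in the same mutual block as its enclosing definition:
```lean
def sumTo (n : Nat) : Nat := go n 0
where
  -- `go` is lifted to the top-level definition `sumTo.go`; it is
  -- structurally recursive on its first argument.
  go : Nat → Nat → Nat
    | 0, acc => acc
    | k + 1, acc => go k (acc + k + 1)
```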
{include 0 Manual.RecursiveDefs.Structural}
{include 0 Manual.RecursiveDefs.WF}
{include 0 Manual.RecursiveDefs.PartialFixpoint}
# Partial and Unsafe Definitions
%%%
tag := "partial-unsafe"
%%%
While most Lean functions can be reasoned about in Lean's type theory as well as compiled and run, definitions marked {keyword}`partial` or {keyword}`unsafe` cannot be meaningfully reasoned about.
From the perspective of the logic, {keyword}`partial` functions are opaque constants, and theorems that refer to {keyword}`unsafe` definitions are summarily rejected.
In exchange for the inability to use these functions for reasoning, there are far fewer requirements placed on them; this can make it possible to write programs that would be impractical or cost-prohibitive to prove anything about, while not giving up formal reasoning for the rest.
In essence, the {keyword}`partial` subset of Lean is a traditional functional programming language that is nonetheless deeply integrated with the theorem proving features, and the {keyword}`unsafe` subset features the ability to break Lean's runtime invariants in certain rare situations, at the cost of less integration with Lean's theorem-proving features.
Analogously, {keyword}`noncomputable` definitions may use features that don't make sense in programs, but are meaningful in the logic.
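For example (a sketch using only core Lean), {name}`Classical.choice` has no executable content, so a definition that uses it must be marked {keyword}`noncomputable`:
```lean
-- `Classical.choice` produces a value from a mere existence fact,
-- so no code can be compiled for `pick`:
noncomputable def pick {α : Type} [h : Nonempty α] : α :=
  Classical.choice h
```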
## Partial Functions
%%%
tag := "partial-functions"
%%%
The {keyword}`partial` modifier may only be applied to function definitions.
Partial functions are not required to demonstrate termination, and Lean does not attempt to do so.
These functions are “partial” in the sense that they do not necessarily specify a mapping from each element of the domain to an element of the codomain, because they might fail to terminate for some or all elements of the domain.
They are elaborated into {tech}[pre-definitions] that contain explicit recursion, and type checked using the kernel; however, they are subsequently treated as opaque constants by the logic.
The function's return type must be inhabited; this ensures soundness.
Otherwise, a partial function could have a type such as {lean}`Unit → Empty`.
Together with {name}`Empty.elim`, the existence of such a function could be used to prove {lean}`False` even if it does not reduce.
With partial definitions, the kernel is responsible for the following:
* It ensures that the pre-definition's type is indeed a well-formed type.
* It checks that the pre-definition's type is a function type.
* It ensures that the function's codomain is inhabited by demanding a {lean}`Nonempty` or {lean}`Inhabited` instance.
* It checks that the resulting term would be type-correct if Lean had recursive definitions.
Even though recursive definitions are not part of the kernel's type theory, the kernel can still be used to check that the body of the definition has the right type.
This works the same way as in other functional languages: uses of recursion are type checked by checking the body in an environment in which the definition is already associated with its type.
Having ensured that it type checks, the body is discarded and only the opaque constant is retained by the kernel.
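As a sketch, this non-terminating function is accepted as {keyword}`partial` only because its codomain {name}`Nat` is inhabited:
```lean
-- Accepted: `Nat` has an `Inhabited` instance, so treating `spin`
-- as an opaque constant preserves soundness even though it loops.
partial def spin (n : Nat) : Nat := spin (n + 1)
```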
As with all Lean functions, the compiler generates code from the elaborated {tech}[pre-definition].
Even though partial functions are not unfolded by the kernel, it is still possible to reason about other functions that call them so long as this reasoning doesn't depend on the implementation of the partial function itself.
:::example "Partial Functions in Proofs"
The recursive function {name}`nextPrime` inefficiently computes the next prime number after a given number by repeatedly testing candidates with trial division.
Because there are infinitely many prime numbers, it always terminates; however, formulating this proof would be nontrivial.
It is thus marked {keyword}`partial`.
```lean
def isPrime (n : Nat) : Bool := Id.run do
for i in [2:n] do
if i * i > n then return true
if n % i = 0 then return false
return true
partial def nextPrime (n : Nat) : Nat :=
let n := n + 1
if isPrime n then n else nextPrime n
```
It is nonetheless possible to prove that the following two functions are equal:
```lean
def answerUser (n : Nat) : String :=
s!"The next prime is {nextPrime n}"
def answerOtherUser (n : Nat) : String :=
" ".intercalate [
"The",
"next",
"prime",
"is",
toString (nextPrime n)
]
```
In fact, the proof is by {tactic}`rfl`:
```lean
theorem answer_eq_other : answerUser = answerOtherUser := by
rfl
```
:::
## Unsafe Definitions
%%%
tag := "unsafe"
%%%
Unsafe definitions have even fewer safeguards than partial functions.
Their codomains do not need to be inhabited, they are not restricted to function definitions, and they have access to features of Lean that might violate internal invariants or break abstractions.
As a result, they cannot be used at all as part of mathematical reasoning.
While partial functions are treated as opaque constants by the type theory, unsafe definitions may only be referenced from other unsafe definitions.
As a consequence, any function that calls an unsafe function must be unsafe itself.
Theorems are not allowed to be declared unsafe.
In addition to unrestricted use of recursion, unsafe functions can cast from one type to another, check whether two values are the very same object in memory, retrieve pointer values, and run {lean}`IO` actions from otherwise-pure code.
Using these operators requires a thorough understanding of the Lean implementation.
{docstring unsafeCast}
{docstring ptrEq +allowMissing}
{docstring ptrEqList +allowMissing}
{docstring ptrAddrUnsafe +allowMissing}
{docstring isExclusiveUnsafe}
{docstring unsafeIO}
{docstring unsafeEIO}
{docstring unsafeBaseIO}
Frequently, unsafe operators are used to write fast code that takes advantage of low-level details.
Just as Lean code may be replaced at runtime with C code via the FFI,{TODO}[xref] safe Lean code may be replaced with unsafe Lean code for runtime programs.
This is accomplished by adding the {attr}`implemented_by` attribute to the function that is to be replaced, which is often an {keyword}`opaque` definition.
While this does not threaten Lean's soundness as a logic because the constant to be replaced has already been checked by the kernel and the unsafe replacement is only used in run-time code, it is still risky.
Both C code and unsafe code may execute arbitrary side effects.
:::syntax attr (title := "Replacing Run-Time Implementations")
The {attr}`implemented_by` attribute instructs the compiler to replace one constant with another in compiled code.
The replacement constant may be unsafe.
```grammar
implemented_by $_:ident
```
:::
:::example "Checking Equality with Pointers"
Ordinarily, a {lean}`BEq` instance's equality predicate must fully traverse both of its arguments to determine whether they are equal.
If they are, in fact, the very same object in memory, this is wasteful indeed.
A pointer equality test can be used prior to the traversal to catch this case.
The type being compared is {name}`Tree`, a type of binary trees.
```lean
inductive Tree α where
| empty
| branch (left : Tree α) (val : α) (right : Tree α)
```
An unsafe function may use pointer equality to terminate the structural equality test more quickly, falling back to structural checks when pointer equality fails.
```lean
unsafe def Tree.fastBEq [BEq α] (t1 t2 : Tree α) : Bool :=
if ptrEq t1 t2 then
true
else
match t1, t2 with
| .empty, .empty => true
| .branch l1 x r1, .branch l2 y r2 =>
if ptrEq x y || x == y then
l1.fastBEq l2 && r1.fastBEq r2
else false
| _, _ => false
```
An {attr}`implemented_by` attribute on an opaque definition bridges the worlds of safe and unsafe code.
```lean
@[implemented_by Tree.fastBEq]
opaque Tree.beq [BEq α] (t1 t2 : Tree α) : Bool
instance [BEq α] : BEq (Tree α) where
beq := Tree.beq
```
:::
::::example "Taking Advantage of Run-Time Representations"
Because a {name}`Fin` is represented identically to its underlying {name}`Nat`, {lean}`List.map Fin.val` can be replaced by {name}`unsafeCast` to avoid a linear-time traversal that, in practice, does nothing:
```lean
unsafe def unFinImpl (xs : List (Fin n)) : List Nat :=
unsafeCast xs
@[implemented_by unFinImpl]
def unFin (xs : List (Fin n)) : List Nat :=
xs.map Fin.val
```
:::paragraph
From the perspective of the Lean kernel, {lean}`unFin` is defined using {name}`List.map`:
```lean
theorem unFin_length_eq_length {xs : List (Fin n)} :
(unFin xs).length = xs.length := by
simp [unFin]
```
In compiled code, there is no traversal of the list.
:::
This kind of replacement is risky: the correspondence between the proof and the compiled code depends fully on the equivalence of the two implementations, which cannot be proved in Lean.
The correspondence relies on details of Lean's implementation.
These “escape hatches” should be used very carefully.
::::
# Controlling Reduction
%%%
tag := "reducibility"
htmlSplit := .never
%%%
While checking proofs and programs, Lean takes {deftech}_reducibility_, also known as _transparency_, into account.
A definition's reducibility controls the contexts in which it is unfolded during elaboration and proof execution.
There are three levels of reducibility:
: {deftech}[Reducible]
Reducible definitions are unfolded essentially everywhere, on demand.
Type class instance synthesis, definitional equality checks, and the rest of the language treat the definition as being essentially an abbreviation.
This is the setting applied by the {keywordOf Lean.Parser.Command.declaration}`abbrev` command.
: {deftech}[Semireducible]
Semireducible definitions are not unfolded by potentially expensive automation such as type class instance synthesis or {tactic}`simp`, but they are unfolded while checking definitional equality and while resolving {tech}[generalized field notation].
The {keywordOf Lean.Parser.Command.declaration}`def` command generally creates semireducible definitions unless a different reducibility level is specified with an attribute; however, definitions that use {tech}[well-founded recursion] are irreducible by default.
: {deftech}[Irreducible]
Irreducible definitions are not unfolded at all during elaboration.
Definitions can be made irreducible by applying the {attr}`irreducible` attribute.
:::example "Reducibility and Instance Synthesis"
These three aliases for {lean}`String` are respectively reducible, semireducible, and irreducible.
```lean
abbrev Phrase := String
def Clause := String
@[irreducible]
def Utterance := String
```
The reducible and semireducible aliases are unfolded during the elaborator's definitional equality check, causing them to be considered equivalent to {lean}`String`:
```lean
def hello : Phrase := "Hello"
def goodMorning : Clause := "Good morning"
```
The irreducible alias, on the other hand, is rejected as the type for a string, because the elaborator's definitional equality test does not unfold it:
```lean +error (name := irred)
def goodEvening : Utterance := "Good evening"
```
```leanOutput irred
Type mismatch
"Good evening"
has type
String
but is expected to have type
Utterance
```
Because {lean}`Phrase` is reducible, the {inst}`ToString String` instance can be used as a {inst}`ToString Phrase` instance:
```lean
#synth ToString Phrase
```
However, {lean}`Clause` is semireducible, so the {inst}`ToString String` instance cannot be used:
```lean +error (name := toStringClause)
#synth ToString Clause
```
```leanOutput toStringClause
failed to synthesize
  ToString Clause

Hint: Additional diagnostic information may be available using the `set_option diagnostics true` command.
```
The instance can be explicitly enabled by creating a {lean}`ToString Clause` instance that reduces to the {lean}`ToString String` instance.
This example works because semireducible definitions are unfolded while checking definitional equality:
```lean
instance : ToString Clause := inferInstanceAs (ToString String)
```
:::
:::example "Reducibility and Generalized Field Notation"
{tech}[Generalized field notation] unfolds reducible and semireducible declarations while searching for matching names.
Given the semireducible alias {name}`Sequence` for {name}`List`:
```lean
def Sequence := List
def Sequence.ofList (xs : List α) : Sequence α := xs
```
generalized field notation allows {name}`List.reverse` to be accessed from a term of type {lean}`Sequence Nat`.
```lean
#check let xs : Sequence Nat := .ofList [1,2,3]; xs.reverse
```
However, declaring {name}`Sequence` to be irreducible prevents the unfolding:
```lean +error (name := irredSeq)
attribute [irreducible] Sequence
#check let xs : Sequence Nat := .ofList [1,2,3]; xs.reverse
```
```leanOutput irredSeq
Invalid field `reverse`: The environment does not contain `Sequence.reverse`, so it is not possible to project the field `reverse` from an expression
  xs
of type `Sequence Nat`
```
:::
:::syntax attr (title := "Reducibility Annotations")
A definition's reducibility can be set using one of the three reducibility attributes:
```grammar
reducible
```
```grammar
semireducible
```
```grammar
irreducible
```
These attributes can only be applied globally in the same file as the definition being modified, but they may be {keywordOf attrInst (parser := Lean.Parser.Term.attrKind)}`local`ly applied anywhere.
:::
## Reducibility and Tactics
The tactics {tactic}`with_reducible`, {tactic}`with_reducible_and_instances`, and {tactic}`with_unfolding_all` control which definitions are unfolded by most tactics.
:::example "Reducibility and Tactics"
The functions {lean}`plus`, {lean}`sum`, and {lean}`tally` are all synonyms for {lean}`Nat.add` that are respectively reducible, semireducible, and irreducible:
```lean
abbrev plus := Nat.add
def sum := Nat.add
@[irreducible]
def tally := Nat.add
```
The reducible synonym is unfolded by {tactic}`simp`:
```lean
theorem plus_eq_add : plus x y = x + y := by simp
```
The semireducible synonym is not, however, unfolded by {tactic}`simp`:
```lean -keep +error (name := simpSemi)
theorem sum_eq_add : sum x y = x + y := by simp
```
Nonetheless, the definitional equality check induced by {tactic}`rfl` unfolds {lean}`sum`:
```lean
theorem sum_eq_add : sum x y = x + y := by rfl
```
The irreducible {lean}`tally`, however, is not reduced by definitional equality:
```lean -keep +error (name := reflIr)
theorem tally_eq_add : tally x y = x + y := by rfl
```
The {tactic}`simp` tactic can unfold any definition, even irreducible ones, when they are explicitly provided:
```lean -keep (name := simpName)
theorem tally_eq_add : tally x y = x + y := by simp [tally]
```
Similarly, part of a proof can be instructed to ignore irreducibility by placing it in a {tactic}`with_unfolding_all` block:
```lean
theorem tally_eq_add : tally x y = x + y := by with_unfolding_all rfl
```
:::
## Modifying Reducibility
The reducibility of a definition can be globally modified in the module in which it is defined by applying the appropriate attribute with the {keywordOf Lean.Parser.Command.attribute}`attribute` command.
In other modules, the reducibility of imported definitions can be modified by applying the attribute with the {keyword}`local` modifier.
The {keywordOf Lean.Parser.commandSeal__}`seal` and {keywordOf Lean.Parser.commandUnseal__}`unseal` commands are a shorthand for this process.
:::syntax command (title := "Local Irreducibility")
{includeDocstring Lean.Parser.commandSeal__}
```grammar
seal $_:ident $_*
```
:::
:::syntax command (title := "Local Reducibility")
{includeDocstring Lean.Parser.commandUnseal__}
```grammar
unseal $_:ident $_*
```
:::
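For instance, a proof author might seal an imported definition so that automation cannot see through it, and unseal it again later in the same file. The following is a minimal sketch in which `describe` is a hypothetical definition standing in for an imported one:

```lean
def describe (n : Nat) : String := s!"value: {n}"

-- Locally mark `describe` as irreducible:
seal describe

-- Locally mark `describe` as reducible:
unseal describe
```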
## Options
For performance, the elaborator and many tactics construct indices and caches.
Many of these take reducibility into account, and there's no way to invalidate and regenerate them if reducibility changes globally.
Unsafe changes to reducibility settings that could have unpredictable results are disallowed by default, but they can be enabled by using the {option}`allowUnsafeReducibility` option.
{optionDocs allowUnsafeReducibility}
import VersoManual
import Manual.Meta
import Manual.Papers
import Std.Tactic.Do
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
open Verso.Code.External (lit)
set_option pp.rawOnError true
set_option verso.docstring.allowMissing true
set_option linter.unusedVariables false
set_option linter.typography.quotes true
set_option linter.typography.dashes true
set_option mvcgen.warning false
open Manual (comment)
open Std.Do
#doc (Manual) "The `mvcgen` tactic" =>
%%%
tag := "mvcgen-tactic"
%%%
:::tutorials
* {ref "mvcgen-tactic-tutorial" (remote := "tutorials")}[Verifying Imperative Programs Using `mvcgen`]
:::
The {tactic}`mvcgen` tactic implements a _monadic verification condition generator_:
It breaks down a goal involving a program written using Lean's imperative {keywordOf Lean.Parser.Term.do}`do` notation into a number of smaller {tech}_verification conditions_ ({deftech}[VCs]) that are sufficient to prove the goal.
In addition to a reference that describes the use of {tactic}`mvcgen`, this chapter includes a {ref "mvcgen-tactic-tutorial" (remote := "tutorials")}[tutorial] that can be read independently of the reference.
In order to use the {tactic}`mvcgen` tactic, {module}`Std.Tactic.Do` must be imported and the namespace {namespace}`Std.Do` must be opened.
# Overview
The workflow of {tactic}`mvcgen` consists of the following:
1. Monadic programs are re-interpreted according to a {tech}[predicate transformer semantics].
   An instance of {name}`WP` determines the monad's interpretation.
   Each program is interpreted as a mapping from arbitrary {tech}[postconditions] to the {tech}[weakest precondition] that would ensure the postcondition.
   This step is invisible to most users, but library authors who want to enable their monads to work with {tactic}`mvcgen` need to understand it.
2. Programs are composed from smaller programs.
   Each statement in a {keywordOf Lean.Parser.Term.do}`do`-block is associated with a predicate transformer, and there are general-purpose rules for combining these statements with sequencing and control-flow operators.
   A statement with its pre- and postconditions is called a {tech}_Hoare triple_.
   In a program, the postcondition of each statement should suffice to prove the precondition of the next one, and loops require a specified {deftech}_loop invariant_, which is a statement that must be true at the beginning of the loop and at the end of each iteration.
   Designated {tech}_specification lemmas_ associate functions with Hoare triples that specify them.
3. Applying the weakest-precondition semantics of a monadic program to a desired proof goal results in the precondition that must hold in order to prove the goal.
   Any missing steps, such as loop invariants or proofs that one statement's postcondition suffices for the precondition of the next, become new subgoals.
   These missing steps are called the {deftech}_verification conditions_.
   The {tactic}`mvcgen` tactic performs this transformation, replacing the goal with its verification conditions.
   During this transformation, {tactic}`mvcgen` uses specification lemmas to discharge proofs about individual statements.
4. After supplying loop invariants, many verification conditions can in practice be discharged automatically.
   Those that cannot can be proven using either a {ref "tactic-ref-spred"}[special proof mode] or ordinary Lean tactics, depending on whether they are expressed in the logic of program assertions or as ordinary propositions.
# Predicate Transformers
A {deftech}_predicate transformer semantics_ is an interpretation of programs as functions from predicates to predicates, rather than values to values.
A {deftech}_postcondition_ is an assertion that holds after running a program, while a {deftech}_precondition_ is an assertion that must hold prior to running the program in order for the postcondition to be guaranteed to hold.
The predicate transformer semantics used by {tactic}`mvcgen` transforms postconditions into the {deftech}_weakest preconditions_ under which the program will ensure the postcondition.
An assertion $`P` is weaker than $`P'` if, in all states, $`P'` suffices to prove $`P`, but $`P` does not suffice to prove $`P'`.
Logically equivalent assertions are considered to be equal.
The predicates in question are stateful: they can mention the program's current state.
Furthermore, postconditions can relate the return value and any exceptions thrown by the program to the final state.
{name}`SPred` is a type of predicates that is parameterized over a monadic state, expressed as a list of the types of the fields that make up the state.
The usual logical connectives and quantifiers are defined for {name}`SPred`.
Each monad that can be used with {tactic}`mvcgen` is assigned a state type by an instance of {name}`WP`, and {name}`Assertion` is the corresponding type of assertions for that monad, which is used for preconditions.
{name}`Assertion` is a wrapper around {name}`SPred`: while {name}`SPred` is parameterized by a list of state types, {name}`Assertion` is parameterized by a more informative type that it translates to a list of state types for {name}`SPred`.
A {name}`PostCond` pairs an {name}`Assertion` about a return value with assertions about potential exceptions; the available exceptions are also specified by the monad's {name}`WP` instance.
## Stateful Predicates
The predicate transformer semantics of monadic programs is based on a logic in which propositions may mention the program's state.
Here, “state” refers not only to mutable state, but also to read-only values such as those that are provided via {name}`ReaderT`.
Different monads have different state types available, but each individual state always has a type.
Given a list of state types, {name}`SPred` is a type of predicates over these states.
{name}`SPred` is not inherently tied to the monadic verification framework.
The related {name}`Assertion` computes a suitable {name}`SPred` for a monad's state as expressed via its {name}`WP` instance's {name}`PostShape` output parameter.
{docstring Std.Do.SPred}
::::leanSection
```lean -show
variable {P : Prop} {σ : List (Type u)}
```
Ordinary propositions that do not mention the state can be used as stateful predicates by adding a trivial universal quantification.
This is written with the syntax {lean (type := "SPred σ")}`⌜P⌝`, which is syntactic sugar for {name}`SPred.pure`.
:::syntax term (title := "Notation for `SPred`") (namespace := Std.Do)
```grammar
⌜$_:term⌝
```
{includeDocstring Std.Do.«term⌜_⌝»}
:::
::::
{docstring SPred.pure}
:::example "Stateful Predicates"
```imports -show
import Std.Do
import Std.Tactic.Do
```
```lean -show
open Std.Do
set_option mvcgen.warning false
```
The predicate {name}`ItIsSecret` expresses that a state of type {name}`String` is {lean}`"secret"`:
```lean
def ItIsSecret : SPred [String] := fun s => ⌜s = "secret"⌝
```
:::
### Entailment
Stateful predicates are related by _entailment_.
Entailment of stateful predicates is defined as universally-quantified implication: if $`P` and $`Q` are predicates over a state $`\sigma`, then $`P` entails $`Q` (written $`P \vdash_s Q`) when $`∀ s : \sigma, P(s) → Q(s)`.
{docstring Std.Do.SPred.entails}
{docstring Std.Do.SPred.bientails}
:::syntax term (title := "Notation for `SPred`") (namespace := Std.Do)
```grammar
$_:term ⊢ₛ $_:term
```
{includeDocstring Std.Do.«term_⊢ₛ_»}
```grammar
⊢ₛ $_:term
```
{includeDocstring Std.Do.«term⊢ₛ_»}
```grammar
$_:term ⊣⊢ₛ $_:term
```
{includeDocstring Std.Do.«term_⊣⊢ₛ_»}
:::
:::leanSection
```lean -show
variable {σ : List (Type u)} {P Q : SPred σ}
```
The logic of stateful predicates includes an implication connective.
The difference between entailment and implication is that entailment is a statement in Lean's logic, while implication is internal to the stateful logic.
Given stateful predicates {lean}`P` and {lean}`Q` for state {lean}`σ`, {lean (type := "Prop")}`P ⊢ₛ Q` is a {lean}`Prop` while {lean (type := "SPred σ")}`spred(P → Q)` is an {lean}`SPred σ`.
:::
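As a minimal sketch of this distinction, assuming `Std.Tactic.Do` is imported and the `Std.Do` namespace is open, the two statements can be checked to live in different types:

```lean
section
open Std.Do
universe u
variable {σ : List (Type u)} {P Q : SPred σ}

-- Entailment is a proposition in Lean's ordinary logic:
#check (P ⊢ₛ Q : Prop)

-- Implication is itself a stateful predicate:
#check (spred(P → Q) : SPred σ)
end
```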
### Notation
The syntax of stateful predicates overlaps with that of ordinary Lean terms.
In particular, stateful predicates use the usual syntax for logical connectives and quantifiers.
The syntax associated with stateful predicates is automatically enabled in contexts such as pre- and postconditions where they are clearly intended; other contexts must explicitly opt in to the syntax using {keywordOf Std.Do.«termSpred(_)»}`spred`.
The usual meanings of these operators can be recovered by using the {keywordOf Std.Do.«termTerm(_)»}`term` operator.
:::syntax term (title := "Predicate Terms") (namespace := Std.Do)
{keywordOf Std.Do.«termSpred(_)»}`spred` indicates that logical connectives and quantifiers should be understood as those pertaining to stateful predicates, while {keywordOf Std.Do.«termTerm(_)»}`term` indicates that they should have the usual meaning.
```grammar
spred($t)
```
```grammar
term($t)
```
:::
### Connectives and Quantifiers
:::syntax term (title := "Predicate Connectives") (namespace := Std.Do)
```grammar
spred($_ ∧ $_)
```
Syntactic sugar for {name}`SPred.and`.
```grammar
spred($_ ∨ $_)
```
Syntactic sugar for {name}`SPred.or`.
```grammar
spred(¬ $_)
```
Syntactic sugar for {name}`SPred.not`.
```grammar
spred($_ → $_)
```
Syntactic sugar for {name}`SPred.imp`.
```grammar
spred($_ ↔ $_)
```
Syntactic sugar for {name}`SPred.iff`.
:::
{docstring SPred.and}
{docstring SPred.conjunction}
{docstring SPred.or}
{docstring SPred.not}
{docstring SPred.imp}
{docstring SPred.iff}
:::syntax term (title := "Predicate Quantifiers") (namespace := Std.Do)
```grammar
spred(∀ $x:ident, $_)
```
```grammar
spred(∀ $x:ident : $ty, $_)
```
```grammar
spred(∀ ($x:ident $_* : $ty), $_)
```
```grammar
spred(∀ _, $_)
```
```grammar
spred(∀ _ : $ty, $_)
```
```grammar
spred(∀ (_ $_* : $ty), $_)
```
Each form of universal quantification is syntactic sugar for an invocation of {name}`SPred.forall` on a function that takes the quantified variable as a parameter.
```grammar
spred(∃ $x:ident, $_)
```
```grammar
spred(∃ $x:ident : $ty, $_)
```
```grammar
spred(∃ ($x:ident $_* : $ty), $_)
```
```grammar
spred(∃ _, $_)
```
```grammar
spred(∃ _ : $ty, $_)
```
```grammar
spred(∃ (_ $_* : $ty), $_)
```
Each form of existential quantification is syntactic sugar for an invocation of {name}`SPred.exists` on a function that takes the quantified variable as a parameter.
:::
{docstring SPred.forall}
{docstring SPred.exists}
### Stateful Values
Just as {name}`SPred` represents a predicate over states, {name}`SVal` represents a value that is derived from a state.
{docstring SVal}
{docstring SVal.getThe}
{docstring SVal.StateTuple}
{docstring SVal.curry}
{docstring SVal.uncurry}
## Assertions
The language of assertions about monadic programs is parameterized by a {deftech}_postcondition shape_, which describes the inputs to and outputs from a computation in a given monad.
Preconditions may mention the initial values of the monad's state.
Postconditions may mention the returned value and the final values of the monad's state, and must furthermore account for any exceptions that could have been thrown.
The postcondition shape of a given monad determines the states and exceptions in the monad.
{name}`PostShape.pure` describes a monad in which assertions may not mention any states, {name}`PostShape.arg` describes a state value, and {name}`PostShape.except` describes a possible exception.
Because these constructors can be nested, the postcondition shape of a monad transformer can be defined in terms of the postcondition shape of the underlying transformed monad.
Behind the scenes, an {name}`Assertion` is translated into an appropriate {name}`SPred` by translating the postcondition shape into a list of state types, discarding exceptions.
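As a sketch of how shapes compose, a monad with one `Nat` state layered over a possible `String` exception could be described by nesting the constructors (this example assumes the `Std.Do` namespace is open):

```lean
open Std.Do

-- One `Nat` state, then one possible `String` exception,
-- with no further states or exceptions beneath:
example : PostShape := .arg Nat (.except String .pure)
```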
{docstring PostShape}
{docstring PostShape.args}
{docstring Assertion}
{docstring PostCond}
:::syntax term (title := "Postconditions")
```grammar
⇓ $_* => $_
```
Syntactic sugar for a nested sequence of product constructors, terminating in {lean}`()`, in which the first element is an assertion about non-exceptional return values and the remaining elements are assertions about the exceptional cases for a postcondition.
:::
{docstring ExceptConds}
:::leanSection
```lean -show
universe u v
variable {m : Type u → Type v} {ps : PostShape.{u}} [WP m ps] {P : Assertion ps} {α : Type u} {prog : m α} {Q' : α → Assertion ps}
```
Postconditions for programs that might throw exceptions come in two varieties.
The {deftech}_total correctness interpretation_ {lean}`⦃P⦄ prog ⦃⇓ r => Q' r⦄` asserts that if {lean}`P` holds, then {lean}`prog` terminates _and_ {lean}`Q'` holds for the result.
The {deftech}_partial correctness interpretation_ {lean}`⦃P⦄ prog ⦃⇓? r => Q' r⦄` asserts that if {lean}`P` holds and {lean}`prog` terminates, then {lean}`Q'` holds for the result.
:::
:::syntax term (title := "Exception-Free Postconditions")
```grammar
⇓ $_* => $_
```
{includeDocstring PostCond.noThrow}
:::
{docstring PostCond.noThrow}
:::syntax term (title := "Partial Postconditions")
```grammar
⇓? $_* => $_
```
{includeDocstring PostCond.mayThrow}
:::
{docstring PostCond.mayThrow}
:::syntax term (title := "Postcondition Entailment")
```grammar
$_ ⊢ₚ $_
```
Syntactic sugar for {name}`PostCond.entails`.
:::
{docstring PostCond.entails}
:::syntax term (title := "Postcondition Conjunction")
```grammar
$_ ∧ₚ $_
```
Syntactic sugar for {name}`PostCond.and`.
:::
{docstring PostCond.and}
:::syntax term (title := "Postcondition Implication")
```grammar
$_ →ₚ $_
```
Syntactic sugar for {name}`PostCond.imp`.
:::
{docstring PostCond.imp}
## Predicate Transformers
A predicate transformer is a function from postconditions for some postcondition shape to assertions for that shape.
The function must be {deftech}_conjunctive_, which means it must distribute over {name}`PostCond.and`.
{docstring PredTrans}
{docstring PredTrans.Conjunctive}
{docstring PredTrans.Monotonic}
:::leanSection
```lean -show
variable {σ : List (Type u)} {ps : PostShape} {x y : PredTrans ps α} {Q : Assertion ps}
```
The {inst}`LE PredTrans` instance is defined in terms of logical strength; one transformer is stronger than another if the result of applying it always entails the result of applying the other.
In other words, if {lean}`∀ Q, y Q ⊢ₛ x Q`, then {lean}`x ≤ y`.
This means that stronger predicate transformers are considered greater than weaker ones.
:::
Predicate transformers form a monad.
The {name}`pure` operator is the identity transformer; it simply instantiates the postcondition with its argument.
The {name}`bind` operator composes predicate transformers.
{docstring PredTrans.pure}
{docstring PredTrans.bind}
The helper operators {name}`PredTrans.pushArg`, {name}`PredTrans.pushExcept`, and {name}`PredTrans.pushOption` modify a predicate transformer by adding a standard side effect.
They are used to implement the {name}`WP` instances for transformers such as {name}`StateT`, {name}`ExceptT`, and {name}`OptionT`; they can also be used to implement monads that can be thought of in terms of one of these.
For example, {name}`PredTrans.pushArg` is typically used for state monads, but can also be used to implement a reader monad's instance, treating the reader's value as read-only state.
{docstring PredTrans.pushArg}
{docstring PredTrans.pushExcept}
{docstring PredTrans.pushOption}
### Weakest Preconditions
The {tech}[weakest precondition] semantics of a monad is provided by the {name}`WP` type class.
Instances of {name}`WP` determine the monad's postcondition shape and provide the logical rules for interpreting the monad's operations as a predicate transformer in its postcondition shape.
{docstring WP}
:::syntax term (title := "Weakest Preconditions")
```grammar
wp⟦$_ $[: $_]?⟧
```
{includeDocstring Std.Do.«termWp⟦_:_⟧»}
:::
### Weakest Precondition Monad Morphisms
Most of the built-in specification lemmas for {tactic}`mvcgen` rely on the presence of a {name}`WPMonad` instance, in addition to the {name}`WP` instance.
In addition to requiring that the monad be lawful, this instance ensures that the weakest preconditions of the monad's implementations of {name}`pure` and {name}`bind` correspond to the {name}`pure` and {name}`bind` operators for the predicate transformer monad.
Without a {name}`WPMonad` instance, {tactic}`mvcgen` typically returns the original proof goal unchanged.
{docstring WPMonad}
:::example "Missing `WPMonad` Instance"
```imports -show
import Std.Do
import Std.Tactic.Do
```
```lean -show
open Std.Do
set_option mvcgen.warning false
```
This reimplementation of {name}`Id` has a {name}`WP` instance, but no {name}`WPMonad` instance:
```lean
def Identity (α : Type u) : Type u := α

variable {α : Type u}

def Identity.run (act : Identity α) : α := act

instance : Monad Identity where
  pure x := x
  bind x f := f x

instance : WP Identity .pure where
  wp x := PredTrans.pure x

theorem Identity.of_wp_run_eq {x : α} {prog : Identity α}
    (h : Identity.run prog = x) (P : α → Prop) :
    (⊢ₛ wp⟦prog⟧ (⇓ a => ⟨P a⟩)) → P x := by
  intro h'
  simpa [← h] using h'
```
```lean -show
instance : LawfulMonad Identity :=
  LawfulMonad.mk' Identity
    (id_map := fun _ => rfl)
    (pure_bind := fun _ _ => rfl)
    (bind_assoc := fun _ _ _ => rfl)
```
The missing instance prevents {tactic}`mvcgen` from using its specifications for {name}`pure` and {name}`bind`.
This tends to show up as a verification condition that's equal to the original goal.
Consider this function that reverses a list:
```lean
def rev (xs : List α) : Identity (List α) := do
  let mut out := []
  for x in xs do
    out := x :: out
  return out
```
It is correct if running it produces the same result as {name}`List.reverse`.
However, {tactic}`mvcgen` does not make the goal easier to prove:
```lean +error -keep (name := noInst)
theorem rev_correct :
    (rev xs).run = xs.reverse := by
  generalize h : (rev xs).run = x
  apply Identity.of_wp_run_eq h
  mvcgen [rev]
```
```leanOutput noInst
unsolved goals
case vc1.a
α✝ : Type u_1
xs x : List α✝
h : (rev xs).run = x
out✝ : List α✝ := []
⊢ (wp⟦do
      let r ←
        forIn xs out✝ fun x r => do
          pure PUnit.unit
          pure (ForInStep.yield (x :: r))
      pure r⟧
    (PostCond.noThrow fun a => { down := a = xs.reverse })).down
```
When the verification condition is just the original problem, without even any simplification of {name}`bind`, the problem is usually a missing {name}`WPMonad` instance.
The issue can be resolved by adding a suitable instance:
```lean
instance : WPMonad Identity .pure where
  wp_pure _ := rfl
  wp_bind _ _ := rfl
```
With this instance, and a suitable invariant, {tactic}`mvcgen` and {tactic}`grind` can prove the theorem:
```lean
theorem rev_correct :
    (rev xs).run = xs.reverse := by
  generalize h : (rev xs).run = x
  apply Identity.of_wp_run_eq h
  simp only [rev]
  mvcgen invariants
    · ⇓⟨xs, out⟩ =>
        ⌜out = xs.prefix.reverse⌝
    with grind
```
:::
### Adequacy Lemmas
%%%
tag := "mvcgen-adequacy"
%%%
Monads that can be invoked from pure code typically provide an invocation operator that takes any required input state as a parameter and returns either a value paired with an output state or some kind of exceptional value.
Examples include {name}`StateT.run`, {name}`ExceptT.run`, and {name}`Id.run`.
{deftech}_Adequacy lemmas_ provide a bridge between statements about invocations of monadic programs and those programs' {tech}[weakest precondition] semantics as given by their {name}`WP` instances.
They show that a property about the invocation is true if its weakest precondition is true.
{docstring Id.of_wp_run_eq}
{docstring StateM.of_wp_run_eq}
{docstring StateM.of_wp_run'_eq}
{docstring ReaderM.of_wp_run_eq}
{docstring Except.of_wp}
{docstring EStateM.of_wp_run_eq}
## Hoare Triples
A {deftech}_Hoare triple_{citep hoare69}[] consists of a precondition, a program, and a postcondition.
Running the program in a state for which the precondition is true results in a state where the postcondition is true.
{docstring Triple}
::::syntax term (title := "Hoare Triples")
```grammar
⦃ $_ ⦄ $_ ⦃ $_ ⦄
```
:::leanSection
```lean -show
variable [WP m ps] {x : m α} {P : Assertion ps} {Q : PostCond α ps}
```
{lean}`⦃P⦄ x ⦃Q⦄` is syntactic sugar for {lean}`Triple x P Q`.
:::
::::
{docstring Triple.and}
{docstring Triple.mp}
## Specification Lemmas
{deftech}_Specification lemmas_ are designated theorems that associate Hoare triples with functions.
When {tactic}`mvcgen` encounters a function, it checks whether there are any registered specification lemmas and attempts to use them to discharge intermediate {tech}[verification conditions].
If there is no applicable specification lemma, then the connection between the statement's pre- and postconditions will become a verification condition.
Specification lemmas allow compositional reasoning about libraries of monadic code.
When applied to a theorem whose statement is a Hoare triple, the {attr}`spec` attribute registers the theorem as a specification lemma.
These lemmas are used in order of priority.
For theorems, the {keywordOf simpPre}`↓`, {keywordOf simpPost}`↑`, and {keyword}`←` specifiers are ignored.
The {attr}`spec` attribute may also be applied to definitions.
On definitions, it indicates that the definition should be unfolded during verification condition generation.
For definitions, {attr}`spec` uses the {keywordOf simpPre}`↓`, {keywordOf simpPost}`↑`, and {keyword}`←` specifiers in the same manner as {tactic}`simp`.
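For instance, a small helper can be marked for unfolding during verification condition generation. This is a minimal sketch; `incrTwice` is a hypothetical definition, and `Std.Tactic.Do` is assumed to be imported with the `Std.Do` namespace open:

```lean
-- `@[spec]` on a definition instructs `mvcgen` to unfold it
-- while generating verification conditions:
@[spec]
def incrTwice : StateM Nat Unit := do
  modify (· + 1)
  modify (· + 1)
```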
:::syntax attr (title := "Specification Lemmas")
```grammar
spec $_? $_? $[$_:prio]?
```
{includeDocstring Lean.Parser.Attr.spec}
:::
Universally-quantified variables in specification lemmas can be used to relate input states to output states and return values.
These variables are referred to as {deftech}_schematic variables_.
:::example "Schematic Variables"
```imports -show
import Std.Do
import Std.Tactic.Do
```
```lean -show
open Std.Do
set_option mvcgen.warning false
```
The function {name}`double` doubles the value of a {name}`Nat` state:
```lean
def double : StateM Nat Unit := do
  modify (2 * ·)
```
Its specification should _relate_ the initial and final states, but it cannot know their precise values.
The specification uses a schematic variable to stand for the initial state:
```lean
theorem double_spec :
    ⦃ fun s => ⌜s = n⌝ ⦄ double ⦃ ⇓ () s => ⌜s = 2 * n⌝ ⦄ := by
  simp [double]
  mvcgen with grind
```
The assertion in the precondition is a function because the {name}`PostShape` of {lean}`StateM Nat` is {lean (type := "PostShape.{0}")}`.arg Nat .pure`, and {lean}`Assertion (.arg Nat .pure)` is {lean}`SPred [Nat]`.
:::
```lean -show -keep
-- Test preceding examples' claims
#synth WP (StateM Nat) (.arg Nat .pure : PostShape.{0})
example : Assertion (.arg Nat .pure) = SPred [Nat] := rfl
```
## Invariant Specifications
These types are used in invariants.
The {tech}[specification lemmas] for {name}`ForIn.forIn` and {name}`ForIn'.forIn'` take parameters of type {name}`Invariant`, and {tactic}`mvcgen` ensures that invariants are not accidentally generated by other automation.
{docstring Invariant}
{docstring Invariant.withEarlyReturn}
Invariants use lists to model the sequence of values in a {keywordOf Lean.Parser.Term.doFor}`for` loop.
The current position in the loop is tracked with a {name}`List.Cursor` that represents a position in a list as a combination of the elements to the left of the position and the elements to the right.
This type is not a traditional zipper, in which the prefix is reversed for efficient movement: it is intended for use in specifications and proofs, not in run-time code, so the prefix is in the original order.
{docstring List.Cursor}
{docstring List.Cursor.at}
{docstring List.Cursor.pos}
{docstring List.Cursor.current}
{docstring List.Cursor.tail}
{docstring List.Cursor.begin}
{docstring List.Cursor.end}
# Verification Conditions
The {tactic}`mvcgen` tactic converts a goal that's expressed in terms of {name}`SPred` and weakest preconditions to a set of invariants and verification conditions that, together, suffice to prove the original goal.
In particular, {tech}[Hoare triples] are defined in terms of weakest preconditions, so {tactic}`mvcgen` can be used to prove them.
:::leanSection
```lean -show
variable [Monad m] [WPMonad m ps] {e : m α} {P : Assertion ps} {Q : PostCond α ps}
```
The verification conditions for a goal are generated as follows:
1. A number of simplifications and rewrites are applied.
2. The goal should now be of the form {lean}`P ⊢ₛ wp⟦e⟧ Q` (that is, an entailment from some set of stateful assumptions to the weakest precondition that implies a desired postcondition).
3. {tech}[Reducible] constants and definitions marked {attrs}`@[spec]` in the expression {lean}`e` are unfolded.
4. If the expression is an application of an {tech}[auxiliary matching function] or a conditional ({name}`ite` or {name}`dite`), then it is first simplified.
   The {tech (key := "match discriminant")}[discriminant] of each matcher is simplified, and the entire term is reduced in an attempt to eliminate the matcher or conditional.
   If this fails, then a new goal is generated for each branch.
5. If the expression is an application of a constant, then the applicable lemmas marked {attrs}`@[spec]` are attempted in priority order.
   Lean includes specification lemmas for constants such as {name Bind.bind}`bind`, {name Pure.pure}`pure`, and {name}`ForIn.forIn` that result from desugaring {keywordOf Lean.Parser.Term.do}`do`-notation.
   Instantiating the lemma will sometimes discharge its premises; in particular, schematic variables may be solved by definitional equality with the goal.
   Assumptions of type {name}`Invariant` are never instantiated this way, however.
   If the spec lemma's precondition or postcondition does not exactly match that of the goal, then new metavariables are created that stand for proofs of the necessary entailments.
   If these cannot be immediately discharged using simple automation that attempts to use local assumptions and decomposes conjunctions in postconditions, then they remain as verification conditions.
6. Each remaining goal created by this process is recursively processed for verification conditions if it has the form {lean}`P ⊢ₛ wp⟦e⟧ Q`.
   If not, it is added to the set of invariants or verification conditions.
7. The resulting subgoals for invariants and verification conditions are assigned suitable names in the proof state.
8. Depending on the tactic's configuration parameters, {tactic}`mvcgen_trivial` and {tactic}`mleave` are attempted in each verification condition.
:::
Verification condition generation can be improved by defining appropriate {tech}[specification lemmas] for a library.
The presence of good specification lemmas results in fewer generated verification conditions.
Additionally, ensuring that the {tech}[simp normal form] of terms is suitable for pattern matching, and that there are sufficient lemmas in the default simp set to reduce every possible term to that normal form, can lead to more conditionals and pattern matches being eliminated.
# Enabling `mvcgen` For Monads
If a monad is implemented in terms of {tech}[monad transformers] that are provided by the Lean standard library, such as {name}`ExceptT` and {name}`StateT`, then it should not require additional instances.
Other monads will require instances of {name}`WP`, {name}`LawfulMonad`, and {name}`WPMonad`.
The tactic has been designed to support monads that model single-threaded control with state that might be interrupted; in other words, the effects that are present in ordinary imperative programming.
More exotic effects have not yet been investigated.
Once the basic instances are provided, the next step is to prove an {ref "mvcgen-adequacy"}[adequacy lemma].
This lemma should show that the weakest precondition for running the monadic computation and asserting a desired predicate is in fact sufficient to prove the predicate.
In addition to the definition of the monad, typical libraries provide a set of primitive operators.
Each of these should be provided with a {tech}[specification lemma].
It may additionally be useful to make the internals of the state private, and export a carefully-designed set of assertion operators.
The specification lemmas for the library's primitive operators should ideally be precise specifications of the operators as predicate transformers.
While it's often easier to think in terms of how the operator transforms an input state into an output state, {tech}[verification condition] generation works more reliably when the postcondition is left completely free.
This allows automation to instantiate the postcondition with the exact precondition of the next statement, rather than needing to show an entailment.
In other words, specifications that specify the precondition as a function of the postcondition work better in practice than specifications that merely relate the pre- and postconditions.
:::example "Schematic Postconditions"
```imports -show
import Std.Do
import Std.Tactic.Do
```
```lean -show
open Std.Do
set_option mvcgen.warning false
```
The function {name}`double` doubles a natural number state:
```lean
def double : StateM Nat Unit := do
modify (2 * ·)
```
Thinking chronologically, a reasonable specification is that the value of the output state is twice that of the input state.
This is expressed using a schematic variable that stands for the initial state:
```lean -keep
theorem double_spec :
⦃ fun s => ⌜s = n⌝ ⦄ double ⦃ ⇓ () s => ⌜s = 2 * n⌝ ⦄ := by
simp [double]
mvcgen with grind
```
However, an equivalent specification that treats the postcondition schematically will lead to smaller verification conditions when {name}`double` is used in other functions:
```lean
@[spec]
theorem better_double_spec {Q : PostCond Unit (.arg Nat .pure)} :
⦃ fun s => Q.1 () (2 * s) ⦄ double ⦃ Q ⦄ := by
simp [double]
mvcgen with grind
```
The first projection of the postcondition is its stateful assertion.
Now, the precondition merely states that the postcondition should hold for double the initial state.
:::
:::example "A Logging Monad"
```imports -show
import Std.Do
import Std.Tactic.Do
```
```lean -show
open Std.Do
set_option mvcgen.warning false
```
The monad {name}`LogM` maintains an append-only log during a computation:
```lean
structure LogM (β : Type u) (α : Type v) : Type (max u v) where
log : Array β
value : α
instance : Monad (LogM β) where
pure x := ⟨#[], x⟩
bind x f :=
let { log, value } := f x.value
{ log := x.log ++ log, value }
```
It has a {name}`LawfulMonad` instance as well.
```lean -show
instance : LawfulMonad (LogM β) where
map_const := rfl
id_map x := rfl
seqLeft_eq x y := rfl
seqRight_eq x y := rfl
pure_seq g x := by
simp [pure, Seq.seq, Functor.map]
bind_pure_comp f x := by
simp [bind, Functor.map]
bind_map f x := by
simp [bind, Seq.seq]
pure_bind x f := by
simp [pure, bind]
bind_assoc x f g := by
simp [bind]
```
The log can be written to using {name}`log`, and a value and the associated log can be computed using {name}`LogM.run`.
```lean
def log (v : β) : LogM β Unit := { log := #[v], value := () }
def LogM.run (x : LogM β α) : α × Array β := (x.value, x.log)
```
Rather than writing it from scratch, the {name}`WP` instance uses {name}`PredTrans.pushArg`.
This operator was designed to model state monads, but {name}`LogM` can be seen as a state monad that can only append to the state.
This appending is visible in the body of the instance, where the initial state and the log that resulted from the action are appended:
```lean
instance : WP (LogM β) (.arg (Array β) .pure) where
wp
| { log, value } =>
PredTrans.pushArg (fun s => PredTrans.pure (value, s ++ log))
```
The {name}`WPMonad` instance also benefits from the conceptual model as a state monad and admits very short proofs:
```lean
instance : WPMonad (LogM β) (.arg (Array β) .pure) where
wp_pure x := by
simp [wp, PredTrans.pushArg, PredTrans.pure, Pure.pure]
wp_bind := by
simp [wp, PredTrans.pushArg, PredTrans.bind, Bind.bind]
```
The adequacy lemma has one important detail: the result of the weakest precondition transformation is applied to the empty array.
This is necessary because the logging computation has been modeled as an append-only state, so there must be some initial state.
Semantically, the empty array is the correct choice so as not to place items in the log that don't come from the program; technically, it must also be a left identity for the append operator on arrays.
```lean
theorem LogM.of_wp_run_eq {x : α × Array β} {prog : LogM β α}
(h : LogM.run prog = x) (P : α × Array β → Prop) :
(⊢ₛ wp⟦prog⟧ (⇓ v l => ⌜P (v, l)⌝) #[]) → P x := by
rw [← h]
intro h'
simp [wp] at h'
exact h'
```
Next, each operator in the library should be provided with a specification lemma.
There is only one: {name}`log`.
For new monads, these proofs must often break the abstraction boundaries of {tech}[Hoare triples] and weakest preconditions; the specifications that they provide can then be used abstractly by clients of the library.
```lean
theorem log_spec {x : β} :
⦃ fun s => ⌜s = s'⌝ ⦄ log x ⦃ ⇓ () s => ⌜s = s'.push x⌝ ⦄ := by
simp [log, Triple, wp]
```
A better specification for {name}`log` uses a schematic postcondition:
```lean
variable {Q : PostCond Unit (.arg (Array β) .pure)}
@[spec]
theorem log_spec_better {x : β} :
⦃ fun s => Q.1 () (s.push x) ⦄ log x ⦃ Q ⦄ := by
simp [log, Triple, wp]
```
A function {name}`logUntil` that logs all the natural numbers up to some bound will always result in a log whose length is equal to its argument:
```lean
def logUntil (n : Nat) : LogM Nat Unit := do
for i in 0...n do
log i
theorem logUntil_length : (logUntil n).run.2.size = n := by
generalize h : (logUntil n).run = x
unfold logUntil at h
apply LogM.of_wp_run_eq h
mvcgen invariants
· ⇓⟨xs, _⟩ s => ⌜xs.pos = s.size⌝
with
simp_all [List.Cursor.pos] <;>
grind [Std.PRange.Nat.size_rco, Std.Rco.length_toList]
```
:::
# Proof Mode
%%%
tag := "mvcgen-proof-mode"
%%%
Stateful goals can be proven using a special _proof mode_ in which goals are rendered with two contexts of hypotheses: the ordinary Lean context, which contains Lean variables, and a special stateful context, which contains assumptions about the monadic state.
In the proof mode, the goal is an {name}`SPred`, rather than a {lean}`Prop`, and the entire goal is equivalent to an entailment relation ({name}`SPred.entails`) from the conjunction of the hypotheses to the conclusion.
:::syntax Std.Tactic.Do.mgoalStx (title := "Proof Mode Goals")
Proof mode goals are rendered as a series of named hypotheses, one per line, followed by {keywordOf Std.Tactic.Do.mgoalStx}`⊢ₛ` and a goal.
```grammar
$[$_:ident : $t:term]*
⊢ₛ $_:term
```
:::
In the proof mode, special tactics manipulate the stateful context.
These tactics are described in {ref "tactic-ref-spred"}[their own section in the tactic reference].
When working with concrete monads, {tactic}`mvcgen` typically does not result in stateful proof goals—they are simplified away.
However, monad-polymorphic theorems can lead to stateful goals remaining.
:::example "Stateful Proofs"
```imports -show
import Std.Do
import Std.Tactic.Do
```
```lean -show
open Std.Do
set_option mvcgen.warning false
```
The function {name}`bump` increments its state by the indicated amount and returns the resulting value.
```lean
variable [Monad m] [WPMonad m ps]
def bump (n : Nat) : StateT Nat m Nat := do
modifyThe Nat (· + n)
getThe Nat
```
This specification lemma for {name}`bump` is proved in an intentionally low-level manner to demonstrate the intermediate proof states:
```lean
theorem bump_correct :
⦃ fun n => ⌜n = k⌝ ⦄
bump (m := m) i
⦃ ⇓ r n => ⌜r = n ∧ n = k + i⌝ ⦄ := by
mintro n_eq_k
unfold bump
unfold modifyThe
mspec
mspec
mpure_intro
constructor
. trivial
. simp_all
```
The lemma can also be proved using only the simplifier:
```lean
theorem bump_correct' :
⦃ fun n => ⌜n = k⌝ ⦄
bump (m := m) i
⦃ ⇓ r n => ⌜r = n ∧ n = k + i⌝ ⦄ := by
mintro _
simp_all [bump]
```
::: |
reference-manual/Manual/ValidatingProofs.lean | import VersoManual
import Manual.Meta
import Verso.Code.External
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
set_option pp.rawOnError true
set_option guard_msgs.diff true
open Verso.Code.External (lit)
open Lean (Syntax SourceInfo)
#doc (Manual) "Validating a Lean Proof" =>
%%%
file := "ValidatingProofs"
tag := "validating-proofs"
number := false
htmlSplit := .never
%%%
This section discusses how to validate a proof expressed in Lean.
Depending on the circumstances, additional steps may be recommended to rule out misleading proofs.
In particular, it matters a lot whether one is dealing with an {tech}[honest] proof attempt, and needs protection against only benign mistakes, or a possibly-{tech}[malicious] proof attempt that actively tries to mislead.
Specifically, we use {deftech}_honest_ when the goal is to create a valid proof.
This allows for mistakes and bugs in proofs and meta-code (tactics, attributes, commands, etc.), but not for code that clearly only serves to circumvent the system (such as using the {option}`debug.skipKernelTC` option).
Note that the {keyword}`unsafe` marker on API functions is unrelated to whether an API can be used in a dishonest way.
In contrast, we use {deftech}_malicious_ to describe code that goes out of its way to trick or mislead the user, exploit bugs, or compromise the system.
This includes unreviewed AI-generated proofs and programs.
Furthermore, it is important to distinguish the question “does the theorem have a valid proof” from “what does the theorem statement mean”.
Below, an escalating sequence of checks is presented, with instructions on how to perform them, an explanation of what they entail, and the mistakes or attacks they guard against.
# The Blue Double Check Marks
%%%
tag := "validating-blue-check-marks"
%%%
In regular everyday use of Lean, it suffices to check the blue double check marks next to the theorem statement for assurance that the theorem is proved.
## Instructions
While working interactively with Lean, once the theorem is proved, blue double check marks appear in the gutter to the left of the code.
:::figure "A double blue check mark"

:::
## Significance
The blue ticks indicate that the theorem statement has been successfully elaborated, according to the syntax and type class instances defined in the current file and its imports, and that the Lean kernel has accepted a proof of that theorem statement that follows from the definitions, theorems and axioms declared in the current file and its imports.
## Trust
This check is meaningful if one believes the formal theorem statement corresponds to its intended informal meanings and trusts the authors of the imported libraries to be {tech}[honest], that they performed this check, and that no unsound axioms have been declared and used.
## Protection
:::listBullet "🛡️"
This check protects against
* Incomplete proof (missing goals, tactic error) *of the current theorem*
* Explicit use of {lean}`sorry` *in the current theorem*
* {tech}[Honest] bugs in meta-programs and tactics
* Proofs still being checked in the background
:::
## Comments
In the Visual Studio Code extension settings, the symbol can be changed.
Editors other than VS Code may have a different indication.
Running {lake}`build`{lit}` +Module`, where {lit}`Module` refers to the file containing the theorem, and observing success without error messages or warnings provides the same guarantees.
# Printing Axioms
%%%
tag := "validating-printing-axioms"
%%%
The blue double check marks appear even when there are explicit uses of {lean}`sorry` or incomplete proofs in the dependencies of the theorem.
Because both {lean}`sorry` and incomplete proofs are elaborated to axioms, their presence can be detected by listing the axioms that a proof relies on.
## Instructions
:::keepEnv
```lean -show
inductive TheoremStatement : Prop where | intro
theorem thmName : TheoremStatement := .intro
```
Write {leanCommand}`#print axioms thmName` after the theorem declaration, with {lean}`thmName` replaced by the name of the theorem and check that it reports only the built-in axioms {name}`propext`, {name}`Classical.choice`, and {name}`Quot.sound`.
:::
## Significance
This command prints the set of axioms used by the theorem and the theorems it depends on.
The three axioms above are standard axioms of Lean's logic, and benign.
* If {name}`sorryAx` is reported, then this theorem or one of its dependencies uses {lean}`sorry` or is otherwise incomplete.
* If {name}`Lean.trustCompiler` is reported, then native evaluation is used; see below for a discussion.
* Any other axiom means that a custom axiom was declared and used, and the theorem is only valid relative to the soundness of these axioms.
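As a minimal sketch of the first case (the theorem name is illustrative), a proof that uses {lean}`sorry` is flagged when its axioms are printed:

```lean
-- `sorry` elaborates to the axiom `sorryAx`, which `#print axioms`
-- then reports for this theorem in addition to any standard axioms.
theorem stillOpen : ∀ n : Nat, n + 0 = n := by
  sorry

#print axioms stillOpen  -- expected to report `sorryAx`
```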
## Trust
This check is meaningful if one believes the formal theorem statement corresponds to its intended informal meanings and one trusts the authors of the imported libraries to be {tech}[honest].
## Protection
:::listBullet "🛡️"
(In addition to the list above)
* Incomplete proofs
* Explicit use of {lean}`sorry`
* Custom axioms
:::
## Comments
At the time of writing, the {keywordOf Lean.Parser.Command.printAxioms}`#print axioms` command does not work in a {tech}[module].
To work around this, create a non-module file, import your module, and use {keywordOf Lean.Parser.Command.printAxioms}`#print axioms` there.
```leanModule -show
-- This module validates the claim in the preceding paragraph that #print axioms doesn't work here
module
/--
error: cannot use `#print axioms` in a `module`; consider temporarily removing the `module` header or placing the command in a separate file
-/
#guard_msgs in
#print axioms sorryAx
```
# Re-Checking Proofs with `lean4checker`
%%%
tag := "validating-lean4checker"
%%%
There is a small class of bugs and some dishonest ways of presenting proofs that can be caught by re-checking the proofs that are stored in {tech}[`.olean` files] when building the project.
## Instructions
Build your project using {lake}`build`, run `lean4checker --fresh` on the module that contains the theorem of interest, and check that no error is reported.
## Significance
The `lean4checker` tool reads the declarations and proofs as they are stored by `lean` during building (the {tech}[`.olean` files]), and replays them through the kernel.
It trusts that the {tech}[`.olean` files] are structurally correct.
## Trust
This check is meaningful if one believes the formal theorem statement corresponds to its intended informal meanings and believes the authors of the imported libraries to not be very cunningly {tech}[malicious], and to neither compromise the user’s system nor use Lean’s extensibility to change the interpretation of the theorem statement.
## Protection
:::listBullet "🛡️"
(In addition to the list above)
* Bugs in Lean’s core handling of the kernel’s state (e.g. due to parallel proof processing, or import handling)
* Meta-programs or tactics intentionally bypassing that state (e.g. using low-level functionality to add unchecked theorems)
:::
## Comments
Since `lean4checker` reads the {tech}[`.olean` files] without validating their format, this check is vulnerable to an attacker crafting invalid `.olean` files (e.g. invalid pointers, or invalid data in strings).
Lean tactics and other meta-code can perform arbitrary actions when run.
Importing libraries created by a determined {tech}[malicious] attacker and building them without further protection can compromise the user's system, after which no further meaningful checks are possible.
We recommend running `lean4checker` as part of CI for the additional protection against bugs in Lean's handling of declarations and as a deterrent against simple attacks.
The [lean-action](https://github.com/leanprover/lean-action) GitHub Action provides this functionality by setting `lean4checker: true`.
Without the `--fresh` flag, the tool can be instructed to check only some modules and to assume that others (e.g. trusted libraries) are correct, for faster processing.
# Gold Standard: `comparator`
%%%
tag := "validating-comparator"
%%%
To protect against a seriously {tech}[malicious] proof compromising how Lean interprets a theorem statement or the user's system, additional steps are necessary.
This should only be necessary for high risk scenarios (proof marketplaces, high-reward proof competitions).
## Instructions
In a trusted environment, write the theorem *statement* (the “challenge”), and then feed the challenge as well as the proposed proof to the [`comparator`](https://github.com/leanprover/comparator) tool, as documented there.
## Significance
Comparator will build the proof in a sandboxed environment, to protect against {tech}[malicious] code in the build step.
The proof term is exported to a serialized format.
Outside the sandbox and out of the reach of possibly malicious code, it validates the exported format, loads the proofs, replays them using Lean's kernel, and checks that the proved theorem statement matches the one in the challenge file.
## Trust
This check is meaningful if the theorem statement in the trusted challenge file is correct and the sandbox used to build the possibly-{tech}[malicious] code is safe.
## Protection
:::listBullet "🛡️"
(In addition to the list above)
* Actively {tech}[malicious] proofs
:::
## Comments
At the time of writing, `comparator` uses only the official Lean kernel.
In the future it will be easy to use multiple, independent kernel implementations; then this will also protect against implementation bugs in the official Lean kernel.
# Remaining Issues
When following the gold standard of checking proofs using `comparator`, some assumptions remain:
* The soundness of Lean’s logic.
* The implementation of that logic in Lean’s kernel (for now; see comment above).
* The plumbing provided by the `comparator` tool.
* The safety of the sandbox used by `comparator`.
* No human error or misleading presentation of the theorem statement in the trusted challenge file.
# On `Lean.trustCompiler`
%%%
tag := "validating-trustCompiler"
%%%
Lean supports proofs by native evaluation.
This is used by the {tactic}`decide`{keywordOf Lean.Parser.Tactic.decide}` +native` tactic or internally by specific tactics ({tactic}`bv_decide` in particular) and produces proof terms that call compiled Lean code to do a calculation that is then trusted by the kernel.
Specific uses wrapped in {tech}[honest] tactics (e.g. {tactic}`bv_decide`) are generally trustworthy.
The trusted code base is larger (it includes Lean's compilation toolchain and library annotations in the standard library), but still fixed and vetted.
General use ({tactic}`decide`{keywordOf Lean.Parser.Tactic.decide}` +native` or direct use of {name}`Lean.ofReduceBool`) can be used to create invalid proofs whenever the native evaluation of a term disagrees with the kernel's evaluation.
In particular, every {attr}`implemented_by`/{attr}`extern` attribute in libraries becomes part of the trusted code base: the native replacement must be semantically equivalent to the original definition.
All these uses show up as an axiom {name}`Lean.trustCompiler` in {keywordOf Lean.Parser.Command.printAxioms}`#print axioms`.
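For instance, here is a minimal sketch (the theorem name is illustrative) of a proof by native evaluation:

```lean
-- `decide +native` runs compiled code and asks the kernel to trust
-- the result, so the proof depends on the `Lean.trustCompiler` axiom.
theorem pow_eq : 2 ^ 10 = 1024 := by decide +native

#print axioms pow_eq  -- expected to include `Lean.trustCompiler`
```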
External checkers (`lean4checker`, `comparator`) cannot check such proofs, as they do not have access to the Lean compiler.
When that level of checking is needed, proofs have to avoid using native evaluation. |
reference-manual/Manual/NotationsMacros.lean | import VersoManual
import Manual.Meta
import Manual.Papers
import Manual.NotationsMacros.Operators
import Manual.NotationsMacros.Precedence
import Manual.NotationsMacros.Notations
import Manual.NotationsMacros.SyntaxDef
import Manual.NotationsMacros.Elab
import Manual.NotationsMacros.Delab
import Lean.Parser.Command
open Manual
open Verso.Genre
open Verso.Genre.Manual hiding seeAlso
open Verso.Genre.Manual.InlineLean
set_option pp.rawOnError true
set_option linter.unusedVariables false
#doc (Manual) "Notations and Macros" =>
%%%
tag := "language-extension"
%%%
Different mathematical fields have their own notational conventions, and many notations are reused with differing meanings in different fields.
It is important that formal developments are able to use established notations: formalizing mathematics is already difficult, and the mental overhead of translating between syntaxes can be substantial.
At the same time, it's important to be able to control the scope of notational extensions.
Many fields use related notations with very different meanings, and it should be possible to combine developments from these separate fields in a way where both readers and the system know which convention is in force in any given region of a file.
Lean addresses the problem of notational extensibility with a variety of mechanisms, each of which solves a different aspect of the problem.
They can be combined flexibly to achieve the necessary results:
* The {ref "parser"}_extensible parser_ {index}[parser] allows a great variety of notational conventions to be implemented declaratively, and combined flexibly.
* {ref "macro-and-elab"}[Macros] allow new syntax to be easily mapped to existing syntax, which is a simple way to provide meaning to new constructs.
Due to {tech}[hygiene] and automatic propagation of source positions, this process doesn't interfere with Lean's interactive features.
* {ref "macro-and-elab"}[Elaborators] provide new syntax with the same tools available to Lean's own syntax in cases where a macro is insufficiently expressive.
* {ref "notations"}[Notations] allow the simultaneous definition of a parser extension, a macro, and a pretty printer.
When defining infix, prefix, or postfix operators, {ref "operators"}[custom operators] automatically take care of precedence and associativity.
* Low-level parser extensions allow the parser to be extended in ways that modify its rules for tokens and whitespace, or that even completely replace Lean's syntax. This is an advanced topic that requires familiarity with Lean internals; nevertheless, the possibility of doing this without modifying the compiler is important. This reference manual is written using a language extension that replaces Lean's concrete syntax with a Markdown-like language for writing documents, but the source files are still Lean files.
{include 0 Manual.NotationsMacros.Operators}
{include 0 Manual.NotationsMacros.Precedence}
{include 0 Manual.NotationsMacros.Notations}
{include 0 Manual.NotationsMacros.SyntaxDef}
# Macros
%%%
tag := "macros"
%%%
{deftech}_Macros_ are transformations from {name Lean.Syntax}`Syntax` to {name Lean.Syntax}`Syntax` that occur during {tech (key := "elaborator") -normalize}[elaboration] and during {ref "tactic-macros"}[tactic execution].
Replacing syntax with the result of transforming it with a macro is called {deftech}_macro expansion_.
Multiple macros may be associated with a single {tech}[syntax kind], and they are attempted in order of definition.
Macros are run in a {tech}[monad] that has access to some compile-time metadata and has the ability to either emit an error message or to delegate to subsequent macros, but the macro monad is much less powerful than the elaboration monads.
```lean -show
section
open Lean (Syntax MacroM)
```
Macros are associated with {tech}[syntax kinds].
An internal table maps syntax kinds to macros of type {lean}`Syntax → MacroM Syntax`.
Macros delegate to the next entry in the table by throwing the {name Lean.Macro.Exception.unsupportedSyntax}`unsupportedSyntax` exception.
A given {name}`Syntax` value _is a macro_ when there is a macro associated with its syntax kind that does not throw {name Lean.Macro.Exception.unsupportedSyntax}`unsupportedSyntax`.
If a macro throws any other exception, an error is reported to the user.
{tech}[Syntax categories] are irrelevant to macro expansion; however, because each syntax kind is typically associated with a single syntax category, they do not interfere in practice.
::::keepEnv
:::example "Macro Error Reporting"
The following macro reports an error when its parameter is the literal numeral five.
It expands to its argument in all other cases.
```lean
syntax &"notFive" term:arg : term
open Lean in
macro_rules
| `(term|notFive 5) =>
Macro.throwError "'5' is not allowed here"
| `(term|notFive $e) =>
pure e
```
When applied to terms that are not syntactically the numeral five, elaboration succeeds:
```lean (name := notFiveAdd)
#eval notFive (2 + 3)
```
```leanOutput notFiveAdd
5
```
When the error case is triggered, the user receives an error message:
```lean (name := notFiveFive) +error
#eval notFive 5
```
```leanOutput notFiveFive
'5' is not allowed here
```
:::
::::
Before elaborating a piece of syntax, the elaborator checks whether its {tech}[syntax kind] has macros associated with it.
These are attempted in order.
If a macro succeeds, potentially returning syntax with a different kind, the check is repeated and macros are expanded again until the outermost layer of syntax is no longer a macro.
Elaboration or tactic execution can then proceed.
Only the outermost layer of syntax (typically a {name Lean.Syntax.node}`node`) is expanded, and the output of macro expansion may contain nested syntax that is a macro.
These nested macros are expanded in turn when the elaborator reaches them.
In particular, macro expansion occurs in three situations in Lean:
1. During term elaboration, macros in the outermost layer of the syntax to be elaborated are expanded prior to invoking the {ref "elaborators"}[syntax's term elaborator].
2. During command elaboration, macros in the outermost layer of the syntax to be elaborated are expanded prior to invoking the {ref "elaborators"}[syntax's command elaborator].
3. During tactic execution, macros in the outermost layer of the syntax to be elaborated are expanded {ref "tactic-macros"}[prior to executing the syntax as a tactic].
```lean -keep -show
-- Test claim in preceding paragraph that it's OK for macros to give up prior to elab
syntax "doubled " term:arg : term
macro_rules
| `(term|doubled $n:num) => `($n * 2)
| `(term|doubled $_) => Lean.Macro.throwUnsupported
/-- info: 10 -/
#check_msgs in
#eval doubled 5
/--
error: elaboration function for `termDoubled_` has not been implemented
doubled (5 + 2)
-/
#check_msgs in
#eval doubled (5 + 2)
elab_rules : term
| `(term|doubled $e:term) => Lean.Elab.Term.elabTerm e none
/-- info: 7 -/
#check_msgs in
#eval doubled (5 + 2)
```
## Hygiene
%%%
tag := "macro-hygiene"
%%%
A macro is {deftech (key:="hygiene")}_hygienic_ if its expansion cannot result in identifier capture.
{deftech}[Identifier capture] is when an identifier ends up referring to a binding site other than that which is in scope where the identifier occurs in the source code.
There are two types of identifier capture:
* If a macro's expansion introduces binders, then identifiers that are parameters to the macro may end up referring to the introduced binders if their names happen to match.
* If a macro's expansion is intended to refer to a name, but the macro is used in a context that either locally binds this name or in which a new global name has been introduced, it may end up referring to the wrong name.
The first kind of variable capture can be avoided by ensuring that every binding introduced by a macro uses a freshly generated, globally-unique name, while the second can be avoided by always using fully-qualified names to refer to constants.
The fresh names must be generated again at each invocation of the macro to avoid variable capture in recursive macros.
These techniques are error-prone.
Variable capture issues are difficult to test for because they rely on coincidences of name choices, and consistently applying these techniques results in noisy code.
Lean features automatic hygiene: in almost all cases, macros are automatically hygienic.
Capture by introduced bindings is avoided by annotating identifiers introduced by a macro with {deftech}_macro scopes_, which uniquely identify each invocation of macro expansion.
If the binding and the use of the identifier have the same macro scopes, then they were introduced by the same step of macro expansion and should refer to one another.
Similarly, uses of global names in code generated by a macro are not captured by local bindings in the context in which they are expanded because these use sites have macro scopes that are not present in the binding occurrence.
Capture by newly-introduced global names is prevented by annotating potential global name references with the set of global names that match, at quotation time, in the code produced by the macro's body.
Identifiers annotated with potential referents are called {deftech}_pre-resolved identifiers_, and the {lean}`Syntax.Preresolved` field on the {name}`Syntax.ident` constructor is used to store the potential referents.
During elaboration, if an identifier has pre-resolved global names associated with it, then other global names are not considered as valid reference targets.
The introduction of macro scopes and pre-resolved identifiers to generated syntax occurs during {tech}[quotation].
Macros that construct syntax by other means than quotation should also ensure hygiene by some other means.
For more details on Lean's hygiene algorithm, please consult {citet beyondNotations ullrich23}[].
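The effect of hygiene can be observed with a small sketch of a macro whose expansion introduces a binding (the macro name is illustrative):

```lean
-- The `x` introduced by the expansion carries macro scopes, so it
-- cannot capture the user's `x` passed in as `$e`.
macro "plusFive " e:term : term => `(let x := 5; $e + x)

#eval let x := 1; plusFive x  -- 6: the user's `x` still refers to `1`
```

Without hygiene, the user's `x` would be captured by the `let` in the expansion and the result would be `10`.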
## The Macro Monad
%%%
tag := "macro-monad"
%%%
The macro monad {name Lean.MacroM}`MacroM` is sufficiently powerful to implement hygiene and report errors.
Macro expansion does not have the ability to modify the environment directly, to carry out unification, to examine the current local context, or to do anything else that only makes sense in one particular context.
This allows the same macro mechanism to be used throughout Lean, and it makes macros much easier to write than {tech}[elaborators].
{docstring Lean.MacroM}
{docstring Lean.Macro.expandMacro?}
{docstring Lean.Macro.trace}
### Exceptions and Errors
%%%
tag := "macro-exceptions"
%%%
The {name Lean.Macro.Exception.unsupportedSyntax}`unsupportedSyntax` exception is used for control flow during macro expansion.
It indicates that the current macro is incapable of expanding the received syntax, but that an error has not occurred.
The exceptions thrown by {name Lean.Macro.throwError}`throwError` and {name Lean.Macro.throwErrorAt}`throwErrorAt` terminate macro expansion, reporting the error to the user.
{docstring Lean.Macro.throwUnsupported}
{docstring Lean.Macro.Exception.unsupportedSyntax}
{docstring Lean.Macro.throwError}
{docstring Lean.Macro.throwErrorAt}
### Hygiene-Related Operations
%%%
tag := "macro-monad-hygiene"
%%%
{tech}[Hygiene] is implemented by adding {tech}[macro scopes] to the identifiers that occur in syntax.
Ordinarily, the process of {tech}[quotation] adds all necessary scopes, but macros that construct syntax directly must add macro scopes to the identifiers that they introduce.
{docstring Lean.Macro.withFreshMacroScope}
{docstring Lean.Macro.addMacroScope}
### Querying the Environment
%%%
tag := "macro-environment"
%%%
Macros have only limited support for querying the environment.
They can check whether a constant exists and resolve names, but further introspection is unavailable.
{docstring Lean.Macro.hasDecl}
{docstring Lean.Macro.getCurrNamespace}
{docstring Lean.Macro.resolveNamespace}
{docstring Lean.Macro.resolveGlobalName}
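These queries can be used from a macro's body, as in this sketch (the macro name is illustrative):

```lean
-- Expands to `true` or `false` depending on whether a constant with
-- the given name exists in the environment.
macro "isKnown% " n:ident : term => do
  if ← Lean.Macro.hasDecl n.getId then `(true) else `(false)

#eval isKnown% Nat.add   -- true
#eval isKnown% No.Such   -- false
```

Because the macro consumes the identifier as syntax, `No.Such` is never elaborated as a term, so no unknown-identifier error occurs.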
## Quotation
%%%
tag := "quotation"
%%%
{deftech}_Quotation_ marks code for representation as data of type {name}`Syntax`.
Quoted code is parsed, but not elaborated—while it must be syntactically correct, it need not make sense.
Quotation makes it much easier to programmatically generate code: rather than reverse-engineering the specific nesting of {name Lean.Syntax.node}`node` values that Lean's parser would produce, the parser can be directly invoked to create them.
This is also more robust in the face of refactoring of the grammar that may change the internals of the parse tree without affecting the user-visible concrete syntax.
Quotation in Lean is surrounded by `` `( `` and `)`.
The syntactic category or parser being quoted may be indicated by placing its name after the opening backtick and parenthesis, followed by a vertical bar (`|`).
As a special case, the name `tactic` may be used to parse either tactics or sequences of tactics.
If no syntactic category or parser is provided, Lean attempts to parse the quotation both as a term and as a non-empty sequence of commands.
Term quotations have higher priority than command quotations, so in cases of ambiguity, the interpretation as a term is chosen; this can be overridden by explicitly indicating that the quotation is of a command sequence.
::::keepEnv
:::example "Term vs Command Quotation Syntax"
```lean -show
open Lean
```
In the following example, the contents of the quotation could either be a function application or a sequence of commands.
Both match the same region of the file, so the {tech}[local longest-match rule] is not relevant.
Term quotation has a higher priority than command quotation, so the quotation is interpreted as a term.
Terms expect their {tech}[antiquotations] to have type {lean}``TSyntax `term`` rather than {lean}``TSyntax `command``.
```lean +error (name := cmdQuot)
example (cmd1 cmd2 : TSyntax `command) : MacroM (TSyntax `command) :=
`($cmd1 $cmd2)
```
The result is two type errors like the following:
```leanOutput cmdQuot
Application type mismatch: The argument
cmd1
has type
TSyntax `command
but is expected to have type
TSyntax `term
in the application
cmd1.raw
```
The type of the quotation ({lean}``MacroM (TSyntax `command)``) is not used to select a result because syntax priorities are applied prior to elaboration.
In this case, specifying that the antiquotations are commands resolves the ambiguity because function application would require terms in these positions:
```lean
example (cmd1 cmd2 : TSyntax `command) : MacroM (TSyntax `command) :=
`($cmd1:command $cmd2:command)
```
Similarly, inserting a command into the quotation eliminates the possibility that it could be a term:
```lean
example (cmd1 cmd2 : TSyntax `command) : MacroM (TSyntax `command) :=
`($cmd1 $cmd2 #eval "hello!")
```
:::
::::
```lean -show
-- There is no way to extract parser priorities (they're only saved in the Pratt tables next to
-- compiled Parser code), so this test of priorities checks the observable relative priorities of the
-- quote parsers.
/--
info: do
let _ ← Lean.MonadRef.mkInfoFromRefPos
let _ ← Lean.getCurrMacroScope
let _ ← Lean.MonadQuotation.getContext
pure { raw := { raw := Syntax.missing }.raw } : MacroM (Lean.TSyntax `term)
-/
#check_msgs in
#check (`($(⟨.missing⟩)) : MacroM _)
/--
info: do
let info ← Lean.MonadRef.mkInfoFromRefPos
let _ ← Lean.getCurrMacroScope
let _ ← Lean.MonadQuotation.getContext
pure
{
raw :=
Syntax.node2 info `Lean.Parser.Term.app { raw := Syntax.missing }.raw
(Syntax.node1 info `null { raw := Syntax.missing }.raw) } : MacroM (Lean.TSyntax `term)
-/
#check_msgs in
#check (`($(⟨.missing⟩) $(⟨.missing⟩)) : MacroM _)
/--
info: do
let info ← Lean.MonadRef.mkInfoFromRefPos
let _ ← Lean.getCurrMacroScope
let _ ← Lean.MonadQuotation.getContext
pure
{
raw :=
Syntax.node2 info `null { raw := Syntax.missing }.raw
{ raw := Syntax.missing }.raw } : MacroM (Lean.TSyntax `command)
-/
#check_msgs in
#check (`($(⟨.missing⟩):command $(⟨.missing⟩)) : MacroM _)
/--
info: do
let _ ← Lean.MonadRef.mkInfoFromRefPos
let _ ← Lean.getCurrMacroScope
let _ ← Lean.MonadQuotation.getContext
pure { raw := { raw := Syntax.missing }.raw } : MacroM (Lean.TSyntax `tactic)
-/
#check_msgs in
#check (`(tactic| $(⟨.missing⟩):tactic) : MacroM _)
/--
info: do
let info ← Lean.MonadRef.mkInfoFromRefPos
let _ ← Lean.getCurrMacroScope
let _ ← Lean.MonadQuotation.getContext
pure
{
raw :=
Syntax.node1 info `Lean.Parser.Tactic.seq1
(Syntax.node3 info `null { raw := Syntax.missing }.raw (Syntax.atom info ";")
{ raw := Syntax.missing }.raw) } : MacroM (Lean.TSyntax `tactic.seq)
-/
#check_msgs in
#check (`(tactic|
$(⟨.missing⟩):tactic; $(⟨.missing⟩)) : MacroM _)
```
:::freeSyntax term -open (title := "Quotations")
Lean's syntax includes quotations for terms, commands, tactics, and sequences of tactics, as well as a general quotation syntax that allows any input that Lean can parse to be quoted.
Term quotations have the highest priority, followed by tactic quotations, general quotations, and finally command quotations.
```grammar
`(term|`($_:term))
*******
"`("$_:command+")"
*******
`(term|`(tactic| $_:tactic))
*******
`(term|`(tactic| $_:tactic;*))
*******
"`("p:ident"|"/-- Parse a {p} here -/")"
```
:::
```lean -show
section M
variable {m : Type → Type}
open Lean (MonadRef MonadQuotation)
open Lean.Elab.Term (TermElabM)
open Lean.Elab.Command (CommandElabM)
open Lean.Elab.Tactic (TacticM)
```
Rather than having type {name}`Syntax`, quotations are monadic actions with type {lean}`m Syntax`.
Quotation is monadic because it implements {tech}[hygiene] by adding {tech}[macro scopes] and pre-resolving identifiers, as described in {ref "macro-hygiene"}[the section on hygiene].
The specific monad to be used is an implicit parameter to the quotation, and any monad for which there is an instance of the {name}`MonadQuotation` type class is suitable.
{name}`MonadQuotation` extends {name}`MonadRef`, which gives the quotation access to the source location of the syntax that the macro expander or elaborator is currently processing. {name}`MonadQuotation` additionally includes the ability to add {tech}[macro scopes] to identifiers and use a fresh macro scope for a sub-task.
Monads that support quotation include {name}`MacroM`, {name}`TermElabM`, {name}`CommandElabM`, and {name}`TacticM`.
```lean -show
end M
```
```lean -show
-- Verify claim about monads above
open Lean in
example [Monad m] [MonadQuotation m] : m Syntax := `(term|2 + 2)
```
### Quasiquotation
%%%
tag := "quasiquotation"
%%%
{deftech}_Quasiquotation_ is a form of quotation that may contain {deftech}_antiquotations_, which are regions of the quotation that are not quoted, but instead are expressions that are evaluated to yield syntax.
A quasiquotation is essentially a template; the outer quoted region provides a fixed framework that always yields the same outer syntax, while the antiquotations yield the parts of the final syntax that vary.
All quotations in Lean are quasiquotations, so no special syntax is needed to distinguish quasiquotations from other quotations.
The quotation process does not add macro scopes to identifiers that are inserted via antiquotations, because these identifiers either come from another quotation (in which case they already have macro scopes) or from the macro's input (in which case they should not have macro scopes, because they are not introduced by the macro).
Basic antiquotations consist of a dollar sign (`$`) immediately followed by an identifier.
This means that the value of the corresponding variable, which should be a syntax tree, is to be substituted into this position of the quoted syntax.
Entire expressions may be used as antiquotations by wrapping them in parentheses.
```lean -show
section
open Lean
example (e : Term) : MacroM Syntax := `(term| $e)
example (e : Term) : MacroM Syntax := `(term| $(e))
--example (e : Term) : MacroM Syntax := `(term| $ (e))
end
```
```lean -show
section
open Lean (TSyntax SyntaxNodeKinds)
variable {c : SyntaxNodeKinds}
```
Lean's parser assigns every antiquotation a syntax category based on what the parser expects at the given position.
If the parser expects syntax category {lean}`c`, then the antiquotation's type is {lean}`TSyntax c`.
Some syntax categories can be matched by elements of other categories.
For example, numeric and string literals are valid terms in addition to being their own syntax categories.
Antiquotations may be annotated with the expected category by suffixing them with a colon and the category name, which causes the parser to validate that the annotated category is acceptable in the given position and construct any intermediate layers that are required in the parse tree.
:::freeSyntax antiquot (title := "Antiquotations") -open
```grammar
"$"ident(":"ident)?
*******
"$("term")"(":"ident)?
```
Whitespace is not permitted between the dollar sign (`$`) that initiates an antiquotation and the identifier or parenthesized term that follows.
Similarly, no whitespace is permitted around the colon that annotates the syntax category of the antiquotation.
:::
:::example "Quasiquotation"
Both forms of antiquotation are used in this example.
Because natural numbers are not syntax, {name Lean.quote}`quote` is used to transform a number into syntax that represents it.
```lean
open Lean in
example [Monad m] [MonadQuotation m] (x : Term) (n : Nat) : m Syntax :=
`($x + $(quote (n + 2)))
```
:::
:::::keepEnv
::::example "Antiquotation Annotations"
```lean -show
open Lean
```
This example requires that {lean}`m` is a monad that can perform quotation.
```lean
variable {m : Type → Type} [Monad m] [MonadQuotation m]
```
By default, the antiquotation `$e` is expected to be a term, because that's the syntactic category that's immediately expected as the second argument to addition.
```lean (name := ex1)
def ex1 (e) := show m _ from `(2 + $e)
#check ex1
```
```leanOutput ex1
ex1 {m : Type → Type} [Monad m] [MonadQuotation m] (e : TSyntax `term) : m (TSyntax `term)
```
Annotating `$e` as a numeric literal succeeds, because numeric literals are also valid terms.
The expected type of the parameter `e` changes to ``TSyntax `num``.
```lean (name := ex2)
def ex2 (e) := show m _ from `(2 + $e:num)
#check ex2
```
```leanOutput ex2
ex2 {m : Type → Type} [Monad m] [MonadQuotation m] (e : TSyntax `num) : m (TSyntax `term)
```
Spaces are not allowed between the dollar sign and the identifier.
```syntaxError ex2err1
def ex2 (e) := show m _ from `(2 + $ e:num)
```
```leanOutput ex2err1
<example>:1:34-1:36: unexpected token '$'; expected '`(tactic|' or no space before spliced term
```
Spaces are also not allowed before the colon:
```syntaxError ex2err2
def ex2 (e) := show m _ from `(2 + $e :num)
```
```leanOutput ex2err2
<example>:1:37-1:39: unexpected token ':'; expected ')'
```
::::
:::::
```lean -show
end
```
:::::keepEnv
::::example "Expanding Quasiquotation"
Printing the definition of {name}`f` demonstrates the expansion of a quasiquotation.
```lean (name := expansion)
open Lean in
def f [Monad m] [MonadQuotation m]
(x : Term) (n : Nat) : m Syntax :=
`(fun k => $x + $(quote (n + 2)) + k)
#print f
```
```leanOutput expansion
def f : {m : Type → Type} → [Monad m] → [Lean.MonadQuotation m] → Lean.Term → Nat → m Syntax :=
fun {m} [Monad m] [Lean.MonadQuotation m] x n => do
let info ← Lean.MonadRef.mkInfoFromRefPos
let scp ← Lean.getCurrMacroScope
let quotCtx ← Lean.MonadQuotation.getContext
pure
{
raw :=
Syntax.node2 info `Lean.Parser.Term.fun (Syntax.atom info "fun")
(Syntax.node4 info `Lean.Parser.Term.basicFun
(Syntax.node1 info `null (Syntax.ident info "k".toRawSubstring' (Lean.addMacroScope quotCtx `k scp) []))
(Syntax.node info `null #[]) (Syntax.atom info "=>")
(Syntax.node3 info `«term_+_»
(Syntax.node3 info `«term_+_» x.raw (Syntax.atom info "+") (Lean.quote `term (n + 2)).raw)
(Syntax.atom info "+")
(Syntax.ident info "k".toRawSubstring' (Lean.addMacroScope quotCtx `k scp) []))) }.raw
```
:::paragraph
```lean -show
section
open Lean (Term)
open Lean.Quote
variable {x : Term} {n : Nat}
```
In this output, the quotation is a {keywordOf Lean.Parser.Term.do}`do` block.
It begins by constructing the source information for the resulting syntax, obtained by querying the compiler about the current user syntax being processed.
It then obtains the current macro scope and the quotation context, which includes the name of the module being processed, because macro scopes are added with respect to a module to enable independent compilation and avoid the need for a global counter.
It then constructs a node using helpers such as {name}`Syntax.node1` and {name}`Syntax.node2`, which create a {name}`Syntax.node` with the indicated number of children.
The macro scope is added to each identifier, and {name Lean.TSyntax.raw}`TSyntax.raw` is used to extract the contents of typed syntax wrappers.
The antiquotations of {lean}`x` and {lean (type := "Term")}`quote (n + 2)` occur directly in the expansion, as parameters to {name}`Syntax.node3`.
```lean -show
end
```
:::
::::
:::::
### Splices
%%%
tag := "splices"
%%%
In addition to including other syntax via antiquotations, quasiquotations can include {deftech}_splices_.
Splices indicate that the elements of an array are to be inserted in order.
The repeated elements may include separators, such as the commas between list or array elements.
Splices may consist of an ordinary antiquotation with a {deftech}_splice suffix_, or they may be {deftech}_extended splices_ that provide additional repeated structure.
Splice suffixes consist of either an asterisk (`*`) or a valid atom followed by an asterisk.
Suffixes may follow any identifier or term antiquotation.
An antiquotation with the splice suffix `*` corresponds to a use of `many` or `many1`; both the `*` and `+` suffixes in syntax rules correspond to the `*` splice suffix.
An antiquotation with a splice suffix that includes an atom prior to the asterisk corresponds to a use of `sepBy` or `sepBy1`.
The splice suffix `?` corresponds to a use of `optional` or the `?` suffix in a syntax rule.
Because `?` is a valid identifier character, identifiers must be parenthesized to use it as a suffix.
While there is overlap between repetition specifiers for syntax and antiquotation suffixes, they have distinct syntaxes.
When defining syntax, the suffixes `*`, `+`, `,*`, `,+`, `,*,?`, and `,+,?` are built into Lean.
Comma is the only separator with such a shorthand; other separators must be written out with `sepBy` or `sepBy1`.
Antiquotation suffixes are either just `*` or whatever atom was provided to `sepBy` or `sepBy1` followed by `*`.
The syntax repetitions `+` and `*` correspond to the splice suffix `*`; the repetitions `,*`, `,+`, `,*,?`, and `,+,?` correspond to `,*`.
The optional repetition suffix `?` in syntax rules corresponds to the `?` splice suffix.
:::table +header
* - Syntax Repetition
- Splice Suffix
* - `+` `*`
- `*`
* - `,*` `,+` `,*,?` `,+,?`
- `,*`
* - `sepBy(_, "S")` `sepBy1(_, "S")`
- `S*`
* - `?`
- `?`
:::
::::keepEnv
:::example "Suffixed Splices"
```imports -show
import Lean.Elab
```
```lean -show
open Lean
open Lean.Elab.Command (CommandElabM)
```
This example requires that {lean}`m` is a monad that can perform quotation.
```lean
variable {m : Type → Type} [Monad m] [MonadQuotation m]
```
By default, the antiquotation `$e` is expected to be an array of terms separated by commas, as is expected in the body of an array literal:
```lean (name := ex1)
def ex1 (xs) := show m _ from `(#[$xs,*])
#check ex1
```
```leanOutput ex1
ex1 {m : Type → Type} [Monad m] [MonadQuotation m] (xs : Syntax.TSepArray `term ",") : m (TSyntax `term)
```
However, Lean includes a collection of coercions between various representations of arrays that will automatically insert or remove separators, so an ordinary array of terms is also acceptable:
```lean (name := ex2)
def ex2 (xs : Array (TSyntax `term)) :=
show m _ from `(#[$xs,*])
#check ex2
```
```leanOutput ex2
ex2 {m : Type → Type} [Monad m] [MonadQuotation m] (xs : Array (TSyntax `term)) : m (TSyntax `term)
```
Repetition annotations may also be used with term antiquotations and syntax category annotations.
This example is in {name Lean.Elab.Command.CommandElabM}`CommandElabM` so the result can be conveniently logged.
```lean (name := ex3)
def ex3 (size : Nat) := show CommandElabM _ from do
let mut nums : Array Nat := #[]
for i in [0:size] do
nums := nums.push i
let stx ← `(#[$(nums.map (Syntax.mkNumLit ∘ toString)):num,*])
-- Using logInfo here causes the syntax to be rendered via
-- the pretty printer.
logInfo stx
#eval ex3 4
```
```leanOutput ex3
#[0, 1, 2, 3]
```
:::
::::
::::keepEnv
:::example "Non-Comma Separators"
The following unconventional syntax for lists separates numeric elements by either em dashes or double asterisks, rather than by commas.
```lean
syntax "⟦" sepBy1(num, " — ") "⟧": term
syntax "⟦" sepBy1(num, " ** ") "⟧": term
```
This means that `—*` and `***` are valid splice suffixes between the `⟦` and `⟧` atoms.
In the case of `***`, the first two asterisks are the atom in the syntax rule, while the third is the repetition suffix.
```lean
macro_rules
| `(⟦$n:num—*⟧) => `(⟦$n***⟧)
| `(⟦$n:num***⟧) => `([$n,*])
```
```lean (name := nonComma)
#eval ⟦1 — 2 — 3⟧
```
```leanOutput nonComma
[1, 2, 3]
```
:::
::::
::::keepEnv
:::example "Optional Splices"
```imports -show
import Lean.Elab
```
The following syntax declaration optionally matches a term between two tokens.
The parentheses around the nested `term` are needed because `term?` is a valid identifier.
```lean -show
open Lean
```
```lean
syntax "⟨| " (term)? " |⟩": term
```
The `?` splice suffix for a term expects an {lean}`Option Term`:
```lean
def mkStx [Monad m] [MonadQuotation m]
(e : Option Term) : m Term :=
`(⟨| $(e)? |⟩)
```
```lean (name := checkMkStx)
#check mkStx
```
```leanOutput checkMkStx
mkStx {m : Type → Type} [Monad m] [MonadQuotation m] (e : Option Term) : m Term
```
Supplying {name}`some` results in the optional term being present.
```lean (name := someMkStx)
#eval do logInfo (← mkStx (some (quote 5)))
```
```leanOutput someMkStx
⟨| 5 |⟩
```
Supplying {name}`none` results in the optional term being absent.
```lean (name := noneMkStx)
#eval do logInfo (← mkStx none)
```
```leanOutput noneMkStx
⟨| |⟩
```
:::
::::
```lean -show
section
open Lean Syntax
variable {k k' : SyntaxNodeKinds} {sep : String} [Coe (TSyntax k) (TSyntax k')]
-- Demonstrate the coercions between different kinds of repeated syntax
/-- info: instCoeHTCTOfCoeHTC -/
#check_msgs in
#synth CoeHTCT (TSyntaxArray k) (TSepArray k sep)
/-- info: instCoeHTCTOfCoeHTC -/
#check_msgs in
#synth CoeHTCT (TSyntaxArray k) (TSepArray k' sep)
/-- info: instCoeHTCTOfCoeHTC -/
#check_msgs in
#synth CoeHTCT (Array (TSyntax k)) (TSepArray k sep)
/-- info: instCoeHTCTOfCoeHTC -/
#check_msgs in
#synth CoeHTCT (TSepArray k sep) (TSyntaxArray k)
end
```
### Token Antiquotations
%%%
tag := "token-antiquotations"
%%%
In addition to antiquotations of complete syntax, Lean features {deftech}_token antiquotations_ which allow the source information of an atom to be replaced with the source information from some other syntax.
The resulting synthetic source information is marked {tech}[canonical] so that it will be used for error messages, proof states, and other feedback.
This is primarily useful to control the placement of error messages or other information that Lean reports to users.
A token antiquotation does not allow an arbitrary atom to be inserted via evaluation.
A token antiquotation consists of an atom (that is, a keyword) followed by `%$` and an identifier that refers to the syntax whose source information should replace the atom's own.
:::freeSyntax antiquot +open (title := "Token Antiquotations")
Token antiquotations replace the source information (of type {name Lean.SourceInfo}`SourceInfo`) on a token with the source information from some other syntax.
```grammar
atom"%$"ident
```
:::
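As a sketch (the `delayed!` syntax is hypothetical), a token antiquotation in a pattern can capture a keyword's source information, which can then be attached to a token in the expansion:

```lean
open Lean

syntax "delayed! " term:arg : term

macro_rules
  | `(delayed!%$tk $e) =>
    -- `tk` carries the source information of the user's `delayed!` keyword.
    -- Reusing it on the generated `fun` token makes feedback about the
    -- function point at the original keyword rather than at nothing.
    `(fun%$tk () => $e)
```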
::: TODO
More complex splices with brackets
:::
## Matching Syntax
%%%
tag := "quote-patterns"
%%%
:::seeAlso
New syntax is defined using {ref "syntax-rules"}[syntax extensions].
:::
Quasiquotations can be used in pattern matching to recognize syntax that matches a template.
Just as antiquotations in a quotation that's used as a term are regions that are treated as ordinary non-quoted expressions, antiquotations in patterns are regions that are treated as ordinary Lean patterns.
Quote patterns are compiled differently from other patterns, so they can't be intermixed with non-quote patterns in a single {keywordOf Lean.Parser.Term.match}`match` expression.
Like ordinary quotations, quote patterns are first processed by Lean's parser.
The parser's output is then compiled into code that determines whether there is a match.
Syntax matching assumes that the syntax being matched was produced by Lean's parser, either via quotation or directly in user code, and uses this to omit some checks.
For example, if nothing but a particular keyword can be present in a given position, the check may be omitted.
Syntax matches a quote pattern in the following cases:
: Atoms
Keyword atoms (such as {keywordOf termIfThenElse}`if` or {keywordOf Lean.Parser.Term.match}`match`) result in singleton nodes whose kind is `token.` followed by the atom.
In many cases, it is not necessary to check for specific atom values because the grammar allows only a single keyword; in such cases, no check is performed.
If the syntax of the term being matched requires the check, then the node kind is compared.
Literals, such as string or numeric literals, are compared via their underlying string representation.
The pattern `` `(0x15) `` and the quotation `` `(21) `` do not match.
: Nodes
If both the pattern and the value being matched represent {name}`Syntax.node`, there is a match when both have the same syntax kind, the same number of children, and each child pattern matches the corresponding child value.
: Identifiers
If both the pattern and the value being matched are identifiers, then their literal {name Lean.Name}`Name` values are compared for equality modulo macro scopes.
Identifiers that “look” the same match, and it does not matter if they refer to the same binding.
This design choice allows quote pattern matching to be used in contexts that don't have access to a compile-time environment in which names can be compared by reference.
Because quotation pattern matching is based on the node kinds emitted by the parser, quotations that look identical may not match if they come from different syntax categories.
If in doubt, including the syntax category in the quotation can help.
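The literal-comparison rule can be observed directly. In this sketch (the helper name `isTwentyOne` is hypothetical), a quote pattern recognizes the numeric literal `21` by its string representation:

```lean
open Lean

def isTwentyOne : Term → Bool
  | `(21) => true
  | _ => false
```

Applied to the result of quoting `21`, this helper yields `true`; applied to the result of quoting `0x15`, it yields `false`, even though both literals denote the same number.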
:::leanSection
```lean -show
open Lean Syntax
variable {k : SyntaxNodeKinds} {sep : String}
```
Variables bound by syntax pattern matches are of type {lean}`TSyntax k`, where {lean}`k` describes the potential syntax kinds.
Variables in repetitions are of type {lean}`TSyntaxArray k`, or {lean}`TSepArray k sep` if the repetition is separated with the string {lean}`sep`.
{name}`TSyntax` is described in more detail in {ref "typed-syntax"}[the section on typed syntax].
:::
::::example "Syntax Pattern Matching"
```lean -show
open Lean Syntax
```
List comprehensions are a notation for writing lists that is inspired by standard set builder notation.
A list comprehension consists of square brackets that contain a result term followed by some number of _qualifiers_; each qualifier either introduces a variable from some other list or imposes a condition that must be satisfied.
Qualifiers are nested: each new variable's value is evaluated for every prior value.
```lean
syntax qbind := ident "←" term
syntax qpred := term
syntax qualifier := atomic(qbind) <|> qpred
syntax "[" term "|" qualifier,* "]" : term
```
List comprehensions can be desugared to a sequence of calls to {name}`List.flatMap`.
Variable introductions are translated to a {name List.flatMap}`flatMap` on the variable's value expression, while predicates are translated to a conditional that returns one value if the predicate is true and zero values otherwise.
The body of the final {name List.flatMap}`flatMap` is the result term.
This desugaring can be implemented as a macro that uses quasiquotation patterns:
```lean
macro_rules
| `(term|[$e | $qs,* ]) => do
let init ← `([$e])
qs.getElems.foldrM (β := Term) (init := init) fun
| `(qualifier|$x ← $e'), r =>
`(($e' : List _) |>.flatMap fun $x => $r)
| `(qualifier|$e':term), r =>
`((if $e' then [()] else []) |>.flatMap fun () => $r)
| other, _ =>
Macro.throwErrorAt other "Unknown qualifier"
```
Initially, the sequence of qualifiers has type {lean}``TSepArray `qualifier ","``, indicating that it represents a comma-separated sequence of qualifiers.
{lean}`TSepArray.getElems` transforms it into a {lean}``TSyntaxArray `qualifier``, which is an abbreviation for {lean}``Array (TSyntax `qualifier)``.
This allows {tech}[generalized field notation] to be used to call {name}`Array.foldrM`.
The `term` annotation in the branch for predicates is required to prevent the matched value from having syntax kind {lean}`` `qualifier ``; one {name Syntax.node}`node` must be unwrapped from the value.
List comprehensions behave as expected:
```lean (name := evalComp)
#eval [ s!"{x}; {y}" |
x ← (1...5).toList,
x % 2 = 0,
y ← [true, false]
]
```
```leanOutput evalComp
["2; true", "2; false", "4; true", "4; false"]
```
::::
## Defining Macros
%%%
tag := "defining-macros"
%%%
There are two primary ways to define macros: the {keywordOf Lean.Parser.Command.macro_rules}`macro_rules` command and the {keywordOf Lean.Parser.Command.macro}`macro` command.
The {keywordOf Lean.Parser.Command.macro_rules}`macro_rules` command associates a macro with existing syntax, while the {keywordOf Lean.Parser.Command.macro}`macro` command simultaneously defines new syntax and a macro that translates it to existing syntax.
The {keywordOf Lean.Parser.Command.macro}`macro` command can be seen as a generalization of {keywordOf Lean.Parser.Command.notation}`notation` that allows the expansion to be generated programmatically, rather than simply by substitution.
### The `macro_rules` Command
%%%
tag := "macro_rules"
%%%
:::syntax command (title := "Rule-Based Macros With {keyword}`macro_rules`")
The {keywordOf Lean.Parser.Command.macro_rules}`macro_rules` command takes a sequence of rewrite rules, specified as syntax pattern matches, and adds each as a macro.
The rules are attempted in order, before previously-defined macros, and later macro definitions may add further macro rules.
```grammar
$[$d:docComment]?
$[@[$attrs,*]]?
$_:attrKind macro_rules $[(kind := $k)]?
$[| `(free{(p:ident"|")?/-- Suitable syntax for {p} -/}) => $e]*
```
:::
The patterns in the macros must be quotation patterns.
They may match syntax from any syntax category, but a given pattern can only ever match a single syntax kind.
If no category or parser is specified for the quotation, then it may match terms or (sequences of) commands, but never both.
In case of ambiguity, the term parser is chosen.
Internally, macros are tracked in a table that maps each {tech}[syntax kind] to its macros.
The {keywordOf Lean.Parser.Command.macro_rules}`macro_rules` command may be explicitly annotated with a syntax kind.
If a syntax kind is explicitly provided, the macro definition checks that each quotation pattern has that kind.
If the parse result for the quotation was a {tech}[choice node] (that is, if the parse was ambiguous), then the pattern is duplicated once for each alternative with the specified kind.
It is an error if none of the alternatives have the specified kind.
If no kind is provided explicitly, then the kind determined by the parser is used for each pattern.
The patterns are not required to all have the same syntax kind; macros are defined for each syntax kind used by at least one of the patterns.
It is an error if the parse result for a quotation pattern was a {tech}[choice node] (that is, if the parse was ambiguous).
The documentation comment associated with {keywordOf Lean.Parser.Command.macro_rules}`macro_rules` is displayed to users if the syntax itself has no documentation comment.
Otherwise, the documentation comment for the syntax itself is shown.
As with {ref "notations"}[notations] and {ref "operators"}[operators], macro rules may be declared `scoped` or `local`.
Scoped macros are only active when the current namespace is open, and local macro rules are only active in the current {tech}[section scope].
::::keepEnv
:::example "Idiom Brackets"
Idiom brackets are an alternative syntax for working with applicative functors.
If the idiom brackets contain a function application, then the function is wrapped in {name}`pure` and applied to each argument using `<*>`. {TODO}[Operator hyperlinking to docs]
Lean does not support idiom brackets by default, but they can be defined using a macro.
```lean
syntax (name := idiom) "⟦" (term:arg)+ "⟧" : term
macro_rules
| `(⟦$f $args*⟧) => do
let mut out ← `(pure $f)
for arg in args do
out ← `($out <*> $arg)
return out
```
This new syntax can be used immediately.
```lean
def addFirstThird [Add α] (xs : List α) : Option α :=
⟦Add.add xs[0]? xs[2]?⟧
```
```lean (name := idiom1)
#eval addFirstThird (α := Nat) []
```
```leanOutput idiom1
none
```
```lean (name := idiom2)
#eval addFirstThird [1]
```
```leanOutput idiom2
none
```
```lean (name := idiom3)
#eval addFirstThird [1,2,3,4]
```
```leanOutput idiom3
some 4
```
:::
::::
::::keepEnv
:::example "Scoped Macros"
```lean -show
open Lean
```
Scoped macro rules are active only in their namespace.
When the namespace `ConfusingNumbers` is open, numeric literals will be assigned an incorrect meaning.
```lean
namespace ConfusingNumbers
```
The following macro recognizes terms that are odd numeric literals, and replaces them with double their value.
If it unconditionally replaced them with double their value, then macro expansion would become an infinite loop because the same rule would always match the output.
```lean
scoped macro_rules
| `($n:num) => do
if n.getNat % 2 = 0 then Lean.Macro.throwUnsupported
let n' := (n.getNat * 2)
`($(Syntax.mkNumLit (info := n.raw.getHeadInfo) (toString n')))
```
Once the namespace ends, the macro is no longer used.
```lean
end ConfusingNumbers
```
Without opening the namespace, numeric literals function in the usual way.
```lean (name := nums1)
#eval (3, 4)
```
```leanOutput nums1
(3, 4)
```
When the namespace is open, the macro replaces {lean}`3` with {lean}`6`.
```lean (name := nums2)
open ConfusingNumbers
#eval (3, 4)
```
```leanOutput nums2
(6, 4)
```
It is not typically useful to change the interpretation of numeric or other literals in macros.
However, scoped macros can be very useful when adding new rules to extensible tactics such as {tactic}`trivial` that work well with the contents of the namespaces but should not always be used.
:::
::::
Behind the scenes, a {keywordOf Lean.Parser.Command.macro_rules}`macro_rules` command generates one macro function for each syntax kind that is matched in its quote patterns.
This function has a default case that throws the {name Lean.Macro.Exception.unsupportedSyntax}`unsupportedSyntax` exception, so further macros may be attempted.
A single {keywordOf Lean.Parser.Command.macro_rules}`macro_rules` command with two rules is not always equivalent to two separate single-match commands.
First, the rules in a {keywordOf Lean.Parser.Command.macro_rules}`macro_rules` are tried from top to bottom, but recently-declared macros are attempted first, so the order would need to be reversed.
Additionally, if an earlier rule in the macro throws the {name Lean.Macro.Exception.unsupportedSyntax}`unsupportedSyntax` exception, then the later rules are not tried; if they were instead in separate {keywordOf Lean.Parser.Command.macro_rules}`macro_rules` commands, then they would be attempted.
::::example "One vs. Two Sets of Macro Rules"
```lean -show
open Lean.Macro
```
The `arbitrary!` macro is intended to expand to some arbitrarily-determined value of a given type.
```lean
syntax (name := arbitrary!) "arbitrary! " term:arg : term
```
:::keepEnv
```lean
macro_rules
| `(arbitrary! ()) => `(())
| `(arbitrary! Nat) => `(42)
| `(arbitrary! ($t1 × $t2)) => `((arbitrary! $t1, arbitrary! $t2))
| `(arbitrary! Nat) => `(0)
```
Users may extend it by defining further sets of macro rules, such as this rule for {lean}`Empty` that fails:
```lean
macro_rules
| `(arbitrary! Empty) => throwUnsupported
```
```lean (name := arb1)
#eval arbitrary! (Nat × Nat)
```
```leanOutput arb1
(42, 42)
```
:::
:::keepEnv
If all of the macro rules had been defined as individual cases, then the result would have instead used the later case for {lean}`Nat`.
This is because the rules in a single {keywordOf Lean.Parser.Command.macro_rules}`macro_rules` command are checked from top to bottom, but more recently-defined {keywordOf Lean.Parser.Command.macro_rules}`macro_rules` commands take precedence over earlier ones.
```lean
macro_rules
| `(arbitrary! ()) =>
`(())
macro_rules
| `(arbitrary! Nat) =>
`(42)
macro_rules
| `(arbitrary! ($t1 × $t2)) =>
`((arbitrary! $t1, arbitrary! $t2))
macro_rules
| `(arbitrary! Nat) =>
`(0)
macro_rules
| `(arbitrary! Empty) =>
throwUnsupported
```
```lean (name := arb2)
#eval arbitrary! (Nat × Nat)
```
```leanOutput arb2
(0, 0)
```
:::
Additionally, if any rule throws the {name Lean.Macro.Exception.unsupportedSyntax}`unsupportedSyntax` exception, no further rules in that command are checked.
```lean
macro_rules
| `(arbitrary! (List Nat)) => throwUnsupported
| `(arbitrary! (List $_)) => `([])
macro_rules
| `(arbitrary! (Array Nat)) => `(#[42])
macro_rules
| `(arbitrary! (Array $_)) => throwUnsupported
```
The case for {lean}`List Nat` fails to elaborate, because macro expansion did not translate the {keywordOf arbitrary!}`arbitrary!` syntax into something supported by the elaborator.
```lean (name := arb3) +error
#eval arbitrary! (List Nat)
```
```leanOutput arb3
elaboration function for `arbitrary!` has not been implemented
arbitrary! (List Nat)
```
The case for {lean}`Array Nat` succeeds because the earlier {keywordOf Lean.Parser.Command.macro_rules}`macro_rules` command for arrays is attempted after the later one throws the exception.
```lean (name := arb4)
#eval arbitrary! (Array Nat)
```
```leanOutput arb4
#[42]
```
::::
### The `macro` Command
%%%
tag := "macro-command"
%%%
```lean -show
section
open Lean
```
The {keywordOf Lean.Parser.Command.macro}`macro` command simultaneously defines a new {tech}[syntax rule] and associates it with a {tech}[macro].
Unlike {keywordOf Lean.Parser.Command.notation}`notation`, which can define only new term syntax and in which the expansion is a term into which the parameters are to be substituted, the {keywordOf Lean.Parser.Command.macro}`macro` command may define syntax in any {tech}[syntax category] and it may use arbitrary code in the {name}`MacroM` monad to generate the expansion.
Because macros are so much more flexible than notations, Lean cannot automatically generate an unexpander; this means that new syntax implemented via the {keywordOf Lean.Parser.Command.macro}`macro` command is available for use in _input_ to Lean, but Lean's output does not use it without further work.
:::syntax command (title := "Macro Declarations")
```grammar
$[$_:docComment]?
$[@[$attrs,*]]?
$_:attrKind macro$[:$p]? $[(name := $_)]? $[(priority := $_)]? $xs:macroArg* : $k:ident =>
$tm
```
:::
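As a small illustration, a term macro can be declared and immediately used in a single command. (The name `double!` is a hypothetical example, not part of Lean's standard syntax.)
```lean
-- A hypothetical term macro: `double! e` expands to `e + e`.
macro "double! " e:term : term => `($e + $e)

#eval double! 21
```
Because the expansion happens before elaboration, `double! 21` behaves exactly as if `21 + 21` had been written directly.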
:::syntax Lean.Parser.Command.macroArg -open (title := "Macro Arguments")
A macro's arguments are either syntax items (as used in the {keywordOf Lean.Parser.Command.syntax}`syntax` command) or syntax items with attached names.
```grammar
$s:stx
```
```grammar
$x:ident:$stx
```
:::
In the expansion, the names that are attached to syntax items are bound; they have type {name Lean.TSyntax}`TSyntax` for the appropriate syntax kinds.
If the syntax matched by the parser does not have a defined kind (e.g. because the name is applied to a complex specification), then the type is {lean}`TSyntax Name.anonymous`.
```lean -show -keep
-- Check the typing rules
open Lean Elab Term Macro Meta
elab "dbg_type " e:term ";" body:term : term => do
let e' ← elabTerm e none
let t ← inferType e'
logInfoAt e t
elabTerm body none
/--
info: TSyntax `str
---
info: TSyntax Name.anonymous
---
info: Syntax.TSepArray `num ","
-/
#check_msgs in
macro "gah!" thing:str other:(str <|> num) arg:num,* : term => do
dbg_type thing; pure ()
dbg_type other; pure ()
dbg_type arg; pure ()
return quote s!"{thing.raw} ||| {other.raw} ||| {arg.getElems}"
/-- info: "(str \"\\\"one\\\"\") ||| (num \"44\") ||| #[(num \"2\"), (num \"3\")]" : String -/
#check_msgs in
#check gah! "one" 44 2,3
```
The documentation comment is associated with the new syntax, and the attribute kind (none, `local`, or `scoped`) governs the visibility of the macro just as it does for notations: `scoped` macros are available in the namespace in which they are defined or in any {tech}[section scope] that opens that namespace, while `local` macros are available only in the local section scope.
Behind the scenes, the {keywordOf Lean.Parser.Command.macro}`macro` command is itself implemented by a macro that expands it to a {keywordOf Lean.Parser.Command.syntax}`syntax` command and a {keywordOf Lean.Parser.Command.macro_rules}`macro_rules` command.
Any attributes applied to the macro command are applied to the syntax definition, but not to the {keywordOf Lean.Parser.Command.macro_rules}`macro_rules` command.
```lean -show
end
```
### The Macro Attribute
%%%
tag := "macro-attribute"
%%%
{tech}[Macros] can be manually added to a syntax kind using the {keywordOf Lean.Parser.Attr.macro}`macro` attribute.
This low-level means of specifying macros is typically not useful, except as a result of code generation by macros that themselves generate macro definitions.
:::syntax attr (title := "The {keyword}`macro` Attribute")
The {keywordOf Lean.Parser.Attr.macro}`macro` attribute specifies that a function is to be considered a {tech}[macro] for the specified syntax kind.
```grammar
macro $_:ident
```
:::
::::keepEnv
:::example "The Macro Attribute"
```lean -show
open Lean Macro
```
```lean
/-- Generate a list based on N syntactic copies of a term -/
syntax (name := rep) "[" num " !!! " term "]" : term
@[macro rep]
def expandRep : Macro
| `([ $n:num !!! $e:term]) =>
let e' := Array.replicate n.getNat e
`([$e',*])
| _ =>
throwUnsupported
```
Evaluating this new expression demonstrates that the macro is present.
```lean (name := attrEx1)
#eval [3 !!! "hello"]
```
```leanOutput attrEx1
["hello", "hello", "hello"]
```
:::
::::
{include 0 Manual.NotationsMacros.Elab}
{include 0 Manual.NotationsMacros.Delab}
import VersoManual
import Manual.Meta
import Manual.Papers
import Std.Tactic.Do
open Verso.Genre Manual
open Verso.Genre.Manual.InlineLean
open Verso.Code.External (lit)
set_option pp.rawOnError true
set_option verso.docstring.allowMissing true
set_option linter.unusedVariables false
set_option linter.typography.quotes true
set_option linter.typography.dashes true
set_option mvcgen.warning false
#doc (Manual) "Tutorial: Verifying Imperative Programs Using `mvcgen`" =>
%%%
tag := "mvcgen-tactic-tutorial"
htmlSplit := .never
%%%
This section is a tutorial that introduces the most important concepts of {tactic}`mvcgen` top-down.
Recall that you need to import {module}`Std.Tactic.Do` and open {namespace}`Std.Do` to run these examples:
```imports
import Std.Tactic.Do
```
```lean
open Std.Do
```
# Preconditions and Postconditions
One style in which program specifications can be written is to provide a {tech}_precondition_ $`P`, which the caller of a program $`\mathit{prog}` is expected to ensure, and a {tech}_postcondition_ $`Q`, which $`\mathit{prog}` itself is expected to ensure.
The program $`\mathit{prog}` satisfies the specification if running it when the precondition $`P` holds always results in the postcondition $`Q` holding.
In general, many different preconditions might suffice for a program to ensure the postcondition.
After all, new preconditions can be generated by replacing a precondition $`P_1` with $`P_1 \wedge P_2`.
The {tech}_weakest precondition_ $`\textbf{wp}⟦\mathit{prog}⟧(Q)` of a program $`\mathit{prog}` and postcondition $`Q` is a precondition for which $`\mathit{prog}` ensures the postcondition $`Q` and is implied by all other such preconditions.
One way to prove something about the result of a program is to find the weakest precondition that guarantees the desired result, and then to show that this weakest precondition is simply true.
This means that the postcondition holds no matter what.
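The following sketch illustrates the idea using the classic assignment rule from imperative Hoare logic rather than Lean's monadic {lean}`wp`; the statement and postcondition are illustrative, not drawn from this tutorial's examples:
```lean
-- Classic illustration: for the assignment `x := x + 1` and the
-- postcondition `x > 0`, substitution yields the weakest precondition:
--   wp⟦x := x + 1⟧(x > 0) = (x + 1 > 0) = (x ≥ 0)
-- The underlying logical equivalence is easy to check:
example (x : Int) : x + 1 > 0 ↔ x ≥ 0 := by omega
```
Any stronger precondition, such as `x ≥ 5`, also guarantees the postcondition, but `x ≥ 0` is the weakest one that does.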
# Loops and Invariants
:::leanFirst
As a first example of {tactic}`mvcgen`, the function {name}`mySum` computes the sum of an array using {ref "let-mut"}[local mutable state] and a {keywordOf Lean.Parser.Term.doFor}`for` loop:
```lean
def mySum (l : Array Nat) : Nat := Id.run do
let mut out := 0
for i in l do
out := out + i
return out
```
:::
If {name}`mySum` is correct, then it computes the same result as {name}`Array.sum`.
In {name}`mySum`, the use of {keywordOf Lean.Parser.Term.do}`do` is an internal implementation detail—the function's signature makes no mention of any monad.
Thus, the proof first manipulates the goal into a form that is amenable to the use of {tactic}`mvcgen`, using the lemma {name}`Id.of_wp_run_eq`.
This lemma states that facts about the result of running a computation in the {name}`Id` monad that terminates normally (`Id` computations never throw exceptions) can be proved by showing that the {tech}[weakest precondition] that ensures the desired result is true.
Next, the proof uses {tactic}`mvcgen` to replace the formulation in terms of weakest preconditions with a set of {tech}[verification conditions].
While {tactic}`mvcgen` is mostly automatic, it does require an invariant for the loop.
A {tech}_loop invariant_ is a statement that is both assumed and guaranteed by the body of the loop; if it is true when the loop begins, then it will be true when the loop terminates.
```lean
theorem mySum_correct (l : Array Nat) : mySum l = l.sum := by
-- Focus on the part of the program with the `do` block (`Id.run ...`)
generalize h : mySum l = x
apply Id.of_wp_run_eq h
-- Break down into verification conditions
mvcgen
-- Specify the invariant which should hold throughout the loop
-- * `out` refers to the current value of the `let mut` variable
-- * `xs` is a `List.Cursor`, which is a data structure representing
-- a list that is split into `xs.prefix` and `xs.suffix`.
-- It tracks how far into the loop we have gotten.
-- Our invariant is that `out` holds the sum of the prefix.
-- The notation ⌜p⌝ embeds a `p : Prop` into the assertion language.
case inv1 => exact ⇓⟨xs, out⟩ => ⌜xs.prefix.sum = out⌝
-- After specifying the invariant, we can further simplify our goals
-- by "leaving the proof mode". `mleave` is just
-- `simp only [...] at *` with a stable simp subset.
all_goals mleave
-- Prove that our invariant is preserved at each step of the loop
case vc1 ih =>
-- The goal here mentions `pref`, which binds the `prefix` field of
-- the cursor passed to the invariant. Unpacking the
-- (dependently-typed) cursor makes it easier for `grind`.
grind
-- Prove that the invariant is true at the start
case vc2 =>
grind
-- Prove that the invariant at the end of the loop implies the
-- property we wanted
case vc3 h =>
grind
```
:::paragraph
Note that the case labels are actually unique prefixes of the full case labels.
Whenever referring to cases, only this prefix should be used; the suffix is merely a hint to the user of where that particular {tech}[VC] came from.
For example:
* `vc1.step` conveys that this {tech}[VC] proves the inductive step for the loop
* `vc2.a.pre` is meant to prove that the hypotheses of a goal imply the precondition of a specification (of {name}`forIn`).
* `vc3.a.post.success` is meant to prove that the postcondition of a specification (of {name}`forIn`) implies the desired property.
:::
:::paragraph
After specifying the loop invariant, the proof can be shortened to just {keyword}`all_goals mleave; grind` (where {tactic}`mleave` leaves the stateful proof mode, cleaning up the proof state).
```lean
theorem mySum_correct_short (l : Array Nat) : mySum l = l.sum := by
generalize h : mySum l = x
apply Id.of_wp_run_eq h
mvcgen
case inv1 => exact ⇓⟨xs, out⟩ => ⌜xs.prefix.sum = out⌝
all_goals mleave; grind
```
This pattern is so common that {tactic}`mvcgen` comes with special syntax for it:
```lean
theorem mySum_correct_shorter (l : Array Nat) : mySum l = l.sum := by
generalize h : mySum l = x
apply Id.of_wp_run_eq h
mvcgen
invariants
· ⇓⟨xs, out⟩ => ⌜xs.prefix.sum = out⌝
with grind
```
The {keyword}`mvcgen invariants `{lit}`...`{keyword}` with `{lit}`...` form abbreviates the tactic sequence {keyword}`mvcgen; case`{lit}` inv1 => ...`{keyword}`; all_goals mleave; grind` shown above.
This is the form that we will use from now on.
:::
:::paragraph
It is helpful to compare the proof of {name}`mySum_correct_shorter` to a traditional correctness proof:
```lean
theorem mySum_correct_vanilla (l : Array Nat) : mySum l = l.sum := by
-- Turn the array into a list
cases l with | mk l =>
-- Unfold `mySum` and rewrite `forIn` to `foldl`
simp [mySum]
-- Generalize the inductive hypothesis
suffices h : ∀ out, List.foldl (· + ·) out l = out + l.sum by simp [h]
-- Grind away
induction l with grind
```
:::
:::paragraph
This proof is just as succinct as the proof in {name}`mySum_correct_shorter` that uses {tactic}`mvcgen`.
However, the traditional approach relies on important properties of the program:
* The {keywordOf Lean.Parser.Term.doFor}`for` loop does not {keywordOf Lean.Parser.Term.doBreak}`break` or {keywordOf Lean.Parser.Term.doReturn}`return` early. Otherwise, the {name}`forIn` could not be rewritten to a {name Array.foldl}`foldl`.
* The loop body {lean (type := "Nat → Nat → Nat")}`(· + ·)` is small enough to be repeated in the proof.
* The loop body does not carry out any effects in the underlying monad (that is, the only effects are those introduced by {keywordOf Lean.Parser.Term.do}`do`-notation).
The {name}`Id` monad has no effects, so all of its computations are pure.
While {name}`forIn` could still be rewritten to a {name Array.foldlM}`foldlM`, reasoning about the monadic loop body can be tough for {tactic}`grind`.
In the following sections, we will go through several examples to learn about {tactic}`mvcgen` and its support library, and also see where traditional proofs become difficult.
This is usually caused by:
* {keywordOf Lean.Parser.Term.do}`do` blocks using control flow constructs such as {keywordOf Lean.Parser.Term.doFor}`for` loops, {keywordOf Lean.Parser.Term.doBreak}`break`s and early {keywordOf Lean.Parser.Term.doReturn}`return`.
* The use of effects in non-{name}`Id` monads, which affects the implicit monadic context (state, exceptions) in ways that need to be reflected in loop invariants.
{tactic}`mvcgen` scales to these challenges with reasonable effort.
:::
# Control Flow
:::leanFirst
Let us consider another example that combines {keywordOf Lean.Parser.Term.doFor}`for` loops with an early return.
{name}`List.Nodup` is a predicate that asserts that a given list does not contain any duplicates.
The function {name}`nodup` below decides this predicate:
```lean
def nodup (l : List Int) : Bool := Id.run do
let mut seen : Std.HashSet Int := ∅
for x in l do
if x ∈ seen then
return false
seen := seen.insert x
return true
```
:::
:::paragraph
This function is correct if it returns {name}`true` for every list that satisfies {name}`List.Nodup` and {name}`false` for every list that does not.
Just as it was in {name}`mySum`, the use of {keywordOf Lean.Parser.Term.do}`do`-notation and the {name}`Id` monad is an internal implementation detail of {name}`nodup`.
Thus, the proof begins by using {name}`Id.of_wp_run_eq` to make the proof state amenable to {tactic}`mvcgen`:
```lean
theorem nodup_correct (l : List Int) : nodup l ↔ l.Nodup := by
generalize h : nodup l = r
apply Id.of_wp_run_eq h
mvcgen
invariants
· Invariant.withEarlyReturn
(onReturn := fun ret seen => ⌜ret = false ∧ ¬l.Nodup⌝)
(onContinue := fun xs seen =>
⌜(∀ x, x ∈ seen ↔ x ∈ xs.prefix) ∧ xs.prefix.Nodup⌝)
with grind
```
:::
:::paragraph
```lean -show
section
variable {l : List Int} {ret : Bool} {seen : Std.HashSet Int} {xs : l.Cursor}
axiom onReturn : Bool → Std.HashSet Int → SPred PostShape.pure.args
axiom onContinue : l.Cursor → Std.HashSet Int → SPred PostShape.pure.args
axiom onExcept : ExceptConds PostShape.pure
```
The proof has the same succinct structure as for the initial {name}`mySum` example, because we again offload all proofs to {tactic}`grind` and its existing automation around {name}`List.Nodup`.
Therefore, the only difference is in the {tech}[loop invariant].
Since our loop has an {ref "early-return"}[early return], we construct the invariant using the helper function {lean}`Invariant.withEarlyReturn`.
This function allows us to specify the invariant in three parts:
* {lean}`onReturn ret seen` holds after the loop has been exited through an early return with value {lean}`ret`.
In case of {name}`nodup`, the only value that is ever returned is {name}`false`, in which case {name}`nodup` has decided there _is_ a duplicate in the list.
* {lean}`onContinue xs seen` is the regular induction step, proving that the invariant is preserved by each loop iteration.
The iteration state is captured by the cursor {lean}`xs`.
The given example asserts that the set {lean}`seen` contains exactly the elements encountered in previous loop iterations and that no duplicates have been seen so far.
* {lean}`onExcept` must hold when the loop throws an exception.
There are no exceptions in {lean}`Id`, so we leave it unspecified to use the default.
(Exceptions will be discussed at a later point.)
```lean -show
end
```
:::
:::paragraph
Note that the form `mvcgen invariants?` will suggest an initial invariant using {name}`Invariant.withEarlyReturn`, so there is no need to memorize the exact syntax for specifying invariants:
```lean (name := invariants?)
example (l : List Int) : nodup l ↔ l.Nodup := by
generalize h : nodup l = r
apply Id.of_wp_run_eq h
mvcgen invariants? <;> sorry
```
The tactic suggests a starting invariant.
This starting point will not allow the proof to succeed—after all, if the invariant can be inferred by the system, then there's no need to make the user specify it—but it does provide a reminder of the correct syntax to use for assertions in the current monad:
```leanOutput invariants?
Try this:
[apply] invariants
·
Invariant.withEarlyReturn (onReturn := fun r letMuts => ⌜l.Nodup ∧ (r = true ↔ l.Nodup)⌝) (onContinue :=
fun xs letMuts => ⌜xs.prefix = [] ∧ letMuts = ∅ ∨ xs.suffix = [] ∧ l.Nodup⌝)
```
:::
:::paragraph
Now consider the following direct (and excessively golfed) proof without {tactic}`mvcgen`:
```lean
theorem nodup_correct_directly (l : List Int) : nodup l ↔ l.Nodup := by
rw [nodup]
generalize hseen : (∅ : Std.HashSet Int) = seen
change ?lhs ↔ l.Nodup
suffices h : ?lhs ↔ l.Nodup ∧ ∀ x ∈ l, x ∉ seen by grind
clear hseen
induction l generalizing seen with grind [Id.run_pure, Id.run_bind]
```
:::
:::paragraph
Some observations:
* The proof is even shorter than the one with {tactic}`mvcgen`.
* The use of {tactic}`generalize` to generalize the accumulator relies on there being exactly one occurrence of {lean (type := "Std.HashSet Int")}`∅` to generalize. If that were not the case, we would have to copy parts of the program into the proof. This is a no-go for larger functions.
* {tactic}`grind` splits along the control flow of the function and reasons about {name}`Id`, given the right lemmas.
While this works for {name}`Id.run_pure` and {name}`Id.run_bind`, it would not work for {name}`Id.run_seq`, for example, because that lemma is not {tech (key := "E-matching")}[E-matchable].
If {tactic}`grind` failed, we would be forced to do all the control-flow splitting and monadic reasoning by hand until {tactic}`grind` could pick up again.
:::
The usual way to avoid replicating the control flow of a definition in a proof is to use the {tactic}`fun_cases` or {tactic}`fun_induction` tactics.
Unfortunately, {tactic}`fun_cases` does not help with control flow inside a {name}`forIn` application.
The {tactic}`mvcgen` tactic, on the other hand, ships with support for many {name}`forIn` implementations.
It can easily be extended (with {attrs}`@[spec]` annotations) to support custom {name}`forIn` implementations.
Furthermore, an {tactic}`mvcgen`-powered proof will never need to copy any part of the original program.
# Compositional Reasoning About Effectful Programs Using Hoare Triples
:::leanSection
```lean -show
variable (M : Type u → Type v) [Monad M] (α : Type u)
axiom M.run : M α → β → α
```
The previous examples reasoned about functions defined using {lean}`Id.run`{lit}` `{keywordOf Lean.Parser.Term.do}`do`{lit}` <prog>` to make use of local mutability and early return in {lit}`<prog>`.
However, real-world programs often use {keywordOf Lean.Parser.Term.do}`do` notation and monads {lean}`M` to hide away state and failure conditions as implicit “effects”.
In this use case, functions usually omit the {name}`M.run`.
Instead they have a monadic return type {lean}`M α` and compose well with other functions of that return type.
In other words, the monad is part of the function's _interface_, not merely its implementation.
:::
:::leanFirst
Here is an example involving a stateful function {name}`mkFresh` that returns auto-incremented counter values:
```lean
structure Supply where
counter : Nat
def mkFresh : StateM Supply Nat := do
let n ← (·.counter) <$> get
modify fun s => { s with counter := s.counter + 1 }
pure n
def mkFreshN (n : Nat) : StateM Supply (List Nat) := do
let mut acc := #[]
for _ in [:n] do
acc := acc.push (← mkFresh)
pure acc.toList
```
:::
::::leanFirst
:::leanSection
```lean -show
variable (n : Nat)
```
{lean}`mkFreshN n` returns {lean}`n` “fresh” numbers, modifying the internal {name}`Supply` state through {name}`mkFresh`.
Here, “fresh” refers to all previously generated numbers being distinct from the next generated number.
We can formulate and prove a correctness property {name}`mkFreshN_correct` in terms of {name}`List.Nodup`: the returned list of numbers should contain no duplicates.
In this proof, {name}`StateM.of_wp_run'_eq` serves the same role that {name}`Id.of_wp_run_eq` did in the preceding examples.
:::
```lean
theorem mkFreshN_correct (n : Nat) : ((mkFreshN n).run' s).Nodup := by
-- Focus on `(mkFreshN n).run' s`.
generalize h : (mkFreshN n).run' s = x
apply StateM.of_wp_run'_eq h
-- Show something about monadic program `mkFresh n`.
-- The `mkFreshN` and `mkFresh` arguments to `mvcgen` add to an
-- internal `simp` set and makes `mvcgen` unfold these definitions.
mvcgen [mkFreshN, mkFresh]
invariants
-- Invariant: The counter is larger than any accumulated number,
-- and all accumulated numbers are distinct.
-- Note that the invariant may refer to the state through function
-- argument `state : Supply`. Since the next number to accumulate is
-- the counter, it is distinct to all accumulated numbers.
· ⇓⟨xs, acc⟩ state =>
⌜(∀ x ∈ acc, x < state.counter) ∧ acc.toList.Nodup⌝
with grind
```
::::
## Hoare Triples
:::::leanSection
```lean -show
universe u v
variable {m : Type u → Type v} {ps : PostShape.{u}} [Monad m] [WP m ps] {α σ ε : Type u} {P : Assertion ps} {Q : PostCond α ps} {prog : m α} {c : Nat}
```
A {tech}_Hoare triple_ consists of a precondition, a statement, and a postcondition; it asserts that if the precondition holds, then the postcondition holds after running the statement.
In Lean syntax, this is written {lean}`⦃ P ⦄ prog ⦃ Q ⦄`, where {lean}`P` is the precondition, {typed}`prog : m α` is the statement, and {lean}`Q` is the postcondition.
{lean}`P` and {lean}`Q` are written in an assertion language that is determined by the specific monad {lean}`m`.{margin}[In particular, monad's instance of the type class {name}`WP` specifies the ways in which assertions may refer to the monad's state or the exceptions it may throw.]
:::leanSection
```lean -show
variable {stmt1 stmt2 : m PUnit} {ps : PostShape.{0}} {P : Assertion ps} {Q : PostCond Unit ps} {P' : Assertion ps} {Q' : PostCond Unit ps}
```
Specifications as Hoare triples are compositional because they allow statements to be sequenced.
Given {lean}`⦃P⦄ stmt1 ⦃Q⦄` and {lean}`⦃P'⦄ stmt2 ⦃Q'⦄`, if {lean}`Q` implies {lean}`P'` then {lean}`⦃P⦄ (do stmt1; stmt2) ⦃Q'⦄`.
Just as proofs about ordinary functions can rely on lemmas about the functions that they call, proofs about monadic programs can use lemmas that are specified in terms of Hoare triples.
:::
::::paragraph
One suitable specification for {name}`mkFresh` as a Hoare triple is this translation of {name}`mkFreshN_correct`:
:::leanSection
```lean -show
variable {n : Nat}
```
```leanTerm
⦃⌜True⌝⦄ mkFreshN n ⦃⇓ r => ⌜r.Nodup⌝⦄
```
:::
```lean -show
variable {p : Prop}
```
Corner brackets embed propositions into the monadic assertion language, so {lean}`⌜p⌝` is the assertion of the proposition {lean}`p`.
The precondition {lean}`⌜True⌝` asserts that {lean}`True` is true; this trivial precondition is used to state that the specification imposes no requirements on the state in which it is called.
The postcondition states that the result value is a list with no duplicate elements.
::::
::::paragraph
A specification for the single-step {name}`mkFresh` describes its effects on the monad's state:
:::leanSection
```lean -show
variable {n : Nat}
```
```leanTerm
∀ (c : Nat),
⦃fun state => ⌜state.counter = c⌝⦄
mkFresh
⦃⇓ r state => ⌜r = c ∧ c < state.counter⌝⦄
```
When working in a state monad, preconditions may be parameterized over the value of the state prior to running the code.
Here, the universally quantified {name}`Nat` is used to _relate_ the initial state to the final state; the precondition connects it to the initial state.
Similarly, the postcondition may also accept the final state as a parameter.
This Hoare triple states:
> If {lean}`c` refers to the {name}`Supply.counter` field of the {name}`Supply` prestate, then running {name}`mkFresh` returns {lean}`c` and modifies the {name}`Supply.counter` of the poststate to be larger than {lean}`c`.
Note that this specification is lossy: {name}`mkFresh` could increment its state by an arbitrary positive amount and still satisfy the specification.
This is good, because specifications may _abstract over_ uninteresting implementation details, ensuring resilient and small proofs.
:::
::::
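To make this lossiness concrete, here is a hypothetical variant (the name `mkFresh2` is invented for this illustration) that increments the counter by two rather than one. It satisfies the very same specification, using the same style of {tactic}`mvcgen` invocation as the rest of this tutorial:
```lean
-- Hypothetical variant of `mkFresh`: increment by 2 instead of 1.
def mkFresh2 : StateM Supply Nat := do
  let n ← (·.counter) <$> get
  modify fun s => { s with counter := s.counter + 2 }
  pure n

-- It satisfies the same specification as `mkFresh`, because the
-- postcondition only demands `c < state.counter`:
theorem mkFresh2_spec (c : Nat) :
    ⦃fun state => ⌜state.counter = c⌝⦄
    mkFresh2
    ⦃⇓ r state => ⌜r = c ∧ c < state.counter⌝⦄ := by
  mvcgen [mkFresh2] with grind
```
A client that was proved correct against this specification would keep working if `mkFresh2` were swapped in for {name}`mkFresh`.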
:::paragraph
Hoare triples are defined in terms of a logic of stateful predicates plus a {tech}[weakest precondition] semantics {lean}`wp⟦prog⟧` that translates monadic programs into this logic.
A weakest precondition semantics is an interpretation of programs as mappings from postconditions to the weakest precondition that the program would require to ensure the postcondition; in this interpretation, programs are understood as {tech (key := "predicate transformer semantics")}_predicate transformers_.
The Hoare triple syntax is notation for {name}`Std.Do.Triple`:
```lean -keep
-- This is the definition of Std.Do.Triple:
def Triple [WP m ps] {α : Type u} (prog : m α)
(P : Assertion ps) (Q : PostCond α ps) : Prop :=
P ⊢ₛ wp⟦prog⟧ Q
```
:::
```lean -show
variable {σ : Type u}
```
:::paragraph
The {name}`WP` type class maps a monad {lean}`m` to its {name}`PostShape` {lean}`ps`, and this {name}`PostShape` governs the exact shape of the {name}`Std.Do.Triple`.
Many of the standard monad transformers such as {name}`StateT`, {name}`ReaderT` and {name}`ExceptT` come with a canonical {name}`WP` instance.
For example, {lean}`StateT σ` comes with a {name}`WP` instance that adds a {lean}`σ` argument to every {name}`Assertion`.
Stateful entailment `⊢ₛ` eta-expands through these additional {lean}`σ` arguments.
For {name}`StateM` programs, the following type is definitionally equivalent to {name}`Std.Do.Triple`:
```lean
def StateMTriple {α σ : Type u} (prog : StateM σ α)
(P : σ → ULift Prop) (Q : (α → σ → ULift Prop) × PUnit) : Prop :=
∀ s, (P s).down → let (a, s') := prog.run s; (Q.1 a s').down
```
```lean -show
example : @StateMTriple α σ = Std.Do.Triple (m := StateM σ) := rfl
```
:::
```lean -show
variable {p : Prop}
```
The common postcondition notation `⇓ r => ...` injects an assertion of type {lean}`α → Assertion ps` into {lean}`PostCond α ps` (the `⇓` is meant to be parsed like `fun`); in the case of {name}`StateM`, this is done by adjoining the empty tuple {name}`PUnit.unit`.
The shape of postconditions becomes more interesting once exceptions enter the picture.
The notation {lean}`⌜p⌝` embeds a pure hypothesis {lean}`p` into a stateful assertion.
Conversely, a stateful hypothesis {lean}`P` is called _pure_ if it is equivalent to {lean}`⌜p⌝` for some {lean}`p`.
Pure stateful hypotheses may be freely moved into the regular Lean context and back.
(This can be done manually with the {tactic}`mpure` tactic.)
:::::
## Composing Specifications
Nested unfolding of definitions as in {tactic}`mvcgen`{lit}` [`{name}`mkFreshN`{lit}`, `{name}`mkFresh`{lit}`]` is quite blunt but effective for small programs.
A more compositional way is to develop individual {tech}_specification lemmas_ for each monadic function.
A specification lemma is a Hoare triple that is automatically used during {tech}[verification condition] generation to obtain the pre- and postconditions of each statement in a {keywordOf Lean.Parser.Term.do}`do`-block.
When the system cannot automatically prove that the postcondition of one statement implies the precondition of the next, then this missing reasoning step becomes a verification condition.
:::paragraph
Specification lemmas can either be passed as arguments to {tactic}`mvcgen` or registered in a global (or {keyword}`scoped`, or {keyword}`local`) database of specifications using the {attrs}`@[spec]` attribute:
```lean
@[spec]
theorem mkFresh_spec (c : Nat) :
⦃fun state => ⌜state.counter = c⌝⦄
mkFresh
⦃⇓ r state => ⌜r = c ∧ c < state.counter⌝⦄ := by
-- Unfold `mkFresh` and blast away:
mvcgen [mkFresh] with grind
@[spec]
theorem mkFreshN_spec (n : Nat) :
⦃⌜True⌝⦄ mkFreshN n ⦃⇓ r => ⌜r.Nodup⌝⦄ := by
-- `mvcgen [mkFreshN, mkFresh_spec]` if `mkFresh_spec` were not
-- registered with `@[spec]`
mvcgen [mkFreshN]
invariants
-- As before:
· ⇓⟨xs, acc⟩ state =>
⌜(∀ x ∈ acc, x < state.counter) ∧ acc.toList.Nodup⌝
with grind
```
:::
:::paragraph
The original correctness theorem can now be proved using {tactic}`mvcgen` alone:
```lean
theorem mkFreshN_correct_compositional (n : Nat) :
((mkFreshN n).run' s).Nodup := by
generalize h : (mkFreshN n).run' s = x
apply StateM.of_wp_run'_eq h
mvcgen
```
The specification lemma {name}`mkFreshN_spec` is automatically used by {tactic}`mvcgen`.
:::
## An Advanced Note About Pure Preconditions and a Notion of Frame Rule
This subsection is a bit of a digression and can be skipped on first reading.
:::leanSection
```lean -show
axiom M : Type → Type
variable {x y : UInt8} [Monad M] [WP M .pure]
def addQ (x y : UInt8) : M UInt8 := pure (x + y)
local infix:1023 " +? " => addQ
axiom dots {α} : α
local notation "…" => dots
```
Say the specification for some [`Aeneas`](https://github.com/AeneasVerif/aeneas)-inspired monadic addition function {typed}`x +? y : M UInt8` has the requirement that the addition must not overflow, that is, `h : x.toNat + y.toNat < UInt8.size`.
Should this requirement be encoded as a regular Lean hypothesis of the specification (`add_spec_hyp`), or as a pure precondition of the Hoare triple, using `⌜·⌝` notation (`add_spec_pre`)?
```lean
theorem add_spec_hyp (x y : UInt8)
    (h : x.toNat + y.toNat < UInt8.size) :
    ⦃⌜True⌝⦄ x +? y ⦃⇓ r => ⌜r.toNat = x.toNat + y.toNat⌝⦄ := …

theorem add_spec_pre (x y : UInt8) :
    ⦃⌜x.toNat + y.toNat < UInt8.size⌝⦄
    x +? y
    ⦃⇓ r => ⌜r.toNat = x.toNat + y.toNat⌝⦄ := …
```
:::
The first approach is advisable, although in practice it should not make a difference: the VC generator moves pure hypotheses from the stateful context into the regular Lean context, so the second form effectively turns into the first.
This is referred to as {deftech}_framing_ hypotheses (cf. the {tactic}`mpure` and {tactic}`mframe` tactics).
Hypotheses in the Lean context are part of the immutable {deftech}_frame_ of the stateful logic, because in contrast to stateful hypotheses they survive the rule of consequence.
# Monad Transformers and Lifting
Real-world programs often use monads that are built from multiple {tech}[monad transformers], with operations being frequently {ref "lifting-monads"}[lifted] from one monad to another.
Verification of these programs requires taking this into account.
We can tweak the previous example to demonstrate this.
```lean -show
namespace Transformers
variable {m : Type → Type} {α : Type} {ps : PostShape.{0}}
attribute [-instance] Lake.instMonadLiftTOfMonadLift_lake
```
::::paragraph
:::leanFirst
Now, there is an application with two separate monads, both built using transformers:
```lean
abbrev CounterM := StateT Supply (ReaderM String)
abbrev AppM := StateT Bool CounterM
```
Instead of using {lean}`StateM Supply`, {name}`mkFresh` uses {lean}`CounterM`:
```lean
def mkFresh : CounterM Nat := do
let n ← (·.counter) <$> get
modify fun s => { s with counter := s.counter + 1 }
pure n
```
{name}`mkFreshN` is defined in terms of {name}`AppM`, which includes multiple states and a reader effect.
The definition of {name}`mkFreshN` lifts {name}`mkFresh` into {name}`AppM`:
```lean
def mkFreshN (n : Nat) : AppM (List Nat) := do
let mut acc := #[]
for _ in [:n] do
let n ← mkFresh
acc := acc.push n
return acc.toList
```
:::
::::
::::paragraph
Then the {tactic}`mvcgen`-based proof goes through unchanged:
```lean
@[spec]
theorem mkFresh_spec (c : Nat) :
⦃fun state => ⌜state.counter = c⌝⦄
mkFresh
⦃⇓ r state => ⌜r = c ∧ c < state.counter⌝⦄ := by
--TODO: mvcgen [mkFresh] with grind
sorry
@[spec]
theorem mkFreshN_spec (n : Nat) :
⦃⌜True⌝⦄ mkFreshN n ⦃⇓ r => ⌜r.Nodup⌝⦄ := by
-- `liftCounterM` here ensures unfolding
mvcgen [mkFreshN]
invariants
· ⇓⟨xs, acc⟩ _ state =>
⌜(∀ n ∈ acc, n < state.counter) ∧ acc.toList.Nodup⌝
with grind
```
::::
:::leanSection
```lean -show
universe u v
variable {m : Type u → Type v} {ps : PostShape.{u}} [WP m ps] {α : Type u} {prog : m α}
```
The {name}`WPMonad` type class asserts that {lean}`wp⟦prog⟧` distributes over the {name}`Monad` operations (“monad morphism”).
This proof might not look much more exciting than when only a single monad was involved.
However, behind the scenes, the proof builds on a cascade of specifications for {name}`MonadLift` instances.
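As a sketch, the morphism property amounts to two equations in {lean}`PredTrans ps` (the corresponding {name}`WPMonad` fields are `wp_pure` and `wp_bind`):
```lean
-- wp⟦pure a⟧  = pure a                         -- `wp_pure`
-- wp⟦x >>= f⟧ = wp⟦x⟧ >>= fun a => wp⟦f a⟧    -- `wp_bind`
```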
:::
```lean -show
end Transformers
```
# Exceptions
::::leanSection
```lean -show
universe u v
variable {m : Type u → Type v} {ps : PostShape.{u}} [WP m ps] {P : Assertion ps} {α : Type u} {prog : m α} {Q' : α → Assertion ps}
```
If {keyword}`let mut` is the {keywordOf Lean.Parser.Term.do}`do`-equivalent of {name}`StateT`, then early {keywordOf Lean.Parser.Term.doReturn}`return` is the equivalent of {name}`ExceptT`.
We have seen how {tactic}`mvcgen` copes with {name}`StateT`; here we will look at the program logic's support for {name}`ExceptT`.
Exceptions are the reason why the type of postconditions {lean}`PostCond α ps` is not simply a single condition of type {lean}`α → Assertion ps` for the success case.
To see why, suppose the latter were the case, and suppose that program {lean}`prog` throws an exception in a prestate satisfying {lean}`P`.
Should we be able to prove {lean}`⦃P⦄ prog ⦃⇓ r => Q' r⦄`?
(Recall that `⇓` is grammatically similar to `fun`.)
There is no result `r`, so it is unclear what this proof means for {lean}`Q'`!
So there are two reasonable options, inspired by non-termination in traditional program logics:
: The {tech}_total correctness interpretation_
{lean}`⦃P⦄ prog ⦃⇓ r => Q' r⦄` asserts that, given that {lean}`P` holds, {lean}`prog` terminates _and_ {lean}`Q'` holds for the result.
: The {tech}_partial correctness interpretation_
{lean}`⦃P⦄ prog ⦃⇓ r => Q' r⦄` asserts that, given that {lean}`P` holds, _if_ {lean}`prog` terminates, _then_ {lean}`Q'` holds for the result.
The notation {lean}`⇓ r => Q' r` has the total interpretation, while {lean}`⇓? r => Q' r` has the partial interpretation.
In the running example, {lean}`⦃P⦄ prog ⦃⇓ r => Q' r⦄` is unprovable, but {lean}`⦃P⦄ prog ⦃⇓? r => Q' r⦄` is trivially provable.
However, this binary choice suggests that there is actually a _spectrum_ of correctness properties to express.
The notion of postconditions {name}`PostCond` in `Std.Do` supports this spectrum.
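As a sketch (the precise definition lives in `Std.Do`), for a shape with one exception type and one state type, such a postcondition is essentially a tuple of one success continuation and one failure continuation:
```lean
-- For `ps := .except ε (.arg σ .pure)`, a `PostCond α ps` is essentially
--   ⟨fun (r : α) (s : σ) => ..., fun (e : ε) (s : σ) => ..., ()⟩
-- which the `post⟨_, _⟩` notation below constructs.
```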
::::
```lean -show
namespace Exceptions
```
For example, suppose that our {name}`Supply` of fresh numbers is bounded and we want to throw an exception if the supply is exhausted.
Then {name}`mkFreshN` should throw an exception _only if_ the supply is indeed exhausted, as in this implementation:
```lean
structure Supply where
counter : Nat
limit : Nat
property : counter ≤ limit
def mkFresh : EStateM String Supply Nat := do
let supply ← get
if h : supply.counter = supply.limit then
throw s!"Supply exhausted: {supply.counter} = {supply.limit}"
else
let n := supply.counter
have := supply.property
set { supply with counter := n + 1, property := by grind }
pure n
```
The following correctness property expresses this:
```lean
@[spec]
theorem mkFresh_spec (c : Nat) :
⦃fun state => ⌜state.counter = c⌝⦄
mkFresh
⦃post⟨fun r state => ⌜r = c ∧ c < state.counter⌝,
fun _ state => ⌜c = state.counter ∧ c = state.limit⌝⟩⦄ := by
mvcgen [mkFresh] with grind
```
In this property, the postcondition has two branches: the first covers successful termination, and the second applies when an exception is thrown.
The monad's {name}`WP` instance determines both how many branches the postcondition has and the number of parameters in each branch: each exception that might be thrown gives rise to an extra branch, and each state gives rise to an extra parameter.
:::leanFirst
In this new monad, {name}`mkFreshN`'s implementation is unchanged, except for the type signature:
```lean
def mkFreshN (n : Nat) : EStateM String Supply (List Nat) := do
let mut acc := #[]
for _ in [:n] do
acc := acc.push (← mkFresh)
pure acc.toList
```
:::
:::paragraph
However, the specification lemma must account for both successful termination and exceptions being thrown, in both the postcondition and the loop invariant:
```lean
@[spec]
theorem mkFreshN_spec (n : Nat) :
⦃⌜True⌝⦄
mkFreshN n
⦃post⟨fun r => ⌜r.Nodup⌝,
fun _msg state => ⌜state.counter = state.limit⌝⟩⦄ := by
mvcgen [mkFreshN]
invariants
· post⟨fun ⟨xs, acc⟩ state =>
⌜(∀ n ∈ acc, n < state.counter) ∧ acc.toList.Nodup⌝,
fun _msg state => ⌜state.counter = state.limit⌝⟩
with grind
```
:::
:::paragraph
The final proof uses the specification lemmas and {tactic}`mvcgen`, just as before:
```lean
theorem mkFreshN_correct (n : Nat) :
match (mkFreshN n).run s with
| .ok l _ => l.Nodup
| .error _ s' => s'.counter = s'.limit := by
generalize h : (mkFreshN n).run s = x
apply EStateM.of_wp_run_eq h
mvcgen
```
:::
```lean -show
end Exceptions
```
:::leanSection
```lean -show
universe u v
variable {m : Type u → Type v} {ps : PostShape.{u}} [WP m ps] {P : Assertion ps} {α σ ε : Type u} {prog : m α} {Q' : α → Assertion ps}
```
Just as any {lean}`StateT σ`-like monad transformer gives rise to a {lean}`PostShape.arg σ` layer in the {lean}`ps` that {name}`WP` maps into, any {lean}`ExceptT ε`-like layer gives rise to a {lean}`PostShape.except ε` layer.
Every {lean}`PostShape.arg σ` adds another `σ → ...` layer to the language of {lean}`Assertion`s.
Every {lean}`PostShape.except ε` leaves the {lean}`Assertion` language unchanged, but adds another exception
condition to the postcondition.
Hence the {name}`WP` instance for {lean}`EStateM ε σ` maps to the {name}`PostShape` {lean}`PostShape.except ε (.arg σ .pure)`, just
as for {lean}`ExceptT ε (StateM σ)`.
:::
# Extending `mvcgen` With Support for Custom Monads
The {tactic}`mvcgen` framework is designed to be extensible.
None of the monads presented so far is hard-coded into {tactic}`mvcgen`.
Rather, {tactic}`mvcgen` relies on instances of the {name}`WP` and {name}`WPMonad` type classes and on user-provided specifications to generate {tech}[verification conditions].
:::leanSection
```lean -show
variable {m : Type u → Type v} [Monad m] {ps : PostShape.{u}}
```
The {name}`WP` instance defines the weakest precondition interpretation of a monad {lean}`m` into a predicate transformer {lean}`PredTrans ps`,
and the matching {name}`WPMonad` instance asserts that this translation distributes over the {name}`Monad` operations.
:::
::::paragraph
:::leanFirst
Suppose one wants to use `mvcgen` to generate verification conditions for programs generated by [`Aeneas`](https://github.com/AeneasVerif/aeneas).
`Aeneas` translates Rust programs into Lean programs in the following {name}`Result` monad:
```lean
inductive Error where
| integerOverflow: Error
-- ... more error kinds ...
inductive Result (α : Type u) where
| ok (v: α): Result α
| fail (e: Error): Result α
| div
```
```lean -show
instance Result.instMonad : Monad Result where
pure x := .ok x
bind x f := match x with
| .ok v => f v
| .fail e => .fail e
| .div => .div
instance Result.instLawfulMonad : LawfulMonad Result := by
-- TODO: Replace sorry with grind when it no longer introduces section
-- variables
apply LawfulMonad.mk' <;> (simp only [Result.instMonad]; sorry)
```
:::
::::
:::paragraph
There are both {inst}`Monad Result` and {inst}`LawfulMonad Result` instances.
Supporting this monad in {tactic}`mvcgen` is a matter of:
1. Adding {name}`WP` and {name}`WPMonad` instances for {name}`Result`
2. Registering specification lemmas for the translation of basic Rust primitives such as addition etc.
:::
::::paragraph
:::leanSection
```lean -show
universe u v
variable {m : Type u → Type v} {ps : PostShape.{u}} [WP m ps] {P : Assertion ps} {α σ ε : Type u} {prog : m α} {Q' : α → Assertion ps}
```
The {name}`WP` instance for {name}`Result` specifies a postcondition shape {lean (type := "PostShape.{0}")}`.except Error .pure` because there are no state-like effects, but there is a single exception of type {lean}`Error`.
The {name}`WP` instance translates programs in {lean}`Result α` to predicate transformers in {lean}`PredTrans ps α`, that is, functions in {lean}`PostCond α ps → Assertion ps` mapping a postcondition to its weakest precondition.
The implementation of {name}`WP.wp` reuses the implementation for {lean}`Except Error` for two of its cases, and maps diverging programs to {lean}`False`.
The instance is given a name so that it can more easily be unfolded in proofs about it.
:::
```lean
instance Result.instWP : WP Result (.except Error .pure) where
wp
| .ok v => wp (pure v : Except Error _)
| .fail e => wp (throw e : Except Error _)
| .div => PredTrans.const ⌜False⌝
```
::::
:::paragraph
The implementation of {name}`WP.wp` should distribute over the basic monad operators:
```lean
instance : WPMonad Result (.except Error .pure) where
wp_pure := by
intros
ext Q
simp [wp, PredTrans.pure, pure, Except.pure, Id.run]
wp_bind x f := by
simp only [Result.instWP, bind]
ext Q
cases x <;> simp [PredTrans.bind, PredTrans.const]
```
:::
```lean
theorem Result.of_wp {α} {x : Result α} (P : Result α → Prop) :
(⊢ₛ wp⟦x⟧ post⟨fun a => ⌜P (.ok a)⌝,
fun e => ⌜P (.fail e)⌝⟩) → P x := by
intro hspec
simp only [instWP] at hspec
split at hspec <;> simp_all
```
:::leanSection
```lean -show
universe u v
variable {m : Type u → Type v} {ps : PostShape.{u}} [WP m ps] {P : Assertion ps} {α σ ε : Type u} {prog : m α} {Q' : α → Assertion ps}
```
The definition of the {name}`WP` instance determines what properties can be derived from proved specifications via {lean}`Result.of_wp`.
This lemma defines what “weakest precondition” means.
:::
:::paragraph
To exemplify the second part, here is an example definition of {name}`UInt32` addition in {name}`Result` that models integer overflow:
```lean
instance : MonadExcept Error Result where
throw e := .fail e
tryCatch x h := match x with
| .ok v => pure v
| .fail e => h e
| .div => .div
def addOp (x y : UInt32) : Result UInt32 :=
if x.toNat + y.toNat ≥ UInt32.size then
throw .integerOverflow
else
pure (x + y)
```
:::
:::paragraph
There are two relevant specification lemmas to register:
```lean
@[spec]
theorem Result.throw_spec {α Q} (e : Error) :
⦃Q.2.1 e⦄ throw (m := Result) (α := α) e ⦃Q⦄ := id
@[spec]
theorem addOp_ok_spec {x y} (h : x.toNat + y.toNat < UInt32.size) :
⦃⌜True⌝⦄
addOp x y
⦃⇓ r => ⌜r = x + y ∧ (x + y).toNat = x.toNat + y.toNat⌝⦄ := by
mvcgen [addOp] with (simp_all; try grind)
```
:::
:::paragraph
This is already enough to prove the following example:
```lean
example :
⦃⌜True⌝⦄
do let mut x ← addOp 1 3
for _ in [:4] do
x ← addOp x 5
return x
⦃⇓ r => ⌜r.toNat = 24⌝⦄ := by
mvcgen
invariants
· ⇓⟨xs, x⟩ => ⌜x.toNat = 4 + 5 * xs.prefix.length⌝
with (simp_all [UInt32.size]; try grind)
```
:::
# Proof Mode for Stateful Goals
```lean -show
variable {σs : List (Type u)} {H T : SPred σs}
```
It is a priority of {tactic}`mvcgen` to break down monadic programs into {tech}[verification conditions] that are straightforward to understand.
For example, when the monad is monomorphic and all loop invariants have been instantiated, an invocation of {tactic}`all_goals`{lit}` `{tactic}`mleave` should simplify away any {name}`Std.Do.SPred`-specific constructs and leave behind a goal that is easily understood by humans and {tactic}`grind`.
This {tactic}`all_goals`{lit}` `{tactic}`mleave` step is carried out automatically by {tactic}`mvcgen` after loop invariants have been instantiated.
However, there are times when {tactic}`mleave` will be unable to remove all {name}`Std.Do.SPred` constructs.
In this case, verification conditions of the form {lean}`H ⊢ₛ T` will be left behind.
The assertion language {name}`Assertion` translates into an {name}`Std.Do.SPred` as follows:
```lean -keep
abbrev PostShape.args : PostShape.{u} → List (Type u)
| .pure => []
| .arg σ s => σ :: PostShape.args s
| .except _ s => PostShape.args s
abbrev Assertion (ps : PostShape.{u}) : Type u :=
SPred (PostShape.args ps)
```
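A quick sanity check of these definitions (assuming the abbreviations above): exception layers contribute no arguments to the assertion language, while each state layer contributes one.
```lean
example : PostShape.args (.except String (.arg Nat .pure)) = [Nat] := rfl
```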
:::leanSection
```lean -show
universe u v
variable {m : Type u → Type v} {ps : PostShape.{u}} [WP m ps] {P : Assertion ps} {α σ ε : Type u} {prog : m α} {Q' : α → Assertion ps}
```
A common case in which a VC of the form {lean}`H ⊢ₛ T` is left behind is when the base monad {lean}`m` is polymorphic.
In this case, the proof depends on a {lean}`WP m ps` instance, which governs the translation into the {name}`Assertion` language, but the exact correspondence to `σs : List (Type u)` is not yet known.
To successfully discharge such a VC, `mvcgen` comes with an entire proof mode that is inspired by that of the Iris concurrent separation logic.
(In fact, the proof mode was adapted in large part from its Lean clone, [`iris-lean`](https://github.com/leanprover-community/iris-lean).)
The {ref "tactic-ref-spred"}[tactic reference] contains a list of all proof mode tactics.
::: |
reference-manual/Manual/Releases/v4_17_0.lean | import VersoManual
import Manual.Meta.Markdown
open Manual
open Verso.Genre
-- TODO: investigate why the Markdown elaboration is taking this much stack in the new compiler
set_option maxRecDepth 9500
#doc (Manual) "Lean 4.17.0 (2025-03-03)" =>
%%%
tag := "release-v4.17.0"
file := "v4.17.0"
%%%
````markdown
For this release, 319 changes landed. In addition to the 168 feature additions and 57 fixes listed below, there were 12 refactoring changes, 13 documentation improvements, and 56 chores.
## Highlights
The Lean v4.17 release brings a range of new features, performance improvements,
and bug fixes. Notable user-visible updates include:
* [#6368](https://github.com/leanprover/lean4/pull/6368) implements executing kernel checking in parallel to elaboration,
which is a prerequisite for parallelizing elaboration itself.
* [#6711](https://github.com/leanprover/lean4/pull/6711) adds support for `UIntX` and `USize` in `bv_decide` by adding a
preprocessor that turns them into `BitVec` of their corresponding size.
* [#6505](https://github.com/leanprover/lean4/pull/6505) implements a basic async framework as well as asynchronously
running timers using libuv.
* improvements to documentation with `docgen`, which now links
dot notations ([#6703](https://github.com/leanprover/lean4/pull/6703)),
coerced functions ([#6729](https://github.com/leanprover/lean4/pull/6729)),
and tokens ([#6730](https://github.com/leanprover/lean4/pull/6730)).
* extensive library development, in particular, expanding verification APIs of `BitVec`,
making APIs of `List` / `Array` / `Vector` consistent, and adding lemmas describing the behavior of `UInt`.
* [#6597](https://github.com/leanprover/lean4/pull/6597) fixes the indentation of nested trace nodes in the info view.
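As a small sketch of what the `bv_decide` support for `UIntX` from [#6711](https://github.com/leanprover/lean4/pull/6711) enables, goals about fixed-width unsigned integers can now be discharged by reduction to `BitVec`:

```lean
example (a b : UInt8) : a + b = b + a := by
  bv_decide
```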
### New Language Features
* **Partial Fixpoint**
[#6355](https://github.com/leanprover/lean4/pull/6355) adds the ability to define possibly non-terminating functions
and still be able to reason about them equationally, as long as they are
tail-recursive, or operate within certain monads such as `Option`.
Typical examples:
```lean
def ack : (n m : Nat) → Option Nat
| 0, y => some (y+1)
| x+1, 0 => ack x 1
| x+1, y+1 => do ack x (← ack (x+1) y)
partial_fixpoint
def whileSome (f : α → Option α) (x : α) : α :=
match f x with
| none => x
| some x' => whileSome f x'
partial_fixpoint
def computeLfp {α : Type u} [DecidableEq α] (f : α → α) (x : α) : α :=
let next := f x
if x ≠ next then
computeLfp f next
else
x
partial_fixpoint
```
See the [reference manual](https://lean-lang.org/doc/reference/latest/Recursive-Definitions/Partial-Fixpoint-Recursion/#partial-fixpoint)
for more details.
* [#6905](https://github.com/leanprover/lean4/pull/6905) adds a first draft of the `try?`
interactive tactic, which tries various tactics, including induction:
```lean
@[simp] def revAppend : List Nat → List Nat → List Nat
| [], ys => ys
| x::xs, ys => revAppend xs (x::ys)
example : (revAppend xs ys).length = xs.length + ys.length := by
try?
/-
Try these:
• · induction xs, ys using revAppend.induct
· simp
· simp +arith [*]
• · induction xs, ys using revAppend.induct
· simp only [revAppend, List.length_nil, Nat.zero_add]
· simp +arith only [revAppend, List.length_cons, *]
-/
```
* **`induction` with zero alternatives**
[#6486](https://github.com/leanprover/lean4/pull/6486) modifies the `induction`/`cases` syntax so that the `with`
clause does not need to be followed by any alternatives. This improves
friendliness of these tactics, since this lets them surface the names of
the missing alternatives:
```lean
example (n : Nat) : True := by
induction n with
/- ~~~~
alternative 'zero' has not been provided
alternative 'succ' has not been provided
-/
```
* **`simp?` and `dsimp?` tactics in conversion mode**
[#6593](https://github.com/leanprover/lean4/pull/6593) adds support for the `simp?` and `dsimp?` tactics in conversion
mode.
* **`fun_cases`**
[#6261](https://github.com/leanprover/lean4/pull/6261) adds `foo.fun_cases`, an automatically generated theorem that
splits the goal according to the branching structure of `foo`, much like
the Functional Induction Principle, but for all functions (not just
recursive ones), and without providing inductive hypotheses.
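A small sketch (hypothetical function `f`):
```lean
def f (n : Nat) : Nat :=
  if n % 2 = 0 then n / 2 else 3 * n + 1
-- `f.fun_cases` provides one goal per branch of `f`:
-- one for `n % 2 = 0` and one for its negation, with no induction hypotheses.
```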
### New CLI Features
* [#6427](https://github.com/leanprover/lean4/pull/6427) adds the Lean CLI option `--src-deps` which parallels `--deps`.
It parses the Lean code's header and prints out the paths to the
(transitively) imported modules' source files (deduced from
`LEAN_SRC_PATH`).
* [#6323](https://github.com/leanprover/lean4/pull/6323) adds a new Lake CLI command, `lake query`, that both builds
targets and outputs their results. It can produce raw text or
JSON-formatted output (with `--json` / `-J`).
### Breaking Changes
* [#6602](https://github.com/leanprover/lean4/pull/6602) allows the dot ident notation to resolve to the current
definition, or to any of the other definitions in the same mutual block.
Existing code that uses dot ident notation may need to have `nonrec`
added if the ident has the same name as the definition.
* Introduction of the `zetaUnused` simp and reduction option ([#6755](https://github.com/leanprover/lean4/pull/6755))
is a breaking change in rare cases: the `split` tactic no longer removes unused `let` and `have` expressions as a side-effect.
`dsimp only` can be used to remove unused `have` and `let` expressions.
_This highlights section was contributed by Violetta Sim._
## Language
* [#5145](https://github.com/leanprover/lean4/pull/5145) splits the environment used by the kernel from that used by the
elaborator, providing the foundation for tracking of asynchronously
elaborated declarations, which will exist as a concept only in the
latter.
* [#6261](https://github.com/leanprover/lean4/pull/6261) adds `foo.fun_cases`, an automatically generated theorem that
splits the goal according to the branching structure of `foo`, much like
the Functional Induction Principle, but for all functions (not just
recursive ones), and without providing inductive hypotheses.
* [#6355](https://github.com/leanprover/lean4/pull/6355) adds the ability to define possibly non-terminating functions
and still be able to reason about them equationally, as long as they are
tail-recursive or monadic.
* [#6368](https://github.com/leanprover/lean4/pull/6368) implements executing kernel checking in parallel to elaboration,
which is a prerequisite for parallelizing elaboration itself.
* [#6427](https://github.com/leanprover/lean4/pull/6427) adds the Lean CLI option `--src-deps` which parallels `--deps`.
It parses the Lean code's header and prints out the paths to the
(transitively) imported modules' source files (deduced from
`LEAN_SRC_PATH`).
* [#6486](https://github.com/leanprover/lean4/pull/6486) modifies the `induction`/`cases` syntax so that the `with`
clause does not need to be followed by any alternatives. This improves
friendliness of these tactics, since this lets them surface the names of
the missing alternatives:
```lean
example (n : Nat) : True := by
induction n with
/- ~~~~
alternative 'zero' has not been provided
alternative 'succ' has not been provided
-/
```
* [#6505](https://github.com/leanprover/lean4/pull/6505) implements a basic async framework as well as asynchronously
running timers using libuv.
* [#6516](https://github.com/leanprover/lean4/pull/6516) enhances the `cases` tactic used in the `grind` tactic and
ensures that it can be applied to arbitrary expressions.
* [#6521](https://github.com/leanprover/lean4/pull/6521) adds support for activating relevant `match`-equations as
E-matching theorems. It uses the `match`-equation lhs as the pattern.
* [#6528](https://github.com/leanprover/lean4/pull/6528) adds a missing propagation rule to the (WIP) `grind` tactic.
* [#6529](https://github.com/leanprover/lean4/pull/6529) adds support for `let`-declarations to the (WIP) `grind` tactic.
* [#6530](https://github.com/leanprover/lean4/pull/6530) fixes nondeterministic failures in the (WIP) `grind` tactic.
* [#6531](https://github.com/leanprover/lean4/pull/6531) fixes the support for `let_fun` in `grind`.
* [#6533](https://github.com/leanprover/lean4/pull/6533) adds support for E-matching offset patterns. For example, we want
to be able to E-match the pattern `f (#0 + 1)` with the term `f (a + 2)`.
* [#6534](https://github.com/leanprover/lean4/pull/6534) ensures that users can utilize projections in E-matching
patterns within the `grind` tactic.
* [#6536](https://github.com/leanprover/lean4/pull/6536) fixes different thresholds for controlling E-matching in the
`grind` tactic.
* [#6538](https://github.com/leanprover/lean4/pull/6538) ensures patterns provided by users are normalized. See new test
to understand why this is needed.
* [#6539](https://github.com/leanprover/lean4/pull/6539) introduces the `[grind_eq]` attribute, designed to annotate
equational theorems and functions for heuristic instantiations in the
`grind` tactic.
When applied to an equational theorem, the `[grind_eq]` attribute
instructs the `grind` tactic to automatically use the annotated theorem
to instantiate patterns during proof search. If applied to a function,
it marks all equational theorems associated with that function.
* [#6543](https://github.com/leanprover/lean4/pull/6543) adds additional tests for `grind`, demonstrating that we can
automate some manual proofs from Mathlib's basic category theory
library, with less reliance on Mathlib's `@[reassoc]` trick.
* [#6545](https://github.com/leanprover/lean4/pull/6545) introduces the parametric attribute `[grind]` for annotating
theorems and definitions. It also replaces `[grind_eq]` with `[grind =]`.
For definitions, `[grind]` is equivalent to `[grind =]`.
* [#6556](https://github.com/leanprover/lean4/pull/6556) adds propagators for implication to the `grind` tactic. It also
disables the normalization rule `(p → q) = (¬ p ∨ q)`.
* [#6559](https://github.com/leanprover/lean4/pull/6559) adds a basic case-splitting strategy for the `grind` tactic. We
still need to add support for user customization.
* [#6565](https://github.com/leanprover/lean4/pull/6565) fixes the location of the error emitted when the `rintro` and
`intro` tactics cannot introduce the requested number of binders.
* [#6566](https://github.com/leanprover/lean4/pull/6566) adds support for erasing the `[grind]` attribute used to mark
theorems for heuristic instantiation in the `grind` tactic.
* [#6567](https://github.com/leanprover/lean4/pull/6567) adds support for erasing the `[grind]` attribute used to mark
theorems for heuristic instantiation in the `grind` tactic.
* [#6568](https://github.com/leanprover/lean4/pull/6568) adds basic support for cast-like operators to the `grind` tactic.
Example:
```lean
example (α : Type) (β : Type) (a₁ a₂ : α) (b₁ b₂ : β)
(h₁ : α = β)
(h₂ : h₁ ▸ a₁ = b₁)
(h₃ : a₁ = a₂)
(h₄ : b₁ = b₂)
: HEq a₂ b₂ := by
grind
```
* [#6569](https://github.com/leanprover/lean4/pull/6569) adds support for case splitting on `match`-expressions in
`grind`.
We still need to add support for resolving the antecedents of
`match`-conditional equations.
* [#6575](https://github.com/leanprover/lean4/pull/6575) ensures tactics are evaluated incrementally in the body of
`classical`.
* [#6578](https://github.com/leanprover/lean4/pull/6578) fixes and improves the propagator for forall-expressions in the
`grind` tactic.
* [#6581](https://github.com/leanprover/lean4/pull/6581) adds the following configuration options to `Grind.Config`:
`splitIte`, `splitMatch`, and `splitIndPred`.
* [#6582](https://github.com/leanprover/lean4/pull/6582) adds support for creating local E-matching theorems for
universal propositions known to be true. It allows `grind` to
automatically solve examples such as:
* [#6584](https://github.com/leanprover/lean4/pull/6584) adds helper theorems to implement offset constraints in grind.
* [#6585](https://github.com/leanprover/lean4/pull/6585) fixes a bug in the `grind` canonicalizer.
* [#6588](https://github.com/leanprover/lean4/pull/6588) improves the `grind` canonicalizer diagnostics.
* [#6593](https://github.com/leanprover/lean4/pull/6593) adds support for the `simp?` and `dsimp?` tactics in conversion
mode.
* [#6595](https://github.com/leanprover/lean4/pull/6595) improves the theorems used to justify the steps performed by the
inequality offset module. See new test for examples of how they are
going to be used.
* [#6600](https://github.com/leanprover/lean4/pull/6600) removes the functions for compiling declarations from `Environment` and
moves all users to functions on `CoreM`. This is required for supporting
the new code generator, since its implementation uses `CoreM`.
* [#6602](https://github.com/leanprover/lean4/pull/6602) allows the dot ident notation to resolve to the current
definition, or to any of the other definitions in the same mutual block.
Existing code that uses dot ident notation may need to have `nonrec`
added if the ident has the same name as the definition.
* [#6603](https://github.com/leanprover/lean4/pull/6603) implements support for offset constraints in the `grind` tactic.
Several features are still missing, such as constraint propagation and
support for offset equalities, but `grind` can already solve examples
like the following:
* [#6606](https://github.com/leanprover/lean4/pull/6606) fixes a bug in pattern selection in the `grind` tactic.
* [#6607](https://github.com/leanprover/lean4/pull/6607) adds support for case-splitting on `<->` (and `@Eq Prop`) in the
`grind` tactic.
* [#6608](https://github.com/leanprover/lean4/pull/6608) fixes a bug in the `simp_arith` tactic. See new test.
* [#6609](https://github.com/leanprover/lean4/pull/6609) improves the case-split heuristic used in grind, prioritizing
case-splits with fewer cases.
* [#6610](https://github.com/leanprover/lean4/pull/6610) fixes a bug in the `grind` core module responsible for merging
equivalence classes and propagating constraints.
* [#6611](https://github.com/leanprover/lean4/pull/6611) fixes one of the sanity check tests used in `grind`.
* [#6613](https://github.com/leanprover/lean4/pull/6613) improves the case split heuristic used in the `grind` tactic,
ensuring it now avoids unnecessary case-splits on `Iff`.
* [#6614](https://github.com/leanprover/lean4/pull/6614) improves the usability of the `[grind =]` attribute by
automatically handling
forbidden pattern symbols. For example, consider the following theorem
tagged with this attribute:
```lean
getLast?_eq_some_iff {xs : List α} {a : α} : xs.getLast? = some a ↔ ∃ ys, xs = ys ++ [a]
```
Here, the selected pattern is `xs.getLast? = some a`, but `Eq` is a
forbidden pattern symbol.
Instead of producing an error, this function converts the pattern into a
multi-pattern,
allowing the attribute to be used conveniently.
* [#6615](https://github.com/leanprover/lean4/pull/6615) adds two auxiliary functions `mkEqTrueCore` and `mkOfEqTrueCore`
that avoid redundant proof terms in proofs produced by `grind`.
* [#6618](https://github.com/leanprover/lean4/pull/6618) implements exhaustive offset constraint propagation in the
`grind` tactic. This enhancement minimizes the number of case splits
performed by `grind`. For instance, it can solve the following example
without performing any case splits:
* [#6633](https://github.com/leanprover/lean4/pull/6633) improves the failure message produced by the `grind` tactic. We
now include information about asserted facts, propositions that are
known to be true or false, and equivalence classes.
* [#6636](https://github.com/leanprover/lean4/pull/6636) implements model construction for offset constraints in the
`grind` tactic.
* [#6639](https://github.com/leanprover/lean4/pull/6639) puts the `bv_normalize` simp set into simp_nf and splits up the
bv_normalize implementation across multiple files in preparation for
upcoming changes.
* [#6641](https://github.com/leanprover/lean4/pull/6641) implements several optimisation tricks from Bitwuzla's
preprocessing passes into the Lean equivalent in `bv_decide`. Note that
these changes are mostly geared towards large proof states as for
example seen in SMT-Lib.
* [#6645](https://github.com/leanprover/lean4/pull/6645) implements support for offset equality constraints in the
`grind` tactic and exhaustive equality propagation for them. The `grind`
tactic can now solve problems such as the following:
* [#6648](https://github.com/leanprover/lean4/pull/6648) adds support for numerals and lower & upper bounds to the offset
constraint module in the `grind` tactic. `grind` can now solve examples
such as:
```lean
example (f : Nat → Nat) :
f 2 = a →
b ≤ 1 → b ≥ 1 →
c = b + 1 →
f c = a := by
grind
```
In the example above, the literal `2` and the lower and upper bounds `b ≤
1` and `b ≥ 1` are now processed by the offset constraint module.
* [#6649](https://github.com/leanprover/lean4/pull/6649) fixes a bug in the term canonicalizer used in the `grind`
tactic.
* [#6652](https://github.com/leanprover/lean4/pull/6652) adds the `int_toBitVec` simp set to convert UIntX and later IntX
propositions to BitVec ones. This will be used as a preprocessor for `bv_decide` to provide UIntX/IntX `bv_decide` support.
* [#6653](https://github.com/leanprover/lean4/pull/6653) improves the E-matching pattern selection heuristics in the
`grind` tactic. They now take into account type predicates and
transformers.
* [#6654](https://github.com/leanprover/lean4/pull/6654) improves the support for partial applications in the E-matching
procedure used in `grind`.
* [#6656](https://github.com/leanprover/lean4/pull/6656) improves the diagnostic information provided in `grind` failure
states. We now include the list of issues found during the search and
all search thresholds that have been reached. It also improves the
formatting.
* [#6657](https://github.com/leanprover/lean4/pull/6657) improves the `grind` search procedure, and adds the new
configuration option: `failures`.
* [#6658](https://github.com/leanprover/lean4/pull/6658) ensures that `grind` avoids case-splitting on terms congruent to
those that have already been case-split.
* [#6659](https://github.com/leanprover/lean4/pull/6659) fixes a bug in the `grind` term preprocessor. It was abstracting
nested proofs **before** reducible constants were unfolded.
* [#6662](https://github.com/leanprover/lean4/pull/6662) improves the canonicalizer used in the `grind` tactic and the
diagnostics it produces. It also adds a new configuration option,
`canonHeartbeats`, to address (some of) the issues. Here is an example
illustrating the new diagnostics, where we intentionally create a
problem by using a very small number of heartbeats.
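(The PR's illustrating example is elided here; the sketch below only shows the shape of the new configuration option, with a hypothetical, deliberately tiny budget.)

```lean
-- starving the canonicalizer triggers the new diagnostics
example : True := by
  grind (canonHeartbeats := 1)
```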
* [#6663](https://github.com/leanprover/lean4/pull/6663) implements a basic equality resolution procedure for the `grind`
tactic.
* [#6669](https://github.com/leanprover/lean4/pull/6669) adds a workaround for the discrepancy between Terminal/Emacs and
VS Code when displaying info trees.
* [#6675](https://github.com/leanprover/lean4/pull/6675) adds `simp`-like parameters to `grind`, and `grind only` similar
to `simp only`.
* [#6679](https://github.com/leanprover/lean4/pull/6679) changes the identifier parser to allow the ⱼ Unicode character, which had been overlooked because it lives by itself in a Unicode block alongside Coptic characters.
* [#6682](https://github.com/leanprover/lean4/pull/6682) adds support for extensionality theorems (using the `[ext]`
attribute) to the `grind` tactic. Users can disable this functionality
using `grind -ext`. Below are examples that demonstrate problems now solvable by `grind`.
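(The PR's examples are elided in this changelog; a representative, hypothetical goal that function extensionality lets `grind` close is:)

```lean
example (f g : Nat → Nat) (h : ∀ x, f x = g x) : f = g := by
  grind
```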
* [#6685](https://github.com/leanprover/lean4/pull/6685) fixes the issue that `#check_failure`'s output was reported as a warning.
* [#6686](https://github.com/leanprover/lean4/pull/6686) fixes parameter processing, initialization, and attribute
handling issues in the `grind` tactic.
* [#6691](https://github.com/leanprover/lean4/pull/6691) introduces the central API for making parallel changes to the environment.
* [#6692](https://github.com/leanprover/lean4/pull/6692) removes the `[grind_norm]` attribute. The normalization theorems used by `grind` are now fixed and cannot be modified by users. These normalization theorems ensure that the built-in procedures receive terms with the expected "shapes", and they are used only for types that have built-in support in `grind`; users could otherwise misuse the attribute as a general simplification rule.
* [#6700](https://github.com/leanprover/lean4/pull/6700) adds support for beta reduction in the `grind` tactic. `grind`
can now solve goals such as
```lean
example (f : Nat → Nat) : f = (fun x : Nat => x + 5) → f 2 > 5 := by
grind
```
* [#6702](https://github.com/leanprover/lean4/pull/6702) adds support for equality backward reasoning to `grind`. We can
illustrate the new feature with the following example. Suppose we have a
theorem:
```lean
theorem inv_eq {a b : α} (w : a * b = 1) : inv a = b
```
and we want to instantiate the theorem whenever we are trying to prove
`inv t = s` for some terms `t` and `s`.
The attribute `[grind ←]` is not applicable in this case because, by
default, `=` is not eligible for E-matching. The new attribute `[grind
←=]` instructs `grind` to use the equality and consider disequalities in
the `grind` proof state as candidates for E-matching.
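Continuing the `inv_eq` scenario above, usage would look like the following sketch (the multiplication, `inv`, and the relevant instances are assumed to be in scope and are not shown):

```lean
attribute [grind ←=] inv_eq

example (h : a * b = 1) : inv a = b := by
  grind
```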
* [#6705](https://github.com/leanprover/lean4/pull/6705) adds the attributes `[grind cases]` and `[grind cases eager]` for controlling case splitting in `grind`. They will replace the `[grind_cases]` attribute and the configuration option `splitIndPred`.
* [#6711](https://github.com/leanprover/lean4/pull/6711) adds support for `UIntX` and `USize` in `bv_decide` by adding a
preprocessor that turns them into `BitVec` of their corresponding size.
* [#6717](https://github.com/leanprover/lean4/pull/6717) introduces a new feature that allows users to specify which
inductive datatypes the `grind` tactic should perform case splits on.
The configuration option `splitIndPred` is now set to `false` by
default. The attribute `[grind cases]` is used to mark inductive
datatypes and predicates that `grind` may case split on during the
search. Additionally, the attribute `[grind cases eager]` can be used to
mark datatypes and predicates for case splitting both during
pre-processing and the search.
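As a sketch (the `Even` predicate here is hypothetical), marking a predicate for case splitting looks like:

```lean
inductive Even : Nat → Prop where
  | zero : Even 0
  | step : Even n → Even (n + 2)

-- allow `grind` to case split on `Even` hypotheses during the search
attribute [grind cases] Even

example (h : Even 1) : False := by
  grind
```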
* [#6718](https://github.com/leanprover/lean4/pull/6718) adds BitVec lemmas required to cancel multiplicative negatives, and plumbs support through to `bv_normalize` to make use of this result in the normalized two's-complement form.
* [#6719](https://github.com/leanprover/lean4/pull/6719) fixes a bug in the equational theorem generator for
`match`-expressions. See new test for an example.
* [#6724](https://github.com/leanprover/lean4/pull/6724) adds support for `bv_decide` to automatically split up
non-recursive structures that contain information about supported types.
It can be controlled using the new `structures` field in the `bv_decide`
config.
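A minimal sketch of the kind of goal this enables (the structure is hypothetical, and this assumes the `structures` preprocessing is enabled in the configuration):

```lean
structure Pair where
  x : BitVec 8
  y : BitVec 8

example (p : Pair) : p.x &&& p.y = p.y &&& p.x := by
  bv_decide
```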
* [#6733](https://github.com/leanprover/lean4/pull/6733) adds better support for overlapping `match` patterns in `grind`.
`grind` can now solve examples such as
```lean
inductive S where
| mk1 (n : Nat)
| mk2 (n : Nat) (s : S)
| mk3 (n : Bool)
| mk4 (s1 s2 : S)
def f (x y : S) :=
match x, y with
| .mk1 _, _ => 2
| _, .mk2 1 (.mk4 _ _) => 3
| .mk3 _, _ => 4
| _, _ => 5
example : b = .mk2 y1 y2 → y1 = 2 → a = .mk4 y3 y4 → f a b = 5 := by
unfold f
grind (splits := 0)
```
* [#6735](https://github.com/leanprover/lean4/pull/6735) adds support for case splitting on `match`-expressions with
overlapping patterns to the `grind` tactic. `grind` can now solve
examples such as:
```lean
inductive S where
| mk1 (n : Nat)
| mk2 (n : Nat) (s : S)
| mk3 (n : Bool)
| mk4 (s1 s2 : S)
def g (x y : S) :=
match x, y with
| .mk1 a, _ => a + 2
| _, .mk2 1 (.mk4 _ _) => 3
| .mk3 _, .mk4 _ _ => 4
| _, _ => 5
example : g a b > 1 := by
grind [g.eq_def]
```
* [#6736](https://github.com/leanprover/lean4/pull/6736) ensures the canonicalizer used in `grind` does not waste time
checking whether terms with different types are definitionally equal.
* [#6737](https://github.com/leanprover/lean4/pull/6737) ensures that the branches of an `if-then-else` term are
internalized only after establishing the truth value of the condition.
This change makes its behavior consistent with the `match`-expression
and dependent `if-then-else` behavior in `grind`.
This feature is particularly important for recursive functions defined
by well-founded recursion and `if-then-else`. Without lazy
`if-then-else` branch internalization, the equation theorem for the
recursive function would unfold until reaching the generation depth
threshold, and before performing any case analysis. See new tests for an
example.
* [#6739](https://github.com/leanprover/lean4/pull/6739) adds a fast path for bitblasting multiplication with constants
in `bv_decide`.
* [#6740](https://github.com/leanprover/lean4/pull/6740) extends `bv_decide`'s structure reasoning support for also
reasoning about equalities of supported structures.
* [#6745](https://github.com/leanprover/lean4/pull/6745) supports rewriting `ushiftRight` in terms of `extractLsb'`. This
is the companion PR to #6743 which adds the similar lemmas about
`shiftLeft`.
* [#6746](https://github.com/leanprover/lean4/pull/6746) ensures that conditional equation theorems for function
definitions are handled correctly in `grind`. We use the same
infrastructure built for `match`-expression equations. Recall that in
both cases, these theorems are conditional when there are overlapping
patterns.
* [#6748](https://github.com/leanprover/lean4/pull/6748) fixes a few bugs in the `grind` tactic: missing issues, bad
error messages, incorrect threshold in the canonicalizer, and bug in the
ground pattern internalizer.
* [#6750](https://github.com/leanprover/lean4/pull/6750) adds support for fixing type mismatches using `cast` while
instantiating quantifiers in the E-matching module used by the grind
tactic.
* [#6754](https://github.com/leanprover/lean4/pull/6754) adds the `+zetaUnused` option.
* [#6755](https://github.com/leanprover/lean4/pull/6755) implements the `zetaUnused` simp and reduction option (added in
#6754).
* [#6761](https://github.com/leanprover/lean4/pull/6761) fixes issues in `grind` when processing `match`-expressions with
indexed families.
* [#6773](https://github.com/leanprover/lean4/pull/6773) fixes a typo that prevented `Nat.reduceAnd` from working
correctly.
* [#6777](https://github.com/leanprover/lean4/pull/6777) fixes a bug in the internalization of offset terms in the
`grind` tactic. For example, `grind` was failing to solve the following
example because of this bug.
```lean
example (f : Nat → Nat) : f (a + 1) = 1 → a = 0 → f 1 = 1 := by
grind
```
* [#6778](https://github.com/leanprover/lean4/pull/6778) fixes the assignment produced by `grind` to satisfy the offset
constraints in a goal.
* [#6779](https://github.com/leanprover/lean4/pull/6779) improves the support for `match`-expressions in the `grind`
tactic.
* [#6781](https://github.com/leanprover/lean4/pull/6781) fixes the support for case splitting on data in the `grind`
tactic. The following example works now:
```lean
inductive C where
| a | b | c
def f : C → Nat
| .a => 2
| .b => 3
| .c => 4
example : f x > 1 := by
grind [
f, -- instructs `grind` to use `f`-equation theorems,
C -- instructs `grind` to case-split on free variables of type `C`
]
```
* [#6783](https://github.com/leanprover/lean4/pull/6783) adds support for closing goals using `match`-expression
conditions that are known to be true in the `grind` tactic state.
`grind` can now solve goals such as:
```lean
def f : List Nat → List Nat → Nat
| _, 1 :: _ :: _ => 1
| _, _ :: _ => 2
| _, _ => 0
example : z = a :: as → y = z → f x y > 0 := by
  grind
```
* [#6785](https://github.com/leanprover/lean4/pull/6785) adds infrastructure for the `grind?` tactic. It also adds the new modifier `usr`, which allows users to write `grind only [usr thmName]` to instruct `grind` to use only the theorem `thmName`, with the patterns specified via the `grind_pattern` command.
* [#6788](https://github.com/leanprover/lean4/pull/6788) teaches `bv_normalize` that `!(x < x)` and `!(x < 0)` hold.
* [#6790](https://github.com/leanprover/lean4/pull/6790) fixes an issue with the generation of equational theorems from
`partial_fixpoint` when case-splitting is necessary. Fixes #6786.
* [#6791](https://github.com/leanprover/lean4/pull/6791) fixes #6789 by ensuring metadata generated for inaccessible
variables in pattern-matches is consumed in `casesOnStuckLHS`
accordingly.
* [#6801](https://github.com/leanprover/lean4/pull/6801) fixes a bug in the exhaustive offset constraint propagation
module used in `grind`.
* [#6810](https://github.com/leanprover/lean4/pull/6810) implements a basic `grind?` tactic companion for `grind`. We
will add more bells and whistles later.
* [#6822](https://github.com/leanprover/lean4/pull/6822) adds a few builtin case-splits for `grind`. They are similar to
builtin `simp` theorems. They reduce the noise in the tactics produced
by `grind?`.
* [#6824](https://github.com/leanprover/lean4/pull/6824) introduces the auxiliary command `%reset_grind_attrs` for
debugging purposes. It is particularly useful for writing self-contained
tests.
* [#6834](https://github.com/leanprover/lean4/pull/6834) adds "performance" counters (e.g., number of instances per
theorem) to `grind`. The counters are always reported on failures, and
on successes when `set_option diagnostics true`.
* [#6839](https://github.com/leanprover/lean4/pull/6839) ensures `grind` can use constructors and axioms for heuristic
instantiation based on E-matching. It also allows patterns without
pattern variables for theorems such as `theorem evenz : Even 0`.
* [#6851](https://github.com/leanprover/lean4/pull/6851) makes `bv_normalize` rewrite shifts by `BitVec` constants to
shifts by `Nat` constants. This is part of the greater effort in
providing good support for constant shift simplification in
`bv_normalize`.
* [#6852](https://github.com/leanprover/lean4/pull/6852) allows environment extensions to opt into access modes that do
not block on the entire environment up to this point as a necessary
prerequisite for parallel proof elaboration.
* [#6854](https://github.com/leanprover/lean4/pull/6854) adds a convenience for inductive predicates in `grind`. Now,
given an inductive predicate `C`, `grind [C]` marks `C` terms as
case-split candidates **and** `C` constructors as E-matching theorems.
Here is an example:
```lean
example {B S T s t} (hcond : B s) : (ifThenElse B S T, s) ==> t → (S, s) ==> t := by
grind [BigStep]
```
Users can still use `grind [cases BigStep]` to mark `BigStep` only as a case-split candidate.
* [#6858](https://github.com/leanprover/lean4/pull/6858) adds new propagation rules for `decide` and equality in `grind`. It also adds new tests and cleans up old ones.
* [#6861](https://github.com/leanprover/lean4/pull/6861) adds propagation rules for `Bool.and`, `Bool.or`, and `Bool.not`
to the `grind` tactic.
* [#6870](https://github.com/leanprover/lean4/pull/6870) adds two new normalization steps in `grind` that reduce `a != b` and `a == b` to `decide (¬ a = b)` and `decide (a = b)`, respectively.
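For instance (a hypothetical goal), after this normalization a Boolean equality hypothesis feeds directly into `grind`'s equality reasoning:

```lean
example (a b : Nat) (h : (a == b) = true) : a + 1 = b + 1 := by
  grind
```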
* [#6879](https://github.com/leanprover/lean4/pull/6879) fixes a bug in `mkMatchCondProof?` used by the `grind` tactic. This bug was causing a failure in the test `grind_constProp.lean`.
* [#6880](https://github.com/leanprover/lean4/pull/6880) improves the E-matching pattern selection heuristic used in
`grind`.
* [#6881](https://github.com/leanprover/lean4/pull/6881) improves the `grind` error message by including a trace of the
terms on which `grind` applied `cases`-like operations.
* [#6882](https://github.com/leanprover/lean4/pull/6882) ensures `grind` auxiliary gadgets are "hidden" in error and
diagnostic messages.
* [#6888](https://github.com/leanprover/lean4/pull/6888) adds the `[grind intro]` attribute. It instructs `grind` to mark
the introduction rules of an inductive predicate as E-matching theorems.
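A sketch of the attribute's use (the `Sorted` predicate here is hypothetical):

```lean
inductive Sorted : List Nat → Prop where
  | nil : Sorted []
  | single : Sorted [a]
  | cons : a ≤ b → Sorted (b :: l) → Sorted (a :: b :: l)

-- mark the introduction rules of `Sorted` as E-matching theorems
attribute [grind intro] Sorted
```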
* [#6889](https://github.com/leanprover/lean4/pull/6889) inlines a few functions in the `bv_decide` circuit cache.
* [#6892](https://github.com/leanprover/lean4/pull/6892) fixes a bug in the pattern selection heuristic used in `grind`.
It was unfolding definitions/abstractions that were not supposed to be
unfolded. See `grind_constProp.lean` for examples affected by this bug.
* [#6895](https://github.com/leanprover/lean4/pull/6895) fixes a few `grind` issues exposed by the `grind_constProp.lean`
test.
- Support for equational theorem hypotheses created before invoking
`grind`. Example: applying an induction principle.
- Support for `Unit`-like types.
- Missing recursion depth checks.
* [#6897](https://github.com/leanprover/lean4/pull/6897) adds the new attributes `[grind =>]` and `[grind <=]` for controlling pattern selection and minimizing the number of places where the verbose `grind_pattern` command is needed. It also fixes a bug in
the new pattern selection procedure, and improves the automatic pattern
selection for local lemmas.
* [#6904](https://github.com/leanprover/lean4/pull/6904) adds the `grind` configuration option `verbose`. For example,
`grind -verbose` disables all diagnostics. We are going to use this flag
to implement `try?`.
* [#6905](https://github.com/leanprover/lean4/pull/6905) adds the `try?` tactic; see above.
## Library
* [#6177](https://github.com/leanprover/lean4/pull/6177) implements `BitVec.*_fill`.
* [#6211](https://github.com/leanprover/lean4/pull/6211) verifies the `insertMany` method on `HashMap`s for the special
case of inserting lists.
* [#6346](https://github.com/leanprover/lean4/pull/6346) completes the `toNat`/`toInt`/`toFin` lemma family for `shiftLeft`.
* [#6347](https://github.com/leanprover/lean4/pull/6347) adds `BitVec.toNat_rotateLeft` and `BitVec.toNat_rotateRight`.
* [#6402](https://github.com/leanprover/lean4/pull/6402) adds a `toFin` and `msb` lemma for unsigned bitvector division.
We *don't* have `toInt_udiv`, since the only truly general statement we
can make does no better than unfolding the definition, and there is no
uncontroversial way to unfold `toInt` (see
`toInt_eq_msb_cond`/`toInt_eq_toNat_cond`/`toInt_eq_toNat_bmod` for a
few options currently provided). Instead, we do have `toInt_udiv_of_msb`
that's able to provide a more meaningful rewrite given an extra
side-condition (that `x.msb = false`).
* [#6404](https://github.com/leanprover/lean4/pull/6404) adds a `toFin` and `msb` lemma for unsigned bitvector modulus.
Similar to #6402, we don't provide a general `toInt_umod` lemma, but
instead choose to provide more specialized rewrites, with extra
side-conditions.
* [#6431](https://github.com/leanprover/lean4/pull/6431) fixes the `Repr` instance of the `Timestamp` type and changes
the `PlainTime` type so that it always represents a clock time that may
be a leap second.
* [#6476](https://github.com/leanprover/lean4/pull/6476) defines `reverse` for bitvectors and implements a first subset
of theorems (`getLsbD_reverse, getMsbD_reverse, reverse_append,
reverse_replicate, reverse_cast, msb_reverse`). We also include some
necessary related theorems (`cons_append, cons_append_append,
append_assoc, replicate_append_self, replicate_succ'`) and deprecate
the theorems `replicate_zero_eq` and `replicate_succ_eq`.
* [#6494](https://github.com/leanprover/lean4/pull/6494) proves the basic theorems about the functions `Int.bdiv` and
`Int.bmod`.
* [#6507](https://github.com/leanprover/lean4/pull/6507) adds the subtraction equivalents for `Int.emod_add_emod` (`(a %
n + b) % n = (a + b) % n`) and `Int.add_emod_emod` (`(a + b % n) % n =
(a + b) % n`). These are marked `@[simp]` like their addition equivalents.
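Since both lemmas are marked `@[simp]`, goals of these shapes should close by `simp` alone (a sketch):

```lean
example (a b n : Int) : (a % n + b) % n = (a + b) % n := by simp
example (a b n : Int) : (a + b % n) % n = (a + b) % n := by simp
```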
* [#6524](https://github.com/leanprover/lean4/pull/6524) upstreams some remaining `List.Perm` lemmas from Batteries.
* [#6546](https://github.com/leanprover/lean4/pull/6546) continues aligning `Array` and `Vector` lemmas with `List`,
working on `fold` and `map` operations.
* [#6563](https://github.com/leanprover/lean4/pull/6563) implements `Std.Net.Addr` which contains structures around IP
and socket addresses.
* [#6573](https://github.com/leanprover/lean4/pull/6573) replaces the existing implementations of `(D)HashMap.alter` and
`(D)HashMap.modify` with primitive, more efficient ones and in
particular provides proofs that they yield well-formed hash maps (`WF`
type class).
* [#6586](https://github.com/leanprover/lean4/pull/6586) continues aligning `List/Array/Vector` lemmas, finishing up
lemmas about `map`.
* [#6587](https://github.com/leanprover/lean4/pull/6587) adds decidable instances for the `LE` and `LT` instances for the
`Offset` types defined in `Std.Time`.
* [#6589](https://github.com/leanprover/lean4/pull/6589) continues aligning `List/Array` lemmas, finishing `filter` and
`filterMap`.
* [#6591](https://github.com/leanprover/lean4/pull/6591) adds less-than and less-than-or-equal-to relations to `UInt32`,
consistent with the other `UIntN` types.
* [#6612](https://github.com/leanprover/lean4/pull/6612) adds lemmas about `Array.append`, improving alignment with the
`List` API.
* [#6617](https://github.com/leanprover/lean4/pull/6617) completes alignment of `List`/`Array`/`Vector` `append` lemmas.
* [#6620](https://github.com/leanprover/lean4/pull/6620) adds lemmas about `HashMap.alter` and `HashMap.modify`. These lemmas describe the interaction of `alter` and `modify` with the read methods of the `HashMap`. The additions affect the `HashMap`, the `DHashMap`, and their respective raw versions. Moreover, the raw versions of `alter` and `modify` are defined.
* [#6625](https://github.com/leanprover/lean4/pull/6625) adds lemmas describing the behavior of `UIntX.toBitVec` on
`UIntX` operations.
* [#6630](https://github.com/leanprover/lean4/pull/6630) adds the theorems `Nat.shiftLeft_or_distrib`, `Nat.shiftLeft_xor_distrib`, `Nat.shiftLeft_and_distrib`, `Nat.testBit_mul_two_pow`, `Nat.bitwise_mul_two_pow`, and `Nat.shiftLeft_bitwise_distrib`, proving `Nat.shiftLeft_or_distrib` by emulating the proof strategy of `shiftRight_and_distrib`.
* [#6640](https://github.com/leanprover/lean4/pull/6640) completes aligning `List`/`Array`/`Vector` lemmas about
`flatten`. `Vector.flatten` was previously missing, and has been added
(for rectangular sizes only). A small number of missing `Option` lemmas
were also needed to get the proofs to go through.
* [#6660](https://github.com/leanprover/lean4/pull/6660) defines `Vector.flatMap`, changes the order of arguments in
`List.flatMap` for consistency, and aligns the lemmas for
`List`/`Array`/`Vector` `flatMap`.
* [#6661](https://github.com/leanprover/lean4/pull/6661) adds array indexing lemmas for `Vector.flatMap`. (These were not
available for `List` and `Array` due to variable lengths.)
* [#6667](https://github.com/leanprover/lean4/pull/6667) aligns `List.replicate`/`Array.mkArray`/`Vector.mkVector`
lemmas.
* [#6668](https://github.com/leanprover/lean4/pull/6668) fixes negative timestamps and `PlainDateTime`s before 1970.
* [#6674](https://github.com/leanprover/lean4/pull/6674) adds the theorems `BitVec.[getMsbD_mul, getElem_udiv, getLsbD_udiv, getMsbD_udiv]`.
* [#6695](https://github.com/leanprover/lean4/pull/6695) aligns `List/Array/Vector.reverse` lemmas.
* [#6697](https://github.com/leanprover/lean4/pull/6697) changes the arguments of `List/Array.mapFinIdx` from `(f : Fin
as.size → α → β)` to `(f : (i : Nat) → α → (h : i < as.size) → β)`, in
line with the API design elsewhere for `List/Array`.
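Under the new signature, the callback receives the index as a `Nat` together with a bounds proof (a sketch):

```lean
-- old: f : Fin as.size → α → β
-- new: f : (i : Nat) → α → (h : i < as.size) → β
#eval #[10, 20, 30].mapFinIdx fun i a _h => i + a
```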
* [#6701](https://github.com/leanprover/lean4/pull/6701) completes aligning `mapIdx` and `mapFinIdx` across
`List/Array/Vector`.
* [#6707](https://github.com/leanprover/lean4/pull/6707) completes aligning lemmas for `List` / `Array` / `Vector` about
`foldl`, `foldr`, and their monadic versions.
* [#6708](https://github.com/leanprover/lean4/pull/6708) deprecates `List.iota`, which we make no essential use of. `iota n` can be replaced with `(range' 1 n).reverse`. The verification lemmas for `range'` already have better coverage than those for `iota`. Any downstream projects using `iota` (we are not aware of any) are encouraged to migrate to `range'`.
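The suggested replacement can be checked directly:

```lean
-- `iota 3` produced [3, 2, 1]; `range' 1 3` is [1, 2, 3], so:
#eval (List.range' 1 3).reverse  -- [3, 2, 1]
```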
* [#6712](https://github.com/leanprover/lean4/pull/6712) aligns `List`/`Array`/`Vector` theorems for `countP` and
`count`.
* [#6723](https://github.com/leanprover/lean4/pull/6723) completes the alignment of `List/Array/Vector.{attach, attachWith, pmap}` lemmas. A number of gaps in the `List` API were filled in along the way.
* [#6728](https://github.com/leanprover/lean4/pull/6728) removes the theorem `Nat.mul_one` to simplify a rewrite in the proof of `BitVec.getMsbD_rotateLeft_of_lt`.
* [#6742](https://github.com/leanprover/lean4/pull/6742) adds lemmas showing what happens when multiplying an arbitrary term by `twoPow`, as well as when multiplying one `twoPow` by another.
* [#6743](https://github.com/leanprover/lean4/pull/6743) adds rewrites that normalize left shifts by extracting bits and
concatenating zeroes. If the shift amount is larger than the bit-width,
then the resulting bitvector is zero.
* [#6747](https://github.com/leanprover/lean4/pull/6747) adds the ability to push `BitVec.extractLsb` and
`BitVec.extractLsb'` with bitwise operations. This is useful for
constant-folding extracts.
* [#6767](https://github.com/leanprover/lean4/pull/6767) adds lemmas to rewrite `BitVec.shiftLeft`, `shiftRight`, and `sshiftRight'` by a `BitVec.ofNat` into a shift by a natural number. This will be used to canonicalize shifts by constant bitvectors into shifts by constant naturals, which have further rewrites available when the number is a power of two.
* [#6799](https://github.com/leanprover/lean4/pull/6799) adds a number of simple comparison lemmas against the top and bottom elements of `BitVec`. These are then used to teach `bv_normalize` that `(a < 1) = (a == 0)` and to remove an intermediate proof that is no longer necessary.
* [#6800](https://github.com/leanprover/lean4/pull/6800) uniformizes the naming of `enum`/`enumFrom` (on `List`) and
`zipWithIndex` (on `Array` and `Vector`), replacing all with `zipIdx`. At
the same time, we generalize to add an optional `Nat` parameter for the
initial value of the index (which previously existed, only for `List`,
as the separate function `enumFrom`).
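A sketch of the unified API, including the optional starting index:

```lean
#eval [10, 20, 30].zipIdx      -- pairs each element with its index
#eval [10, 20, 30].zipIdx 5    -- indices start at 5
```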
* [#6808](https://github.com/leanprover/lean4/pull/6808) adds simp lemmas replacing `BitVec.setWidth'` with `setWidth`,
and conditionally simplifying `setWidth v (setWidth w v)`.
* [#6818](https://github.com/leanprover/lean4/pull/6818) adds a `BitVec` lemma that `x >>> x = 0` and plumbs it through to `bv_normalize`. Some theorems that proved useful are also moved to the top of the `ushiftRight` section.
* [#6821](https://github.com/leanprover/lean4/pull/6821) adds basic lemmas about `Ordering`, describing the interaction
of `isLT`/`isLE`/`isGE`/`isGT`, `swap` and the constructors.
Additionally, it refactors the instance derivation code such that a
`LawfulBEq Ordering` instance is also derived automatically.
* [#6826](https://github.com/leanprover/lean4/pull/6826) adds injectivity theorems for inductives that did not get them
automatically (because they are defined too early) but also not yet
manually later.
* [#6828](https://github.com/leanprover/lean4/pull/6828) adds add/sub injectivity lemmas for BitVec, and then adds
specialized forms with additional symmetries for the `bv_normalize`
normal form.
* [#6831](https://github.com/leanprover/lean4/pull/6831) completes the alignment of `List/Array/Vector` lemmas about
`isEqv` and `==`.
* [#6833](https://github.com/leanprover/lean4/pull/6833) makes the signatures of `find` functions across
`List`/`Array`/`Vector` consistent. Verification lemmas will follow in
subsequent PRs.
* [#6835](https://github.com/leanprover/lean4/pull/6835) fills some gaps in the `Vector` API, adding `mapM`, `zip`, and
`ForIn'` and `ToStream` instances.
* [#6838](https://github.com/leanprover/lean4/pull/6838) completes aligning the (limited) verification API for
`List/Array/Vector.ofFn`.
* [#6840](https://github.com/leanprover/lean4/pull/6840) completes the alignment of
`List/Array/Vector.zip/zipWith/zipWithAll/unzip` lemmas.
* [#6845](https://github.com/leanprover/lean4/pull/6845) adds missing monadic higher order functions on
`List`/`Array`/`Vector`. Only the most basic verification lemmas
(relating the operations on the three container types) are provided for
now.
* [#6848](https://github.com/leanprover/lean4/pull/6848) adds simp lemmas proving `x + y = x ↔ y = 0` for `BitVec`, along with symmetries, and then adds these to the `bv_normalize` simp set.
* [#6860](https://github.com/leanprover/lean4/pull/6860) makes `take`/`drop`/`extract` available for each of
`List`/`Array`/`Vector`. The simp normal forms differ, however: in
`List`, we simplify `extract` to `take+drop`, while in `Array` and
`Vector` we simplify `take` and `drop` to `extract`. We also provide
`Array/Vector.shrink`, which simplifies to `take`, but is implemented by
repeatedly popping. Verification lemmas for `Array/Vector.extract` to
follow in a subsequent PR.
* [#6862](https://github.com/leanprover/lean4/pull/6862) defines Cooper resolution with a divisibility constraint as
formulated in
"Cutting to the Chase: Solving Linear Integer Arithmetic" by Dejan
Jovanović and Leonardo de Moura,
DOI 10.1007/s10817-013-9281-x.
* [#6863](https://github.com/leanprover/lean4/pull/6863) allows fixing regressions in mathlib introduced in
nightly-2024-02-25 by allowing the use of `x * y` in match patterns.
There are currently 11 instances in mathlib explicitly flagging the lack
of this match pattern.
* [#6864](https://github.com/leanprover/lean4/pull/6864) adds lemmas relating the `findIdx?`/`findFinIdx?`/`idxOf?`/`findIdxOf?`/`eraseP`/`erase` operations on `List` and on `Array`. This is preliminary to aligning the verification lemmas for `find...` and `erase...`.
* [#6868](https://github.com/leanprover/lean4/pull/6868) completes the alignment across `List/Array/Vector` of lemmas
about the `eraseP/erase/eraseIdx` operations.
* [#6872](https://github.com/leanprover/lean4/pull/6872) adds lemmas for `xor` injectivity and for when `and`/`or`/`xor` equal `allOnes` or zero, and plumbs support for the new lemmas through to `bv_normalize`.
* [#6875](https://github.com/leanprover/lean4/pull/6875) adds a lemma relating `msb` and `getMsbD`, and three lemmas
regarding `getElem` and `shiftConcat`. These lemmas were needed in
[Batteries#1078](https://github.com/leanprover-community/batteries/pull/1078)
and the request to upstream was made in the review of that PR.
* [#6878](https://github.com/leanprover/lean4/pull/6878) completes alignments of `List/Array/Vector` lemmas about
`range`, `range'`, and `zipIdx`.
* [#6883](https://github.com/leanprover/lean4/pull/6883) completes the alignment of lemmas about monadic functions on
`List/Array/Vector`. Amongst other changes, we change the simp normal
form from `List.forM` to `ForM.forM`, and correct the definition of
`List.flatMapM`, which previously was returning results in the incorrect
order. There remain many gaps in the verification lemmas for monadic
functions; this PR only makes the lemmas uniform across
`List/Array/Vector`.
* [#6890](https://github.com/leanprover/lean4/pull/6890) teaches `bv_normalize` to replace a subtraction on one side of an equality with an addition on the other side. This rewrite eliminates a `not` plus an addition in the normalized form, making the problem easier for the solver.
* [#6912](https://github.com/leanprover/lean4/pull/6912) aligns current coverage of `find`-type theorems across
`List`/`Array`/`Vector`. There are still quite a few holes in this API,
which will be filled later.
## Compiler
* [#6535](https://github.com/leanprover/lean4/pull/6535) avoids a linker warning on Windows.
* [#6547](https://github.com/leanprover/lean4/pull/6547) should prevent Lake from accidentally picking up other linkers
installed on the machine.
* [#6574](https://github.com/leanprover/lean4/pull/6574) actually prevents Lake from accidentally picking up other
toolchains installed on the machine.
* [#6664](https://github.com/leanprover/lean4/pull/6664) changes the toMono pass to no longer filter out type class instances, because they may actually be needed for later compilation.
* [#6665](https://github.com/leanprover/lean4/pull/6665) adds a new `lcAny` constant to the Prelude, which is meant for use in LCNF to represent types whose dependency on another term has been erased during compilation. This is in addition to the existing `lcErased` constant, which represents types that are irrelevant.
* [#6678](https://github.com/leanprover/lean4/pull/6678) modifies `LCNF.toMonoType` to use a more refined type erasure scheme, which distinguishes between irrelevant/erased information (represented by `lcErased`) and erased type dependencies (represented by `lcAny`). This corresponds to the irrelevant/object distinction in the old code generator.
* [#6680](https://github.com/leanprover/lean4/pull/6680) makes the new code generator skip generating code for declarations with an `@[implemented_by]` attribute, just like the old code generator.
* [#6757](https://github.com/leanprover/lean4/pull/6757) adds support for applying `csimp` theorems in `toLCNF`.
* [#6758](https://github.com/leanprover/lean4/pull/6758) prevents deadlocks from non-cyclical task waits that may
otherwise occur during parallel elaboration with small threadpool sizes.
* [#6837](https://github.com/leanprover/lean4/pull/6837) adds `Float32` to the LCNF `builtinRuntimeTypes` list. This was missed during the initial `Float32` implementation, and the omission had the side effect of lowering `Float32` to `obj` in the IR.
## Pretty Printing
* [#6703](https://github.com/leanprover/lean4/pull/6703) modifies the delaborator so that in `pp.tagAppFns` mode,
generalized field notation is tagged with the head constant. The effect
is that docgen documentation will linkify dot notation. Internal change:
now formatted `rawIdent` can be tagged.
* [#6716](https://github.com/leanprover/lean4/pull/6716) renames the option `infoview.maxTraceChildren` to
`maxTraceChildren` and applies it to the cmdline driver and language
server clients lacking an info view as well. It also implements the
common idiom of the option value `0` meaning "unlimited".
* [#6729](https://github.com/leanprover/lean4/pull/6729) makes the pretty printer for `.coeFun`-tagged functions respect
`pp.tagAppFns`. The effect is that in docgen, when an expression pretty
prints as `f x y z` with `f` a coerced function, then if `f` is a
constant it will be linkified.
* [#6730](https://github.com/leanprover/lean4/pull/6730) changes how app unexpanders are invoked. Before the ref was
`.missing`, but now the ref is the head constant's delaborated syntax.
This way, when `pp.tagAppFns` is true, then tokens in app unexpanders
are annotated with the head constant. The consequence is that in docgen,
tokens will be linkified. This new behavior is consistent with how
`notation` defines app unexpanders.
## Documentation
* [#6549](https://github.com/leanprover/lean4/pull/6549) fixes #6548.
* [#6638](https://github.com/leanprover/lean4/pull/6638) corrects the docstring of the theorem `Bitvec.toNat_add_of_lt`.
* [#6643](https://github.com/leanprover/lean4/pull/6643) changes the macOS docs to indicate that Lean now requires
pkgconf to build.
* [#6646](https://github.com/leanprover/lean4/pull/6646) changes the ubuntu docs to indicate that Lean now requires
pkgconf to build.
* [#6738](https://github.com/leanprover/lean4/pull/6738) updates our lexical structure documentation to mention the newly
supported ⱼ which lives in a separate unicode block and is thus not
captured by the current ranges.
* [#6885](https://github.com/leanprover/lean4/pull/6885) fixes the name of the truncating integer division function in
the `HDiv.hDiv` docstring (which is shown when hovering over `/`). It
was changed from `Int.div` to `Int.tdiv` in #5301.
## Server
* [#6597](https://github.com/leanprover/lean4/pull/6597) fixes the indentation of nested traces nodes in the info view.
* [#6794](https://github.com/leanprover/lean4/pull/6794) fixes a significant auto-completion performance regression that
was introduced in #5666, i.e. v4.14.0.
## Lake
* [#6290](https://github.com/leanprover/lean4/pull/6290) uses `StateRefT` instead of `StateT` to equip the Lake build
monad with a build store.
* [#6323](https://github.com/leanprover/lean4/pull/6323) adds a new Lake CLI command, `lake query`, that both builds
  targets and outputs their results. It can produce raw text or
  JSON-formatted output (with `--json` / `-J`).
* [#6418](https://github.com/leanprover/lean4/pull/6418) alters all builtin Lake facets to produce `Job` objects.
* [#6627](https://github.com/leanprover/lean4/pull/6627) aims to fix the trace issues reported by Mathlib that are
breaking `lake exe cache` in downstream projects.
* [#6631](https://github.com/leanprover/lean4/pull/6631) sets `MACOSX_DEPLOYMENT_TARGET` for shared libraries (it was
previously only set for executables).
* [#6771](https://github.com/leanprover/lean4/pull/6771) enables `FetchM` to be run from `JobM` / `SpawnM` and
vice-versa. This allows calls of `fetch` to asynchronously depend on the
outputs of other jobs.
* [#6780](https://github.com/leanprover/lean4/pull/6780) makes all targets and all `fetch` calls produce a `Job` of some
value. As part of this change, facet definitions (e.g., `library_data`,
`module_data`, `package_data`) and Lake type families (e.g.,
`FamilyOut`) should no longer include `Job` in their types (as this is
now implicit).
* [#6798](https://github.com/leanprover/lean4/pull/6798) deprecates the `-U` shorthand for the `--update` option.
* [#7209](https://github.com/leanprover/lean4/pull/7209) fixes broken Lake tests on Windows' new MSYS2. As of MSYS2
0.0.20250221, `OSTYPE` is now reported as `cygwin` instead of `msys`, which must be accounted for in a few Lake tests.
## Other
* [#6479](https://github.com/leanprover/lean4/pull/6479) speeds up JSON serialisation by using a lookup table to check
whether a string needs to be escaped.
* [#6519](https://github.com/leanprover/lean4/pull/6519) adds a script to automatically generate release notes using the
new `changelog-*` labels and "..." conventions.
* [#6542](https://github.com/leanprover/lean4/pull/6542) introduces a script that automates checking whether major
downstream repositories have been updated for a new toolchain release.
```` |
reference-manual/Manual/Releases/v4_21_0.lean | import VersoManual
import Manual.Meta.Markdown
open Manual
open Verso.Genre
#doc (Manual) "Lean 4.21.0 (2025-06-30)" =>
%%%
tag := "release-v4.21.0"
file := "v4.21.0"
%%%
````markdown
For this release, 295 changes landed. In addition to the 100 feature additions and 83 fixes listed below there were 2 refactoring changes, 4 documentation improvements, 6 performance improvements, 2 improvements to the test suite and 98 other changes.
## Highlights
_'Unknown identifier' code actions_
* [#7665](https://github.com/leanprover/lean4/pull/7665) and [#8180](https://github.com/leanprover/lean4/pull/8180) add
support for code actions that resolve 'Unknown identifier' errors by either importing the missing declaration or by
changing the identifier to one from the environment.
_New Language Features_
* [#8449](https://github.com/leanprover/lean4/pull/8449) and [#8516](https://github.com/leanprover/lean4/pull/8516)
upstream and extend the Mathlib `clear_value` tactic. Given a
local definition `x : T := v`, the tactic `clear_value x` replaces it
with a hypothesis `x : T`, or throws an error if the goal does not
depend on the value `v`. The syntax `clear_value (h : x = _)` creates a
hypothesis `h : x = _` before clearing the value of `x`. Any expression definitionally equal to `x`
can be used in place of the underscore.
Furthermore, `clear_value *` clears all values that can be cleared, or throws an
error if none can be cleared.
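As a hedged sketch of the new tactic (the goal and hypothesis names here are illustrative, not taken from the PRs):

```lean
example (n : Nat) : ∃ m, m = n + 1 := by
  -- Introduce a local definition.
  let x : Nat := n + 1
  -- Replace `x : Nat := n + 1` with a plain hypothesis `x : Nat`,
  -- recording the old value in `h : x = n + 1`.
  clear_value (h : x = _)
  exact ⟨x, h⟩
```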
* [#8512](https://github.com/leanprover/lean4/pull/8512) adds a `value_of% ident` term that elaborates to the value of
the local or global constant `ident`. This is useful for creating
definition hypotheses:
```lean
let x := ... complicated expression ...
have hx : x = value_of% x := rfl
```
* [#8450](https://github.com/leanprover/lean4/pull/8450) adds a feature to the `subst` tactic so that when `x : X := v`
is a local definition, `subst x` substitutes `v` for `x` in the goal and
removes `x`. Previously the tactic would throw an error.
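A minimal sketch of the new behavior (illustrative, not from the PR):

```lean
example (n : Nat) : ∃ m, n < m := by
  let x : Nat := n + 1
  have h : n < x := Nat.lt_succ_self n
  -- `x` is a local definition, so `subst x` now substitutes `n + 1`
  -- for `x` and removes it (previously this was an error).
  subst x
  exact ⟨n + 1, h⟩
```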
* [#8037](https://github.com/leanprover/lean4/pull/8037) introduces a `noConfusionType` construction that’s sub-quadratic
in size, and reduces faster. The previous `noConfusion` construction with
two nested `match` statements is quadratic in size and reduction behavior.
Using some helper definitions, a linear size construction is possible.
* [#8104](https://github.com/leanprover/lean4/pull/8104) makes `fun_induction` and `fun_cases` (try to) unfold the
function application of interest in the goal. The old behavior can be
enabled with `set_option tactic.fun_induction.unfolding false`. For
`fun_cases` this does not work yet when the function’s result type
depends on one of the arguments, see issue [#8296](https://github.com/leanprover/lean4/issues/8296).
* [#8171](https://github.com/leanprover/lean4/pull/8171) omits cases from functional induction/cases principles that are
implemented `by contradiction` (or, more generally, `False.elim`,
  `absurd`, or `noConfusion`). **Breaking change** in the sense that there are
fewer goals to prove after using functional induction.
* [#8106](https://github.com/leanprover/lean4/pull/8106) adds a `register_linter_set` command for declaring linter sets.
The `getLinterValue` function now checks if the present linter is
contained in a set that has been enabled (using the `set_option` command
or on the command line).
* [#8267](https://github.com/leanprover/lean4/pull/8267) makes `#guard_msgs` treat `trace` messages separately from
  `info`, `warning`, and `error`. It also introduces the ability to say
  `#guard_msgs (pass info)`, like `(drop info)` so far, and adds
  `(check info)` as the explicit form of `(info)`, for completeness.
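For illustration, a sketch of the `#guard_msgs` idiom that the new `(check info)` form makes explicit (not taken from the PR):

```lean
-- `(check info)` explicitly asks that `info` messages be checked
-- against the expected output in the docstring.
/-- info: 2 -/
#guard_msgs (check info) in
#eval 1 + 1
```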
_Library Highlights_
* [#8358](https://github.com/leanprover/lean4/pull/8358) introduces a very minimal version of the new iterator library.
It comes with list iterators and various consumers, namely `toArray`,
`toList`, `toListRev`, `ForIn`, `fold`, `foldM` and `drain`. All
consumers also come in a partial variant that can be used without any
proofs. This limited version of the iterator library generates decent
code, even with the old code generator.
* [#7352](https://github.com/leanprover/lean4/pull/7352) reworks the `simp` set around the `Id` monad, to not elide or
  unfold `pure` and `Id.run`.
* [#8313](https://github.com/leanprover/lean4/pull/8313) changes the definition of `Vector` so it no longer extends
`Array`. This prevents `Array` API from "leaking through".
_Other Highlights_
* Performance optimizations in `dsimp`:
- [#6973](https://github.com/leanprover/lean4/pull/6973) stops `dsimp` from visiting proof terms, which should make
`simp` and `dsimp` more efficient.
- [#7428](https://github.com/leanprover/lean4/pull/7428) adds a `dsimp` cache to `simp`. Previously each `dsimp` call from
`simp` started with a fresh cache. As a result, time spent in `simp` while
compiling Mathlib is reduced by over 45%, giving an overall 8% speedup to
Mathlib compilation.
* [#8221](https://github.com/leanprover/lean4/pull/8221) adjusts the experimental module system to not export the bodies
of `def`s unless opted out by the new attribute `@[expose]` on the `def`
or on a surrounding `section`.
* [#8559](https://github.com/leanprover/lean4/pull/8559) and [#8560](https://github.com/leanprover/lean4/pull/8560) fix an adversarial
soundness attack described in [#8554](https://github.com/leanprover/lean4/pull/8554). The
attack exploits the fact that `assert!` no longer aborts execution, and
that users can redirect error messages.
## Language
* [#6973](https://github.com/leanprover/lean4/pull/6973) stops `dsimp` from visiting proof terms, which should make
`simp` and `dsimp` more efficient.
* [#7428](https://github.com/leanprover/lean4/pull/7428) adds a `dsimp` cache to `simp`. Previously each `dsimp` call from `simp` started with a fresh cache. As a result, time spent in `simp` while compiling Mathlib is reduced by over 45%, giving an overall 8% speedup to Mathlib compilation.
* [#7631](https://github.com/leanprover/lean4/pull/7631) fixes `Lean.Level.mkIMaxAux` (`mk_imax` in the kernel) such that
`imax 1 u` reduces to `u`.
* [#7977](https://github.com/leanprover/lean4/pull/7977) adds basic support for eta-reduction to `grind`.
* [#8002](https://github.com/leanprover/lean4/pull/8002) fixes an issue where "go to definition" for variables
generalized by the `induction` and `cases` tactic did not work. Closes
#2873.
* [#8024](https://github.com/leanprover/lean4/pull/8024) adds the `--setup` option to the `lean` CLI. It takes a path to
a JSON file containing information about a module's imports and
configuration, superseding that in the module's own file header. This
will be used by Lake to specify paths to module artifacts (e.g., oleans
and ileans) separate from the `LEAN_PATH` schema.
* [#8037](https://github.com/leanprover/lean4/pull/8037) introduces a `noConfusionType` construction that’s sub-quadratic
in size, and reduces faster.
* [#8104](https://github.com/leanprover/lean4/pull/8104) makes `fun_induction` and `fun_cases` (try to) unfold the
function application of interest in the goal. The old behavior can be
enabled with `set_option tactic.fun_induction.unfolding false`. For
`fun_cases` this does not work yet when the function’s result type
depends on one of the arguments, see issue #8296.
* [#8106](https://github.com/leanprover/lean4/pull/8106) adds a `register_linter_set` command for declaring linter sets.
The `getLinterValue` function now checks if the present linter is
contained in a set that has been enabled (using the `set_option` command
or on the command line).
* [#8169](https://github.com/leanprover/lean4/pull/8169) makes the whitespace handling in the syntax of `omit` and
`include` consistent with `variable`.
* [#8171](https://github.com/leanprover/lean4/pull/8171) omits cases from functional induction/cases principles that are
implemented `by contradiction` (or, more generally, `False.elim`,
  `absurd`, or `noConfusion`). Breaking change in the sense that there are
fewer goals to prove after using functional induction.
* [#8196](https://github.com/leanprover/lean4/pull/8196) improves the E-matching pattern inference procedure in `grind`.
Consider the following theorem:
```lean
@[grind →]
theorem eq_empty_of_append_eq_empty {xs ys : Array α} (h : xs ++ ys = #[]) : xs = #[] ∧ ys = #[] :=
append_eq_empty_iff.mp h
```
Before this PR, `grind` inferred the following pattern:
```lean
@HAppend.hAppend _ _ _ _ #2 #1
```
Note that this pattern would match any `++` application, even if it had
nothing to do with arrays. With this PR, the inferred pattern becomes:
```lean
@HAppend.hAppend (Array #3) (Array _) (Array _) _ #2 #1
```
With the new pattern, the theorem will not be considered by `grind` for
goals that do not involve `Array`s.
* [#8198](https://github.com/leanprover/lean4/pull/8198) fixes an issue in the theory propagation used in `grind`. When
two equivalence classes are merged, the core may need to push additional
equalities or disequalities down to the satellite theory solvers (e.g.,
`cutsat`, `comm ring`, etc). Some solvers (e.g. `cutsat`) assume that
all of the core’s invariants hold before they receive those facts.
Propagating immediately therefore risks violating a solver’s
pre-conditions midway through the merge. To decouple the merge operation
from propagation and to keep the core solver-agnostic, this PR adds the
helper type `PendingTheoryPropagation`.
* [#8208](https://github.com/leanprover/lean4/pull/8208) reduces the need for defeq in frequently used bv_decide rewrite
by turning them into simprocs that work on structural equality instead.
As the intended meaning of these rewrites is to simply work with
structural equality anyways this should not change the proving power of
`bv_decide`'s rewriter but just make it faster on certain very large
problems.
* [#8209](https://github.com/leanprover/lean4/pull/8209) fixes a nondeterminism issue in the `grind` tactic. It was a bug
in the model-based theory combination module.
* [#8221](https://github.com/leanprover/lean4/pull/8221) adjusts the experimental module system to not export the bodies
of `def`s unless opted out by the new attribute `@[expose]` on the `def`
or on a surrounding `section`.
* [#8224](https://github.com/leanprover/lean4/pull/8224) adds diagnostic information for the commutative ring procedure
in `grind`.
* [#8226](https://github.com/leanprover/lean4/pull/8226) fixes the `simplifyBasis` procedure in the commutative ring
procedure in `grind`.
* [#8231](https://github.com/leanprover/lean4/pull/8231) changes the behaviour of `apply?` so that the `sorry` it uses to
close the goal is non-synthetic. (Recall that correct use of synthetic
sorries requires that the tactic also generates an error message, which
we don't want to do in this situation.) This change defends against the problem reported in [#8212](https://github.com/leanprover/lean4/issues/8212).
* [#8232](https://github.com/leanprover/lean4/pull/8232) fixes elaboration of constants in the `rewrite` tactic.
  Previously, `rw [eq_self]` would elaborate `eq_self` twice and add it
to the infotree twice. This would lead to the "Expected type" being
delaborated with an unknown universe metavariable.
* [#8241](https://github.com/leanprover/lean4/pull/8241) changes the behavior of the `rename` tactic to skip over
implementation detail hypotheses when finding a hypothesis to rename.
* [#8254](https://github.com/leanprover/lean4/pull/8254) fixes unintended inlining of `ToJson`, `FromJson`, and `Repr`
instances, which was causing exponential compilation times in `deriving`
clauses for large structures.
* [#8259](https://github.com/leanprover/lean4/pull/8259) clarifies the invalid field notation error when the projected
  value's type is a metavariable.
* [#8260](https://github.com/leanprover/lean4/pull/8260) clarifies the invalid dotted identifier notation error when the
type is a sort.
* [#8261](https://github.com/leanprover/lean4/pull/8261) adjusts the error message when `apply` fails to unify. It is
clearer about distinguishing the term being applied and the goal, as
well as distinguishing the "conclusion" of the given term and the term
itself.
* [#8262](https://github.com/leanprover/lean4/pull/8262) improves the type-as-hole error message. Type-as-hole error for
theorem declarations should not admit the possibility of omitting the
type entirely.
* [#8264](https://github.com/leanprover/lean4/pull/8264) rewords the `application type mismatch` error message by more
specifically mentioning that the problem is with the final argument.
This is useful when the same argument is passed to the function multiple
times.
* [#8267](https://github.com/leanprover/lean4/pull/8267) makes `#guard_msgs` treat `trace` messages separately from
  `info`, `warning`, and `error`. It also introduces the ability to say
  `#guard_msgs (pass info)`, like `(drop info)` so far, and adds
  `(check info)` as the explicit form of `(info)`, for completeness.
* [#8270](https://github.com/leanprover/lean4/pull/8270) makes the enum pass of `bv_decide` handle enum types that are
universe polymorphic.
* [#8271](https://github.com/leanprover/lean4/pull/8271) changes `addPPExplicitToExposeDiff` to show universe differences
and to visit into projections, e.g.:
```
error: tactic 'rfl' failed, the left-hand side
(Test.mk (∀ (x : PUnit.{1}), True)).1
is not definitionally equal to the right-hand side
(Test.mk (∀ (x : PUnit.{2}), True)).1
```
for
```lean
inductive Test where
  | mk (x : Prop)
```
* [#8275](https://github.com/leanprover/lean4/pull/8275) ensures the congruence closure in `grind` can find non-dependent
  arrow congruences. That is, it can apply the `implies_congr` theorem.
* [#8276](https://github.com/leanprover/lean4/pull/8276) adds the instances `Grind.CommRing (Fin n)` and `Grind.IsCharP
(Fin n) n`. New tests:
```lean
example (x y z : Fin 13) :
(x + y + z) ^ 2 = x ^ 2 + y ^ 2 + z ^ 2 + 2 * (x * y + y * z + z * x) := by
  grind +ring
```
* [#8277](https://github.com/leanprover/lean4/pull/8277) improves the generation of `.induct_unfolding` by rewriting
`match` statements more reliably, using the new “congruence equations”
introduced in #8284. Fixes #8195.
* [#8280](https://github.com/leanprover/lean4/pull/8280) adds support for arrows in the congruence closure procedure used
in `grind`.
* [#8281](https://github.com/leanprover/lean4/pull/8281) improves the module used to prove auxiliary type cast equalities
in `grind`.
* [#8284](https://github.com/leanprover/lean4/pull/8284) adds a new variant of equations for matchers, namely “congruence
equations” that generalize the normal matcher equations. They have
unrestricted left-hand-sides, extra equality assumptions relating the
discriminants with the patterns and thus prove heterogeneous equalities.
In that sense they combine congruence with rewriting. They can be used
to rewrite matcher applications where, due to dependencies, `simp` would
fail to rewrite the discriminants, and will be used when producing the
unfolding induction theorems.
* [#8285](https://github.com/leanprover/lean4/pull/8285) fixes “declaration has free variables” errors when generating a
splitter for a match statement with named patterns. Fixes #8274.
* [#8299](https://github.com/leanprover/lean4/pull/8299) implements a missing preprocessing step in `grind`: abstract
  metavariables in the goal.
* [#8301](https://github.com/leanprover/lean4/pull/8301) unfolds functions in the unfolding induction principle properly
when they use `bif` (a.k.a. `Bool.cond`).
* [#8302](https://github.com/leanprover/lean4/pull/8302) lets `cases` fail gracefully when the motive has a complex
argument whose type is dependent on the targets. While the
`induction` tactic can handle this well, `cases` does not. This change
at least gracefully degrades to not instantiating that motive parameter.
See issue [#8296](https://github.com/leanprover/lean4/issues/8296) for more details on this issue.
* [#8303](https://github.com/leanprover/lean4/pull/8303) fixes missing occurrences of `foldProjs` in `grind`.
* [#8306](https://github.com/leanprover/lean4/pull/8306) makes it possible for `bv_decide` to tackle situations in its
  enum type preprocessing where the enums themselves are used in a
  dependently typed context (for example inside of a `GetElem` body) and
  thus not trivially accessible to `simp` for rewriting. To do this we
  drop `GetElem` on `BitVec` as well as `dite` as early as possible in the
  pipeline.
* [#8321](https://github.com/leanprover/lean4/pull/8321) lets the termination argument inference consider negations of
Nat comparisons. Fixes [#8257](https://github.com/leanprover/lean4/issues/8257).
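As a hedged sketch of the kind of definition this helps (illustrative, not from the PR), the recursive call below sits under the negated guard `¬ 100 ≤ n`, which termination inference can now use to derive a decreasing measure such as `100 - n`:

```lean
-- The recursive call happens only when `¬ 100 ≤ n`, i.e. `n < 100`,
-- so `100 - n` strictly decreases at each call.
def countUp (n : Nat) : Nat :=
  if 100 ≤ n then n else countUp (n + 1)
```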
* [#8323](https://github.com/leanprover/lean4/pull/8323) adds support for bv_decide to understand `BitVec.reverse` in
bitblasting.
* [#8330](https://github.com/leanprover/lean4/pull/8330) improves support for structure extensionality in `grind`. It now
uses eta expansion for structures instead of the extensionality theorems
generated by `[ext]`. Examples:
```lean
opaque f (a : Nat) : Nat × Bool
```
* [#8338](https://github.com/leanprover/lean4/pull/8338) improves the error messages displayed in `inductive`
declarations when type parameters are invalid or absent.
* [#8341](https://github.com/leanprover/lean4/pull/8341) fixes the `propagateCtor` constraint propagator used in `grind`.
* [#8343](https://github.com/leanprover/lean4/pull/8343) splits `Lean.Grind.CommRing` into 4 type classes, for semirings
and noncommutative rings. This does not yet change the behaviour of
`grind`, which expects to find all 4 type classes. Later we will make
some generalizations.
* [#8344](https://github.com/leanprover/lean4/pull/8344) fixes term normalization issues in `grind`, and the new option
`grind +etaStruct`.
* [#8347](https://github.com/leanprover/lean4/pull/8347) adds draft type classes for `grind` to process facts about
ordered modules. These interfaces will evolve as the implementation
develops.
* [#8354](https://github.com/leanprover/lean4/pull/8354) makes sure that when generating the unfolding functional
induction theorem, `mdata` does not get in the way.
* [#8356](https://github.com/leanprover/lean4/pull/8356) tries harder to clean internals of the argument packing of n-ary
functions from the functional induction theorem, in particular the
unfolding variant
* [#8359](https://github.com/leanprover/lean4/pull/8359) improves the functional cases principles, by making a more
educated guess which function parameters should be targets and which
should remain parameters (or be dropped). This simplifies the
principles, and increases the chance that `fun_cases` can unfold the
function call.
* [#8361](https://github.com/leanprover/lean4/pull/8361) fixes a bug in the `cases` tactic introduced in #3188 that arises
  when `cases` (not `induction`) is used with a non-atomic expression in
  `using`, and the argument indexing gets confused.
* [#8363](https://github.com/leanprover/lean4/pull/8363) unifies various ways of naming auxiliary declarations in a
conflict-free way and ensures the method is compatible with diverging
branches of elaboration such as parallelism or Aesop-like
backtracking+replaying search.
* [#8365](https://github.com/leanprover/lean4/pull/8365) fixes the transparency mode for ground patterns. This is
important for implicit instances. Here is a mwe for an issue detected
while testing `grind` in Mathlib.
```lean
example (a : Nat) : max a a = a := by
  grind
```
* [#8368](https://github.com/leanprover/lean4/pull/8368) improves the error messages produced by invalid pattern-match
alternatives and improves parity in error placement between
pattern-matching tactics and elaborators.
* [#8369](https://github.com/leanprover/lean4/pull/8369) fixes a type error in the `instantiateTheorem` function used in
`grind`. It was failing to instantiate theorems such as
```lean
theorem getElem_reverse {xs : Array α} {i : Nat} (hi : i < xs.reverse.size)
: (xs.reverse)[i] = xs[xs.size - 1 - i]'(by simp at hi; omega)
```
in examples such as
```lean
example (xs : Array Nat) (w : xs.reverse = xs) (j : Nat) (hj : 0 ≤ j) (hj' : j < xs.size / 2)
: xs[j] = xs[xs.size - 1 - j]
```
generating the issue
```lean
[issue] type error constructing proof for Array.getElem_reverse
when assigning metavariable ?hi with
‹j < xs.toList.length›
has type
j < xs.toList.length : Prop
but is expected to have type
j < xs.reverse.size : Prop
```
* [#8375](https://github.com/leanprover/lean4/pull/8375) ensures that using `mapError` to expand an error message uses
`addMessageContext` to include the current context, so that expressions
  are rendered correctly. Also adds a `prependError` variant with a more
  convenient argument order for the common case of
  prepending-and-indenting.
* [#8403](https://github.com/leanprover/lean4/pull/8403) adds missing monotonicity lemmas for universal quantifiers, that
are used in defining (co)inductive predicates.
* [#8410](https://github.com/leanprover/lean4/pull/8410) fixes a case-splitting heuristic in `grind` and simplifies the
proof for test `grind_palindrome2.lean`.
* [#8412](https://github.com/leanprover/lean4/pull/8412) fixes the `markNestedProofs` preprocessor used in `grind`. There
  was a missing case (e.g., `Expr.mdata`).
* [#8413](https://github.com/leanprover/lean4/pull/8413) implements normalization rules that pull universal quantifiers
across disjunctions. This is a common normalization step performed by
first-order theorem provers.
* [#8417](https://github.com/leanprover/lean4/pull/8417) introduces `Lean.Grind.Field`, proves that an `IsCharP 0` field
satisfies `NoNatZeroDivisors`, and sets up some basic (currently
failing) tests for `grind`.
* [#8426](https://github.com/leanprover/lean4/pull/8426) adds the attribute `[grind?]`. It is like `[grind]` but displays
inferred E-matching patterns. It is more convenient than writing it manually.
Thanks @kim-em for suggesting this feature.
```lean
set_option trace.grind.ematch.pattern true
```
It also improves some tests, and adds the helper function
`ENode.isRoot`.
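A hedged sketch of usage (the theorem here is illustrative, not from the PR): tagging with `[grind?]` behaves like `[grind]` but additionally prints the E-matching pattern that was inferred for the theorem.

```lean
-- Like `@[grind]`, but also reports the inferred E-matching pattern.
@[grind?]
theorem rev_rev (xs : List α) : xs.reverse.reverse = xs :=
  List.reverse_reverse xs
```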
* [#8429](https://github.com/leanprover/lean4/pull/8429) adds `Lean.Grind.Ring.IsOrdered`, and cleans up the ring/module
`grind` API. These type classes are at present unused, but will support
future algorithmic improvements in `grind`.
* [#8437](https://github.com/leanprover/lean4/pull/8437) fixes `split` in the presence of metavariables in the target.
* [#8438](https://github.com/leanprover/lean4/pull/8438) ensures that `grind` diagnostics are obtained even when
`maxHeartbeats` is reached.
Also removes some dead code.
* [#8440](https://github.com/leanprover/lean4/pull/8440) implements non-chronological backtracking for the `grind`
tactic. This feature ensures that `grind` does not need to process
irrelevant branches after performing a case-split that is not relevant.
It is not just about performance, but also the size of the final proof
term. The new test demonstrates this feature in practice.
```lean
-- In the following test, the first 8 case-splits are irrelevant,
-- and non-chronological backtracking is used to avoid searching
-- (2^8 - 1) irrelevant branches
/--
trace:
[grind.split] p8 ∨ q8, generation: 0
[grind.split] p7 ∨ q7, generation: 0
[grind.split] p6 ∨ q6, generation: 0
[grind.split] p5 ∨ q5, generation: 0
[grind.split] p4 ∨ q4, generation: 0
[grind.split] p3 ∨ q3, generation: 0
[grind.split] p2 ∨ q2, generation: 0
[grind.split] p1 ∨ q1, generation: 0
[grind.split] ¬p ∨ ¬q, generation: 0
-/
#guard_msgs (trace) in
set_option trace.grind.split true in
theorem ex
: p ∨ q →
¬ p ∨ q →
p ∨ ¬ q →
¬ p ∨ ¬ q →
p1 ∨ q1 →
p2 ∨ q2 →
p3 ∨ q3 →
p4 ∨ q4 →
p5 ∨ q5 →
p6 ∨ q6 →
p7 ∨ q7 →
p8 ∨ q8 →
False := by
grind (splits := 10)
```
* [#8443](https://github.com/leanprover/lean4/pull/8443) adds the lemmas about ordered rings and ordered fields which
will be needed by the new algebraic normalization components of `grind`.
* [#8449](https://github.com/leanprover/lean4/pull/8449) upstreams and extends the Mathlib `clear_value` tactic. Given a
local definition `x : T := v`, the tactic `clear_value x` replaces it
with a hypothesis `x : T`, or throws an error if the goal does not
depend on the value `v`. The syntax `clear_value x with h` creates a
hypothesis `h : x = v` before clearing the value of `x`. Furthermore,
`clear_value *` clears all values that can be cleared, or throws an
error if none can be cleared.
* [#8450](https://github.com/leanprover/lean4/pull/8450) adds a feature to the `subst` tactic so that when `x : X := v`
is a local definition, `subst x` substitutes `v` for `x` in the goal and
removes `x`. Previously the tactic would throw an error.
* [#8466](https://github.com/leanprover/lean4/pull/8466) fixes another instance of the `grind` issue "unexpected kernel
projection term during internalization".
* [#8472](https://github.com/leanprover/lean4/pull/8472) avoids name resolution blocking on the elaboration of a
theorem's proof when looking up the theorem name.
* [#8479](https://github.com/leanprover/lean4/pull/8479) implements hash-consing for `grind` that takes alpha equivalence
into account.
* [#8483](https://github.com/leanprover/lean4/pull/8483) ensures `grind` reuses the `simp` cache between different calls.
Recall that `grind` uses `simp` to normalize terms during
internalization.
* [#8491](https://github.com/leanprover/lean4/pull/8491) fixes the behavior of `simp_all?` and `simp_all?!`, aligning
them with `simp_all` and `simp_all!` respectively.
* [#8506](https://github.com/leanprover/lean4/pull/8506) implements `match`-expressions in `grind` using `match`
congruence equations. The goal is to minimize the number of `cast`
operations that need to be inserted, and avoid `cast` over functions.
  The new approach supports `match`-expressions of the form `match h : ...
  with ...`.
* [#8512](https://github.com/leanprover/lean4/pull/8512) adds a `value_of% ident` term that elaborates to the value of
the local or global constant `ident`. This is useful for creating
definition hypotheses:
```lean
let x := ... complicated expression ...
have hx : x = value_of% x := rfl
```
* [#8516](https://github.com/leanprover/lean4/pull/8516) is a followup to #8449 to refine the syntax of `clear_value`.
The syntax for adding equality hypotheses before clearing values is now
`clear_value (h : x = _)`. Any expression definitionally equal to `x`
can be used in place of the underscore.
* [#8536](https://github.com/leanprover/lean4/pull/8536) fixes the support for `LawfulBEq` and `BEq` in `grind`.
* [#8541](https://github.com/leanprover/lean4/pull/8541) ensures that for any nested proof `h : p` in a goal, we
propagate that `p` is true in the `grind` tactic.
* [#8542](https://github.com/leanprover/lean4/pull/8542) fixes two inappropriate uses of `whnfD` in `grind`. They were
potential performance foot guns, and were producing unexpected errors
since `whnfD` is not consistently used (and it should not be) in all
modules.
* [#8544](https://github.com/leanprover/lean4/pull/8544) implements support for over-applied `ite` and `dite`
applications in the `grind` tactic. It adds support for propagation and
case-split.
* [#8549](https://github.com/leanprover/lean4/pull/8549) fixes the hash function used to implement congruence closure in
`grind`. The hash of an `Expr` must not depend on whether the expression
has been internalized or not.
* [#8564](https://github.com/leanprover/lean4/pull/8564) simplifies the interface between the `grind` core and the cutsat
procedure. Before this PR, core would try to minimize the number of
numeric literals that have to be internalized in cutsat. This
optimization was buggy (see `grind_cutsat_zero.lean` test), and produced
counterintuitive counterexamples.
* [#8569](https://github.com/leanprover/lean4/pull/8569) adds support for generalized E-matching patterns for arbitrary
  theorems.
* [#8570](https://github.com/leanprover/lean4/pull/8570) fixes some issues in the generalized E-matching pattern support
  after the stage0 update.
* [#8572](https://github.com/leanprover/lean4/pull/8572) adds some generalized `Option` theorems for `grind`. They avoid
  `cast` operations during E-matching.
* [#8576](https://github.com/leanprover/lean4/pull/8576) sets `ring := true` by default in `grind`. It also fixes a bug
in the reification procedure, and improves the term internalization in
the ring and cutsat modules.
## Library
* [#7352](https://github.com/leanprover/lean4/pull/7352) reworks the `simp` set around the `Id` monad, to not elide or
  unfold `pure` and `Id.run`.
* [#7995](https://github.com/leanprover/lean4/pull/7995) adds a verification of `Array.qsort` properties, trying to use
`grind` and `fun_induction` where possible.
Currently this is in the `tests/` folder, but once `grind` is ready for
production use we will move it out into the library.
* [#8182](https://github.com/leanprover/lean4/pull/8182) adds `ofList_eq_insertMany_empty` lemmas for all the hash/tree
map types, with the exception of
`Std.HashSet.Raw.ofList_eq_insertMany_empty`.
* [#8188](https://github.com/leanprover/lean4/pull/8188) takes the existing `getElem_map` statements for `HashMap`
variants (also `getElem?`, `getElem!`, and `getD` statements), adds a
prime to their name and an explanatory comment, and replaces the
unprimed statement with a simpler statement that is only true with
`LawfulBEq` present. The original statements which were simp lemmas are
now low priority simp lemmas, so the nicer statements should fire when
`LawfulBEq` is available.
* [#8202](https://github.com/leanprover/lean4/pull/8202) adds an inference that was repeatedly needed when proving
`BitVec.msb_sdiv`, and is the symmetric version of
`BitVec.one_eq_zero_iff`
* [#8206](https://github.com/leanprover/lean4/pull/8206) shows that negating a bitvector created from a natural number
equals creating a bitvector from the negative of that number (as an
integer).
* [#8216](https://github.com/leanprover/lean4/pull/8216) completes adding `@[grind]` annotations for `Option` lemmas, and
incidentally fills in some `Option` API gaps/defects.
* [#8218](https://github.com/leanprover/lean4/pull/8218) continues adding `@[grind]` attributes for List/Array/Vector,
particularly to the lemmas involving the `toList`/`toArray` functions.
* [#8246](https://github.com/leanprover/lean4/pull/8246) adds `@[grind]` annotations for `HashMap` and variants.
* [#8272](https://github.com/leanprover/lean4/pull/8272) adds lemmas about the length and use of `[]?` on results of
`List.intersperse`.
* [#8291](https://github.com/leanprover/lean4/pull/8291) changes the statements of `Fin` lemmas to use `[NeZero n] (i :
Fin n)` rather than `(i : Fin (n+1))` where possible.
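The `[NeZero n]` form captures the same nonemptiness guarantee as `Fin (n + 1)` without fixing the successor shape; for instance:

```lean
-- `Fin n` has an element exactly when `n ≠ 0`, which `NeZero n` provides:
example (n : Nat) [NeZero n] : Fin n :=
  ⟨0, Nat.pos_of_ne_zero (NeZero.ne n)⟩
```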
* [#8298](https://github.com/leanprover/lean4/pull/8298) adds various `Option` lemmas and defines `Option.filterM` for
applicative functors.
* [#8313](https://github.com/leanprover/lean4/pull/8313) changes the definition of `Vector` so it no longer extends
`Array`. This prevents `Array` API from "leaking through".
* [#8315](https://github.com/leanprover/lean4/pull/8315) splits `Std.Classes.Ord` into `Std.Classes.Ord.Basic` (with few
imports) and `Std.Classes.Ord.SInt` and `Std.Classes.Ord.Vector`. These
changes avoid importing `Init.Data.BitVec.Lemmas` unnecessarily into
various basic files.
As the new import-only file `Std.Classes.Ord` imports all three of
these, end-users are not affected.
* [#8318](https://github.com/leanprover/lean4/pull/8318) is follow-up to #8272, combining the conditional lemmas for
`getElem_intersperse` into a single lemma with an `if` on the RHS.
* [#8327](https://github.com/leanprover/lean4/pull/8327) adds `@[grind]` annotations to the generic
`getElem?_eq_none_iff`, `isSome_getElem?`, and `get_getElem?`.
* [#8328](https://github.com/leanprover/lean4/pull/8328) adds the `@[grind =]` attribute to all `contains_iff_mem`
lemmas.
* [#8331](https://github.com/leanprover/lean4/pull/8331) improves the docstring for `PlainDateTime.now` and its variants.
* [#8346](https://github.com/leanprover/lean4/pull/8346) adds some missing lemmas about consequences of
positivity/non-negativity of `a * b : Int`.
* [#8349](https://github.com/leanprover/lean4/pull/8349) fixes the signature of the intended `Inhabited` instance for
`ExtDHashMap`.
* [#8357](https://github.com/leanprover/lean4/pull/8357) adds variants of `dite_eq_left_iff` that will be useful in a
future PR.
* [#8358](https://github.com/leanprover/lean4/pull/8358) introduces a very minimal version of the new iterator library.
It comes with list iterators and various consumers, namely `toArray`,
`toList`, `toListRev`, `ForIn`, `fold`, `foldM` and `drain`. All
consumers also come in a partial variant that can be used without any
proofs. This limited version of the iterator library generates decent
code, even with the old code generator.
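A hedged sketch of what using these consumers might look like (`List.iter` and the consumer names are from the PRs in this release; the exact call shapes are assumptions):

```lean
def xs := [1, 2, 3]
-- Turn a list into an iterator, then consume it back into a list/array:
#eval xs.iter.toList
#eval xs.iter.toArray
```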
* [#8378](https://github.com/leanprover/lean4/pull/8378) improves and extends the api around `Ord` and `Ordering`.
* [#8379](https://github.com/leanprover/lean4/pull/8379) adds missing `Option` lemmas.
* [#8380](https://github.com/leanprover/lean4/pull/8380) provides simple lemmas about `toArray`, `toList` and `toListRev`
for the iterator library.
* [#8384](https://github.com/leanprover/lean4/pull/8384) provides lemmas about the behavior of `step`, `toArray`,
`toList` and `toListRev` on list iterators created with `List.iter` and
`List.iterM`.
* [#8389](https://github.com/leanprover/lean4/pull/8389) adds the `List/Array/Vector.ofFnM`, the monadic analogues of
`ofFn`, along with basic theory.
* [#8392](https://github.com/leanprover/lean4/pull/8392) corrects some `Array` lemmas to be about `Array` not `List`.
* [#8397](https://github.com/leanprover/lean4/pull/8397) cleans up many duplicate instances (or, in some cases,
needlessly duplicated `def X := ...; instance Y := X`).
* [#8399](https://github.com/leanprover/lean4/pull/8399) adds variants of `HashMap.getElem?_filter` that assume
`LawfulBEq` and have a simpler right-hand-side. `simp` can already
achieve these, via rewriting with `getKey_eq` under the lambda. However
`grind` can not, and these lemmas help `grind` work with `HashMap`
goals. There are versions for all `HashMap` variants, for
`getElem?`/`getElem`/`getElem!`/`getD`, and for `filter` and `filterMap`.
* [#8405](https://github.com/leanprover/lean4/pull/8405) provides lemmas about the loop constructs `ForIn`, `fold`,
`foldM` and `drain` and their relation to each other in the context of
iterators.
* [#8418](https://github.com/leanprover/lean4/pull/8418) provides the `take` iterator combinator that transforms any
iterator into an iterator that stops after a given number of steps. The
change contains the implementation and lemmas.
* [#8422](https://github.com/leanprover/lean4/pull/8422) adds `LT` and `Decidable` `LT` instances for
`Std.Time.Timestamp` and `Std.Time.Duration`.
* [#8434](https://github.com/leanprover/lean4/pull/8434) adds the equivalent of `List.take_cons` about `List.drop`.
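The new lemma presumably has the following shape (an illustrative statement, not necessarily the exact name used):

```lean
-- `List.take_cons` says `(a :: l).take (n + 1) = a :: l.take n`;
-- the `drop` counterpart:
example (a : α) (l : List α) (n : Nat) :
    (a :: l).drop (n + 1) = l.drop n := rfl
```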
* [#8435](https://github.com/leanprover/lean4/pull/8435) upstreams the `LawfulMonadLift(T)` classes, lemmas and instances
from Batteries into Core because the iterator library needs them in
order to prove lemmas about the `mapM` operator, which relies on
`MonadLiftT`.
* [#8445](https://github.com/leanprover/lean4/pull/8445) adds a `@[simp]` lemma, and comments explaining that there is
intentionally no verification API for `Vector.take`, `Vector.drop`, or
`Vector.tail`, which should all be rewritten in terms of
`Vector.extract`.
* [#8446](https://github.com/leanprover/lean4/pull/8446) adds basic `@[grind]` annotations for `TreeMap` and its
variants. Likely more annotations will be added after we've explored
some examples.
* [#8451](https://github.com/leanprover/lean4/pull/8451) provides the iterator combinator `filterMap` in a pure and
monadic version, along with the specializations `map` and `filter`. This
new combinator applies a function to the emitted values of a stream while
filtering out certain elements.
* [#8460](https://github.com/leanprover/lean4/pull/8460) adds further `@[grind]` annotations for `Option`, as follow-up
to the recent additions to the `Option` API in #8379 and #8298.
* [#8465](https://github.com/leanprover/lean4/pull/8465) adds further lemmas about `LawfulGetElem`, including marking
some with `@[grind]`.
* [#8470](https://github.com/leanprover/lean4/pull/8470) adds `@[simp]` to `getElem_pos/neg` (similarly for `getElem!`).
These are often already simp lemmas for concrete types.
* [#8482](https://github.com/leanprover/lean4/pull/8482) adds preliminary `@[grind]` annotations for `List.Pairwise` and
`List.Nodup`.
* [#8484](https://github.com/leanprover/lean4/pull/8484) provides the iterator combinator `zip` in a pure and monadic
version.
* [#8492](https://github.com/leanprover/lean4/pull/8492) adds `simp` lemmas for `toInt_*` and `toNat_*` with arithmetic
operation given the hypothesis of no-overflow
(`toNat_add_of_not_uaddOverflow`, `toInt_add_of_not_saddOverflow`,
`toNat_sub_of_not_usubOverflow`, `toInt_sub_of_not_ssubOverflow`,
`toInt_neg_of_not_negOverflow`, `toNat_mul_of_not_umulOverflow`,
`toInt_mul_of_not_smulOverflow`). In particular, these are `simp` since
(1) the `rhs` is strictly simpler than the `lhs` and (2) this version is
also simpler than the standard operation when the hypothesis is
available.
* [#8493](https://github.com/leanprover/lean4/pull/8493) provides the iterator combinators `takeWhile` (forwarding all
  emitted values of another iterator until a predicate becomes false) and
  `dropWhile` (dropping values until some predicate on them becomes false,
  then forwarding all the rest).
* [#8497](https://github.com/leanprover/lean4/pull/8497) adds preliminary grind annotations for
`List.Sublist`/`IsInfix`/`IsPrefix`/`IsSuffix`, along with test cases.
* [#8499](https://github.com/leanprover/lean4/pull/8499) changes the definition of `Array.ofFn.go` to use recursion on
`Nat` (rather than well-founded recursion). This resolves a problem
reported on [zulip](https://leanprover.zulipchat.com/#narrow/channel/270676-lean4/topic/Memory.20issues.20with.20.60Vector.2EofFn.60.2E/near/520622564).
* [#8513](https://github.com/leanprover/lean4/pull/8513) removes the `@[reducible]` annotation on `Array.size`. This is
probably best gone anyway in order to keep separation between the `List`
and `Array` APIs, but it also helps avoid uselessly instantiating
`Array` theorems when `grind` is working on `List` problems.
* [#8515](https://github.com/leanprover/lean4/pull/8515) removes the prime from `Fin.ofNat'`: the old `Fin.ofNat` has
  completed its six-month deprecation cycle and is being removed.
* [#8527](https://github.com/leanprover/lean4/pull/8527) adds `grind` annotations for theorems about `List.countP` and
`List.count`.
* [#8552](https://github.com/leanprover/lean4/pull/8552) provides array iterators (`Array.iter(M)`,
`Array.iterFromIdx(M)`), infinite iterators produced by a step function
(`Iter.repeat`), and a `ForM` instance for finite iterators that is
implemented in terms of `ForIn`.
* [#8620](https://github.com/leanprover/lean4/pull/8620) removes the `NatCast (Fin n)` global instance (both the direct
instance, and the indirect one via `Lean.Grind.Semiring`), as that
instance causes `x < n` (for `x : Fin k`, `n : Nat`) to be
elaborated as `x < ↑n` rather than `↑x < n`, which is undesirable. Note
however that in Mathlib this happens anyway!
## Compiler
* [#8211](https://github.com/leanprover/lean4/pull/8211) adds support for generating IR from the LCNF representation of
the new compiler.
* [#8236](https://github.com/leanprover/lean4/pull/8236) fixes an issue where the combination of `extern_lib` and
`precompileModules` would lead to "symbol not found" errors.
* [#8268](https://github.com/leanprover/lean4/pull/8268) optimizes `lean_nat_shiftr` for scalar operands. The new compiler
converts Nat divisions into right shifts, so this now shows up as hot in
some profiles.
* [#8308](https://github.com/leanprover/lean4/pull/8308) makes the new compiler's specialization pass compute closures
the same way as the old compiler, in particular when it comes to
variables captured by lambdas.
* [#8367](https://github.com/leanprover/lean4/pull/8367) adds a new `structProjCases` pass to the new compiler, analogous
to the `struct_cases_on` pass in the old compiler, which converts all
projections from structs into `cases` expressions. When lowered to IR,
this causes all of the projections from a single structure to be grouped
together, which is an invariant relied upon by the IR RC passes (at
least for linearity, if not general correctness).
* [#8409](https://github.com/leanprover/lean4/pull/8409) adds support to LCNF for native UInt8/UInt16/UInt32/UInt64
literals.
* [#8456](https://github.com/leanprover/lean4/pull/8456) adds support for primitive USize literals in LCNF.
* [#8458](https://github.com/leanprover/lean4/pull/8458) adds closed term extraction to the new compiler, closely
following the approach in the old compiler. In the future, we will
explore some ideas to improve upon this approach.
* [#8462](https://github.com/leanprover/lean4/pull/8462) enables the LCNF extractClosed pass by default.
* [#8468](https://github.com/leanprover/lean4/pull/8468) switches the LCNF baseExt/monoExt environment extensions to use
a custom environment extension that uses a `PersistentHashMap`. The
optimizer relies upon the ability to update a decl multiple times, which
does not work with `SimplePersistentEnvExtension`.
* [#8502](https://github.com/leanprover/lean4/pull/8502) changes the new compiler to use the kernel environment to find
definitions, which causes compilation to be skipped when the decl had a
kernel error (e.g. due to an unresolved metavariable). This matches the
behavior of the old compiler.
* [#8521](https://github.com/leanprover/lean4/pull/8521) makes `LCNF.toMono` recursively process `jmp` args.
* [#8523](https://github.com/leanprover/lean4/pull/8523) moves the new compiler's noncomputable check into toMono,
matching the recent change in the old compiler. This is mildly more
complicated because we can't throw an error at the mere use of a
constant, we need to check for a later relevant use. This is still a bit
more conservative than it could theoretically be around join points and
local functions, but it's hard to imagine that mattering in practice
(and we can easily enable it if it does).
* [#8535](https://github.com/leanprover/lean4/pull/8535) extracts more `Nat`s (and their downstream users) in
  `extractClosed` by fixing a silly oversight in the logic.
* [#8540](https://github.com/leanprover/lean4/pull/8540) changes the LCNF specialize pass to allow ground variables to
depend on local fun decls (with no non-ground free variables). This
enables specialization of Monad instances that depend on local lambdas.
* [#8559](https://github.com/leanprover/lean4/pull/8559) fixes an adversarial soundness attack described in #8554. The
attack exploits the fact that `assert!` no longer aborts execution, and
that users can redirect error messages.
Another PR will implement the same fix for `Expr.Data`.
* [#8560](https://github.com/leanprover/lean4/pull/8560) is similar to #8559 but for `Expr.mkData`. This vulnerability
has not been exploited yet, but adversarial users may find a way.
* [#8561](https://github.com/leanprover/lean4/pull/8561) increases `maxHeartbeats` in the `isDefEqProjIssue` test, because
when running under the new compiler the `run_meta` call includes the
allocations of the compiler itself. With the old compiler, many of the
corresponding allocations were internal to C++ code and would not
increase the heartbeat count.
* [#8565](https://github.com/leanprover/lean4/pull/8565) makes the LCNF specialization pass only treat type/instance
params as ground vars. The previous policy was too liberal and could
result in computations being floated into specialized loops.
* [#8566](https://github.com/leanprover/lean4/pull/8566) changes the LCNF constant folding pass to not convert Nat
multiplication to a left shift by a power of 2. The fast path test for
this is sufficiently complex that it's simpler to just use the fast path
for multiplication.
* [#8575](https://github.com/leanprover/lean4/pull/8575) makes LCNF's `simpAppApp?` bail out on trivial aliases as
intended. It seems that there was a typo in the original logic, and this
PR also extends it to include aliases of global constants rather than
just local vars.
* [#8582](https://github.com/leanprover/lean4/pull/8582) fixes an accidental dropping of state in `Param.toMono`. When this
code was originally written, there was no other state besides
`typeParams`.
## Pretty Printing
* [#8041](https://github.com/leanprover/lean4/pull/8041) changes the behavior of `pp.showLetValues` to use a hoverable
`⋯` to hide let values. This is now false by default, and there is a new
option `pp.showLetValues.threshold` for allowing small expressions to be
shown anyway. For tactic metavariables, there is an additional option
`pp.showLetValues.tactic.threshold`, which by default is set to the
maximal value, since in tactic states local values are usually
significant.
* [#8372](https://github.com/leanprover/lean4/pull/8372) modifies the pretty printer to use `have` syntax instead of
`let_fun` syntax.
* [#8457](https://github.com/leanprover/lean4/pull/8457) fixes an issue when including a hard line break in a `Format`
that caused subsequent (ordinary) line breaks to be erroneously
flattened to spaces.
* [#8504](https://github.com/leanprover/lean4/pull/8504) modifies the pretty printer so that dot notation is used for
class parent projections. Previously, dot notation was never used for
classes.
## Documentation
* [#8199](https://github.com/leanprover/lean4/pull/8199) adds a style guide for documentation, including both general
principles and docstring-specific concerns.
## Server
* [#7665](https://github.com/leanprover/lean4/pull/7665) and [#8180](https://github.com/leanprover/lean4/pull/8180) add
support for code actions that resolve 'Unknown identifier' errors by either importing the missing declaration or by
changing the identifier to one from the environment.
* [#8091](https://github.com/leanprover/lean4/pull/8091) improves the performance of the workspace symbol request.
* [#8242](https://github.com/leanprover/lean4/pull/8242) fixes the 'goals accomplished' diagnostics. They were
accidentally broken in #7902.
* [#8350](https://github.com/leanprover/lean4/pull/8350) changes namespace completion to use the same algorithm as
declaration identifier completion, which makes it use the short name
(last name component) for completions instead of the full name, avoiding
namespace duplications.
* [#8362](https://github.com/leanprover/lean4/pull/8362) fixes a bug where the 'Unknown identifier' code actions wouldn't
  work correctly for some 'Unknown identifier' error spans, and adjusts
  several of these spans to actually end on the identifier in question.
## Lake
* [#8383](https://github.com/leanprover/lean4/pull/8383) fixes the use of `import Lake` with precompiled modules, which
was previously broken on macOS.
* [#8411](https://github.com/leanprover/lean4/pull/8411) fixes a doc bug in `Resolve.lean`: in reverse order, B comes
  before A.
* [#8528](https://github.com/leanprover/lean4/pull/8528) fixes the heuristic Lake uses to determine whether a `lean_lib`
can be loaded via `lean --plugin` rather than `lean --load-dynlib`.
Previously, a mismatch between the single root's name and the library's
name would not be caught and cause loading to fail.
* [#8529](https://github.com/leanprover/lean4/pull/8529) changes `lake lean` and `lake setup-file` to precompile the
imports of non-workspace files using the import's whole library.
This ensures that additional link objects are linked and available
during elaboration.
* [#8539](https://github.com/leanprover/lean4/pull/8539) changes Lake to use relative paths for the Lean messages produced
  by a module build. This makes the messages portable across different
  machines, which is useful for Mathlib's cache.
## Other
* [#8192](https://github.com/leanprover/lean4/pull/8192) includes upgrades to the `release_checklist.py` script prepared
while releasing v4.20.0-rc1.
* [#8366](https://github.com/leanprover/lean4/pull/8366) adds the `expose` attribute to `Ordering.then`. This is required
for building with the new compiler, but works fine with the old compiler
because it silently ignores the missing definition.
```` |
reference-manual/Manual/Releases/v4_5_0.lean | import VersoManual
import Manual.Meta.Markdown
open Manual
open Verso.Genre
#doc (Manual) "Lean 4.5.0 (2024-02-01)" =>
%%%
tag := "release-v4.5.0"
file := "v4.5.0"
%%%
````markdown
* Modify the lexical syntax of string literals to have string gaps, which are escape sequences of the form `"\" newline whitespace*`.
These have the interpretation of an empty string and allow a string to flow across multiple lines without introducing additional whitespace.
The following is equivalent to `"this is a string"`.
```lean
"this is \
a string"
```
[PR #2821](https://github.com/leanprover/lean4/pull/2821) and [RFC #2838](https://github.com/leanprover/lean4/issues/2838).
* Add raw string literal syntax. For example, `r"\n"` is equivalent to `"\\n"`, with no escape processing.
To include double quote characters in a raw string one can add sufficiently many `#` characters before and after
the bounding `"`s, as in `r#"the "the" is in quotes"#` for `"the \"the\" is in quotes"`.
[PR #2929](https://github.com/leanprover/lean4/pull/2929) and [issue #1422](https://github.com/leanprover/lean4/issues/1422).
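Both equivalences can be checked directly:

```lean
-- No escape processing inside raw string literals:
example : r"\n" = "\\n" := rfl
-- `#` delimiters allow embedded double quotes:
example : r#"the "the" is in quotes"# = "the \"the\" is in quotes" := rfl
```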
* The low-level `termination_by'` clause is no longer supported.
Migration guide: Use `termination_by` instead, e.g.:
```diff
-termination_by' measure (fun ⟨i, _⟩ => as.size - i)
+termination_by i _ => as.size - i
```
If the well-founded relation you want to use is not the one that the
`WellFoundedRelation` type class would infer for your termination argument,
you can use `WellFounded.wrap` from the std library to explicitly give one:
```diff
-termination_by' ⟨r, hwf⟩
+termination_by x => hwf.wrap x
```
* Support snippet edits in LSP `TextEdit`s. See `Lean.Lsp.SnippetString` for more details.
* Deprecations and changes in the widget API.
- `Widget.UserWidgetDefinition` is deprecated in favour of `Widget.Module`. The annotation `@[widget]` is deprecated in favour of `@[widget_module]`. To migrate a definition of type `UserWidgetDefinition`, remove the `name` field and replace the type with `Widget.Module`. Removing the `name` results in a title bar no longer being drawn above your panel widget. To add it back, draw it as part of the component using `<details open=true><summary class='mv2 pointer'>{name}</summary>{rest_of_widget}</details>`. See an example migration [here](https://github.com/leanprover/std4/pull/475/files#diff-857376079661a0c28a53b7ff84701afabbdf529836a6944d106c5294f0e68109R43-R83).
- The new command `show_panel_widgets` allows displaying always-on and locally-on panel widgets.
- `RpcEncodable` widget props can now be stored in the infotree.
- See [RFC 2963](https://github.com/leanprover/lean4/issues/2963) for more details and motivation.
* If no usable lexicographic order can be found automatically for a termination proof, explain why.
See [feat: GuessLex: if no measure is found, explain why](https://github.com/leanprover/lean4/pull/2960).
* Option to print [inferred termination argument](https://github.com/leanprover/lean4/pull/3012).
With `set_option showInferredTerminationBy true` you will get messages like
```
Inferred termination argument:
termination_by
ackermann n m => (sizeOf n, sizeOf m)
```
for automatically generated `termination_by` clauses.
* More detailed error messages for [invalid mutual blocks](https://github.com/leanprover/lean4/pull/2949).
* [Multiple](https://github.com/leanprover/lean4/pull/2923) [improvements](https://github.com/leanprover/lean4/pull/2969) to the output of `simp?` and `simp_all?`.
* Tactics with `withLocation *` [no longer fail](https://github.com/leanprover/lean4/pull/2917) if they close the main goal.
* Implementation of a `test_extern` command for writing tests for `@[extern]` and `@[implemented_by]` functions.
Usage is
```
import Lean.Util.TestExtern
test_extern Nat.add 17 37
```
The head symbol must be the constant with the `@[extern]` or `@[implemented_by]` attribute. The return type must have a `DecidableEq` instance.
Bug fixes for
[#2853](https://github.com/leanprover/lean4/issues/2853), [#2953](https://github.com/leanprover/lean4/issues/2953), [#2966](https://github.com/leanprover/lean4/issues/2966),
[#2971](https://github.com/leanprover/lean4/issues/2971), [#2990](https://github.com/leanprover/lean4/issues/2990), [#3094](https://github.com/leanprover/lean4/issues/3094).
Bug fix for [eager evaluation of default value](https://github.com/leanprover/lean4/pull/3043) in `Option.getD`.
Avoid [panic in `leanPosToLspPos`](https://github.com/leanprover/lean4/pull/3071) when file source is unavailable.
Improve [short-circuiting behavior](https://github.com/leanprover/lean4/pull/2972) for `List.all` and `List.any`.
Several Lake bug fixes: [#3036](https://github.com/leanprover/lean4/issues/3036), [#3064](https://github.com/leanprover/lean4/issues/3064), [#3069](https://github.com/leanprover/lean4/issues/3069).
```` |
reference-manual/Manual/Releases/v4_9_0.lean | import VersoManual
import Manual.Meta.Markdown
open Manual
open Verso.Genre
#doc (Manual) "Lean 4.9.0 (2024-07-01)" =>
%%%
tag := "release-v4.9.0"
file := "v4.9.0"
%%%
````markdown
### Language features, tactics, and metaprograms
* **Definition transparency**
* [#4053](https://github.com/leanprover/lean4/pull/4053) adds the `seal` and `unseal` commands, which make definitions locally be irreducible or semireducible.
* [#4061](https://github.com/leanprover/lean4/pull/4061) marks functions defined by well-founded recursion with `@[irreducible]` by default,
which should prevent the expensive and often unfruitful unfolding of such definitions (see breaking changes below).
* **Incrementality**
* [#3940](https://github.com/leanprover/lean4/pull/3940) extends incremental elaboration into various steps inside of declarations:
definition headers, bodies, and tactics.
[Screen recording](https://github.com/leanprover/lean4/assets/109126/c9d67b6f-c131-4bc3-a0de-7d63eaf1bfc9).
* [250994](https://github.com/leanprover/lean4/commit/250994166ce036ab8644e459129f51ea79c1c2d2)
and [67338b](https://github.com/leanprover/lean4/commit/67338bac2333fa39a8656e8f90574784e4c23d3d)
add `@[incremental]` attribute to mark an elaborator as supporting incremental elaboration.
* [#4259](https://github.com/leanprover/lean4/pull/4259) improves resilience by ensuring incremental commands and tactics are reached only in supported ways.
* [#4268](https://github.com/leanprover/lean4/pull/4268) adds special handling for `:= by` so that stray tokens in tactic blocks do not inhibit incrementality.
* [#4308](https://github.com/leanprover/lean4/pull/4308) adds incremental `have` tactic.
* [#4340](https://github.com/leanprover/lean4/pull/4340) fixes incorrect info tree reuse.
* [#4364](https://github.com/leanprover/lean4/pull/4364) adds incrementality for careful command macros such as `set_option in theorem`, `theorem foo.bar`, and `lemma`.
* [#4395](https://github.com/leanprover/lean4/pull/4395) adds conservative fix for whitespace handling to avoid incremental reuse leading to goals in front of the text cursor being shown.
* [#4407](https://github.com/leanprover/lean4/pull/4407) fixes non-incremental commands in macros blocking further incremental reporting.
* [#4436](https://github.com/leanprover/lean4/pull/4436) fixes incremental reporting when there are nested tactics in terms.
* [#4459](https://github.com/leanprover/lean4/pull/4459) adds incrementality support for `next` and `if` tactics.
* [#4554](https://github.com/leanprover/lean4/pull/4554) disables incrementality for tactics in terms in tactics.
* **Functional induction**
* [#4135](https://github.com/leanprover/lean4/pull/4135) ensures that the names used for functional induction are reserved.
* [#4327](https://github.com/leanprover/lean4/pull/4327) adds support for structural recursion on reflexive types.
For example,
```lean4
inductive Many (α : Type u) where
| none : Many α
| more : α → (Unit → Many α) → Many α
def Many.map {α β : Type u} (f : α → β) : Many α → Many β
| .none => .none
| .more x xs => .more (f x) (fun _ => (xs ()).map f)
#check Many.map.induct
/-
Many.map.induct {α β : Type u} (f : α → β) (motive : Many α → Prop)
(case1 : motive Many.none)
(case2 : ∀ (x : α) (xs : Unit → Many α), motive (xs ()) → motive (Many.more x xs)) :
∀ (a : Many α), motive a
-/
```
* [#3903](https://github.com/leanprover/lean4/pull/3903) makes the Lean frontend normalize all line endings to LF before processing.
This lets Lean be insensitive to CRLF vs LF line endings, improving the cross-platform experience and making Lake hashes be faithful to what Lean processes.
* [#4130](https://github.com/leanprover/lean4/pull/4130) makes the tactic framework be able to recover from runtime errors (for example, deterministic timeouts or maximum recursion depth errors).
* `split` tactic
* [#4211](https://github.com/leanprover/lean4/pull/4211) fixes `split at h` when `h` has forward dependencies.
* [#4349](https://github.com/leanprover/lean4/pull/4349) allows `split` for `if`-expressions to work on non-propositional goals.
* `apply` tactic
* [#3929](https://github.com/leanprover/lean4/pull/3929) makes error message for `apply` show implicit arguments in unification errors as needed.
Modifies `MessageData` type (see breaking changes below).
* `cases` tactic
* [#4224](https://github.com/leanprover/lean4/pull/4224) adds support for unification of offsets such as `x + 20000 = 20001` in `cases` tactic.
* `omega` tactic
* [#4073](https://github.com/leanprover/lean4/pull/4073) lets `omega` fall back to using classical `Decidable` instances when setting up contradiction proofs.
* [#4141](https://github.com/leanprover/lean4/pull/4141) and [#4184](https://github.com/leanprover/lean4/pull/4184) fix bugs.
* [#4264](https://github.com/leanprover/lean4/pull/4264) improves `omega` error message if no facts found in local context.
* [#4358](https://github.com/leanprover/lean4/pull/4358) improves expression matching in `omega` by using `match_expr`.
* `simp` tactic
* [#4176](https://github.com/leanprover/lean4/pull/4176) makes names of erased lemmas clickable.
* [#4208](https://github.com/leanprover/lean4/pull/4208) adds a pretty printer for discrimination tree keys.
* [#4202](https://github.com/leanprover/lean4/pull/4202) adds `Simp.Config.index` configuration option,
which controls whether to use the full discrimination tree when selecting candidate simp lemmas.
When `index := false`, only the head function is taken into account, like in Lean 3.
This feature can help users diagnose tricky simp failures or issues in code from libraries
developed using Lean 3 and then ported to Lean 4.
In the following example, it will report that `foo` is a problematic theorem.
```lean
opaque f : Nat → Nat → Nat
@[simp] theorem foo : f x (x, y).2 = y := by sorry
example : f a b ≤ b := by
set_option diagnostics true in
simp (config := { index := false })
/-
[simp] theorems with bad keys
foo, key: f _ (@Prod.mk ℕ ℕ _ _).2
-/
```
With the information above, users can annotate theorems such as `foo` using `no_index` for problematic subterms. Example:
```lean
opaque f : Nat → Nat → Nat
@[simp] theorem foo : f x (no_index (x, y).2) = y := by sorry
example : f a b ≤ b := by
simp -- `foo` is still applied with `index := true`
```
* [#4274](https://github.com/leanprover/lean4/pull/4274) prevents internal `match` equational theorems from appearing in simp trace.
* [#4177](https://github.com/leanprover/lean4/pull/4177) and [#4359](https://github.com/leanprover/lean4/pull/4359) make `simp` continue even if a simp lemma does not elaborate, if the tactic state is in recovery mode.
* [#4341](https://github.com/leanprover/lean4/pull/4341) fixes panic when applying `@[simp]` to malformed theorem syntax.
* [#4345](https://github.com/leanprover/lean4/pull/4345) fixes `simp` so that it does not use the forward version of a user-specified backward theorem.
* [#4352](https://github.com/leanprover/lean4/pull/4352) adds missing `dsimp` simplifications for fixed parameters of generated congruence theorems.
* [#4362](https://github.com/leanprover/lean4/pull/4362) improves trace messages for `simp` so that constants are hoverable.
* **Elaboration**
* [#4046](https://github.com/leanprover/lean4/pull/4046) makes subst notation (`he ▸ h`) try rewriting in both directions even when there is no expected type available.
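For illustration, both directions of the rewrite are tried, so neither of these (hypothetical) examples needs an explicit `he.symm`:
```lean
-- `he ▸ h` rewrites with `he` in whichever direction makes the result
-- fit the goal.
example (P : Nat → Prop) (a b : Nat) (he : a = b) (h : P a) : P b := he ▸ h
example (P : Nat → Prop) (a b : Nat) (he : a = b) (h : P b) : P a := he ▸ h
```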
* [#3328](https://github.com/leanprover/lean4/pull/3328) adds support for identifiers in autoparams (for example, `rfl` in `(h : x = y := by exact rfl)`).
* [#4096](https://github.com/leanprover/lean4/pull/4096) changes how the type in `let` and `have` is elaborated, requiring that any tactics in the type be evaluated before proceeding, improving performance.
* [#4215](https://github.com/leanprover/lean4/pull/4215) ensures the expression tree elaborator commits to the computed "max type" for the entire arithmetic expression.
* [#4267](https://github.com/leanprover/lean4/pull/4267) causes signature elaboration errors to show even if there are parse errors in the body.
* [#4368](https://github.com/leanprover/lean4/pull/4368) improves error messages when numeric literals fail to synthesize an `OfNat` instance,
including special messages warning when the expected type of the numeral can be a proposition.
* [#4643](https://github.com/leanprover/lean4/pull/4643) fixes issue leading to nested error messages and info trees vanishing, where snapshot subtrees were not restored on reuse.
* [#4657](https://github.com/leanprover/lean4/pull/4657) calculates error suppression per snapshot, letting elaboration errors appear even when there are later parse errors ([RFC #3556](https://github.com/leanprover/lean4/issues/3556)).
* **Metaprogramming**
* [#4167](https://github.com/leanprover/lean4/pull/4167) adds `Lean.MVarId.revertAll` to revert all free variables.
* [#4169](https://github.com/leanprover/lean4/pull/4169) adds `Lean.MVarId.ensureNoMVar` to ensure the goal's target contains no expression metavariables.
* [#4180](https://github.com/leanprover/lean4/pull/4180) adds `cleanupAnnotations` parameter to `forallTelescope` methods.
* [#4307](https://github.com/leanprover/lean4/pull/4307) adds support for parser aliases in syntax quotations.
* Work toward implementing `grind` tactic
* [0a515e](https://github.com/leanprover/lean4/commit/0a515e2ec939519dafb4b99daa81d6bf3c411404)
and [#4164](https://github.com/leanprover/lean4/pull/4164)
add `grind_norm` and `grind_norm_proc` attributes and `@[grind_norm]` theorems.
* [#4170](https://github.com/leanprover/lean4/pull/4170), [#4221](https://github.com/leanprover/lean4/pull/4221),
and [#4249](https://github.com/leanprover/lean4/pull/4249) create `grind` preprocessor and core module.
* [#4235](https://github.com/leanprover/lean4/pull/4235) and [d6709e](https://github.com/leanprover/lean4/commit/d6709eb1576c5d40fc80462637dc041f970e4d9f)
add special `cases` tactic to `grind` along with `@[grind_cases]` attribute to mark types that this `cases` tactic should automatically apply to.
* [#4243](https://github.com/leanprover/lean4/pull/4243) adds special `injection?` tactic to `grind`.
* **Other fixes or improvements**
* [#4065](https://github.com/leanprover/lean4/pull/4065) fixes a bug in the `Nat.reduceLeDiff` simproc.
* [#3969](https://github.com/leanprover/lean4/pull/3969) makes deprecation warnings activate even for generalized field notation ("dot notation").
* [#4132](https://github.com/leanprover/lean4/pull/4132) fixes the `sorry` term so that it does not activate the implicit lambda feature
* [9803c5](https://github.com/leanprover/lean4/commit/9803c5dd63dc993628287d5f998525e74af03839)
and [47c8e3](https://github.com/leanprover/lean4/commit/47c8e340d65b01f4d9f011686e3dda0d4bb30a20)
move `cdot` and `calc` parsers to `Lean` namespace.
* [#4252](https://github.com/leanprover/lean4/pull/4252) fixes the `case` tactic so that it is usable in macros by having it erase macro scopes from the tag.
* [26b671](https://github.com/leanprover/lean4/commit/26b67184222e75529e1b166db050aaebee323d2d)
and [cc33c3](https://github.com/leanprover/lean4/commit/cc33c39cb022d8a3166b1e89677c78835ead1fc7)
extract `haveId` syntax.
* [#4335](https://github.com/leanprover/lean4/pull/4335) fixes bugs in partial `calc` tactic when there is mdata or metavariables.
* [#4329](https://github.com/leanprover/lean4/pull/4329) makes `termination_by?` report each unused parameter as `_`.
* **Docs:** [#4238](https://github.com/leanprover/lean4/pull/4238), [#4294](https://github.com/leanprover/lean4/pull/4294),
[#4338](https://github.com/leanprover/lean4/pull/4338).
### Language server, widgets, and IDE extensions
* [#4066](https://github.com/leanprover/lean4/pull/4066) fixes features like "Find References" when browsing core Lean sources.
* [#4254](https://github.com/leanprover/lean4/pull/4254) allows embedding user widgets in structured messages.
Companion PR is [vscode-lean4#449](https://github.com/leanprover/vscode-lean4/pull/449).
* [#4445](https://github.com/leanprover/lean4/pull/4445) makes watchdog more resilient against badly behaving clients.
### Library
* [#4059](https://github.com/leanprover/lean4/pull/4059) upstreams many `List` and `Array` operations and theorems from Batteries.
* [#4055](https://github.com/leanprover/lean4/pull/4055) removes the unused `Inhabited` instance for `Subtype`.
* [#3967](https://github.com/leanprover/lean4/pull/3967) adds dates in existing `@[deprecated]` attributes.
* [#4231](https://github.com/leanprover/lean4/pull/4231) adds boilerplate `Char`, `UInt`, and `Fin` theorems.
* [#4205](https://github.com/leanprover/lean4/pull/4205) fixes the `MonadStore` type classes to use `semiOutParam`.
* [#4350](https://github.com/leanprover/lean4/pull/4350) renames `IsLawfulSingleton` to `LawfulSingleton`.
* `Nat`
* [#4094](https://github.com/leanprover/lean4/pull/4094) swaps `Nat.zero_or` and `Nat.or_zero`.
* [#4098](https://github.com/leanprover/lean4/pull/4098) and [#4145](https://github.com/leanprover/lean4/pull/4145)
change the definition of `Nat.mod` so that `n % (m + n)` reduces when `n` is a literal without relying on well-founded recursion,
which becomes irreducible by default in [#4061](https://github.com/leanprover/lean4/pull/4061).
* [#4188](https://github.com/leanprover/lean4/pull/4188) redefines `Nat.testBit` to be more performant.
* Theorems: [#4199](https://github.com/leanprover/lean4/pull/4199).
* `Array`
* [#4074](https://github.com/leanprover/lean4/pull/4074) improves the functional induction principle `Array.feraseIdx.induct`.
* `List`
* [#4172](https://github.com/leanprover/lean4/pull/4172) removes `@[simp]` from `List.length_pos`.
* `Option`
* [#4037](https://github.com/leanprover/lean4/pull/4037) adds theorems to simplify `Option`-valued dependent if-then-else.
* [#4314](https://github.com/leanprover/lean4/pull/4314) removes `@[simp]` from `Option.bind_eq_some`.
* `BitVec`
* Theorems: [#3920](https://github.com/leanprover/lean4/pull/3920), [#4095](https://github.com/leanprover/lean4/pull/4095),
[#4075](https://github.com/leanprover/lean4/pull/4075), [#4148](https://github.com/leanprover/lean4/pull/4148),
[#4165](https://github.com/leanprover/lean4/pull/4165), [#4178](https://github.com/leanprover/lean4/pull/4178),
[#4200](https://github.com/leanprover/lean4/pull/4200), [#4201](https://github.com/leanprover/lean4/pull/4201),
[#4298](https://github.com/leanprover/lean4/pull/4298), [#4299](https://github.com/leanprover/lean4/pull/4299),
[#4257](https://github.com/leanprover/lean4/pull/4257), [#4179](https://github.com/leanprover/lean4/pull/4179),
[#4321](https://github.com/leanprover/lean4/pull/4321), [#4187](https://github.com/leanprover/lean4/pull/4187).
* [#4193](https://github.com/leanprover/lean4/pull/4193) adds simprocs for reducing `x >>> i` and `x <<< i` where `i` is a bitvector literal.
* [#4194](https://github.com/leanprover/lean4/pull/4194) adds simprocs for reducing `(x <<< i) <<< j` and `(x >>> i) >>> j` where `i` and `j` are natural number literals.
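A sketch of the kind of goal these simprocs are meant to close, assuming nested literal shifts are combined (illustrative goals, not taken from the PRs):
```lean
-- Hypothetical goals: nested shifts by natural-number literals collapse.
example (x : BitVec 8) : (x <<< 2) <<< 3 = x <<< 5 := by simp
example (x : BitVec 8) : (x >>> 1) >>> 4 = x >>> 5 := by simp
```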
* [#4229](https://github.com/leanprover/lean4/pull/4229) redefines `rotateLeft`/`rotateRight` to use modulo reduction of shift offset.
* [0d3051](https://github.com/leanprover/lean4/commit/0d30517dca094a07bcb462252f718e713b93ffba) makes `<num>#<term>` bitvector literal notation global.
* `Char`/`String`
* [#4143](https://github.com/leanprover/lean4/pull/4143) modifies `String.substrEq` to avoid linter warnings in downstream code.
* [#4233](https://github.com/leanprover/lean4/pull/4233) adds simprocs for `Char` and `String` inequalities.
* [#4348](https://github.com/leanprover/lean4/pull/4348) upstreams Mathlib lemmas.
* [#4354](https://github.com/leanprover/lean4/pull/4354) upstreams basic `String` lemmas.
* `HashMap`
* [#4248](https://github.com/leanprover/lean4/pull/4248) fixes implicitness of type class arguments in `HashMap.ofList`.
* `IO`
* [#4036](https://github.com/leanprover/lean4/pull/4036) adds `IO.Process.getCurrentDir` and `IO.Process.setCurrentDir` for adjusting the current process's working directory.
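A minimal sketch using the new API; `withCurrentDir` is a hypothetical helper, not part of the library:
```lean
-- Run an IO action inside `dir`, restoring the previous working
-- directory afterwards (hypothetical helper built on the new API).
def withCurrentDir {α : Type} (dir : System.FilePath) (act : IO α) : IO α := do
  let old ← IO.Process.getCurrentDir
  IO.Process.setCurrentDir dir
  try act finally IO.Process.setCurrentDir old
```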
* **Cleanup:** [#4077](https://github.com/leanprover/lean4/pull/4077), [#4189](https://github.com/leanprover/lean4/pull/4189),
[#4304](https://github.com/leanprover/lean4/pull/4304).
* **Docs:** [#4001](https://github.com/leanprover/lean4/pull/4001), [#4166](https://github.com/leanprover/lean4/pull/4166),
[#4332](https://github.com/leanprover/lean4/pull/4332).
### Lean internals
* **Defeq and WHNF algorithms**
* [#4029](https://github.com/leanprover/lean4/pull/4029) removes an unnecessary `checkpointDefEq`.
* [#4206](https://github.com/leanprover/lean4/pull/4206) fixes `isReadOnlyOrSyntheticOpaque` to respect metavariable depth.
* [#4217](https://github.com/leanprover/lean4/pull/4217) fixes missing occurs check for delayed assignments.
* **Definition transparency**
* [#4052](https://github.com/leanprover/lean4/pull/4052) adds validation to application of `@[reducible]`/`@[semireducible]`/`@[irreducible]` attributes (with `local`/`scoped` modifiers as well).
Setting `set_option allowUnsafeReducibility true` turns this validation off.
* **Inductive types**
* [#3591](https://github.com/leanprover/lean4/pull/3591) fixes a bug where indices could be incorrectly promoted to parameters.
* [#3398](https://github.com/leanprover/lean4/pull/3398) fixes a bug in the injectivity theorem generator.
* [#4342](https://github.com/leanprover/lean4/pull/4342) fixes elaboration of mutual inductives with instance parameters.
* **Diagnostics and profiling**
* [#3986](https://github.com/leanprover/lean4/pull/3986) adds option `trace.profiler.useHeartbeats` to switch `trace.profiler.threshold` to being in terms of heartbeats instead of milliseconds.
* [#4082](https://github.com/leanprover/lean4/pull/4082) makes `set_option diagnostics true` report kernel diagnostic information.
* **Typeclass resolution**
* [#4119](https://github.com/leanprover/lean4/pull/4119) fixes multiple issues with TC caching interacting with `synthPendingDepth`, adds `maxSynthPendingDepth` option with default value `1`.
* [#4210](https://github.com/leanprover/lean4/pull/4210) ensures local instance cache does not contain multiple copies of the same instance.
* [#4216](https://github.com/leanprover/lean4/pull/4216) fixes handling of metavariables, avoiding the need to set the option `backward.synthInstance.canonInstances` to `false`.
* **Other fixes or improvements**
* [#4080](https://github.com/leanprover/lean4/pull/4080) fixes propagation of state for `Lean.Elab.Command.liftCoreM` and `Lean.Elab.Command.liftTermElabM`.
* [#3944](https://github.com/leanprover/lean4/pull/3944) makes the `Repr` deriving handler be consistent between `structure` and `inductive` for how types and proofs are erased.
* [#4113](https://github.com/leanprover/lean4/pull/4113) propagates `maxHeartbeats` to kernel to control "(kernel) deterministic timeout" error.
* [#4125](https://github.com/leanprover/lean4/pull/4125) reverts [#3970](https://github.com/leanprover/lean4/pull/3970) (monadic generalization of `FindExpr`).
* [#4128](https://github.com/leanprover/lean4/pull/4128) catches stack overflow in auto-bound implicits feature.
* [#4129](https://github.com/leanprover/lean4/pull/4129) adds `tryCatchRuntimeEx` combinator to replace `catchRuntimeEx` reader state.
* [#4155](https://github.com/leanprover/lean4/pull/4155) simplifies the expression canonicalizer.
* [#4151](https://github.com/leanprover/lean4/pull/4151) and [#4369](https://github.com/leanprover/lean4/pull/4369)
add many missing trace classes.
* [#4185](https://github.com/leanprover/lean4/pull/4185) makes congruence theorem generators clean up type annotations of argument types.
* [#4192](https://github.com/leanprover/lean4/pull/4192) fixes restoration of infotrees when auto-bound implicit feature is activated,
fixing a pretty printing error in hovers and strengthening the unused variable linter.
* [dfb496](https://github.com/leanprover/lean4/commit/dfb496a27123c3864571aec72f6278e2dad1cecf) fixes `declareBuiltin` to allow it to be called multiple times per declaration.
* [#4569](https://github.com/leanprover/lean4/pull/4569) fixes an issue introduced in a merge conflict, where the interrupt exception was swallowed by some `tryCatchRuntimeEx` uses.
* [#4584](https://github.com/leanprover/lean4/pull/4584) (backported as [b056a0](https://github.com/leanprover/lean4/commit/b056a0b395bb728512a3f3e83bf9a093059d4301)) adapts kernel interruption to the new cancellation system.
* Cleanup: [#4112](https://github.com/leanprover/lean4/pull/4112), [#4126](https://github.com/leanprover/lean4/pull/4126), [#4091](https://github.com/leanprover/lean4/pull/4091), [#4139](https://github.com/leanprover/lean4/pull/4139), [#4153](https://github.com/leanprover/lean4/pull/4153).
* Tests: [030406](https://github.com/leanprover/lean4/commit/03040618b8f9b35b7b757858483e57340900cdc4), [#4133](https://github.com/leanprover/lean4/pull/4133).
### Compiler, runtime, and FFI
* [#4100](https://github.com/leanprover/lean4/pull/4100) improves reset/reuse algorithm; it now runs a second pass relaxing the constraint that reused memory cells must only be for the exact same constructor.
* [#2903](https://github.com/leanprover/lean4/pull/2903) fixes segfault in old compiler from mishandling `noConfusion` applications.
* [#4311](https://github.com/leanprover/lean4/pull/4311) fixes bug in constant folding.
* [#3915](https://github.com/leanprover/lean4/pull/3915) documents the runtime memory layout for inductive types.
### Lake
* [#4518](https://github.com/leanprover/lean4/pull/4518) makes trace reading more robust. Lake now rebuilds if trace files are invalid or unreadable and is backwards compatible with previous pure numeric traces.
* [#4057](https://github.com/leanprover/lean4/pull/4057) adds support for docstrings on `require` commands.
* [#4088](https://github.com/leanprover/lean4/pull/4088) improves hovers for `family_def` and `library_data` commands.
* [#4147](https://github.com/leanprover/lean4/pull/4147) adds a default `README.md` to package templates.
* [#4261](https://github.com/leanprover/lean4/pull/4261) extends `lake test` help page, adds help page for `lake check-test`,
adds `lake lint` and tag `@[lint_driver]`, adds support for specifying test and lint drivers from dependencies,
adds `testDriverArgs` and `lintDriverArgs` options, adds support for library test drivers,
makes `lake check-test` and `lake check-lint` only load the package without dependencies.
* [#4270](https://github.com/leanprover/lean4/pull/4270) adds `lake pack` and `lake unpack` for packing and unpacking Lake build artifacts from an archive.
* [#4083](https://github.com/leanprover/lean4/pull/4083)
Switches the manifest format to use `major.minor.patch` semantic
versions. Major version increments indicate breaking changes (e.g., new
required fields and semantic changes to existing fields). Minor version
increments (after `0.x`) indicate backwards-compatible extensions (e.g.,
adding optional fields, removing fields). This change is backwards
compatible. Lake will still successfully read old manifests with numeric
versions. It will treat the numeric version `N` as semantic version
`0.N.0`. Lake will also accept manifest versions with `-` suffixes
(e.g., `x.y.z-foo`) and then ignore the suffix.
* [#4273](https://github.com/leanprover/lean4/pull/4273) adds a lift from `JobM` to `FetchM` for backwards compatibility reasons.
* [#4351](https://github.com/leanprover/lean4/pull/4351) fixes `LogIO`-to-`CliM`-lifting performance issues.
* [#4343](https://github.com/leanprover/lean4/pull/4343) makes Lake store the dependency trace for a build in
the cached build log and then verify that it matches the trace of the current build before replaying the log.
* [#4402](https://github.com/leanprover/lean4/pull/4402) moves the cached log into the trace file (no more `.log.json`).
This means logs are no longer cached on fatal errors and this ensures that an out-of-date log is not associated with an up-to-date trace.
Separately, `.hash` file generation was changed to be more reliable as well.
The `.hash` files are deleted as part of the build and always regenerate with `--rehash`.
* **Other fixes or improvements**
* [#4056](https://github.com/leanprover/lean4/pull/4056) cleans up tests.
* [#4244](https://github.com/leanprover/lean4/pull/4244) fixes the `noRelease` test when the Lean repo is tagged.
* [#4346](https://github.com/leanprover/lean4/pull/4346) improves `tests/serve`.
* [#4356](https://github.com/leanprover/lean4/pull/4356) adds build log path to the warning for a missing or invalid build log.
### DevOps
* [#3984](https://github.com/leanprover/lean4/pull/3984) adds a script (`script/rebase-stage0.sh`) for `git rebase -i` that automatically updates each stage0.
* [#4108](https://github.com/leanprover/lean4/pull/4108) finishes renamings from the transition from Std to Batteries.
* [#4109](https://github.com/leanprover/lean4/pull/4109) adjusts the Github bug template to mention testing using [live.lean-lang.org](https://live.lean-lang.org).
* [#4136](https://github.com/leanprover/lean4/pull/4136) makes CI rerun only when `full-ci` label is added or removed.
* [#4175](https://github.com/leanprover/lean4/pull/4175) and [72b345](https://github.com/leanprover/lean4/commit/72b345c621a9a06d3a5a656da2b793a5eea5f168)
switch to using `#guard_msgs` to run tests as much as possible.
* [#3125](https://github.com/leanprover/lean4/pull/3125) explains the Lean4 `pygments` lexer.
* [#4247](https://github.com/leanprover/lean4/pull/4247) sets up a procedure for preparing release notes.
* [#4032](https://github.com/leanprover/lean4/pull/4032) modernizes build instructions and workflows.
* [#4255](https://github.com/leanprover/lean4/pull/4255) moves some expensive checks from merge queue to releases.
* [#4265](https://github.com/leanprover/lean4/pull/4265) adds aarch64 macOS as native compilation target for CI.
* [f05a82](https://github.com/leanprover/lean4/commit/f05a82799a01569edeb5e2594cd7d56282320f9e) restores macOS aarch64 install suffix in CI
* [#4317](https://github.com/leanprover/lean4/pull/4317) updates build instructions for macOS.
* [#4333](https://github.com/leanprover/lean4/pull/4333) adjusts workflow to update Batteries in manifest when creating `lean-pr-testing-NNNN` Mathlib branches.
* [#4355](https://github.com/leanprover/lean4/pull/4355) simplifies `lean4checker` step of release checklist.
* [#4361](https://github.com/leanprover/lean4/pull/4361) adds installing elan to `pr-release` CI step.
* [#4628](https://github.com/leanprover/lean4/pull/4628) fixes the Windows build, which was missing an exported symbol.
### Breaking changes
While most changes could be considered breaking changes, this section makes special note of API changes.
* `Nat.zero_or` and `Nat.or_zero` have been swapped ([#4094](https://github.com/leanprover/lean4/pull/4094)).
* `IsLawfulSingleton` is now `LawfulSingleton` ([#4350](https://github.com/leanprover/lean4/pull/4350)).
* The `BitVec` literal notation is now `<num>#<term>` rather than `<term>#<term>`, and it is global rather than scoped. Use `BitVec.ofNat w x` rather than `x#w` when `x` is not a numeric literal ([0d3051](https://github.com/leanprover/lean4/commit/0d30517dca094a07bcb462252f718e713b93ffba)).
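For illustration:
```lean
-- The left-hand side of `#` must now be a numeric literal:
example : BitVec 8 := 5#8
-- For a non-literal value, call `BitVec.ofNat` directly:
example (x w : Nat) : BitVec w := BitVec.ofNat w x
```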
* `BitVec.rotateLeft` and `BitVec.rotateRight` now take the shift modulo the bitwidth ([#4229](https://github.com/leanprover/lean4/pull/4229)).
* These are no longer simp lemmas:
`List.length_pos` ([#4172](https://github.com/leanprover/lean4/pull/4172)),
`Option.bind_eq_some` ([#4314](https://github.com/leanprover/lean4/pull/4314)).
* Types in `let` and `have` (both the expressions and tactics) may fail to elaborate due to new restrictions on what sorts of elaboration problems may be postponed ([#4096](https://github.com/leanprover/lean4/pull/4096)).
In particular, tactics embedded in the type will no longer make use of the type of `value` in expressions such as `let x : type := value; body`.
* Now functions defined by well-founded recursion are marked with `@[irreducible]` by default ([#4061](https://github.com/leanprover/lean4/pull/4061)).
Existing proofs that hold by definitional equality (e.g. `rfl`) can be
rewritten to explicitly unfold the function definition (using `simp`,
`unfold`, `rw`), or the recursive function can be temporarily made
semireducible (using `unseal f in` before the command), or the function
definition itself can be marked as `@[semireducible]` to get the previous
behavior.
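A sketch of the `unseal` workaround, using a hypothetical function defined by well-founded recursion:
```lean
-- `log2'` uses well-founded recursion, so it is `@[irreducible]` by default.
def log2' (n : Nat) : Nat :=
  if n < 2 then 0 else log2' (n / 2) + 1

-- A direct `rfl` proof no longer works; `unseal` makes `log2'`
-- semireducible for the next command.
unseal log2' in
example : log2' 4 = 2 := rfl
```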
* Due to [#3929](https://github.com/leanprover/lean4/pull/3929):
* The `MessageData.ofPPFormat` constructor has been removed.
Its functionality has been split into two:
- for lazy structured messages, please use `MessageData.lazy`;
- for embedding `Format` or `FormatWithInfos`, use `MessageData.ofFormatWithInfos`.
An example migration can be found in [#3929](https://github.com/leanprover/lean4/pull/3929/files#diff-5910592ab7452a0e1b2616c62d22202d2291a9ebb463145f198685aed6299867L109).
* The `MessageData.ofFormat` constructor has been turned into a function.
If you need to inspect `MessageData`, you can pattern-match on `MessageData.ofFormatWithInfos`.
```` |
reference-manual/Manual/Releases/v4_7_0.lean | import VersoManual
import Manual.Meta.Markdown
open Manual
open Verso.Genre
#doc (Manual) "Lean 4.7.0 (2024-04-03)" =>
%%%
tag := "release-v4.7.0"
file := "v4.7.0"
%%%
````markdown
* `simp` and `rw` now use instance arguments found by unification,
rather than always resynthesizing. For backwards compatibility, the original behaviour is
available via `set_option tactic.skipAssignedInstances false`.
[#3507](https://github.com/leanprover/lean4/pull/3507) and
[#3509](https://github.com/leanprover/lean4/pull/3509).
* When the `pp.proofs` option is false, omitted proofs are now shown as `⋯` rather than `_`,
which gives a more helpful error message when copied from the Infoview.
The `pp.proofs.threshold` option lets small proofs always be pretty printed.
[#3241](https://github.com/leanprover/lean4/pull/3241).
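For illustration (a hypothetical example; the exact rendering is elided):
```lean
set_option pp.proofs false in
#check (⟨2, by decide⟩ : Fin 5)  -- the proof component is displayed as `⋯`
```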
* `pp.proofs.withType` is now set to false by default to reduce noise in the info view.
* The pretty printer for applications now handles the case of over-application itself when applying app unexpanders.
In particular, the ``| `($_ $a $b $xs*) => `(($a + $b) $xs*)`` case of an `app_unexpander` is no longer necessary.
[#3495](https://github.com/leanprover/lean4/pull/3495).
* New `simp` (and `dsimp`) configuration option: `zetaDelta`. It is `false` by default.
The `zeta` option is still `true` by default, but its meaning has changed.
- When `zeta := true`, `simp` and `dsimp` reduce terms of the form
`let x := val; e[x]` into `e[val]`.
- When `zetaDelta := true`, `simp` and `dsimp` will expand let-variables in
the context. For example, suppose the context contains `x := val`. Then,
any occurrence of `x` is replaced with `val`.
See [issue #2682](https://github.com/leanprover/lean4/pull/2682) for additional details. Here are some examples:
```
example (h : z = 9) : let x := 5; let y := 4; x + y = z := by
intro x
simp
/-
New goal:
h : z = 9; x := 5 |- x + 4 = z
-/
rw [h]
example (h : z = 9) : let x := 5; let y := 4; x + y = z := by
intro x
-- Using both `zeta` and `zetaDelta`.
simp (config := { zetaDelta := true })
/-
New goal:
h : z = 9; x := 5 |- 9 = z
-/
rw [h]
example (h : z = 9) : let x := 5; let y := 4; x + y = z := by
intro x
simp [x] -- asks `simp` to unfold `x`
/-
New goal:
h : z = 9; x := 5 |- 9 = z
-/
rw [h]
example (h : z = 9) : let x := 5; let y := 4; x + y = z := by
intro x
simp (config := { zetaDelta := true, zeta := false })
/-
New goal:
h : z = 9; x := 5 |- let y := 4; 5 + y = z
-/
rw [h]
```
* When adding new local theorems to `simp`, the system now assumes that the function application arguments
have been annotated with `no_index`. This modification, which addresses [issue #2670](https://github.com/leanprover/lean4/issues/2670),
restores the Lean 3 behavior that users expect. With it, the following examples now work:
```
example {α β : Type} {f : α × β → β → β} (h : ∀ p : α × β, f p p.2 = p.2)
(a : α) (b : β) : f (a, b) b = b := by
simp [h]
example {α β : Type} {f : α × β → β → β}
(a : α) (b : β) (h : f (a,b) (a,b).2 = (a,b).2) : f (a, b) b = b := by
simp [h]
```
In both cases, `h` is applicable because `simp` does not index f-arguments anymore when adding `h` to the `simp`-set.
It's important to note, however, that global theorems continue to be indexed in the usual manner.
* Improved the error messages produced by the `decide` tactic. [#3422](https://github.com/leanprover/lean4/pull/3422)
* Improved auto-completion performance. [#3460](https://github.com/leanprover/lean4/pull/3460)
* Improved initial language server startup performance. [#3552](https://github.com/leanprover/lean4/pull/3552)
* Changed call hierarchy to sort entries and strip private header from names displayed in the call hierarchy. [#3482](https://github.com/leanprover/lean4/pull/3482)
* There is now a low-level error recovery combinator in the parsing framework, primarily intended for DSLs. [#3413](https://github.com/leanprover/lean4/pull/3413)
* You can now write `termination_by?` after a declaration to see the automatically inferred
termination argument, and turn it into a `termination_by …` clause using the “Try this” widget or a code action. [#3514](https://github.com/leanprover/lean4/pull/3514)
* A large fraction of `Std` has been moved into the Lean repository.
This was motivated by:
1. Making universally useful tactics such as `ext`, `by_cases`, `change at`,
`norm_cast`, `rcases`, `simpa`, `simp?`, `omega`, and `exact?`
available to all users of Lean, without imports.
2. Minimizing the syntactic changes between plain Lean and Lean with `import Std`.
3. Simplifying the development process for the basic data types
`Nat`, `Int`, `Fin` (and variants such as `UInt64`), `List`, `Array`,
and `BitVec` as we begin making the APIs and simp normal forms for these types
more complete and consistent.
4. Laying the groundwork for the Std roadmap, as a library focused on
essential datatypes not provided by the core language (e.g. `RBMap`)
and utilities such as basic IO.
While we have achieved most of our initial aims in `v4.7.0-rc1`,
some upstreaming will continue over the coming months.
* The `/` and `%` notations in `Int` now use `Int.ediv` and `Int.emod`
(i.e. the rounding conventions have changed).
Previously `Std` overrode these notations, so this is no change for users of `Std`.
There is now kernel support for these functions.
[#3376](https://github.com/leanprover/lean4/pull/3376).
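Concretely, under the new conventions the quotient rounds so that the remainder is always nonnegative:
```lean
#eval (-7 : Int) / 2  -- Int.ediv: -4
#eval (-7 : Int) % 2  -- Int.emod: 1 (the remainder is always nonnegative)
```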
* `omega`, our integer linear arithmetic tactic, is now available in the core language.
* It is supplemented by a preprocessing tactic `bv_omega` which can solve goals about `BitVec`
which naturally translate into linear arithmetic problems.
[#3435](https://github.com/leanprover/lean4/pull/3435).
* `omega` now has support for `Fin` [#3427](https://github.com/leanprover/lean4/pull/3427),
and the `<<<` operator [#3433](https://github.com/leanprover/lean4/pull/3433).
* During the port, `omega` was modified to no longer identify atoms up to definitional equality
(so in particular it can no longer prove `id x ≤ x`). [#3525](https://github.com/leanprover/lean4/pull/3525).
This may cause some regressions.
We plan to provide a general purpose preprocessing tactic later, or an `omega!` mode.
* `omega` is now invoked in Lean's automation for termination proofs
[#3503](https://github.com/leanprover/lean4/pull/3503) as well as in
array indexing proofs [#3515](https://github.com/leanprover/lean4/pull/3515).
This automation will be substantially revised in the medium term,
and while `omega` does help automate some proofs, we plan to make this much more robust.
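Sketches of the new `Fin` and `<<<` support mentioned above (illustrative goals, not taken from the PRs):
```lean
-- `Fin` bounds are available to `omega`:
example (i : Fin 5) : i.val < 10 := by omega
-- `<<<` by a literal is understood as multiplication by a power of two:
example (x : Nat) (h : x < 4) : x <<< 2 < 16 := by omega
```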
* The library search tactics `exact?` and `apply?` that were originally in
Mathlib are now available in Lean itself. These use the implementation using
lazy discrimination trees from `Std`, and thus do not require a disk cache but
have a slightly longer startup time. The order used for lemma selection has
changed as well, favoring lemmas purely based on how many terms in the head
pattern match the current goal.
* The `solve_by_elim` tactic has been ported from `Std` to Lean so that library
search can use it.
* New `#check_tactic` and `#check_simp` commands have been added. These are
useful for checking tactics (particularly `simp`) behave as expected in test
suites.
* Previously, app unexpanders would only be applied to entire applications. However, some notations produce
functions, and these functions can be given additional arguments. The solution so far has been to write app
unexpanders so that they can take an arbitrary number of additional arguments. However, this leads to
misleading hover information in the Infoview. For example, while `HAdd.hAdd f g 1` pretty prints as
`(f + g) 1`, hovering over `f + g` shows `f`. There is no way to fix the situation from within an app
unexpander; the expression position for `HAdd.hAdd f g` is absent, and app unexpanders cannot register
TermInfo.
This change makes the app delaborator try running app unexpanders on every prefix of an application, from
longest to shortest prefix. For efficiency, it is careful to only try this when app unexpanders do in fact
exist for the head constant, and it also ensures arguments are only delaborated once. Then, in `(f + g) 1`,
the `f + g` gets TermInfo registered for that subexpression, making it properly hoverable.
[#3375](https://github.com/leanprover/lean4/pull/3375)
Breaking changes:
* `Lean.withTraceNode` and variants got a stronger `MonadAlwaysExcept` assumption to
fix trace trees not being built on elaboration runtime exceptions. Instances for most elaboration
monads built on `EIO Exception` should be synthesized automatically.
* The `match ... with.` and `fun.` notations previously in Std have been replaced by
`nomatch ...` and `nofun`. [#3279](https://github.com/leanprover/lean4/pull/3279) and [#3286](https://github.com/leanprover/lean4/pull/3286)
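For example:
```lean
-- `nomatch` eliminates an impossible hypothesis; `nofun` is a function
-- with no cases.
example (h : False) : Nat := nomatch h
example : 1 = 2 → False := nofun
```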
Other improvements:
* several bug fixes for `simp`:
* we should not crash when `simp` loops [#3269](https://github.com/leanprover/lean4/pull/3269)
* `simp` gets stuck on `autoParam` [#3315](https://github.com/leanprover/lean4/pull/3315)
* `simp` fails when custom discharger makes no progress [#3317](https://github.com/leanprover/lean4/pull/3317)
* `simp` fails to discharge `autoParam` premises even when it can reduce them to `True` [#3314](https://github.com/leanprover/lean4/pull/3314)
* `simp?` suggests generated equations lemma names, fixes [#3547](https://github.com/leanprover/lean4/pull/3547) [#3573](https://github.com/leanprover/lean4/pull/3573)
* fixes for `match` expressions:
* fix regression with builtin literals [#3521](https://github.com/leanprover/lean4/pull/3521)
* accept `match` when patterns cover all cases of a `BitVec` finite type [#3538](https://github.com/leanprover/lean4/pull/3538)
* fix matching `Int` literals [#3504](https://github.com/leanprover/lean4/pull/3504)
* patterns containing int values and constructors [#3496](https://github.com/leanprover/lean4/pull/3496)
* improve `termination_by` error messages [#3255](https://github.com/leanprover/lean4/pull/3255)
* fix `rename_i` in macros, fixes [#3553](https://github.com/leanprover/lean4/pull/3553) [#3581](https://github.com/leanprover/lean4/pull/3581)
* fix excessive resource usage in `generalize`, fixes [#3524](https://github.com/leanprover/lean4/pull/3524) [#3575](https://github.com/leanprover/lean4/pull/3575)
* an equation lemma with autoParam arguments fails to rewrite, fixing [#2243](https://github.com/leanprover/lean4/pull/2243) [#3316](https://github.com/leanprover/lean4/pull/3316)
* `add_decl_doc` should check that declarations are local [#3311](https://github.com/leanprover/lean4/pull/3311)
* instantiate the types of inductives with the right parameters, closing [#3242](https://github.com/leanprover/lean4/pull/3242) [#3246](https://github.com/leanprover/lean4/pull/3246)
* New simprocs for many basic types. [#3407](https://github.com/leanprover/lean4/pull/3407)
Lake fixes:
* Warn on fetch cloud release failure [#3401](https://github.com/leanprover/lean4/pull/3401)
* Cloud release trace & `lake build :release` errors [#3248](https://github.com/leanprover/lean4/pull/3248)
```` |
reference-manual/Manual/Releases/v4_15_0.lean | import VersoManual
import Manual.Meta.Markdown
open Manual
open Verso.Genre
#doc (Manual) "Lean 4.15.0 (2025-01-04)" =>
%%%
tag := "release-v4.15.0"
file := "v4.15.0"
%%%
````markdown
## Language
- [#4595](https://github.com/leanprover/lean4/pull/4595) implements `Simp.Config.implicitDefEqProofs`. When `true`
  (the default), `simp` will **not** create a proof term for a
  rewriting rule associated with an `rfl`-theorem. Rewriting rules are
  provided by users by annotating theorems with the attribute `@[simp]`;
  if the proof of such a theorem is just `rfl` (reflexivity) and
  `implicitDefEqProofs := true`, `simp` omits the proof term that would
  otherwise be an application of the annotated theorem.
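  For illustration, a hypothetical `rfl`-theorem of the kind this option affects (the `double` definition here is made up):
  ```lean
  def double (n : Nat) : Nat := n + n

  -- The proof is `rfl`, so with `implicitDefEqProofs := true` (the default)
  -- `simp` can apply this rule by definitional unfolding, without emitting
  -- an application of `double_def` in the proof term.
  @[simp] theorem double_def (n : Nat) : double n = n + n := rfl

  example : double 3 = 6 := by simp
  ```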
- [#5429](https://github.com/leanprover/lean4/pull/5429) avoid negative environment lookup
- [#5501](https://github.com/leanprover/lean4/pull/5501) ensure `instantiateMVarsProfiling` adds a trace node
- [#5856](https://github.com/leanprover/lean4/pull/5856) adds a feature to the mutual def elaborator where the
`instance` command yields theorems instead of definitions when the class
is a `Prop`.
- [#5907](https://github.com/leanprover/lean4/pull/5907) unset trailing for `simpa?` "try this" suggestion
- [#5920](https://github.com/leanprover/lean4/pull/5920) changes the rule for which projections become instances. Before,
all parents along with all indirect ancestors that were represented as
subobject fields would have their projections become instances. Now only
projections for direct parents become instances.
- [#5934](https://github.com/leanprover/lean4/pull/5934) make `all_goals` admit goals on failure
- [#5942](https://github.com/leanprover/lean4/pull/5942) introduce synthetic atoms in bv_decide
- [#5945](https://github.com/leanprover/lean4/pull/5945) adds a new definition `Message.kind` which returns the top-level
  tag of a message. This is serialized as the new field `kind` in
  `SerialMessage` so that it can be used by external consumers (e.g.,
  Lake) to identify messages via `lean --json`.
- [#5968](https://github.com/leanprover/lean4/pull/5968) `arg` conv tactic misreported number of arguments on error
- [#5979](https://github.com/leanprover/lean4/pull/5979) BitVec.twoPow in bv_decide
- [#5991](https://github.com/leanprover/lean4/pull/5991) simplifies the implementation of `omega`.
- [#5992](https://github.com/leanprover/lean4/pull/5992) fix style in bv_decide normalizer
- [#5999](https://github.com/leanprover/lean4/pull/5999) adds configuration options for
`decide`/`decide!`/`native_decide` and refactors the tactics to be
frontends to the same backend. Adds a `+revert` option that cleans up
the local context and reverts all local variables the goal depends on,
along with indirect propositional hypotheses. Makes `native_decide` fail
at elaboration time on failure without sacrificing performance (the
decision procedure is still evaluated just once). Now `native_decide`
supports universe polymorphism.
- [#6010](https://github.com/leanprover/lean4/pull/6010) changes `bv_decide`'s configuration from lots of `set_option` to
an elaborated config like `simp` or `omega`. The notable exception is
`sat.solver` which is still a `set_option` such that users can configure
a custom SAT solver globally for an entire project or file. Additionally
it introduces the ability to set `maxSteps` for the simp preprocessing
run through the new config.
- [#6012](https://github.com/leanprover/lean4/pull/6012) improves the validation of new syntactic tokens. Previously, the
validation code had inconsistencies: some atoms would be accepted only
if they had a leading space as a pretty printer hint. Additionally,
atoms with internal whitespace are no longer allowed.
- [#6016](https://github.com/leanprover/lean4/pull/6016) removes the `decide!` tactic in favor of `decide +kernel`
(breaking change).
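  A sketch of the replacement spelling (the particular goal is illustrative):
  ```lean
  -- Previously written with `decide!`; the kernel now evaluates the
  -- `Decidable` instance directly.
  example : 2 ^ 8 = 256 := by decide +kernel
  ```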
- [#6019](https://github.com/leanprover/lean4/pull/6019) removes `@[specialize]` from `MkBinding.mkBinding`, which is a
  function that cannot be specialized (as none of its arguments are
  functions). With the attribute present, the specializable function
  `Nat.foldRevM.loop` didn't get specialized, which led to worse-performing code.
- [#6022](https://github.com/leanprover/lean4/pull/6022) makes the `change` tactic and conv tactic use the same
elaboration strategy. It works uniformly for both the target and local
hypotheses. Now `change` can assign metavariables, for example:
```lean
example (x y z : Nat) : x + y = z := by
change ?a = _
let w := ?a
-- now `w : Nat := x + y`
```
- [#6024](https://github.com/leanprover/lean4/pull/6024) fixes a bug where the monad lift coercion elaborator would
partially unify expressions even if they were not monads. This could be
taken advantage of to propagate information that could help elaboration
make progress, for example the first `change` worked because the monad
lift coercion elaborator was unifying `@Eq _ _` with `@Eq (Nat × Nat)
p`:
```lean
example (p : Nat × Nat) : p = p := by
change _ = ⟨_, _⟩ -- used to work (yielding `p = (p.fst, p.snd)`), now it doesn't
change ⟨_, _⟩ = _ -- never worked
```
As such, this is a breaking change; you may need to adjust expressions
to include additional implicit arguments.
- [#6029](https://github.com/leanprover/lean4/pull/6029) adds a normalization rule to `bv_normalize` (which is used by
  `bv_decide`) that converts `x / 2^k` into `x >>> k` under suitable
  conditions. This allows us to simplify the expensive division circuits
  that are used for bitblasting into much cheaper shifting circuits.
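  A minimal sketch of a goal this normalization helps with, assuming `bv_decide` is available (it requires a SAT solver at build time):
  ```lean
  import Std.Tactic.BVDecide

  -- Unsigned division by a known power of two becomes a logical shift,
  -- replacing an expensive division circuit during bitblasting.
  example (x : BitVec 32) : x / 8#32 = x >>> 3 := by bv_decide
  ```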
- [#6030](https://github.com/leanprover/lean4/pull/6030) fixes `simp only [· ∈ ·]` after #5020.
- [#6035](https://github.com/leanprover/lean4/pull/6035) introduces the and flattening pre processing pass from Bitwuzla
to `bv_decide`. It splits hypotheses of the form `(a && b) = true` into
`a = true` and `b = true` which has synergy potential with the already
existing embedded constraint substitution pass.
- [#6037](https://github.com/leanprover/lean4/pull/6037) fixes `bv_decide`'s embedded constraint substitution to generate
correct counter examples in the corner case where duplicate theorems are
in the local context.
- [#6045](https://github.com/leanprover/lean4/pull/6045) add `LEAN_ALWAYS_INLINE` to some functions
- [#6048](https://github.com/leanprover/lean4/pull/6048) fixes `simp?` suggesting output with invalid indentation
- [#6051](https://github.com/leanprover/lean4/pull/6051) mark `Meta.Context.config` as private
- [#6053](https://github.com/leanprover/lean4/pull/6053) fixes the caching infrastructure for `whnf` and `isDefEq`,
ensuring the cache accounts for all relevant configuration flags. It
also cleans up the `WHNF.lean` module and improves the configuration of
`whnf`.
- [#6061](https://github.com/leanprover/lean4/pull/6061) adds a simp_arith benchmark.
- [#6062](https://github.com/leanprover/lean4/pull/6062) optimize Nat.Linear.Expr.toPoly
- [#6064](https://github.com/leanprover/lean4/pull/6064) optimize Nat.Linear.Poly.norm
- [#6068](https://github.com/leanprover/lean4/pull/6068) improves the asymptotic performance of `simp_arith` when there are many variables to consider.
- [#6077](https://github.com/leanprover/lean4/pull/6077) adds options to `bv_decide`'s configuration structure such that
  all non-mandatory preprocessing passes can be disabled.
- [#6082](https://github.com/leanprover/lean4/pull/6082) changes how the canonicalizer handles `forall` and `lambda`,
replacing bvars with temporary fvars. Fixes a bug reported by @hrmacbeth
on
[zulip](https://leanprover.zulipchat.com/#narrow/channel/270676-lean4/topic/Quantifiers.20in.20CanonM/near/482483448).
- [#6093](https://github.com/leanprover/lean4/pull/6093) use mkFreshUserName in ArgsPacker
- [#6096](https://github.com/leanprover/lean4/pull/6096) improves the `#print` command for structures to show all fields
and which parents the fields were inherited from, hiding internal
details such as which parents are represented as subobjects. This
information is still present in the constructor if needed. The pretty
printer for private constants is also improved, and it now handles
private names from the current module like any other name; private names
from other modules are made hygienic.
- [#6098](https://github.com/leanprover/lean4/pull/6098) modifies `Lean.MVarId.replaceTargetDefEq` and
`Lean.MVarId.replaceLocalDeclDefEq` to use `Expr.equal` instead of
`Expr.eqv` when determining whether the expression has changed. This is
justified on the grounds that binder names and binder infos are
user-visible and affect elaboration.
- [#6105](https://github.com/leanprover/lean4/pull/6105) fixes a stack overflow caused by a cyclic assignment in the
metavariable context. The cycle is unintentionally introduced by the
structure instance elaborator.
- [#6108](https://github.com/leanprover/lean4/pull/6108) turn off pp.mvars in apply? results
- [#6109](https://github.com/leanprover/lean4/pull/6109) fixes an issue in the `injection` tactic. This tactic may
execute multiple sub-tactics. If any of them fail, we must backtrack the
partial assignment. This issue was causing the error: "`mvarId` is
already assigned" in issue #6066. The issue is not yet resolved, as the
equation generator for the match expressions is failing in the example
provided in this issue.
- [#6112](https://github.com/leanprover/lean4/pull/6112) makes stricter requirements for the `@[deprecated]` attribute,
requiring either a replacement identifier as `@[deprecated bar]` or
suggestion text `@[deprecated "Past its use by date"]`, and also
requires a `since := "..."` field.
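  A sketch of the stricter form (the declarations here are hypothetical):
  ```lean
  def newName (n : Nat) : Nat := n + 1

  -- A replacement identifier (or a string suggestion) and a `since` field
  -- are now both required.
  @[deprecated newName (since := "2024-12-01")]
  def oldName (n : Nat) : Nat := n + 1
  ```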
- [#6114](https://github.com/leanprover/lean4/pull/6114) liberalizes atom rules by allowing `''` to be a prefix of an
atom, after #6012 only added an exception for `''` alone, and also adds
some unit tests for atom validation.
- [#6116](https://github.com/leanprover/lean4/pull/6116) fixes a bug where structural recursion did not work when indices
of the recursive argument appeared as function parameters in a different
order than in the argument's type's definition.
- [#6125](https://github.com/leanprover/lean4/pull/6125) adds support for `structure` in `mutual` blocks, allowing
inductive types defined by `inductive` and `structure` to be mutually
recursive. The limitations are (1) that the parents in the `extends`
clause must be defined before the `mutual` block and (2) mutually
recursive classes are not allowed (a limitation shared by `class
inductive`). There are also improvements to universe level inference for
inductive types and structures. Breaking change: structure parents now
elaborate with the structure in scope (fix: use qualified names or
rename the structure to avoid shadowing), and structure parents no
longer elaborate with autoimplicits enabled.
- [#6128](https://github.com/leanprover/lean4/pull/6128) does the same fix as #6104, but in a way that doesn't break the
  affected test file in `Plausible`. This is done by not creating unused let
  binders in metavariable types that are made by `elimMVar`. (This is also
  a positive thing for users looking at metavariable types, for example in
  error messages.)
- [#6129](https://github.com/leanprover/lean4/pull/6129) fixes a bug at `isDefEq` when `zetaDelta := false`. See new test
for a small example that exposes the issue.
- [#6131](https://github.com/leanprover/lean4/pull/6131) fixes a bug at the definitional equality test (`isDefEq`). At
unification constraints of the form `c.{u} =?= c.{v}`, it was not trying
to unfold `c`. This bug did not affect the kernel.
- [#6141](https://github.com/leanprover/lean4/pull/6141) make use of recursive structures in snapshot types
- [#6145](https://github.com/leanprover/lean4/pull/6145) fixes the `revert` tactic so that it creates a `syntheticOpaque`
metavariable as the new goal, instead of a `natural` metavariable
- [#6146](https://github.com/leanprover/lean4/pull/6146) fixes a non-termination bug that occurred when generating the
match-expression splitter theorem. The bug was triggered when the proof
automation for the splitter theorem repeatedly applied `injection` to
the same local declaration, as it could not be removed due to forward
dependencies. See issue #6065 for an example that reproduces this issue.
- [#6165](https://github.com/leanprover/lean4/pull/6165) modifies structure instance notation and `where` notation to use
the same notation for fields. Structure instance notation now admits
binders, type ascriptions, and equations, and `where` notation admits
full structure lvals. Examples of these for structure instance notation:
```lean
structure PosFun where
f : Nat → Nat
pos : ∀ n, 0 < f n
```
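  Building on the `PosFun` structure above, a hedged sketch of the newly admitted field binders in structure instance notation:
  ```lean
  -- `f n := ...` binds `n` directly in the field definition
  def succFun : PosFun := { f n := n + 1, pos n := Nat.succ_pos n }
  ```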
- [#6168](https://github.com/leanprover/lean4/pull/6168) extends the "motive is not type correct" error message for the
rewrite tactic to explain what it means. It also pretty prints the
type-incorrect motive and reports the type error.
- [#6170](https://github.com/leanprover/lean4/pull/6170) adds core metaprogramming functions for forking off background
tasks from elaboration such that their results are visible to reporting
and the language server
- [#6175](https://github.com/leanprover/lean4/pull/6175) fixes a bug with the `structure`/`class` command where if there
are parents that are not represented as subobjects but which used other
parents as instances, then there would be a kernel error. Closes #2611.
- [#6180](https://github.com/leanprover/lean4/pull/6180) fixes a non-termination bug that occurred when generating the
match-expression equation theorems. The bug was triggered when the proof
  automation for the equation theorem repeatedly applied `injection` to
the same local declaration, as it could not be removed due to forward
dependencies. See issue #6067 for an example that reproduces this issue.
- [#6189](https://github.com/leanprover/lean4/pull/6189) changes how generalized field notation ("dot notation") resolves
the function. The new resolution rule is that if `x : S`, then `x.f`
  resolves the name `S.f` relative to the root namespace (hence it is now
  affected by `export` and `open`). Breaking change: aliases now resolve
differently. Before, if `x : S`, and if `S.f` is an alias for `S'.f`,
then `x.f` would use `S'.f` and look for an argument of type `S'`. Now,
it looks for an argument of type `S`, which is more generally useful
behavior. Code making use of the old behavior should consider defining
`S` or `S'` in terms of the other, since dot notation can unfold
definitions during resolution.
- [#6206](https://github.com/leanprover/lean4/pull/6206) makes it possible to write `rw (occs := [1,2]) ...` instead of
  `rw (occs := .pos [1,2]) ...` by adding a coercion from `List Nat` to
`Lean.Meta.Occurrences`.
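  A sketch of the shorter spelling:
  ```lean
  example (a b : Nat) (h : a = b) : a + a = b + b := by
    rw (occs := [1]) [h]  -- rewrites only the first occurrence
    rw [h]
  ```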
- [#6220](https://github.com/leanprover/lean4/pull/6220) adds proper support for `let_fun` in `simp`.
- [#6236](https://github.com/leanprover/lean4/pull/6236) fixes an issue where edits to a command containing a nested
docstring fail to reparse the entire command.
## Library
- [#4904](https://github.com/leanprover/lean4/pull/4904) introduces date and time functionality to the Lean 4 Std.
- [#5616](https://github.com/leanprover/lean4/pull/5616) is a follow-up to https://github.com/leanprover/lean4/pull/5609,
where we add lemmas characterizing `smtUDiv` and `smtSDiv`'s behavior
when the denominator is zero.
- [#5866](https://github.com/leanprover/lean4/pull/5866) verifies the `keys` function on `Std.HashMap`.
- [#5885](https://github.com/leanprover/lean4/pull/5885) add Int16/Int32/Int64
- [#5926](https://github.com/leanprover/lean4/pull/5926) add `Option.or_some'`
- [#5927](https://github.com/leanprover/lean4/pull/5927) `List.pmap_eq_self`
- [#5937](https://github.com/leanprover/lean4/pull/5937) upstream lemmas about Fin.foldX
- [#5938](https://github.com/leanprover/lean4/pull/5938) upstream List.ofFn and relate to Array.ofFn
- [#5941](https://github.com/leanprover/lean4/pull/5941) List.mapFinIdx, lemmas, relate to Array version
- [#5949](https://github.com/leanprover/lean4/pull/5949) consolidate `decide_True` and `decide_true_eq_true`
- [#5950](https://github.com/leanprover/lean4/pull/5950) relate Array.takeWhile with List.takeWhile
- [#5951](https://github.com/leanprover/lean4/pull/5951) remove @[simp] from BitVec.ofFin_sub and sub_ofFin
- [#5952](https://github.com/leanprover/lean4/pull/5952) relate Array.eraseIdx with List.eraseIdx
- [#5961](https://github.com/leanprover/lean4/pull/5961) define ISize and basic operations on it
- [#5969](https://github.com/leanprover/lean4/pull/5969) upstream List.insertIdx from Batteries, lemmas from Mathlib, and revise lemmas
- [#5970](https://github.com/leanprover/lean4/pull/5970) deprecate Array.split in favour of identical Array.partition
- [#5971](https://github.com/leanprover/lean4/pull/5971) relate Array.isPrefixOf with List.isPrefixOf
- [#5972](https://github.com/leanprover/lean4/pull/5972) relate Array.zipWith/zip/unzip with List versions
- [#5974](https://github.com/leanprover/lean4/pull/5974) add another List.find?_eq_some lemma
- [#5981](https://github.com/leanprover/lean4/pull/5981) names the default SizeOf instance `instSizeOfDefault`
- [#5982](https://github.com/leanprover/lean4/pull/5982) minor lemmas about List.ofFn
- [#5984](https://github.com/leanprover/lean4/pull/5984) adds lemmas for `List` for the interactions between {`foldl`,
  `foldr`, `foldlM`, `foldrM`} and {`filter`, `filterMap`}.
- [#5985](https://github.com/leanprover/lean4/pull/5985) relates the operations `findSomeM?`, `findM?`, `findSome?`, and
`find?` on `Array` with the corresponding operations on `List`, and also
provides simp lemmas for the `Array` operations `findSomeRevM?`,
`findRevM?`, `findSomeRev?`, `findRev?` (in terms of `reverse` and the
usual forward find operations).
- [#5987](https://github.com/leanprover/lean4/pull/5987) BitVec.getMsbD in bv_decide
- [#5988](https://github.com/leanprover/lean4/pull/5988) changes the signature of `Array.set` to take a `Nat` and a
  tactic-provided proof that it is in bounds, rather than a `Fin`.
- [#5995](https://github.com/leanprover/lean4/pull/5995) BitVec.sshiftRight' in bv_decide
- [#6007](https://github.com/leanprover/lean4/pull/6007) List.modifyTailIdx naming fix
- [#6008](https://github.com/leanprover/lean4/pull/6008) missing @[ext] attribute on monad transformer ext lemmas
- [#6023](https://github.com/leanprover/lean4/pull/6023) variants of List.forIn_eq_foldlM
- [#6025](https://github.com/leanprover/lean4/pull/6025) deprecate duplicated Fin.size_pos
- [#6032](https://github.com/leanprover/lean4/pull/6032) changes the signature of `Array.get` to take a Nat and a proof,
rather than a `Fin`, for consistency with the rest of the (planned)
Array API. Note that because of bootstrapping issues we can't provide
`get_elem_tactic` as an autoparameter for the proof. As users will
mostly use the `xs[i]` notation provided by `GetElem`, this hopefully
isn't a problem.
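  A sketch of the resulting usage, where the bound is discharged by `get_elem_tactic` from a hypothesis in scope:
  ```lean
  -- `xs[0]` requires `0 < xs.size`; the `GetElem` notation's autoparam
  -- tactic finds `h` automatically.
  def head (xs : Array Nat) (h : 0 < xs.size) : Nat := xs[0]
  ```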
- [#6041](https://github.com/leanprover/lean4/pull/6041) modifies the order of arguments for higher-order `Array`
functions, preferring to put the `Array` last (besides positional
arguments with defaults). This is more consistent with the `List` API,
and is more flexible, as dot notation allows two different partially
applied versions.
- [#6049](https://github.com/leanprover/lean4/pull/6049) adds a primitive for accessing the current thread ID
- [#6052](https://github.com/leanprover/lean4/pull/6052) adds `Array.pmap`, as well as a `@[csimp]` lemma in terms of the
no-copy `Array.attachWith`.
- [#6055](https://github.com/leanprover/lean4/pull/6055) adds lemmas about for loops over `Array`, following the existing
lemmas for `List`.
- [#6056](https://github.com/leanprover/lean4/pull/6056) upstream some NameMap functions
- [#6060](https://github.com/leanprover/lean4/pull/6060) implements conversion functions from `Bool` to all `UIntX` and
`IntX` types.
- [#6070](https://github.com/leanprover/lean4/pull/6070) adds the Lean.RArray data structure.
- [#6074](https://github.com/leanprover/lean4/pull/6074) allow `Sort u` in `Squash`
- [#6094](https://github.com/leanprover/lean4/pull/6094) adds raw transmutation of floating-point numbers to and from
`UInt64`. Floats and UInts share the same endianness across all
supported platforms. The IEEE 754 standard precisely specifies the bit
layout of floats. Note that `Float.toBits` is distinct from
`Float.toUInt64`, which attempts to preserve the numeric value rather
than the bitwise value.
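  For illustration (expected values shown as comments, assuming the standard IEEE 754 `binary64` layout):
  ```lean
  -- `toBits` transmutes the bit pattern; `toUInt64` converts the numeric value
  #eval (1.0 : Float).toUInt64  -- 1
  #eval (1.0 : Float).toBits    -- 4607182418800017408 (0x3FF0000000000000)
  ```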
- [#6095](https://github.com/leanprover/lean4/pull/6095) generalize `List.get_mem`
- [#6097](https://github.com/leanprover/lean4/pull/6097) naming convention and `NaN` normalization
- [#6102](https://github.com/leanprover/lean4/pull/6102) moves `IO.rand` and `IO.setRandSeed` to be in the `BaseIO`
monad.
- [#6106](https://github.com/leanprover/lean4/pull/6106) fix naming of left/right injectivity lemmas
- [#6111](https://github.com/leanprover/lean4/pull/6111) fills in the API for `Array.findSome?` and `Array.find?`,
transferring proofs from the corresponding List statements.
- [#6120](https://github.com/leanprover/lean4/pull/6120) adds theorems `BitVec.(getMsbD, msb)_(rotateLeft, rotateRight)`.
- [#6126](https://github.com/leanprover/lean4/pull/6126) adds lemmas for extracting a given bit of a `BitVec` obtained
via `sub`/`neg`/`sshiftRight'`/`abs`.
- [#6130](https://github.com/leanprover/lean4/pull/6130) adds `Lean.loadPlugin` which exposes functionality similar to
the `lean` executable's `--plugin` option to Lean code.
- [#6132](https://github.com/leanprover/lean4/pull/6132) duplicates the verification API for
`List.attach`/`attachWith`/`pmap` over to `Array`.
- [#6133](https://github.com/leanprover/lean4/pull/6133) replaces `Array.feraseIdx` and `Array.insertAt` with
`Array.eraseIdx` and `Array.insertIdx`, both of which take a `Nat`
argument and a tactic-provided proof that it is in bounds. We also have
`eraseIdxIfInBounds` and `insertIdxIfInBounds` which are noops if the
index is out of bounds. We also provide a `Fin` valued version of
`Array.findIdx?`. Together, these quite ergonomically improve the array
indexing safety at a number of places in the compiler/elaborator.
- [#6136](https://github.com/leanprover/lean4/pull/6136) fixes the run-time evaluation of `(default : Float)`.
- [#6139](https://github.com/leanprover/lean4/pull/6139) modifies the signature of the functions `Nat.fold`,
`Nat.foldRev`, `Nat.any`, `Nat.all`, so that the function is passed the
upper bound. This allows us to change runtime array bounds checks to
compile time checks in many places.
- [#6148](https://github.com/leanprover/lean4/pull/6148) adds a primitive for creating temporary directories, akin to the
existing functionality for creating temporary files.
- [#6149](https://github.com/leanprover/lean4/pull/6149) completes the elementwise accessors for `ofNatLt`, `allOnes`,
and `not` by adding their implementations of `getMsbD`.
- [#6151](https://github.com/leanprover/lean4/pull/6151) completes the `toInt` interface for `BitVec` bitwise operations.
- [#6154](https://github.com/leanprover/lean4/pull/6154) implements `BitVec.toInt_abs`.
- [#6155](https://github.com/leanprover/lean4/pull/6155) adds `toNat` theorems for `BitVec.signExtend`.
- [#6157](https://github.com/leanprover/lean4/pull/6157) adds `toInt` theorems for `BitVec.signExtend`.
- [#6160](https://github.com/leanprover/lean4/pull/6160) adds theorem `mod_eq_sub`, makes theorem
`sub_mul_eq_mod_of_lt_of_le` not private anymore and moves its location
within the `rotate*` section to use it in other proofs.
- [#6184](https://github.com/leanprover/lean4/pull/6184) uses `Array.findFinIdx?` in preference to `Array.findIdx?` where
it allows converting a runtime bounds check to a compile time bounds
check.
- [#6188](https://github.com/leanprover/lean4/pull/6188) completes the `toNat` theorems for the bitwise operations
(`and`, `or`, `xor`, `shiftLeft`, `shiftRight`) of the UInt types and
adds `toBitVec` theorems as well. It also renames `and_toNat` to
`toNat_and` to fit with the current naming convention.
- [#6190](https://github.com/leanprover/lean4/pull/6190) adds the builtin simproc `USize.reduceToNat` which reduces the
`USize.toNat` operation on literals less than `UInt32.size` (i.e.,
`4294967296`).
- [#6191](https://github.com/leanprover/lean4/pull/6191) adds `Array.zipWithAll`, and the basic lemmas relating it to
`List.zipWithAll`.
- [#6192](https://github.com/leanprover/lean4/pull/6192) adds deprecations for `Lean.HashMap` functions which did not
receive deprecation attributes initially.
- [#6193](https://github.com/leanprover/lean4/pull/6193) completes the TODO in `Init.Data.Array.BinSearch`, removing the
`partial` keyword and converting runtime bounds checks to compile time
bounds checks.
- [#6194](https://github.com/leanprover/lean4/pull/6194) changes the signature of `Array.swap`, so it takes `Nat`
arguments with tactic provided bounds checking. It also renames
`Array.swap!` to `Array.swapIfInBounds`.
- [#6195](https://github.com/leanprover/lean4/pull/6195) renames `Array.setD` to `Array.setIfInBounds`.
- [#6197](https://github.com/leanprover/lean4/pull/6197) upstreams the definition of `Vector` from Batteries, along with
the basic functions.
- [#6200](https://github.com/leanprover/lean4/pull/6200) upstreams `Nat.lt_pow_self` and `Nat.lt_two_pow` from Mathlib
and uses them to prove the simp theorem `Nat.mod_two_pow`.
- [#6202](https://github.com/leanprover/lean4/pull/6202) makes `USize.toUInt64` a regular non-opaque definition.
- [#6203](https://github.com/leanprover/lean4/pull/6203) adds the theorems `le_usize_size` and `usize_size_le`, which
make proving inequalities about `USize.size` easier.
- [#6205](https://github.com/leanprover/lean4/pull/6205) upstreams some UInt theorems from Batteries and adds more
`toNat`-related theorems. It also adds the missing `UInt8` and `UInt16`
to/from `USize` conversions so that the interface is uniform across
the UInt types.
- [#6207](https://github.com/leanprover/lean4/pull/6207) ensures the `Fin.foldl` and `Fin.foldr` are semireducible.
Without this the defeq `example (f : Fin 3 → ℕ) : List.ofFn f = [f 0, f
1, f 2] := rfl` was failing.
- [#6208](https://github.com/leanprover/lean4/pull/6208) fix Vector.indexOf?
- [#6217](https://github.com/leanprover/lean4/pull/6217) adds `simp` lemmas about `List`'s `==` operation.
- [#6221](https://github.com/leanprover/lean4/pull/6221) fixes:
  - Problems on other Linux distributions where the default `tzdata`
  directory is not at the previously assumed path, by adding fallback
  behavior when the directory is missing.
  - Trimming of unnecessary characters from the local time identifier.
- [#6222](https://github.com/leanprover/lean4/pull/6222) changes the definition of `HashSet.insertMany` and
`HashSet.Raw.insertMany` so that it is equivalent to repeatedly calling
`HashSet.insert`/`HashSet.Raw.insert`. It also clarifies the docstrings
of all the `insert` and `insertMany` functions.
- [#6230](https://github.com/leanprover/lean4/pull/6230) copies some lemmas about `List.foldX` to `Array`.
- [#6233](https://github.com/leanprover/lean4/pull/6233) upstreams lemmas about `Vector` from Batteries.
- [#6234](https://github.com/leanprover/lean4/pull/6234) upstreams the definition and basic lemmas about `List.finRange`
from Batteries.
- [#6235](https://github.com/leanprover/lean4/pull/6235) relates the operations `Nat.fold`/`foldRev`/`any`/`all` to the
  corresponding `List` operations over `List.finRange`.
- [#6241](https://github.com/leanprover/lean4/pull/6241) refactors `Array.qsort` to remove runtime array bounds checks,
and avoids the use of `partial`. We use the `Vector` API, along with
auto_params, to avoid having to write any proofs. The new code
benchmarks indistinguishably from the old.
- [#6242](https://github.com/leanprover/lean4/pull/6242) deprecates `Fin.ofNat` in favour of `Fin.ofNat'` (which takes an
`[NeZero]` instance, rather than returning an element of `Fin (n+1)`).
- [#6247](https://github.com/leanprover/lean4/pull/6247) adds the theorems `numBits_pos`, `le_numBits`, `numBits_le`,
  which make proving inequalities about `System.Platform.numBits` easier.
## Compiler
- [#5840](https://github.com/leanprover/lean4/pull/5840) changes `lean_sharecommon_{eq,hash}` to only consider the
salient bytes of an object, and not any bytes of any
unspecified/uninitialized unused capacity.
- [#6087](https://github.com/leanprover/lean4/pull/6087) fixes a bug in the constant folding for the `Nat.ble` and
`Nat.blt` function in the old code generator, leading to a
miscompilation.
- [#6143](https://github.com/leanprover/lean4/pull/6143) should make lean better-behaved around sanitizers, per
https://github.com/google/sanitizers/issues/1688.
As far as I can tell,
https://github.com/google/sanitizers/wiki/AddressSanitizerUseAfterReturn#algorithm
replaces local variables with heap allocations, and so taking the
address of a local is not effective at producing a monotonic measure of
stack usage.
- [#6209](https://github.com/leanprover/lean4/pull/6209) documents under which conditions `Runtime.markPersistent` is
unsafe and adjusts the elaborator accordingly
- [#6257](https://github.com/leanprover/lean4/pull/6257) harden `markPersistent` uses
## Pretty Printing
- [#2934](https://github.com/leanprover/lean4/pull/2934) adds the option `pp.parens` (default: false) that causes the
pretty printer to eagerly insert parentheses, which can be useful for
teaching and for understanding the structure of expressions. For
example, it causes `p → q → r` to pretty print as `p → (q → r)`.
- [#6014](https://github.com/leanprover/lean4/pull/6014) prevents `Nat.succ ?_` from pretty printing as `?_.succ`, which
should make `apply?` be more usable.
- [#6085](https://github.com/leanprover/lean4/pull/6085) improves the term info for coercions marked with
`CoeFnType.coeFun` (such as `DFunLike.coe` in Mathlib), making "go to
definition" on the function name work. Hovering over such a coerced
function will show the coercee rather than the coercion expression. The
coercion expression can still be seen by hovering over the whitespace in
the function application.
- [#6096](https://github.com/leanprover/lean4/pull/6096) improves the `#print` command for structures to show all fields
and which parents the fields were inherited from, hiding internal
details such as which parents are represented as subobjects. This
information is still present in the constructor if needed. The pretty
printer for private constants is also improved, and it now handles
private names from the current module like any other name; private names
from other modules are made hygienic.
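A minimal sketch of the improved behavior (the structures here are hypothetical):

```lean
structure Point where
  x : Nat
  y : Nat

structure Point3D extends Point where
  z : Nat

-- `#print Point3D` now lists all fields (`x`, `y`, `z`) together with
-- the parent each was inherited from, without exposing whether `Point`
-- is represented as a subobject.
#print Point3D
```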
- [#6119](https://github.com/leanprover/lean4/pull/6119) adds a new delab option `pp.coercions.types` which, when
enabled, will display all coercions with an explicit type ascription.
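A minimal sketch of the option in use (the display described in the comment is approximate):

```lean
-- With the option enabled, coercions such as `↑n` are displayed with
-- an explicit type ascription, e.g. `(↑n : Int)`.
set_option pp.coercions.types true in
#check fun (n : Nat) => (n : Int) + 1
```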
- [#6161](https://github.com/leanprover/lean4/pull/6161) ensures whitespace is printed before `+opt` and `-opt`
configuration options when pretty printing, improving the experience of
tactics such as `simp?`.
- [#6181](https://github.com/leanprover/lean4/pull/6181) fixes a bug where the signature pretty printer would ignore the
current setting of `pp.raw`. This fixes an issue where `#check ident`
would not heed `pp.raw`. Closes #6090.
- [#6213](https://github.com/leanprover/lean4/pull/6213) exposes the difference in "synthesized type class instance is
not definitionally equal" errors.
## Documentation
- [#6009](https://github.com/leanprover/lean4/pull/6009) fixes a typo in the docstring for prec and makes the text
slightly more precise.
- [#6040](https://github.com/leanprover/lean4/pull/6040) replaces `join` with `flatten` in a docstring.
- [#6110](https://github.com/leanprover/lean4/pull/6110) does some mild refactoring of the `Lean.Elab.StructInst` module
while adding documentation.
- [#6144](https://github.com/leanprover/lean4/pull/6144) converts 3 docstrings to module docs, since that appears to be
what they were intended to be.
- [#6150](https://github.com/leanprover/lean4/pull/6150) refines kernel code comments.
- [#6158](https://github.com/leanprover/lean4/pull/6158) adjusts a file reference in `Data.Sum`.
- [#6239](https://github.com/leanprover/lean4/pull/6239) explains the order in which `Expr.abstract` introduces de Bruijn
indices.
## Server
- [#5835](https://github.com/leanprover/lean4/pull/5835) adds auto-completion for the fields of structure instance notation. Specifically, querying the completions via `Ctrl+Space` in the whitespace of a structure instance notation will now bring up the full list of fields. Whitespace structure completion can be enabled for custom syntax by wrapping the parser for the list of fields in a `structInstFields` parser.
- [#5837](https://github.com/leanprover/lean4/pull/5837) fixes an old auto-completion bug where `x.` would issue
nonsensical completions when `x.` could not be elaborated as a dot
completion.
- [#5996](https://github.com/leanprover/lean4/pull/5996) avoids a max-heartbeat error in completion.
- [#6031](https://github.com/leanprover/lean4/pull/6031) fixes a regression with go-to-definition and document highlight
misbehaving on tactic blocks.
- [#6246](https://github.com/leanprover/lean4/pull/6246) fixes a performance issue where the Lean language server would
walk the full project file tree every time a file was saved, blocking
the processing of all other requests and notifications and significantly
increasing overall language server latency after saving.
## Lake
- [#5684](https://github.com/leanprover/lean4/pull/5684) updates the toolchain on `lake update`.
- [#6026](https://github.com/leanprover/lean4/pull/6026) adds a newline at end of each Lean file generated by `lake new`
templates.
- [#6218](https://github.com/leanprover/lean4/pull/6218) makes Lake no longer automatically fetch GitHub cloud releases
if the package build directory is already present (mirroring the
behavior of the Reservoir cache). This prevents the cache from
clobbering existing prebuilt artifacts. Users can still manually fetch
the cache and clobber the build directory by running `lake build
<pkg>:release`.
- [#6225](https://github.com/leanprover/lean4/pull/6225) makes `lake build` also eagerly print package materialization
log lines. Previously, only a `lake update` performed eager logging.
- [#6231](https://github.com/leanprover/lean4/pull/6231) improves the errors Lake produces when it fails to fetch a
dependency from Reservoir. If the package is not indexed, it will
produce a suggestion about how to require it from GitHub.
## Other
- [#6137](https://github.com/leanprover/lean4/pull/6137) adds support for displaying multiple threads in the trace
profiler output.
- [#6138](https://github.com/leanprover/lean4/pull/6138) fixes `trace.profiler.pp` not using the term pretty printer.
- [#6259](https://github.com/leanprover/lean4/pull/6259) ensures that nesting trace nodes are annotated with timing
information iff `trace.profiler` is active.
```` |
reference-manual/Manual/Releases/v4_19_0.lean | import VersoManual
import Manual.Meta.Markdown
open Manual
open Verso.Genre
-- TODO: figure out why this is needed with the new compiler
set_option maxRecDepth 11000
#doc (Manual) "Lean 4.19.0 (2025-05-01)" =>
%%%
tag := "release-v4.19.0"
file := "v4.19.0"
%%%
````markdown
For this release, 420 changes landed. In addition to the 164 feature additions and 78 fixes listed below, there were 13 refactoring changes, 29 documentation improvements, 31 performance improvements, 9 improvements to the test suite and 94 other changes.
## Highlights
Lean v4.19.0 introduces a number of features, bug fixes, performance gains, and library developments,
along with quality-of-life improvements across documentation, the language server, and Lake.
### New Decorations in VS Code
Visual feedback in VS Code has been improved, with the extension now featuring:
* Gutter decorations for errors and warnings. These make the full range of errors/warnings clear,
which is especially useful when the corresponding squigglies are small.
* End-of-line markers for 'unsolved goals'. These are displayed at the line where an
'unsolved goals' error ends and indicate where the proof needs to be continued.
* 'Goals accomplished!' message. When a theorem or a `Prop`-typed `example` contains no errors or `sorry`s anymore,
two blue checkmarks appear next to the start of the declaration as a gutter decoration.
Additionally, a 'Goals accomplished!' message appears under 'Messages' in the InfoView.
Gutter decorations for errors and warnings are available for all Lean 4 versions.
Decorations for 'unsolved goals' and 'goals accomplished' rely on server-side support,
which is added in this version via [#7366](https://github.com/leanprover/lean4/pull/7366).
All of these features can be disabled, and the 'Goals accomplished!' icon can be configured in the VS Code extension settings.
See [leanprover/vscode-lean4#585](https://github.com/leanprover/vscode-lean4/pull/585) for the details.
### Parallel Elaboration
* [#7084](https://github.com/leanprover/lean4/pull/7084) enables the elaboration of theorem bodies, i.e. proofs, to
happen in parallel to each other as well as to other elaboration tasks.
### Language Features
* [#5182](https://github.com/leanprover/lean4/pull/5182) makes functions defined by well-founded recursion use an
`opaque` well-founded proof by default. This reliably prevents kernel
reduction of such definitions and proofs, which tends to be
prohibitively slow (fixes [#2171](https://github.com/leanprover/lean4/issues/2171)), and which regularly causes
hard-to-debug kernel type-checking failures. This change renders
`unseal` ineffective for such definitions. To avoid the opaque proof,
annotate the function definition with `@[semireducible]`.
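A hedged sketch of the opt-out (the definition itself is illustrative):

```lean
-- By default this well-founded definition now uses an `opaque`
-- termination proof, which blocks kernel reduction of `fib`.
-- The `@[semireducible]` annotation restores the previous behavior.
@[semireducible] def fib (n : Nat) : Nat :=
  if h : n < 2 then n
  else fib (n - 1) + fib (n - 2)
termination_by n
```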
* [#7166](https://github.com/leanprover/lean4/pull/7166) extends the notion of “fixed parameter” of a recursive function
to parameters that come after varying parameters. The main benefit is
that we get nicer induction principles.
Before, the definition
```lean
def app (as : List α) (bs : List α) : List α :=
match as with
| [] => bs
| a::as => a :: app as bs
```
produced
```lean
app.induct.{u_1} {α : Type u_1} (motive : List α → List α → Prop) (case1 : ∀ (bs : List α), motive [] bs)
(case2 : ∀ (bs : List α) (a : α) (as : List α), motive as bs → motive (a :: as) bs) (as bs : List α) : motive as bs
```
and now you get
```lean
app.induct.{u_1} {α : Type u_1} (motive : List α → Prop) (case1 : motive [])
(case2 : ∀ (a : α) (as : List α), motive as → motive (a :: as)) (as : List α) : motive as
```
because `bs` is fixed throughout the recursion (and can completely be dropped from the principle).
This is a **breaking change** when such an induction principle is used explicitly. Using `fun_induction` makes proof tactics robust against this change.
See the PR description for the rules for when a parameter is considered fixed.
Note that in a definition like
```lean
def app : List α → List α → List α
| [], bs => bs
| a::as, bs => a :: app as bs
```
the `bs` is not considered fixed, as it goes through the matcher machinery.
* [#7431](https://github.com/leanprover/lean4/pull/7431) changes the syntax of location modifiers for tactics like `simp`
and `rw` (e.g., `simp at h ⊢`) to allow the turnstile `⊢` to appear
anywhere in the sequence of locations.
* [#7457](https://github.com/leanprover/lean4/pull/7457) ensures info tree users such as linters and request handlers
have access to info subtrees created by async elab tasks by introducing
API to leave holes filled by such tasks.
**Breaking change**: other metaprogramming users of `Command.State.infoState` may need to call `InfoState.substituteLazy` on it manually to fill all holes.
### Updates to structures and classes
* [#7302](https://github.com/leanprover/lean4/pull/7302) changes how fields are elaborated in the `structure`/`class`
commands and also makes default values respect the structure resolution
order when there is diamond inheritance. Before, the details of
subobjects were exposed during elaboration, and in the local context any
fields that came from a subobject were defined to be projections of the
subobject field. Now, every field is represented as a local variable.
All parents (not just subobject parents) are now represented in the
local context, and they are now local variables defined to be parent
constructors applied to field variables (inverting the previous
relationship). See the PR description for further details.
* [#7640](https://github.com/leanprover/lean4/pull/7640) implements the main logic for inheriting and overriding
autoParam fields in the `structure`/`class` commands, pending being
enabled in the structure instance notation elaborator. It adds term info to
overridden fields, so they can now be hovered over, and "go to
definition" goes to the structure in which the field was originally defined.
* [#7717](https://github.com/leanprover/lean4/pull/7717) changes how `{...}`/`where` notation ("structure instance
notation") elaborates. The notation now tries to simulate a flat
representation as much as possible, without exposing the details of
subobjects.
This is a **breaking change**, see the PR description for further details and mitigation strategies.
* [#7742](https://github.com/leanprover/lean4/pull/7742) adds a feature to `structure`/`class` where binders without
types on a field definition are interpreted as overriding the type's
parameters binder kinds in that field's projection function. See the PR description for further details.
### Library Updates
* Developments in the async machinery;
* Standardization of the integer division API;
* Conversions between finite types;
* API expansion of `BitVec` and tree maps;
* Proofs of Bitwuzla rewrite rules;
* Improvements to `List`/`Array`/`Vector`, as well as `HashMap` and `Int`/`Nat`.
See the Library section below for details.
### Other Highlights
* Documentation has been significantly expanded. See the Documentation section below for details.
* [#7185](https://github.com/leanprover/lean4/pull/7185) refactors Lake's build internals to enable the introduction of
targets and facets beyond packages, modules, and libraries. Facets,
build keys, build info, and CLI commands have been generalized to
arbitrary target types.
## Language
* [#5182](https://github.com/leanprover/lean4/pull/5182) makes functions defined by well-founded recursion use an
`opaque` well-founded proof by default; see highlights section for details.
* [#5998](https://github.com/leanprover/lean4/pull/5998) lets `omega` always abstract its own proofs into an auxiliary
definition. The size of the olean of Vector.Extract goes down from 20MB
to 5MB with this, overall stdlib olean size and build instruction count
go down 5%.
* [#6325](https://github.com/leanprover/lean4/pull/6325) ensures that environments can be loaded, repeatedly, without
executing arbitrary code.
* [#7075](https://github.com/leanprover/lean4/pull/7075) ensures that names suggested by tactics like `simp?` are not
shadowed by auxiliary declarations in the local context and that names
of `let rec` and `where` declarations are correctly resolved in tactic
blocks.
* [#7166](https://github.com/leanprover/lean4/pull/7166) extends the notion of “fixed parameter” of a recursive function
to parameters that come after varying parameters; see the highlights section for details.
* [#7256](https://github.com/leanprover/lean4/pull/7256) introduces the `assert!` variant `debug_assert!` that is
activated when compiled with `buildType` `debug`.
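A hypothetical sketch, assuming `debug_assert!` mirrors `assert!`'s `do`-notation usage (the function below is illustrative):

```lean
-- The check is compiled in only when the package is built with
-- `buildType` set to `debug`; other build types skip it entirely.
def safeDiv (a b : Nat) : Nat := Id.run do
  debug_assert! b != 0
  return a / b
```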
* [#7304](https://github.com/leanprover/lean4/pull/7304) fixes an issue where nested `let rec` declarations within
`match` expressions or tactic blocks failed to compile if they were
nested within, and recursively called, a `let rec` that referenced a
variable bound by a containing declaration.
* [#7324](https://github.com/leanprover/lean4/pull/7324) changes the internal construction of well-founded recursion, to
not change the type of `fix`’s induction hypothesis in non-defeq ways.
* [#7333](https://github.com/leanprover/lean4/pull/7333) allows aux decls (like those generated by `match`) to be generated by
`decreasing_by` tactics.
* [#7335](https://github.com/leanprover/lean4/pull/7335) modifies `elabTerminationByHints` so that the type of the
recursive function used for elaborating the termination measure is
stripped of optional parameters. This prevents introducing
dependencies between default argument values that could cause
the termination checker to fail.
* [#7353](https://github.com/leanprover/lean4/pull/7353) changes `abstractNestedProofs` so that it also visits the
subterms in the head of an application.
* [#7362](https://github.com/leanprover/lean4/pull/7362) allows simp dischargers to add aux decls to the environment.
This enables tactics like `native_decide` to be used here, and unblocks
improvements to omega in #5998.
* [#7387](https://github.com/leanprover/lean4/pull/7387) uses `-implicitDefEqProofs` in `bv_omega` to ensure it is not
affected by the change in #7386.
* [#7397](https://github.com/leanprover/lean4/pull/7397) ensures that `Poly.mul p 0` always returns `Poly.num 0`.
* [#7409](https://github.com/leanprover/lean4/pull/7409) allows the use of `dsimp` during preprocessing of well-founded
definitions. This fixes regressions when using `if-then-else` without
giving a name to the condition, but where the condition is needed for
the termination proof, in cases where that subexpression is reachable
only by `dsimp` and not by `simp` (e.g. inside a dependent `let`).
* [#7431](https://github.com/leanprover/lean4/pull/7431) changes the syntax of location modifiers for tactics like `simp`
and `rw` (e.g., `simp at h ⊢`) to allow the turnstile `⊢` to appear
anywhere in the sequence of locations.
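A minimal sketch (the example itself is trivial):

```lean
-- The turnstile `⊢` may now appear anywhere among the locations,
-- not just at the end of the list.
example (h : 1 + 1 = 2) : 2 = 1 + 1 := by
  simp at ⊢ h
```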
* [#7509](https://github.com/leanprover/lean4/pull/7509) disables the `implicitDefEqProofs` simp option in the
preprocessor of `bv_decide` in order to account for regressions caused
by #7387.
* [#7511](https://github.com/leanprover/lean4/pull/7511) fixes two bugs in `simp +arith` that were preventing specific
subterms from being normalized.
* [#7515](https://github.com/leanprover/lean4/pull/7515) fixes another bug in `simp +arith`. This bug was affecting
`grind`. See new test for an example.
* [#7551](https://github.com/leanprover/lean4/pull/7551) changes `isNatCmp` to ignore optional argument annotations
when checking for `<`-like comparisons between elements of `Nat`. That
previously caused `guessLex` to fail when checking termination of a
function whose signature involved an optional argument of type
`Nat`.
* [#7560](https://github.com/leanprover/lean4/pull/7560) ensures that we use the same ordering to normalize linear `Int`
terms and relations. This change affects the `simp +arith` and `grind`
normalizers.
* [#7622](https://github.com/leanprover/lean4/pull/7622) fixes `fun_induction` when used on structurally recursive
functions where there are targets occurring before fixed parameters.
* [#7630](https://github.com/leanprover/lean4/pull/7630) fixes a performance issue in the `whnfCore` procedure.
* [#7728](https://github.com/leanprover/lean4/pull/7728) fixes an issue in `abstractNestedProofs`:
proofs occurring in the inferred proposition are now abstracted too.
### Structures
* [#7302](https://github.com/leanprover/lean4/pull/7302) changes how fields are elaborated in the `structure`/`class`
commands and also makes default values respect the structure resolution
order when there is diamond inheritance. Before, the details of
subobjects were exposed during elaboration, and in the local context any
fields that came from a subobject were defined to be projections of the
subobject field. Now, every field is represented as a local variable.
All parents (not just subobject parents) are now represented in the
local context, and they are now local variables defined to be parent
constructors applied to field variables (inverting the previous
relationship). Other notes:
- The entire collection of parents is processed, and all parent
projection names are checked for consistency. Every parent appears in
the local context now.
- For classes, every parent now contributes an instance, not just the
parents represented as subobjects.
- Default values are now processed according to the parent resolution
order. Default value definition/override auxiliary definitions are
stored at `StructName.fieldName._default`, and inherited values are
stored at `StructName.fieldName._inherited_default`. Metaprograms no
longer need to look at parents when doing calculations on default
values.
- Default value omission for structure instance notation pretty printing
has been updated in consideration of this.
- Now the elaborator generates a `_flat_ctor` constructor that will be
used for structure instance elaboration. All types in this constructor
are put in "field normal form" (projections of parent constructors are
reduced, and parent constructors are eta reduced), and all fields with
autoParams are annotated as such. This is not meant for users, but it
may be useful for metaprogramming.
- While elaborating fields, any metavariables whose type is one of the
parents is assigned to that parent. The hypothesis is that, for the
purpose of elaborating structure fields, parents are fixed: there is
only *one* instance of any given parent under consideration. See the
`Magma` test for an example of this being necessary. The hypothesis may
not be true when there are recursive structures, since different values
of the structure might not agree on parent fields.
* [#7314](https://github.com/leanprover/lean4/pull/7314) changes elaboration of `structure` parents so that each must be
fully elaborated before the next one is processed.
* [#7640](https://github.com/leanprover/lean4/pull/7640) implements the main logic for inheriting and overriding
autoParam fields in the `structure`/`class` commands, pending being
enabled in the structure instance notation elaborator. It adds term info to
overridden fields, so they can now be hovered over, and "go to
definition" goes to the structure in which the field was originally defined.
* [#7652](https://github.com/leanprover/lean4/pull/7652) gives `#print` for structures the ability to show the default
values and auto-param tactics for fields.
* [#7717](https://github.com/leanprover/lean4/pull/7717) changes how `{...}`/`where` notation ("structure instance
notation") elaborates. The notation now tries to simulate a flat
representation as much as possible, without exposing the details of
subobjects. Features:
- When fields are elaborated, their expected types now have a couple
reductions applied. For all projections and constructors associated to
the structure and its parents, projections of constructors are reduced
and constructors of projections are eta reduced, and also implementation
detail local variables are zeta reduced in propositions (so tactic
proofs should never see them anymore). Furthermore, field values are
beta reduced automatically in successive field types. The example in
[mathlib4#12129](https://github.com/leanprover-community/mathlib4/issues/12129#issuecomment-2056134533)
now shows a goal of `0 = 0` rather than `{ toFun := fun x => x }.toFun 0
= 0`.
- All parents can now be used as field names, not just the subobject
parents. These are like additional sources but with three constraints:
every field of the value must be used, the fields must not overlap with
other provided fields, and every field of the specified parent must be
provided for. Similar to sources, the values are hoisted to `let`s if
they are not already variables, to avoid multiple evaluation. They are
implementation detail local variables, so they get unfolded for
successive fields.
- All class parents are now used to fill in missing fields, not just the
subobject parents. Closes #6046. Rules: (1) only those parents whose
fields are a subset of the remaining fields are considered, (2) parents
are considered only before any fields are elaborated, and (3) only those
parents whose type can be computed are considered (this can happen if a
parent depends on another parent, which is possible since #7302).
- Default values and autoparams now respect the resolution order
completely: each field has at most one default value definition that can
provide for it. The algorithm that tries to unstick default values by
walking up the subobject hierarchy has been removed. If there are
applications of default value priorities, we might consider it in a
future release.
- The resulting constructors are now fully packed. This is implemented
by doing structure eta reduction of the elaborated expressions.
- "Magic field definitions" (as reported [on
Zulip](https://leanprover.zulipchat.com/#narrow/channel/113489-new-members/topic/Where.20is.20sSup.20defined.20on.20submodules.3F/near/499578795))
have been eliminated. This was where fields were being solved for by
unification, tricking the default value system into thinking they had
actually been provided. Now the default value system keeps track of
which fields it has actually solved for, and which fields the user did
not provide. Explicit structure fields (the default kind) without any
explicit value definition will result in an error. If it was solved for
by unification, the error message will include the inferred value, like
"field 'f' must be explicitly provided, its synthesized value is v"
- When the notation is used in patterns, it now no longer inserts fields
using class parents, and it no longer applies autoparams or default
values. The motivation is that one expects patterns to match only the
given fields. This is still imperfect, since fields might be solved for
indirectly.
- Elaboration now attempts error recovery. Extraneous fields log errors
and are ignored, missing fields are filled with `sorry`.
* [#7742](https://github.com/leanprover/lean4/pull/7742) adds a feature to `structure`/`class` where binders without
types on a field definition are interpreted as overriding the type's
parameters binder kinds in that field's projection function. The rules
are (1) only a prefix of the binders are interpreted this way, (2)
multi-identifier binders are allowed but they must all be for
parameters, (3) only parameters that appear in the declaration itself
(not from `variables`) can be overridden and (4) the updates will be
applied after parameter binder kind inference is done. Binder updates
are not allowed in default value redefinitions. Example application: In
the following, `(R p)` causes the `R` and `p` parameters to be explicit,
where normally they would be implicit.
```
class CharP (R : Type u) [AddMonoidWithOne R] (p : Nat) : Prop where
cast_eq_zero_iff (R p) : ∀ x : Nat, (x : R) = 0 ↔ p ∣ x
#guard_msgs in #check CharP.cast_eq_zero_iff
/-
info: CharP.cast_eq_zero_iff.{u} (R : Type u) {inst✝ : AddMonoidWithOne R} (p : Nat) [self : CharP R p] (x : Nat) :
↑x = 0 ↔ p ∣ x
-/
```
* [#7746](https://github.com/leanprover/lean4/pull/7746) adds declaration ranges to structure fields that were copied
from parents that aren't represented as subobjects, supporting "go to
definition". The declaration range is the parent in the `extends`
clause.
### Parallel Elaboration
* [#7084](https://github.com/leanprover/lean4/pull/7084) enables the elaboration of theorem bodies, i.e. proofs, to
happen in parallel to each other as well as to other elaboration tasks.
* [#7247](https://github.com/leanprover/lean4/pull/7247) makes generation of `match` equations and splitters compatible
with parallelism.
* [#7261](https://github.com/leanprover/lean4/pull/7261) ensures all equation, unfold, induction, and partial fixpoint
theorem generators in core are compatible with parallelism.
* [#7348](https://github.com/leanprover/lean4/pull/7348) ensures all equation and unfold theorem generators in core are
compatible with parallelism.
* [#7457](https://github.com/leanprover/lean4/pull/7457) ensures info tree users such as linters and request handlers
have access to info subtrees created by async elab task by introducing
API to leave holes filled by such tasks.
* [#8101](https://github.com/leanprover/lean4/pull/8101) fixes a parallelism regression where linters that e.g. check for
errors in the command would no longer find such messages.
### bv_decide
* [#7298](https://github.com/leanprover/lean4/pull/7298) adds rewrites to bv_decide's preprocessing that concern
combinations of if-then-else and operation such as multiplication or
negation.
* [#7309](https://github.com/leanprover/lean4/pull/7309) fixes a bug where bv_decide's new structure support would
sometimes not case split on all available structure fvars as their type
was an mvar.
* [#7329](https://github.com/leanprover/lean4/pull/7329) adds support to bv_decide for simple pattern matching on enum
inductives. By simple we mean non-dependent match statements with all
arms written out.
* [#7347](https://github.com/leanprover/lean4/pull/7347) upgrades the CaDiCaL we ship and use for bv_decide to version
2.1.2. Additionally, it enables binary LRAT proofs on Windows by default,
as https://github.com/arminbiere/cadical/issues/112 has been fixed.
* [#7381](https://github.com/leanprover/lean4/pull/7381) refactors the AIG data structures that underlie bv_decide in order
to allow better tracking of negations in the circuit. This refactor
has two effects: it adds full constant folding to the AIG
framework, and it enables adding further simplifications from
the Brummayer–Biere paper in the future, which was previously
architecturally impossible.
* [#7390](https://github.com/leanprover/lean4/pull/7390) makes bv_decide's preprocessing handle casts; since we are in the
constant `BitVec` fragment, they can always be removed using
`BitVec.cast_eq`.
* [#7407](https://github.com/leanprover/lean4/pull/7407) adds rules for `-1#w * a = -a` and `a * -1#w = -a` to
bv_normalize as seen in Bitwuzla's BV_MUL_SPECIAL_CONST.
* [#7417](https://github.com/leanprover/lean4/pull/7417) adds support for enum inductive matches with default branches to
bv_decide.
* [#7429](https://github.com/leanprover/lean4/pull/7429) adds the BV_EXTRACT_FULL preprocessing rule from Bitwuzla to
bv_decide.
* [#7436](https://github.com/leanprover/lean4/pull/7436) adds simprocs that turn left and right shifts by constants into
extracts to bv_decide.
* [#7438](https://github.com/leanprover/lean4/pull/7438) adds the EQUAL_CONST_BV_ADD and BV_AND_CONST rules to
bv_decide's preprocessor.
* [#7441](https://github.com/leanprover/lean4/pull/7441) adds the BV_CONCAT_CONST, BV_CONCAT_EXTRACT and ELIM_ZERO_EXTEND
rule from Bitwuzla to bv_decide.
* [#7477](https://github.com/leanprover/lean4/pull/7477) ensures that bv_decide doesn't accidentally operate on terms
underneath binders. As there is currently no binder construct that is in
the supported fragment of bv_decide this changes nothing about the proof
power.
* [#7480](https://github.com/leanprover/lean4/pull/7480) adds the necessary rewrites for the Bitwuzla rules
BV_ULT_SPECIAL_CONST, BV_SIGN_EXTEND_ELIM, TODO.
* [#7486](https://github.com/leanprover/lean4/pull/7486) adds the BitVec.add_neg_mul rule introduced in #7481 to
bv_decide's preprocessor.
* [#7491](https://github.com/leanprover/lean4/pull/7491) achieves a speed up in bv_decide's LRAT checker by improving its
input validation.
* [#7521](https://github.com/leanprover/lean4/pull/7521) adds the equivalent of `Array.emptyWithCapacity` to the AIG
framework and applies it to `bv_decide`. This is particularly useful as
we are only working with capacities that are always known at run time so
we should never have to reallocate a `RefVec`.
* [#7527](https://github.com/leanprover/lean4/pull/7527) adds the BV_EXTRACT_CONCAT_LHS_RHS, NORM_BV_ADD_MUL and
NORM_BV_SHL_NEG rewrite from Bitwuzla as well as a reduction from
getLsbD to extractLsb' to bv_decide.
* [#7615](https://github.com/leanprover/lean4/pull/7615) adds the ADD part of Bitwuzla's BV_EXTRACT_ADD_MUL rule to
bv_decide's preprocessor.
* [#7617](https://github.com/leanprover/lean4/pull/7617) adds the known-bits optimization from the multiplication circuit
to the addition circuit, allowing us to discover potentially even more symmetries
before going to the SAT solver.
* [#7636](https://github.com/leanprover/lean4/pull/7636) makes sure that the expression level cache in bv_decide is
maintained across the entire bitblaster instead of just locally per
BitVec expression.
* [#7644](https://github.com/leanprover/lean4/pull/7644) adds a cache to the reflection procedure of bv_decide.
* [#7649](https://github.com/leanprover/lean4/pull/7649) changes the AIG representation of constants from `const (b :
Bool)` to a single constructor `false`. Since #7381 `Ref` contains an
`invert` flag meaning the constant `true` can be represented as a `Ref`
to `false` with `invert` set, so no expressivity is lost.
* [#7655](https://github.com/leanprover/lean4/pull/7655) adds the preprocessing rule for extraction over multiplication
to bv_decide.
* [#7663](https://github.com/leanprover/lean4/pull/7663) uses computed fields to store the hash code and pointer equality
to increase performance of comparison and hashmap lookups on the core
data structure used by the bitblaster.
* [#7670](https://github.com/leanprover/lean4/pull/7670) improves the caching computation of the atoms assignment in
bv_decide's reflection procedure.
* [#7698](https://github.com/leanprover/lean4/pull/7698) adds more sharing and caching procedures to bv_decide's
reflection step.
* [#7720](https://github.com/leanprover/lean4/pull/7720) compresses the AIG representation by storing the inverter bit in
the lowest bit of the gate descriptor instead of as a separate `Bool`.
* [#7727](https://github.com/leanprover/lean4/pull/7727) avoids some unnecessary allocations in the CNF-to-DIMACS
conversion.
* [#7733](https://github.com/leanprover/lean4/pull/7733) ensures that in the AIG the constant circuit node is always
stored at the first spot. This allows us to skip performing a cache
lookup when we require a constant node.
### Grind
* [#7355](https://github.com/leanprover/lean4/pull/7355) fixes a bug in the `markNestedProofs` preprocessor used in the
`grind` tactic.
* [#7392](https://github.com/leanprover/lean4/pull/7392) fixes an issue in the `grind` tactic when case splitting on
if-then-else expressions.
* [#7510](https://github.com/leanprover/lean4/pull/7510) ensures that `grind` can be used as a more powerful
`contradiction` tactic, sparing the user from having to type `exfalso;
grind` or `intros; exfalso; grind`.
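As a hypothetical illustration (these examples are not from the PR), goals with contradictory hypotheses can now be closed directly:
```lean
-- Hypothetical examples: contradictory hypotheses close the goal
-- directly, with no explicit `exfalso` or `intros` step needed.
example (p : Prop) (h₁ : p) (h₂ : ¬ p) : 0 = 1 := by grind
example (n : Nat) (h : n < n) : False := by grind
```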
* [#7512](https://github.com/leanprover/lean4/pull/7512) adds missing normalization rules for `Nat` div and mod to the
`grind` tactic.
* [#7514](https://github.com/leanprover/lean4/pull/7514) adds more missing normalization rules for `div` and `mod` to
`grind`.
* [#7532](https://github.com/leanprover/lean4/pull/7532) fixes the procedure for putting new facts into the `grind`
"to-do" list, ensuring the new facts are preprocessed. It also removes
some of the clutter in the `Nat.sub` support.
* [#7540](https://github.com/leanprover/lean4/pull/7540) adds the `[grind cases eager]` attribute to `Subtype`. See new test.
* [#7553](https://github.com/leanprover/lean4/pull/7553) removes a bad normalization rule in `grind`, and adds a missing
dsimproc.
* [#7641](https://github.com/leanprover/lean4/pull/7641) implements basic model-based theory combination in `grind`.
`grind` can now solve examples such as
```lean
example (f : Int → Int) (x : Int)
: 0 ≤ x → x ≠ 0 → x ≤ 1 → f x = 2 → f 1 = 2 := by
grind
```
* [#7712](https://github.com/leanprover/lean4/pull/7712) ensures `grind` always abstracts its own proofs into an
auxiliary definition/theorem. This is similar to #5998, but for `grind`.
* [#7714](https://github.com/leanprover/lean4/pull/7714) fixes an assertion violation in the `grind` model-based theory
combination module.
* [#7723](https://github.com/leanprover/lean4/pull/7723) adds the configuration options `zeta` and `zetaDelta` in
`grind`. Both are set to `true` by default.
* [#7724](https://github.com/leanprover/lean4/pull/7724) adds `dite_eq_ite` normalization rule to `grind`. This rule is
important to adjust mismatches between a definition and its function
induction principle.
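As a hypothetical illustration (the example is not from the PR), the rule lets `grind` identify a dependent `if` whose branches ignore the proof with the corresponding plain `if`:
```lean
-- Hypothetical example: a `dite` whose branches do not use the
-- decidability proof normalizes to the corresponding `ite`.
example {α : Type} {P : Prop} [Decidable P] {a b : α} :
    (if _h : P then a else b) = (if P then a else b) := by
  grind
```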
* [#7726](https://github.com/leanprover/lean4/pull/7726) fixes the `markNestedProofs` procedure used in `grind`. It was
missing the case where the type of a nested proof may contain other
nested proofs.
* [#7760](https://github.com/leanprover/lean4/pull/7760) ensures `grind` is using the default transparency setting when
computing auxiliary congruence lemmas.
* [#7765](https://github.com/leanprover/lean4/pull/7765) improves how `grind` normalizes dependent implications during
introduction.
Previously, `grind` would introduce a hypothesis `h : p` for a goal of
the form `.. ⊢ (h : p) → q h`, and then normalize and assert a
non-dependent copy of `p`. As a result, the local context would contain
both `h : p` and a separate `h' : p'`, where `p'` is the normal form of
`p`. Moreover, `q` would still depend on the original `h`.
* [#7776](https://github.com/leanprover/lean4/pull/7776) improves the equality proof discharger used by the E-matching
procedure in `grind`.
* [#7777](https://github.com/leanprover/lean4/pull/7777) fixes the introduction procedure used in `grind`. It was not
registering local instances that are also propositions. See new test.
* [#7778](https://github.com/leanprover/lean4/pull/7778) adds missing propagation rules for `LawfulBEq A` to `grind`.
They are needed in a context where the instance `DecidableEq A` is not
available. See new test.
* [#7781](https://github.com/leanprover/lean4/pull/7781) adds a new propagation rule for `Bool` disequalities to `grind`.
It now propagates `x = true` (resp. `x = false`) from the disequality
`x ≠ false` (resp. `x ≠ true`), ensuring we don't have to perform case
analysis on `x` to learn this fact. See tests.
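As a hypothetical illustration (not taken from the PR's tests), the propagation lets goals like these close without splitting on `x`:
```lean
-- Hypothetical examples: each disequality now propagates the
-- corresponding equality without case analysis on `x`.
example (x : Bool) (h : x ≠ false) : x = true := by grind
example (x : Bool) (h : x ≠ true) : x = false := by grind
```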
### CutSat
* [#7312](https://github.com/leanprover/lean4/pull/7312) implements proof term generation for `cooper_dvd_left` and its
variants in the cutsat procedure for linear integer arithmetic.
* [#7315](https://github.com/leanprover/lean4/pull/7315) implements Cooper conflict resolution in cutsat. We still
need to implement the backtracking and disequality cases.
* [#7339](https://github.com/leanprover/lean4/pull/7339) implements Cooper conflict resolution in the cutsat procedure.
It also fixes several bugs in the proof term construction. We still need
to add more tests, but we can already solve the following example that
`omega` fails to solve:
```lean
example (x y : Int) :
27 ≤ 11*x + 13*y →
11*x + 13*y ≤ 45 →
-10 ≤ 7*x - 9*y →
7*x - 9*y ≤ 4 → False := by
grind
```
* [#7351](https://github.com/leanprover/lean4/pull/7351) ensures cutsat does not have to perform case analysis in the
univariate polynomial case. That is, it can close a goal whenever there
is no solution for a divisibility constraint in an interval. Example of
a theorem that is now proved in a single step by cutsat:
```lean
example (x : Int) : 100 ≤ x → x ≤ 10000 → 20000 ∣ 3*x → False := by
grind
```
* [#7357](https://github.com/leanprover/lean4/pull/7357) adds support for `/` and `%` to the cutsat procedure.
* [#7369](https://github.com/leanprover/lean4/pull/7369) uses `let`-declarations for each polynomial occurring in a proof
term generated by the cutsat procedure.
* [#7370](https://github.com/leanprover/lean4/pull/7370) simplifies the proof terms produced by Cooper conflict
resolution in cutsat.
* [#7373](https://github.com/leanprover/lean4/pull/7373) implements the last missing case for the cutsat procedure and
fixes a bug. During model construction, we may encounter a bounded
interval containing integer solutions that satisfy the divisibility
constraint but fail to satisfy known disequalities.
* [#7394](https://github.com/leanprover/lean4/pull/7394) adds infrastructure necessary for supporting `Nat` in the cutsat
procedure. It also makes `grind` more robust.
* [#7396](https://github.com/leanprover/lean4/pull/7396) fixes a bug in the cutsat model construction. It was searching
for a solution in the wrong direction.
* [#7401](https://github.com/leanprover/lean4/pull/7401) improves the cutsat model search procedure by tightening
inequalities using divisibility constraints.
* [#7494](https://github.com/leanprover/lean4/pull/7494) implements support for `Nat` inequalities in the cutsat
procedure.
* [#7495](https://github.com/leanprover/lean4/pull/7495) implements support for `Nat` divisibility constraints in the
cutsat procedure.
* [#7501](https://github.com/leanprover/lean4/pull/7501) implements support for `Nat` equalities and disequalities in the
cutsat procedure.
* [#7502](https://github.com/leanprover/lean4/pull/7502) implements support for `Nat` div and mod in the cutsat
procedure.
* [#7503](https://github.com/leanprover/lean4/pull/7503) implements support for `Nat.sub` in cutsat.
* [#7536](https://github.com/leanprover/lean4/pull/7536) implements support for `¬ d ∣ p` in the cutsat procedure.
* [#7537](https://github.com/leanprover/lean4/pull/7537) implements support for `Int.natAbs` and `Int.toNat` in the
cutsat procedure.
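As a hypothetical illustration (not from the PR), goals mixing `Int.toNat` with linear integer arithmetic are now in scope:
```lean
-- Hypothetical example: `x.toNat` cast back to `Int` equals `x` for
-- non-negative `x`; cutsat can now reason about `Int.toNat` terms.
example (x : Int) (h : 0 ≤ x) : (x.toNat : Int) = x := by grind
```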
* [#7538](https://github.com/leanprover/lean4/pull/7538) fixes a bug in the cutsat model construction. It was not
resetting the decision stack at the end of the search.
* [#7561](https://github.com/leanprover/lean4/pull/7561) fixes the support for nonlinear `Nat` terms in cutsat. For
example, cutsat was failing in the following example
```lean
example (i j k l : Nat) : i / j + k + l - k = i / j + l := by grind
```
because we were not adding the fact that `i / j` is non-negative when
injecting the `Nat` expression into `Int`.
* [#7579](https://github.com/leanprover/lean4/pull/7579) improves the counterexamples produced by the cutsat procedure,
and adds proper support for `Nat`. Before this PR, the assignment for a
natural-number variable `x` would be represented as `NatCast.natCast x`.
## Library
* [#6496](https://github.com/leanprover/lean4/pull/6496) adds short-circuit support to bv_decide to accelerate
multiplications with shared coefficients. In particular, `a * x = b * x`
can be extended to `a = b ∨ (a * x = b * x)`. The latter is faster if
`a = b` is true, as `a = b` may be evaluated without considering the
multiplication circuit. Otherwise, we still require the multiplication
circuit, as `a * x = b * x → a = b` is not always true due to two's
complement wrapping.
* [#7141](https://github.com/leanprover/lean4/pull/7141) generalizes `cond` to allow the motive to be in `Sort u`, not
just `Type u`.
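A minimal sketch of what the generalization permits (the example is hypothetical): with the motive in `Sort u`, `cond` can select between propositions, not just data.
```lean
-- Hypothetical example: `Prop` is `Sort 0`, so this only elaborates
-- once `cond`'s motive is generalized from `Type u` to `Sort u`.
example (b : Bool) : Prop := cond b (1 = 1) (2 = 2)
```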
* [#7289](https://github.com/leanprover/lean4/pull/7289) adds `getKey_beq`, `getKey_congr` and variants to the hashmap
api.
* [#7319](https://github.com/leanprover/lean4/pull/7319) continues alignment of lemmas about `Int.ediv/fdiv/tdiv`,
including adding notes about "missing" lemmas that do not apply in one
case. Also lemmas about `emod/fmod/tmod`. There's still more to do.
* [#7338](https://github.com/leanprover/lean4/pull/7338) adds @[simp] to `Int.neg_inj`.
* [#7341](https://github.com/leanprover/lean4/pull/7341) adds an equivalence relation to the hash map with several lemmas
for it.
* [#7356](https://github.com/leanprover/lean4/pull/7356) adds lemmas reducing monadic operations with `pure` to the
non-monadic counterparts.
* [#7358](https://github.com/leanprover/lean4/pull/7358) fills further gaps in the integer division API, and mostly
achieves parity between the three variants of integer division. There
are still some inequality lemmas about `tdiv` and `fdiv` that are
missing, but as they would have quite awkward statements I'm hoping that
for now no one is going to miss them.
* [#7378](https://github.com/leanprover/lean4/pull/7378) adds lemmas about `Int` that will be required in #7368.
* [#7380](https://github.com/leanprover/lean4/pull/7380) moves `DHashMap.Raw.foldRev(M)` into `DHashMap.Raw.Internal`.
* [#7406](https://github.com/leanprover/lean4/pull/7406) makes the instance for `Subsingleton (Squash α)` work for `α :
Sort u`.
* [#7418](https://github.com/leanprover/lean4/pull/7418) renames several hash map lemmas (`get` -> `getElem`) and uses
`m[k]?` instead of `get? m k` (and also for `get!` and `get`).
* [#7432](https://github.com/leanprover/lean4/pull/7432) adds a consequence of `Nat.add_div` using a divisibility
hypothesis.
* [#7433](https://github.com/leanprover/lean4/pull/7433) makes `simp` able to simplify basic `for` loops in monads other
than `Id`.
* [#7435](https://github.com/leanprover/lean4/pull/7435) reviews the `Nat` and `Int` API, making the interfaces more
consistent.
* [#7445](https://github.com/leanprover/lean4/pull/7445) renames `Array.mkEmpty` to `emptyWithCapacity`. (Similarly for
`ByteArray` and `FloatArray`.)
* [#7446](https://github.com/leanprover/lean4/pull/7446) prefers using `∅` instead of `.empty` functions. We may later
rename `.empty` functions to avoid the naming clash with
`EmptyCollection`, and to better express semantics of functions which
take an optional capacity argument.
* [#7451](https://github.com/leanprover/lean4/pull/7451) renames the member `insert_emptyc_eq` of the `LawfulSingleton`
type class to `insert_empty_eq` to conform to the recommended spelling of
`∅` as `empty`.
* [#7466](https://github.com/leanprover/lean4/pull/7466) further cleans up simp lemmas for `Int`.
* [#7516](https://github.com/leanprover/lean4/pull/7516) changes the order of arguments for `List.modify` and
`List.insertIdx`, making them consistent with `Array`.
* [#7522](https://github.com/leanprover/lean4/pull/7522) splits off the required theory about `Nat`, `Fin` and `BitVec`
from #7484.
* [#7529](https://github.com/leanprover/lean4/pull/7529) upstreams `bind_congr` from Mathlib and proves that the minimum
of a sorted list is its head and weakens the antisymmetry condition of
`min?_eq_some_iff`. Instead of requiring an `Std.Antisymm` instance,
`min?_eq_some_iff` now only expects a proof that the relation is
antisymmetric *on the elements of the list*. If the new premise is left
out, an autoparam will try to derive it from `Std.Antisymm`, so existing
usages of the theorem will most likely continue to work.
* [#7541](https://github.com/leanprover/lean4/pull/7541) corrects names of a number of lemmas, where the incorrect name
was identified automatically by a
[tool](https://leanprover.zulipchat.com/#narrow/channel/270676-lean4/topic/automatic.20spelling.20generation.20.26.20comparison/near/505760384)
written by @Rob23oba.
* [#7554](https://github.com/leanprover/lean4/pull/7554) adds the SMT-LIB overflow-detection operator `BitVec.negOverflow`,
according to the [SMT-LIB
standard](https://github.com/SMT-LIB/SMT-LIB-2/blob/2.7/Theories/FixedSizeBitVectors.smt2),
and the theorem (`negOverflow_eq`) proving this definition equivalent to
the corresponding `BitVec` library function.
* [#7558](https://github.com/leanprover/lean4/pull/7558) changes the definition of `Nat.div` and `Nat.mod` to use a
structurally recursive, fuel-based implementation rather than
well-founded recursion. This leads to more predictable reduction behavior
in the kernel.
* [#7565](https://github.com/leanprover/lean4/pull/7565) adds `BitVec.toInt_sdiv` plus a lot of related bitvector theory
around divisions.
* [#7614](https://github.com/leanprover/lean4/pull/7614) marks `Nat.div` and `Nat.modCore` as `irreducible`, to recover
the behavior from before #7558.
* [#7672](https://github.com/leanprover/lean4/pull/7672) reviews the implicitness of arguments across List/Array/Vector,
generally trying to make arguments implicit where possible, although
sometimes correcting propositional arguments which were incorrectly
implicit to explicit.
* [#7687](https://github.com/leanprover/lean4/pull/7687) provides `Inhabited`, `Ord` (if missing), `TransOrd`,
`LawfulEqOrd` and `LawfulBEqOrd` instances for various types, namely
`Bool`, `String`, `Nat`, `Int`, `UIntX`, `Option`, `Prod` and date/time
types. It also adds a few related theorems, especially about how the
`Ord` instance for `Int` relates to `LE` and `LT`.
* [#7692](https://github.com/leanprover/lean4/pull/7692) upstreams a small number of ordering lemmas for `Fin` from
mathlib.
* [#7700](https://github.com/leanprover/lean4/pull/7700) provides `Ord`-related instances such as `TransOrd` for `IntX`,
`Ordering`, `BitVec`, `Array`, `List` and `Vector`.
* [#7704](https://github.com/leanprover/lean4/pull/7704) adds lemmas about the modulo operation defined on signed bounded
integers.
* [#7706](https://github.com/leanprover/lean4/pull/7706) performs various cleanup tasks on `Init/Data/UInt/*` and
`Init/Data/SInt/*`.
* [#7729](https://github.com/leanprover/lean4/pull/7729) replaces `assert!` with `assertBEq` to fix issues where asserts
didn't trigger `ctest` failures because they ran in a separate task:
panics are not caught in tasks, while IO errors are handled by the
`AsyncTask` when using the `block` function on them.
* [#7756](https://github.com/leanprover/lean4/pull/7756) adds lemmas about `Nat.gcd` (some of which are currently present
in mathlib).
**BREAKING CHANGE:** While many lemmas were renamed and the lemma with the old signature was simply deprecated, some lemmas were changed without renaming them. They now use the `getElem` variants instead of `get`.
### Async
* [#6683](https://github.com/leanprover/lean4/pull/6683) introduces TCP socket support using the LibUV library, enabling
asynchronous I/O operations with it.
* [#7571](https://github.com/leanprover/lean4/pull/7571) fixes #7478 by modifying `number` specifiers from `atLeast size`
to `flexible size` for parsing. This change allows:
- 1 repetition to accept 1 or more characters
- More than 1 repetition to require exactly that many characters
* [#7574](https://github.com/leanprover/lean4/pull/7574) introduces UDP socket support using the LibUV library, enabling
asynchronous I/O operations with it.
* [#7578](https://github.com/leanprover/lean4/pull/7578) introduces a function called `interfaceAddresses` that retrieves
an array of the system's network interfaces.
* [#7584](https://github.com/leanprover/lean4/pull/7584) introduces a structure called `FormatConfig`, which provides
additional configuration options for `GenericFormat`, such as whether
leap seconds should be allowed during parsing. By default, this option
is set to `false`.
* [#7751](https://github.com/leanprover/lean4/pull/7751) adds `Std.BaseMutex.tryLock` and `Std.Mutex.tryAtomically` as
well as unit tests for our locking and condition variable primitives.
* [#7755](https://github.com/leanprover/lean4/pull/7755) adds `Std.RecursiveMutex` as a recursive/reentrant equivalent to
`Std.Mutex`.
* [#7771](https://github.com/leanprover/lean4/pull/7771) adds a barrier primitive as `Std.Barrier`.
### Finite Types
* [#7228](https://github.com/leanprover/lean4/pull/7228) adds simprocs to reduce expressions involving `IntX`.
* [#7274](https://github.com/leanprover/lean4/pull/7274) adds lemmas about iterated conversions between finite types,
starting with something of type `IntX`.
* [#7340](https://github.com/leanprover/lean4/pull/7340) adds lemmas for iterated conversions between finite types which
start with `Nat`/`Int`/`Fin`/`BitVec` and then go through `UIntX`.
* [#7368](https://github.com/leanprover/lean4/pull/7368) adds lemmas for iterated conversions between finite types,
starting with something of type `Nat`/`Int`/`Fin`/`BitVec` and going
through `IntX`.
* [#7414](https://github.com/leanprover/lean4/pull/7414) adds the remaining lemmas about iterated conversions of finite
type that go through signed or unsigned bounded integers.
* [#7484](https://github.com/leanprover/lean4/pull/7484) adds some lemmas about operations defined on `UIntX`.
* [#7487](https://github.com/leanprover/lean4/pull/7487) adds the instance `Neg UInt8`.
* [#7592](https://github.com/leanprover/lean4/pull/7592) adds theory about signed finite integers relating operations and
conversion functions.
* [#7598](https://github.com/leanprover/lean4/pull/7598) adds miscellaneous results about `Nat` and `BitVec` that will be
required for `IntX` theory (#7592).
* [#7685](https://github.com/leanprover/lean4/pull/7685) contains additional material about `BitVec` and `Int` spun off
from #7592.
* [#7694](https://github.com/leanprover/lean4/pull/7694) contains additional material on `BitVec`, `Int` and `Nat`, split
off from #7592.
### Tree Map
* [#7270](https://github.com/leanprover/lean4/pull/7270) provides lemmas about the tree map functions `foldlM`, `foldl`,
`foldrM` and `foldr` and their interactions with other functions for
which lemmas already exist. Additionally, it generalizes the
`fold*`/`keys` lemmas to arbitrary tree maps, which were previously
stated only for the `DTreeMap α Unit` case.
* [#7331](https://github.com/leanprover/lean4/pull/7331) provides lemmas about the tree map function `insertMany` and its
interaction with other functions for which lemmas already exist. Most
lemmas about `ofList`, which is related to `insertMany`, are not
included.
* [#7360](https://github.com/leanprover/lean4/pull/7360) provides lemmas about the tree map function `ofList` and
interactions with other functions for which lemmas already exist.
* [#7367](https://github.com/leanprover/lean4/pull/7367) provides lemmas for the tree map functions `alter` and `modify`
and their interactions with other functions for which lemmas already
exist.
**BREAKING CHANGE:** The signature of `size_alter` was corrected for all four hash map types. Instead of relying on the boolean operations `contains` and `&&` in the if statements, we now use the `Prop`-based operations `Membership` and `And`.
* [#7412](https://github.com/leanprover/lean4/pull/7412) provides lemmas about the tree map that have been introduced to
the hash map in #7289.
* [#7419](https://github.com/leanprover/lean4/pull/7419) provides lemmas about the tree map function `modify` and its
interactions with other functions for which lemmas already exist.
* [#7437](https://github.com/leanprover/lean4/pull/7437) provides (some but not all) lemmas about the tree map function
`minKey?`.
* [#7556](https://github.com/leanprover/lean4/pull/7556) provides lemmas about the tree map function `minKey?` and its
interaction with other functions for which lemmas already exist.
* [#7600](https://github.com/leanprover/lean4/pull/7600) provides lemmas about the tree map function `minKey!` and its
interactions with other functions for which lemmas already exist.
* [#7626](https://github.com/leanprover/lean4/pull/7626) provides lemmas for the tree map function `minKeyD` and its
interactions with other functions for which lemmas already exist.
* [#7657](https://github.com/leanprover/lean4/pull/7657) provides lemmas for the tree map function `maxKey?` and its
interactions with other functions for which lemmas already exist.
* [#7660](https://github.com/leanprover/lean4/pull/7660) provides lemmas for the tree map function `minKey` and its
interactions with other functions for which lemmas already exist.
* [#7664](https://github.com/leanprover/lean4/pull/7664) fixes a bug in the definition of the tree map functions `maxKey`
and `maxEntry`. Moreover, it provides lemmas for these functions and
their interactions with other functions for which lemmas already exist.
* [#7674](https://github.com/leanprover/lean4/pull/7674) adds missing lemmas about the tree map: the `minKey*` variants
return the head of `keys`, `keys` and `toList` are ordered, and `getKey*
t.minKey?` equals the minimum.
* [#7675](https://github.com/leanprover/lean4/pull/7675) provides lemmas about the tree map function `maxKeyD` and its
interactions with other functions for which lemmas already exist.
* [#7686](https://github.com/leanprover/lean4/pull/7686) provides lemmas for the tree map function `maxKey!` and its
interactions with other functions for which lemmas already exist.
* [#7695](https://github.com/leanprover/lean4/pull/7695) removes simp lemmas about the tree map with a metavariable in
the head of the discrimination pattern.
* [#7697](https://github.com/leanprover/lean4/pull/7697) is a follow-up to #7695, which removed `simp` attributes from
tree map lemmas with bad discrimination patterns. In this PR, we
introduce some `Ord`-based lemmas that are more simp-friendly.
### BitVec API
* [#7104](https://github.com/leanprover/lean4/pull/7104) adds `BitVec.[toNat|toFin|toInt]_[sshiftRight|sshiftRight']`
plus variants with `of_msb_*`. While at it, we also add
`toInt_zero_length` and `toInt_of_zero_length`. In support of our main
theorem we add `toInt_shiftRight_lt` and `le_toInt_shiftRight`, which
make the main theorem automatically derivable via omega.
* [#7225](https://github.com/leanprover/lean4/pull/7225) contains `BitVec.(toInt, toFin)_twoPow` theorems, completing the
API for `BitVec.*_twoPow`. It also expands the `toNat_twoPow` API with
`toNat_twoPow_of_le`, `toNat_twoPow_of_lt`, as well as
`toNat_twoPow_eq_if` and moves `msb_twoPow` up, as it is used in the
`toInt_msb` proof.
* [#7415](https://github.com/leanprover/lean4/pull/7415) adds a few lemmas about the interactions of `BitVec` with `Fin`
and `Nat`.
* [#7420](https://github.com/leanprover/lean4/pull/7420) generalizes `BitVec.toInt_[lt|le]'` to not require `0 < w`.
* [#7465](https://github.com/leanprover/lean4/pull/7465) adds the theorem:
```lean
theorem lt_allOnes_iff {x : BitVec w} : x < allOnes w ↔ x ≠ allOnes w
```
to simplify comparisons against `-1#w`. This is a corollary of the
existing lemma:
```lean
theorem allOnes_le_iff {x : BitVec w} : allOnes w ≤ x ↔ x = allOnes w
```
* [#7599](https://github.com/leanprover/lean4/pull/7599) adds the SMT-LIB overflow-detection operators `BitVec.(usubOverflow,
ssubOverflow)`, according to the [SMT-LIB
standard](https://github.com/SMT-LIB/SMT-LIB-2/blob/2.7/Theories/FixedSizeBitVectors.smt2),
and the theorems `BitVec.(usubOverflow_eq, ssubOverflow_eq)` proving
these definitions equivalent to the corresponding `BitVec` library
functions.
* [#7604](https://github.com/leanprover/lean4/pull/7604) adds bitvector theorems that push negation into other
operations, following Hacker's Delight, Chapter 2-1.
* [#7605](https://github.com/leanprover/lean4/pull/7605) adds theorems `BitVec.[(toInt, toFin)_(extractLsb,
extractLsb')]`, completing the API for `BitVec.(extractLsb,
extractLsb')`.
* [#7616](https://github.com/leanprover/lean4/pull/7616) introduces `BitVec.(toInt, toFin)_rotate(Left, Right)`,
completing the API for `BitVec.rotate(Left, Right)`.
* [#7658](https://github.com/leanprover/lean4/pull/7658) introduces `BitVec.(toFin_signExtend_of_le, toFin_signExtend)`,
completing the API for `BitVec.signExtend`.
* [#7661](https://github.com/leanprover/lean4/pull/7661) adds theorems `BitVec.[(toFin, toInt)_setWidth',
msb_setWidth'_of_lt, toNat_lt_twoPow_of_le, toInt_setWidth'_of_lt]`,
completing the API for `BitVec.setWidth'`.
* [#7699](https://github.com/leanprover/lean4/pull/7699) adds the `BitVec.toInt_srem` lemma, relating `BitVec.srem` with
`Int.tmod`.
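The new lemma plausibly takes the following shape; the exact statement and any side conditions in the library may differ:
```lean
-- Paraphrased sketch (assumed shape, not the verified statement): the
-- signed interpretation of `srem` agrees with truncated remainder.
example {w : Nat} (x y : BitVec w) :
    (x.srem y).toInt = x.toInt.tmod y.toInt :=
  BitVec.toInt_srem
```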
### Bitwuzla Rewrite Rules
* [#7424](https://github.com/leanprover/lean4/pull/7424) proves Bitwuzla's rule
[`BV_ZERO_EXTEND_ELIM`](https://github.com/bitwuzla/bitwuzla/blob/6a1a768987cca77f36ebfe06f3a786348a481bbd/src/rewrite/rewrites_bv.cpp#L4021-L4033):
```lean
theorem setWidth_eq_append {v : Nat} {x : BitVec v} {w : Nat} (h : v ≤ w) :
    x.setWidth w = ((0#(w - v)) ++ x).cast (by omega)
```
* [#7426](https://github.com/leanprover/lean4/pull/7426) adds the Bitwuzla rewrite rule
[`BV_EXTRACT_FULL`](https://github.com/bitwuzla/bitwuzla/blob/6a1a768987cca77f36ebfe06f3a786348a481bbd/src/rewrite/rewrites_bv.cpp#L1236-L1253),
which is useful for the bitblaster to simplify `extractLsb'` based
expressions.
* [#7427](https://github.com/leanprover/lean4/pull/7427) implements the bitwuzla rule
[`BV_CONCAT_EXTRACT`](https://github.com/bitwuzla/bitwuzla/blob/main/src/rewrite/rewrites_bv.cpp#L1146-L1176).
This will be used by the bitblaster to simplify adjacent `extract`s
into a single `extract`.
* [#7454](https://github.com/leanprover/lean4/pull/7454) implements the bitwuzla rule
[BV_SIGN_EXTEND_ELIM](https://github.com/bitwuzla/bitwuzla/blob/main/src/rewrite/rewrites_bv.cpp#L3638-L3663),
which rewrites a `signExtend x` as an `append` of the appropriate sign
bits, followed by the bits of `x`.
* [#7461](https://github.com/leanprover/lean4/pull/7461) introduces a bitvector associativity/commutativity normalization
on bitvector terms of the form `(a * b) = (c * d)` for `a, b, c, d`
bitvectors. This mirrors Bitwuzla's `PassNormalize::process`'s
`PassNormalize::normalize_eq_add_mul`.
* [#7481](https://github.com/leanprover/lean4/pull/7481) implements the Bitwuzla rewrite `BV_ADD_NEG_MUL`, along with
associated lemmas to streamline the proof. The rule is
`bvneg (bvadd a (bvmul a b)) = (bvmul a (bvnot b))`, or spelled in Lean:
```lean
theorem neg_add_mul_eq_mul_not {x y : BitVec w} :
    - (x + x * y) = (x * ~~~ y)
```
* [#7482](https://github.com/leanprover/lean4/pull/7482) implements the
[BV_EXTRACT_CONCAT](https://github.com/bitwuzla/bitwuzla/blob/6a1a768987cca77f36ebfe06f3a786348a481bbd/src/rewrite/rewrites_bv.cpp#L1264)
rule from Bitwuzla, which explains how to extract bits from an append.
We first prove a 'master theorem' which has the full case analysis, from
which we rapidly derive the necessary `BV_EXTRACT_CONCAT` theorems:
```lean
theorem extractLsb'_append_eq_ite {v w} {xhi : BitVec v} {xlo : BitVec w} {start len : Nat} :
extractLsb' start len (xhi ++ xlo) =
if hstart : start < w
then
if hlen : start + len < w
then extractLsb' start len xlo
else
(((extractLsb' (start - w) (len - (w - start)) xhi) ++
extractLsb' start (w - start) xlo)).cast (by omega)
else
extractLsb' (start - w) len xhi
```
* [#7493](https://github.com/leanprover/lean4/pull/7493) implements the Bitwuzla rewrite rule
[NORM_BV_ADD_MUL](https://github.com/bitwuzla/bitwuzla/blob/e09c50818b798f990bd84bf61174553fef46d561/src/rewrite/rewrites_bv_norm.cpp#L19-L23),
and the associated lemmas to allow for expedient rewriting:
```lean
theorem neg_add_mul_eq_mul_not {x y : BitVec w} : - (x + x * y) = x * ~~~ y
```
* [#7508](https://github.com/leanprover/lean4/pull/7508) shows that negation commutes with left shift, which is the
Bitwuzla rewrite
[NORM_BV_SHL_NEG](https://github.com/bitwuzla/bitwuzla/blob/e09c50818b798f990bd84bf61174553fef46d561/src/rewrite/rewrites_bv_norm.cpp#L142-L148).
* [#7594](https://github.com/leanprover/lean4/pull/7594) implements the Bitwuzla rewrites
[BV_EXTRACT_ADD_MUL](https://github.com/bitwuzla/bitwuzla/blob/e09c50818b798f990bd84bf61174553fef46d561/src/rewrite/rewrites_bv.cpp#L1495-L1510),
which witnesses that the high bits at `i >= len` do not affect the bits
of the product up to `len`.
* [#7595](https://github.com/leanprover/lean4/pull/7595) implements the addition rewrite from the Bitwuzla rewrite
[BV_EXTRACT_ADD_MUL](https://github.com/bitwuzla/bitwuzla/blob/e09c50818b798f990bd84bf61174553fef46d561/src/rewrite/rewrites_bv.cpp#L1495-L1510),
which witnesses that the high bits at `i >= len` do not affect the bits
of the sum up to `len`:
```lean
theorem extractLsb'_add {w len} {x y : BitVec w} (hlen : len ≤ w) :
(x + y).extractLsb' 0 len = x.extractLsb' 0 len + y.extractLsb' 0 len
```
* [#7757](https://github.com/leanprover/lean4/pull/7757) adds the Bitwuzla rewrite `NORM_BV_ADD_CONCAT` for symbolic
simplification of add-of-append.
## Compiler
* [#7398](https://github.com/leanprover/lean4/pull/7398) fixes a scoping error in the cce (Common Case Elimination) pass
of the old code generator. This pass would create a join point for
common minor premises even if some of those premises were in the bodies
of locally defined functions, which results in an improperly scoped
reference to a join point. The fix is to save/restore candidates when
visiting a lambda.
* [#7710](https://github.com/leanprover/lean4/pull/7710) improves the memory use of Lean, especially for longer-running
server processes, by up to 60%.
## Pretty Printing
* [#7589](https://github.com/leanprover/lean4/pull/7589) changes the structure instance notation pretty printer so that
fields are omitted if their value is definitionally equal to the default
value for the field (up to reducible transparency). Setting
`pp.structureInstances.defaults` to true forces such fields to be pretty
printed anyway.
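A hypothetical illustration of the new behavior (`Config`, `verbose`, and `retries` are made-up names, not from the PR):
```lean
structure Config where
  verbose : Bool := false
  retries : Nat := 3

-- By default, fields whose values are definitionally equal to their
-- defaults are now omitted when printing. Forcing them to appear:
set_option pp.structureInstances.defaults true in
#check ({ verbose := true } : Config)
```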
## Documentation
* [#7198](https://github.com/leanprover/lean4/pull/7198) makes the docstrings in the `Char` namespace follow the
documentation conventions.
* [#7204](https://github.com/leanprover/lean4/pull/7204) adds docstrings for the `Id` monad.
* [#7246](https://github.com/leanprover/lean4/pull/7246) updates existing docstrings for Bool and adds the missing ones.
* [#7288](https://github.com/leanprover/lean4/pull/7288) fixes the docstring of `List.removeAll`.
* [#7365](https://github.com/leanprover/lean4/pull/7365) updates docstrings and adds some that are missing.
* [#7452](https://github.com/leanprover/lean4/pull/7452) makes the style of all `List` docstrings that appear in the
language reference consistent.
* [#7476](https://github.com/leanprover/lean4/pull/7476) adds missing docstrings for `IO` and related code and makes the
style of the existing docstrings consistent.
* [#7492](https://github.com/leanprover/lean4/pull/7492) adds missing `Array` docstrings and makes their style
consistent.
* [#7506](https://github.com/leanprover/lean4/pull/7506) adds missing `String` docstrings and makes the existing ones
consistent in style.
* [#7523](https://github.com/leanprover/lean4/pull/7523) adds missing docstrings and makes docstring style consistent for
`System` and `System.FilePath`.
* [#7528](https://github.com/leanprover/lean4/pull/7528) makes the docstrings for `Thunk` consistent with the style of
the others.
* [#7534](https://github.com/leanprover/lean4/pull/7534) adds missing `Syntax`-related docstrings and makes the existing
ones consistent in style with the others.
* [#7535](https://github.com/leanprover/lean4/pull/7535) revises the docstring for `funext`, making it more concise and
adding a reference to the manual for more details.
* [#7548](https://github.com/leanprover/lean4/pull/7548) adds missing monad transformer docstrings and makes their style
consistent.
* [#7552](https://github.com/leanprover/lean4/pull/7552) adds missing `Nat` docstrings and makes their style consistent.
* [#7564](https://github.com/leanprover/lean4/pull/7564) updates the docstrings for `ULift` and `PLift`, making their
style consistent with the others.
* [#7568](https://github.com/leanprover/lean4/pull/7568) adds missing `Int` docstrings and makes the style of all of them
consistent.
* [#7602](https://github.com/leanprover/lean4/pull/7602) adds missing docstrings for fixed-width integer operations and
makes their style consistent.
* [#7607](https://github.com/leanprover/lean4/pull/7607) adds docstrings for `String.drop` and `String.dropRight`.
* [#7613](https://github.com/leanprover/lean4/pull/7613) adds a variety of docstrings for names that appear in the
manual.
* [#7635](https://github.com/leanprover/lean4/pull/7635) adds missing docstrings for `Substring` and makes the style of
`Substring` docstrings consistent.
* [#7642](https://github.com/leanprover/lean4/pull/7642) reviews the docstrings for `Float` and `Float32`, adding missing
ones and making their format consistent.
* [#7645](https://github.com/leanprover/lean4/pull/7645) adds missing docstrings and makes docstring style consistent for
`ForM`, `ForIn`, `ForIn'`, `ForInStep`, `IntCast`, and `NatCast`.
* [#7711](https://github.com/leanprover/lean4/pull/7711) adds the last few missing docstrings that appear in the manual.
* [#7713](https://github.com/leanprover/lean4/pull/7713) makes the BitVec docstrings match each other and the rest of the
API in style.
## Server
* [#7178](https://github.com/leanprover/lean4/pull/7178) fixes a race condition in the language server that would
sometimes cause it to drop requests and never respond to them when
editing the header of a file. This in turn could cause semantic
highlighting to stop functioning in VS Code, as VS Code would stop
emitting requests when a prior request was dropped, and also cause the
InfoView to become defective. It would also cause import auto-completion
to feel a bit wonky, since these requests were sometimes dropped. This
race condition has been present in the language server since its first
version in 2020.
* [#7223](https://github.com/leanprover/lean4/pull/7223) implements parallel watchdog request processing so that requests
that are processed by the watchdog cannot block the main thread of the
watchdog anymore.
* [#7240](https://github.com/leanprover/lean4/pull/7240) adds a canonical syntax for linking to sections in the language
reference along with formatting of examples in docstrings according to
the docstring style guide.
* [#7343](https://github.com/leanprover/lean4/pull/7343) mitigates an issue where inserting an inlay hint in VS Code by
double-clicking would insert the inlay hint at the wrong position right
after an edit.
* [#7344](https://github.com/leanprover/lean4/pull/7344) combines the auto-implicit inlay hint tooltips into a single
tooltip. This works around an issue in VS Code where VS Code fails to
update hovers for tooltips in adjacent inlay hint parts when moving the
mouse.
* [#7346](https://github.com/leanprover/lean4/pull/7346) fixes an issue where the language server would run into an inlay
hint assertion violation when deleting a file that is still open in the
language server.
* [#7366](https://github.com/leanprover/lean4/pull/7366) adds server-side support for dedicated 'unsolved goals' and
'goals accomplished' diagnostics that will have special support in the
Lean 4 VS Code extension. The special 'unsolved goals' diagnostic is
adapted from the 'unsolved goals' error diagnostic, while the 'goals
accomplished' diagnostic is issued when a `theorem` or `Prop`-typed
`example` has no errors or `sorry`s. The Lean 4 VS Code extension
companion PR is at leanprover/vscode-lean4#585.
* [#7376](https://github.com/leanprover/lean4/pull/7376) ensures `weak` options do not have to be repeated in both Lake
`leanOptions` and `moreServerOptions`.
* [#7882](https://github.com/leanprover/lean4/pull/7882) fixes a regression where elaboration of a previous document
version is not cancelled on changes to the document.
## Lake
* [#7185](https://github.com/leanprover/lean4/pull/7185) refactors Lake's build internals to enable the introduction of
targets and facets beyond packages, modules, and libraries. Facets,
build keys, build info, and CLI commands have been generalized to
arbitrary target types.
* [#7393](https://github.com/leanprover/lean4/pull/7393) adds autocompletion support for Lake configuration fields in the
Lean DSL at the indented whitespace after an existing field.
Autocompletion in the absence of any fields is currently still not
supported.
**Breaking change:** The nonstandard braced configuration syntax now uses a semicolon `;` rather than a comma `,` as a separator. Indentation can still be used as an alternative to the separator.
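  As a sketch, a braced library declaration would now look something like the following (the field names here are illustrative; check your own configuration for the fields you actually set):

  ```lean
  -- Before: fields separated by commas
  -- lean_lib MyLib { roots := #[`MyLib], defaultFacets := #[LeanLib.staticFacet] }

  -- After: fields separated by semicolons
  lean_lib MyLib { roots := #[`MyLib]; defaultFacets := #[LeanLib.staticFacet] }
  ```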
* [#7399](https://github.com/leanprover/lean4/pull/7399) reverts the new builtin initializers, elaborators, and macros in
Lake back to non-builtin.
* [#7504](https://github.com/leanprover/lean4/pull/7504) augments the Lake configuration data structure declarations
  (e.g., `PackageConfig`, `LeanLibConfig`) to produce additional metadata
  which is used to automatically generate the Lean & TOML encoders and
  decoders via metaprograms.
* [#7543](https://github.com/leanprover/lean4/pull/7543) unifies the configuration declarations of dynamic targets,
external libraries, Lean libraries, and Lean executables into a single
data type stored in a unified map within a package.
**Breaking change:** Users can no longer define multiple targets with the same name but different kinds (e.g., a Lean executable and a Lean library both named `foo`). This should not affect most users, as the Lake DSL already discouraged this.
* [#7576](https://github.com/leanprover/lean4/pull/7576) changes Lake to produce and use response files on Windows when
building executables and libraries (static and shared). This is done to
avoid potentially exceeding Windows command line length limits.
* [#7586](https://github.com/leanprover/lean4/pull/7586) changes the `static.export` facet for Lean libraries to produce
thin static libraries.
* [#7608](https://github.com/leanprover/lean4/pull/7608) removes the use of the Lake plugin in the Lake build and in
configuration files.
* [#7667](https://github.com/leanprover/lean4/pull/7667) changes Lake to log messages from a Lean configuration the same
  way it logs messages from a Lean build. This, for instance, removes
  redundant severity captions.
* [#7703](https://github.com/leanprover/lean4/pull/7703) adds `input_file` and `input_dir` as new target types. It also
adds the `needs` configuration option for Lean libraries and
executables. This option generalizes `extraDepTargets` (which will be
deprecated in the future), providing much richer support for declaring
dependencies across package and target type boundaries.
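  A hypothetical lakefile sketch of the new target types and the `needs` option (the exact field syntax is an assumption based on the changelog; consult the Lake documentation for details):

  ```lean
  -- Declare a directory of build inputs as a target
  input_dir assets where
    path := "assets"

  -- An executable that declares a dependency on the `assets` target,
  -- replacing what previously required `extraDepTargets`
  lean_exe myapp where
    needs := #[`assets]
  ```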
* [#7716](https://github.com/leanprover/lean4/pull/7716) adds the `moreLinkObjs` and `moreLinkLibs` options for Lean
  packages, libraries, and executables. These serve as functional
  replacements for `extern_lib` and provide additional flexibility.
**Breaking change:** `precompileModules` now only loads modules of the current library individually. Modules of other libraries are loaded together via that library's shared library.
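  A minimal sketch of the new linking options in a lakefile (the option names come from the changelog; the target names and value syntax here are assumptions for illustration):

  ```lean
  lean_exe myapp where
    -- Link additional object-file targets into the executable
    moreLinkObjs := #[`myObjTarget]
    -- Link additional library targets into the executable
    moreLinkLibs := #[`myLibTarget]
  ```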
* [#7732](https://github.com/leanprover/lean4/pull/7732) deprecates `extraDepTargets` and fixes a bug caused by the
configuration refactor.
* [#7758](https://github.com/leanprover/lean4/pull/7758) removes the `-lstdcpp` extra link argument from the FFI example.
It is not actually necessary.
* [#7763](https://github.com/leanprover/lean4/pull/7763) corrects build key fetches to produce jobs with the proper data
kinds and fixes a failed coercion from key literals to targets.
## Other
* [#7326](https://github.com/leanprover/lean4/pull/7326) updates the release notes script to better indent PR
descriptions.
* [#7453](https://github.com/leanprover/lean4/pull/7453) adds "(kernel)" to the message for the kernel-level application
type mismatch error.
* [#7769](https://github.com/leanprover/lean4/pull/7769) fixes a number of bugs in the release automation scripts, adds a
script to merge tags into remote `stable` branches, and makes the main
`release_checklist.py` script give suggestions to call the
`merge_remote.py` and `release_steps.py` scripts when needed.