diff --git a/site/dune b/site/dune index 13fc698..550c914 100644 --- a/site/dune +++ b/site/dune @@ -31,7 +31,7 @@ (target new_book.html) (deps README.md) (action - (system "pandoc -s %{deps} -o %{target} --metadata title='Curious OCaml'"))) + (system "pandoc -s %{deps} -o %{target} --katex --metadata title='Curious OCaml'"))) (rule (alias old_lectures_as_book) @@ -40,4 +40,4 @@ (deps old_lectures_as_book.md) (action (system - "pandoc -s %{deps} -o %{target} --metadata title='Functional Programming Course'"))) + "pandoc -s %{deps} -o %{target} --katex --metadata title='Functional Programming Course'"))) diff --git a/site/new_book.html b/site/new_book.html index 8c54aa4..15c68ec 100644 --- a/site/new_book.html +++ b/site/new_book.html @@ -152,8 +152,25 @@ div.column{display: inline-block; vertical-align: top; width: 50%;} div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;} ul.task-list{list-style: none;} - .display.math{display: block; text-align: center; margin: 0.5rem auto;} + + +
@@ -174,20 +191,20 @@

From logic rules to - - - - - +\top +\bot +\wedge +\vee +\rightarrow -a ∧ b -a ∨ b -a → b +a \wedge b +a \vee b +a \rightarrow b truth @@ -199,39 +216,39 @@

From logic rules to “trivial” “impossible” -a and b -a or b -a gives b +a and b +a or b +a gives b shouldn’t get got both got at least one -given a, we get b +given a, we get b

How can we define them? Think in terms of derivation trees:

-

$$ +

\frac{ \frac{\frac{\,}{\text{a premise}} \; \frac{\,}{\text{another premise}}}{\frac{\,}{\text{some fact}}} \; \frac{\frac{\,}{\text{a premise}} \; \frac{\,}{\text{another premise}}}{\frac{\,}{\text{another fact}}}} {\text{final conclusion}} -$$

+

To define the connectives, we provide rules for using them: for -example, a rule $\frac{a \; b}{c}$ +example, a rule \frac{a \; b}{c} matches parts of the tree that have two premises, represented by -variables a and b, and have any conclusion, -represented by variable c.

+variables a and b, and have any conclusion, represented by +variable c.
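For instance (an illustrative instantiation, not part of the original rules), taking a to be "a premise", b to be "another premise" and c to be "some fact", the rule matches the left subtree of the derivation tree above:

$$ \frac{\frac{\,}{\text{a premise}} \; \frac{\,}{\text{another premise}}}{\text{some fact}} $$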

Rules for Logical Connectives

Introduction rules say how to produce a connective. Elimination rules diff --git a/site/old_lectures_as_book.html b/site/old_lectures_as_book.html index 4b16799..7831a69 100644 --- a/site/old_lectures_as_book.html +++ b/site/old_lectures_as_book.html @@ -215,8 +215,25 @@ code span.va { color: #19177c; } /* Variable */ code span.vs { color: #4070a0; } /* VerbatimString */ code span.wa { color: #60a0b0; font-weight: bold; font-style: italic; } /* Warning */ - .display.math{display: block; text-align: center; margin: 0.5rem auto;} + + +

@@ -311,21 +328,20 @@

1 In the Beginning there was

How can we define them? Think in terms of derivation trees:

-

$$ \frac{\begin{array}{ll} +

\frac{\begin{array}{ll} \frac{\begin{array}{ll} \frac{\,}{\text{a premise}} & \frac{\,}{\text{another premise}} \end{array}}{\text{some fact}} & \frac{\frac{\,}{\text{this we have by default}}}{\text{another fact}} - \end{array}}{\text{final conclusion}} $$

+ \end{array}}{\text{final conclusion}}

Define by providing rules for using the connectives: for example, a -rule $\frac{\begin{array}{ll} a & b -\end{array}}{c}$ matches parts of the tree that have two -premises, represented by variables a and b, and have any conclusion, -represented by variable c.

+rule \frac{\begin{array}{ll} a & b +\end{array}}{c} matches parts of the tree that have two premises, +represented by variables a and b, and have any conclusion, represented by +variable c.

Try to use only the connective you define in its definition.

2 Rules for Logical Connectives

Introduction rules say how to produce a connective.

@@ -389,25 +405,24 @@

1 In the Beginning there was

Notations

-

$$ {{{\frac{\,}{a} \tiny{x}} \atop +

{{{\frac{\,}{a} \tiny{x}} \atop {\text{\textbar}}} \atop {b}} \text{, \ \ or \ \ } {{{\frac{\,}{a} \tiny{x}} \atop {\text{\textbar}}} \atop {c}} -$$

-

match any subtree that derives b (or c) and can use a (by assumption $\frac{\,}{a} \tiny{x}$) although otherwise -a might not be warranted. For +

+

match any subtree that derives b (or +c) and can use a (by assumption \frac{\,}{a} \tiny{x}) although otherwise +a might not be warranted. For example:

-

$$ +

\frac{\frac{\frac{\frac{\frac{\,}{\text{sunny}} \small{x}}{\text{go outdoor}}}{\text{playing}}}{\text{happy}}}{\text{sunny} \rightarrow - \text{happy}} \small{\text{ using } x} $$

+ \text{happy}} \small{\text{ using } x}

Such an assumption can only be used in the matched subtree! But it can be used several times, e.g. if someone’s mood is more difficult to influence:

-

$$ \frac{\frac{\begin{array}{ll} +

\frac{\frac{\begin{array}{ll} \frac{\frac{\frac{\,}{\text{sunny}} \small{x}}{\text{go outdoor}}}{\text{playing}} & \frac{\begin{array}{ll} \frac{\,}{\text{sunny}} \small{x} & @@ -415,17 +430,17 @@

1 In the Beginning there was \small{x}}{\text{go outdoor}} \end{array}}{\text{nice view}} \end{array}}{\text{happy}}}{\text{sunny} \rightarrow \text{happy}} - \small{\text{ using } x} $$Elimination rule for disjunction + \small{\text{ using } x} Elimination rule for disjunction represents reasoning by cases.

How can we use the fact that it is sunnycloudy (but not rainy)?

-

$$ \frac{\begin{array}{rrl} +class="math inline">\veecloudy (but not rainy)?

+

\frac{\begin{array}{rrl} \frac{\,}{\text{sunny} \vee \text{cloudy}} \tiny{\text{ forecast}} & \frac{\frac{\,}{\text{sunny}} \tiny{x}}{\text{no-umbrella}} & \frac{\frac{\,}{\text{cloudy}} \tiny{y}}{\text{no-umbrella}} \end{array}}{\text{no-umbrella}} \small{\text{ using } x, y} -$$

+

We know that it will be sunny or cloudy, by watching the weather forecast. If it will be sunny, we won’t need an umbrella. If it will be cloudy, we won’t need an umbrella. Therefore, we won’t need an umbrella. We @@ -433,15 +448,15 @@

1 In the Beginning there was induction (it is somewhat similar to reasoning by cases). Example rule for induction on natural numbers:

$$ \frac{\begin{array}{ll} p (0) & {{{\frac{\,}{p (x)} \tiny{x}} \atop {\text{\textbar}}} \atop {p (x + 1)}} \end{array}}{p (n)} \small{\text{ using } x} $$

-

So we get any p for any -natural number n, provided we -can get it for 0, and using it for -x we can derive it for the -successor x + 1, where x is a unique variable (we cannot -substitute for it some particular number, because we write “using x” on the side).# 3 Logos was -Programmed in OCaml

+

So we get any p for any natural +number n, provided we can get it for +0, and using it for x we can derive it for the successor x + 1, where x is a unique variable (we cannot substitute +for it some particular number, because we write “using x” on the side).# 3 Logos was Programmed in +OCaml

@@ -553,16 +568,16 @@

3.1 Definitions

type ty = some type.

  • Writing A (s) : A ofa| B ofb in the table was -cheating. Usually we have to define the type and then use it, e.g. using -int for a and -string for b:

    +class="math inline">s) : A ofa| B ofb in the table was cheating. +Usually we have to define the type and then use it, e.g. using +int for a and +string for b:

    type int_string_choice = A of int | B of string

    allows us to write A (s) : int_string_choice.

  • +class="math inline">s) : int_string_choice.

  • Without the type definition, it is difficult to know what other variants there are when one infers (i.e. “guesses”, computes) the type!

  • @@ -579,15 +594,15 @@

    3.1 Definitions

    and create its values: {a = 7; b = "Mary"}. * We access the fields of records using the dot notation:

    {a=7; b="Mary"}.b = "Mary".Recursive expression $\text{{\texttt{rec}} } x \text{{\texttt{=}}} -e$ in the table was cheating: rec (usually called +class="math inline">\text{{\texttt{rec}} } x \text{{\texttt{=}}} +e in the table was cheating: rec (usually called fix) cannot appear alone in OCaml! It must be part of a definition.

    Definitions for expressions are introduced by rules a bit more complex than these:

    $$ $$

    (note that this rule is the same as introducing and eliminating ), and:

    +class="math inline">\rightarrow), and:

    $$ $$

    We will cover what is missing in the above rules when we talk about polymorphism. * Type definitions we have seen above are @@ -595,20 +610,18 @@

    3.1 Definitions

    expressions, and they extend from the point they occur till the end of the source file or interactive session. * let-in definitions for expressions: $\text{{\texttt{let}} } x \text{{\texttt{=}}} e_{1} -\text{ \text{{\texttt{in}}} } e_{2}$ are local, x is only visible in e2. But let -definitions are global: placing $\text{{\texttt{let}} } x \text{{\texttt{=}}} -e_{1}$ at the top-level makes x visible from after e1 till the end of the -source file or interactive session. * In the interactive session, we -mark an end of a top-level “sentence” by ;; – it is unnecessary in -source files. * Operators like +, *, <, =, are names of functions. -Just like other names, you can use operator names for your own -functions:

    +class="math inline">\text{{\texttt{let}} } x \text{{\texttt{=}}} e_{1} +\text{ \text{{\texttt{in}}} } e_{2} are local, x is only visible in e_{2}. But let definitions are +global: placing \text{{\texttt{let}} } +x \text{{\texttt{=}}} e_{1} at the top-level makes x visible from after e_{1} till the end of the source file or +interactive session. * In the interactive session, we mark an end of a +top-level “sentence” by ;; – it is unnecessary in source files. * +Operators like +, *, <, =, are names of functions. Just like other +names, you can use operator names for your own functions:

    let (+:) a b = String.concat "" [a; b];;    (* Special way of defining *)
    "Alpha" +: "Beta";;                         (* but normal way of using operators. *)
    * Operators in OCaml are not overloaded. It means that @@ -634,9 +647,8 @@

    3.1 Definitions

  • Practice using the OCaml interpreter as a calculator:

    1. The volume of a sphere with radius r is $\frac{4}{3} \pi r^3$. What is the volume of -a sphere with radius 5?

      +class="math inline">r is \frac{4}{3} +\pi r^3. What is the volume of a sphere with radius 5?

      Hint: 392.6 is wrong!

    2. Suppose the cover price of a book is $24.95, but bookstores get a 40% discount. Shipping costs $3 for the first copy and 75 cents for each @@ -649,12 +661,12 @@

      3.1 Definitions

    3. You’ve probably heard of the Fibonacci numbers before, but in case you haven’t, they’re defined by the following recursive relationship:

      -

      $$ \left\lbrace\begin{array}{llll} +

      \left\lbrace\begin{array}{llll} f (0) & = & 0 & \\\\\\ f (1) & = & 1 & \\\\\\ f (n + 1) & = & f (n) + f (n - 1) & \text{for } n = 2, 3, \ldots -\end{array}\right. $$

      +\end{array}\right.

      Write a recursive function to calculate these numbers.
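A minimal OCaml sketch of one possible solution (the name fib is my own choice, not given by the exercise):

```ocaml
(* Naive recursion, following the relation above. *)
let rec fib n =
  if n = 0 then 0
  else if n = 1 then 1
  else fib (n - 1) + fib (n - 2)
```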

    4. A palindrome is a word that is spelled the same backward and forward, like “noon” and “redivider”. Recursively, a word is a @@ -673,17 +685,14 @@

      3.1 Definitions

      and returns true if it is a palindrome and false otherwise.
  • The greatest common divisor (GCD) of a and b is the largest number that divides -both of them with no remainder.

    +class="math inline">a and b is +the largest number that divides both of them with no remainder.

    One way to find the GCD of two numbers is Euclid’s algorithm, which -is based on the observation that if r is the remainder when a is divided by b, then gcd (a,b) = gcd (b,r). -As a base case, we can consider gcd (a,0) = a.

    +is based on the observation that if r +is the remainder when a is divided by +b, then \gcd +(a, b) = \gcd (b, r). As a base case, we can consider \gcd (a, 0) = a.

    Write a function called gcd that takes parameters a and b and returns their greatest common divisor.
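A possible OCaml sketch, directly following Euclid’s observation above:

```ocaml
(* gcd (a, b) = gcd (b, r) where r = a mod b; as a base case, gcd (a, 0) = a. *)
let rec gcd a b =
  if b = 0 then a
  else gcd b (a mod b)
```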

    If you need help, see 1 A Glimpse at Type Inference on some simple examples. Starting with fun x -> x. [?] will mean “dunno yet”.

    $$ +\begin{matrix} + & \frac{[?]}{\text{{\texttt{fun x -> x}}} : [?]} & +\text{use } + \rightarrow \text{ introduction:}\\\\\\ + & \frac{\frac{\,}{\text{{\texttt{x}}} : a} + \tiny{x}}{\text{{\texttt{fun x -> x}}} : [?] \rightarrow +[?]} + & \frac{\,}{\text{{\texttt{x}}} : a} \tiny{x} \text{ matches + with } + +{{{\frac{\,}{x : a} \tiny{x}} \atop {\text{\textbar}}} \atop {e : b}} +\text{ since } e = \text{{\texttt{x}}}\\\\\\ + & \frac{\frac{\,}{\text{{\texttt{x}}} : a} + \tiny{x}}{\text{{\texttt{fun x -> x}}} : a \rightarrow a} +& + \text{since } b = a \text{ because } x : a \text{ matched with } e : b +\end{matrix}

    $$

    -

    Because a is arbitrary, -OCaml puts a type variable 'a for it:

    +

    Because a is arbitrary, OCaml puts a +type variable 'a for it:

    # fun x -> x;;
     - : 'a -> 'a = <fun>

    Let’s try fun x -> x+1, which is the same as fun x -> ((+) x) 1(try it with OCaml/F#!). [?α] will mean “dunno yet, but the -same as in other places with [?α]”.

    -

    $$ \begin{matrix} +class="math inline">[? \alpha] will mean “dunno yet, but the same +as in other places with [? +\alpha]”.

    +

    \begin{matrix} & \frac{[?]}{\text{{\texttt{fun x -> ((+) x) 1}}} : [?]} & \text{use } \rightarrow \text{ introduction:}\\\\\\ & \frac{\frac{[?]}{\text{{\texttt{((+) x) 1}}} : [? @@ -744,7 +770,7 @@

    1 A Glimpse at Type Inference

    \end{array}}{\text{{\texttt{((+) x) 1}}} : [? \alpha]}}{\text{{\texttt{fun x -> ((+) x) 1}}} : [?] \rightarrow [? - \alpha]} & \text{it's our \text{{\texttt{x}}}!}\\\\\\ + \alpha]} & \text{it's our \text{{\texttt{x}}}!}\\\\\\ & \frac{\frac{\begin{array}{ll} \frac{\begin{array}{ll} \frac{[?]}{\text{{\texttt{(+)}}} : [? \gamma] \rightarrow @@ -775,20 +801,20 @@

    1 A Glimpse at Type Inference

    \text{{\texttt{int}}}}}{\text{\text{{\texttt{fun x -> ((+) x) 1}}}} : \text{{\texttt{int}}} \rightarrow \text{{\texttt{int}}}} & -\end{matrix} $$

    +\end{matrix}
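The OCaml toplevel confirms the result of this derivation:

```ocaml
# fun x -> ((+) x) 1;;
- : int -> int = <fun>
```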

    1.1 Curried form

    When there are several arrows “on the same depth” in a function type, it means that the function returns a function: e.g. $\text{{\texttt{(+)}}} : \text{{\texttt{int}}} +class="math inline">\text{{\texttt{(+)}}} : \text{{\texttt{int}}} \rightarrow \text{{\texttt{int}}} \rightarrow -\text{{\texttt{int}}}$ is just a shorthand for $\text{{\texttt{(+)}}} : \text{{\texttt{int}}} +\text{{\texttt{int}}} is just a shorthand for \text{{\texttt{(+)}}} : \text{{\texttt{int}}} \rightarrow \left( \text{{\texttt{int}}} \rightarrow -\text{{\texttt{int}}} \right)$. It is very different from

    -

    $$ \text{{\texttt{fun f -> (f 1) + 1}}} -: \left( +\text{{\texttt{int}}} \right). It is very different from

    +

    \text{{\texttt{fun f -> (f 1) + 1}}} : +\left( \text{{\texttt{int}}} \rightarrow \text{{\texttt{int}}} - \right) \rightarrow \text{{\texttt{int}}} $$

    + \right) \rightarrow \text{{\texttt{int}}}

    For addition, instead of (fun x -> x+1) we can write ((+) 1). What expanded form does ((+) 1) correspond to exactly (computationally)?
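One way to experiment with this in the toplevel (the name incr below is my own, just for illustration):

```ocaml
# let incr = (+) 1;;
val incr : int -> int = <fun>
# incr 5;;
- : int = 6
```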

    @@ -939,9 +965,9 @@

    5 Interpreting Algebraic DTs as Polynomials

    Let’s do a peculiar translation: take a data type and replace | with +, * with ×, treating record types as tuple types +class="math inline">\times, treating record types as tuple types (i.e. erasing field names and translationg ; as ×).

    +class="math inline">\times).

    There is a special type for which we cannot build a value:

    type void
    @@ -962,28 +988,25 @@

    5 Interpreting

    Let’s have fun with it.

    type date = {year: int; month: int; day: int}
    -

    D = xxx = x3

    +

    D = xxx = x^3

    type 'a option = None | Some of 'a   (* built-in type *)
    -

    O = 1 + x

    +

    O = 1 + x

    type 'a my_list = Empty | Cons of 'a * 'a my_list
    -

    L = 1 + xL

    +

    L = 1 + xL

    type btree = Tip | Node of int * btree * btree
    -

    T = 1 + xTT = 1 + xT2

    +

    T = 1 + xTT = 1 + xT^2

    When translations of two types are equal according to laws of high-school algebra, the types are isomorphic, that is, there exist 1-to-1 functions from one type to the other.
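As a small illustration (the type and function names below are my own, not from the text): both bool * 'a and the two-variant type below translate to 2x = x + x, and the 1-to-1 functions witnessing the isomorphism are easy to write:

```ocaml
type 'a two_copies = First of 'a | Second of 'a   (* translates to x + x *)

(* bool * 'a translates to 2x; the two functions below are mutually inverse. *)
let to_pair = function First a -> (false, a) | Second a -> (true, a)
let of_pair = function (false, a) -> First a | (true, a) -> Second a
```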

    Let’s play with the type of binary trees:

    -

    $$ \begin{matrix} +

    \begin{matrix} T & = & 1 + xT^2 = 1 + xT + x^2 T^3 = 1 + x + x^2 T^2 + x^2 T^3 =\\\\\\ & = & 1 + x + x^2 T^2 (1 + T) = 1 + x (1 + xT^2 (1 + T)) -\end{matrix} $$

    +\end{matrix}

    Now let’s translate the resulting type:

    type repr =
    @@ -1072,13 +1095,12 @@ 

    5.1 Differentiating

    Take the “date” example:

    type date = {year: int; month: int; day: int}
    -

    $$ \begin{matrix} +

    \begin{matrix} D & = & xxx = x^3\\\\\\ \frac{\partial D}{\partial x} & = & 3 x^2 = xx + xx + xx -\end{matrix} $$

    -

    (we could have left it at 3xx as well). Now we -construct the type:

    +\end{matrix}

    +

    (we could have left it at 3 xx as +well). Now we construct the type:

    type date_deriv =
       Year of int * int | Month of int * int | Day of int * int
    @@ -1098,16 +1120,16 @@

    5.1 Differentiating

    Let’s do now the more difficult case of binary trees:

    type btree = Tip | Node of int * btree * btree
    -

    $$ \begin{matrix} +

    \begin{matrix} T & = & 1 + xT^2\\\\\\ \frac{\partial T}{\partial x} & = & 0 + T^2 + 2 xT \frac{\partial T}{\partial x} = TT + 2 xT \frac{\partial T}{\partial x} -\end{matrix} $$

    +\end{matrix}

    (again, we could expand further into $\frac{\partial T}{\partial x} = TT + xT +class="math inline">\frac{\partial T}{\partial x} = TT + xT \frac{\partial T}{\partial x} + xT \frac{\partial T}{\partial -x}$).

    +x}).

    Instead of translating 2 as bool, we will introduce new type for clarity:

    6 Homework

    Chapter 2: Derivation example Functional Programming

    Lecture 2: Algebra, Fig. 1

    Type inference example derivation

    -

    $$ \frac{[?]}{\text{{\texttt{fun x -> -((+) x) 1}}} : [?]} $$

    -

    use  →  introduction:

    -

    $$ \frac{\frac{[?]}{\text{{\texttt{((+) x) +

    \frac{[?]}{\text{{\texttt{fun x -> +((+) x) 1}}} : [?]}

    +

    \text{use } \rightarrow \text{ +introduction:}

    +

    \frac{\frac{[?]}{\text{{\texttt{((+) x) 1}}} : [? \alpha]}}{\text{{\texttt{fun x -> ((+) x) 1}}} : [?] \rightarrow - [? \alpha]} $$

    -

    use  →  elimination:

    -

    $$ \frac{\frac{\begin{array}{ll} + [? \alpha]}

    +

    \text{use } \rightarrow \text{ +elimination:}

    +

    \frac{\frac{\begin{array}{ll} \frac{[?]}{\text{{\texttt{(+) x}}} : [? \beta] \rightarrow [? \alpha]} & \frac{[?]}{\text{{\texttt{1}}} : [? \beta]} \end{array}}{\text{{\texttt{((+) x) 1}}} : [? \alpha]}}{\text{{\texttt{fun x -> ((+) x) 1}}} : [?] \rightarrow - [? \alpha]} $$

    -

    $$ \text{we know that \text{{\texttt{1}}}} -: \text{{\texttt{int}}} -$$

    -

    $$ \frac{\frac{\begin{array}{ll} + [? \alpha]}

    +

    \text{we know that \text{{\texttt{1}}}} : +\text{{\texttt{int}}} +

    +

    \frac{\frac{\begin{array}{ll} \frac{[?]}{\text{{\texttt{(+) x}}} : \text{{\texttt{int}}} \rightarrow [? \alpha]} & \frac{\,}{\text{{\texttt{1}}} : \text{{\texttt{int}}}} @@ -1167,9 +1191,9 @@

    6 Homework

    \end{array}}{\text{{\texttt{((+) x) 1}}} : [? \alpha]}}{\text{{\texttt{fun x -> ((+) x) 1}}} : [?] \rightarrow - [? \alpha]} $$

    -

    application again:

    -

    $$ \frac{\frac{\begin{array}{ll} + [? \alpha]}

    +

    \text{application again:}

    +

    \frac{\frac{\begin{array}{ll} \frac{\begin{array}{ll} \frac{[?]}{\text{{\texttt{(+)}}} : [? \gamma] \rightarrow \text{{\texttt{int}}} \rightarrow [? \alpha]} & @@ -1181,10 +1205,10 @@

    6 Homework

    \end{array}}{\text{{\texttt{((+) x) 1}}} : [? \alpha]}}{\text{{\texttt{fun x -> ((+) x) 1}}} : [?] \rightarrow - [? \alpha]} $$

    -

    $$ \text{it's our \text{{\texttt{x}}}!} -$$

    -

    $$ \frac{\frac{\begin{array}{ll} + [? \alpha]}

    +

    \text{it's our \text{{\texttt{x}}}!} +

    +

    \frac{\frac{\begin{array}{ll} \frac{\begin{array}{ll} \frac{[?]}{\text{{\texttt{(+)}}} : [? \gamma] \rightarrow \text{{\texttt{int}}} \rightarrow [? \alpha]} & @@ -1196,12 +1220,12 @@

    6 Homework

    \tiny{\text{(constant)}} \end{array}}{\text{{\texttt{((+) x) 1}}} : [? \alpha]}}{\text{{\texttt{fun x -> ((+) x) 1}}} : [? \gamma] - \rightarrow [? \alpha]} $$

    -

    $$ \text{but \text{{\texttt{(+)}}}} : + \rightarrow [? \alpha]}

    +

    \text{but \text{{\texttt{(+)}}}} : \text{{\texttt{int}}} \rightarrow \text{{\texttt{int}}} \rightarrow - \text{{\texttt{int}}} $$

    -

    $$ \frac{\frac{\begin{array}{ll} + \text{{\texttt{int}}}

    +

    \frac{\frac{\begin{array}{ll} \frac{\begin{array}{ll} \frac{\,}{\text{{\texttt{(+)}}} : \text{{\texttt{int}}} \rightarrow \text{{\texttt{int}}} \rightarrow @@ -1216,7 +1240,7 @@

    6 Homework

    \text{{\texttt{int}}}}}{\text{\text{{\texttt{fun x -> ((+) x) 1}}}} : \text{{\texttt{int}}} \rightarrow - \text{{\texttt{int}}}} $$

    + \text{{\texttt{int}}}}

    Chapter 3

    Functional Programming

    Lecture 3: Computation

    @@ -1227,8 +1251,8 @@

    1 Function Composition

    • The usual way function composition is defined in math is “backward”:
        -
      • math: (fg)(x) = f(g(x))
      • +
      • math: (f \circ g) (x) = f (g +(x))
      • OCaml: let (-|) f g x = f (g x)
      • F#: let (<<) f g x = f (g x)
      • Haskell: (.) f g = \x -> f (g x)
      • @@ -1254,10 +1278,9 @@

        1 Function Composition

        week: we don’t pass all arguments a function needs, in result we get a function that requires the remaining arguments. How is it used above? -
      • Now we define fn(x) := (f∘…∘f)(x) -(f appears n times).
      • +
      • Now we define f^n (x) := (f \circ \ldots +\circ f) (x) (f appears n times).
      let rec power f n =
      @@ -1284,7 +1307,7 @@ 

      2 Evaluation Rules (reduction semantics)

      • Programs consist of expressions:

        -

        $$ \begin{matrix} +

        \begin{matrix} a := & x & \text{variables}\\\\\\ | & \text{{\texttt{fun }}} x \text{{\texttt{->}}} a & \text{(defined) functions}\\\\\\ @@ -1306,42 +1329,39 @@

        2 Evaluation Rules | & (p, \ldots, p) & \text{tuple patterns}\\\\\\ | & C^0 & \text{variant patterns of arity } 0\\\\\\ | & C^n (p, \ldots, p) & \text{variant patterns of arity } n -\end{matrix} $$

      • +\end{matrix}

      • Arity means how many arguments something requires; (and for tuples, the length of a tuple).

      • To simplify presentation, we will use a primitive fix to define a limited form of let rec:

        -

        $$ \text{{\texttt{let rec }}} f +

        \text{{\texttt{let rec }}} f \text{{\texttt{ }}} x = e_{1} \text{{\texttt{ in }}} e_{2} \equiv \text{{\texttt{let }}} f = \text{{\texttt{fix (fun }}} f \text{{\texttt{ }}} x \text{{\texttt{->}}} e_{1} -\text{{\texttt{) in }}} e_{2} $$

      • +\text{{\texttt{) in }}} e_{2}

      • Expressions evaluate (i.e. compute) to values:

        -

        $$ \begin{matrix} +

        \begin{matrix} v := & \text{{\texttt{fun }}} x \text{{\texttt{->}}} a & \text{(defined) functions}\\\\\\ | & C^n (v_{1}, \ldots, v_{n}) & \text{constructed values}\\\\\\ | & f^n v_{1} \ldots v_{k} & k < n \text{ partially applied -primitives} \end{matrix} $$

      • -
      • To substitute a value v for a variable x in expression a we write a[x:=v] – it -behaves as if every occurrence of x in a was rewritten by v.

        -
          -
        • (But actually the value v -is not duplicated.)
        • +primitives} \end{matrix}

          +
        • To substitute a value v +for a variable x in expression a we write a [x := +v] – it behaves as if every occurrence of x in a was +rewritten by v.

          +
            +
          • (But actually the value v is not +duplicated.)
        • Reduction (i.e. computation) proceeds as follows: first we give redexes

          -

          $$ \begin{matrix} +

          \begin{matrix} \left( \text{{\texttt{fun }}} x \text{{\texttt{->}}} a \right) v & \rightsquigarrow & a [x := v]\\\\\\ \text{{\texttt{let }}} x = v \text{{\texttt{ in }}} a & @@ -1364,52 +1384,47 @@

          2 Evaluation Rules C_{1}^n (x_{1}, \ldots, x_{n}) \text{{\texttt{->}}} a \text{{\texttt{ \textbar }}} \ldots & \rightsquigarrow & a [x_{1} -\:= v_{1} ; \ldots ; x_{n} := v_{n}] \end{matrix} $$

          -

          If n = 0, C1n(v1,…,vn) -stands for C10, etc. By -f(v1,…,vn) -we denote the actual value resulting from computing the primitive. We -omit the more complex cases of pattern matching.

        • -
        • Rule variables: x -matches any expression/pattern variable; a, a1, …, an -match any expression; v, v1, …, vn -match any value. Substitute them so that the left-hand-side of a rule is -your expression, then the right-hand-side is the reduced -expression.

        • +\:= v_{1} ; \ldots ; x_{n} := v_{n}] \end{matrix}

          +

          If n = 0, C_{1}^n (v_{1}, \ldots, v_{n}) stands for +C^0_{1}, etc. By f (v_{1}, \ldots, v_{n}) we denote the actual +value resulting from computing the primitive. We omit the more complex +cases of pattern matching.

          +
        • Rule variables: x matches any +expression/pattern variable; a, a_{1}, \ldots, +a_{n} match any expression; v, v_{1}, +\ldots, v_{n} match any value. Substitute them so that the +left-hand-side of a rule is your expression, then the right-hand-side is +the reduced expression.

        • The remaining rules evaluate the arguments in arbitrary order, but keep the order in which letin and matchwith is evaluated.

          -

          If ai ⇝ ai, -then:

          -

          $$ \begin{matrix} -a_{1} a_{2} & \rightsquigarrow & a_{1}' a_{2}\\\\\\ -a_{1} a_{2} & \rightsquigarrow & a_{1} a_{2}'\\\\\\ +

          If a_{i} \rightsquigarrow +a_{i}', then:

          +

          \begin{matrix} +a_{1} a_{2} & \rightsquigarrow & a_{1}' a_{2}\\\\\\ +a_{1} a_{2} & \rightsquigarrow & a_{1} a_{2}'\\\\\\ C^n (a_{1}, \ldots, a_{i}, \ldots, a_{n}) & \rightsquigarrow & C^n -(a_{1}, \ldots, a_{i}', \ldots, a_{n})\\\\\\ +(a_{1}, \ldots, a_{i}', \ldots, a_{n})\\\\\\ \text{{\texttt{let }}} x = a_{1} \text{{\texttt{ in }}} -a_{2} & \rightsquigarrow & \text{{\texttt{let }}} x = a_{1}' +a_{2} & \rightsquigarrow & \text{{\texttt{let }}} x = a_{1}' \text{{\texttt{ in }}} a_{2}\\\\\\ \text{{\texttt{match }}} a_{1} \text{{\texttt{ with}} } \operatorname{pm} & \rightsquigarrow & \text{{\texttt{match }}} -a_{1}' \text{{\texttt{ with}} } \operatorname{pm} \end{matrix} -$$

        • +a_{1}' \text{{\texttt{ with}} } \operatorname{pm} \end{matrix} +

        • Finally, we give the rule for the primitive fix – it is a binary primitive:

          -

          $$ \begin{matrix} +

          \begin{matrix} \text{{\texttt{fix}}}^2 v_{1} v_{2} & \rightsquigarrow & v_{1} \left( \text{{\texttt{fix}}}^2 v_{1} \right) v_{2} \end{matrix} -$$

          -

          Because fix is binary, $\left( -\text{{\texttt{fix}}}^2 v_{1} \right)$ is already a value so it +

          +

          Because fix is binary, \left( +\text{{\texttt{fix}}}^2 v_{1} \right) is already a value so it will not be further computed until it is applied inside of v1.

        • +class="math inline">v_{1}.

        • Compute some programs using the rules by hand.
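One convenient way to do that in OCaml itself (a sketch; the definition of fix below is my own, using let rec in place of the primitive):

```ocaml
(* Eta-expanded call-by-value fixpoint: fix f x computes f (fix f) x. *)
let rec fix f x = f (fix f) x

(* Example: factorial defined through fix. *)
let fact = fix (fun f n -> if n <= 0 then 1 else n * f (n - 1))
let () = assert (fact 5 = 120)
```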

        3 Symbolic Derivation Example

        @@ -1526,12 +1541,10 @@

        6 Homework

        step of the simplification, and wrap it using a general fixpoint function that performs an operation until a fixed point is reached: given f and x, it computes fn(x) -such that fn(x) = fn + 1(x). -Functional Programming
      • +class="math inline">f and x, it +computes f^n (x) such that f^n (x) = f^{n + 1} (x). Functional +Programming
    • Computation

      @@ -1563,12 +1576,10 @@

      6 Homework

      Write a simplify_once function that performs a single step of the simplification, and wrap it using a general fixpoint function that performs an operation until a -fixed point* is reached: given f and x, it computes fn(x) -such that fn(x) = fn + 1(x).*

      +fixed point* is reached: given f +and x, it computes f^n (x) such that f^n (x) = f^{n + 1} (x).*

      Exercise 5: Write two sorting algorithms, working on lists: merge sort and quicksort.

        @@ -1581,7 +1592,7 @@

        Chapter 4

        Functional Programming

        Lecture 4: Functions.

        Programming in untyped λ-calculus.

        +class="math inline">\lambda-calculus.

        Introduction to Lambda Calculus Henk Barendregt, Erik Barendsen

        Lecture Notes on the Lambda Calculus Peter Selinger

        @@ -1597,54 +1608,54 @@

        1 Review: a “computation -> 1 + f xs) inlength (Cons (1, (Cons (2, Nil))))

        let length = fix (fun f l -> match l with | Nil -> 0 | Cons (x, xs) -> 1 + f xs) inlength (Cons (1, (Cons (2, Nil))))

        -

        $$ \begin{matrix} +

        \begin{matrix} \text{{\texttt{let }}} x = v \text{{\texttt{ in }}} a & \Downarrow & a [x := v] -\end{matrix} $$

        +\end{matrix}

        fix (fun f l -> match l with | Nil -> 0 | Cons (x, xs) -> 1 + f xs) (Cons (1, (Cons (2, Nil))))

        -

        $$ \begin{matrix} +

        \begin{matrix} \text{{\texttt{fix}}}^2 v_{1} v_{2} & \Downarrow & v_{1} \left( \text{{\texttt{fix}}}^2 v_{1} \right) v_{2} -\end{matrix} $$

        -

        $$ \begin{matrix} +\end{matrix}

        +

        \begin{matrix} \text{{\texttt{fix}}}^2 v_{1} v_{2} & \Downarrow & v_{1} \left( \text{{\texttt{fix}}}^2 v_{1} \right) v_{2} -\end{matrix} $$

        +\end{matrix}

        (fun f l -> match l with | Nil -> 0 | Cons (x, xs) -> 1 + f xs) (fix (fun f l -> match l with | Nil -> 0 | Cons (x, xs) -> 1 + f xs)) (Cons (1, (Cons (2, Nil))))

        -

        $$ \begin{matrix} +

        \begin{matrix} \left( \text{{\texttt{fun }}} x \text{{\texttt{->}}} a \right) v & \rightsquigarrow & a [x := v]\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1}' a_{2} -\end{matrix} $$

        -

        $$ \begin{matrix} + a_{1}' a_{2} +\end{matrix}

        +

        \begin{matrix} \left( \text{{\texttt{fun }}} x \text{{\texttt{->}}} a \right) v & \rightsquigarrow & a [x := v]\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1}' a_{2} -\end{matrix} $$

        + a_{1}' a_{2} +\end{matrix}

        (fun l -> match l with | Nil -> 0 | Cons (x, xs) -> 1 + (fix (fun f l -> match l with | Nil -> 0
        | Cons (x, xs) -> 1 + f xs)) xs) (Cons (1, (Cons (2, Nil))))

        -

        $$ \begin{matrix} +

        \begin{matrix} \left( \text{{\texttt{fun }}} x \text{{\texttt{->}}} a \right) v & \Downarrow & a [x := v] -\end{matrix} $$

        -

        $$ \begin{matrix} +\end{matrix}

        +

        \begin{matrix} \left( \text{{\texttt{fun }}} x \text{{\texttt{->}}} a \right) v & \Downarrow & a [x := v] -\end{matrix} $$

        +\end{matrix}

        (match Cons (1, (Cons (2, Nil))) with | Nil -> 0 | Cons (x, xs) -> 1 + (fix (fun f l -> match l with | Nil -> 0
        | Cons (x, xs) -> 1 + f xs)) xs)

        -

        $$ \begin{matrix} +

        \begin{matrix} \text{{\texttt{match }}} C_{1}^n (v_{1}, \ldots, v_{n}) \text{{\texttt{ with}}} & & \\\\\\ C_{2}^n (p_{1}, \ldots, p_{k}) \text{{\texttt{->}}} a @@ -1652,8 +1663,8 @@

        1 Review: a “computation \Downarrow & \text{{\texttt{match }}} C_{1}^n (v_{1}, \ldots, v_{n})\\\\\\ & & \text{{\texttt{with}} } \operatorname{pm} -\end{matrix} $$

        -

        $$ \begin{matrix} +\end{matrix}

        +

        \begin{matrix} \text{{\texttt{match }}} C_{1}^n (v_{1}, \ldots, v_{n}) \text{{\texttt{ with}}} & & \\\\\\ C_{2}^n (p_{1}, \ldots, p_{k}) \text{{\texttt{->}}} a @@ -1661,75 +1672,75 @@

        1 Review: a “computation \Downarrow & \text{{\texttt{match }}} C_{1}^n (v_{1}, \ldots, v_{n})\\\\\\ & & \text{{\texttt{with}} } \operatorname{pm} -\end{matrix} $$

        +\end{matrix}

        (match Cons (1, (Cons (2, Nil))) with | Cons (x, xs) -> 1 + (fix (fun f l -> match l with | Nil -> 0 | Cons (x, xs) -> 1 + f xs)) xs)

        -

        $$ \begin{matrix} +

        \begin{matrix} \text{{\texttt{match }}} C_{1}^n (v_{1}, \ldots, v_{n}) \text{{\texttt{ with}}} & & \\\\\\ C_{1}^n (x_{1}, \ldots, x_{n}) \text{{\texttt{->}}} a \text{{\texttt{ \textbar }}} \ldots & \Downarrow & a [x_{1} := v_{1} ; \ldots ; x_{n} := v_{n}] -\end{matrix} $$

        -

        $$ \begin{matrix} +\end{matrix}

        +

        \begin{matrix} \text{{\texttt{match }}} C_{1}^n (v_{1}, \ldots, v_{n}) \text{{\texttt{ with}}} & & \\\\\\ C_{1}^n (x_{1}, \ldots, x_{n}) \text{{\texttt{->}}} a \text{{\texttt{ \textbar }}} \ldots & \Downarrow & a [x_{1} := v_{1} ; \ldots ; x_{n} := v_{n}] -\end{matrix} $$

        +\end{matrix}

        1 + (fix (fun f l -> match l with | Nil -> 0 | Cons (x, xs) -> 1 + f xs)) (Cons (2, Nil))

        -

        $$ \begin{matrix} +

        \begin{matrix} \text{{\texttt{fix}}}^2 v_{1} v_{2} & \rightsquigarrow & v_{1} \left( \text{{\texttt{fix}}}^2 v_{1} \right) v_{2}\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        -

        $$ \begin{matrix} + a_{1} a_{2}' +\end{matrix}

        +

        \begin{matrix} \text{{\texttt{fix}}}^2 v_{1} v_{2} & \rightsquigarrow & v_{1} \left( \text{{\texttt{fix}}}^2 v_{1} \right) v_{2}\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        + a_{1} a_{2}' +\end{matrix}

        1 + (fun f l -> match l with | Nil -> 0 | Cons (x, xs) -> 1 + f xs)) (fix (fun f l -> match l with | Nil -> 0 | Cons (x, xs) -> 1 + f xs)) (Cons (2, Nil))

        -

        $$ \begin{matrix} +

        \begin{matrix} \left( \text{{\texttt{fun }}} x \text{{\texttt{->}}} a \right) v & \rightsquigarrow & a [x := v]\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        -

        $$ \begin{matrix} + a_{1} a_{2}' +\end{matrix}

        +

        \begin{matrix} \left( \text{{\texttt{fun }}} x \text{{\texttt{->}}} a \right) v & \rightsquigarrow & a [x := v]\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        + a_{1} a_{2}' +\end{matrix}

        1 + (fun l -> match l with | Nil -> 0 | Cons (x, xs) -> 1 + (fix (fun f l -> match l with
        | Nil -> 0 | Cons (x, xs) -> 1 + f xs)) xs))
        (Cons (2, Nil))

        -

        $$ \begin{matrix} +

        \begin{matrix} \left( \text{{\texttt{fun }}} x \text{{\texttt{->}}} a \right) v & \rightsquigarrow & a [x := v]\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        -

        $$ \begin{matrix} + a_{1} a_{2}' +\end{matrix}

        +

        \begin{matrix} \left( \text{{\texttt{fun }}} x \text{{\texttt{->}}} a \right) v & \rightsquigarrow & a [x := v]\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        + a_{1} a_{2}' +\end{matrix}

        1 + (match Cons (2, Nil) with | Nil -> 0 | Cons (x, xs) -> 1 + (fix (fun f l -> match l with | Nil -> 0 | Cons (x, xs) -> 1 + f xs)) xs))

        -

        $$ \begin{matrix} +

        \begin{matrix} \text{{\texttt{match }}} C_{1}^n (v_{1}, \ldots, v_{n}) \text{{\texttt{ with}}} & & \\\\\\ C_{2}^n (p_{1}, \ldots, p_{k}) \text{{\texttt{->}}} a @@ -1738,9 +1749,9 @@

        1 Review: a “computation \text{{\texttt{match }}} C_{1}^n (v_{1}, \ldots, v_{n})\\\\\\ & & \text{{\texttt{with}} } \operatorname{pm}\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        -

        $$ \begin{matrix} + a_{1} a_{2}' +\end{matrix}

        +

        \begin{matrix} \text{{\texttt{match }}} C_{1}^n (v_{1}, \ldots, v_{n}) \text{{\texttt{ with}}} & & \\\\\\ C_{2}^n (p_{1}, \ldots, p_{k}) \text{{\texttt{->}}} a @@ -1749,11 +1760,11 @@

        1 Review: a “computation \text{{\texttt{match }}} C_{1}^n (v_{1}, \ldots, v_{n})\\\\\\ & & \text{{\texttt{with}} } \operatorname{pm}\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        + a_{1} a_{2}' +\end{matrix}

        1 + (match Cons (2, Nil) with | Cons (x, xs) -> 1 + (fix (fun f l -> match l with | Nil -> 0 | Cons (x, xs) -> 1 + f xs)) xs)

        -

        $$ \begin{matrix} +

        \begin{matrix} \text{{\texttt{match }}} C_{1}^n (v_{1}, \ldots, v_{n}) \text{{\texttt{ with}}} & & \\\\\\ C_{1}^n (x_{1}, \ldots, x_{n}) \text{{\texttt{->}}} a @@ -1761,8 +1772,8 @@

        1 Review: a “computation \Downarrow & a [x_{1} := v_{1} ; \ldots ; x_{n} := v_{n}]\\\\\\ & & -\end{matrix} $$

        -

        $$ \begin{matrix} +\end{matrix}

        +

        \begin{matrix} \text{{\texttt{match }}} C_{1}^n (v_{1}, \ldots, v_{n}) \text{{\texttt{ with}}} & & \\\\\\ C_{1}^n (x_{1}, \ldots, x_{n}) \text{{\texttt{->}}} a @@ -1770,68 +1781,68 @@

        1 Review: a “computation [x_{1} \:= v_{1} ; \ldots ; x_{n} := v_{n}]\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        + a_{1} a_{2}' +\end{matrix}

        1 + (1 + (fix (fun f l -> match l with | Nil -> 0 | Cons (x, xs) -> 1 + f xs)) Nil)

        -

        $$ \begin{matrix} +

        \begin{matrix} \text{{\texttt{fix}}}^2 v_{1} v_{2} & \rightsquigarrow & v_{1} \left( \text{{\texttt{fix}}}^2 v_{1} \right) v_{2}\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}'\\\\\\ + a_{1} a_{2}'\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        -

        $$ \begin{matrix} + a_{1} a_{2}' +\end{matrix}

        +

        \begin{matrix} \text{{\texttt{fix}}}^2 v_{1} v_{2} & \rightsquigarrow & v_{1} \left( \text{{\texttt{fix}}}^2 v_{1} \right) v_{2}\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}'\\\\\\ + a_{1} a_{2}'\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        + a_{1} a_{2}' +\end{matrix}

        1 + (1 + (fun f l -> match l with | Nil -> 0 | Cons (x, xs) -> 1 + f xs) (fix (fun f l ->
        match l with | Nil -> 0 | Cons (x, xs) -> 1 + f xs)) Nil)

        -

        $$ \begin{matrix} +

        \begin{matrix} \left( \text{{\texttt{fun }}} x \text{{\texttt{->}}} a \right) v & \rightsquigarrow & a [x := v]\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}'\\\\\\ + a_{1} a_{2}'\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        -

        $$ \begin{matrix} + a_{1} a_{2}' +\end{matrix}

        +

        \begin{matrix} \left( \text{{\texttt{fun }}} x \text{{\texttt{->}}} a \right) v & \rightsquigarrow & a [x := v]\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}'\\\\\\ + a_{1} a_{2}'\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        + a_{1} a_{2}' +\end{matrix}

        1 + (1 + (fun l -> match l with | Nil -> 0 | Cons (x, xs) -> 1 + (fix (fun f l ->
        match l with | Nil -> 0 | Cons (x, xs) -> 1 + f xs)) xs) Nil)

        -

        $$ \begin{matrix} +

        \begin{matrix} \left( \text{{\texttt{fun }}} x \text{{\texttt{->}}} a \right) v & \rightsquigarrow & a [x := v]\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}'\\\\\\ + a_{1} a_{2}'\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        -

        $$ \begin{matrix} + a_{1} a_{2}' +\end{matrix}

        +

        \begin{matrix} \left( \text{{\texttt{fun }}} x \text{{\texttt{->}}} a \right) v & \rightsquigarrow & a [x := v]\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}'\\\\\\ + a_{1} a_{2}'\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        + a_{1} a_{2}' +\end{matrix}

        1 + (1 + (match Nil with | Nil -> 0 | Cons (x, xs) -> 1 + (fix (fun f l -> match l with
        | Nil -> 0 | Cons (x, xs) -> 1 + f xs)) xs))

        -

        $$ \begin{matrix} +

        \begin{matrix} \text{{\texttt{match }}} C_{1}^n (v_{1}, \ldots, v_{n}) \text{{\texttt{ with}}} & & \\\\\\ C_{1}^n (x_{1}, \ldots, x_{n}) \text{{\texttt{->}}} a @@ -1839,11 +1850,11 @@

        1 Review: a “computation [x_{1} \:= v_{1} ; \ldots ; x_{n} := v_{n}]\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}'\\\\\\ + a_{1} a_{2}'\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        -

        $$ \begin{matrix} + a_{1} a_{2}' +\end{matrix}

        +

        \begin{matrix} \text{{\texttt{match }}} C_{1}^n (v_{1}, \ldots, v_{n}) \text{{\texttt{ with}}} & & \\\\\\ C_{1}^n (x_{1}, \ldots, x_{n}) \text{{\texttt{->}}} a @@ -1851,73 +1862,69 @@

        1 Review: a “computation [x_{1} \:= v_{1} ; \ldots ; x_{n} := v_{n}]\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}'\\\\\\ + a_{1} a_{2}'\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        + a_{1} a_{2}' +\end{matrix}

        1 + (1 + 0)

        -

        $$ \begin{matrix} +

        \begin{matrix} f^n v_{1} \ldots v_{n} & \rightsquigarrow & f (v_{1}, \ldots, v_{n})\\\\\\ a_{1} a_{2} & \Downarrow & - a_{1} a_{2}' -\end{matrix} $$

        + a_{1} a_{2}' +\end{matrix}

        1 + 1

        -

        $$ \begin{matrix} +

        \begin{matrix} f^n v_{1} \ldots v_{n} & \Downarrow & f (v_{1}, \ldots, v_{n}) -\end{matrix} $$

        +\end{matrix}

        2

        2 Language and rules of the untyped λ-calculus

        +class="math inline">\lambda-calculus

    • First, let’s forget about types.

    • Next, let’s introduce a shortcut:

        -
      • We write λx.a for -fun x->a, λxy.a for -fun x y->a, etc.
      • +
      • We write \lambda x.a for +fun x->a, \lambda x y.a +for fun x y->a, etc.
    • Let’s forget about all other constructions, only fun and variables.

    • -
    • The real λ-calculus has -a more general reduction:

      -

      $$ \begin{matrix} +

    • The real \lambda-calculus has a +more general reduction:

      +

      \begin{matrix} \left( \text{{\texttt{fun }}} x \text{{\texttt{->}}} a_{1} \right) a_{2} & \rightsquigarrow & a_{1} [x := a_{2}] -\end{matrix} $$

      -

      (called β-reduction) and uses bound -variable renaming (called α-conversion), or some other -trick, to avoid variable capture. But let’s not over-complicate +\end{matrix}

      +

      (called \beta-reduction) +and uses bound variable renaming (called \alpha-conversion), or some other trick, +to avoid variable capture. But let’s not over-complicate things.

      • We will look into the β-reduction rule in the +class="math inline">\beta-reduction rule in the laziness lecture.
      • -
      • Why is β-reduction more -general than the rule we use?
      • +
      • Why is \beta-reduction more general +than the rule we use?

    3 Booleans

    • Alonzo Church introduced λ-calculus to encode logic.
    • +class="math inline">\lambda-calculus to encode logic.
    • There are multiple ways to encode various sorts of data in λ-calculus. Not all of them make -sense in a typed setting, i.e. the straightforward encode/decode -functions do not type-check for them.
    • -
    • Define c_true=λxy.x and -c_false=λxy.y.
    • -
    • Define c_and=$\lambda x y.x y -\text{{\texttt{c\_false}}}$. Check that it works! +class="math inline">\lambda-calculus. Not all of them make sense +in a typed setting, i.e. the straightforward encode/decode functions do +not type-check for them.
    • +
    • Define c_true=\lambda x +y.x and c_false=\lambda x +y.y.
    • +
    • Define c_and=\lambda x y.x y +\text{{\texttt{c\_false}}}. Check that it works!
      • I.e. that c_and c_true c_true = c_true,otherwise c_and a b = @@ -1952,9 +1959,8 @@

        5 Pair-encoded natural numbers

        • Our first encoding of natural numbers is as the depth of nested -pairs whose rightmost leaf is λx.x and whose -left elements are c_false.
        • +pairs whose rightmost leaf is \lambda +x.x and whose left elements are c_false.

        let pn0 = fun x -> xStart with the identity function.let pnsucc n = cpair cfalse nStack another pair.let pnpred = fun x -> x @@ -1992,12 +1998,12 @@

        6 Church numerals

        let rec encodecnat n f = if n <= 0 then (fun x -> x) else f -| encodecnat (n-1) flet decodecnat n = n ((+) 1) 0let cn7 f x = encodecnat 7 f xWe need to η-expand these definitionslet -cn13 f x = encodecnat 13 f xfor type-system reasons.(Because OCaml -allows side-effects.)let cnadd = fun n m f x -> n f (m f -x)Put n of f in front.let cnmult = fun n m f --> n (m f)Repeat n timesputting m of -f in front.let cnprev n =
        +class="math inline">\eta-expand these definitionslet cn13 f +x = encodecnat 13 f xfor type-system reasons.(Because OCaml allows +side-effects.)let cnadd = fun n m f x -> n f (m f x)Put +n of f in front.let cnmult = fun n m f -> n +(m f)Repeat n timesputting m of f +in front.let cnprev n =
        fun f x ->This is the ‘‘Church numeral signature’‘. nThe only thing we have is an n-step loop. (fun g v -> v (g f))We need sth that operates on f. (fun z->x)We need to ignore the @@ -2006,48 +2012,49 @@

        6 Church numerals machine.

        cn_is_zero left as an exercise.

        decodecnat (cn_prev cn3)

        -

        +

        \Downarrow

        (cn_prev cn3) ((+) 1) 0

        -

        +

        \Downarrow

        (fun f x -> cn3 (fun g v -> v (g f)) (fun z->x)
        (fun z->z)) ((+) 1) 0

        -

        +

        \Downarrow

        ((fun f x -> f (f (f x))) (fun g v -> v (g ((+) 1))) (fun z->0) (fun z->z))

        -

        +

        \Downarrow

        ((fun g v -> v (g ((+) 1))) ((fun g v -> v (g ((+) 1))) ((fun g v -> v (g ((+) 1))) (fun z->0)))) (fun z->z))

        -

        +

        \Downarrow

        ((fun z->z) (((fun g v -> v (g ((+) 1))) ((fun g v -> v (g ((+) 1))) (fun z->0)))) ((+) 1)))

        -

        +

        \Downarrow

        (fun g v -> v (g ((+) 1))) ((fun g v -> v (g ((+) 1))) (fun z->0)) ((+) 1)

        -

        +

        \Downarrow

        ((+) 1) ((fun g v -> v (g ((+) 1))) (fun z->0) ((+) 1))

        -

        +

        \Downarrow

        ((+) 1) (((+) 1) ((fun z->0) ((+) 1)))

        -

        +

        \Downarrow

        ((+) 1) (((+) 1) (0))

        -

        +

        \Downarrow

        ((+) 1) 1

        -

        +

        \Downarrow

        2

        7 Recursion: Fixpoint Combinator

          -
        • Turing’s fixpoint combinator: Θ = (λxy.y(xxy))(λxy.y(xxy))

          -

          $$ \begin{matrix} +

        • Turing’s fixpoint combinator: \Theta = +(\lambda x y.y (x x y)) (\lambda x y.y (x x y))

          +

          \begin{matrix} N & = & \Theta F\\\\\\ & = & (\lambda x y.y (x x y)) (\lambda x y.y (x x y)) F\\\\\\ & =_{\rightarrow \rightarrow} & F ((\lambda x y.y (x x y)) (\lambda x y.y (x x y)) F)\\\\\\ -& = & F (\Theta F) = F N \end{matrix} $$

        • +& = & F (\Theta F) = F N \end{matrix}

        • Curry’s fixpoint combinator: Y = λf.(λx.f(xx))(λx.f(xx))

          -

          $$ \begin{matrix} +class="math inline">\boldsymbol{Y}= \lambda f. (\lambda x.f (x +x)) (\lambda x.f (x x))

          +

          \begin{matrix} N & = & \boldsymbol{Y}F\\\\\\ & = & (\lambda f. (\lambda x.f (x x)) (\lambda x.f (x x))) F\\\\\\ @@ -2058,31 +2065,32 @@

          7 Recursion: Fixpoint & =_{\leftarrow} & F ((\lambda f. (\lambda x.f (x x)) (\lambda x.f (x x))) F)\\\\\\ -& = & F (\boldsymbol{Y}F) = F N \end{matrix} $$

        • +& = & F (\boldsymbol{Y}F) = F N \end{matrix}

        • Call-by-value fixpoint combinator: λf′.(λfx.f′(ff)x)(λfx.f′(ff)x)

          -

          $$ \begin{matrix} +class="math inline">\lambda f' . (\lambda f x.f' (f f) +x) (\lambda f x.f' (f f) x)

          +

          \begin{matrix} N & = & \operatorname{fix}F\\\\\\ -& = & (\lambda f' . (\lambda f x.f' (f f) x) (\lambda f -x.f' (f f) x)) +& = & (\lambda f' . (\lambda f x.f' (f f) x) (\lambda +f x.f' (f f) x)) F\\\\\\ & =_{\rightarrow} & (\lambda f x.F (f f) x) (\lambda f x.F (f f) x)\\\\\\ & =_{\rightarrow} & \lambda x.F ((\lambda f x.F (f f) x) (\lambda f x.F (f f) x)) x\\\\\\ -& =_{\leftarrow} & \lambda x.F ((\lambda f' . (\lambda f -x.f' (f f) x) -(\lambda f x.f' (f f) x)) F) x\\\\\\ +& =_{\leftarrow} & \lambda x.F ((\lambda f' . (\lambda f +x.f' (f f) x) +(\lambda f x.f' (f f) x)) F) x\\\\\\ & = & \lambda x.F (\operatorname{fix}F) x = \lambda x.F N x\\\\\\ -& =_{\eta} & F N \end{matrix} $$

        • -
        • The λ-terms we have -seen above are fixpoint combinators – means inside -λ-calculus to perform +& =_{\eta} & F N \end{matrix}

        • +
        • The \lambda-terms we have seen +above are fixpoint combinators – means inside \lambda-calculus to perform recursion.

        • What is the problem with the first two combinators?

          -

          $$ \begin{matrix} +

          \begin{matrix} \Theta F & \rightsquigarrow \rightsquigarrow & F ((\lambda x y.y (x x y)) (\lambda x y.y (x x y)) F)\\\\\\ @@ -2093,46 +2101,45 @@

          7 Recursion: Fixpoint x y)) (\lambda x y.y (x x y)) F)))\\\\\\ & \rightsquigarrow \rightsquigarrow & \ldots \end{matrix} -$$

        • +

        • Recall the distinction between expressions and values from the previous lecture Computation.

        • The reduction rule for λ-calculus is just meant to -determine which expressions are considered “equal” – it is highly +class="math inline">\lambda-calculus is just meant to determine +which expressions are considered “equal” – it is highly non-deterministic, while on a computer, computation needs to go one way or another.

        • Using the general reduction rule of λ-calculus, for a recursive -definition, it is always possible to find an infinite reduction sequence -(which means that you couldn’t complain when a nasty λ-calculus compiler generates -infinite loops for all recursive definitions).

          +class="math inline">\lambda-calculus, for a recursive definition, +it is always possible to find an infinite reduction sequence (which +means that you couldn’t complain when a nasty \lambda-calculus compiler generates infinite +loops for all recursive definitions).

          • Why?
        • Therefore, we need more specific rules. For example, most -languages use $\left( \text{{\texttt{fun }}} x -\text{{\texttt{->}}} a \right) v \rightsquigarrow a [x := v]$, +languages use \left( \text{{\texttt{fun }}} x +\text{{\texttt{->}}} a \right) v \rightsquigarrow a [x := v], which is called call-by-value, or eager computation (because the program eagerly computes the arguments before starting to compute the function). (It’s exactly the rule we introduced in Computation lecture.)

        • What happens with call-by-value fixpoint combinator?

          -

          $$ \begin{matrix} +

          \begin{matrix} \operatorname{fix}F & \rightsquigarrow & (\lambda f x.F (f f) x) (\lambda f x.F (f f) x)\\\\\\ & \rightsquigarrow & \lambda x.F ((\lambda f x.F (f f) x) (\lambda f x.F -(f f) x)) x \end{matrix} $$

          -

          Voila – if we use $\left( -\text{{\texttt{fun }}} x \text{{\texttt{->}}} a \right) v -\rightsquigarrow a [x := v]$ as the rulerather than $\left( \text{{\texttt{fun }}} x -\text{{\texttt{->}}} a_{1} \right) a_{2} \rightsquigarrow a_{1} [x := -a_{2}]$, the computation stops. Let’s compute the function on -some input:

          -

          $$ \begin{matrix} +(f f) x)) x \end{matrix}

          +

          Voila – if we use \left( \text{{\texttt{fun +}}} x \text{{\texttt{->}}} a \right) v \rightsquigarrow a [x := +v] as the rulerather than \left( +\text{{\texttt{fun }}} x \text{{\texttt{->}}} a_{1} \right) a_{2} +\rightsquigarrow a_{1} [x := a_{2}], the computation stops. Let’s +compute the function on some input:

          +

          \begin{matrix} \operatorname{fix}F v & \rightsquigarrow & (\lambda f x.F (f f) x) (\lambda f x.F (f f) x) v\\\\\\ @@ -2146,35 +2153,33 @@

          7 Recursion: Fixpoint x) (\lambda f x.F (f f) x)) x) v\\\\\\ & \rightsquigarrow & \text{depends on } F \end{matrix} -$$

        • +

        • Why the name fixpoint? If you look at our derivations, you’ll see that they show what in math can be written as x = f(x). Such -values x are called fixpoints -of f. An arithmetic function -can have several fixpoints, for example f(x) = x2 -(which xes are fixpoints?) or -no fixpoints, for example f(x) = x + 1.

        • +class="math inline">x = f (x). Such values x are called fixpoints of f. An arithmetic function can have several +fixpoints, for example f (x) = x^2 +(which xes are fixpoints?) or no +fixpoints, for example f (x) = x + +1.

        • When you define a function (or another object) by recursion, it has very similar meaning: there is a name that is on both sides of =.

        • -
        • In λ-calculus, there -are functions like Θ and Y, that take any +

        • In \lambda-calculus, there are +functions like \Theta and \boldsymbol{Y}, that take any function as an argument, and return its fixpoint.

        • We turn a specification of a recursive object into a definition, by solving it with respect to the recurring name: deriving x = f(x) where -x is the recurring name. We -then have x = fix (f).

        • +class="math inline">x = f (x) where x is the recurring name. We then have x =\operatorname{fix} (f).

        • Let’s walk through it for the factorial function (we omit the prefix cn_ – could be pn_ if pn1 was used instead of cn1 – for numeric functions, and we shorten if_then_else into if_t_e):

          -

          $$ \begin{matrix} +

          \begin{matrix} \text{{\texttt{fact}}} n & = & \text{{\texttt{if\_t\_e}}} \left( \text{{\texttt{is\_zero}}} n \right) \text{{\texttt{cn1}}} \left( \text{{\texttt{mult}}} n @@ -2198,7 +2203,7 @@

          7 Recursion: Fixpoint \text{{\texttt{is\_zero}}} n \right) \text{{\texttt{cn1}}} \left( \text{{\texttt{mult}}} n \left( f \left( \text{{\texttt{pred}}} n \right) \right) \right) -\right) \end{matrix} $$

          +\right) \end{matrix}

          The last specification is a valid definition: we just give a name to a (ground, a.k.a. closed) expression.

        • We have seen how fix works already!

          @@ -2212,43 +2217,42 @@

          8 Encoding of Lists and Trees

        • A list is either empty, which we often call Empty or Nil, or it consists of an element followed by another list (called “tail”), the other case often called Cons.

        • -
        • Define nil = λxy.y -and consHT = λxy.xHT.

        • +
        • Define nil= \lambda x +y.y and consH T = \lambda +x y.x H T.

        • Add numbers stored inside a list:

          -

          $$ \begin{matrix} +

          \begin{matrix} \text{{\texttt{addlist}}} l & = & l \left( \lambda h t. \text{{\texttt{cn\_add}}} h \left( \text{{\texttt{addlist}}} t \right) \right) -\text{{\texttt{cn0}}} \end{matrix} $$

          +\text{{\texttt{cn0}}} \end{matrix}

          To make a proper definition, we need to apply fix  to the solution of above equation.

          -

          $$ \begin{matrix} +class="math inline">\operatorname{fix} to the solution of above +equation.

          +

          \begin{matrix} \text{{\texttt{addlist}}} & = & \operatorname{fix} \left( \lambda f l.l \left( \lambda h t. \text{{\texttt{cn\_add}}} h (f t) \right) -\text{{\texttt{cn0}}} \right) \end{matrix} $$

        • +\text{{\texttt{cn0}}} \right) \end{matrix}

        • For trees, let’s use a different form of binary trees than so far: instead of keeping elements in inner nodes, we will keep elements in leaves.

        • -
        • Define leafn = λxy.xn -and nodeLR = λxy.yLR.

        • +
        • Define leafn = \lambda x +y.x n and nodeL R = +\lambda x y.y L R.

        • Add numbers stored inside a tree:

          -

          $$ \begin{matrix} +

          \begin{matrix} \text{{\texttt{addtree}}} t & = & t (\lambda n.n) \left( \lambda l r. \text{{\texttt{cn\_add}}} \left( \text{{\texttt{addtree}}} l \right) \left( -\text{{\texttt{addtree}}} r \right) \right) \end{matrix} $$

          +\text{{\texttt{addtree}}} r \right) \right) \end{matrix}

          and, in solved form:

          -

          $$ \begin{matrix} +

          \begin{matrix} \text{{\texttt{addtree}}} & = & \operatorname{fix} \left( \lambda f t.t (\lambda n.n) \left( \lambda l r. \text{{\texttt{cn\_add}}} -(f l) (f r) \right) \right) \end{matrix} $$

        • +(f l) (f r) \right) \right) \end{matrix}

        let nil = fun x y -> ylet cons h t = fun x y -> x h tlet addlist l =
        @@ -2261,21 +2265,22 @@

        8 Encoding of Lists and Trees

        cn1)));;

        • Observe a regularity: when we encode a variant type with n variants, for each variant we -define a function that takes n +class="math inline">n variants, for each variant we define a +function that takes n arguments.

        • -
        • If the kth variant -Ck has -mk -parameters, then the function ck that encodes -it will have the form:

          -

          Ck(v1,…,vmk) ∼ ckv1vmk = λx1xn.xkv1vmk

        • +
        • If the kth variant C_{k} has m_{k} parameters, then the function c_{k} that encodes it will have the form:

          +

          C_{k} (v_{1}, \ldots, v_{m_{k}}) \sim +c_{k} v_{1} \ldots +v_{m_{k}} += \lambda x_{1} \ldots x_{n} .x_{k} v_{1} \ldots v_{m_{k}} +

        • The encoded variants serve as a shallow pattern matching with -guaranteed exhaustiveness: kth -argument corresponds to kth -branch of pattern matching.

        • +guaranteed exhaustiveness: kth argument +corresponds to kth branch of pattern +matching.
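To make this concrete (an illustrative OCaml sketch, with helper names of my own choosing): for a two-variant type like 'a option, each encoded variant takes n = 2 arguments, one per branch:

```ocaml
(* Encodings of None and Some: the k-th variant selects the k-th argument. *)
let enc_none   = fun if_none if_some -> if_none
let enc_some v = fun if_none if_some -> if_some v

(* Shallow pattern matching with guaranteed exhaustiveness: *)
let is_some o = o false (fun _ -> true)
```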

        9 Looping Recursion

          @@ -2288,9 +2293,9 @@

          9 Looping Recursion

        • Oops… OCaml says:Stack overflow during evaluation (looping recursion?).
        • What is wrong? Nothing as far as λ-calculus is concerned. But OCaml -and F# always compute arguments before calling a function. By definition -of fix, f corresponds to recursively calling +class="math inline">\lambda-calculus is concerned. But OCaml and +F# always compute arguments before calling a function. By definition of +fix, f corresponds to recursively calling pn_add. Therefore,(pnsucc (f (pnpred m) n)) will be called regardless of what(pniszero m) returns!
• Why do addlist and addtree work?
        • @@ -2309,7 +2314,7 @@

          9 Looping Recursion

          • In OCaml or F# we would guard by fun () ->, and then apply to (), but we do not have datatypes like unit in λ-calculus.
          • +class="math inline">\lambda-calculus.

        let pnadd m n = fix (fun f m n -> (ifthenelse (pniszero m) (fun x @@ -2320,28 +2325,25 @@
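A self-contained sketch of the thunk-guarding idea: Church-style booleans with a guarded if_then_else, where a factorial stands in for the lecture's pn_add and built-in ints replace the encodings (assumptions made only so the snippet runs on its own).

```ocaml
let c_true t _e = t
let c_false _t e = e
let if_then_else c t e = c t e

(* Without guarding, OCaml evaluates both branch arguments first, so a
   recursive call in the "else" branch would loop even when the condition
   selects the "then" branch. Wrapping branches in fun () -> delays them,
   and we apply the selected thunk to () at the end. *)
let rec fact n =
  (if_then_else (if n = 0 then c_true else c_false)
     (fun () -> 1)
     (fun () -> n * fact (n - 1)))
    ()

let () = assert (fact 5 = 120)
```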

        10 In-class Work and Homework

        c_not; 1. exponentiation for Church numerals; 1. is-zero predicate for Church numerals; 1. even-number predicate for Church numerals; 1. multiplication for pair-encoded natural numbers; 1. -factorial n! for pair-encoded -natural numbers. 1. Construct λ-terms m0, m1, … -such that for all n one -has:

        -

        $$ \begin{matrix} +factorial n! for pair-encoded natural +numbers. 1. Construct \lambda-terms +m_{0}, m_{1}, \ldots such that for all +n one has:

        +

        \begin{matrix} m_{0} & = & x \\\\\\ - m_{n + 1} & = & m_{n + 2} m_{n} \end{matrix} $$

        + m_{n + 1} & = & m_{n + 2} m_{n} \end{matrix}

        (where equality is after performing β-reductions). 1. Define (implement) -and verify a function computing: the length of a list (in Church -numerals); 1. cn_max – maximum of two Church numerals; 1. -the depth of a tree (in Church numerals). 1. Representing side-effects -as an explicitly “passed around” state value, write combinators that -represent the imperative constructs: 1. for…to… 1. for…downto… 1. -while…do… 1. do…while… 1. repeat…until…

        -

        Rather than writing a λ-term using the encodings that -we’ve learnt, just implement the functions in OCaml / F#, using built-in -int and bool types. You can use let rec instead of fix. * For example, -in exercise (a), write a function let rec +class="math inline">\beta-reductions). 1. Define (implement) and +verify a function computing: the length of a list (in Church numerals); +1. cn_max – maximum of two Church numerals; 1. the depth of +a tree (in Church numerals). 1. Representing side-effects as an +explicitly “passed around” state value, write combinators that represent +the imperative constructs: 1. for…to… 1. for…downto… 1. while…do… 1. +do…while… 1. repeat…until…

        +

        Rather than writing a \lambda-term +using the encodings that we’ve learnt, just implement the functions in +OCaml / F#, using built-in int and bool types. You can use let rec +instead of fix. * For example, in exercise (a), write a function let rec for_to f beg_i end_i s =… where f takes arguments i ranging from beg_i to end_i, state s at given step, and returns @@ -2381,8 +2383,8 @@

        10 In-class Work and Homework

      • is-zero predicate for Church numerals;
      • even-number predicate for Church numerals;
      • multiplication for pair-encoded natural numbers;
      • -
      • factorial n! -for pair-encoded natural numbers.
      • +
      • factorial n! for +pair-encoded natural numbers.
      • the length of a list (in Church numerals);
      • *cn_max* – maximum of two Church numerals;
      • @@ -2399,9 +2401,9 @@

        10 In-class Work and Homework

      • *repeatuntil**…*
      • Rather than writing a λ-term using the encodings that -we’ve learnt, just implement the functions in OCaml / F#, using built-in -int and bool types. You can use let rec instead of fix.

        +class="math inline">\lambda-term using the encodings that we’ve +learnt, just implement the functions in OCaml / F#, using built-in int +and bool types. You can use let rec instead of fix.

        • For example, in exercise (a), write a function let rec *for_to f beg_i end_i s* =… where @@ -2444,98 +2446,88 @@

          10 In-class Work and Homework

          find the type for:

          let cadr l = List.hd (List.tl l) in cadr (1::2::[]), cadr (true::false::[])

          -

          in environment $\Gamma = \left\lbrace +

          in environment \Gamma = \left\lbrace \text{{\textcolor{green}{List}}{\textcolor{blue}{.}}{\textcolor{brown}{hd}}} : \forall \alpha . \alpha \operatorname{list} \rightarrow \alpha ; \text{{\textcolor{green}{List}}{\textcolor{blue}{.}}{\textcolor{brown}{tl}}} : \forall \alpha . \alpha \operatorname{list} \rightarrow \alpha -\operatorname{list} \right\rbrace$. You can take “shortcuts” if -it is too many equations to write down.

          +\operatorname{list} \right\rbrace. You can take “shortcuts” if it +is too many equations to write down.

          Exercise 2: Terms t1, t2, … ∈ T(Σ,X) -are built out of variables x, y, … ∈ X and -function symbols f, g, … ∈ Σ the -way you build values out of functions:

          -
            -
          • X ⊂ T(Σ,X) -– variables are terms; usually an infinite set,
          • -
          • for terms t1, …, tn ∈ T(Σ,X) -and a function symbol f ∈ Σn -of arity n,f(t1,…,tn) ∈ T(Σ,X) -– bigger terms arise from applying function symbols to smaller -terms; $\Sigma = \dot{\cup}_{n} \Sigma -_{n}$ is called a signature.
          • +class="math inline">t_{1}, t_{2}, \ldots \in T (\Sigma, X) are +built out of variables x, y, \ldots \in +X and function symbols f, g, \ldots \in +\Sigma the way you build values out of functions:

            +
              +
            • X \subset T (\Sigma, X) – +variables are terms; usually an infinite set,
            • +
            • for terms t_{1}, \ldots, t_{n} +\in T (\Sigma, X) and a function symbol f \in \Sigma _{n} of arity n,f (t_{1}, \ldots, +t_{n}) \in T (\Sigma, X) – bigger terms arise from applying +function symbols to smaller terms; \Sigma += \dot{\cup}_{n} \Sigma _{n} is called a signature.

In OCaml, we can define terms as: type term = V of string | T of string * term list, where for example V(“x”) is a variable x and T(“f”, [V(“x”); V(“y”)]) is the term f(x,y).*

            -

            By substitutions* σ, ρ, … we mean finite sets -of variable, term pairs which we can write as {x1 ↦ t1, …, xk ↦ tk} -or [x1:=t1;…;xk:=tk], -but also functions from terms to terms σ : T(Σ,X) → T(Σ,X) -related to the pairs as follows: if σ = {x1 ↦ t1, …, xk ↦ tk}, -then*

            -
              -
            • σ(xi) = ti -for xi ∈ {x1, …, xk},
            • -
            • σ(x) = x -for x ∈ X ∖ {x1, …, xk},
            • -
            • σ(f(t1,…,tn)) = f(σ(t1),…,σ(tn)).
            • +class="math inline">x and T(“f”, [V(“x”); V(“y”)]) is the term +f (x, y).*

              +

              By substitutions* \sigma, \rho, +\ldots we mean finite sets of variable, term pairs which we can +write as \lbrace x_{1} \mapsto t_{1}, \ldots, +x_{k} \mapsto t_{k} \rbrace or [x_{1} +:= t_{1} ; \ldots ; x_{k} := t_{k}], but also functions from +terms to terms \sigma : T (\Sigma, X) +\rightarrow T (\Sigma, X) related to the pairs as follows: if +\sigma = \lbrace x_{1} \mapsto t_{1}, \ldots, +x_{k} \mapsto t_{k} \rbrace, then*

              +
                +
              • \sigma (x_{i}) = t_{i} for +x_{i} \in \lbrace x_{1}, \ldots, x_{k} +\rbrace,
              • +
              • \sigma (x) = x for x \in X\backslash \lbrace x_{1}, \ldots, x_{k} +\rbrace,
              • +
              • \sigma (f (t_{1}, \ldots, t_{n})) = f +(\sigma (t_{1}), \ldots, \sigma (t_{n})).

              In OCaml, we can define substitutions σ as: type subst = (string -term) list, together with a function apply : subst -> term -> term -which computes σ(⋅).*

              +class="math inline">\sigma as: type subst = (string term) +list, together with a function apply : subst -> term -> term which +computes \sigma (\cdot).*

              We say that a substitution σ is more general* than all -substitutions ρ ∘ σ, -where (ρσ)(x) = ρ(σ(x)). -In type inference, we are interested in most general solutions: the less -general type judgement $\text{{\textcolor{green}{List}}{\textcolor{blue}{.}}{\textcolor{brown}{hd}}} +class="math inline">\sigma is more general* than all +substitutions \rho \circ \sigma, where +(\rho \circ \sigma) (x) = \rho (\sigma +(x)). In type inference, we are interested in most general +solutions: the less general type judgement \text{{\textcolor{green}{List}}{\textcolor{blue}{.}}{\textcolor{brown}{hd}}} : \operatorname{int}\operatorname{list} \rightarrow -\operatorname{int}$, although valid, is less useful than $\text{{\textcolor{green}{List}}{\textcolor{blue}{.}}{\textcolor{brown}{hd}}} -: \forall \alpha . \alpha \operatorname{list} \rightarrow \alpha$ +\operatorname{int}, although valid, is less useful than \text{{\textcolor{green}{List}}{\textcolor{blue}{.}}{\textcolor{brown}{hd}}} +: \forall \alpha . \alpha \operatorname{list} \rightarrow \alpha because it limits the usage of List.hd.*
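A short OCaml sketch of these definitions, with the `*` that the rendering dropped from the type restored; the example substitution at the end is invented for illustration.

```ocaml
(* Terms over a signature, and substitutions as association lists. *)
type term = V of string | T of string * term list

type subst = (string * term) list

(* apply sigma t computes sigma(t): variables in the domain are replaced,
   other variables stay, and function symbols are mapped over their args. *)
let rec apply (sigma : subst) (t : term) : term =
  match t with
  | V x -> (try List.assoc x sigma with Not_found -> V x)
  | T (f, args) -> T (f, List.map (apply sigma) args)

(* Example: sigma = {x ↦ g(z)} applied to f(x, y) gives f(g(z), y). *)
let () =
  let sigma = ["x", T ("g", [V "z"])] in
  assert (apply sigma (T ("f", [V "x"; V "y"]))
          = T ("f", [T ("g", [V "z"]); V "y"]))
```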

              A unification problem* is a finite set of equations S = {s1=?t1, …, sn=?tn} -which we can also write as $s_{1} \dot{=} -t_{1} \wedge \ldots \wedge s_{n} \dot{=} t_{n}$. A solution, or -unifier of S, is a -substitution σ such that σ(si) = σ(ti) -for i = 1, …, n. A -most general unifier, for short MGU, is a most general -such substitution.*

              +class="math inline">S = \lbrace s_{1} =^? t_{1}, \ldots, s_{n} =^? t_{n} +\rbrace which we can also write as s_{1} \dot{=} t_{1} \wedge \ldots \wedge s_{n} +\dot{=} t_{n}. A solution, or unifier of S, is a substitution \sigma such that \sigma (s_{i}) = \sigma (t_{i}) for i = 1, \ldots, n. A most general +unifier, for short MGU, is a most general such +substitution.*

              A substitution is idempotent* when σ = σ ∘ σ. If -σ = {x1 ↦ t1, …, xk ↦ tk}, -then σ is idempotent exactly -when no ti -contains any of the variables {x1, …, xn}; -i.e. {x1, …, xn} ∩ Vars (t1,…,tn) = ⌀.*

              +class="math inline">\sigma = \sigma \circ \sigma. If \sigma = \lbrace x_{1} \mapsto t_{1}, \ldots, x_{k} +\mapsto t_{k} \rbrace, then \sigma is idempotent exactly when no t_{i} contains any of the variables \lbrace x_{1}, \ldots, x_{n} \rbrace; +i.e. \lbrace x_{1}, \ldots, x_{n} \rbrace \cap +\operatorname{Vars} (t_{1}, \ldots, t_{n}) = \varnothing.*

              1. Implement an algorithm that, given a set of equations represented as a list of pairs of terms, computes an idempotent most @@ -2544,9 +2536,9 @@

                10 In-class Work and Homework

                All That”**, p. 82.) Modify the implementation of unification to achieve linear space complexity by working with what could be called iterated substitutions. For example, the solution to* {x=?f(y), y=?g(z), z=?a} -should be represented as variable, term pairs (x,f(y)), (y,g(z)), (z,a). +class="math inline">\lbrace x =^? f (y), y =^? g (z), z =^? a +\rbrace should be represented as variable, term pairs +(x, f (y)), (y, g (z)), (z, a). (Hint: iterated substitutions should be unfolded lazily, i.e. only so far that either a non-variable term or the end of the instantiation chain is found.)
              2. @@ -2556,9 +2548,9 @@

                10 In-class Work and Homework

              3. What does it mean that an implementation has junk (as an algebraic structure for a given signature)? Is it bad?
              4. Define a monomorphic algebraic specification (other than, but -similar to, natp or stringp, some useful +similar to, \operatorname{nat}_{p} +or \operatorname{string}_{p}, some useful data type).
              5. Discuss an example of a (monomorphic) algebraic specification where it would be useful to drop some axioms (giving up monomorphicity) @@ -2581,13 +2573,14 @@

                10 In-class Work and Homework

                performs a lot of copying and is not tail-recursive. Optimize it (without changing the type definition).

              6. Add (and specify) isEmpty  : (α,β)map  → bool  -to the example algebraic specification of maps without increasing -the burden on its implementations (i.e. without affecting -implementations of other operations). Hint: equational reasoning might -be not enough; consider an equivalence relation meaning “have the same keys”, defined -and used just in the axioms of the specification.

              7. +class="math inline">\operatorname{isEmpty}: (\alpha, \beta) +\operatorname{map} \rightarrow \operatorname{bool} to the +example algebraic specification of maps without increasing the burden on +its implementations (i.e. without affecting implementations of other +operations). Hint: equational reasoning might be not enough; consider an +equivalence relation \approx +meaning “have the same keys”, defined and used just in the axioms of +the specification.

              Exercise 5: Design an algebraic specification and write a signature for first-in-first-out queues. Provide two @@ -2608,15 +2601,14 @@

              10 In-class Work and Homework

              1. (Ex. 2.2 in *Chris Okasaki “Purely Functional Data Structures”**) In the worst case,* *member* performs -approximately 2d -comparisons, where d -is the depth of the tree. Rewrite *member* to -take no mare than d + 1 -comparisons by keeping track of a candidate element that -might be equal to the query element (say, the last element -for which < returned false) -and checking for equality only when you hit the bottom of the -tree.
              2. +approximately
2 d comparisons, where d is the depth of the tree. Rewrite *member* to take no more than d + 1 comparisons by keeping track of a candidate element that might be equal to the query element (say, the last element for which < returned false) and checking for equality only when you hit the bottom of the tree.
              3. (Ex. 3.10 in *Chris Okasaki “Purely Functional Data Structures”**) The *balance* function currently performs several unnecessary tests: when e.g.* *ins* @@ -2656,8 +2648,8 @@

                1 Plan

                structures.
              4. The point-free programming style. A bit of history: the FP language.
              5. -
              6. Sum over an interval example: $\sum_{n = -a}^b f (n)$.
              7. +
              8. Sum over an interval example: \sum_{n = +a}^b f (n).
              9. Combining multiple results: concat_map.
              10. Interlude: generating all subsets of a set (as list), and as exercise: all permutations of a list.
              11. @@ -3142,7 +3134,7 @@

                4 Point-free Programming

                5 Reductions. More higher-order/list functions

                Mathematics has notation for sum over an interval: $\sum_{n = a}^b f (n)$.

                +class="math inline">\sum_{n = a}^b f (n).

                In OCaml, we do not have a universal addition operator:

                let rec isumfromto f a b = if a > b then 0 else f a + isumfromto f (a+1) blet rec fsumfromto f a b = if a > b then 0. else f a +. @@ -3153,8 +3145,8 @@

                5 Reductions. More (opfromto op base f (a+1) b)
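A sketch of the generic version that the op_from_to fragment above points at, abstracting over the operation and its neutral element; the name follows the fragment, and the usage examples are invented.

```ocaml
(* Generalize isumfromto/fsumfromto: pass the operation and its neutral
   element instead of hard-coding + with 0 or +. with 0. *)
let rec op_from_to op base f a b =
  if a > b then base
  else op (f a) (op_from_to op base f (a + 1) b)

(* sum_{n=1}^{4} n^2 = 30, and the same shape gives products over floats. *)
let () =
  assert (op_from_to ( + ) 0 (fun n -> n * n) 1 4 = 30);
  assert (op_from_to ( *. ) 1. float_of_int 1 5 = 120.)
```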

                Let’s collect the results of a multifunction (i.e. a set-valued function) for a set of arguments, in math notation:

                -

                f(A) = ⋃p ∈ Af(p)

                +

                f (A) = \bigcup_{p \in A} f (p) +

                It is a useful operation over lists with union translated as append:

                let rec concatmap f = function | [] -> [] | a::l -> f a @ @@ -3407,10 +3399,9 @@
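The fragment above is cut off by the diff hunk; a complete sketch of concat_map, together with the subsets interlude mentioned in the plan, might look like this (names and details assumed).

```ocaml
(* Collect the results of a set-valued function, with union read as append. *)
let rec concat_map f = function
  | [] -> []
  | a :: l -> f a @ concat_map f l

(* All subsets of a list, seen as a set: for each element we either drop
   it or keep it in every subset of the rest. *)
let rec subsets = function
  | [] -> [[]]
  | x :: rest ->
    let rs = subsets rest in
    concat_map (fun s -> [s; x :: s]) rs

let () = assert (List.length (subsets [1; 2; 3]) = 8)
```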

                7.1 Representing the honeycomb

                type cell = int * intWe address cells using ‘‘cartesian’’ coordinatesmodule CellSet =and store them in either lists or sets. Set.Make (struct type t = cell let compare = compare end)type task = -{For board ‘‘size’’ N, the -honeycomb coordinates boardsize : int;range from (−2N,−N) to 2N, N.
                +{For board ‘‘size’’ N, the honeycomb +coordinates boardsize : int;range from (- 2 N, +- N) to 2 N, N.
                numislands : int;Required number of islands islandsize : int;and required number of cells in an island. emptycells : CellSet.t;The cells that are initially without honey.}

                @@ -3433,15 +3424,15 @@

                7.1.2 Building the honeycomb

                y)))

                7.1.3 Drawing honeycombs

                We separately generate colored polygons:

                -

                let drawhoneycomb w h task eaten = let i2f = floatofint in let -nx = i2f (4 * task.boardsize + 2) in let ny = i2f (2 * task.boardsize + -2) in let radius = min (i2f w /. nx) (i2f h /. ny) in let x0 = w / 2 in -let y0 = h / 2 in let dx = (sqrt 3. /. 2.) . radius +. 1. inThe -distance between let dy = (3. /. 2.) . radius +. 2. in(x,y) and (x+1,y+1). let drawcell -(x,y) = Array.init 7We draw a closed polygon by placing 6 points (fun i +

                let drawhoneycomb \simw \simh task eaten = let i2f = floatofint in +let nx = i2f (4 * task.boardsize + 2) in let ny = i2f (2 * +task.boardsize + 2) in let radius = min (i2f w /. nx) (i2f h /. ny) in +let x0 = w / 2 in let y0 = h / 2 in let dx = (sqrt 3. /. 2.) . +radius +. 1. inThe distance between let dy = (3. /. 2.) . radius +. +2. in(x, y) and (x + 1, y + 1). let drawcell (x,y) = +Array.init 7We draw a closed polygon by placing 6 points (fun i ->evenly spaced on a circumcircle. let phi = floatofint i . pi /. 3. in x0 + intoffloat (radius . sin phi +. floatofint x . dx), y0 + intoffloat (radius . cos phi +. floatofint y *. dy)) in let @@ -3451,10 +3442,11 @@

                7.1.3 Drawing honeycombs

                let oldempty = List.map (fun p->drawcell p, (0, 0, 0)) (CellSet.elements task.emptycells) in honey @ eaten @ oldempty

                We can draw the polygons to an SVG image:

                -let drawtosvg file w h ?title ?desc curves = let f = openout file -in Printf.fprintf f “<!DOCTYPE -svg PUBLIC”-//W3C//DTD SVG 1.1//EN” +let drawtosvg file \simw \simh ?title ?desc curves = let f = openout +file in Printf.fprintf f +“<!DOCTYPE svg +PUBLIC”-//W3C//DTD SVG 1.1//EN” “http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd”>” w h w h; (match title with None -> () | Some title -> Printf.fprintf f ” @@ -3473,8 +3465,8 @@

                7.1.3 Drawing honeycombs

                #load “graphics.cma”;;

                When compiling we just provide graphics.cma to the command.

                -

                let drawtoscreen w h curves = Graphics.opengraph (” +

                let drawtoscreen \simw \simh curves = Graphics.opengraph (” “stringofint w”x”stringofint h); Graphics.setcolor (Graphics.rgb 50 50 0);We draw a brown background. Graphics.fillrect 0 0 (Graphics.sizex ()) (Graphics.sizey ()); List.iter (fun (points, (r,g,b)) -> @@ -3581,16 +3573,16 @@

                7.4 Generating a solution

                We can test it now:

                let w = 800 and h = 800let ans0 = findtoeat testtask0.boardsize testtask0.islandsize testtask0.numislands testtask0.emptycellslet = -drawtoscreen w h (drawhoneycomb w h +drawtoscreen \simw \simh (drawhoneycomb \simw \simh testtask0 (List.hd ans0))

                But in a more complex case, finding all solutions takes too long:

                let ans1 = findtoeat testtask1.boardsize testtask1.islandsize testtask1.numislands testtask1.emptycellslet = drawtoscreen w h -(drawhoneycomb w h testtask1 (List.hd ans1))

                +class="math inline">\simw \simh +(drawhoneycomb \simw \simh testtask1 (List.hd ans1))

                (See Lec6.ml for definitions of test cases.)

                7.5 Optimizations for Honey Islands

                @@ -3599,9 +3591,8 @@

                7.5 Optimizations for Honey as early as possible.
                • Is the number of solutions generated by the more brute-force -approach above 2n -for n honey cells, or -smaller?
                • +approach above 2^n for n honey cells, or smaller?
              12. We will guard both choices (eating a cell and keeping it in island).
              13. @@ -3663,9 +3654,10 @@

                8 Constraint-based puzzles

              14. A domain of a variable is a set of possible values the variable can have in any solution.
              15. In the Honey Islands puzzle, the variables correspond to -cells and the domains are {Honey , Empty } (either a cell has honey, or -is empty – without distinguishing “initially empty” and “eaten”).
              16. +cells and the domains are \lbrace +\operatorname{Honey}, \operatorname{Empty} \rbrace (either a cell +has honey, or is empty – without distinguishing “initially empty” and +“eaten”).
              17. In the Honey Islands puzzle, the constraints are: a selection of cells that have to be empty, the number and size of connected components of cells that are not empty. The neighborhood graph @@ -3727,7 +3719,7 @@

                2 Evaluation

          • Recall our problems with using flow control expressions like if_then_else in examples from λ-calculus lecture.

          • +class="math inline">\lambda-calculus lecture.

          • There are many technical terms describing various strategies. Wikipedia:

            Strict evaluationArguments are always evaluated completely before @@ -3897,10 +3889,10 @@

            5 Power series and
            • Differential equations idea due to Henning Thielemann. Just an example.

            • -
            • Expression $P (x) = \sum_{i = 0}^n -a_{i} x^i$ defines a polynomial for n < ∞ and a power series for -n = ∞.

            • +
            • Expression P (x) = \sum_{i = 0}^n a_{i} +x^i defines a polynomial for n < +\infty and a power series for n = +\infty.

            • If we define

              let rec lfoldright f l base = match l with | LNil -> base | LCons (a, lazy l) -> f a (lfoldright f l base)

              @@ -3910,14 +3902,14 @@

              5 Power series and
            • But it will not work for infinite power series!

              • Does it make sense to compute the value at x of a power series?
              • +class="math inline">x of a power series?
              • Does it make sense to fold an infinite list?
            • -
            • If the power series converges for x > 1, then when the elements -an get -small, the remaining sum $\sum_{i = -n}^{\infty} a_{i} x^i$ is also small.

            • +
            • If the power series converges for x +> 1, then when the elements a_{n} get small, the remaining sum \sum_{i = n}^{\infty} a_{i} x^i is also +small.

            • lfold_right falls into an infinite loop on infinite lists. We need call-by-name / call-by-need semantics for the argument function f.

              @@ -3925,7 +3917,7 @@

              5 Power series and (a, ll) -> f a (lazy (lazyfoldr f (Lazy.force ll) base))

            • We need a stopping condition in the Horner algorithm step:

              let lhorner x l =This is a bit of a hack, let upd c sum =we hope to -‘‘hit’’ the interval (0, ε]. +‘‘hit’’ the interval (0, \varepsilon]. if c = 0. || absfloat c > epsilonfloat then c +. x *. Lazy.force sum else 0. in lazyfoldr upd l 0.

              let invfact = lmap (fun n -> 1. /. floatofint n) lfactlet e = @@ -3938,7 +3930,7 @@
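A self-contained reconstruction of the lazy Horner fragments above; the llist type, lazy_foldr and the coefficient stream are reassembled from the lecture's pieces, so exact names and details are assumptions.

```ocaml
type 'a llist = LNil | LCons of 'a * 'a llist Lazy.t

(* Call-by-need fold: the recursive call is delayed behind lazy, so it
   also works on infinite lists as long as f does not always force it. *)
let rec lazy_foldr f l base =
  match l with
  | LNil -> base
  | LCons (a, ll) -> f a (lazy (lazy_foldr f (Lazy.force ll) base))

(* Horner step c +. x *. rest, cutting off once coefficients get tiny. *)
let lhorner x l =
  let upd c sum =
    if c = 0. || abs_float c > epsilon_float
    then c +. x *. Lazy.force sum
    else 0. in
  lazy_foldr upd l 0.

(* exp as a power series: coefficients 1/n!. *)
let rec inv_fact_from n fact =
  LCons (1. /. fact,
         lazy (inv_fact_from (n + 1) (fact *. float_of_int (n + 1))))
let e = lhorner 1. (inv_fact_from 0 1.)   (* ≈ 2.718... *)
```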

              5.1 Power series / -> xs | LCons (x,xs), LCons (y,ys) -> LCons (x +. y, lazy (add (Lazy.force xs) (Lazy.force ys)))

            • let rec sub xs ys = match xs, ys with | LNil, -> lmap (fun x-> --.x) ys | , LNil -> xs | LCons +\sim-.x) ys | , LNil -> xs | LCons (x,xs), LCons (y,ys) ->
LCons (x-.y, lazy (sub (Lazy.force xs) (Lazy.force ys)))
            • let scale s = lmap (fun x->s*.x)
            • @@ -3962,16 +3954,16 @@

              5.1 Power series /

            5.2 Differential equations

              -
            • $\frac{\mathrm{d} \sin x}{\mathrm{d} x} +

            • \frac{\mathrm{d} \sin x}{\mathrm{d} x} = \cos x, \frac{\mathrm{d} \cos x}{\mathrm{d} x} = - \sin x, \sin 0 = 0, -\cos 0 = 1$.

            • +\cos 0 = 1.

            • We will solve the corresponding integral equations. Why?

            • We cannot define the integral by direct recursion like this:

              let rec sin = integrate (ofint 0) cosUnary op. let (-:) =and cos = integrate (ofint 1) -:sin lmap (fun x-> -.x)

              +class="math inline">\sim-:) =and cos = integrate (ofint 1) \sim-:sin lmap (fun x-> \sim-.x)

              unfortunately fails:

              Error: This kind of expression is not allowed as right-hand side of ‘let rec'

                @@ -3983,7 +3975,7 @@

                5.2 Differential equations

                knows how to start building the recursive structure.

                let integ xs = lmap (uncurry (/.)) (lzip (xs, posnums))let rec sin = LCons (ofint 0, lazy (integ cos))and cos = LCons (ofint 1, lazy (integ --:sin))

                +\sim-:sin))

              • The complete example would look much more elegant in Haskell.

              • Although this approach is not limited to linear equations, @@ -3991,29 +3983,31 @@

                5.2 Differential equations

                coefficients quickly grow instead of quickly falling…

              • Drawing functions are like in previous lecture, but with open curves.

              • -
              • let plot1D f w scale tbeg -tend = let dt = (tend -. tbeg) /. -ofint w in Array.init w (fun i -> let y = lhorner (dt . ofint i) -f in i, to_int (scale . y))

              • +
              • let plot1D f \simw \simscale \simtbeg \simtend = let dt = (tend -. tbeg) /. ofint w +in Array.init w (fun i -> let y = lhorner (dt . ofint i) f in i, +to_int (scale . y))

              6 Arbitrary precision computation

              • Putting it all together reveals drastic numerical errors for -large x.

                +large x.

                let graph = let scale = ofint h /. ofint 8 in [plot1D sin w h0:(h/2) -scale tbeg:(ofint 0) tend:(ofint 15),
                -(250,250,0); plot1D cos w h0:(h/2) scale
                -tbeg:(ofint 0) tend:(ofint 15), (250,0,250)]let () = -drawtoscreen w h graph

                +class="math inline">\simw \simh0:(h/2) \simscale \simtbeg:(ofint 0) \simtend:(ofint 15),
                +(250,250,0); plot1D cos \simw \simh0:(h/2) \simscale
                +\simtbeg:(ofint 0) \simtend:(ofint 15), (250,0,250)]let () = +drawtoscreen \simw \simh graph

                • Floating-point numbers have limited precision.
                • We break out of Horner method computations too quickly.
                • @@ -4025,7 +4019,7 @@

                  6 Arbitrary precision
                • It does not help – yet.
              • Generate a sequence of approximations to the power series limit -at x.

                +at x.

                let infhorner x l = let upd c sum = LCons (c, lazy (lmap (fun apx -> c+.x*.apx) (Lazy.force sum))) in lazyfoldr upd l (LCons (ofint 0, lazy LNil))

              • @@ -4037,13 +4031,14 @@

                6 Arbitrary precision && f x0 = f x2 -> f x0 | LCons (, lazy tl) -> exact f tl

              • Draw the pixels of the graph at exact coordinates.

                -

                let plot1D f w h0 scale -tbeg tend = let dt = (tend -. tbeg) /. ofint w in -let eval = exact (fun y-> toint (scale . y)) in Array.init w (fun -i -> let y = infhorner (tbeg +. dt . ofint i) f in i, h0 + eval -y)

              • +

                let plot1D f \simw \simh0 \simscale \simtbeg \simtend = let dt = (tend -. tbeg) /. ofint w +in let eval = exact (fun y-> toint (scale . y)) in Array.init w +(fun i -> let y = infhorner (tbeg +. dt . ofint i) f in i, h0 + +eval y)

              • Success! If a power series had every third term contributing we would have to check three terms in the function exact

                -

                let nchain nA0 nB0 lA -lB = let rec nA = LCons (nA0, lazy -(integ (-.lA :. nA))) and nB = -LCons (nB0, lazy (integ (-.lB :. -nB +: lA *:. nA))) in nA, nB

              • +

                let nchain \simnA0 \simnB0 \simlA \simlB = let rec nA = LCons (nA0, lazy (integ +(\sim-.lA :. nA))) and nB = LCons +(nB0, lazy (integ (\sim-.lB :. nB ++: lA *:. nA))) in nA, nB

              7 Circular data @@ -4274,10 +4270,11 @@

              8.2 Example: pretty-printing

              irrelevant.

              let rec grends w grstack = let flush tail =When the stack exceeds width w, yieldallflush it – yield everything -in it.(revconcatmap prep:(GBeg Toofar) -snd grstack) tail inAbove: concatenate in rev. with prep -before each part. Await (function | TE (curp, ) | LE curp as e -> -(match grstack withRemember beginning of groups in the stack.
              +in it.(revconcatmap \simprep:(GBeg +Toofar) snd grstack) tail inAbove: concatenate in rev. with +prep before each part. Await (function | TE (curp, ) | LE +curp as e -> (match grstack withRemember beginning of groups in the +stack.
              | [] -> Yield (e, grends w []) | (begp, ):: when curp-begp > w -> flush (Yield (e, grends w [])) | (begp, gr)::grs -> grends w ((begp, e::gr)::grs)) | GBeg begp -> grends w ((begp, [])::grstack) | @@ -4371,7 +4368,8 @@

              8.2 Example: pretty-printing

*cycle : 'a list -> 'a llist* that creates a lazy list with elements from a standard list, and the whole list as the tail after the last element from the input list.

              -

              *[a1; a2; …; aN]

              +

              *[a1; a2; …; aN]\mapsto

@@ -4408,14 +4406,13 @@

8.2 Example: pretty-printing

Your function cycle can either return LNil or fail for an empty list as argument. 1. Note that *inv_fact* from the lecture defines -the power series for the exp (⋅) -function (exp (x) = ex). -Using *cycle* and +the power series for the \exp +(\cdot) function (\exp (x) = +e^x). Using *cycle* and *inv_fact*, define the power series for sin (⋅) and cos (⋅), and draw their graphs using -helper functions from the lecture script +class="math inline">\sin (\cdot) and \cos (\cdot), and draw their graphs +using helper functions from the lecture script *Lec7.ml*.

Exercise 3: * Modify one of the puzzle solving programs (either from the previous lecture or from your previous @@ -4425,12 +4422,12 @@

8.2 Example: pretty-printing

than computing solutions by the original program.

Exercise 4: Hamming’s problem. Generate in increasing order the numbers of the form 2a13a25a3pkak, -that is numbers not divisible by prime numbers greater than the kth prime number.

+class="math inline">2^{a_{1}} 3^{a_{2}} 5^{a_{3}} \ldots +p_{k}^{a_{k}}, that is numbers not divisible by prime numbers +greater than the kth prime number.

  • In the original Hamming’s problem posed by Dijkstra, k = 3*, which is related
  • +class="math inline">k = 3*, which is related

to* http://en.wikipedia.org/wiki/Regular_number.

@@ -4699,8 +4696,9 @@

3 Monads

  • as if ; was replaced by >>=
  • [] |-> … does not produce anything – as needed by guarding
  • -
  • [()] |-> … (fun _ -> …) () - … i.e. keep without change
  • +
  • [()] |-> … \rightsquigarrow (fun +_ -> …) () \rightsquigarrow … +i.e. keep without change
  • Throwing away the binding argument is a common practice, with infix syntax >> in Haskell, and supported in do-notation @@ -4855,7 +4853,7 @@

    3.1 Monad laws

    • A parametric data type is a monad only if its bind and return operations meet axioms:

      -

      $$ \begin{matrix} +

      \begin{matrix} \operatorname{bind} (\operatorname{return}a) f & \approx & f a\\\\\\ \operatorname{bind}a (\lambda x.\operatorname{return}x) & \approx @@ -4864,7 +4862,7 @@

      3.1 Monad laws

      & \approx & \operatorname{bind}a (\lambda x.\operatorname{bind}b (\lambda -y.c)) \end{matrix} $$

    • +y.c)) \end{matrix}

    • Check that the laws hold for our example monad

      let bind a b = concatmap b alet return x = [x]
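For instance, a quick spot-check of the three laws on sample values (not a proof), assuming the concat_map-based definitions above; the test values are invented.

```ocaml
let concat_map f l = List.concat (List.map f l)
let bind a b = concat_map b a
let return x = [x]

let () =
  let a = [1; 2; 3] in
  let f x = [x; x * 10] in
  let g x = [x + 1] in
  (* left identity: bind (return a) f = f a *)
  assert (bind (return 1) f = f 1);
  (* right identity: bind a return = a *)
  assert (bind a return = a);
  (* associativity *)
  assert (bind (bind a f) g = bind a (fun x -> bind (f x) g))
```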

    @@ -4877,31 +4875,31 @@

    3.2 Monoid laws and
  • mplus : 'a monoid -> 'a monoid -> 'a monoid
  • that meet the laws:

    -

    $$ \begin{matrix} +

    \begin{matrix} \operatorname{mplus}\operatorname{mzero}a & \approx & a \\\\\\ \operatorname{mplus}a\operatorname{mzero} & \approx & a \\\\\\ \operatorname{mplus}a (\operatorname{mplus}b c) & \approx & \operatorname{mplus} (\operatorname{mplus}a b) c \end{matrix} -$$

  • +

  • We will define fail as synonym for mzero and infix ++ for mplus.

  • Fusing monads and monoids gives the most popular general flavor of monads which we call monad-plus after Haskell.

  • Monad-plus requires additional axioms that relate its “addition” and its “multiplication”.

    -

    $$ \begin{matrix} +

    \begin{matrix} \operatorname{bind}\operatorname{mzero}f & \approx & \operatorname{mzero}\\\\\\ \operatorname{bind}m (\lambda x.\operatorname{mzero}) & \approx & -\operatorname{mzero} \end{matrix} $$

  • -
  • Using infix notation with as -mplus, 0 -as mzero, as -bind and 1 as return, we -get monad-plus axioms

    -

    $$ \begin{matrix} +\operatorname{mzero} \end{matrix}

  • +
  • Using infix notation with \oplus +as mplus, \boldsymbol{0} +as mzero, \vartriangleright as bind and +\boldsymbol{1} as return, +we get monad-plus axioms

    +

    \begin{matrix} \boldsymbol{0} \oplus a & \approx & a \\\\\\ a \oplus \boldsymbol{0} & \approx & a \\\\\\ a \oplus (b \oplus c) & \approx & (a \oplus b) \oplus c\\\\\\ @@ -4914,7 +4912,7 @@

    3.2 Monoid laws and \boldsymbol{0} \vartriangleright f & \approx & \boldsymbol{0}\\\\\\ a \vartriangleright (\lambda x.\boldsymbol{0}) & \approx & -\boldsymbol{0} \end{matrix} $$

  • +\boldsymbol{0} \end{matrix}

  • The list type has a natural monad and monoid structure

    let mzero = [] let mplus = (@) let bind a b = concatmap b a let return a = [a]
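And a quick spot-check of these monad-plus axioms for the list instance, on sample values rather than in general; the test values are invented.

```ocaml
let concat_map f l = List.concat (List.map f l)
let mzero = []
let mplus = ( @ )
let bind a b = concat_map b a

let () =
  let a = [1; 2] and b = [3] and c = [4; 5] in
  let f x = [x; -x] in
  assert (mplus mzero a = a && mplus a mzero = a);
  assert (mplus a (mplus b c) = mplus (mplus a b) c);
  assert (bind mzero f = mzero);
  assert (bind a (fun _ -> mzero) = (mzero : int list))
```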

  • @@ -4977,17 +4975,16 @@

    4 Monad “flavors”

    We will not cover it.

  • Probabilistic computation:

    choose : float -> ’a monad -> ’a monad -> ’a monad

    -

    satisfying the laws with apb -for choose p a b and pq for p*.q, -0 ≤ p, q ≤ 1:

    -

    $$ \begin{matrix} +

    satisfying the laws with a \oplus _{p} +b for choose p a b and pq for p*.q, 0 \leqslant p, q \leqslant 1:

    +

    \begin{matrix} a \oplus _{0} b & \approx & b \\\\\\ a \oplus _{p} b & \approx & b \oplus _{1 - p} a\\\\\\ a \oplus _{p} (b \oplus _{q} c) & \approx & \left( a \oplus _{\frac{p}{p + q - pq}} b \right) \oplus _{p + q - pq} c\\\\\\ - a \oplus _{p} a & \approx & a \end{matrix} $$

  • + a \oplus _{p} a & \approx & a \end{matrix}

  • Parallel computation as monad with access and parallel bind:

    parallel :’a monad-> ’b monad-> (’a -> ’b -> ’c monad) -> ’c monad

    @@ -5178,8 +5175,8 @@

    6.2 Monads as computation

    let assemblyLine w = perform c <– makeChopsticks w c’ <– polishChopsticks c c’’ <– wrapChopsticks c’ return c’’

  • Any expression can be spread over a monad, e.g. for λ-terms:

    -

    $$ \begin{matrix} +class="math inline">\lambda-terms:

    +

    \begin{matrix} \llbracket N \rrbracket = & \operatorname{return}N & \text{(constant)}\\\\\\ \llbracket x \rrbracket = & \operatorname{return}x & @@ -5194,7 +5191,7 @@

    6.2 Monads as computation

    \rrbracket (\lambda v_{a} .\operatorname{bind} \llbracket b \rrbracket (\lambda v_{b} .v_{a} v_{b})) & \text{(application)} \end{matrix} -$$

  • +

  • When an expression is spread over a monad, its computation can be monitored or affected without modifying the expression.

  • @@ -5466,13 +5463,12 @@

    8.4 The state monad

  • The state monad is useful to hide passing-around of a “current” value.

  • We will rename variables in λ-terms to get rid of possible name +class="math inline">\lambda-terms to get rid of possible name clashes.

      -
    • This does not make a λ-term safe for multiple steps of -β-reduction. Find a -counter-example.
    • +
    • This does not make a \lambda-term +safe for multiple steps of \beta-reduction. Find a counter-example.
  • type term =| Var of string| Lam of string * term| App of term * term

  • @@ -5672,10 +5668,9 @@

    10.1 The probability monad

    • choose : float -> ’a monad -> ’a monad -> ’a monad

      choose p a b represents an event or distribution which -is a with probability p and is b with probability 1 − p.

    • +is a with probability p and is b +with probability 1 - p.

    • val pick : (’a * float) list -> ’a t

      A result from the provided distribution over values. The argument must be a probability distribution: positive values summing to @@ -5729,15 +5724,15 @@

      10.1 The probability monad

      probability distribution – naive implementation. type ’a t = (’a * float) list
      let bind a b = merge``x w.p. p and then y w.p. q happens =[y, q*.p | (x,p) <- a; -(y,q) <- b x]y results w.p. pq. let return a = [a, -1.]Certainly a. end include M include MonadOps (M) let -choose p a b = List.map (fun (e,w) -> e, p.w) a @ List.map (fun -(e,w) -> e, (1. -.p).w) b let pick dist = dist let -uniform elems = normalize (List.map (fun e->e,1.) elems)let coin = -[true, 0.5; false, 0.5] let flip p = [true, p; false, 1. -. p]

      +class="math inline">p and then y w.p. q happens =[y, q*.p | (x,p) <- a; (y,q) +<- b x]y results w.p. p +q. let return a = [a, 1.]Certainly a. end include M +include MonadOps (M) let choose p a b = List.map (fun (e,w) -> e, +p.w) a @ List.map (fun (e,w) -> e, (1. -.p).w) b let pick +dist = dist let uniform elems = normalize (List.map (fun +e->e,1.) elems)let coin = [true, 0.5; false, 0.5] let flip p = [true, +p; false, 1. -. p]

      let prob p m = m |> List.filter (fun (e,) -> p e)All cases where p holds, |> List.map snd |> List.foldleft (+.) 0.add up.
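The fragments above are hard to read after rendering; a self-contained sketch of the same naive exact-distribution monad, with an invented two-coin example, might look like this.

```ocaml
(* A distribution is a list of (value, probability) pairs. *)
type 'a dist = ('a * float) list

let return (a : 'a) : 'a dist = [a, 1.]

(* "x w.p. p and then y w.p. q" happens with probability q *. p. *)
let bind (a : 'a dist) (b : 'a -> 'b dist) : 'b dist =
  List.concat
    (List.map (fun (x, p) -> List.map (fun (y, q) -> y, q *. p) (b x)) a)

let choose p a b =
  List.map (fun (e, w) -> e, p *. w) a
  @ List.map (fun (e, w) -> e, (1. -. p) *. w) b

let flip p : bool dist = [true, p; false, 1. -. p]

(* Probability that a predicate holds: add up the matching weights. *)
let prob pred m =
  m |> List.filter (fun (e, _) -> pred e)
    |> List.map snd
    |> List.fold_left ( +. ) 0.

(* Two independent fair coin flips: P(at least one heads) = 0.75. *)
let () =
  let d = bind (flip 0.5) (fun a -> bind (flip 0.5) (fun b -> return (a || b))) in
  assert (abs_float (prob (fun x -> x) d -. 0.75) < 1e-9)
```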
      @@ -5794,13 +5789,12 @@

      10.3 Conditional monad?

    • We could use guard – conditional probabilities!

        -
      • P(A|B) +
      • P (A|B)
          -
        • Compute what is needed for both A and B.
        • -
        • Guard B.
        • -
        • Return A.
        • +
        • Compute what is needed for both A +and B.
        • +
        • Guard B.
        • +
        • Return A.
    • For the exact distribution monad it turns out very easy – we just @@ -5824,10 +5818,10 @@

      10.3 Conditional q*.p | (x,p) <- a; (y,q) <- b x] let return a = [a, 1.] let mzero = []Measure equal 0 everywhere is OK. let mplus = List.append end include MP include MonadPlusOps (MP) let choose p a b =It isn’t -a w.p. p & -b w.p. (1−p) -since a and b List.map (fun (e,w) -> e, -p.w) a @are not +a w.p. p & +b w.p. (1 - p) since +a and b List.map (fun (e,w) -> e, p.w) +a @are not normalized!List.map (fun (e,w) -> e, (1. -.p).w) b let pick dist = dist

      let uniform elems = normalize (List.map (fun e->e,1.) elems) let @@ -6003,9 +5997,9 @@

      10.4 Burglary example:
    • module Burglary (P : CONDPROBAB) = struct open P type whathappened =
      Safe | Burgl | Earthq | Burglnearthq

      -

      let check johncalled marycalled radio = perform
      +

      let check \simjohncalled \simmarycalled \simradio = perform
      earthquake <– flip 0.002; guard (radio = None || radio = Some earthquake); burglary <– flip 0.001; let alarmp = match burglary, earthquake with | false, false -> 0.001 | false, true -> 0.29 | @@ -6079,10 +6073,9 @@

      11 Lightweight cooperative library, parallel \concat{e}{\rsub{a}} \concat{e}{\rsub{b}} c is equivalent to

      -

      perform let a = ea in let b = -eb in x -<– a; y <– b; c x y

      +

      perform let a = e_{a} in let b = +e_{b} in x <– a; y <– b; c x +y

      • We will follow this model, with an imperative implementation.
      • In any case, do not call run or access @@ -7252,7 +7245,7 @@

        5.2 Algorithmic optimizations

        • (All times measured with profiling turned on.)

        • Optim0.ml asymptotic time complexity: 𝒪(n2), time: 22.53s.

          +class="math inline">\mathcal{O} (n^2), time: 22.53s.

          • Garbage collection takes 6% of time.
              @@ -7264,7 +7257,7 @@

              5.2 Algorithmic optimizations

              Hashtbl.find h w in Hashtbl.replace h w (c+1); h with Notfound -> Hashtbl.add h w 1; hlet iterate f h = Hashtbl.iter f h
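A guess at the complete O(n) counting step behind the truncated fragment above, reassembled as ordinary OCaml; details are assumed.

```ocaml
(* Count word occurrences with a hashtable (created with initial size 511,
   as elsewhere in the lecture). *)
let count_words words =
  let h = Hashtbl.create 511 in
  List.iter
    (fun w ->
       let c = try Hashtbl.find h w with Not_found -> 0 in
       Hashtbl.replace h w (c + 1))
    words;
  h

let iterate f h = Hashtbl.iter f h
```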

              Optim1.ml asymptotic time complexity: 𝒪(n), time: 0.63s.

              +class="math inline">\mathcal{O} (n), time: 0.63s.

              • Garbage collection takes 17% of time.
              @@ -7273,7 +7266,7 @@

              5.2 Algorithmic optimizations

              let k,c,h = List.foldleft (fun (k,c,h) w -> if k = w then k, c+1, h else w, 1, ((k,c)::h)) (““, 0, []) words in (k,c)::h

              Optim2.ml asymptotic time complexity: 𝒪(nlogn), time: 1s.

              +class="math inline">\mathcal{O} (n \log n), time: 1s.

              • Garbage collection takes 40% of time.
              @@ -7281,8 +7274,8 @@

              5.2 Algorithmic optimizations

              attempt it.

            • With algorithmic optimizations we should be concerned with asymptotic complexity in terms of the 𝒪(⋅) notation, but we will not pursue -complexity analysis in the remainder of the lecture.

            • +class="math inline">\mathcal{O} (\cdot) notation, but we will not +pursue complexity analysis in the remainder of the lecture.

            5.3 Low-level optimizations

              @@ -7333,9 +7326,9 @@

              5.4 Comparison of
            • We perform a rough comparison of association lists, tree-based maps and hashtables. Sets would give the same results.
            • We always create hashtables with initial size 511.
            • -
            • 107 operations of: -adding an association (creation), finding a key that is in the map, -finding a key out of a small number of keys not in the map.
            • +
            • 10^7 operations of: adding an +association (creation), finding a key that is in the map, finding a key +out of a small number of keys not in the map.
            • First row gives sizes of maps. Time in seconds, to two significant digits.
            @@ -8237,8 +8230,8 @@

            6.1.1 Example: Finding email “EDU” | “org” | “ORG” | “com” | “COM”of address.rule email state = parse| newlineCheck state before moving on.{ report state lexbuf; nextline lexbuf; email Seek -lexbuf }Detected possible start of -address.| (addrchar+ as name) atsepsymb (addrchar+ as addr) { email +lexbuf }\swarrowDetected possible start +of address.| (addrchar+ as name) atsepsymb (addrchar+ as addr) { email (Addr (false, name, [addr])) lexbuf }

            domsepsymb (addrdom as dom)Detected possible finish of address. { let
            @@ -8346,21 +8339,21 @@

            6.2 Parsing with Menhir

            optimization)

            %token < int > INT%token PLUS TIMES%left PLUS%left TIMESMultiplication has higher priority.%%expression:| i = INT { i -} Without inlining, would not +}\swarrow Without inlining, would not distinguish priorities.| e = expression; o = op; f = expression { o e f }%inline op:Inline operator – generate corresponding rules.| PLUS { ( + ) }| TIMES { ( * ) }

          • -
          • Menhir is an LR (1) parser -generator, i.e. it fails for grammars where looking one token ahead, -together with precedences, is insufficient to determine whether a rule -applies.

            +
          • Menhir is an \operatorname{LR} +(1) parser generator, i.e. it fails for grammars where looking +one token ahead, together with precedences, is insufficient to determine +whether a rule applies.

            • In particular, only unambiguous grammars.
          • -
          • Although LR (1) grammars are a -small subset of context free grammars, the semantic actions can -depend on context: actions can be functions that take some form of -context as input.

          • +
          • Although \operatorname{LR} (1) +grammars are a small subset of context free grammars, the +semantic actions can depend on context: actions can be functions that +take some form of context as input.

          • Positions are available in actions via keywords $startpos(x) and $endpos(x) where x is name given @@ -8393,7 +8386,7 @@

            6.2.1 Example: parsing number val ( + ): number -> number -> number val ( - ): number -> number -> number val ( ): number -> number -> number val ( / ): number -> number -> number val ( -): number -> numberend>%start +class="math inline">\sim-): number -> numberend>%start <Semantics.number> main%{ open Semantics %}

            %%main:| e = expr EOL { e }expr:| i = INT { inject i }| LPAREN e = expr RPAREN { e }| e1 = expr PLUS e2 = expr { e1 + e2 }| e1 = expr MINUS @@ -8402,8 +8395,8 @@

            6.2.1 Example: parsing
          • File calc.ml:

            module FloatSemantics = struct type number = float let inject = floatofint let ( + ) = ( +. ) let ( - ) = ( -. ) let ( * ) = ( *. ) let -( / ) = ( /. ) let (- ) = (-. )endmodule FloatParser = +( / ) = ( /. ) let (\sim- ) = (\sim-. )endmodule FloatParser = Parser.Make(FloatSemantics)

            let () = let stdinbuf = Lexing.fromchannel stdin in while true do let linebuf = Lexing.fromstring (Lexer.line stdinbuf) in try
            @@ -8679,10 +8672,10 @@

            4 Bruteforce
          • Pairs of words are much less frequent than single words so storing them means less work for postings lists merging.

          • Can result in much bigger index size: min (W2,N) where -W is the number of distinct -words and N the total number -of words in documents.

          • +class="math inline">\min (W^2, N) where W is the number of distinct words and N the total number of words in +documents.

          • Invocation that gives us stack backtraces:

            ocamlbuild InvIndex3.native -cflag -g -libs unix; export OCAMLRUNPARAM="b"; ./InvIndex3.native

          • Time: 2.4s – disappointing.

          • @@ -8829,12 +8822,12 @@

            1 Zippers

          type btree = Tip | Node of int * btree * btree
          -

          $$ \begin{matrix} +

          \begin{matrix} T & = & 1 + xT^2\\\\\\ \frac{\partial T}{\partial x} & = & 0 + T^2 + 2 xT \frac{\partial T}{\partial x} = TT + 2 xT \frac{\partial T}{\partial x} -\end{matrix} $$

          +\end{matrix}

          type btree_dir = LeftBranch | RightBranch
           type btree_deriv =
          @@ -8847,7 +8840,7 @@ 

          1 Zippers

          bottom.

          The part closest to the location should be on top.

        • Revisiting equations for trees and lists:

          -

          $$ \begin{matrix} +

          \begin{matrix} T & = & 1 + xT^2\\\\\\ \frac{\partial T}{\partial x} & = & 0 + T^2 + 2 xT \frac{\partial @@ -8856,7 +8849,7 @@

          1 Zippers

          L (y) & = & 1 + yL (y)\\\\\\ L (y) & = & \frac{1}{1 - y}\\\\\\ \frac{\partial T}{\partial x} & = & T^2 L (2 xT) \end{matrix} -$$

          +

          I.e. the context can be stored as a list with the root as the last node.
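A minimal zipper sketch matching this reading of the equations; the frame representation and function names are assumptions based on the fragments above.

```ocaml
type btree = Tip | Node of int * btree * btree
type btree_dir = LeftBranch | RightBranch

(* One context frame per step down: the direction taken, the element at
   that node, and the subtree we did not enter. Root-most frame is last. *)
type frame = btree_dir * int * btree
type location = { sub : btree; ctx : frame list }

let go_left loc = match loc.sub with
  | Node (n, l, r) -> Some { sub = l; ctx = (LeftBranch, n, r) :: loc.ctx }
  | Tip -> None

let go_right loc = match loc.sub with
  | Node (n, l, r) -> Some { sub = r; ctx = (RightBranch, n, l) :: loc.ctx }
  | Tip -> None

let go_up loc = match loc.ctx with
  | (LeftBranch, n, r) :: ctx -> Some { sub = Node (n, loc.sub, r); ctx }
  | (RightBranch, n, l) :: ctx -> Some { sub = Node (n, l, loc.sub); ctx }
  | [] -> None
```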

            @@ -8909,15 +8902,14 @@

            1.1 Example: Context rewriting

            as we can, but changing the whole expression as little as possible.
          • We can illustrate our algorithm using mathematical notation. Let:
              -
            • x be the thing we pull -out
            • -
            • C[e] and D[e] be big expressions -with subexpression e
            • -
            • operator stand for one of: *,+
            • +
            • x be the thing we pull out
            • +
            • C [e] and D [e] be big expressions with subexpression +e
            • +
            • operator \circ stand for one of: +\ast, +
            -$$ \begin{matrix} + \begin{matrix} D [(C [x] \circ e_{1}) \circ e_{2}] & \Rightarrow & D [C [x] \circ (e_{1} \circ e_{2})]\\\\\\ @@ -8930,7 +8922,7 @@

            1.1 Example: Context rewriting

            e_{1} e_{2}]\\\\\\ D [e \circ C [x]] & \Rightarrow & D [C [x] \circ e] \end{matrix} -$$
          • +
          • First the groundwork:

          type op = Add | Multype expr = Val of int | Var of string | App of @@ -8951,14 +8943,15 @@

          1.1 Example: Context rewriting

        let rec pullout loc = match loc.ctx with | [] -> locDone.| (Leftarg, op, l) :: upctx ->D[eC[x]] ⇒ D[C[x]∘e] -pullout {loc with ctx=(Rightarg, op, l) :: upctx} | (Rightarg, op1, e1) -:: (, op2, e2) :: upctx when op1 = op2 ->D[(C[x]∘e1)∘e2]/D[e2∘(C[x]∘e1)] ⇒ D[C[x]∘(e1e2)] -pullout {loc with ctx=(Rightarg, op1, App(e1,op1,e2)) :: upctx} | -(Rightarg, Add, e1) :: (, Mul, e2) :: upctx -> pullout {loc with -ctx=D[(C[x]+e1)e2]/D[e2(C[x]+e1)] ⇒ D[C[x]e2+e1e2] +class="math inline">D [e \circ C [x]] \Rightarrow D [C [x] \circ +e] pullout {loc with ctx=(Rightarg, op, l) :: upctx} | (Rightarg, +op1, e1) :: (, op2, e2) :: upctx when op1 = op2 ->D [(C [x] \circ e_{1}) \circ e_{2}] / D [e_{2} \circ +(C [x] \circ e_{1})] \Rightarrow D [C [x] \circ (e_{1} \circ +e_{2})] pullout {loc with ctx=(Rightarg, op1, App(e1,op1,e2)) :: +upctx} | (Rightarg, Add, e1) :: (, Mul, e2) :: upctx -> pullout {loc +with ctx=D [(C [x] + e_{1}) e_{2}] / D [e_{2} +(C [x] + e_{1})] \Rightarrow D [C [x] e_{2} + e_{1} e_{2}] (Rightarg, Mul, e2) :: (Rightarg, Add, App(e1,Mul,e2)) :: upctx} | (Rightarg, op, r)::upctx ->Move up the context. pullout {sub=App(loc.sub, op, r); ctx=upctx}

        @@ -9083,11 +9076,11 @@

        2.1 Example using Froc

        display x y >>= fun -> r >>= display x y
      • let growat (x, depth, upd) = let xl = x-f2i -(width.(2.0(-.(i2f +(width.(2.0(\sim-.(i2f (depth+1))))) in let l, updl = changeable (Leaf (xl, (depth+1)20)) in
        let xr = x+f2i (width
        .(2.0
        (-.(i2f (depth+1))))) in let r, updr = +class="math inline">\sim-.(i2f (depth+1))))) in let r, updr = changeable (Leaf (xr, (depth+1)
        20)) in write upd (Node (x, depth*20, l, r));Update the old leaf propagate ();and keep handles to make future updates. [xl, depth+1, updl; xr, depth+1, updr]

      • @@ -9252,10 +9245,8 @@

        3 Functional Reactive happens.

• Turning behaviors and events from functions of time into input-output streams is similar to optimizing intersection of ordered lists from O(mn) to O(m+n) time.

      • +lists from O (mn) to O (m + n) time.

      • Now we can in turn define events in terms of behaviors:

        type ’a event = ’a option behavior

        although it betrays the discrete character of events (happening at @@ -9372,8 +9363,8 @@

        4 Reactivity by Stream sampling times.

        let integral fb = let rec loop t0 acc uts bs = let Cons ((,t1), uts) = Lazy.force uts in let Cons (b, bs) = Lazy.force bs in let acc = acc +. -(t1 -. t0) *. b inb = fb (t1), acc  ≈ ∫t ≤ t0f. +(t1 -. t0) *. b inb =\operatorname{fb} +(t_{1}), \operatorname{acc} \approx \int_{t \leqslant t_{0}} f. Cons (acc, lazy (loop t1 acc uts bs)) in memo1 (fun uts -> lazy ( let Cons ((,t), uts’) = Lazy.force uts in Cons (0., lazy (loop t 0. uts’ (fb $ uts)))))

        @@ -9407,7 +9398,7 @@

        1 The Paddle Game example

        ->
        fillrect (f2i tx+x) (f2i ty+y) w h | Circle (x, y, r) -> fillcircle (f2i tx+x) (f2i ty+y) r | Group scs -> List.iter (aux tx ty) -scs Set color for +scs\swarrow Set color for sc objects.| Color (c, sc) -> setcolor c; aux tx ty sc | Translate (x, y, sc) -> aux (tx+.x) (ty+.y) sc in
        cleargraph ();‘‘Fast and clean’’ removing of previous picture. aux 0. 0. @@ -9461,10 +9452,10 @@

        1 The Paddle Game example

• Unfortunately OCaml, being an eager language, does not let us encode recursive behaviors in an elegant way. We need to unpack behaviors and events as functions of the input stream.
      • -
      • xbounce ->> (-.) event is +
      • xbounce ->> (\sim-.) event is just the negation function happening at each horizontal bounce.
      • stepaccum vel (xbounce ->> (-.)) behavior is vel value +class="math inline">\sim-.)) behavior is vel value changing sign at each horizontal bounce.
      • liftB intoffloat (integral xvel) +* width /* !*2 – first integrate velocity, then truncate it to integers and offset to the middle of the @@ -9476,19 +9467,19 @@

        1 The Paddle Game example

    let pbal vel = let rec xvel uts = stepaccum vel (xbounce ->> -(-.)) $ uts and xvel = {memof = xvel; -memor = None} and xpos uts = (liftB intoffloat (integral xvel) +* width -/* !2) $ uts and xpos = {memof = xpos; memor = None} and xbounce uts -= whenB ((xpos > width -* !27) || (xpos <* !27)) -$ uts and xbounce = {memof = xbounce; memor = None} in let rec yvel uts -= (stepaccum vel (ybounce ->> (-.))) $ uts and yvel = {memof = yvel; memor -= None} and ypos uts = (liftB intoffloat (integral yvel) + height -/* !2) $ uts and ypos = {memof = ypos; memor = None} and ybounce uts -= whenB ( (ypos > height -* !27) || ((ypos <* -!17) && (ypos >* !7) && (xpos >* -mousex) &&* (xpos <* mousex +* !*50))) $ uts and ybounce = -{memof = ybounce; memor = None} in liftB2 (fun x y -> Color +(\sim-.)) $ uts and xvel = {memof = +xvel; memor = None} and xpos uts = (liftB intoffloat (integral xvel) +* +width /* !2) $ uts and xpos = {memof = xpos; memor = None} and +xbounce uts = whenB ((xpos > width -* !27) || (xpos +<* !27)) $ uts and xbounce = {memof = xbounce; memor = None} in +let rec yvel uts = (stepaccum vel (ybounce ->> (\sim-.))) $ uts and yvel = {memof = yvel; +memor = None} and ypos uts = (liftB intoffloat (integral yvel) + +height /* !2) $ uts and ypos = {memof = ypos; memor = None} and +ybounce uts = whenB ( (ypos > height -* !27) || ((ypos +<* !17) && (ypos >* !7) && (xpos +>* mousex) &&* (xpos <* mousex +* !*50))) $ uts and +ybounce = {memof = ybounce; memor = None} in liftB2 (fun x y -> Color (Graphics.red, Circle (x, y, 6))) xpos ypos

    • Invocation:

      @@ -9692,10 +9683,10 @@

      6 Direct Control

    • We follow our (or Lwt) implementation of lightweight threads, adapting it to the need of cancelling flows.

      module F = Froctype ’a result =| Return of 'aNotifications to cancel when cancelled.| -Sleep of (’a -> unit) list * F.cancel ref list| Cancelled| Link of ’a -stateand ’a state = {mutable state : ’a result}type cancellable = unit -state

    • +class="math inline">\downarrowNotifications to cancel when +cancelled.| Sleep of (’a -> unit) list * F.cancel ref list| +Cancelled| Link of ’a stateand ’a state = {mutable state : ’a +result}type cancellable = unit state

    • Functions find, wakeup, connect are as in lecture 8 (but connecting to cancelled thread cancels the other thread).

    • @@ -9744,12 +9735,12 @@

      6 Direct Control

      (perform
      await mbuttonpressed;Start when button down. let opn = ref [] in
      repeat (perform mpos <– await mousemove;Add next position to line. emit (opn := mpos -:: !opn; !opn :: !cld))
      -until:mbuttonreleased;Start new shape. emit (cld := !opn :: !cld; -opn := []; [] :: !cld))let painter, cancelpainter = behaviorflow [] -painterlet () = reactimate painter

      +class="math inline">\swarrowAdd next position to line. emit (opn +:= mpos :: !opn; !opn :: !cld))
      +\simuntil:mbuttonreleased;\swarrowStart new shape. emit (cld := !opn :: +!cld; opn := []; [] :: !cld))let painter, cancelpainter = behaviorflow +[] painterlet () = reactimate painter

    1 Flows and state

    @@ -9799,14 +9790,15 @@

    7.1 Calculator Flow

    <– repeat
    (performEnter the digits of a number (on later turns d <– await digits;starting from the second digit) emit (now := 10. *. !now +. d; -!now)) until:ops;until operator button -is pressed.
    +!now)) \simuntil:ops;until operator +button is pressed.
    emit (now := !f !now; f := op !now; !now); d <– -repeatCompute the result -and ‘‘store away’’ the operator.(perform op <– await ops; return (f -:= op !now)) until:digits;The user can -pick a different operator. emit (now := d; !now))Reset the state to a -new number.let calce, cancelcalc = eventflow calcNotifies display +repeat\nwarrowCompute the +result and ‘‘store away’’ the operator.(perform op <– await ops; +return (f := op !now)) \simuntil:digits;The user can pick a +different operator. emit (now := d; !now))Reset the state to a new +number.let calce, cancelcalc = eventflow calcNotifies display update.

    7.2 Tk: LablTk

      @@ -9830,53 +9822,53 @@

      7.2 Tk: LablTk

      edit fields) are for entering info from keyboard.

      • Actions are callback functions passed as the command argument.
      • +class="math inline">\simcommand argument.
    • Frames in Tk group widgets.

• The parent is sent as the last argument, after optional labelled arguments.

      let top = Tk.openTk ()let btnframe = Frame.create relief:’Groove borderwidth:2 toplet buttons = Array.map +class="math inline">\simrelief:’Groove \simborderwidth:2 toplet buttons = Array.map (Array.map (function | text, ’Dot -> Button.create text command:(fun () -> F.send dot ()) +class="math inline">\simtext \simcommand:(fun () -> F.send dot ()) btnframe | text, ’Di d -> Button.create text command:(fun () -> F.send digit d) +class="math inline">\simtext \simcommand:(fun () -> F.send digit d) btnframe | text, ’O f -> Button.create text command:(fun () -> F.send op f) +class="math inline">\simtext \simcommand:(fun () -> F.send op f) btnframe)) layoutlet result = Label.create text:“0” relief:’Sunken top

    • +class="math inline">\simtext:“0” \simrelief:’Sunken top

    • GUI toolkits have layout algorithms, so we only need to tell which widgets hang together and whether they should fill all available space etc. – via pack, or grid for “rectangular” organization.

    • ~fill: the allocated space in `X, `Y, `Both or `None axes; ~expand: maximally how much space is allocated, or only as much as needed.

    • ~anchor: allows gluing a widget in a particular direction (`Center, `E, `Ne etc.).

    • The grid packing flexibility: ~columnspan and ~rowspan.

    • configure functions accept the same arguments as create but change existing widgets.

    • let () =
        Wm.title_set top "Calculator";
        Tk.pack [result] ~side:`Top ~fill:`X;
        Tk.pack [btnframe] ~side:`Bottom ~expand:true;
        Array.iteri (fun column ->
          Array.iteri (fun row button ->
            Tk.grid ~column ~row [button])) buttons;
        Wm.geometry_set top "200x200";
        F.notify_e calc_e (fun now ->
          Label.configure ~text:(string_of_float now) result);
        Tk.mainLoop ()

    @@ -9892,33 +9884,33 @@

    7.3 GTk+: LablGTk

  • Our events are called signals in GTk+.
  • Registering a notification is called connecting a signal handler, e.g. button#connect#clicked ~callback:hello, which takes ~callback:(unit -> unit) and returns a GtkSignal.id.
    • As with Froc notifications, multiple handlers can be attached.
  • GTk+ events are a subclass of signals related to more specific window events, e.g. window#event#connect#delete ~callback:delete_event.
  • GTk+ event callbacks take more info: ~callback:(event -> unit) for some type event.
  • Automatic layout (aka. packing) seems less sophisticated than in Tk:

    • only horizontal and vertical boxes,

    • therefore ~fill is binary and ~anchor is replaced by ~from `START or `END.
  • Automatic grid layout is called table.

    • ~fill and ~expand take `X, `Y, `BOTH, `NONE.
  • The coerce method casts the type of the object (in @@ -9931,45 +9923,46 @@

    7.3 GTk+: LablGTk

  • The model part of the application doesn’t change.

  • Setup:

    let _ = GtkMain.Main.init ()
    let window = GWindow.window ~width:200 ~height:200 ~title:"Calculator" ()
    let top = GPack.vbox ~packing:window#add ()
    let result = GMisc.label ~text:"0" ~packing:top#add ()
    let btnframe = GPack.table ~rows:(Array.length layout)
        ~columns:(Array.length layout.(0)) ~packing:top#add ()

  • Button actions:

    let buttons = Array.map
        (Array.map (function
           | label, `Dot ->
             let b = GButton.button ~label () in
             let _ = b#connect#clicked ~callback:(fun () -> F.send dot ()) in b
           | label, `Di d ->
             let b = GButton.button ~label () in
             let _ = b#connect#clicked ~callback:(fun () -> F.send digit d) in b
           | label, `O f ->
             let b = GButton.button ~label () in
             let _ = b#connect#clicked ~callback:(fun () -> F.send op f) in b))
        layout

  • Button layout, result notification, start application:

    let delete_event _ = GMain.Main.quit (); false
    let () =
      let _ = window#event#connect#delete ~callback:delete_event in
      Array.iteri (fun column ->
        Array.iteri (fun row button ->
          btnframe#attach ~left:column ~top:row
            ~fill:`BOTH ~expand:`BOTH (button#coerce))) buttons;
      F.notify_e calc_e (fun now -> result#set_label (string_of_float now));
      window#show ();
      GMain.Main.main ()

  • Functional Programming

  • Zippers, Reactivity, GUIs

    Exercise 1: Introduce the operators −, / into the context rewriting “pull out subexpression” example. Remember that they are not commutative.
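    As a hint only (not part of the exercise statement): non-commutativity typically forces separate context constructors for the left and right argument positions; a sketch with illustrative names:

    type expr =
      | Num of int
      | Add of expr * expr | Mul of expr * expr
      | Sub of expr * expr | Div of expr * expr

    type context =                      (* where the hole sits matters for - and / *)
      | Top
      | AddC of context * expr          (* hole + e; order irrelevant *)
      | MulC of context * expr          (* hole * e; order irrelevant *)
      | SubL of context * expr          (* hole - e *)
      | SubR of expr * context          (* e - hole *)
      | DivL of context * expr          (* hole / e *)
      | DivR of expr * context          (* e / hole *)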

    Exercise 2: Add to the paddle game example:

    @@ -10080,7 +10073,7 @@

    Chapter 11

    with new constructs:

    • lambda calculus: variables Var, λ-abstractions Abs, function applications App;
    • arithmetics: variables Var, constants Num, addition Add, multiplication Mult; …
    • @@ -10120,7 +10113,7 @@

      Chapter 11

      dedicated variant.

      let eval_var wrap sub (s : var) =
        try List.assoc s sub with Not_found -> wrap s

      type 'a lambda =                   (* Here we define the sub-language of λ-expressions. *)
        | VarL of var
        | Abs of string * 'a
        | App of 'a * 'a

      (* During evaluation, we need to freshen variables to avoid capture *)
      let gensym = let n = ref 0 in fun () -> incr n; "_" ^ string_of_int !n

      (mistaking distinct variables with the same

      Chapter 11

          -> wrap (App (l1', l2')))      (* Wrap into the completed language. *)
        | Some (Abs (s, l1)) ->
          let s' = gensym () in          (* Rename variable to avoid capture (α-equivalence). *)
          wrap (Abs (s', eval_rec ((s, wrap (VarL s')) :: subst) l1))
        | None -> e                      (* Falling-through when not in the current sub-language. *)

      type lambda_t = Lambda_t of lambda_t lambda   (* Defining λ-expressions as the completed language, *)
      let rec eval1 subst =                         (* and the corresponding eval function. *)
        eval_lambda eval1
          (fun e -> Lambda_t e) (fun (Lambda_t e) -> Some e) subst

      type 'a expr =
      The

      Chapter 11

      arithmetic expressions as the completed language,

      let rec eval2 subst =              (* aka. "tying the recursive knot". *)
        eval_expr eval2
          (fun e -> Expr_t e) (fun (Expr_t e) -> Some e) subst

      type 'a lexpr =                    (* The language merging λ-expressions and arithmetic expressions, *)
        | Lambda of 'a lambda
        | Expr of 'a expr

      can also be used as a sub-language for further

      Chapter 11

      • Exceptions have always formed an extensible variant type in OCaml, whose pattern matching is done using the try…with syntax. More recently, new extensible variant types can be defined. This augments the normal function extensibility of FP with straightforward data extensibility.
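      A minimal sketch of this newer mechanism, with illustrative names (not the lecture's code):

      type expr = ..                            (* an open, extensible variant type *)
      type expr += Num of int
      type expr += Add of expr * expr           (* other modules can keep adding cases *)

      let rec eval = function
        | Num n -> n
        | Add (a, b) -> eval a + eval b
        | _ -> failwith "unhandled constructor" (* exhaustiveness can no longer be checked *)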
      • Non-solution penalty points:
        • Giving up exhaustivity checking, which is an important aspect of @@ -10301,12 +10295,12 @@

          Chapter 11

        type varname = string
        let gensym = let n = ref 0 in fun () -> incr n; "_" ^ string_of_int !n

        class virtual ['lang] evaluable =  (* The abstract class for objects supporting the eval method. *)
        object                             (* For λ-calculus, we need helper functions: *)
          method virtual eval : (varname * 'lang) list -> 'lang
          method virtual rename : varname -> varname -> 'lang    (* renaming of free variables, *)
          method apply (arg : 'lang)       (* β-reduction if possible (fallback otherwise). *)
              (fallback : unit -> 'lang) (subst : (varname * 'lang) list) =
            fallback ()
        end

        class ['lang] var (v : varname) =
        object (self)                      (* We name the current object self. *)
          inherit ['lang] evaluable
          val v = v
          method eval subst =
            try List.assoc v subst with Not_found -> self
          method rename v1 v2 =
        Renaming

        Chapter 11

        variable:
            if v = v1 then {< v = v2 >} else self   (* we clone the current object putting the new name. *)
        end

        class ['lang] abs (v : varname) (body : 'lang) =
        object (self)
          inherit ['lang] evaluable
          val v = v
          val body = body
          method eval subst =              (* We do α-conversion prior to evaluation. *)
            let v' = gensym () in          (* Alternatively, we could evaluate with *)
            {< v = v'; body = (body#rename v v')#eval subst >}
        substitution

        Chapter 11

        v = v1.
            else {< body = body#rename v1 v2 >}
          method apply arg _ subst =
            body#eval ((v, arg) :: subst)
        end

        class ['lang] app (f : 'lang) (arg : 'lang) =
        object (self)
          inherit ['lang] evaluable
          val f = f
          val arg = arg
          method eval subst =              (* We use apply to differentiate between f = abs *)
            let arg' = arg#eval subst in   (* (β-redexes) and f ≠ abs. *)
            f#apply arg' (fun () -> {< f = f#eval subst; arg = arg' >}) subst
          method rename v1 v2 =            (* Cloning the object ensures that it will be a subtype of 'lang *)
            {< f = f#rename v1 v2; arg = arg#rename v1 v2 >}
        rather

        Chapter 11

        need
          method compute : int option = None   (* a helper method compute. *)
        end

        class ['lang] var_c v = object     (* To use λ-expressions together with arithmetic expressions *)
          inherit ['lang] var v            (* we need to upgrade them with the helper method. *)
          inherit compute_mixin
        end
        class ['lang] abs_c v body = object
          inherit ['lang] abs v body
          inherit compute_mixin
        end
        class ['lang] app_c f arg = object
          inherit ['lang] app f arg
          inherit compute_mixin
        end
        class ['lang] num (i : int) =
        A

        Chapter 11

        type.

        class ['visitor] abs (v : varname) (body : 'visitor visitable) =
        object (self)
          method v = v
          method body = body
          method accept : 'visitor -> unit =
            fun visitor -> visitor#visitAbs self
        end
        let new_abs v body = (new abs v body :> 'a visitable)

        class ['visitor] app (f : 'visitor visitable) (arg : 'visitor visitable) =
        object (self)
          method f = f
          method arg = arg
          method accept : 'visitor -> unit =
            fun visitor -> visitor#visitApp self
        end
        let new_app f arg = (new app f arg :> 'a visitable)

        class virtual ['visitor] lambda_visit =   (* This abstract class has two uses: *)
        object                                    (* it defines the visitors for the sub-language of λ-expressions, *)
          method virtual visitVar : 'visitor var -> unit   (* and it will provide an early check *)
          method virtual visitAbs : 'visitor abs -> unit
        that the visitor

        Chapter 11

        use result as an accumulator.
          inherit ['visitor] lambda_visit
          method visitVar var =
            result := var#v :: !result
          method visitAbs abs =
            (abs#body)#accept self;
            result := List.filter (fun v' -> v' <> abs#v) !result
          method visitApp app =
            app#arg#accept self; app#f#accept self
        end

        type lambda_visit_t = lambda_visit_t lambda_visit   (* Visitor for the language of λ-expressions. *)
        type lambda_t = lambda_visit_t visitable

        let eval1 (e : lambda_t) subst : lambda_t =
          let result = ref (new_var "") in                   (* This initial value will be ignored. *)
          e#accept (new eval_lambda subst result :> lambda_visit_t);
          !result

        let freevars1 (e : lambda_t) =
          let result = ref [] in
        Initial value of the

        Chapter 11

        branches''.
        object
          inherit ['visitor] eval_expr result
          inherit ['visitor] eval_lambda subst result
        end
        class ['visitor] freevars_lexpr result = object
          inherit ['visitor] freevars_expr result
          inherit ['visitor] freevars_lambda result
        end

        type lexpr_visit_t = lexpr_visit_t lexpr_visit   (* The language combining *)
        type lexpr_t = lexpr_visit_t visitable           (* λ-expressions and arithmetic expressions. *)

        let eval3 (e : lexpr_t) subst : lexpr_t =
          let result = ref (new_num 0) in
          e#accept (new eval_lexpr subst result);
          !result
        let freevars3 (e : lexpr_t) =
          let result = ref [] in
          e#accept (new freevars_lexpr result);
          !result

        let test3 =
          (new_add (new_mult (new_num 3) (new_var "x")) (new_num 1) :> lexpr_t)
        let e_test = eval3 test3 []
        let fv_test = freevars3 test3
        let old_e_test = eval3 (test2 :> lexpr_t) []
        let old_fv_test = eval3 (test2 :> lexpr_t) []

        Polymorphic Variant Types: Subtyping
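        To preview the subtyping at work here, a minimal illustration with polymorphic variants (illustrative names, not the lecture's code): a term of the smaller language coerces into the merged language.

        type lam  = [ `Var of string | `Abs of string * lam | `App of lam * lam ]
        type lexp = [ lam | `Num of int | `Add of lexp * lexp ]

        let small : lam = `Abs ("x", `Var "x")
        let large = (small :> lexp)        (* width subtyping: every lam is an lexp *)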

          @@ -10479,9 +10473,9 @@

          Chapter 11

          • The need to “tie the recursive knot” separately both at the type level and the function level. At the function level, an η-expansion is required due to the value recursion problem. At the type level, the extra type variable can be confusing.
          • There can be a slight time cost compared to the visitor pattern-based approach: additional dispatch at each level of type aggregation (i.e.  merging sub-languages).
          • @@ -10537,24 +10531,21 @@

            Chapter 11

            composing definitions of smaller languages. For example, the combinators of the Extended Backus-Naur Form notation are:

            • concatenation: S = A, B stands for S = \lbrace a b \mid a \in A, b \in B \rbrace,

            • alternation: S = A \mid B stands for S = \lbrace a \mid a \in A \vee a \in B \rbrace,

            • option: S = [A] stands for S = \lbrace \epsilon \rbrace \cup A, where \epsilon is the empty string,

            • repetition: S = \lbrace A \rbrace stands for S = \lbrace \epsilon \rbrace \cup \lbrace a s \mid a \in A, s \in S \rbrace,

            • terminal string: S = "a" stands for S = \lbrace a \rbrace.
          • Parsers implemented directly in a functional programming paradigm are functions from character streams to the parsed values. @@ -10572,13 +10563,13 @@

            Chapter 11

          • Return: val return : 'a -> 'a parser

            • return x parses an empty string, symbolically S = \lbrace \epsilon \rbrace, and returns x.
          • MZero: val fail : 'a parser

            • fail fails to parse anything, symbolically S = \varnothing = \lbrace \rbrace.
            • +class="math inline">S = \varnothing = \lbrace \rbrace.
          • MPlus: either val <|> : 'a parser -> 'a parser -> 'a parser,

            @@ -10606,12 +10597,10 @@

            Chapter 11

            consuming any character can be entered when a parse failure should occur, the cycle will keep repeating.

            • For example, if we define numbers N := D \mid N D, where D stands for digits, then a stack of uses of the rule N \rightarrow N D will build up when the next character is not a digit.
            • On the other hand, rules can share common prefixes.

          Parser Combinators: Implementation
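          As a concrete reference for the interface above, here is a minimal list-of-successes sketch; the char list representation and the helpers item, sat, digit are assumptions for illustration, not the lecture's implementation:

          (* A parser returns every (result, remaining input) pair; [] means failure. *)
          type 'a parser = char list -> ('a * char list) list

          let return x : 'a parser = fun s -> [x, s]          (* parses the empty string *)
          let fail : 'a parser = fun _ -> []                  (* parses nothing at all *)
          let (<|>) (p : 'a parser) (q : 'a parser) : 'a parser =
            fun s -> p s @ q s                                (* alternation *)
          let (>>=) (p : 'a parser) (f : 'a -> 'b parser) : 'b parser =
            fun s -> List.concat (List.map (fun (x, s') -> f x s') (p s))

          let item : char parser = function [] -> [] | c :: s -> [c, s]
          let sat pred = item >>= fun c -> if pred c then return c else fail
          let digit = sat (fun c -> '0' <= c && c <= '9')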
          • @@ -10698,21 +10687,19 @@

            Chapter 11

          • File Plugin1.ml:

          open PluginBase.ParseM
          let digit_of_char d = int_of_char d - int_of_char '0'
          let number =
            let rec num =                          (* Numbers: N := D N | D where D is digits. *)
              lazy ((perform
                       d <-- digit;
                       (n, b) <-- Lazy.force num;
                       return (digit_of_char d * b + n, b * 10))
                    <|> lazy (digit >>= (fun d -> return (digit_of_char d, 10)))) in
            Lazy.force num >>| fst

          let addition lang =                      (* Addition rule: S → (S + S). *)
            perform                                (* Requiring a parenthesis ( turns the rule into non-left-recursive. *)
              literal "("; n1 <-- lang; literal "+"; n2 <-- lang; literal ")";
              return (n1 + n2)

          let () =
            PluginBase.(grammar_rules := number :: addition :: !grammar_rules)

          • File Plugin2.ml:

          open PluginBase.ParseM
          let multiplication lang =
            perform                                (* Multiplication rule: S → (S * S). *)
              literal "("; n1 <-- lang; literal "*"; n2 <-- lang; literal ")";
              return (n1 * n2)

          let () =
            PluginBase.(grammar_rules := multiplication :: !grammar_rules)

          The Expression Problem

          @@ -10774,7 +10761,7 @@

          Chapter 11

          Exercise 10: Streamline the solution PolyRecM.ml by extending the language of λ-expressions with arithmetic expressions, rather than defining the sub-languages separately and then merging them. See the slide on page 15 of Jacques Garrigue's Structural Types, Recursive Modules, and the Expression Problem.

          @@ -10870,35 +10857,31 @@

          Exam set 1

          lists or trees).

          Exercise 11: (Purple.) One way to express constraints on a polymorphic function is to write its type in the form: \forall \alpha_1 \ldots \alpha_n [C] . \tau, where \tau is the type of the function, \alpha_1 \ldots \alpha_n are the polymorphic type variables, and C are additional constraints that the variables \alpha_1 \ldots \alpha_n have to meet. Let's say we allow "local variables" in C: for example C = \exists \beta . \alpha_1 \dot{=} \operatorname{list} (\beta). Why is the general form \forall \beta [C] . \beta enough to express all types of the general form \forall \alpha_1 \ldots \alpha_n [C] . \tau?
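          For instance (an illustration of the notation only, not part of the exercise): the ordinary scheme \forall \alpha .\ \alpha \rightarrow \alpha can be rewritten with a single quantified variable as \forall \beta\, [\exists \alpha .\ \beta \dot{=} \alpha \rightarrow \alpha] .\ \beta.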

          Exercise 12: (Purple.) Define a type that corresponds to a set with a googolplex of elements (i.e. 10^{10^{100}} elements).

          Exercise 13: (Red.) In a height-balanced binary tree, the following property holds for every node: the height of its left subtree and the height of its right subtree are almost equal, which means their difference is not greater than one. Consider a height-balanced binary tree of height h. What is the maximum number of nodes it can contain? Clearly, \operatorname{maxN} = 2^h - 1. However, finding the minimum number \operatorname{minN} is more difficult.

          Construct all the height-balanced binary trees with a given number of nodes. hbal_tree_nodes n returns a list of all height-balanced binary trees with n nodes.
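          As a hint only (not part of the original exercise statement): because the sparser subtree of a height-balanced tree may be one level shorter, \operatorname{minN} satisfies a Fibonacci-like recurrence; a small sketch:

          (* Minimum number of nodes of a height-balanced tree of height h. *)
          let rec min_nodes h =
            if h <= 0 then 0
            else if h = 1 then 1
            else 1 + min_nodes (h - 1) + min_nodes (h - 2)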

          @@ -10984,12 +10967,10 @@

          Exam set 1

          fold_left in terms of fold_right. Hint: continuation passing style.

          Exercise 11: (Purple.) Show why, for a monomorphic specification, if data structures d_1 and d_2 have the same behavior under all operations, then they have the same representation d_1 = d_2 in all implementations.

          Exercise 12: (Purple.) append for lazy lists returns in constant time. Where has its linear-time complexity gone? Explain how you would account for this in a time complexity @@ -11051,18 +11032,16 @@

          Exam set 3

          elements. The required sizes of subsets are given as a list of numbers.

          Exercise 6: (Yellow.) A complete binary tree with height H is defined as follows: the levels 1, 2, 3, \ldots, H - 1 contain the maximum number of nodes (i.e. 2^{i-1} at the level i; note that we start counting the levels from 1 at the root). In level H, which may contain less than the maximum possible number of nodes, all the nodes are "left-adjusted". This means that in a level-order tree traversal all internal nodes come first, the leaves come second, and empty successors (the nil's, which are not really nodes!) come last.

          We can assign an address number to each node in a complete binary tree by enumerating the nodes in levelorder, starting at the root with number 1. In doing so, we realize that for every node X with address A @@ -11102,11 +11081,9 @@

          Exam set 3

          Exercise 10: (Orange.) Write a function that transposes a rectangular matrix represented as a list of lists.

          Exercise 11: (Purple.) Find the bijective functions between the types corresponding to a (a^b + c) and a^{b+1} + ac (in OCaml).

          Exercise 12: (Purple.) Show the monad-plus laws for OptionM monad.

          Exercise 13: (Red.) As a preparation for drawing the @@ -11167,9 +11144,9 @@

          Exam set 3

          left child is at least as large as the rank of its right sibling. The rank of a node is defined to be the length of its right spine, i.e. the rightmost path from the node in question to an empty node. Implement O(\log n) worst-case time complexity Priority Queues based on leftist heaps. Each node of the tree should contain its rank.

          Note that the elements along any path through a heap-ordered tree are stored in sorted order. The key insight behind leftist heaps is that two heaps can be merged by merging their right spines as you would merge