
Commit

Merge branch 'master' of github.com:higherorderco/hvm
VictorTaelin committed Feb 22, 2024
2 parents d192d4b + 6a01935 commit 9420e05
Showing 5 changed files with 14 additions and 14 deletions.
2 changes: 1 addition & 1 deletion BUILDING.md
@@ -4,7 +4,7 @@ Building
Clone the repo:

```sh
-git clone https://github.com/Kindelia/HVM.git
+git clone https://github.com/HigherOrderCO/HVM.git
cd HVM
```

2 changes: 1 addition & 1 deletion NIX.md
@@ -6,7 +6,7 @@ Usage (Nix)
[Install Nix](https://nixos.org/manual/nix/stable/installation/installation.html) and enable [Flakes](https://nixos.wiki/wiki/Flakes#Enable_flakes); then, in a shell, run:

```sh
-git clone https://github.com/Kindelia/HVM.git
+git clone https://github.com/HigherOrderCO/HVM.git
cd HVM
# Start a shell that has the `hvm` command without installing it.
nix shell .#hvm
16 changes: 8 additions & 8 deletions README.md
@@ -81,7 +81,7 @@ On this example, we run a simple, recursive [Bubble Sort](https://en.wikipedia.o
(Haskell's compiler). Notice the algorithms are identical. The chart shows how much time each runtime took to sort a
list of a given size (lower is better). The purple line shows GHC (single-thread), the green lines show HVM (1, 2, 4
and 8 threads). As you can see, both perform similarly, with HVM having a small edge. Sadly, here, its performance
-doesn't improve with added cores. That's because Bubble Sort is an *inherently sequential* algorithm, so HVM can't
+doesn't improve with added cores. That's because this implementation of Bubble Sort is *inherently sequential*, so HVM can't
improve it.
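
The sort being benchmarked has roughly the following shape (a sketch in Haskell with illustrative names; the exact code lives in the repository's examples). Every `insert` must wait for the fully sorted tail, so the work forms one long dependency chain that extra threads cannot split:

```haskell
-- Sketch of the recursive sort from the benchmark (assumed shape).
-- `sort` can only insert `x` after the tail is completely sorted,
-- so each step depends on the previous one: inherently sequential.
sort :: [Int] -> [Int]
sort []       = []
sort (x : xs) = insert x (sort xs)

-- Bubble `v` through an already-sorted list.
insert :: Int -> [Int] -> [Int]
insert v [] = [v]
insert v (x : xs)
  | v > x     = x : insert v xs
  | otherwise = v : x : xs

main :: IO ()
main = print (sort [4, 2, 3, 1])   -- [1,2,3,4]
```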

Radix Sort
@@ -242,7 +242,7 @@ purpose is to show yet another important advantage of HVM: beta-optimality. This
λ-encoded numbers **exponentially faster** than GHC, since it can evaluate heavily higher-order programs with optimal
asymptotics, while GHC cannot. As esoteric as this technique may look, it can be very useful for designing
efficient functional algorithms. One application, for example, is to implement [runtime
-deforestation](https://github.com/Kindelia/HVM/issues/167#issuecomment-1314665474) for immutable datatypes. In general,
+deforestation](https://github.com/HigherOrderCO/HVM/issues/167#issuecomment-1314665474) for immutable datatypes. In general,
HVM is capable of applying any fusible function `2^n` times in linear time, which sounds impossible, but is indeed true.
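
To make that concrete, here is a minimal sketch of the construction in Haskell (`compPow` is an illustrative helper, not from the repo): squaring a function `n` times yields a term that applies it `2^n` times, yet only `n` compositions are ever written down. Whether evaluating it costs `n` or `2^n` steps depends on how much the runtime can share and fuse:

```haskell
-- compPow n f: the function f composed with itself 2^n times,
-- built with only n squaring steps. Under optimal reduction a
-- fusible f keeps the cost proportional to n; a conventional
-- evaluator performs all 2^n applications.
compPow :: Int -> (a -> a) -> (a -> a)
compPow 0 f = f
compPow n f = compPow (n - 1) (f . f)

main :: IO ()
main = print (compPow 3 (+ 1) (0 :: Int))   -- 2^3 = 8 applications, prints 8
```

Applied to λ-encoded data, terms of exactly this shape are where HVM's exponential advantage over GHC shows up.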

*Charts made on [plotly.com](https://chart-studio.plotly.com/).*
@@ -276,7 +276,7 @@ More Information
- To learn more about the **underlying tech**, check [guide/HOW.md](guide/HOW.md).
-- To ask questions and **join our community**, check our [Discord Server](https://discord.gg/kindelia).
+- To ask questions and **join our community**, check our [Discord Server](https://discord.higherorderco.com).
- To **contact the author** directly, send an email to <[email protected]>.
@@ -323,7 +323,7 @@ benchmarks are NOT claiming that HVM is faster than GHC today.
### Does HVM support the full λ-Calculus, or System-F?
Not yet! HVM is an implementation of the bookkeeping-free version of the
-reduction algorithm proposed on [TOIOFPL](https://www.researchgate.net/publication/235778993_The_optimal_implementation_of_functional_programming_languages)
+reduction algorithm proposed on [TOIOFPL](https://www.cambridge.org/us/universitypress/subjects/computer-science/programming-languages-and-applied-logic/optimal-implementation-functional-programming-languages)
book, up to page 40. As such, it doesn't support some λ-terms, such as:
```
…
```

@@ -342,7 +342,7 @@ and recursion.
### Will HVM support the full λ-Calculus, or System-F?
Yes! We plan to, by implementing the full algorithm described in the
-[TOIOFPL](https://www.researchgate.net/publication/235778993_The_optimal_implementation_of_functional_programming_languages),
+[TOIOFPL](https://www.cambridge.org/us/universitypress/subjects/computer-science/programming-languages-and-applied-logic/optimal-implementation-functional-programming-languages),
i.e., after page 40. Sadly, this results in an overhead that affects
the performance of beta-reduction by about 10x. As such, we want to
do so with caution to keep HVM efficient. Currently, the plan is:
@@ -416,13 +416,13 @@ let f = (2 + x) in [λx. f, λx. f]
The solution to that question is the main insight that the Interaction Net model
brought to the table, and it is described in more detail in the
-[HOW.md](https://github.com/Kindelia/HVM/blob/master/guide/HOW.md) document.
+[HOW.md](https://github.com/HigherOrderCO/HVM/blob/master/guide/HOW.md) document.
### Is HVM always *asymptotically* faster than GHC?
No. In most common cases, it will have the same asymptotics. In some cases, it
is exponentially faster. In [this
-issue](https://github.com/Kindelia/HVM/issues/60), a user noticed that HVM
+issue](https://github.com/HigherOrderCO/HVM/issues/60), a user noticed that HVM
displays quadratic asymptotics for certain functions that GHC computes in linear
time. That was a surprise to me and, as far as I can tell, despite the
"optimal" brand, it seems to be a limitation of the underlying theory. That said,
@@ -458,7 +458,7 @@ foldr (.) id funcs :: [Int -> Int]
GHC won't be able to "fuse" the functions on the `funcs` list, since they're not
known at compile time. HVM will do that just fine. See [this
-issue](https://github.com/Kindelia/HVM/issues/167) for a practical example.
+issue](https://github.com/HigherOrderCO/HVM/issues/167) for a practical example.
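
As a minimal illustration in Haskell (`funcs` below is a toy stand-in; in the practical case the list is only produced at runtime):

```haskell
-- A pipeline of opaque closures. GHC's `composed` calls all 1000
-- of them on every use; a runtime with optimal sharing can fuse
-- the chain into a single pass.
funcs :: [Int -> Int]
funcs = map (+) [1 .. 1000]

composed :: Int -> Int
composed = foldr (.) id funcs   -- composed x == x + sum [1 .. 1000]

main :: IO ()
main = print (composed 0)       -- 500500
```
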
Another practical application of λ-encodings is monads. In Haskell, the
Free Monad library uses Church encodings as an important optimization. Without
4 changes: 2 additions & 2 deletions guide/HOW.md
@@ -38,7 +38,7 @@ exist in one place greatly simplifies parallelism.
This has all been known and possible for years (see other implementations of
optimal reduction), but, until now, all implementations of this algorithm
represented terms as graphs. This demanded a lot of pointer indirection, making
-it slow in practice. A new memory format, based on the [Interaction Calculus](https://github.com/VictorTaelin/Symmetric-Interaction-Calculus),
+it slow in practice. A new memory format, based on the [Interaction Calculus](https://github.com/VictorTaelin/Interaction-Calculus),
takes advantage of the fact that inputs are known to be λ-terms, allowing for
50% lower memory usage and letting us avoid several impossible cases. This
made the runtime 50x (!) faster, which finally allowed it to compete with GHC
@@ -126,7 +126,7 @@ having incremented each number in `list` by 1. Notes:

- You may write `@` instead of `λ`.

-- Check [this](https://github.com/Kindelia/HVM/issues/64#issuecomment-1030688993) issue about how constructors, applications and currying work.
+- Check [this](https://github.com/HigherOrderCO/HVM/issues/64#issuecomment-1030688993) issue about how constructors, applications and currying work.

What makes it fast
==================
4 changes: 2 additions & 2 deletions guide/README.md
@@ -463,9 +463,9 @@ hvm::runtime::eval(file, term, funs, size, tids, dbug);

*To learn how to design the `apply` function, first learn HVM's memory model
(documented in
-[runtime/base/memory.rs](https://github.com/Kindelia/HVM/blob/master/src/runtime/base/memory.rs)),
+[runtime/base/memory.rs](https://github.com/HigherOrderCO/HVM/blob/master/src/runtime/base/memory.rs)),
and then consult some of the precompiled IO functions
-[here](https://github.com/Kindelia/HVM/blob/master/src/runtime/base/precomp.rs).
+[here](https://github.com/HigherOrderCO/HVM/blob/master/src/runtime/base/precomp.rs).
You can also use this API to extend HVM with new compute primitives, but to make
this efficient, you'll need to use the `visit` function too. You can see some
examples by compiling a `.hvm` file to Rust, and then checking the `precomp.rs`
