Commit 7365a56: fix: old links

kings177 committed Feb 7, 2024
1 parent aa4d241 commit 7365a56

Showing 5 changed files with 11 additions and 11 deletions.
2 changes: 1 addition & 1 deletion BUILDING.md

@@ -4,7 +4,7 @@ Building
Clone the repo:

```sh
-git clone https://github.com/Kindelia/HVM.git
+git clone https://github.com/HigherOrderCO/HVM.git
cd HVM
```

2 changes: 1 addition & 1 deletion NIX.md

@@ -6,7 +6,7 @@ Usage (Nix)
[Install Nix](https://nixos.org/manual/nix/stable/installation/installation.html) and enable [Flakes](https://nixos.wiki/wiki/Flakes#Enable_flakes) then, in a shell, run:

```sh
-git clone https://github.com/Kindelia/HVM.git
+git clone https://github.com/HigherOrderCO/HVM.git
cd HVM
# Start a shell that has the `hvm` command without installing it.
nix shell .#hvm
10 changes: 5 additions & 5 deletions README.md

@@ -242,7 +242,7 @@ purpose is to show yet another important advantage of HVM: beta-optimality. This
λ-encoded numbers **exponentially faster** than GHC, since it can deal with very higher-order programs with optimal
asymptotics, while GHC can not. As esoteric as this technique may look, it can actually be very useful to design
efficient functional algorithms. One application, for example, is to implement [runtime
-deforestation](https://github.com/Kindelia/HVM/issues/167#issuecomment-1314665474) for immutable datatypes. In general,
+deforestation](https://github.com/HigherOrderCO/HVM/issues/167#issuecomment-1314665474) for immutable datatypes. In general,
HVM is capable of applying any fusible function `2^n` times in linear time, which sounds impossible, but is indeed true.

*Charts made on [plotly.com](https://chart-studio.plotly.com/).*
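The `2^n` claim in the context above can be sketched as code. The following is an illustrative sketch, not from the repository (`applyPow2` is a made-up name): `n` self-compositions build a term that applies `f` exactly `2^n` times. GHC performs all `2^n` calls; optimal sharing is what would let such a term normalize in time linear in `n`.

```haskell
-- Hypothetical sketch (not from the HVM repo): `applyPow2 n f`
-- applies `f` exactly 2^n times, yet the term is built from only
-- n self-compositions. Under GHC all 2^n calls happen one by one;
-- under optimal sharing the n compositions can be fused.
applyPow2 :: Int -> (a -> a) -> (a -> a)
applyPow2 0 f = f
applyPow2 n f = let g = applyPow2 (n - 1) f in g . g

main :: IO ()
main = print (applyPow2 10 (+ 1) 0) -- (+1) applied 2^10 times: 1024
```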
@@ -276,7 +276,7 @@ More Information

- To learn more about the **underlying tech**, check [guide/HOW.md](guide/HOW.md).

-- To ask questions and **join our community**, check our [Discord Server](https://discord.gg/kindelia).
+- To ask questions and **join our community**, check our [Discord Server](https://discord.higherorderco.com).

- To **contact the author** directly, send an email to <[email protected]>.

@@ -416,13 +416,13 @@ let f = (2 + x) in [λx. f, λx. f]

The solution to that question is the main insight that the Interaction Net model
brought to the table, and it is described in more details on the
-[HOW.md](https://github.com/Kindelia/HVM/blob/master/guide/HOW.md) document.
+[HOW.md](https://github.com/HigherOrderCO/HVM/blob/master/guide/HOW.md) document.

### Is HVM always *asymptotically* faster than GHC?

No. In most common cases, it will have the same asymptotics. In some cases, it
is exponentially faster. In [this
-issue](https://github.com/Kindelia/HVM/issues/60), a user noticed that HVM
+issue](https://github.com/HigherOrderCO/HVM/issues/60), a user noticed that HVM
displays quadratic asymptotics for certain functions that GHC computes in linear
time. That was a surprise to me, and, as far as I can tell, despite the
"optimal" brand, seems to be a limitation of the underlying theory. That said,
@@ -458,7 +458,7 @@ foldr (.) id funcs :: [Int -> Int]

GHC won't be able to "fuse" the functions on the `funcs` list, since they're not
known at compile time. HVM will do that just fine. See [this
-issue](https://github.com/Kindelia/HVM/issues/167) for a practical example.
+issue](https://github.com/HigherOrderCO/HVM/issues/167) for a practical example.
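The situation this paragraph describes can be sketched as follows (an illustrative example; `pipeline` and `compose` are assumed names, only `foldr (.) id` comes from the README text): the list of functions exists only at runtime, so GHC compiles the composition into a chain of closure calls it cannot fuse.

```haskell
-- Illustrative sketch: a pipeline assembled at runtime. GHC cannot
-- fuse these closures at compile time; each use of `compose pipeline`
-- walks the whole chain. (`pipeline` is a made-up example list.)
pipeline :: [Int -> Int]
pipeline = [(+ 1), (* 2), subtract 3]

-- Note: the composed result has type Int -> Int.
compose :: [Int -> Int] -> (Int -> Int)
compose = foldr (.) id

main :: IO ()
main = print (compose pipeline 10) -- ((10 - 3) * 2) + 1 = 15
```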

Another practical application for λ-encodings is for monads. On Haskell, the
Free Monad library uses Church encodings as an important optimization. Without
4 changes: 2 additions & 2 deletions guide/HOW.md

@@ -38,7 +38,7 @@ exist in one place greatly simplifies parallelism.
This was all known and possible since years ago (see other implementations of
optimal reduction), but all implementations of this algorithm, until now,
represented terms as graphs. This demanded a lot of pointer indirection, making
-it slow in practice. A new memory format, based on the [Interaction Calculus](https://github.com/VictorTaelin/Symmetric-Interaction-Calculus),
+it slow in practice. A new memory format, based on the [Interaction Calculus](https://github.com/VictorTaelin/Interaction-Calculus),
takes advantage of the fact that inputs are known to be λ-terms, allowing for a
50% lower memory usage, and letting us avoid several impossible cases. This
made the runtime 50x (!) faster, which finally allowed it to compete with GHC
@@ -126,7 +126,7 @@ having incremented each number in `list` by 1. Notes:

- You may write `@` instead of `λ`.

-- Check [this](https://github.com/Kindelia/HVM/issues/64#issuecomment-1030688993) issue about how constructors, applications and currying work.
+- Check [this](https://github.com/HigherOrderCO/HVM/issues/64#issuecomment-1030688993) issue about how constructors, applications and currying work.

What makes it fast
==================
4 changes: 2 additions & 2 deletions guide/README.md

@@ -463,9 +463,9 @@ hvm::runtime::eval(file, term, funs, size, tids, dbug);

*To learn how to design the `apply` function, first learn HVM's memory model
(documented on
-[runtime/base/memory.rs](https://github.com/Kindelia/HVM/blob/master/src/runtime/base/memory.rs)),
+[runtime/base/memory.rs](https://github.com/HigherOrderCO/HVM/blob/master/src/runtime/base/memory.rs)),
and then consult some of the precompiled IO functions
-[here](https://github.com/Kindelia/HVM/blob/master/src/runtime/base/precomp.rs).
+[here](https://github.com/HigherOrderCO/HVM/blob/master/src/runtime/base/precomp.rs).
You can also use this API to extend HVM with new compute primitives, but to make
this efficient, you'll need to use the `visit` function too. You can see some
examples by compiling a `.hvm` file to Rust, and then checking the `precomp.rs`