Merge pull request #210 from ethersphere/editorial
Initial editorial review
significance authored Jun 25, 2021
2 parents 68590e2 + 25f946a commit 5db6b36
Showing 37 changed files with 996 additions and 466 deletions.
22 changes: 11 additions & 11 deletions docs/access-the-swarm/host-your-website.md
@@ -4,7 +4,7 @@ id: host-your-website
---

:::tip
Comfortable with nodeJS and javascript? Check out [swarm-cli](/docs/working-with-bee/bee-tools), a command line tool you can use to easily interact with your Bee node!
Comfortable with nodeJS and JavaScript? Check out [swarm-cli](/docs/working-with-bee/bee-tools), a command line tool you can use to easily interact with your Bee node!
:::

Bee treats ENS as a first-class citizen: wherever you can use a Swarm reference, you can also use an ENS domain whose `content` ENS Resolver record is set to a `bzz://` reference.
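
For example, anywhere the API accepts a hex Swarm reference, an ENS name works too (a sketch, assuming your node has an ENS resolver configured as described below):

```bash
# Fetch a site by ENS name instead of a raw Swarm reference
curl http://localhost:1633/bzz/swarm.eth/
```
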
@@ -21,7 +21,7 @@ In order to resolve ENS names using your API endpoints, you must specify a valid
bee start --resolver-options "https://cloudflare-eth.com"
```

If specifying using your `bee.yaml` configuration file, the syntax is as follows.
If specifying using your `bee.yaml` configuration file, the syntax is as follows:

```bash
resolver-options: [ "https://cloudflare-eth.com" ]
@@ -32,7 +32,7 @@ Once you have restarted your node, you should be able to see the Swarm homepage
[http://localhost:1633/bzz/swarm.eth/](http://localhost:1633/bzz/swarm.eth/)

:::info
Use the `resolver-options` flag to point the bee resolver to any ENS compatible smart-contract on any EVM compatible chain
Use the `resolver-options` flag to point the Bee resolver to any ENS-compatible smart contract on any EVM-compatible chain.
:::

:::warning
@@ -49,7 +49,7 @@ for more information.

This time we will also include the `Swarm-Index-Document` header, set to `index.html`. This will cause Bee to serve each directory's `index.html` file by default when browsing to the directory root `/` URL. We will also provide a custom error page, using the `Swarm-Error-Document` header.

In the case that your website is a single page app, where you would like to direct to the javascript history api powered router, you may provide the `index.html` page for both settings.
If your website is a single-page app, where you would like requests directed to the JavaScript history-API-powered router, you may provide the `index.html` page for both settings.

```bash
curl \
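  -H "Content-Type: application/x-tar" \
  -H "Swarm-Index-Document: index.html" \
  -H "Swarm-Error-Document: error.html" \
  -H "Swarm-Collection: true" \
  -H "Swarm-Postage-Batch-Id: 78a26be9b42317fe6f0cbea3e47cbd0cf34f533db4e9c91cf92be40eb2968264" \
  --data-binary @website.tar \
  http://localhost:1633/bzz
# NB: everything after "curl \" above is a sketch of how such an upload might
# be completed; website.tar is an assumed tar archive of your site, and the
# batch id is the example used elsewhere in these docs. The original (folded)
# command may differ.
```
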
@@ -76,25 +76,25 @@ Press 'Set' next to your resolver record.

![alt text](/img/ens-1.png "Press set resolver.")

Choose the public resolver.
Select 'Use Public Resolver'.

![alt text](/img/ens-2.png "Choose the public resolver.")
![alt text](/img/ens-2.png "Use Public Resolver.")

Press add a record.
Select '+' to add a record.

![alt text](/img/ens-3.png "Press add a record.")

Choose the Content Record type from the drop down menu.
Choose the 'Content' record type from the drop down menu.

![alt text](/img/ens-4.png "Choose the content record type from the drop down menu.")

Add the Swarm reference you created earlier and press 'save'.
Add the Swarm reference you created earlier and press 'Save'.

![alt text](/img/ens-5.png "Add the Swarm reference you created earlier and press 'save'.")
![alt text](/img/ens-5.png "Add the Swarm reference you created earlier and press 'Save'.")

Verify the Content Record has been created!

![alt text](/img/ens-6.png "Add the Swarm reference you created earlier.")
![alt text](/img/ens-6.png "Verify the Content Record has been created.")

Done! 👏

8 changes: 6 additions & 2 deletions docs/access-the-swarm/introduction.md
@@ -3,7 +3,11 @@ title: Introduction
id: introduction
---

As well as earning BZZ and supporting the network, Bee it our entrypoint to Swarm's unstoppable data storage and distribution system. Swarm is a distributed system designed to work with toXDAIer with smart contracts to enable the development and infrastructure for full service applications running entirely on the decentralised web.
As well as earning BZZ and supporting the network, Bee is our
entrypoint to Swarm's unstoppable data storage and distribution
system. Swarm is a distributed system designed to work together with
smart contracts, powering the infrastructure that enables the development of full
service applications running entirely on the decentralised web.

Here are just a few of the amazing things you can do with Bee!

@@ -17,7 +21,7 @@ Find out how to [upload whole directories](/docs/access-the-swarm/upload-a-direc

### Host Your Website on the Decentralised Web

Swarm is an distributed international network of nodes that provides hosting for your unstoppable websites. See this guide to [hosting your website on swarm](/docs/access-the-swarm/host-your-website)
Swarm is a distributed international network of nodes that provides hosting for your unstoppable websites. See this guide to [hosting your website on swarm](/docs/access-the-swarm/host-your-website).

### Sync With the Network

22 changes: 16 additions & 6 deletions docs/access-the-swarm/keep-your-data-alive.md
@@ -3,21 +3,31 @@ title: Keep Your Data Alive
id: keep-your-data-alive
---

The swarm comprises the sum total of all storage space provided by all of our nodes, called the DISC (Distributed Immutable Store of Chunks). The *right to write* data into this distributed store is determined by the postage stamps that have attached.
The swarm comprises the sum total of all storage space provided by all of our nodes, called the DISC (Distributed Immutable Store of Chunks). The *right to write* data into this distributed store is determined by the postage stamps that have been attached.

### Fund Your Node's Wallet

To start up your node, you will already have provided your node with XDAI for gas and BZZ which was transferred into your chequebook when your node was initialised and will be used to interact with other nodes using the *SWAP* protocol. In order to access more funds to buy batches of stamps, your wallet must be funded with BZZ. The easiest way to acheive this is to withdrawal funds from your chequebook.
To start up your node, you will already have provided your node with
XDAI for gas and BZZ which was transferred into your chequebook when
your node was initialised. This will be used to interact with other
nodes using the *SWAP* protocol. In order to access more funds to buy
batches of stamps, your wallet must be funded with BZZ. The easiest
way to achieve this is to withdraw funds from your chequebook:

```bash
curl -XPOST "http://localhost:1635/chequebook/withdraw?amount=1000"
```

## Purchase a Batch of Stamps

Stamps are created in batches, so that storer nodes may calculate the validity of a chunk's stamp with only local knowledge. This enables the privacy you enjoy in the swarm.
Stamps are created in batches, so that storer nodes may calculate the
validity of a chunk's stamp with only local knowledge. This enables
the privacy you enjoy in the swarm.

Stamp batches are created in *buckets* with *depth* 16. The entire swarm *address space* is divided into into 2^16 = 65,536 different buckets. When uploaded, each of your file's are split into 4kb chunks is assigned to a specific bucket based on its address.
Stamp batches are created in *buckets* with a *depth* of 16. The entire
swarm *address space* is divided into 2^16 = 65,536 different
buckets. When uploaded, each of your files is split into 4kB chunks,
and each chunk is assigned to a specific bucket based on its address.
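
The bucketing arithmetic is simple. A quick sketch, assuming the bucket index is just the first 16 bits (the first 4 hex characters) of the chunk address:

```bash
# Which of the 2^16 = 65,536 buckets does this chunk land in?
addr="1bfe7c3ce4100ae7f02b62e38d3e8d4c3a86ea368349614a87827402f20cbb30"
bucket_hex=${addr:0:4}                      # first 16 bits of the address
printf 'bucket %d of 65536\n' "$((16#$bucket_hex))"
```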

When creating a batch you must specify two values: *batch depth* and *amount*.

@@ -41,7 +51,7 @@ The *amount* you specify will determine the amount of time your chunks live in t

For now, we suggest you specify depth 20 and amount 10000000 for your
batches. This should be ample to upload quite some data, and to keep
your files in the Swarm for the forseeable future.
your files in the swarm for the foreseeable future.
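
To get a feel for the cost, note that a batch pays its *amount* for each of its 2^depth chunk slots. A back-of-the-envelope sketch (assuming cost = amount × 2^depth in PLUR, with 1 BZZ = 10^16 PLUR):

```bash
# Approximate cost in BZZ of a depth-20, amount-10000000 batch
amount=10000000; depth=20
echo "scale=6; $amount * 2^$depth / 10^16" | bc   # ≈ 0.001049 BZZ
```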

:::warning
When you purchase a batch of stamps, you agree to burn BZZ. Although your 'balance' slowly decrements as time goes on, there is no way to withdraw BZZ from a batch. This is an outcome of Swarm's decentralised design; to read more about how the swarm fits together, read <a href="/the-book-of-swarm.pdf" target="_blank" rel="noopener noreferrer">The Book of Swarm</a>.
@@ -52,7 +62,7 @@ curl -s -XPOST http://localhost:1633/stamps/10000000/20
```

:::info
Once your batch has been purchased, it will take a few minutes for other Bee nodes in the swarm to catch up and register your batch. Allow some time for your batch to propogate to the network before proceeding to the next step.
Once your batch has been purchased, it will take a few minutes for other Bee nodes in the Swarm to catch up and register your batch. Allow some time for your batch to propagate in the network before proceeding to the next step.
:::
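
You can keep an eye on your batch while you wait by listing the batches your node knows about (a sketch, assuming your Bee version exposes the `GET /stamps` endpoint):

```bash
curl -s http://localhost:1633/stamps | jq
```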

Look out for more ways to accurately estimate the correct size of your batch, coming soon!
8 changes: 6 additions & 2 deletions docs/access-the-swarm/light-nodes.md
@@ -4,7 +4,7 @@ id: light-nodes
---

:::danger
When a light node is requesting data from the network - it will not benefit from plausible deniability. This is because a light node does not forward on behalf of other nodes, and so it is always the originator of the request.
When a light node is requesting data from the network, it will not benefit from plausible deniability. This is because a light node does not forward on behalf of other nodes, and so it is always the *originator* of the request.
:::

#### Configuration
@@ -15,7 +15,11 @@ In order to configure light node mode, do not disable light mode in your Bee con

At present, light mode represents a pragmatic and elegant approach to improving network stability, reliability and resilience.
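
If you want to run in light mode, a minimal configuration sketch might look like the following; the `full-node` option is an assumption based on Bee 1.0's configuration, so check `bee start --help` for your version:

```bash
# bee.yaml
full-node: false   # consume services only; don't forward or store for others
```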

In general, *light mode* may be thought of as simply not participating in the activity of forwarding or storing chunks for other members of the swarm, these nodes are strictly consumers, who will pay BZZ in return for services rendered by *full nodes* contributing towards moving data around the network.
In general, *light mode* may be thought of as simply not participating
in the activity of forwarding or storing chunks for other members of
the swarm; these nodes are strictly consumers, who will pay BZZ in
return for services rendered by *full nodes*: those contributing
towards moving data around the network.

This means that, although the node will participate in the pull
syncing protocol by filling up its local storage with the chunks
45 changes: 21 additions & 24 deletions docs/access-the-swarm/pinning.md
@@ -3,14 +3,10 @@ title: Pinning
id: pinning
---

Each Bee node is configured to reserve a certain amount of memory on your computer's hard drive to store and serve chunks within their *neighbourhood of responsibility* for other nodes in the Swarm network. Once this alloted space has been filled, each Bee node delete older chunks to make way for newer ones as they are uploaded by the network.
Each Bee node is configured to reserve a certain amount of space on your computer's hard drive to store and serve chunks within its *neighbourhood of responsibility* for other nodes in the Swarm network. Once this allotted space has been filled, each Bee node deletes older chunks to make way for newer ones as they are uploaded to the network.

Each time a chunk is accessed, it is moved back to the end of the deletion queue, so that regularly accessed content stays alive in the network and is not deleted by a node's garbage collection routine.

:::info
In order to upload your data to swarm, you must agree to burn some of your BZZ to signify to storer and fowarder nodes that the content is important. Before you progress to the next step, you must buy stamps! See this guide on how to [purchase an appropriate batch of stamps](/docs/access-the-swarm/keep-your-data-alive).
:::

This, however, presents a problem for content which is important but seldom accessed. In order to keep this content alive, Bee nodes provide a facility to **pin** important content so that it is not deleted.

There are two flavours of pinning: *local* and *global*.
@@ -28,16 +24,16 @@ Files pinned using local pinning will still not necessarily be available to the
To store content so that it will persist even when Bee's garbage collection routine is deleting old chunks, we simply pass the `Swarm-Pin` header set to `true` when uploading.

```bash
curl -H "swarm-pin: true" -H "Swarm-Postage-Batch-Id: 78a26be9b42317fe6f0cbea3e47cbd0cf34f533db4e9c91cf92be40eb2968264" --data-binary @bee.mp4 localhost:1633/bzz\?bee.mp4
curl -H "Swarm-Pin: true" -H "Swarm-Postage-Batch-Id: 78a26be9b42317fe6f0cbea3e47cbd0cf34f533db4e9c91cf92be40eb2968264" --data-binary @bee.mp4 localhost:1633/bzz\?bee.mp4
```

```json
{"reference":"1bfe7c3ce4100ae7f02b62e38d3e8d4c3a86ea368349614a87827402f20cbb30"}
```

### Administrating Pinned Content
### Administer Pinned Content

To check what content is currently pinned on your node, query the `pins` endpoint of your Bee API.
To check what content is currently pinned on your node, query the `pins` endpoint of your Bee API:

```bash
curl localhost:1633/pins
@@ -47,7 +43,7 @@ curl localhost:1633/pins
{"references":["1bfe7c3ce4100ae7f02b62e38d3e8d4c3a86ea368349614a87827402f20cbb30"]}
```

or, to check for specific references
or, to check for specific references:

```bash
curl localhost:1633/pins/1bfe7c3ce4100ae7f02b62e38d3e8d4c3a86ea368349614a87827402f20cbb30
@@ -57,7 +53,7 @@ A `404` response indicates the content is not available.
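
If you only care about the status code, a quick sketch:

```bash
# 200 = pinned, 404 = not pinned
curl -s -o /dev/null -w "%{http_code}\n" \
  localhost:1633/pins/1bfe7c3ce4100ae7f02b62e38d3e8d4c3a86ea368349614a87827402f20cbb30
```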

#### Unpinning Content

If we later decide our content is no longer worth keeping, we can simply unpin it by sending a `DELETE` request to the pinning endpoint using the same reference.
If we later decide our content is no longer worth keeping, we can simply unpin it by sending a `DELETE` request to the pinning endpoint using the same reference:

```bash
curl -XDELETE http://localhost:1633/pins/1bfe7c3ce4100ae7f02b62e38d3e8d4c3a86ea368349614a87827402f20cbb30
@@ -112,17 +108,15 @@ While the pin operation will attempt to fetch content from the network if it is
## Global Pinning
[Local pinning](/docs/access-the-swarm/pinning#global-pinning) ensures that your own node does not delete uploaded files. But other nodes that store your
chunks (because they fall within their *neighbourhood of responsibility*) may have deleted content
that has not been accessed recently to make room for new chunks.
[Local pinning](/docs/access-the-swarm/pinning#local-pinning) ensures that your own node does not delete uploaded files. But other nodes that store your
chunks (because they fall within their *neighbourhood of responsibility*) may have deleted content that has not been accessed recently to make room for new chunks.
:::info
For more info on how chunks are distributed, persisted and stored within the network, read
<a href="/the-book-of-swarm.pdf" target="_blank" rel="noopener noreferrer">The Book of Swarm</a>.
:::
To keep this content alive, your Bee node can be configured to refresh this content when it is
requested by other nodes in the network, using **global pinning**.
To keep this content alive, your Bee node can be configured to refresh this content when it is requested by other nodes in the network, using **global pinning**.
First, we must start up our node with the `global-pinning-enable` flag set.
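
A minimal sketch of such a start command (the flag name comes from the text above; all other options are left at their defaults):

```bash
bee start \
  --global-pinning-enable
```
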
@@ -137,20 +131,21 @@ bee start\
Next, we pin our file locally, as shown above.
```bash
curl -H "swarm-pin: true" --data-binary @bee.mp4 localhost:1633/bzz\?bee.mp4
curl -H "Swarm-Pin: true" --data-binary @bee.mp4 localhost:1633/bzz\?bee.mp4
```
```json
{"reference":"7b344ea68c699b0eca8bb4cfb3a77eb24f5e4e8ab50d38165e0fb48368350e8f"}
```
Now, when we distribute links to our files, we must also specify the first two bytes of our
overlay address as the *target*. If a chunk that has already been garbage collected by
its storer nodes is requested, the storer node will send a message using
[PSS](/docs/dapps-on-swarm/pss) to the Swarm neighbourhood defined by this prefix,
of which our node is a member.
Now, when we distribute links to our files, we must also specify the
first two bytes of our overlay address as the *target*. If a chunk
that has already been garbage collected by its storer nodes is
requested, the storer node will send a message using
[PSS](/docs/dapps-on-swarm/pss) to the swarm neighbourhood defined by
this prefix, of which our node is a member.
Let's use the addresses API endpoint to find out our target prefix.
Let's use the addresses API endpoint to find out our target prefix:
```bash
curl -s http://localhost:1635/addresses | jq .overlay
@@ -160,10 +155,12 @@ curl -s http://localhost:1635/addresses | jq .overlay
"320ed0e01e6e3d06cab44c5ef85a0898e68f925a7ba3dc80ee614064bb7f9392"
```
Finally, we take the first two bytes of our overlay address, `320e` and include this when referencing our chunk.
Finally, we take the first two bytes of our overlay address, `320e` and include this when referencing our chunk:
```bash
curl http://localhost:1633/bzz/7b344ea68c699b0eca8bb4cfb3a77eb24f5e4e8ab50d38165e0fb48368350e8f?targets=320e
```
Now, even if our chunks are deleted, they will be repaired in the network by our local Bee node and will always be available to the whole swarm!
Now, even if our chunks are deleted, they will be repaired in the
network by our local Bee node and will always be available to the
whole swarm!