This repository has been archived by the owner on Aug 29, 2019. It is now read-only.

Split this repo into individual projects #13

Open

meeDamian opened this issue Nov 7, 2018 · 17 comments

@meeDamian
Member

meeDamian commented Nov 7, 2018

One benefit to doing that is that we can automate image generation upon version tag creation:

  1. We change (add new minor version) Dockerfile and tag it
  2. A CI tool automatically picks up that tag and
  3. CI builds both amd64 and arm images
  4. CI uploads all images to Docker Cloud
  5. CI creates manifest
  6. Everything's dandy

It should also be possible to have all images in one repo, but that might necessitate some odd and complicated setup.
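The tag-triggered flow above could be sketched roughly as follows. This is a dry-run illustration only: the image name, tag, and `ARCH` build argument are placeholders, not the project's actual setup.

```shell
#!/bin/sh
# Hypothetical sketch of the tag-triggered release flow described above.
# RUN=echo makes this a dry run that just prints the commands;
# set RUN= (empty) on a real CI machine with Docker installed.
RUN="${RUN:-echo}"
TAG="${CI_TAG:-v0.17.0}"
IMAGE="lncm/bitcoind"

# Steps 2-3: build one image per target architecture
$RUN docker build --build-arg ARCH=amd64   -t "$IMAGE:$TAG-amd64" .
$RUN docker build --build-arg ARCH=arm32v7 -t "$IMAGE:$TAG-arm" .

# Step 4: push both architecture-specific images
$RUN docker push "$IMAGE:$TAG-amd64"
$RUN docker push "$IMAGE:$TAG-arm"

# Step 5: create and push a manifest list, so that pulling
# "lncm/bitcoind:v0.17.0" resolves to the right architecture automatically
$RUN docker manifest create "$IMAGE:$TAG" "$IMAGE:$TAG-amd64" "$IMAGE:$TAG-arm"
$RUN docker manifest annotate "$IMAGE:$TAG" "$IMAGE:$TAG-arm" --os linux --arch arm
$RUN docker manifest push "$IMAGE:$TAG"
```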

cc. @AnotherDroog @nolim1t - opinions?

@nolim1t
Member

nolim1t commented Nov 7, 2018

Good idea. However, we might not necessarily want to build on every single push, as I'm also storing other things in the contrib/ folder that don't yet warrant a project of their own, but are closely related:

  • A copy of a cleaned-up lightningd.sqlite3 database with a fairly recent block store, so we can work with pruned nodes easily
  • A bitcoind and lightning config file generator (I'll put that in the box variant later). It generates RPC credentials and builds config files for bitcoind and lightning.
  • A service-generation setup utility (to work with both Docker and non-Docker images)
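For illustration, the config generator's credential step might look roughly like this. The option names are standard bitcoind settings, but the script itself is a guess at the generator's shape, not nolim1t's actual code.

```shell
#!/bin/sh
# Illustrative sketch only: generate random RPC credentials and write a
# minimal bitcoin.conf for a pruned node. Not the actual generator script.
RPC_USER="lncm"
# 32 random bytes, hex-encoded (64 characters)
RPC_PASS="$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')"

cat > bitcoin.conf <<EOF
rpcuser=$RPC_USER
rpcpassword=$RPC_PASS
prune=550
EOF
```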

@meeDamian
Member Author

Agreed about building; I've already solved it: each git push/merge gets built and tested, but only git tags get pushed to Docker Cloud.

I'm yet to look at what's in contrib/.

How long does it take to build the ln database? What are the security trade-offs of bundling it in?

Config generators sound interesting and useful! I think it should be run on the Pi upon first boot (not sure how you've set it up?).

I think we should abandon support for non-Docker images and focus on getting one flow as good as possible :). Also, if you mean systemd services, they're not used on Alpine.

@nolim1t
Member

nolim1t commented Nov 7, 2018

The lightningd.sqlite3 is from an existing lightning instance, but I run some SQL on it to remove any identifiable data. Then I upload it.

This is so that lightningd does not fetch blocks from the beginning and instead picks up fetching blocks from where it left off. It should be bundled in for sure.

The config generator actually works. It builds a bitcoin.conf, and config files for both clightning and lnd.

The service scripts are actually not based on systemd. They're standalone bash scripts that check the containers and restart them too. The idea is that they should always be running (because we are pruned, remember). I've added some non-Docker support too, but the focus is still Docker.
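A watchdog along those lines could be sketched as below. The container names are placeholders and the logic is a minimal guess at the described behavior, not the actual script.

```shell
#!/bin/sh
# Sketch of a shell container watchdog: restart any listed container that
# is not currently running. Container names are placeholders.
CONTAINERS="bitcoind lightningd"

# True if the named container reports State.Running == true
is_running() {
  docker inspect -f '{{.State.Running}}' "$1" 2>/dev/null | grep -q true
}

# Restart every container that is not currently running
watchdog() {
  for c in $CONTAINERS; do
    if ! is_running "$c"; then
      echo "watchdog: restarting $c"
      docker restart "$c" >/dev/null 2>&1
    fi
  done
}
```

In practice this would be invoked periodically, e.g. from cron, by calling `watchdog`.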

@meeDamian
Member Author

My security question was more about what exactly people using that file need to trust us not to do: how can this file be meddled with? Is using this file, downloaded from somewhere, a security risk?

Shouldn't we use docker compose to manage the containers for us?

@nolim1t
Member

nolim1t commented Nov 7, 2018

I don't think it's a bigger security risk than people downloading a pruned blockchain off us.

The alternative is they set up a Pi with enough storage for the entire blockchain, and download+verify it themselves in 2+ weeks.

Running lnd in neutrino mode has its own set of challenges right now. I'd probably wait till BIP157/158 gets implemented into core first.

docker-compose so far doesn't work on Raspbian.

I just lack available devices to test the Alpine image out at this time. If all goes well, I'll probably deprecate the script, as we'll have more control over networking. Otherwise that service is a good replacement.

@AnotherDroog
Member

Ideally, we use docker-compose, the OpenRC init system, and cron jobs for periodic tasks as much as possible.

It’s far more maintainable, composable and modular. This way others can mix and match instead of picking apart a monolithic script
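As a concrete (and purely hypothetical) illustration of that setup, an OpenRC service wrapping docker-compose might look roughly like this; the project directory is made up for the example.

```sh
#!/sbin/openrc-run
# Hypothetical OpenRC service bringing the container stack up via
# docker-compose. COMPOSE_DIR is illustrative only.
description="LNCM docker-compose container stack"

COMPOSE_DIR="/home/lncm"

depend() {
    need docker
}

start() {
    cd "$COMPOSE_DIR" && docker-compose up -d
}

stop() {
    cd "$COMPOSE_DIR" && docker-compose down
}
```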

@nolim1t
Member

nolim1t commented Nov 7, 2018

That's the plan: to get docker-compose working. But it's good to have a plan B.

@meeDamian
Member Author

> I don't think it's a bigger security risk than people downloading a pruned blockchain off us.

I don't think we should go down the path of "we compromised on X, so we can also compromise on Y". The approach should rather be: if we compromised on X, let's make sure that Y, Z, and W are sound, so that we can later just replace X and everything falls into place nicely.

But then again, I don't know how long building this database takes. Is it significantly longer than the first fully-confirmed Bitcoin deposit? 30 mins to 2 hours?

> The alternative is they set up a Pi with enough storage for the entire blockchain, and download+verify it themselves in 2+ weeks.

> Running lnd in neutrino mode has its own set of challenges right now. I'd probably wait till BIP157/158 gets implemented into core first.

I think our initial approach will indeed be having nodes download our weekly/monthly trusted pruned .zip, but eventually (when Neutrino support gets merged into Bitcoin Core, etc.) we'll replace it with a model of running Neutrino right away, with a full node syncing in the background, and then switching the backend upon sync completion.

> docker-compose so far doesn't work on Raspbian.

We're no longer using Raspbian, so it might make sense to see if the same issues are present on Alpine: what exactly are the issues you encounter with compose? What doesn't work?

> I just lack available devices to test the Alpine image out at this time. If all goes well, I'll probably deprecate the script, as we'll have more control over networking. Otherwise that service is a good replacement.

👍

> Ideally, we use docker-compose, the OpenRC init system, and cron jobs for periodic tasks as much as possible.

Just a note on that: probably best if each container/"app" manages its own tasks, and only system-wide ones (e.g. periodic backups of everything?) are run on the host OS.

@nolim1t
Member

nolim1t commented Nov 7, 2018

It requires a full node to build the database.

No idea how long it takes, but it isn't immediate.

@meeDamian
Member Author

I strongly believe we should at least know that before introducing another source of trust and potential malice.

@nolim1t
Member

nolim1t commented Nov 7, 2018

It's only until we can use Neutrino properly, or there's official support for pruned nodes.

@AnotherDroog
Member

AnotherDroog commented Nov 7, 2018 via email

@meeDamian
Member Author

FYI: I'm working on lnd here: lncm/docker-lnd and currently it's hosted on Docker Cloud on: meedamian/lnd, I'm happy to move it to lncm/lnd once the status of the stuff there is clarified(?) cc. @nolim1t

@nolim1t
Member

nolim1t commented Nov 8, 2018

I'm in @meeDamian's favourite place, so I can now pick up some more testing devices.

@nolim1t
Member

nolim1t commented Jan 13, 2019

Some of this is already in progress, such as what I started doing with invoicer. However, this repo can still have merit for:

  • Accepting contributions for other containers that can benefit the project (e.g. nginx, and maybe, in the far future, ipfs)
  • Allowing for other add-on modules

Will start offloading things slowly and see how it all works out, though.

@nolim1t nolim1t self-assigned this Jan 13, 2019
@meeDamian
Member Author

Let's revive this issue. Let's split all Dockerfiles into separate repos (lncm/docker-NAME) and have arm64 versions built by CI upon git tag. A bit more context here: lncm/pi-factory#129 (comment)

@meeDamian
Member Author

This issue can be closed when:

@meeDamian meeDamian self-assigned this Feb 6, 2019