index.json
[{"date":"1622749984", "url":"https://www.esamir.com/21/6/3/kubernetes-ssl-letsencrypt/", "title":"Kubernetes SSL Letsencrypt", "summary":"Create An Application Load Balancer Certificate Manager Certificate Issuer Certificate Update Nginx To Enable SSL Assumptions:\n You have some familiarity with Docker and Kubernetes NOTE: Any yaml file that\u0026rsquo;s included, please apply it via kubectl apply -f file.yml.\nOnce you have an application up and running you\u0026rsquo;ll need to ensure it\u0026rsquo;s secure. We\u0026rsquo;ll explore using letsencrypt to enable SSL. It\u0026rsquo;s a free certificate authority and makes it very easy to obtain a certificate.", "content":" Create An Application Load Balancer Certificate Manager Certificate Issuer Certificate Update Nginx To Enable SSL Assumptions:\n You have some familiarity with Docker and Kubernetes NOTE: Any yaml file that\u0026rsquo;s included, please apply it via kubectl apply -f file.yml.\nOnce you have an application up and running you\u0026rsquo;ll need to ensure it\u0026rsquo;s secure. We\u0026rsquo;ll explore using letsencrypt to enable SSL. It\u0026rsquo;s a free certificate authority and makes it very easy to obtain a certificate. Naturally SSL doesn\u0026rsquo;t mean your app is secure, but it\u0026rsquo;s a great first step.\nLetsEncrypt has two methods to validate the authenticity of the request in order issue a certificate.\n HTTP Validation DNS Validation Since we\u0026rsquo;re running in K8s, if you do use DNS validation you\u0026rsquo;ll need to use a DNS provider that integrates with K8s. In order to make this guide more portable and to avoid tying us to any particular DNS provider, I\u0026rsquo;m going to use HTTP Validation.\nCreate An Application Before we get started, we just need to create a simple application to test this with. We\u0026rsquo;ll install a hello world app running 3 replicas and a service load balancer for port 80. Here\u0026rsquo;s the yaml I\u0026rsquo;ve used below.\napiVersion:apps/v1kind:Deploymentmetadata:labels:app:hello-worldname:hello-worldspec:replicas:3selector:matchLabels:app:hello-worldstrategy:{}template:metadata:labels:app:hello-worldspec:containers:- image:docker.io/lsizani/helloimagePullPolicy:Alwaysname:hello-worldports:- containerPort:80resources:{}restartPolicy:Alwaysstatus:{}---apiVersion:v1kind:Servicemetadata:labels:app:hello-worldname:hello-docker-svcnamespace:defaultspec:ports:- port:80protocol:TCPtargetPort:80selector:app:hello-worldsessionAffinity:Nonetype:ClusterIPstatus:loadBalancer:{}Now, the IP address of the service is still internal so you won\u0026rsquo;t be able to connect to it unless you run the following command temporarily.\nkubectl port-forward svc/hello-docker-svc 8000:80 After which you can connect to http://127.0.0.1:8000 and you\u0026rsquo;ll see something along these lines:\nLoad Balancer At this point we need to actually expose this to the public. You\u0026rsquo;ll need to install an nginx-ingress Load balancer.\nhelm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm repo update helm install nginx ingress-nginx/ingress-nginx --set controller.publishService.enabled=true Configure nginx to serve traffic on port 80. Please note the lack of SSL. 
We\u0026rsquo;ll revisit this configuration and add SSL in a later pass.\napiVersion:networking.k8s.io/v1kind:Ingressmetadata:name:hello-dockerannotations:kubernetes.io/ingress.class:nginxnginx.ingress.kubernetes.io/ssl-redirect:\u0026#34;false\u0026#34;spec:rules:- host:hello.demo.comhttp:paths:- path:/pathType:Prefixbackend:service:name:hello-docker-svcport:number:80I\u0026rsquo;m using the DNS name hello.demo.com which is just an example, but if your K8s cluster is configured correctly you\u0026rsquo;ll get provisioned an IP address. Please update the config to use your own DNS value and have it match the IP address.\nFor example, if hello.demo.com doesn\u0026rsquo;t resolve to your external IP address this won\u0026rsquo;t work and it\u0026rsquo;ll fail to generate an SSL.\nAt this point if we navigate to http://hello.demo.com we\u0026rsquo;ll see the nice little hello world screen we\u0026rsquo;ve seen previously.\nCertificate Manager Now, we need to add a certificate manager. This is where you would either choose to use DNS validation. As I mentioned in the intro, I won\u0026rsquo;t be using DNS validation to make this guide agnostic to the platform. If you do simply specify the correct options you install the helm chart.\nInstalling the cert-manager:\nhelm repo add jetstack https://charts.jetstack.io helm repo update helm install \\ cert-manager jetstack/cert-manager \\ --namespace cert-manager \\ --create-namespace \\ --version v1.3.1 \\ --set installCRDs=true Certificate Issuer There\u0026rsquo;s a Prod and Staging environment for LetsEncrypt. I installed both of them for convenience. Staging isn\u0026rsquo;t rate limited and allows you to debug/validate behavior without risking getting hit by a quota.\nStaging\napiVersion:cert-manager.io/v1kind:ClusterIssuermetadata:name:letsencrypt-stagingspec:acme:# You must replace this email address with your own.# Let\u0026#39;s Encrypt will use this to contact you about expiring# certificates, and issues related to your account.email:[email protected]:https://acme-staging-v02.api.letsencrypt.org/directoryprivateKeySecretRef:# Secret resource that will be used to store the account\u0026#39;s private key.name:letsencrypt-staging-ssl# Add a single challenge solver, HTTP01 using nginxsolvers:- http01:ingress:class:nginxProduction\napiVersion:cert-manager.io/v1kind:ClusterIssuermetadata:name:letsencrypt-prodspec:acme:# You must replace this email address with your own.# Let\u0026#39;s Encrypt will use this to contact you about expiring# certificates, and issues related to your account.email:[email protected]:https://acme-v02.api.letsencrypt.org/directoryprivateKeySecretRef:# Secret resource that will be used to store the account\u0026#39;s private key.name:letsencrypt-prod# Add a single challenge solver, HTTP01 using nginxsolvers:- http01:ingress:class:nginxAt this point we have the ability to create a certificate. It will utilize which ever issuer we want. 
In the example below, I\u0026rsquo;m using the letsencrypt-prod which matches the definition above.\nCertificate apiVersion:cert-manager.io/v1alpha2kind:Certificatemetadata:namespace:defaultname:hello-certsspec:secretName:hello-certsissuerRef:name:letsencrypt-prodkind:ClusterIssuerdnsNames:- hello.demo.comIf everything worked correctly you should be able to see the certificate via\nkubectl get secrets hello-certs -o json Update Nginx To Enable SSL Finally now that we have a certificate, let\u0026rsquo;s enable nginx to use it.\napiVersion:networking.k8s.io/v1kind:Ingressmetadata:name:hello-dockerannotations:kubernetes.io/ingress.class:nginxspec:tls:- hosts:- hello.demo.comsecretName:hello-certsrules:- host:hello.demo.comhttp:paths:- path:/pathType:Prefixbackend:service:name:hello-docker-svcport:number:80At this point you should be able to connect https://hello.demo.com and get a valid response.\nYou can also check the validity of your certificate by testing it via SSL Labs I received in A+ for the certificate I generated\n","tags":["kubernetes","cloud","docker"], "section": "posts"},{"date":"1619715898", "url":"https://www.esamir.com/21/4/29/gcp-google-ipv6-kubernetes-support/", "title":"GCP Google IPv6 Kubernetes support", "summary":"Assumptions Implementation IPV6 Config Load Balancer Backend Configuration LoadBalancer FrontEnd configuration Verification Closing Notes HTTPS Most technical writes and writers in general tend to write in content in the hope that there will be value in their contribution for a good while. Usually most of the content I write is mainly for my own purposes to preserve the directions because I struggled to find a solution online.", "content":" Assumptions Implementation IPV6 Config Load Balancer Backend Configuration LoadBalancer FrontEnd configuration Verification Closing Notes HTTPS Most technical writes and writers in general tend to write in content in the hope that there will be value in their contribution for a good while. Usually most of the content I write is mainly for my own purposes to preserve the directions because I struggled to find a solution online. In other cases, I write because I have something to add to the conversation that I didn\u0026rsquo;t see highlighted in the other technical writing / online chatter.\nThis is one of those few articles that I hope very much will become obsolete sooner rather then later. It\u0026rsquo;s getting close to 10 years since IPv6 has launched. We were supposed to run out and have run out of IPV4 addresses ages ago. Somehow we still manage to squeeze some IPV4 addresses from here and there but we\u0026rsquo;re really way past the time when we should be moving forward.\nThere are far too many applications and libraries that still use IPV4 and don\u0026rsquo;t even think about IPv6 networking just yet. Docker is a great example where it feels like IPv6 was an afterthought. I wrote an article a while back about the pains of Docker IPv6. Since then, it sounds like they\u0026rsquo;ve added a few fixes that make the experience easier to use.\nI do wish IPv6 adoption was a bit more prevalent. This particular guide is how to get IPv6 working on Kubernetes on GCP. There is native support for IPv6 on kubernetes but that hasn\u0026rsquo;t made it out to GCP just yet. Most of VPC\u0026rsquo;s networking has yet to add support for IPv6.\nTo quote the article: \u0026ldquo;VMs in the VPC network can only send to IPv4 destinations and only receive traffic from IPv4 sources. 
However, it is possible to create an IPv6 address for a global load balancer.\u0026rdquo; The latter part is what we\u0026rsquo;re focusing on. We have a IPV4 based kubernetes cluster where we\u0026rsquo;re using a load balancer to direct traffic from an IPv6 stack to the application we have running.\nLastly although I\u0026rsquo;m writing this to solve a problem for kubernetes networking not having IPv6 support, this solution doesn\u0026rsquo;t actually rely or event expect kubernetes. We\u0026rsquo;re simply mapping traffic from one IPv6 address to an IPv4 address. As long as it\u0026rsquo;s responding than it\u0026rsquo;ll work fine.\nAssumptions You have a running web app in kubernetes deployed using IPV4. You have some familiarity with the GCP platform. Disclaimer: This is VERY google specific. So it will not translate easily to other cloud providers. There is likely a similar pattern you can use but you\u0026rsquo;ll have to check with your own cloud provider to see how to make that work.\nEnough of all this background, let\u0026rsquo;s get on the the fun stuff.\nImplementation IPV6 Config First step is to create a new external IPv6 address.\nOnce create you should see it in the listing. You can add a DNS entry for your hostname if you\u0026rsquo;d like or we can test this by IP if preferred.\nLoad Balancer Backend Configuration Create a new HTTP Load Balancer. Make sure to select HTTP and not TCP or UDP.\nMost load balancers in google defined a frontend (entrypoint), a backend (usually an internal GCP service to send the request to and routing rules (if needed) to control the flow of data.\nFirst we setup a backend. We\u0026rsquo;ll create a backend service and select Internet Network Endpoint Group for the backend type. I\u0026rsquo;m expecting HTTPS traffic so I\u0026rsquo;ll select https as well for the protocol.\nWhen you create a new instance group you\u0026rsquo;ll have to put in the IP address you want traffic to be directed to. Here\u0026rsquo;s an example of a ghost-blog with an example IP.\nAt this point you have all the pieces you need to create your backend. You can see what my final selection looks like here:\nYou can enable CDN, healthcheck, logging etc on here as well. I didn\u0026rsquo;t explore all these settings but feel free to enable/disable whatever you need.\nLoadBalancer FrontEnd configuration This part is much easier. Select the protocol, since I\u0026rsquo;m routing IPv6 traffic, I chose https, Select the IP Version and choose IPv6 and then the drop down should give you all the options available. Simple select the IP you created. In my case that would be blog-example-ip\nVerification Finally you can confirm that everything is working with:\nping6 2600:1901:0:39bc:: That of course should respond and be reachable, and the real test is\ncurl -6 https://myhostname should respond with your website\u0026rsquo;s content.\nIf you don\u0026rsquo;t want to wait for DNS propagation you can add this entry to your hosts file. Naturally replace this IP with your own IPv6 address.\n2600:1901:0:39bc::\tmyhostname Once everything is loaded I highly recommended this chrome extension called IPvFoo. It\u0026rsquo;ll show you all the IPv4 and IPv6 address associated with your website.\nYou can see an example for how GeekBeacon looks like. If you haven\u0026rsquo;t checked out it\u0026rsquo;s a really cool community I\u0026rsquo;m spending far too much time in.\nClosing Notes HTTPS This pattern forwards https -\u0026gt; https which is not usually how a load balancer is setup. 
We tend to want to offload the CPU cycles of encryption on the load balancer and the backend usually simply serves HTTP traffic.\nMy K8 application already handles HTTPS and I don\u0026rsquo;t really want to turn that off. My hope is that GCP would catch up and enable IPV6 in their GKE implementation sooner rather then later.\nAlso, I\u0026rsquo;m utilizing an K8s ingress control that manages https. The big benefit of relying on K8s rather then GCP is that if at some point I need to migrate to another solution on another cloud provider or hosting internally, the only real requirement is to have an IP + SSL certificate allocated. Everything else is simply a K8s manifest that needs to be deployed to one destination vs another. The ability to be cloud agnostic is much preferred to over the savings a few CPU clock cycles.\n","tags":["docker","linux","opensource","kubernetes"], "section": "posts"},{"date":"1618963200", "url":"https://www.esamir.com/21/4/21/local-email-development-service/", "title":"Local Email Development Service", "summary":"Whenever I\u0026rsquo;ve worked on any application that sends out emails it\u0026rsquo;s always an issue on how to mimic the behavior on my local laptop. Usually wherever the app is deployed is configured to be able to just \u0026lsquo;work\u0026rsquo;. aka you can send email by connecting to localhost with no auth.\nNow, how do I mimic locally? In the past i\u0026rsquo;ve tried setting up postfix etc in a docker stack, or more recently doing an smtp relay using google services.", "content":"Whenever I\u0026rsquo;ve worked on any application that sends out emails it\u0026rsquo;s always an issue on how to mimic the behavior on my local laptop. Usually wherever the app is deployed is configured to be able to just \u0026lsquo;work\u0026rsquo;. aka you can send email by connecting to localhost with no auth.\nNow, how do I mimic locally? 
In the past i\u0026rsquo;ve tried setting up postfix etc in a docker stack, or more recently doing an smtp relay using google services.\nFor reference the previous pattern:\n#!/usr/bin/env bash docker run --restart always --name mail \\ -e RELAY_HOST=smtp.gmail.com \\ -e RELAY_PORT=587 \\ -e RELAY_USERNAME=user \\ -e RELAY_PASSWORD=secret \\ -p 25:25 \\ -d bytemark/smtp This pattern requires you to enable unsecured application in your google account.\nThe new pattern I\u0026rsquo;ve started using of late is leveraging the wonderful tool called MailHog.\nTo set it up simply add the following to your docker-compose.yml file.\nmail:image:mailhog/mailhog:v1.0.1container_name:mailports:- 8025:8025- 1025:1025If your application is running in docker you don\u0026rsquo;t need to expose 1025, if you\u0026rsquo;re running your app outside of docker, you need to get to 1025 to send mail.\ndon\u0026rsquo;t forget to bring it up via docker-compose up -d mail\nAt this point you just need to configure your SMTP settings.\nHere\u0026rsquo;s an example snippet that I used for my app.\nreporting:to:[[email protected]]from:\u0026#34;[email protected]\u0026#34;subject:\u0026#34;Report for things\u0026#34;hostname:localhostusername:password:port:1025Main parts you should note is the hostname is localhost, and port is 1025.\nThen once everything is done you can connect to port 8025 and retrieve your email.\n","tags":["development","email"], "section": "posts"},{"date":"1616021241", "url":"https://www.esamir.com/21/3/17/letsencrypt-ssl-dns-automation-with-lego/", "title":"LetsEncrypt SSL DNS automation with lego", "summary":"Lego and LetsEncrypt Dependencies/Tooling SSL Ghost Web Site Systemd startup script Final Notes on ghost Nginx SSL Wrapping Backup Strategies Final Notes Lego and LetsEncrypt if you haven\u0026rsquo;t heard about letsencrypt you should. If you\u0026rsquo;re still serving all your traffic over HTTP you should stop and move over the HTTPS everything. Honestly at this point there shouldn\u0026rsquo;t be any reason not to use SSL.", "content":" Lego and LetsEncrypt Dependencies/Tooling SSL Ghost Web Site Systemd startup script Final Notes on ghost Nginx SSL Wrapping Backup Strategies Final Notes Lego and LetsEncrypt if you haven\u0026rsquo;t heard about letsencrypt you should. If you\u0026rsquo;re still serving all your traffic over HTTP you should stop and move over the HTTPS everything. Honestly at this point there shouldn\u0026rsquo;t be any reason not to use SSL. The CPU/server cost is minimal and there is tooling that makes this trivial. I\u0026rsquo;m going to walk you through the process of setting SSL using nginx and we\u0026rsquo;ll use docker for good measure.\nDependencies/Tooling Some familiarity with the following tools would be nice to have but not required.\n Docker nginx SSL SSL LetsEncrypt provides free SSL/TLS certificate for the world at large as long as you can verify that you are who you are. They have several tools that validate you, the easiest being running a web server, adding a DNS entry and so on. Certbot is likely the most famous tool that generates SSL certificate\nPersonally I prefer using Lego which is a letsencrypt written in Go. Partly I\u0026rsquo;m a big fan of Go as a language so I appreciate the developer\u0026rsquo;s choice but also it lets me wildcard certificates and SAN very easily.\nAlthough you can generate an SSL easily enough in order to get a wildcard DNS verification is much easier and simpler to use. 
Though you do need a more legit DNS provider that has an API exposed. Lego has a long list of supported DNS Providers. I have used Amazon Route53 and will likely move to a GCP one down the line, but honestly any of them will work fine.\nWhat is very cool and special about SAN is that it creates a single certificate that supports multiple domains.\nYou can run this anywhere but in my case I have it setup in /etc/letsencrypt.\nThe first step is obtaining the certificate the first time.\ncd /etc/letsencrypt AWS_ACCESS_KEY_ID=SECRET AWS_SECRET_ACCESS_KEY=\u0026#34;SECRET\u0026#34; lego --dns route53 --domains=\u0026#34;*.foobar.org\u0026#34; --domains=\u0026#34;foobar.org\u0026#34; --email valid@email run As you can see I created a wildcard and a foobar.org. It\u0026rsquo;s a bit annoying but *.foobar.org does not cover the base domain. You can enumerate all your domains if you don\u0026rsquo;t want to have a wild card. Example\ncd /etc/letsencrypt AWS_ACCESS_KEY_ID=SECRET AWS_SECRET_ACCESS_KEY=\u0026#34;SECRET\u0026#34; lego --dns route53 --domains=\u0026#34;www.foobar.org\u0026#34; --domains=\u0026#34;foobar.org\u0026#34; --domains=\u0026#34;mail.foobar.org\u0026#34; --domains=\u0026#34;ftp.foobar.org\u0026#34; --email valid@email run Once you have your certificate you simply need to ensure to renew it regularly. I created a simple bash scripts to do so.\n#!/usr/bin/env bash cd /etc/letsencrypt/ DOMAINS=\u0026#34;domain1 domain2 foobar.org\u0026#34; AWS_KEY=\u0026lt;CHANGEME\u0026gt; AWS_SECRET=\u0026lt;CHANGEME\u0026gt; for domain in $DOMAINS; do AWS_ACCESS_KEY_ID=$AWS_KEY AWS_SECRET_ACCESS_KEY=\u0026#34;$AWS_SECRET\u0026#34; lego --dns route53 --domains=\u0026#34;*.$domain\u0026#34; --domains=\u0026#34;$domain\u0026#34; --email [email protected] renew done Using AWS has a bit of a cost for using their DNS providers, but with the 3 domains I have I don\u0026rsquo;t think my bill comes to more than maybe $3/month. That\u0026rsquo;s well worth it to me.\nGhost Web Site version:\u0026#34;3.7\u0026#34;services:www:image:ghost:4.0.1ports:- \u0026#34;8890:2368\u0026#34;restart:alwaysenv_file:.envvolumes:- ./content:/var/lib/ghost/content/mysql:container_name:shared_mysqlimage:mysql:5.7.29restart:alwaysenv_file:.envvolumes:- ./data:/var/lib/mysqlAt this point we expose a port locally 8890 that is serving HTTP traffic.\nWe need a .env file that is referenced in the MySQL and ghost instance.\n## Database Ghost config database__client=mysql database__connection__host=shared_mysql database__connection__user=root database__connection__password=testing database__connection__database=ghost ##MySQL config MYSQL_ROOT_PASSWORD=testing MYSQL_DATABASE=gbfest ## Domain config #url=http://0.0.0.0:8890 url=https://www.foobar.org ## Email settings Production ## FILL THESE IN as appropriate #mail__from=donotreply@domain #mail__transport=SMTP #mail__options__port=587 #mail__options__host= #mail_options_service=SMTP #mail__options__auth__user= #mail__options__auth__pass= At this point we can bring up the website using docker-compose up -d but to earn brownie points, we\u0026rsquo;ll use a systemd file.\nSystemd startup script in /etc/systemd/system create the following file. 
We\u0026rsquo;ll assume you have a local user named docker_user that ideally has a shell of /bin/nologin that\u0026rsquo;s part of the docker group.\nghost.service file:\n[Unit] Description = Ghost Website After=docker.service Requires=docker.service [Service] Type=simple WorkingDirectory=/home/docker_user/ghost ExecStart=/usr/bin/docker-compose up ExecStop=/usr/bin/docker-compose stop ExecReload =/usr/bin/docker-compose restart User=docker_user Group=docker Restart=always RestartSec=3 [Install] WantedBy=multi-user.target At this point we can start/stop/enable using systemd.\nsystemctl enable ghost.service systemctl status ghost In otherwords, if you restart your computer your website will still come up.\nFinal Notes on ghost At this point we\u0026rsquo;re listening to traffic on http://localhost:8890 that we are not exposing to the internet because like good little internet samaritans we have firewall rules that prevents bad actors from getting in. But we do need to let people wanting to see our website access.\nNginx SSL Wrapping If you haven\u0026rsquo;t you should open up port 80 and 443 on your webserver. We won\u0026rsquo;t be serving traffic on 80 but if someone comes on 80 we should redirect them and in order to do that we need 80 exposed.\nin /etc/nginx/sites-available we\u0026rsquo;re going to create the following ghost.conf.\nserver { # SSL configuration \t# \tlisten 443 ssl; listen [::]:443 ssl; access_log /var/log/nginx/ghost_access.log; error_log /var/log/nginx/ghost_error.log; ssl_certificate /etc/letsencrypt/.lego/certificates/_.foobar.org.crt; ssl_certificate_key /etc/letsencrypt/.lego/certificates/_.foobar.org.key; client_max_body_size 50M; root /var/www/html/ghost; index index.html index.htm; server_name www.foobar.org; large_client_header_buffers 4 32k; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_pass http://0.0.0.0:8890; include proxy_params; } location ~ /(data|conf|bin|inc)/ { deny all; } location = /robots.txt { allow all; log_not_found off; access_log off; } # Block access to \u0026#34;hidden\u0026#34; files and directories whose names begin with a # period. This includes directories used by version control systems such # as Subversion or Git to store control files. location ~ (^|/)\\. { return 403; } } server { listen 80; listen [::]:80; return 301 https://$host$request_uri; server_name www.foobar.org; } Before this is finalized we need to enable the site.\ncd /etc/nginx/sites-enabled ln -s ../sites-available/ghost.conf 01_ghost.conf nginx -t ## Validates config systemclt restart nginx We\u0026rsquo;re essentially redirecting port 80 to 443 and creating a proxy that wraps all traffic around SSL and services traffic for www.foobar.org. That also means if we connect to our server by IP we\u0026rsquo;ll get the default page rather then a valid website.\nBackup Strategies At this point you should be taking a regular SQL dump of your database, and backup the content of your ghost website. The ghost lab has a export for all your content which you should use regularly before doing any major operations.\nFinal Notes I personally like using Google Storage for all my images content. I\u0026rsquo;m currently maintaining a copy of the ghost docker image with additional plugins so support GS with plans of adding S3 soon. You can find the source here and docker images can be found here. 
The tags are matched to the official ghost docker container.\n","tags":["docker","ssl","security","automation","ghost","lego"], "section": "posts"},{"date":"1615766400", "url":"https://www.esamir.com/21/3/15/awesome-kubernetes-resources/", "title":"Awesome Kubernetes Resources", "summary":"A much more comprehensive list can be found here. This is likely a subset of the items in that repo and a few additions of tools i\u0026rsquo;ve tried out or want to try out to make my K8s life easier.\nCLI Tools:\n Helm Kubernetes package manager K9s CLI management tool. kube-shell integrated shell environment Cluster Provisioning\n kind Docker based kubernetes deployment. minikube VM based cross-platform kubernetes with addons support Automation and CI/CD", "content":"A much more comprehensive list can be found here. This is likely a subset of the items in that repo and a few additions of tools i\u0026rsquo;ve tried out or want to try out to make my K8s life easier.\nCLI Tools:\n Helm Kubernetes package manager K9s CLI management tool. kube-shell integrated shell environment Cluster Provisioning\n kind Docker based kubernetes deployment. minikube VM based cross-platform kubernetes with addons support Automation and CI/CD\n flux 2 Development:\n Hephy Developer workflow tool Skaffold client side deployment Testing/Troubleshooting\n Chaos Kube Kube Monkey Backup/Restore\n Velero Service Mesh:\n Istio Certifications\n killer Kubernetes Exam Simulator ","tags":["kubernetes","cloud","awesome"], "section": "posts"},{"date":"1615075200", "url":"https://www.esamir.com/21/3/7/home-backup-solution/", "title":"Home Backup Solution", "summary":"Home Backup Solution Choosing Storage Solution Backup Service Systemd Service BackUp Data Encryption Destination Source Schedule Backup Retention Final Step, Backup Duplicati References/ Requirements Home Backup Solution I setup a NAS a while ago using Arch Linux and ZFS migrating away from Synology. It\u0026rsquo;s really nice to control your data and be able to use more advanced tooling then the limited set of applications that are available from Synology.", "content":" Home Backup Solution Choosing Storage Solution Backup Service Systemd Service BackUp Data Encryption Destination Source Schedule Backup Retention Final Step, Backup Duplicati References/ Requirements Home Backup Solution I setup a NAS a while ago using Arch Linux and ZFS migrating away from Synology. It\u0026rsquo;s really nice to control your data and be able to use more advanced tooling then the limited set of applications that are available from Synology.\nI had S3 backups setup with Synology and wanted to create something similar on the Arch Server.\nI have about 3 TB of data I want to backup and was looking for something that\u0026rsquo;s secure and is a bit more intelligent than a crontab of aws sync.\nAlso, on the off chance that my backup takes more than a few days, I wanted it to be smart enough to not execute two jobs at the same time.\nChoosing Storage Solution I was initially using S3 but firstly I find the AWS permissioning incredibly convoluted to manage. I started using GCP more so of late and I really like its simplicity. 
I ended up giving it a go and made sure to use the equivalent of their \u0026lsquo;cold storage\u0026rsquo;\nFrom the available list of Storage classes I ended up choosing \u0026lsquo;Archive\u0026rsquo;\nit Also seems to have a very reasonable pricing for essentially long term storage that does not need to be access very frequently as you can see below.\nThe other reason I chose GS (Google Storage) over S3 is because the tool I ended up choosing didn\u0026rsquo;t support S3 coldstorage backup.\nBackup Service Duplicati is a really cool project though it can be a bit tricky to setup. I ended up utilizing docker-compose and docker to get it up and running.\nHere\u0026rsquo;s my simple little setup.\nversion:\u0026#34;3.7\u0026#34;services:duplicati:image:ghcr.io/linuxserver/duplicaticontainer_name:duplicatihostname:duplicatienvironment:- PUID=1000- PGID=1000- TZ=America/Los_Angeles# - CLI_ARGS= #optionalvolumes:- ./config:/config- /backups:/backups- /tank:/sourceports:- 8200:8200restart:unless-stoppedPlease note the mounts. /backups is the destination where you want to store data. I\u0026rsquo;m mainly using it for the cloud storage so that is mainly pointless. /tank is my ZFS mount point which is mapped to /source and of course the configuration is exposed as a local volume.\nSystemd Service Place the following file under /etc/systemd/system/duplicati.service\n[Unit] Description = Duplicati After=docker.service Requires=docker.service [Service] Type=idle WorkingDirectory=/home/docker_user/backup ExecStart=/usr/bin/docker-compose up ExecStop=/usr/bin/docker-compose stop ExecReload =/usr/bin/docker-compose restart User=docker_user Restart=always RestartSec=3 RestartPreventExitStatus=0 TimeoutStopSec=10 [Install] WantedBy=multi-user.target As you can see I\u0026rsquo;m running this as a special user I created with limited permissions only being used for running docker applications.\nLet\u0026rsquo;s enable the service using:\nsystemclt enable duplicati and of course if you haven\u0026rsquo;t done so already you should start the service.\nsystemctl start duplicati BackUp Data Once duplicati is running it\u0026rsquo;ll be accessible via at port 8200. My NAS\u0026rsquo;s local ip, is 192.168.1.200, so I can access it via http://192.168.1.200:8200\nYou should see something that looks like this:\nEncryption When you create a backup it\u0026rsquo;ll ask you to enter an Encryption key if you would like. GS is already private but the added level of encryption doesn\u0026rsquo;t hurt. Like all secrets make sure you remember the key in case you need it for disaster recovery.\nDestination The next screen lets you setup the GS bucket. You can use any number of alternative offsite solution ranging from a local backup, (s)ftp, S3, Azure, webdav and many more. You can find the full list here.\nIf you use GS, then authorization is made very simply by simply clicking on the Auth key will generate a token for duplicati to use.\nSource The next two steps i\u0026rsquo;ll skip over as they\u0026rsquo;re fairly straight forward. Just choose the Source data. If you remember you mounted your Raid/ZFS as /source and the destination if you did mount it should be /backup. Navigation has a mild windows feel with My Computer replacing the typical root but otherwise this should feel familiar. You can add a single or multiple paths.\nPlease note, that the output will be stored in a duplicati special format. 
You won\u0026rsquo;t be able to go to the backup location and simply look at the files without Duplicati restoring the data.\nSchedule The schedule is pretty easy to setup. Though it would be nice if they supported cron expression, I found their UI pretty flexible and configurable for every pattern that I was interested in.\nBackup Retention Finally the backup retention. You can set it up so backups older than so many days are deleted, or ensure that you have no more than N number of backups etc. Smart backup retention is pretty cool as well keeping one copy of the last 7 days, one copy of the last 4 weeks and one copy of the last 12 months. Personally since i\u0026rsquo;m storing about 3 TB of data as it is, I\u0026rsquo;m only keeping one copy remotely.\nFinal Step, Backup Duplicati I also wanted to backup my Duplicati installation. Initially I had setup duplicati to save to my NAS as a local backup but since I need duplicati to restore it I ended up just adding a little cron + rsync to ensure that I have a copy of the settings in case I do something extra dumb and destroy my docker installation of duplcati.\nReferences/ Requirements Duplicati Docker Docker Compose Google Cloud ","tags":["linux","tutorial","opensource","cloud","homeserver"], "section": "posts"},{"date":"1608781195", "url":"https://www.esamir.com/20/12/23/the-toxicity-of-open-source/", "title":"The Toxicity Of Open Source", "summary":"My most recent experiences though really saddened me and exposed a certain toxicity in the OSS community that I really wish wasn\u0026rsquo;t so prevalent.\nI\u0026rsquo;ve been posting on and off on dev.to but I never introduced myself so let me preface this post with a bit about me to give some context.\nI\u0026rsquo;ve been using open source and Linux on and off since the early 2000s. My first Linux distribution was Caldera 2.", "content":"My most recent experiences though really saddened me and exposed a certain toxicity in the OSS community that I really wish wasn\u0026rsquo;t so prevalent.\nI\u0026rsquo;ve been posting on and off on dev.to but I never introduced myself so let me preface this post with a bit about me to give some context.\nI\u0026rsquo;ve been using open source and Linux on and off since the early 2000s. My first Linux distribution was Caldera 2.4 an old RPM based that was cool in its hay-day and then turned towards the dark side with various different lawsuits against Linux that never went anywhere.\nI\u0026rsquo;ve been involved in more Linux Users Groups, mailing lists and conferences then I can count. I helped start a Linux conference that\u0026rsquo;s been running for a good while after I graduated educating and teaching people about the benefits of Linux and Open Source.\nThough the pandemic climate makes it difficult to have the same experience. Planning a conference exposed me to open source and open culture in a way that going to LUGs never had. I had the opportunity to get a true grasp of what the Free Software Foundation (FSF) truly stands for and talk and meet with some of the key figures of Linux/OSS movement.\nI\u0026rsquo;ve always appreciated Open Source because it provides me as a user with (first off an alternative to commercial programs) and the freedom to run, edit, contribute and share (the 4 FSF freedoms). I\u0026rsquo;ve always appreciated the freedom of choice that Linux has. 
Nobody needs 15 editors, but I really do love that there are 15 different ways to add \u0026lsquo;hello world\u0026rsquo; to a text document.\nWe\u0026rsquo;ve built up a community that is all about the freedom of expression, freedom of choice, freedom to use code and software in anyway you desire with the appropriate permissive license. So it\u0026rsquo;s always sad when you hear and see people bashing other users because they don\u0026rsquo;t agree with their choices. Yes we have our mini flame wars of vim vs emacs vs nano. Some people take it a bit too seriously but at the end of the day most respect other\u0026rsquo;s people choice to use whatever they like and move on. (I would like to think at least)\nI volunteered to help with an Open Source conference and they rallied last minute to try and figure out what they could do this year since everyone is quarantined still. They spent their time and researched the best tools they could find given the time, resources, and skillset available to them and ended up choosing Zoom. I know, not my favorite tool either but at the end of the day, it\u0026rsquo;s a well tested tool that works. They ran an entire 3 day event on zoom that had the founder of redhat, the FSF and so many more incredible speakers sharing their knowledge and expertise. Yet, somehow all of that was eclipsed by the rants about the platform they chose.\nThere were so many tweets and messages I ran across of that were complaining about the tooling. Yes, we know that Zoom isn\u0026rsquo;t open source, but if we\u0026rsquo;re using zoom to show a video from key figures in OSS speaking about relevant topics that is important to them and zoom is allowing us to share that content with a wider audience. Does it matter that much that it\u0026rsquo;s closed source? If we value a user\u0026rsquo;s ability to choose, shouldn\u0026rsquo;t we actually let the people choose.\nYou can\u0026rsquo;t scream from the mountain tops about how amazing Linux is, and how you appreciate the freedom of choice, then look condescendingly at the poor plebeian who chose to use Windows.\nSure, not my favorite operating system. I do find it limiting to my day to day tasks, but that\u0026rsquo;s my experience. There might be a legitimate reason that an individual may NEED to use windows, or hell maybe they just like it. They honestly don\u0026rsquo;t even need to justify their choice. They DO deserve to be able to say they are using windows, mac, photoshop, office or honestly any piece of software and any technology without being put on the defensive about it. There is nothing I can imagine that would hurt the Linux community more than this false sense to elitism that makes people dismiss others because their values or choices don\u0026rsquo;t align with theirs.\nI\u0026rsquo;ve gotten into so many conversation with people in the community about this. If you value the freedom of choice, then you can\u0026rsquo;t be upset that the person\u0026rsquo;s choice doesn\u0026rsquo;t line with your views.\nAm I completely off base here or am I missing something in the OSS community?\nThe year of the \u0026ldquo;Linux Desktop\u0026rdquo; will never come if every time someone is curious about our community we respond with antagonism, judgement, and all around toxicity. I\u0026rsquo;m naturally speaking about a vocal minority of the community but I do want to bring it to attention. 
Those few individuals no matter how brilliant and capable they might be are doing, in my opinion, more harm than good.\n","tags":["linux","opensource","community"], "section": "posts"},{"date":"1598400000", "url":"https://www.esamir.com/20/8/26/docker-ipv6-guide/", "title":"Docker IPV6 Guide", "summary":"IPV4 IPV6 Step 1, enable in the Daemon Step 2, Firewall rules Step 3, Docker Compose + IPV6 Step 4, Resolve NAT Issues Final thoughts. Unconfirmed Fix I spent a good bit of time trying to figure this out, so I thought I\u0026rsquo;d record this for posterity\u0026rsquo;s sake and others might benefit.\nAssumptions:\n You are somewhat familiar with docker You have some exposure with docker-compose You have at least a basic understanding of networking fundamentals.", "content":" IPV4 IPV6 Step 1, enable in the Daemon Step 2, Firewall rules Step 3, Docker Compose + IPV6 Step 4, Resolve NAT Issues Final thoughts. Unconfirmed Fix I spent a good bit of time trying to figure this out, so I thought I\u0026rsquo;d record this for posterity\u0026rsquo;s sake and others might benefit.\nAssumptions:\n You are somewhat familiar with docker You have some exposure with docker-compose You have at least a basic understanding of networking fundamentals. I\u0026rsquo;m not a networking expert but you should have an understanding of TCP, the existence of IPV4 and IPV6 and how they differ. Some basic understanding of firewalls, iptables would be good. First of all as a developer I find docker to be a glorious thing. Though it\u0026rsquo;s not limited to just docker, containers in general have made software development so much easier. It allows a user to run a simple command and have a database, cache, application pretty much any component at your disposal without having the need to be a system administrator and understand the intricacies of configurations in order to get all of these services up and running.\nThat being said, in production there are many MANY questions that need to be addressed before going from a dev\u0026rsquo;s use case of docker to a production environment but still it\u0026rsquo;s really a great tool.\nNow, back to the problem at hand, networking.\nIPV4 Docker was initially built with IPV4 in mind and IPV6 feels like an afterthought. For IPV4, if you want to expose a service, linked multiple docker containers together, do load balancing everything seems to \u0026lsquo;Just work\u0026rsquo;. Everything is taken care of for you under the hood and you really don\u0026rsquo;t need to do much.\nThis is an example configuration pulled from the wordpress docker hug.\nversion:\u0026#39;3.1\u0026#39;services:wordpress:image:wordpressrestart:alwaysports:- 8080:80environment:WORDPRESS_DB_HOST:dbWORDPRESS_DB_USER:exampleuserWORDPRESS_DB_PASSWORD:examplepassWORDPRESS_DB_NAME:exampledbvolumes:- wordpress:/var/www/htmldb:image:mysql:5.7restart:alwaysenvironment:MYSQL_DATABASE:exampledbMYSQL_USER:exampleuserMYSQL_PASSWORD:examplepassMYSQL_RANDOM_ROOT_PASSWORD:\u0026#39;1\u0026#39;volumes:- db:/var/lib/mysqlvolumes:wordpress:db:Bringing up different services and wiring them together is a breeze. All the iptables rules are taken care of and things just work. I can connect to http://localhost:8080 and I will see the wordpress getting started guide that walks me through the steps to configure my application.\nNow, IF you are operating in an IPV6 only world. 
Let\u0026rsquo;s see what this nightmare is about.\nIPV6 Step 1, enable in the Daemon You need to edit /etc/docker/daemon.json and add these options.\n{ \u0026#34;ipv6\u0026#34;: true, \u0026#34;fixed-cidr-v6\u0026#34;: \u0026#34;fd00::/80\u0026#34; } The fixed-cidr6 doesn\u0026rsquo;t matter too much but you shouldn\u0026rsquo;t clash with other networks already defined. The official documentation can be found here and I wouldn\u0026rsquo;t stray too far from it unless you know what you\u0026rsquo;re doing.\nNow, the official instructions say all you need to do is restart the service, but I\u0026rsquo;ve found you may need to restart your computer.\nAt this point, the network for docker0 default network has been updated to use IPv6. This does NOT mean that every docker network is IPv6 enabled.\nStep 2, Firewall rules For some reason this is not automatic and needs to be applied manually.\nip6tables -t nat -A POSTROUTING -s fd00::/80 ! -o docker0 -j MASQUERADE At this point basic functionality works fine. So to test this out we can run this:\ndocker run --rm -t busybox ping6 -c 4 google.com Which results in:\nPING google.com (2607:f8b0:400a:808::200e): 56 data bytes 64 bytes from 2607:f8b0:400a:808::200e: seq=0 ttl=119 time=17.133 ms 64 bytes from 2607:f8b0:400a:808::200e: seq=1 ttl=119 time=17.119 ms 64 bytes from 2607:f8b0:400a:808::200e: seq=2 ttl=119 time=17.281 ms 64 bytes from 2607:f8b0:400a:808::200e: seq=3 ttl=119 time=17.430 ms --- google.com ping statistics --- 4 packets transmitted, 4 packets received, 0% packet loss round-trip min/avg/max = 17.119/17.240/17.430 ms Step 3, Docker Compose + IPV6 At this point docker has support for IPV6, but since docker-compose generally creates a new network for each docker-compose.yml definition it won\u0026rsquo;t work as expected.\nThe big issue with docker-compose is that it seems IPV6 is not supported for any schema version higher than 2.1 (Current version is 3.7).\nHere is an equivalent version using IPV6 and docker-compose\nversion:\u0026#34;2.1\u0026#34;services:busy:image:busyboxcommand:ping6-c4google.comnetworks:- app_netnetworks:app_net:enable_ipv6:truedriver:bridgedriver_opts:com.docker.network.enable_ipv6:\u0026#34;true\u0026#34;ipam:driver:defaultconfig:- subnet:2001:3984:3989::/64gateway:2001:3984:3989::1Of course now we also need to update the iptable rules:\nsudo ip6tables -t nat -A POSTROUTING -s 2001:3984:3989::/64 ! -o docker0 -j MASQUERADE ANDDDDDD.. once you\u0026rsquo;re done you\u0026rsquo;ll have to remove them.\nsudo ip6tables -t nat -D POSTROUTING -s 2001:3984:3989::/64 ! -o docker0 -j MASQUERADE A few notes and warnings\n The network cannot clash with the existing docker0 network you gave the daemon. Version of the schema must not be higher than 2.1. Docker-compose Binary can be the latest version, but the schema has to be 2.1. You can read more info on the issue here. Step 4, Resolve NAT Issues Okay, so the way IPV4 works is that all traffic is masked so that if the host\u0026rsquo;s IP is 10.5.5.23 (for example) and we have 5 different containers all with their own individual addresses, let\u0026rsquo;s say 1.1.1.2-1.1.1.7 respectively. 
Any traffic that goes out is NATed and as far as the outside world and internet network is concerned the request came from 10.5.5.23.\nFor IPV6, The request is NOT NATed so all request are coming from the IPV6 docker address and if the firewall doesn\u0026rsquo;t explicitly allow that address then the request will be dropped.\nThis is especially troublesome for internal networks where we want to ensure we don\u0026rsquo;t allow external traffic in.\nAt this point we have 2 options.\n Allocate docker a range of IPv6 and update all firewall rules to allow traffic in from those IPs to the appropriate resources. This I find to be particularly annoying because docker in general is intended to be a transient resource. You create/destroy resources as needed to allow you to scale up and down. Having a permanent allocation of address in your network and having to subdivide that range across your multiple compose files can become a logistic nightmare.\nIPV6 NAT-ing In general this practice is frowned upon. I\u0026rsquo;m not a network expert so I will let you all make your decision on this but from a Developer perspective this solution is great. Though I do encourage you to read up more about this before just copy/pasting this in.\nI ran across this github project called docker-ipv6nat which solved all of my problems. Someone should really buy the author a beer.\nLet\u0026rsquo;s convert our busybox ping project to use this then I\u0026rsquo;ll explain everything it does.\nversion:\u0026#34;2.1\u0026#34;services:busy:image:busyboxcommand:ping6-c4google.comnetworks:- app_netipv6:image:robbertkl/ipv6natrestart:unless-stoppednetwork_mode:\u0026#34;host\u0026#34;privileged:truevolumes:- /var/run/docker.sock:/var/run/docker.sock:ro- /lib/modules:/lib/modules:ronetworks:beef:enable_ipv6:truedriver:bridgedriver_opts:com.docker.network.enable_ipv6:\u0026#34;true\u0026#34;ipam:driver:defaultconfig:- subnet:2001:3984:3989::/64This approach no longer requires you to create custom iptables rules AND it NATs ipv6 so the request will be accepted without having to update all firewall rules with your favorite IPv6 network. The project also cleans up after itself and removes any routing rules it creates.\nThis unblocked me personally and allowed me to have a docker container work as it used under IPV4.\nNow, is this ideal? It depends.\nFrom a networking perspective, it\u0026rsquo;s an anti-pattern because we\u0026rsquo;re doing doing things we shouldn\u0026rsquo;t be with IPV6 From a dev perspective it\u0026rsquo;s perfect.\nThe alternative though is to make things work as desired from a networking perspective and create an administration hell which makes docker have to be treated as if it was bare metal.\nFinal thoughts. For the love of god docker needs to update its IPV6 policy and the way their application supports IPV6. This is so overly complicated for no reason.\nI would love to hear everyone\u0026rsquo;s thoughts on this and if there are better ways of doing this, but for now for my own personal use I will use IPV6 NAT whenever and wherever i need IPV6 support.1G\nUnconfirmed Fix eLabFTW shared this fix with me. I haven\u0026rsquo;t had the time to really confirm it, but I thought I\u0026rsquo;d share the fix for everyone else\u0026rsquo;s benefit.\nI haven\u0026rsquo;t been able to get this to work on OS X and haven\u0026rsquo;t tested this on Linux yet, but should be added in for reference. 
It\u0026rsquo;s definitely worth exploring before following the guide in this article\n With docker 20.10.5 and docker-compose 1.28.6 you don\u0026rsquo;t need to use the ip6tables commands manually and docker can take care of doing the NAT properly (which is MUCH better!).\nFor that you need to have \u0026ldquo;experimental\u0026rdquo;: true in /etc/docker/daemon.json, along with \u0026ldquo;ip6tables\u0026rdquo;: true. Then restart docker service and check it has an ipv6 on docker0 bridge.\nI have found that for the network part in the docker-compose.yml file, you need to omit the \u0026ldquo;gateway\u0026rdquo; part (last line).\nAfter that, all should work nicely :D.\nSide note: if you\u0026rsquo;re using HAProxy, use bind :::80 v4v6. Nothing needs to be changed for exposed ports, it\u0026rsquo;ll listen on both.\n","tags":["docker","linux","opensource","kubernetes"], "section": "posts"},{"date":"1598400000", "url":"https://www.esamir.com/20/8/26/poor-mans-docker-deployment/", "title":"Poor Man's Docker Deployment", "summary":"SSL NGINX NGINX vhost config. Docker application Docker Systemd script Docker Check List. Final stage I have had several use cases where I want to deploy things in docker for multiple reasons, but ease of deployment is a big one. Low entry bar, less maintenance that makes docker deployment more appealing. I don\u0026rsquo;t have the resources to do a full K8 deployment and ended up with this pattern I wanted to share with others", "content":" SSL NGINX NGINX vhost config. Docker application Docker Systemd script Docker Check List. Final stage I have had several use cases where I want to deploy things in docker for multiple reasons, but ease of deployment is a big one. Low entry bar, less maintenance that makes docker deployment more appealing. I don\u0026rsquo;t have the resources to do a full K8 deployment and ended up with this pattern I wanted to share with others\nAssumptions:\n You are somewhat familiar with docker You have some exposure with docker-compose You know what systemd, init.d scripts are etc. You have some exposure to nginx or a similar Proxy. SSL Most services require some level of security. This is primarily focused towards web based services, so we\u0026rsquo;ll be using letsencrypt. I usually force all traffic through SSL, though you don\u0026rsquo;t have to.\nIf you\u0026rsquo;re not familiar with letsencrypt and certbot it\u0026rsquo;s the defacto way of getting a valid certificate of late (for free). There are other sources but they\u0026rsquo;re all paid services.\nI won\u0026rsquo;t go into the full details on setting this up, but I\u0026rsquo;m making the assumption that you have some kind of wildcard or host based SSL that are on your local file system under some variation of these paths:\n/etc/letsencrypt/live/myhost.org/fullchain.pem; /etc/letsencrypt/live/myhost.org/privkey.pem; If you did buy your own cert, I\u0026rsquo;m assuming you know how to get it in a format where nginx will accept them or are able to find the information through your god like google-fu.\nNGINX There are two approaches to this. They each have their drawbacks and advantages.\n Running NGINX in a container. The big advantage to this approach is that you don\u0026rsquo;t need to expose any ports except HTTPS and HTTP. Everything else is in the docker internal network.\nDownside, is that you\u0026rsquo;ll have to be using the same docker network for all services. You also will need to ensure a certain order of operation. 
Likely nginx needs to come up last in order to detect the running services. Or wait the timeout session for service discovery to work. I\u0026rsquo;ve always had better luck letting everything start first and starting the web server last.\nRunning NGINX on Host Downside: You will need to expose the services locally on various ports at the very least on localhost.\nUpside: You can simply start nginx as a system service and not worry about docker. They are disjoint and usually works fairly well.\nI\u0026rsquo;m using the second approach so I\u0026rsquo;ll mainly be exploring what needs to be done to get that working, but keep in mind that solution 1 is perfectly valid.\nNGINX vhost config. This will vary on your OS, but Debian bases hosts under /etc/nginx/sites-available you\u0026rsquo;ll need to create the config and have a symlink on /etc/nginx/sites-enabled/\nHere\u0026rsquo;s an example configuration:\nserver{# SSL configuration#listen443ssl;listen[::]:443ssl;access_log/var/log/nginx/APPNAME_access.log;error_log/var/log/nginx/APPNAME_error.log;sslon;ssl_certificate/etc/letsencrypt/live/appname.myhost.org/fullchain.pem;ssl_certificate_key/etc/letsencrypt/live/appname.myhost.org/privkey.pem;client_max_body_size50M;#log_format compression \u0026#39;$remote_addr - $remote_user [$time_local] \u0026#39;# \u0026#39;\u0026#34;$request\u0026#34; $status $bytes_sent \u0026#39;# \u0026#39;\u0026#34;$http_referer\u0026#34; \u0026#34;$http_user_agent\u0026#34; \u0026#34;$gzip_ratio\u0026#34;\u0026#39;;#access_log log/nixie_access.log compression;server_nameappname.myhost.org;location/{proxy_passhttp://localhost:8080;proxy_set_headerHost$host;proxy_set_headerX-Forwarded-For$remote_addr;proxy_set_headerX-Real-IP$remote_addr;}location=/favicon.ico{log_not_foundoff;access_logoff;}location=/robots.txt{allowall;log_not_foundoff;access_logoff;}location~*\\.(txt|log)${allow192.168.0.0/16;denyall;}location~\\..*/.*\\.php${return403;}# Block access to \u0026#34;hidden\u0026#34; files and directories whose names begin with a# period. This includes directories used by version control systems such# as Subversion or Git to store control files.location~(^|/)\\.{return403;}}server{listen80;listen[::]:80;server_nameappname.myhost.org;rewrite^https://$server_name$request_uri?permanent;}Here\u0026rsquo;s the big take away from this file. We are hosting a new application that\u0026rsquo;s exposing locally port 8080. 
Any request that comes in on appname.myhost.org will get redirected to localhost:8080 and we will return the response.\nThis will work great, but we do need to make sure that our docker stack is also running.\nDocker application I won\u0026rsquo;t spend too much time on this but let\u0026rsquo;s assume you have a wordpress application similar to the one in the example on their docker hub.\nThis is the config example pulled from that link:\nversion:\u0026#39;3.1\u0026#39;services:wordpress:image:wordpressrestart:alwaysports:- 8080:80environment:WORDPRESS_DB_HOST:dbWORDPRESS_DB_USER:exampleuserWORDPRESS_DB_PASSWORD:examplepassWORDPRESS_DB_NAME:exampledbvolumes:- wordpress:/var/www/htmldb:image:mysql:5.7restart:alwaysenvironment:MYSQL_DATABASE:exampledbMYSQL_USER:exampleuserMYSQL_PASSWORD:examplepassMYSQL_RANDOM_ROOT_PASSWORD:\u0026#39;1\u0026#39;volumes:- db:/var/lib/mysqlvolumes:wordpress:db:if everything works as expected when you start the application via docker-compose up -d http://localhost:8080 will become accessible.\nNow, what we need to do is to get it so our docker based application starts up each time our host restarts. It would be nice if we can manage a docker app like any other service installed.\nSo, we\u0026rsquo;re going to create a systemd start up script.\nDocker Systemd script I\u0026rsquo;m running all my docker apps as the docker_user with limited permissions.\n[Unit] Description = WordPress Blog After=docker.service Requires=docker.service [Service] Type=idle WorkingDirectory=/home/docker_user/blog ExecStart=/usr/bin/docker-compose up ExecStop=/usr/bin/docker-compose stop ExecReload =/usr/bin/docker-compose restart User=docker_user GUser=docker_user Restart=always RestartSec=3 RestartPreventExitStatus=0 TimeoutStopSec=10 [Install] WantedBy=multi-user.target let\u0026rsquo;s save this file in /etc/systemd/system/blog.service\nwe can enable the service on boot via:\nsudo systemctl enable blog.service You can also test this out manually by simply running:\nsudo service blog {start|stop|restart} At this point you should be able to get a response from:\nhttps://appname.myhost.org, granted the behavior and hostname will vary based on what you do end up running, but if you are running wordpress, You should see the Installation wizards come up.\nDocker Check List. It\u0026rsquo;s always a good practice to have restart: always Database backups are not configured here but you should have a script or cron that will create a datadump every so often. Final stage Ensure the nginx config is valid by using nginx -t and if it all checks out let\u0026rsquo;s restart the nginx server.\nsudo service nginx start At this point even if you have a power outage or someone does a sudo reboot all your services should come back up as expected.\nNaturally this still suffers from a single point of failure, but it\u0026rsquo;s much easier to manage IMO, then the typical bare metal deployments.\n","tags":["docker"], "section": "posts"},{"date":"1598054400", "url":"https://www.esamir.com/20/8/22/converting-raid-1-to-raid-5-on-linux-file-systems/", "title":"Converting Raid 1 to Raid 5 on Linux file systems", "summary":"Converting from Raid 1 to Raid 5 This assumes you have a functional Raid 1 and wish to convert it to a Raid 5.\nDisclaimer: At some point during this process I realized that I had a bad mother board. The reason my /dev/sdd1 failed wasn\u0026rsquo;t the drive, but the bus on the board. 
That being said, this is the unverified procedure.\nI\u0026rsquo;m running on Ubuntu 12.10 but you should be able to do this on any modern Linux distribution.", "content":"Converting from Raid 1 to Raid 5 This assumes you have a functional Raid 1 and wish to convert it to a Raid 5.\nDisclaimer: At some point during this process I realized that I had a bad mother board. The reason my /dev/sdd1 failed wasn\u0026rsquo;t the drive, but the bus on the board. That being said, this is the unverified procedure.\nI\u0026rsquo;m running on Ubuntu 12.10 but you should be able to do this on any modern Linux distribution.\n Stop all mdadm related tasks. sudo umount /media/data #of whatever mountpoint is appropriate. sudo /etc/init.d/mdadm stop sudo mdadm --stop /dev/md0 Change the raid layout This part is kind of scary, and I wouldn\u0026rsquo;t advice mounting the raid at this point. I especially didn\u0026rsquo;t like the fact that it looks like it\u0026rsquo;s overwriting my raid.. that made me nerveous, but it\u0026rsquo;s essentially restrucuting how data is stored, and putting a raid 5 mapping on 2 drives. ie. creating a degraded raid 5.\n#update this as appropriate. mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sda1 /dev/sdd1 WARNING: wait for this step to complete.. look at /prod/mdstats and wait for it to finish before proceeding.\nAdd a 3rd drive. #in my case I started with /dev/sdd1, added /dev/sda1 to create a raid 1, and then adding /dev/sdb1 as the final device. You don\u0026rsquo;t have to follow my convention. That made sense for my use case, since my dead drive was /dev/sdd you can simply start with /dev/sda1 and go alphabetical.\nmdadm --add /dev/md0 /dev/sdb1 mdadm --grow /dev/md0 --raid-devices=3 This part took around 15 hours for me.\nAt this point, I\u0026rsquo;d be okay with mounting my raid partition. Again, it\u0026rsquo;s safer not to\u0026hellip; but.. it\u0026rsquo;s won\u0026rsquo;t break the process if you do.\nExpand File System. At this point what we have is the equivalent of have a large hard drive, but a smaller partition on it. We need to grow the local file system.\nI\u0026rsquo;m covering 3 use cases.\na. LVM\nI\u0026rsquo;ve have had lvm on my raid before. Actually, I used to have raid + lukefs encryption + lvm. Too many layers though the performance isn\u0026rsquo;t as bad as you might expect.\nTODO: I have to look this up\u0026hellip; I\u0026rsquo;ll update this eventually.\nb. XFS\nxfs is a bit odd, you need to have the file system mounted in order to grow it.\nxfs_repair -n /dev/md0 #just to be safe. mount /dev/md0 /media/data\nxfs_growfs /media/data\nc. EXT3/4\ne2fsck -f /dev/md0 #check file system resize2fs /dev/md0 #grow file system Update fstab This should not need any changes, but just in case:\n/dev/md0 /media/data xfs defaults Update mdadm.conf sudo su - mdadm --detail --scan \u0026gt;\u0026gt; /etc/mdadm/mdadm.conf ##edit the file and remove the next to last line. ie. The command above appends the new mdadm config to your config file. So remove the previous raid 1 line. 
There should be a single line defining md0 which looks something like this:\nARRAY /dev/md/md0 metadata=1.2 UUID=0ec3c5aa:5cee600b:ef1e8f7d:09b20cc8 This is the line I removed:\n#ARRAY /dev/md/md0 metadata=0.90 UUID=bf8a2737:554e654c:c2eab133:b01f9710 In other words, assuming you only have 1 raid setup, your mdadm.conf should only have a single ARRAY configured.\nReferences:\n http://www.davelachapelle.ca/2008/07/25/converting-raid-1-to-raid-5/ ","tags":["linux","opensource","devops"], "section": "posts"}]