diff --git a/404.html b/404.html index 1de5ae125..4ed41eedf 100644 --- a/404.html +++ b/404.html @@ -1 +1 @@ - 404: Page not found | TotalDebug
Home 404: Page not found
404: Page not found
Cancel
+ 404: Page not found | TotalDebug
Home 404: Page not found
404: Page not found
Cancel
diff --git a/about/index.html b/about/index.html index 5d0995f6a..d6c61100d 100644 --- a/about/index.html +++ b/about/index.html @@ -1 +1 @@ - About | TotalDebug
Home About
About
Cancel

About

Hi, I’m Steve, a Full Stack Systems Engineer from Leeds, UK.

I created this website as a place to share articles related to my interests and findings from working as a Developer / Sys Admin / Solution Architect.

My interest in IT started when I was very young: my uncle worked for a large IT company and I was always fascinated by his work. I would build my own computers with him and eventually built them for friends and family too.

I started my career in IT straight from high school with an apprenticeship at Kodak, working in one of their datacenters. I then moved on to small IT resellers, and then to a small Leeds-based data center and hosting provider where I honed my skills and continued to expand my knowledge.

Now working for one of the world’s largest service providers, I spend my time on projects that ensure the business runs in the most efficient way possible by implementing automation tooling and improving processes.

The world always seems brighter when you’ve just made something that wasn’t there before. Neil Gaiman

As a hobby, I like to spend time on home automation and 3D printing projects, and of course spending time with my beautiful wife and children.

Thank you for reading!

Key Skills

Workflow Optimisation

90%

System & Product Architecture

80%

Project Management

50%

People Leader

40%

Programming Skills

HTML5/CSS3

90%

Python

75%

Typescript / Javascript

60%

Certifications

Full Stack Systems Engineer III

2020 — present @ Rackspace Technology

Responsible for the design, function and future of the business products and support tooling. Working closely within the functional team and with external stakeholders to qualify and document new products, both hardware and software.

Identifying areas of improvement both within the team and the wider organisation; designing, engineering, architecting and integrating tooling to reduce total cost, support customers and minimise human error.

Role:
  • Identify areas of improvement for existing or new tooling
  • Manage approved projects and implement within assigned timelines
  • Provide support and guidance to other engineers
  • Point of escalation for support teams and Full Stack Systems Engineers
Technologies: Python, PowerShell, Go, Bootstrap, GitHub Actions / CI/CD, Jira project management

Full Stack Systems Engineer II

2019 — 2020 @ Rackspace Technology

A small team of engineers set up to help standardise, train on, and implement new workflows or applications to improve efficiency within the business, as well as acting as an escalation point for support teams.

Role:
  • Improve business workflows and tooling to provide more efficient service
  • Independently manage assigned tasks and communicate with task stakeholders
Technologies: Python, PowerShell, Go, Bootstrap, GitHub Actions / CI/CD, Jira project management

Technical Support, Team Leader

2017 — 2019 @ Rackspace Technology

Responsible for leading a team of 13 Windows Engineers, providing 3rd line support to our customer base.

Role:
  • Management of the Windows Engineering team, including performance reviews and 1-to-1s
  • Planning of on-call rotas and shift patterns
  • Ticket queue reporting and ensuring SLAs are adhered to
Technologies: InfluxDB, Grafana, ServiceNow

Technical Lead

2015 — 2017 @ Datapipe (Acquired by Rackspace Technology)

Adapt was a Managed Service Provider, also providing Co-Location and internet circuits to customers.

Role:
  • Overall technical responsibility for a number of large enterprise businesses.
  • Work closely with the Account Director and Service Manager to co-ordinate teams and deliver the best possible service.
  • Track and progress problems and projects to enable continued improvement.
Achieved:
  • Worked with the company’s two largest customers to ensure retention after a period of issues; one customer ended up expanding their contract, adding an additional £1.2m as a result of this effort.
  • Provided a full CMDB device and relationship mapping to allow enhanced support and impact analysis.
  • Executed monthly usage analysis to provide feedback on potential savings or performance bottlenecks. This resulted in a major bottleneck being found with part of the infrastructure and mitigation being implemented.
  • Implemented a billing audit to ensure devices were being billed accurately each month.
  • Resolved some major problems that were causing poor customer performance.
Technologies: VMware vSphere, VMware vRealize, ServiceNow

Senior Hosting Engineer

2014 — 2015 @ Adapt (Acquired by Datapipe)

Adapt was a Managed Service Provider, also providing Co-Location and internet circuits to customers.

Role:
  • Architect and project manage deployment of customer environments.
  • Act as an escalation point for Junior engineers.
  • Improve technical documentation and procedures.
Achieved:
  • Architected a repeatable VMware environment with DR capabilities if required.
  • Fully automated deployment of the environment to enable the most efficient use of engineers’ time.
  • Continued development of internal IPAM for asset tracking and IP Management.
Technologies: Juniper, VMware vSphere, Xen Server, NetApp, Windows Server, Linux (CentOS / Ubuntu), DNS, PHP, MySQL

Hosting Engineer

2013 — 2014 @ Sleek Networks (Acquired by Adapt)

Sleek Networks was a data center and Managed Service Provider, hosting services for SME businesses across the UK; it was later acquired by Adapt.

Role:
  • Provide assistance in maintaining the data center, including failover testing for generators and mains power.
  • Provisioning of new cabinets with power, cabling and switching gear.
  • 1st and 2nd line helpdesk support for any customer queries or issues.
Technologies: Juniper SRX and EX, VMware vSphere, Xen Server, NetApp Storage & Backup, Windows Server, Linux (CentOS / Ubuntu), DNS

Senior IT Consultant

2007 — 2013 @ Galtec Solutions

A small Managed Service Provider, providing IT support, project management and hardware/software sales to our customers.

Role:
  • Architect environments for SME customers.
  • Deploy Virtualisation and Windows infrastructures.
  • Evolve the internal & customer products offered.
Achieved:
  • Designed & Implemented the first Cloud hosted Bloxx appliance in partnership with Bloxx.
  • Designed & Implemented an off-site backup solution for our managed customers.
  • Designed & Implemented custom SharePoint sites for businesses.
  • Implemented Microsoft Dynamics CRM.
Technologies: VMware vSphere, Veeam Backup & Replication, Windows Server 2003 / 2008, Windows Deployment Services, ManageEngine ServiceDesk Plus
+ About | TotalDebug
Home About
About
Cancel

About

Hi, I’m Steve, a Full Stack Systems Engineer from Leeds, UK.

I created this website as a place to share articles related to my interests and findings from working as a Developer / Sys Admin / Solution Architect.

My interest in IT started when I was very young: my uncle worked for a large IT company and I was always fascinated by his work. I would build my own computers with him and eventually built them for friends and family too.

I started my career in IT straight from high school with an apprenticeship at Kodak, working in one of their datacenters. I then moved on to small IT resellers, and then to a small Leeds-based data center and hosting provider where I honed my skills and continued to expand my knowledge.

Now working for one of the world’s largest service providers, I spend my time on projects that ensure the business runs in the most efficient way possible by implementing automation tooling and improving processes.

The world always seems brighter when you’ve just made something that wasn’t there before. Neil Gaiman

As a hobby, I like to spend time on home automation and 3D printing projects, and of course spending time with my beautiful wife and children.

Thank you for reading!

Key Skills

Workflow Optimisation

90%

System & Product Architecture

80%

Project Management

50%

People Leader

40%

Programming Skills

HTML5/CSS3

90%

Python

75%

Typescript / Javascript

60%

Certifications

Full Stack Systems Engineer III

2020 — present @ Rackspace Technology

Responsible for the design, function and future of the business products and support tooling. Working closely within the functional team and with external stakeholders to qualify and document new products, both hardware and software.

Identifying areas of improvement both within the team and the wider organisation; designing, engineering, architecting and integrating tooling to reduce total cost, support customers and minimise human error.

Role:
  • Identify areas of improvement for existing or new tooling
  • Manage approved projects and implement within assigned timelines
  • Provide support and guidance to other engineers
  • Point of escalation for support teams and Full Stack Systems Engineers
Technologies: Python, PowerShell, Go, Bootstrap, GitHub Actions / CI/CD, Jira project management

Full Stack Systems Engineer II

2019 — 2020 @ Rackspace Technology

A small team of engineers set up to help standardise, train on, and implement new workflows or applications to improve efficiency within the business, as well as acting as an escalation point for support teams.

Role:
  • Improve business workflows and tooling to provide more efficient service
  • Independently manage assigned tasks and communicate with task stakeholders
Technologies: Python, PowerShell, Go, Bootstrap, GitHub Actions / CI/CD, Jira project management

Technical Support, Team Leader

2017 — 2019 @ Rackspace Technology

Responsible for leading a team of 13 Windows Engineers, providing 3rd line support to our customer base.

Role:
  • Management of the Windows Engineering team, including performance reviews and 1-to-1s
  • Planning of on-call rotas and shift patterns
  • Ticket queue reporting and ensuring SLAs are adhered to
Technologies: InfluxDB, Grafana, ServiceNow

Technical Lead

2015 — 2017 @ Datapipe (Acquired by Rackspace Technology)

Adapt was a Managed Service Provider, also providing Co-Location and internet circuits to customers.

Role:
  • Overall technical responsibility for a number of large enterprise businesses.
  • Work closely with the Account Director and Service Manager to co-ordinate teams and deliver the best possible service.
  • Track and progress problems and projects to enable continued improvement.
Achieved:
  • Worked with the company’s two largest customers to ensure retention after a period of issues; one customer ended up expanding their contract, adding an additional £1.2m as a result of this effort.
  • Provided a full CMDB device and relationship mapping to allow enhanced support and impact analysis.
  • Executed monthly usage analysis to provide feedback on potential savings or performance bottlenecks. This resulted in a major bottleneck being found with part of the infrastructure and mitigation being implemented.
  • Implemented a billing audit to ensure devices were being billed accurately each month.
  • Resolved some major problems that were causing poor customer performance.
Technologies: VMware vSphere, VMware vRealize, ServiceNow

Senior Hosting Engineer

2014 — 2015 @ Adapt (Acquired by Datapipe)

Adapt was a Managed Service Provider, also providing Co-Location and internet circuits to customers.

Role:
  • Architect and project manage deployment of customer environments.
  • Act as an escalation point for Junior engineers.
  • Improve technical documentation and procedures.
Achieved:
  • Architected a repeatable VMware environment with DR capabilities if required.
  • Fully automated deployment of the environment to enable the most efficient use of engineers’ time.
  • Continued development of internal IPAM for asset tracking and IP Management.
Technologies: Juniper, VMware vSphere, Xen Server, NetApp, Windows Server, Linux (CentOS / Ubuntu), DNS, PHP, MySQL

Hosting Engineer

2013 — 2014 @ Sleek Networks (Acquired by Adapt)

Sleek Networks was a data center and Managed Service Provider, hosting services for SME businesses across the UK; it was later acquired by Adapt.

Role:
  • Provide assistance in maintaining the data center, including failover testing for generators and mains power.
  • Provisioning of new cabinets with power, cabling and switching gear.
  • 1st and 2nd line helpdesk support for any customer queries or issues.
Technologies: Juniper SRX and EX, VMware vSphere, Xen Server, NetApp Storage & Backup, Windows Server, Linux (CentOS / Ubuntu), DNS

Senior IT Consultant

2007 — 2013 @ Galtec Solutions

A small Managed Service Provider, providing IT support, project management and hardware/software sales to our customers.

Role:
  • Architect environments for SME customers.
  • Deploy Virtualisation and Windows infrastructures.
  • Evolve the internal & customer products offered.
Achieved:
  • Designed & Implemented the first Cloud hosted Bloxx appliance in partnership with Bloxx.
  • Designed & Implemented an off-site backup solution for our managed customers.
  • Designed & Implemented custom SharePoint sites for businesses.
  • Implemented Microsoft Dynamics CRM.
Technologies: VMware vSphere, Veeam Backup & Replication, Windows Server 2003 / 2008, Windows Deployment Services, ManageEngine ServiceDesk Plus
diff --git a/archives/index.html b/archives/index.html index 56945e988..14569359c 100644 --- a/archives/index.html +++ b/archives/index.html @@ -1 +1 @@ - Archives | TotalDebug
Home Archives
Archives
Cancel

Archives

2023
2022
2021
2020
2019
2018
2017
2016
2015
2014
2012
2011
+ Archives | TotalDebug
Home Archives
Archives
Cancel

Archives

2023
2022
2021
2020
2019
2018
2017
2016
2015
2014
2012
2011
diff --git a/assets/css/style.css.map b/assets/css/style.css.map index bed6c8dcc..c590f0171 100644 --- a/assets/css/style.css.map +++ b/assets/css/style.css.map @@ -3,25 +3,25 @@ "file": "style.css", "sources": [ "style.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/jekyll-theme-chirpy.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/colors/light-typography.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/colors/dark-typography.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/addon/variables.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/variables-hook.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/addon/module.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/addon/syntax.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/colors/light-syntax.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/colors/dark-syntax.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/addon/commons.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/layout/home.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/layout/post.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/layout/tags.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/layout/archives.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/layout/categories.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/layout/category-tag.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/layout/works.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/elements/skills.scss", - "../../../../../tmp/jekyll-remote-theme-20230808-1751-drnfmi/_sass/elements/timeline.scss" + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/jekyll-theme-chirpy.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/colors/light-typography.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/colors/dark-typography.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/addon/variables.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/variables-hook.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/addon/module.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/addon/syntax.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/colors/light-syntax.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/colors/dark-syntax.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/addon/commons.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/layout/home.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/layout/post.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/layout/tags.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/layout/archives.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/layout/categories.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/layout/category-tag.scss", + 
"../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/layout/works.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/elements/skills.scss", + "../../../../../tmp/jekyll-remote-theme-20230808-1787-7tmfai/_sass/elements/timeline.scss" ], "sourcesContent": [ "@import 'jekyll-theme-chirpy';\n\n/* append your custom style below */\n", diff --git a/categories/3d-printer/index.html b/categories/3d-printer/index.html index f4c7676a2..a2dc50cd2 100644 --- a/categories/3d-printer/index.html +++ b/categories/3d-printer/index.html @@ -1 +1 @@ - 3D Printer | TotalDebug
Home Categories 3D Printer
Category
Cancel
+ 3D Printer | TotalDebug
Home Categories 3D Printer
Category
Cancel
diff --git a/categories/authentication/index.html b/categories/authentication/index.html index 80590cef0..832b80cc4 100644 --- a/categories/authentication/index.html +++ b/categories/authentication/index.html @@ -1 +1 @@ - Authentication | TotalDebug
Home Categories Authentication
Category
Cancel
+ Authentication | TotalDebug
Home Categories Authentication
Category
Cancel
diff --git a/categories/automation/index.html b/categories/automation/index.html index c4998f904..082ba82a1 100644 --- a/categories/automation/index.html +++ b/categories/automation/index.html @@ -1 +1 @@ - Automation | TotalDebug
Home Categories Automation
Category
Cancel
+ Automation | TotalDebug
Home Categories Automation
Category
Cancel
diff --git a/categories/axes-calibration/index.html b/categories/axes-calibration/index.html index c96e79278..3aa9c2568 100644 --- a/categories/axes-calibration/index.html +++ b/categories/axes-calibration/index.html @@ -1 +1 @@ - Axes Calibration | TotalDebug
Home Categories Axes Calibration
Category
Cancel
+ Axes Calibration | TotalDebug
Home Categories Axes Calibration
Category
Cancel
diff --git a/categories/backups/index.html b/categories/backups/index.html index f8a202f8f..a61ebeee4 100644 --- a/categories/backups/index.html +++ b/categories/backups/index.html @@ -1 +1 @@ - Backups | TotalDebug
Home Categories Backups
Category
Cancel
+ Backups | TotalDebug
Home Categories Backups
Category
Cancel
diff --git a/categories/centos/index.html b/categories/centos/index.html index 6aed2e68d..d545ed6e2 100644 --- a/categories/centos/index.html +++ b/categories/centos/index.html @@ -1 +1 @@ - CentOS | TotalDebug
Home Categories CentOS
Category
Cancel
+ CentOS | TotalDebug
Home Categories CentOS
Category
Cancel
diff --git a/categories/cisco/index.html b/categories/cisco/index.html index 9ebddd6c4..7a19e28b3 100644 --- a/categories/cisco/index.html +++ b/categories/cisco/index.html @@ -1 +1 @@ - Cisco | TotalDebug
Home Categories Cisco
Category
Cancel
+ Cisco | TotalDebug
Home Categories Cisco
Category
Cancel
diff --git a/categories/code-quality/index.html b/categories/code-quality/index.html index 71f9031a4..bb2eee65b 100644 --- a/categories/code-quality/index.html +++ b/categories/code-quality/index.html @@ -1 +1 @@ - Code Quality | TotalDebug
Home Categories Code Quality
Category
Cancel
+ Code Quality | TotalDebug
Home Categories Code Quality
Category
Cancel
diff --git a/categories/containers/index.html b/categories/containers/index.html index bd1cdbd74..91f710a30 100644 --- a/categories/containers/index.html +++ b/categories/containers/index.html @@ -1 +1 @@ - Containers | TotalDebug
Home Categories Containers
Category
Cancel
+ Containers | TotalDebug
Home Categories Containers
Category
Cancel
diff --git a/categories/continuous-integration/index.html b/categories/continuous-integration/index.html index edefd1c25..c1f60dca2 100644 --- a/categories/continuous-integration/index.html +++ b/categories/continuous-integration/index.html @@ -1 +1 @@ - Continuous Integration | TotalDebug
Home Categories Continuous Integration
Category
Cancel
+ Continuous Integration | TotalDebug
Home Categories Continuous Integration
Category
Cancel
diff --git a/categories/database/index.html b/categories/database/index.html index c062d08bb..954966b27 100644 --- a/categories/database/index.html +++ b/categories/database/index.html @@ -1 +1 @@ - Database | TotalDebug
Home Categories Database
Category
Cancel
+ Database | TotalDebug
Home Categories Database
Category
Cancel
diff --git a/categories/development/index.html b/categories/development/index.html index 3700b1629..a41672469 100644 --- a/categories/development/index.html +++ b/categories/development/index.html @@ -1 +1 @@ - Development | TotalDebug
Home Categories Development
Category
Cancel
+ Development | TotalDebug
Home Categories Development
Category
Cancel
diff --git a/categories/docker/index.html b/categories/docker/index.html index 049a8c131..26485b05c 100644 --- a/categories/docker/index.html +++ b/categories/docker/index.html @@ -1 +1 @@ - Docker | TotalDebug
Home Categories Docker
Category
Cancel
+ Docker | TotalDebug
Home Categories Docker
Category
Cancel
diff --git a/categories/ender3/index.html b/categories/ender3/index.html index 0ebf20d53..ebac16ed5 100644 --- a/categories/ender3/index.html +++ b/categories/ender3/index.html @@ -1 +1 @@ - Ender3 | TotalDebug
Home Categories Ender3
Category
Cancel
+ Ender3 | TotalDebug
Home Categories Ender3
Category
Cancel
diff --git a/categories/exchange/index.html b/categories/exchange/index.html index da745bb45..de3a0625c 100644 --- a/categories/exchange/index.html +++ b/categories/exchange/index.html @@ -1 +1 @@ - Exchange | TotalDebug
Home Categories Exchange
Category
Cancel
+ Exchange | TotalDebug
Home Categories Exchange
Category
Cancel
diff --git a/categories/git/index.html b/categories/git/index.html index f70e28b11..da810e7a9 100644 --- a/categories/git/index.html +++ b/categories/git/index.html @@ -1 +1 @@ - Git | TotalDebug
Home Categories Git
Category
Cancel
+ Git | TotalDebug
Home Categories Git
Category
Cancel
diff --git a/categories/gpo/index.html b/categories/gpo/index.html index bf516f382..609a5d529 100644 --- a/categories/gpo/index.html +++ b/categories/gpo/index.html @@ -1 +1 @@ - GPO | TotalDebug
Home Categories GPO
Category
Cancel
+ GPO | TotalDebug
Home Categories GPO
Category
Cancel
diff --git a/categories/home-automation/index.html b/categories/home-automation/index.html index 13cafeb65..72f1eea80 100644 --- a/categories/home-automation/index.html +++ b/categories/home-automation/index.html @@ -1 +1 @@ - Home Automation | TotalDebug
Home Categories Home Automation
Category
Cancel
+ Home Automation | TotalDebug
Home Categories Home Automation
Category
Cancel
diff --git a/categories/how-to/index.html b/categories/how-to/index.html index 6766a6b0a..bf2fa9576 100644 --- a/categories/how-to/index.html +++ b/categories/how-to/index.html @@ -1 +1 @@ - how-to | TotalDebug
Home Categories how-to
Category
Cancel
+ how-to | TotalDebug
Home Categories how-to
Category
Cancel
diff --git a/categories/index.html b/categories/index.html index b75c57742..b83cd5706 100644 --- a/categories/index.html +++ b/categories/index.html @@ -1 +1 @@ - Categories | TotalDebug
Home Categories
Categories
Cancel

Categories

3D Printer 2 categories , 2 posts
Automation 1 category , 4 posts
Containers 1 category , 3 posts
Database 1 category , 1 post
Development 1 category , 1 post
Docker 2 categories , 2 posts
Git 1 post
Home Automation 1 category , 2 posts
Linux 6 categories , 11 posts
Microsoft 2 categories , 8 posts
Musings 2 posts
PHP 2 posts
Python 1 category , 3 posts
Ubiquiti 1 post
Virtualisation 2 categories , 5 posts
Website 2 posts
how-to 1 post
+ Categories | TotalDebug
Home Categories
Categories
Cancel

Categories

3D Printer 2 categories , 2 posts
Automation 1 category , 4 posts
Containers 1 category , 3 posts
Database 1 category , 1 post
Development 1 category , 1 post
Docker 2 categories , 2 posts
Git 1 post
Home Automation 1 category , 2 posts
Linux 6 categories , 11 posts
Microsoft 2 categories , 8 posts
Musings 2 posts
PHP 2 posts
Python 1 category , 3 posts
Ubiquiti 1 post
Virtualisation 2 categories , 5 posts
Website 2 posts
how-to 1 post
diff --git a/categories/linux/index.html b/categories/linux/index.html index 98ae32ae2..db1e50b33 100644 --- a/categories/linux/index.html +++ b/categories/linux/index.html @@ -1 +1 @@ - Linux | TotalDebug
Home Categories Linux
Category
Cancel
+ Linux | TotalDebug
Home Categories Linux
Category
Cancel
diff --git a/categories/microsoft/index.html b/categories/microsoft/index.html index 8c8cd40d9..2aa06e74e 100644 --- a/categories/microsoft/index.html +++ b/categories/microsoft/index.html @@ -1 +1 @@ - Microsoft | TotalDebug
Home Categories Microsoft
Category
Cancel
+ Microsoft | TotalDebug
Home Categories Microsoft
Category
Cancel
diff --git a/categories/migration/index.html b/categories/migration/index.html index 375a3601e..25964b49e 100644 --- a/categories/migration/index.html +++ b/categories/migration/index.html @@ -1 +1 @@ - Migration | TotalDebug
Home Categories Migration
Category
Cancel
+ Migration | TotalDebug
Home Categories Migration
Category
Cancel
diff --git a/categories/musings/index.html b/categories/musings/index.html index 20d6d93cd..a8c5fd1ef 100644 --- a/categories/musings/index.html +++ b/categories/musings/index.html @@ -1 +1 @@ - Musings | TotalDebug
Home Categories Musings
Category
Cancel
+ Musings | TotalDebug
Home Categories Musings
Category
Cancel
diff --git a/categories/networking/index.html b/categories/networking/index.html index c4bc19d54..30e19977a 100644 --- a/categories/networking/index.html +++ b/categories/networking/index.html @@ -1 +1 @@ - Networking | TotalDebug
Home Categories Networking
Category
Cancel
+ Networking | TotalDebug
Home Categories Networking
Category
Cancel
diff --git a/categories/node-red/index.html b/categories/node-red/index.html index f4d8797aa..00914351e 100644 --- a/categories/node-red/index.html +++ b/categories/node-red/index.html @@ -1 +1 @@ - Node-RED | TotalDebug
Home Categories Node-RED
Category
Cancel
+ Node-RED | TotalDebug
Home Categories Node-RED
Category
Cancel
diff --git a/categories/overlay2/index.html b/categories/overlay2/index.html index ea7b0b580..617da00a8 100644 --- a/categories/overlay2/index.html +++ b/categories/overlay2/index.html @@ -1 +1 @@ - Overlay2 | TotalDebug
Home Categories Overlay2
Category
Cancel
+ Overlay2 | TotalDebug
Home Categories Overlay2
Category
Cancel
diff --git a/categories/php/index.html b/categories/php/index.html index ff65aa703..c56f1f169 100644 --- a/categories/php/index.html +++ b/categories/php/index.html @@ -1 +1 @@ - PHP | TotalDebug
Home Categories PHP
Category
Cancel
+ PHP | TotalDebug
Home Categories PHP
Category
Cancel
diff --git a/categories/proxmox/index.html b/categories/proxmox/index.html index 3213043c2..8e3606cc4 100644 --- a/categories/proxmox/index.html +++ b/categories/proxmox/index.html @@ -1 +1 @@ - Proxmox | TotalDebug
Home Categories Proxmox
Category
Cancel
+ Proxmox | TotalDebug
Home Categories Proxmox
Category
Cancel
diff --git a/categories/python/index.html b/categories/python/index.html index 1f05bb862..af9814466 100644 --- a/categories/python/index.html +++ b/categories/python/index.html @@ -1 +1 @@ - Python | TotalDebug
Home Categories Python
Category
Cancel
+ Python | TotalDebug
Home Categories Python
Category
Cancel
diff --git a/categories/security/index.html b/categories/security/index.html index 17a395cf6..16fac8f1d 100644 --- a/categories/security/index.html +++ b/categories/security/index.html @@ -1 +1 @@ - Security | TotalDebug
Home Categories Security
Category
Cancel
+ Security | TotalDebug
Home Categories Security
Category
Cancel
diff --git a/categories/teamspeak/index.html b/categories/teamspeak/index.html index 5b83bbbf3..0f1d03240 100644 --- a/categories/teamspeak/index.html +++ b/categories/teamspeak/index.html @@ -1 +1 @@ - Teamspeak | TotalDebug
Home Categories Teamspeak
Category
Cancel
+ Teamspeak | TotalDebug
Home Categories Teamspeak
Category
Cancel
diff --git a/categories/terminal-services/index.html b/categories/terminal-services/index.html index 8850aa707..0722a8cad 100644 --- a/categories/terminal-services/index.html +++ b/categories/terminal-services/index.html @@ -1 +1 @@ - Terminal Services | TotalDebug
Home Categories Terminal Services
Category
Cancel
+ Terminal Services | TotalDebug
Home Categories Terminal Services
Category
Cancel
diff --git a/categories/ubiquiti/index.html b/categories/ubiquiti/index.html index 608d92d60..0217b6cde 100644 --- a/categories/ubiquiti/index.html +++ b/categories/ubiquiti/index.html @@ -1 +1 @@ - Ubiquiti | TotalDebug
Home Categories Ubiquiti
Category
Cancel
+ Ubiquiti | TotalDebug
Home Categories Ubiquiti
Category
Cancel
diff --git a/categories/unifi/index.html b/categories/unifi/index.html index a05a3aaf6..c1cfb7518 100644 --- a/categories/unifi/index.html +++ b/categories/unifi/index.html @@ -1 +1 @@ - Unifi | TotalDebug
Home Categories Unifi
Category
Cancel
+ Unifi | TotalDebug
Home Categories Unifi
Category
Cancel
diff --git a/categories/virtualisation/index.html b/categories/virtualisation/index.html index 9203a72da..276e01002 100644 --- a/categories/virtualisation/index.html +++ b/categories/virtualisation/index.html @@ -1 +1 @@ - Virtualisation | TotalDebug
Home Categories Virtualisation
Category
Cancel
+ Virtualisation | TotalDebug
Home Categories Virtualisation
Category
Cancel
diff --git a/categories/vmware/index.html b/categories/vmware/index.html index 0d94595c1..0c7125cf4 100644 --- a/categories/vmware/index.html +++ b/categories/vmware/index.html @@ -1 +1 @@ - VMware | TotalDebug
Home Categories VMware
Category
Cancel
+ VMware | TotalDebug
Home Categories VMware
Category
Cancel
diff --git a/categories/website/index.html b/categories/website/index.html index 7f1c7e540..5731b3897 100644 --- a/categories/website/index.html +++ b/categories/website/index.html @@ -1 +1 @@ - Website | TotalDebug
Home Categories Website
Category
Cancel
+ Website | TotalDebug
Home Categories Website
Category
Cancel
diff --git a/categories/windows/index.html b/categories/windows/index.html index 23ea32d81..c410c38cc 100644 --- a/categories/windows/index.html +++ b/categories/windows/index.html @@ -1 +1 @@ - Windows | TotalDebug
Home Categories Windows
Category
Cancel
+ Windows | TotalDebug
Home Categories Windows
Category
Cancel
diff --git a/categories/wireless/index.html b/categories/wireless/index.html index d7e6a5960..a37bf2457 100644 --- a/categories/wireless/index.html +++ b/categories/wireless/index.html @@ -1 +1 @@ - Wireless | TotalDebug
Home Categories Wireless
Category
Cancel
+ Wireless | TotalDebug
Home Categories Wireless
Category
Cancel
diff --git a/feed.xml b/feed.xml index 7e636c321..cce60e603 100644 --- a/feed.xml +++ b/feed.xml @@ -1 +1 @@ - https://totaldebug.uk/TotalDebugSteven Marks. Dev blog. Linux, workflow optimisation, programming, software development, Python, Typescript. 2023-08-08T09:29:11+01:00 Steven Marks https://totaldebug.uk/ Jekyll © 2023 Steven Marks /assets/img/favicons/favicon.ico /assets/img/favicons/favicon-96x96.png Add series links to Jekyll posts2023-06-09T09:06:40+01:00 2023-08-08T09:20:30+01:00 https://totaldebug.uk/posts/jekyll-post-series-links/ Steven Marks Creating blog posts for my website I sometimes find that I want top create multiple articles as part of a series, usually because I have done some research and got to a stage that makes sense to have an article to itself, something like my recent post on Proxmox Template with Cloud Image and Cloud Init. Rather than having to manually link to other articles related to the series, I thought it w... Last4Solar - My solar nightmare!2023-06-02T22:13:35+01:00 2023-07-29T20:09:20+01:00 https://totaldebug.uk/posts/last4solar-my-solar-nightmare/ Steven Marks At the end of 2022 I came into some inheritance, with the massive energy price increase in the UK I decided to spend this money on a solar and battery installation on my house. With the decreasing price of solar systems and the increase in price, the return on investment is getting smaller and making the option much more reasonable. Choosing a company for the installation I spent a long time... Automating deployments using Terraform with Proxmox and ansible2023-05-06T10:03:29+01:00 2023-08-08T09:20:30+01:00 https://totaldebug.uk/posts/automating-proxmox-with-terraform-ansible/ Steven Marks Over the years my home lab has grown and become more and more difficult to maintain, especially because some servers I build and forget as they function so well. I have found recently though that moving to newer versions of operating systems can be difficult for the servers that I cant easily containerise at the moment. For this reason I have moved over to using Terraform with Proxmox and ans... Use Python pandas NOW for your big datasets2023-03-29T17:00:00+01:00 2023-03-29T17:00:00+01:00 https://totaldebug.uk/posts/use-python-pandas-now/ Steven Marks Over the past few years I have been working on processing large analytical data sets requiring various manipulations to produce statistics for analysis and business improvement. I quickly found that processing data of this size was slow, some taking over 11 hours to process which would only get worse as the data grew. Most of the processing required multiple nested for loops and addition of c... How I host this site2023-02-25T19:22:00+00:00 2023-02-25T21:51:13+00:00 https://totaldebug.uk/posts/how-i-host-this-site/ Steven Marks My site isn’t anything special, but I thought I’d like to share how I create and host things for others who may be interested in sharing their own words with a very simple and easy to maintain structure. Motivation for hosting this site I have hosted a blog site in some form for the past 10+ years. The idea being to share my experience with others and hopefully help others with some of the is... + https://totaldebug.uk/TotalDebugSteven Marks. Dev blog. Linux, workflow optimisation, programming, software development, Python, Typescript. 
2023-08-08T09:43:21+01:00 Steven Marks https://totaldebug.uk/ Jekyll © 2023 Steven Marks /assets/img/favicons/favicon.ico /assets/img/favicons/favicon-96x96.png Add series links to Jekyll posts2023-06-09T09:06:40+01:00 2023-08-08T09:20:30+01:00 https://totaldebug.uk/posts/jekyll-post-series-links/ Steven Marks Creating blog posts for my website I sometimes find that I want top create multiple articles as part of a series, usually because I have done some research and got to a stage that makes sense to have an article to itself, something like my recent post on Proxmox Template with Cloud Image and Cloud Init. Rather than having to manually link to other articles related to the series, I thought it w... Last4Solar - My solar nightmare!2023-06-02T22:13:35+01:00 2023-07-29T20:09:20+01:00 https://totaldebug.uk/posts/last4solar-my-solar-nightmare/ Steven Marks At the end of 2022 I came into some inheritance, with the massive energy price increase in the UK I decided to spend this money on a solar and battery installation on my house. With the decreasing price of solar systems and the increase in price, the return on investment is getting smaller and making the option much more reasonable. Choosing a company for the installation I spent a long time... Automating deployments using Terraform with Proxmox and ansible2023-05-06T10:03:29+01:00 2023-08-08T09:20:30+01:00 https://totaldebug.uk/posts/automating-proxmox-with-terraform-ansible/ Steven Marks Over the years my home lab has grown and become more and more difficult to maintain, especially because some servers I build and forget as they function so well. I have found recently though that moving to newer versions of operating systems can be difficult for the servers that I cant easily containerise at the moment. For this reason I have moved over to using Terraform with Proxmox and ans... Use Python pandas NOW for your big datasets2023-03-29T17:00:00+01:00 2023-03-29T17:00:00+01:00 https://totaldebug.uk/posts/use-python-pandas-now/ Steven Marks Over the past few years I have been working on processing large analytical data sets requiring various manipulations to produce statistics for analysis and business improvement. I quickly found that processing data of this size was slow, some taking over 11 hours to process which would only get worse as the data grew. Most of the processing required multiple nested for loops and addition of c... How I host this site2023-02-25T19:22:00+00:00 2023-02-25T21:51:13+00:00 https://totaldebug.uk/posts/how-i-host-this-site/ Steven Marks My site isn’t anything special, but I thought I’d like to share how I create and host things for others who may be interested in sharing their own words with a very simple and easy to maintain structure. Motivation for hosting this site I have hosted a blog site in some form for the past 10+ years. The idea being to share my experience with others and hopefully help others with some of the is... diff --git a/index.html b/index.html index 2ea12d8e8..fc84fa59d 100644 --- a/index.html +++ b/index.html @@ -1 +1 @@ - TotalDebug
Home
TotalDebug
Cancel

Add series links to Jekyll posts

Creating blog posts for my website I sometimes find that I want to create multiple articles as part of a series, usually because I have done some research and got to a stage that makes sense to ha...

Last4Solar - My solar nightmare!

At the end of 2022 I came into some inheritance, with the massive energy price increase in the UK I decided to spend this money on a solar and battery installation on my house. With the decreasing...

Automating deployments using Terraform with Proxmox and ansible

Over the years my home lab has grown and become more and more difficult to maintain, especially because some servers I build and forget as they function so well. I have found recently though that ...

Use Python pandas NOW for your big datasets

Over the past few years I have been working on processing large analytical data sets requiring various manipulations to produce statistics for analysis and business improvement. I quickly found th...

How I host this site

My site isn’t anything special, but I thought I’d like to share how I create and host things for others who may be interested in sharing their own words with a very simple and easy to maintain stru...

Home Assistant medication notification using Node-RED

For around 4 years I have had to take medication for Rheumatoid Arthritis once every two weeks, I always forget when I last took the medication and end up skipping which causes me pain. Due to thi...

Creating a standalone zigbee2mqtt hub with alpine linux

I have begun sorting out my smart home again; I let it run to ruin a year or so ago and now that I’m getting solar installed I wanted to increase my automation to make life easier and utilise my solar m...

Preview Image

Configuring Homer Dashboard

In my last article I talked about how to set up Homer dashboard with Docker; now I will walk through some of the features and how to use them. Main Features Some of Homer’s main features are: Y...

Preview Image

Homer dashboard with Docker

Recently I have decided to get my home network in order; one of the things I realised was that I spend a lot of time trying to remember the IP addresses or URLs for services within my home, especia...

Preview Image

Proxmox Template with Cloud Image and Cloud Init

Updated to latest Ubuntu image & Added enable for qemu service Using Cloud images and Cloud init with Proxmox is the quickest, most efficient way to deploy servers at this time. Cloud imag...

+ TotalDebug
Home
TotalDebug
Cancel

Add series links to Jekyll posts

Creating blog posts for my website I sometimes find that I want to create multiple articles as part of a series, usually because I have done some research and got to a stage that makes sense to ha...

Last4Solar - My solar nightmare!

At the end of 2022 I came into some inheritance, with the massive energy price increase in the UK I decided to spend this money on a solar and battery installation on my house. With the decreasing...

Automating deployments using Terraform with Proxmox and ansible

Over the years my home lab has grown and become more and more difficult to maintain, especially because some servers I build and forget as they function so well. I have found recently though that ...

Use Python pandas NOW for your big datasets

Over the past few years I have been working on processing large analytical data sets requiring various manipulations to produce statistics for analysis and business improvement. I quickly found th...

How I host this site

My site isn’t anything special, but I thought I’d like to share how I create and host things for others who may be interested in sharing their own words with a very simple and easy to maintain stru...

Home Assistant medication notification using Node-RED

For around 4 years I have had to take medication for Rheumatoid Arthritis once every two weeks, I always forget when I last took the medication and end up skipping which causes me pain. Due to thi...

Creating a standalone zigbee2mqtt hub with alpine linux

I have begun sorting out my smart home again; I let it run to ruin a year or so ago and now that I’m getting solar installed I wanted to increase my automation to make life easier and utilise my solar m...

Preview Image

Configuring Homer Dashboard

In my last article I talked about how to set up Homer dashboard with Docker; now I will walk through some of the features and how to use them. Main Features Some of Homer’s main features are: Y...

Preview Image

Homer dashboard with Docker

Recently I have decided to get my home network in order; one of the things I realised was that I spend a lot of time trying to remember the IP addresses or URLs for services within my home, especia...

Preview Image

Proxmox Template with Cloud Image and Cloud Init

Updated to latest Ubuntu image & Added enable for qemu service Using Cloud images and Cloud init with Proxmox is the quickest, most efficient way to deploy servers at this time. Cloud imag...

diff --git a/junos-monitor-log-files/index.html b/junos-monitor-log-files/index.html index 7336dd80c..0692ff041 100644 --- a/junos-monitor-log-files/index.html +++ b/junos-monitor-log-files/index.html @@ -1 +1 @@ - JUNOS: Monitor Log Files | TotalDebug
Home JUNOS: Monitor Log Files
Post
Cancel

JUNOS: Monitor Log Files

1510765109

When working with JUNOS switches and similar devices, you may want to monitor the logs over a period of time without reloading them every few minutes and scrolling to the bottom.

The following few commands show you how to do this.

In order to start the monitoring run the following command:

monitor start <log-file-name>

Here is an example command:

monitor start messages

Any changes to the log file will automatically be posted to your screen.

If you want to filter the logs to only show records with certain words then use the following command:

monitor start messages | match error

In order to stop the logs:

monitor stop
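
If you are monitoring several log files at once, a couple of related commands are worth knowing (a quick sketch based on the standard Junos CLI; check the command reference for your release):

monitor list

monitor stop messages

The first lists the files currently being monitored in your session, and the second stops monitoring a single named file rather than stopping everything.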

Hopefully this article will assist you in viewing your logs with more ease.

This post is licensed under CC BY 4.0 by the author.

CentOS 6/7 IPSec/L2TP VPN client to UniFi USG L2TP Server

Install, Configure and add a repository with Git on CentOS 7

+ JUNOS: Monitor Log Files | TotalDebug
Home JUNOS: Monitor Log Files
Post
Cancel

JUNOS: Monitor Log Files

1510765109

When working with JUNOS switches and similar devices, you may want to monitor the logs over a period of time without reloading them every few minutes and scrolling to the bottom.

The following few commands show you how to do this.

In order to start the monitoring run the following command:

monitor start <log-file-name>

Here is an example command:

monitor start messages

Any changes to the log file will automatically be posted to your screen.

If you want to filter the logs to only show records with certain words then use the following command:

monitor start messages | match error

In order to stop the logs:

monitor stop
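
If you are monitoring several log files at once, a couple of related commands are worth knowing (a quick sketch based on the standard Junos CLI; check the command reference for your release):

monitor list

monitor stop messages

The first lists the files currently being monitored in your session, and the second stops monitoring a single named file rather than stopping everything.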

Hopefully this article will assist you in viewing your logs with more ease.

This post is licensed under CC BY 4.0 by the author.

CentOS 6/7 IPSec/L2TP VPN client to UniFi USG L2TP Server

Install, Configure and add a repository with Git on CentOS 7

diff --git a/page2/index.html b/page2/index.html index bac10834b..7402922f6 100644 --- a/page2/index.html +++ b/page2/index.html @@ -1 +1 @@ - TotalDebug
Home
TotalDebug
Cancel
Preview Image

Type hinting and checking in Python

Type hinting is a formal solution that statically indicates the type of a value within your Python code. Specified by PEP 484 and introduced in Python 3.5. Type hints help to structure your p...

Preview Image

Creating the perfect Python project

Working on a new project it’s always exciting to jump straight in and get coding without any setup time. However, spending a small amount of time to set up the project with the best tools and practice...

Preview Image

Cookiecutter: Automate project creation!

As I move closer to the world of development within my career I have been looking for more efficient ways to spend my time, along with assisting my colleagues and myself follow the programming, doc...

Preview Image

Sqitch, Sensible database change management

Overview Recently I have been working on a few projects that utilise PostgreSQL databases, as the projects have grown our team has found it increasingly more difficult to manage all of the databas...

Preview Image

Using CloneZilla to migrate multiple disk server

Overview I recently decided to migrate all of my home servers to Proxmox from VMware ESXi, many factors at play but the main being that new versions of ESXi don’t support my hardware. For a norma...

Preview Image

Use Git like a pro!

Over the past few months I have been using Git & GitHub more frequently, both in my professional and personal work; with this came many questions about what the “correct” way is to use Git. Th...

Preview Image

Use GitHub pages with unsupported plugins

I have recently migrated my website over to Github Pages, however in doing so I have found that there are some limitations, the main one being that not all Jekyll plugins are supported. Due to thi...

Preview Image

Docker Overlay2 with CentOS for production

The following short article runs through how to set up Docker to use overlay2 with CentOS for use in production Pre-Requisites Add an extra drive to CentOS (this could also be free space on the ...

Preview Image

3d Printer Axes Calibration

One of the most difficult things I found out about 3d printing was that you must calibrate it! This isn’t something that I was aware of, I assumed once everything was tightened that it would just w...

Preview Image

I won an Ender 3 3D Printer and I'm addicted

About 6 months ago I entered a competition with DrZzs (highly recommend his channel for home automation) and BangGood to win a Creality Ender 3 3D Printer. To my surprise a few weeks later I recei...

+ TotalDebug
Home
TotalDebug
Cancel
Preview Image

Type hinting and checking in Python

Type hinting is a formal solution that statically indicates the type of a value within your Python code. Specified by PEP 484 and introduced in Python 3.5. Type hints help to structure your p...

Preview Image

Creating the perfect Python project

Working on a new project it’s always exciting to jump straight in and get coding without any setup time. However, spending a small amount of time to set up the project with the best tools and practice...

Preview Image

Cookiecutter: Automate project creation!

As I move closer to the world of development within my career I have been looking for more efficient ways to spend my time, along with assisting my colleagues and myself follow the programming, doc...

Preview Image

Sqitch, Sensible database change management

Overview Recently I have been working on a few projects that utilise PostgreSQL databases, as the projects have grown our team has found it increasingly more difficult to manage all of the databas...

Preview Image

Using CloneZilla to migrate multiple disk server

Overview I recently decided to migrate all of my home servers to Proxmox from VMware ESXi, many factors at play but the main being that new versions of ESXi don’t support my hardware. For a norma...

Preview Image

Use Git like a pro!

Over the past few months I have been using Git & GitHub more frequently, both in my professional and personal work; with this came many questions about what the “correct” way is to use Git. Th...

Preview Image

Use GitHub pages with unsupported plugins

I have recently migrated my website over to Github Pages, however in doing so I have found that there are some limitations, the main one being that not all Jekyll plugins are supported. Due to thi...

Preview Image

Docker Overlay2 with CentOS for production

The following short article runs through how to set up Docker to use overlay2 with CentOS for use in production Pre-Requisites Add an extra drive to CentOS (this could also be free space on the ...

Preview Image

3d Printer Axes Calibration

One of the most difficult things I found out about 3d printing was that you must calibrate it! This isn’t something that I was aware of, I assumed once everything was tightened that it would just w...

Preview Image

I won an Ender 3 3D Printer and I'm addicted

About 6 months ago I entered a competition with DrZzs (highly recommend his channel for home automation) and BangGood to win a Creality Ender 3 3D Printer. To my surprise a few weeks later I recei...

diff --git a/page3/index.html b/page3/index.html index 98ea609f9..d7af21975 100644 --- a/page3/index.html +++ b/page3/index.html @@ -1 +1 @@ - TotalDebug
Home
TotalDebug
Cancel
Preview Image

CentOS 8 Teaming with WiFi Hidden SSID using nmcli

I have had a lot of issues when setting up teaming with WiFi, mainly because of the lack of documentation around this. I’m guessing that teaming ethernet and WiFi is not a common occurrence, especially w...

Continuous Integration and Deployment

I have recently been looking into CI and CD, mainly for use at home with my various projects etc. but also to further my knowledge. Over the years I have built up quite an estate of servers that o...

UniFi L2TP: set a static IP for a specific user (built-in Radius Server)

When using my L2TP VPN with the Unifi I realised that it was assigning a different IP Address to my client when it connected sometimes. This wouldn’t normally be a problem if the remote client was...

Ubiquiti UniFi USG Content Filter Configuration

Recently I had a requirement to setup a content filter on the USG for a client. I couldn’t find much information online so have decided to write this article to show others how to do this First we...

vCloud Director 8.10 – Renew SSL Certificates

Today I had to renew SSL certificates for a vCloud Director 8.10 cell which had expired. I could not find a working guide explaining the steps so this post covers everything required to replace ex...

Docker install on CentOS & basic Docker commands

In this video I will take you through installing Docker on CentOS and some of the most common basic commands you will need to work with Docker.

What is Docker? - Overview

In this video I talk about what Docker is, how it can be used and how containerisation differs from virtualisation. For anyone just getting into Docker this video and my Docker series will take yo...

Preview Image

Install, Configure and add a repository with Git on CentOS 7

Git is an open source, version control system (VCS). It’s commonly used for source code management by developers to allow them to track changes to code bases throughout the product lifecycle, with ...

JUNOS: Monitor Log Files

When working with JUNOS switches and similar devices, you may want to monitor the logs over a period of time without reloading them every few minutes and scrolling to the bottom. These few commands show you ho...

CentOS 6/7 IPSec/L2TP VPN client to UniFi USG L2TP Server

Working with CentOS quite a lot I have spent time looking for configurations that work for various issues, one I have seen recently that took me a long time to resolve and had very poor documentati...

+ TotalDebug
Home
TotalDebug
Cancel
Preview Image

CentOS 8 Teaming with WiFi Hidden SSID using nmcli

I have had a lot of issues when setting up teaming with WiFi, mainly because of the lack of documentation around this. I’m guessing that teaming ethernet and WiFi is not a common occurrence, especially w...

Continuous Integration and Deployment

I have recently been looking into CI and CD, mainly for use at home with my various projects etc. but also to further my knowledge. Over the years I have built up quite an estate of servers that o...

UniFi L2TP: set a static IP for a specific user (built-in Radius Server)

When using my L2TP VPN with the Unifi I realised that it was assigning a different IP Address to my client when it connected sometimes. This wouldn’t normally be a problem if the remote client was...

Ubiquiti UniFi USG Content Filter Configuration

Recently I had a requirement to setup a content filter on the USG for a client. I couldn’t find much information online so have decided to write this article to show others how to do this First we...

vCloud Director 8.10 – Renew SSL Certificates

Today I had to renew SSL certificates for a vCloud Director 8.10 cell which had expired. I could not find a working guide explaining the steps so this post covers everything required to replace ex...

Docker install on CentOS & basic Docker commands

In this video I will take you through installing Docker on CentOS and some of the most common basic commands you will need to work with Docker.

What is Docker? - Overview

In this video I talk about what Docker is, how it can be used and how containerisation differs from virtualisation. For anyone just getting into Docker this video and my Docker series will take yo...

Preview Image

Install, Configure and add a repository with Git on CentOS 7

Git is an open source, version control system (VCS). It’s commonly used for source code management by developers to allow them to track changes to code bases throughout the product lifecycle, with ...

JUNOS: Monitor Log Files

When working with JUNOS switches and similar devices, you may want to monitor the logs over a period of time without reloading them every few minutes and scrolling to the bottom. These few commands show you ho...

CentOS 6/7 IPSec/L2TP VPN client to UniFi USG L2TP Server

Working with CentOS quite a lot I have spent time looking for configurations that work for various issues, one I have seen recently that took me a long time to resolve and had very poor documentati...

diff --git a/page4/index.html b/page4/index.html index da3e610c5..7d6921015 100644 --- a/page4/index.html +++ b/page4/index.html @@ -1 +1 @@ - TotalDebug
Home
TotalDebug
Cancel

Upgrade your Linux UniFi Controller in minutes!

Ubiquiti provide a Controller version for other distributions of Linux but only list Debian on their site, so if you’re running CentOS or another Linux distribution, you’ll have to use the ge...

Preview Image

Setup Ubiquiti UniFi USG Remote User VPN

I have recently had loads of trouble setting up a Ubiquiti UniFi USG remote user VPN, the USG requires a RADIUS server in order to function correctly, the following article covers this setup freeRA...

Preview Image

Install UniFi Controller on CentOS 7

This is a short simple guide to assist users with installing the Ubiquiti UniFi Controller required for all UniFi devices on a CentOS 7 Server. First we need to update our CentOS server and disa...

Install FreeRadius on CentOS 7 with DaloRadius for management – Updated

I have recently purchased a load of Ubiquiti UniFi equipment; as part of this I have the UniFi USG which, in order to deploy a User VPN, requires a RADIUS Server for user authentication. This article...

Two-Factor Authentication: is it worth it, does it really add more security?

As we all move to a digital age, adding more and more personal information to the internet, security has become a real issue. In recent years there have been hack attempts on well-known brands, incl...

Snapshot changes in vSphere 6.0

This is something that I was unaware of until recently when I was looking into the usage of V-Vols. It appears that VMware have made some major improvements to the ways we handle snapshots and cons...

Teamspeak 3 on CentOS 7 using MariaDB Database (3.0.12.4)

This tutorial takes you through setting up Teamspeak 3 on CentOS 7, I will also be going through using a MariaDB database for the backend and a custom system services script. We are using MariaDB ...

Teamspeak 3 Recovering privilege key after first startup (MySQL/MariaDB Only)

When deploying a Teamspeak3 server one thing that is vital for the first time startup is to make a note of the privilege key, but what do you do if for some reason you didn’t write it down? In thi...

vCloud Director 8.0 for Service Providers

As most of you will now be aware VMware decided to end availability for vCloud Director and shift to only allow service providers to utilise the product. Originally the idea was that organisation...

Failed to connect to VMware Lookup Service, SSL certificate verification failed

Recently I have been playing in my lab with VCSA and vCNS, I found that when I tried to connect to the vCenter I received this error: Failed to connect to VMware Lookup Service. SSL certificate ve...

+ TotalDebug
Home
TotalDebug
Cancel

Upgrade your Linux UniFi Controller in minutes!

Ubiquiti provide a Controller version for other distributions of Linux but only display Debian on their site, so if you’re running CentOS or another Linux distribution, you’ll have to use the ge...

Preview Image

Setup Ubiquiti UniFi USG Remote User VPN

I have recently had loads of trouble setting up a Ubiquiti UniFi USG remote user VPN. The USG requires a RADIUS server in order to function correctly; the following article covers this setup freeRA...

Preview Image

Install UniFi Controller on CentOS 7

This is a short simple guide to assist users with installing the Ubiquiti UniFi Controller required for all UniFi devices on a CentOS 7 Server. First we need to update our CentOS server and disa...

Install FreeRadius on CentOS 7 with DaloRadius for management – Updated

I have recently purchased a load of Ubiquiti UniFi equipment; as part of this I have the UniFi USG which, in order to deploy a User VPN, requires a RADIUS Server for user authentication. This article...

Two-Factor Authentication: is it worth it, does it really add more security?

As we all move to a digital age, adding more and more personal information to the internet, security has become a real issue. In recent years there have been hack attempts on well-known brands, incl...

Snapshot changes in vSphere 6.0

This is something that I was unaware of until recently when I was looking into the usage of V-Vols. It appears that VMware have made some major improvements to the ways we handle snapshots and cons...

Teamspeak 3 on CentOS 7 using MariaDB Database (3.0.12.4)

This tutorial takes you through setting up Teamspeak 3 on CentOS 7, I will also be going through using a MariaDB database for the backend and a custom system services script. We are using MariaDB ...

Teamspeak 3 Recovering privilege key after first startup (MySQL/MariaDB Only)

When deploying a Teamspeak3 server one thing that is vital for the first time startup is to make a note of the privilege key, but what do you do if for some reason you didn’t write it down? In thi...

vCloud Director 8.0 for Service Providers

As most of you will now be aware VMware decided to end availability for vCloud Director and shift to only allow service providers to utilise the product. Originally the idea was that organisation...

Failed to connect to VMware Lookup Service, SSL certificate verification failed

Recently I have been playing in my lab with VCSA and vCNS, I found that when I tried to connect to the vCenter I received this error: Failed to connect to VMware Lookup Service. SSL certificate ve...

diff --git a/page5/index.html b/page5/index.html index 176c4b72b..2d359bddf 100644 --- a/page5/index.html +++ b/page5/index.html @@ -1 +1 @@ - TotalDebug
Home
TotalDebug
Cancel

How to check if a VM disk is Thick or Thin provisioned

There are multiple ways to tell if a virtual machine has thick or thin provisioned VM Disk. Below are some of the ways I am able to see this information: VI Client (thick client) Select the V...

Bulk configure vCenter Alarms with PowerCLI

I was recently asked if it was possible to update vCenter alarms in bulk with email details. So I set about writing the below script; basically this script will go through looking for any alarms th...

VMware ESXi Embedded Host Client Installation – Updated

In this article I will be showing you guys the new ESXi Embedded Host Client, this has been long awaited by many users of the Free ESXi host and allows much better management of the host. Check o...

Mikrotik OpenVPN Server with Linux Client

I spent quite some time trying to get the OpenVPN Server working on the Mikrotik Router with a Linux client, It caused some pain and I didn’t want others to go through that. I have therefore writte...

VMware Transparent Page Sharing TPS

What is TPS? Transparent Page Sharing (TPS) is a host process that leverages the Virtual Machine Monitor (VMM) component of the VMkernel to scan physical host memory to identify duplicate VM memory p...

VMware Large Snapshot Safe Removal

One of the great virtualization and VMware features is the ability to take snapshots of a virtual machine. The snapshot feature allows an IT administrator to make a restore point of a virtual machi...

Dell VMware 5.5 FCoE Errors

Recently I have seen an issue after upgrading some of our Dell R6xx hosts to 5.5 U2, they started showing FCoE in the storage adapters and booting took a really long time. I looked into this and f...

Add vCenter Logs to Syslog Server (GrayLog2)

In this article I will be showing you how to add vCenter logs to a syslog server. I currently use GrayLog2 as it’s a great free syslog server and does everything that I require. First we want to i...

vCenter 6.0 VCSA Deployment

This article covers the deployment of the vCenter 6.0 VCSA; you will see that this process is radically different from previous processes. Download VCSA 6.0 from the VMware Website. Mount t...

NUMA and vNUMA made simple!

What is NUMA? Most modern CPUs, Intel’s new Nehalem and AMD’s veteran Opteron, are NUMA architectures. NUMA stands for Non-Uniform Memory Access. Each CPU gets assigned its own “local” memory; CPU ...

+ TotalDebug
Home
TotalDebug
Cancel

How to check if a VM disk is Thick or Thin provisioned

There are multiple ways to tell if a virtual machine has thick or thin provisioned VM Disk. Below are some of the ways I am able to see this information: VI Client (thick client) Select the V...

Bulk configure vCenter Alarms with PowerCLI

I was recently asked if it was possible to update vCenter alarms in bulk with email details. So I set about writing the below script; basically this script will go through looking for any alarms th...

VMware ESXi Embedded Host Client Installation – Updated

In this article I will be showing you guys the new ESXi Embedded Host Client, this has been long awaited by many users of the Free ESXi host and allows much better management of the host. Check o...

Mikrotik OpenVPN Server with Linux Client

I spent quite some time trying to get the OpenVPN Server working on the Mikrotik Router with a Linux client, It caused some pain and I didn’t want others to go through that. I have therefore writte...

VMware Transparent Page Sharing TPS

What is TPS? Transparent Page Sharing (TPS) is a host process that leverages the Virtual Machine Monitor (VMM) component of the VMkernel to scan physical host memory to identify duplicate VM memory p...

VMware Large Snapshot Safe Removal

One of the great virtualization and VMware features is the ability to take snapshots of a virtual machine. The snapshot feature allows an IT administrator to make a restore point of a virtual machi...

Dell VMware 5.5 FCoE Errors

Recently I have seen an issue after upgrading some of our Dell R6xx hosts to 5.5 U2, they started showing FCoE in the storage adapters and booting took a really long time. I looked into this and f...

Add vCenter Logs to Syslog Server (GrayLog2)

In this article I will be showing you how to add vCenter logs to a syslog server. I currently use GrayLog2 as it’s a great free syslog server and does everything that I require. First we want to i...

vCenter 6.0 VCSA Deployment

This article covers the deployment of the vCenter 6.0 VCSA; you will see that this process is radically different from previous processes. Download VCSA 6.0 from the VMware Website. Mount t...

NUMA and vNUMA made simple!

What is NUMA? Most modern CPUs, Intel’s new Nehalem and AMD’s veteran Opteron, are NUMA architectures. NUMA stands for Non-Uniform Memory Access. Each CPU gets assigned its own “local” memory; CPU ...

diff --git a/page6/index.html b/page6/index.html index 9c810db16..d342d3f24 100644 --- a/page6/index.html +++ b/page6/index.html @@ -1 +1 @@ - TotalDebug
Home
TotalDebug
Cancel

Offline Upgrade ESXi 5.5 to 6.0

This is a very short and sweet article documenting the offline upgrade process from 5.5 to 6.0 Download the ESXi 6.0 Offline Bundle from the VMware website. Upload the file to the local data...

VMware Distributed Switches (dvSwitch)

In this article I am going to take you through what a Distributed switch or dvSwitch is and how it is used, I will also talk about why backing them up is so important, then show you how to backup b...

Understanding Resource Pools in VMware

It is my experience that resource pools are nearly a four letter word in the virtualization world. Typically I see a look of fear or confusion when I bring up the topic, or I see people using them ...

Graylog2 Cisco ASA / Cisco Catalyst

In order to correctly log Cisco devices in Graylog2, set up the below configuration. This has now been added to the Graylog Marketplace https://marketplace.graylog.org/ Cisco ASA Configuration: lo...

Graylog2 CentOS Installation

I recently required a syslog server that was easy to use with a web interface to monitor some customers firewalls. I had been looking at Splunk but due to the price of this product it was not a via...

Teamspeak 3 with MySQL on CentOS 6.x (3.0.11.1 Onwards)

By default Teamspeak 3 uses a SQLite database, most people tend to use this however for those of us that prefer MySQL there is a way to change it. Follow this small tutorial to create a Teamspeak ...

vCloud Director and vCenter Proxy Service Failure

Over the past couple of weeks I have spent some time working with VMware vCloud Director 5.1. I will also be producing multiple other guides for vCloud Director as I use it more over the coming mon...

How to setup an NFS mount on CentOS 6

About NFS (Network File System) Mounts NFS mounts allow sharing a directory between several servers. This has the advantage of saving disk space, as the directory is only kept on one server, and o...

Email Report Virtual Machines with Snapshots

I have recently had an issue with people leaving snapshots on VM’s for too long causing large snapshots and poor performance on Virtual Machines. I decided that I needed a way of reporting on whic...

CentOS Server Hardening Tips

This article provides various hardening tips for your Linux server. 1. Minimise Packages to Minimise Vulnerability Do you really want all sorts of services installed? It’s recommended to avoid ...

+ TotalDebug
Home
TotalDebug
Cancel

Offline Upgrade ESXi 5.5 to 6.0

This is a very short and sweet article documenting the offline upgrade process from 5.5 to 6.0 Download the ESXi 6.0 Offline Bundle from the VMware website. Upload the file to the local data...

VMware Distributed Switches (dvSwitch)

In this article I am going to take you through what a Distributed switch or dvSwitch is and how it is used, I will also talk about why backing them up is so important, then show you how to backup b...

Understanding Resource Pools in VMware

It is my experience that resource pools are nearly a four letter word in the virtualization world. Typically I see a look of fear or confusion when I bring up the topic, or I see people using them ...

Graylog2 Cisco ASA / Cisco Catalyst

In order to correctly log Cisco devices in Graylog2, set up the below configuration. This has now been added to the Graylog Marketplace https://marketplace.graylog.org/ Cisco ASA Configuration: lo...

Graylog2 CentOS Installation

I recently required a syslog server that was easy to use with a web interface to monitor some customers firewalls. I had been looking at Splunk but due to the price of this product it was not a via...

Teamspeak 3 with MySQL on CentOS 6.x (3.0.11.1 Onwards)

By default Teamspeak 3 uses a SQLite database, most people tend to use this however for those of us that prefer MySQL there is a way to change it. Follow this small tutorial to create a Teamspeak ...

vCloud Director and vCenter Proxy Service Failure

Over the past couple of weeks I have spent some time working with VMware vCloud Director 5.1. I will also be producing multiple other guides for vCloud Director as I use it more over the coming mon...

How to setup an NFS mount on CentOS 6

About NFS (Network File System) Mounts NFS mounts allow sharing a directory between several servers. This has the advantage of saving disk space, as the directory is only kept on one server, and o...

Email Report Virtual Machines with Snapshots

I have recently had an issue with people leaving snapshots on VM’s for too long causing large snapshots and poor performance on Virtual Machines. I decided that I needed a way of reporting on whic...

CentOS Server Hardening Tips

This article provides various hardening tips for your Linux server. 1. Minimise Packages to Minimise Vulnerability Do you really want all sorts of services installed? It’s recommended to avoid ...

diff --git a/page7/index.html b/page7/index.html index effbfe266..f477f54cd 100644 --- a/page7/index.html +++ b/page7/index.html @@ -1 +1 @@ - TotalDebug
Home
TotalDebug
Cancel

Teamspeak 3 with MySQL on CentOS 6.x (before 3.0.11.1)

As of Version 3.0.11.1 this tutorial is no longer applicable. I will soon re-write this to accommodate the latest version. By default Teamspeak 3 uses a SQLite database, most people tend to us...

Migrate TeamSpeak 3 from SQLite to MySQL

One of the things I wanted to do was migrate my TeamSpeak server from SQLite to MySQL, so I created the below which makes the migration easy. Stop the TeamSpeak Server 2. Run the following co...

Cisco ASDM Java Runtime Device Connection

I have recently had a lot of issues with Cisco ASDM on new installs of Windows 7 and upwards. After lots of research and a bit of digging I have found a way to resolve this issue. Instal...

Setup rSnapshot backups on CentOS

In this article I will be talking you through how to use rSnapshot and rSync to backup your server with an email alert when the backup has been completed and what has been backed up. You must ...

CentOS Use Public/Private Keys for Authentication

The following Tutorial walks you through how to setup authentication using a key pair to negotiate the connection, stopping the requirement for passwords. 1.First, create a public/private key pai...

How to view which Virtual Machines have Snapshots in VMware

This is a question that I have been asked quite a lot recently. I have found multiple ways to do this but 2 are ones that I have used and find the most suitable. Using vSphere Client ...

Use Google Authenticator for 2FA with SSH

By default, SSH uses password authentication, most SSH hardening instructions recommend using SSH keys instead. However, SSH keys still only provide a single factor authentication, even though it i...

PHP Notice: Undefined index

There have been a few times when coding where I have hit the error PHP Notice: Undefined Index. I found the below solution to this issue, which is an extremely simple fix! How to Fix One simple answer – is...

Managing Application Settings in PHP

There are multiple ways to save application settings/configurations in PHP. You can save them in INI, XML or PHP files as well as a database table. I prefer a combination of the latter two; saving ...

How to recreate all Virtual Directories for Exchange 2007

Here you will find all the commands that will help you to recreate all Virtual Directories for Exchange 2007. You can also use just a few of them. But never delete or create them in IIS. This has to be ...

+ TotalDebug
Home
TotalDebug
Cancel

Teamspeak 3 with MySQL on CentOS 6.x (before 3.0.11.1)

As of Version 3.0.11.1 this tutorial is no longer applicable. I will soon re-write this to accommodate the latest version. By default Teamspeak 3 uses a SQLite database, most people tend to us...

Migrate TeamSpeak 3 from SQLite to MySQL

One of the things I wanted to do was migrate my TeamSpeak server from SQLite to MySQL, so I created the below which makes the migration easy. Stop the TeamSpeak Server 2. Run the following co...

Cisco ASDM Java Runtime Device Connection

I have recently had a lot of issues with Cisco ASDM on new installs of Windows 7 and upwards. After lots of research and a bit of digging I have found a way to resolve this issue. Instal...

Setup rSnapshot backups on CentOS

In this article I will be talking you through how to use rSnapshot and rSync to backup your server with an email alert when the backup has been completed and what has been backed up. You must ...

CentOS Use Public/Private Keys for Authentication

The following Tutorial walks you through how to setup authentication using a key pair to negotiate the connection, stopping the requirement for passwords. 1.First, create a public/private key pai...

How to view which Virtual Machines have Snapshots in VMware

This is a question that I have been asked quite a lot recently. I have found multiple ways to do this but 2 are ones that I have used and find the most suitable. Using vSphere Client ...

Use Google Authenticator for 2FA with SSH

By default, SSH uses password authentication, most SSH hardening instructions recommend using SSH keys instead. However, SSH keys still only provide a single factor authentication, even though it i...

PHP Notice: Undefined index

There have been a few times when coding where I have hit the error PHP Notice: Undefined Index. I found the below solution to this issue, which is an extremely simple fix! How to Fix One simple answer – is...

Managing Application Settings in PHP

There are multiple ways to save application settings/configurations in PHP. You can save them in INI, XML or PHP files as well as a database table. I prefer a combination of the latter two; saving ...

How to recreate all Virtual Directories for Exchange 2007

Here you will find all the commands that will help you to recreate all Virtual Directories for Exchange 2007. You can also use just a few of them. But never delete or create them in IIS. This has to be ...

diff --git a/page8/index.html b/page8/index.html index c5c3a9acb..5e9746504 100644 --- a/page8/index.html +++ b/page8/index.html @@ -1 +1 @@ - TotalDebug
Home
TotalDebug
Cancel

Your client does not support opening this list with windows explorer

When using Office 365 and SharePoint 2010 you may find that trying to open a library in Explorer will result in this error: “Your client does not support opening this list with windows explorer” ...

Folder redirection permissions. My Documents / Start Menu / Desktop

How to correctly set-up folder redirection permissions for My Documents, Start Menu and Desktop. I have worked on many company computer systems where this hadn’t been done correctly resulting in fu...

How to turn on automatic logon to a domain with Windows XP, Windows 7 and Server 2008

I had a requirement for some of our security camera servers to log in automatically. On a normal standalone computer this is easy, but on a domain it gets more complicated. So how did I overcome...

Upgrading a Cisco Catalyst 3560 Switch

Here are my notes on upgrading a Catalyst 3560. I plugged in a laptop to the serial console and an ethernet cable into port 1 (technically interface Gigabit Ethernet 0/1). Here is the official Cisc...

Deploy .exe using batch check os version and if the update is already installed.

OK so I had an issue where Microsoft released an update for Windows XP that I needed to install, but they didn’t provide an MSI, so I couldn’t deploy it using GPO, which was a real pain. Instead I created...

Active Sync Error EventID 3005 Unexpected Exchange Mailbox Server Error

One of our customers was getting the below error and it took ages to find a solution so I thought I would post it here. Unexpected Exchange mailbox Server error: Server: [server.domain] User: [use...

Office 365 Scan to Email

Ok so this one had me stumped for a LONG time trying to figure out how to get scanners to authenticate to Office 365. In the end I found out that the scanner I was using wasn’t supported in this form...

Assigning Send As Permissions to a user

It was brought to my attention that following the steps listed in KB327000, which applies to Exchange 2000 and 2003, to assign a user Send As permission as another user did not appear to work.  I t...

How To View and Kill Processes On Remote Windows Computers

Windows provides several methods to view processes remotely on another computer. Terminal Server is one way or you can use the command line utility pslist from Microsoft Sysinternals site. While bo...

Fortigate and LDAP 4.0 MR3 Patch1

Hi Guys, I have been setting up a lot of Fortigates recently and on my first few had issues with the settings for LDAP. I found that it was tricky to remember the correct settings and also typing ...

+ TotalDebug
Home
TotalDebug
Cancel

Your client does not support opening this list with windows explorer

When using Office 365 and SharePoint 2010 you may find that trying to open a library in Explorer will result in this error: “Your client does not support opening this list with windows explorer” ...

Folder redirection permissions. My Documents / Start Menu / Desktop

How to correctly set-up folder redirection permissions for My Documents, Start Menu and Desktop. I have worked on many company computer systems where this hadn’t been done correctly resulting in fu...

How to turn on automatic logon to a domain with Windows XP, Windows 7 and Server 2008

I had a requirement for some of our security camera servers to log in automatically. On a normal standalone computer this is easy, but on a domain it gets more complicated. So how did I overcome...

Upgrading a Cisco Catalyst 3560 Switch

Here are my notes on upgrading a Catalyst 3560. I plugged in a laptop to the serial console and an ethernet cable into port 1 (technically interface Gigabit Ethernet 0/1). Here is the official Cisc...

Deploy .exe using batch check os version and if the update is already installed.

OK so I had an issue where Microsoft released an update for Windows XP that I needed to install, but they didn’t provide an MSI, so I couldn’t deploy it using GPO, which was a real pain. Instead I created...

Active Sync Error EventID 3005 Unexpected Exchange Mailbox Server Error

One of our customers was getting the below error and it took ages to find a solution so I thought I would post it here. Unexpected Exchange mailbox Server error: Server: [server.domain] User: [use...

Office 365 Scan to Email

Ok so this one had me stumped for a LONG time trying to figure out how to get scanners to authenticate to Office 365. In the end I found out that the scanner I was using wasn’t supported in this form...

Assigning Send As Permissions to a user

It was brought to my attention that following the steps listed in KB327000, which applies to Exchange 2000 and 2003, to assign a user Send As permission as another user did not appear to work.  I t...

How To View and Kill Processes On Remote Windows Computers

Windows provides several methods to view processes remotely on another computer. Terminal Server is one way or you can use the command line utility pslist from Microsoft Sysinternals site. While bo...

Fortigate and LDAP 4.0 MR3 Patch1

Hi Guys, I have been setting up a lot of Fortigates recently and on my first few had issues with the settings for LDAP. I found that it was tricky to remember the correct settings and also typing ...

diff --git a/page9/index.html b/page9/index.html index b1d33a13b..d0db1a509 100644 --- a/page9/index.html +++ b/page9/index.html @@ -1 +1 @@ - TotalDebug
Home
TotalDebug
Cancel

Server 2003 Reinstall Terminal Services Licensing.

I came across an issue today where I needed to reinstall terminal services licensing but when you do this licensing is lost and needs to be re-applied. I managed to resolve this issue by copying t...

Warning: Cannot modify header information – headers already sent by…

Ok so today I was doing some PHP coding and got the dreaded header error, which caused me a bit of a headache as I needed to redirect some pages. After a bit of searching I managed to find an alternative ...

Mapping a network drive in NT4 with logon credentials

Ok so today I had a customer come to me saying that when they map a network drive in NT4 the user details don’t get remembered when the pc is rebooted. Here is a simple solution to the issue we ha...

Send on Behalf and Send As

Send on Behalf and Send As are similar in fashion. Send on Behalf will allow a user to send as another user while showing the recipient that it was sent from a specific user on behalf of another us...

The Missing Manual Part 1: Veeam B & R Direct SAN Backups

One thing that I had problems with the first time I installed Veeam was the ability to backup Virtual Machines directly from the SAN. Meaning that instead of proxying the data through an ESXi host,...

Killing a Windows service that hangs on "stopping"

It sometimes happens (and it’s not a good sign most of the time): you’d like to stop a Windows Service, and when you issue the stop command through the SCM (Service Control Manager) or by using the...

Synchronise time with external NTP server on Windows Server

Time synchronization is an important aspect for all computers on the network. By default, the client computers get their time from a Domain Controller and the Domain Controller gets its time from ...

How to Make the Shutdown Button Unavailable with Group Policy

You can use Group Policy Editor to make the Shutdown button unavailable in the Log On to Windows dialog box that appears when you press CTRL+ALT+DELETE on the Welcome to Windows screen. To Edit th...

+ TotalDebug
Home
TotalDebug
Cancel

Server 2003 Reinstall Terminal Services Licensing.

I came across an issue today where I needed to reinstall terminal services licensing but when you do this licensing is lost and needs to be re-applied. I managed to resolve this issue by copying t...

Warning: Cannot modify header information – headers already sent by…

Ok so today I was doing some PHP coding and got the dreaded header error, which caused me a bit of a headache as I needed to redirect some pages. After a bit of searching I managed to find an alternative ...

Mapping a network drive in NT4 with logon credentials

Ok so today I had a customer come to me saying that when they map a network drive in NT4 the user details don’t get remembered when the pc is rebooted. Here is a simple solution to the issue we ha...

Send on Behalf and Send As

Send on Behalf and Send As are similar in fashion. Send on Behalf will allow a user to send as another user while showing the recipient that it was sent from a specific user on behalf of another us...

The Missing Manual Part 1: Veeam B & R Direct SAN Backups

One thing that I had problems with the first time I installed Veeam was the ability to backup Virtual Machines directly from the SAN. Meaning that instead of proxying the data through an ESXi host,...

Killing a Windows service that hangs on "stopping"

It sometimes happens (and it’s not a good sign most of the time): you’d like to stop a Windows Service, and when you issue the stop command through the SCM (Service Control Manager) or by using the...

Synchronise time with external NTP server on Windows Server

Time synchronization is an important aspect for all computers on the network. By default, the client computers get their time from a Domain Controller and the Domain Controller gets its time from ...

How to Make the Shutdown Button Unavailable with Group Policy

You can use Group Policy Editor to make the Shutdown button unavailable in the Log On to Windows dialog box that appears when you press CTRL+ALT+DELETE on the Welcome to Windows screen. To Edit th...

diff --git a/posts/3d-printer-axes-calibration/index.html b/posts/3d-printer-axes-calibration/index.html index e75b5492b..3c76e7a64 100644 --- a/posts/3d-printer-axes-calibration/index.html +++ b/posts/3d-printer-axes-calibration/index.html @@ -1,4 +1,4 @@ - 3d Printer Axes Calibration | TotalDebug
Home 3d Printer Axes Calibration
Post
Cancel

3d Printer Axes Calibration

1580166000
1655196277

One of the most difficult things I found out about 3d printing was that you must calibrate it! This isn’t something that I was aware of, I assumed once everything was tightened that it would just work, I was so wrong!

The good news is, it’s quite a simple process once you know how, and in this article I’m going to share with you how I calibrate my printer and get perfect prints almost every time.

I use an Ender 3 with a lot of upgrades, but the process is the same for almost all 3d printers , so you should be able to follow this article without issue.

What you will need:

  • 3d Printer
  • Correctly tensioned belts (they should make a nice twang sound)
  • Pronterface or Octoprint
  • Digital Calipers
  • Ruler (calipers sometimes get in the way but you may be ok)
  • Tape or marker
  • Filament
  • Something to take notes on

Axes Diagram:

3d printer axes Source: StackExchange.com

Setup Software:

First we need to gather all the current settings. To do this you must send a command to the printer, which can be done with either:

Pronterface

Plug the USB into the printer and a computer, then launch Pronterface; it should auto-detect the printer, then click Connect

You can now enter commands in the right window next to the Send button

Octoprint

Once Octoprint is setup go to the terminal tab and you can enter commands here

Gather Initial Info:

Issue the command M92, then press enter or hit Send. You should see something like this:

1
+ 3d Printer Axes Calibration | TotalDebug
Home 3d Printer Axes Calibration
Post
Cancel

3d Printer Axes Calibration

1580166000
1655196277

One of the most difficult things I found out about 3d printing was that you must calibrate it! This isn’t something that I was aware of, I assumed once everything was tightened that it would just work, I was so wrong!

The good news is, it’s quite a simple process once you know how, and in this article I’m going to share with you how I calibrate my printer and get perfect prints almost every time.

I use an Ender 3 with a lot of upgrades, but the process is the same for almost all 3d printers , so you should be able to follow this article without issue.

What you will need:

  • 3d Printer
  • Correctly tensioned belts (they should make a nice twang sound)
  • Pronterface or Octoprint
  • Digital Calipers
  • Ruler (calipers sometimes get in the way but you may be ok)
  • Tape or marker
  • Filament
  • Something to take notes on

Axes Diagram:

3d printer axes Source: StackExchange.com

Setup Software:

First we need to gather all the current settings. To do this you must send a command to the printer, which can be done with either:

Pronterface

Plug the USB into the printer and a computer, then launch Pronterface; it should auto-detect the printer, then click Connect

You can now enter commands in the right window next to the Send button

Octoprint

Once Octoprint is setup go to the terminal tab and you can enter commands here

Gather Initial Info:

Issue the command M92, then press enter or hit Send. You should see something like this:

1
 
echo: M92 X80.00 Y80.00 Z400.00 E93.00
 

Make a note of this information somewhere as we will be referring back to these values quite often.

Now we can begin to calibrate each of our motors.

X&Z-Axis Calibration

First start by homing your X axis and the Z axis. I will use the stop switch as the measuring point as this doesn’t move, however you can use any fixed point from the relevant axis.

First measure the distance from the stop switch to the edge of the moving part (X = Printhead, Z = Gantry) , if yours is touching the stop switch then the distance is 0mm.

Now tell your printer to move the axis 100mm (you can set this to a smaller or larger number as the calculation will still work); the further you move the axis, the more accurate your calibration should be. Now, with your calipers, measure from the stop switch to the same point on the printhead and write down the measurement as “ActualDistance”. You will need to do this for both the X & Z axes.

If you measured 100mm then you don’t need to do anything else; your axis is calibrated. However, you likely won’t get exactly 100mm, so we will need to adjust for this.

E Axis Calibration

There are two ways that you can calibrate the E Axis: with the HotEnd attached or without. Personally I prefer to remove the bowden tube from the extruder and measure this way; I find it’s much more accurate. Some people prefer to heat the HotEnd and let the filament flow through it.

First, remove your filament and disconnect the bowden tube, then push the filament through the extruder until you just see the end of it flush with the edge where the bowden tube attaches.

Now send 100mm to the E Axis to extrude (you will need to heat the HotEnd or it won’t work).

Once this finishes, measure with your calipers the distance from the end of the filament to the extruder; this should be 100mm. If not, make a note of the measurement (ActualDistance).

Calculations

In order to calibrate an axis we need the following calculation, which is the same no matter which axis you are working on:

1
 
NewValue = 100mm / ActualDistance * CurrentValue
@@ -6,4 +6,4 @@
 2
 
M92 X86.02 Y81.20 Z400.00 E149.00
 M500
-

We also add an M500, which will save the configuration. If you want to make sure the values have saved, restart your printer and issue M92 again; you should see the new values.

This post is licensed under CC BY 4.0 by the author.

I won a Ender 3 3D Printer and i'm addicted

Docker Overlay2 with CentOS for production

+

We also add an M500, which will save the configuration. If you want to make sure the values have saved, restart your printer and issue M92 again; you should see the new values.
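
As a worked example (the numbers here are made up purely for illustration): suppose you asked the X axis to move 100mm but measured 98.5mm, and the current value reported by M92 was X80.00. A quick way to do the maths from a shell:

# NewValue = 100 / ActualDistance * CurrentValue, using the made-up numbers above
echo "scale=2; 100 * 80.00 / 98.5" | bc
# prints 81.21 -> send M92 X81.21 followed by M500 to save it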

This post is licensed under CC BY 4.0 by the author.

I won a Ender 3 3D Printer and i'm addicted

Docker Overlay2 with CentOS for production

diff --git a/posts/active-sync-error-eventid-3005-unexpected-exchange-mailbox-server-error/index.html b/posts/active-sync-error-eventid-3005-unexpected-exchange-mailbox-server-error/index.html index e517a3d81..d1e57c07d 100644 --- a/posts/active-sync-error-eventid-3005-unexpected-exchange-mailbox-server-error/index.html +++ b/posts/active-sync-error-eventid-3005-unexpected-exchange-mailbox-server-error/index.html @@ -1,7 +1,7 @@ - Active Sync Error EventID 3005 Unexpected Exchange Mailbox Server Error | TotalDebug
Home Active Sync Error EventID 3005 Unexpected Exchange Mailbox Server Error
Post
Cancel

Active Sync Error EventID 3005 Unexpected Exchange Mailbox Server Error

1328054400
1614629284

One of our customers was getting the below error and it took ages to find a solution so I thought I would post it here.

1
+ Active Sync Error EventID 3005 Unexpected Exchange Mailbox Server Error | TotalDebug
Home Active Sync Error EventID 3005 Unexpected Exchange Mailbox Server Error
Post
Cancel

Active Sync Error EventID 3005 Unexpected Exchange Mailbox Server Error

1328054400
1614629284

One of our customers was getting the below error and it took ages to find a solution so I thought I would post it here.

1
 2
 3
 
Unexpected Exchange mailbox Server error: Server: [server.domain] User: [useremail] HTTP status code: [503]. Verify that the Exchange mailbox Server is working correctly.
 
 For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
-

This is how I fixed the issue:

  1. Open IIS
  2. Right click Default-Website
  3. Click Properties
  4. Click advanced
  5. Review the sites; most likely you will see host headers and an IP address.
  6. Click add
  7. IP address = (all unassigned)
  8. TCP Port = 80
  9. Host Header Value = (Blank)
  10. Click OK
  11. Delete the entry with the host headers and IP address assigned.

This should resolve the issue. Please comment if you have any issues doing this.

This post is licensed under CC BY 4.0 by the author.

Office 365 Scan to Email

Deploy .exe using batch check os version and if the update is already installed.

+

This is how I fixed the issue:

  1. Open IIS
  2. Right click Default-Website
  3. Click Properties
  4. Click advanced
  5. Review the sites; most likely you will see host headers and an IP address.
  6. Click add
  7. IP address = (all unassigned)
  8. TCP Port = 80
  9. Host Header Value = (Blank)
  10. Click OK
  11. Delete the entry with the host headers and IP address assigned.

This should resolve the issue. Please comment if you have any issues doing this.

This post is licensed under CC BY 4.0 by the author.

Office 365 Scan to Email

Deploy .exe using batch check os version and if the update is already installed.

diff --git a/posts/add-vcenter-logs-to-syslog-server-graylog2/index.html b/posts/add-vcenter-logs-to-syslog-server-graylog2/index.html index 51d762321..6d60788ed 100644 --- a/posts/add-vcenter-logs-to-syslog-server-graylog2/index.html +++ b/posts/add-vcenter-logs-to-syslog-server-graylog2/index.html @@ -1,4 +1,4 @@ - Add vCenter Logs to Syslog Server (GrayLog2) | TotalDebug
Home Add vCenter Logs to Syslog Server (GrayLog2)
Post
Cancel

Add vCenter Logs to Syslog Server (GrayLog2)

1436137200
1614629284

In this article I will be showing you how to add vCenter logs to a syslog server. I currently use GrayLog2 as it’s a great free syslog server and does everything that I require.

First we want to install NxLog on our vCenter Server; this will be our syslog client.

To configure NxLog go to c:\Program Files (x86)\nxlog\conf and edit nxlog.conf with a text editor.

Add the following configuration into the file:

1
+ Add vCenter Logs to Syslog Server (GrayLog2) | TotalDebug
Home Add vCenter Logs to Syslog Server (GrayLog2)
Post
Cancel

Add vCenter Logs to Syslog Server (GrayLog2)

1436137200
1614629284

In this article I will be showing you how to add vCenter logs to a syslog server. I currently use GrayLog2 as it’s a great free syslog server and does everything that I require.

First we want to install NxLog on our vCenter Server; this will be our syslog client.

To configure NxLog go to c:\Program Files (x86)\nxlog\conf and edit nxlog.conf with a text editor.

Add the following configuration into the file:

1
 2
 3
 4
@@ -192,4 +192,4 @@
     Port        60002
     OutputType	GELF
 </Output>
-

Once this configuration has been completed we need to configure an input in GrayLog2 for each of our NxLog outputs. My example just shows how to do this for the VPXD log, but it is the same for any log.

  • Login to GrayLog2 Web Interface
  • Go To System > Inputs
  • Select GELF UDP from the dropdown
  • Click Launch New Input
  • Tick Global Input or a specific GrayLog2 Server depending on your setup
  • Enter a Title e.g. VPXD Logs
  • Enter a port that you specified in the NxLog configuration (this must be unique)
  • Click Launch

You should now start to see the logs pouring in. vCenter does generate a LOT of logs, so you may want to keep an eye on your syslog server as it could get overloaded with data.

Hope this helped you, any issues or questions please let me know over on my Discord

Steve

This post is licensed under CC BY 4.0 by the author.

vCenter 6.0 VCSA Deployment

Dell VMware 5.5 FCoE Errors

+

Once this configuration has been completed we need to configure an input in GrayLog2 for each of our NxLog outputs. My example just shows how to do this for the VPXD log, but it is the same for any log.

  • Login to GrayLog2 Web Interface
  • Go To System > Inputs
  • Select GELF UDP from the dropdown
  • Click Launch New Input
  • Tick Global Input or a specific GrayLog2 Server depending on your setup
  • Enter a Title e.g. VPXD Logs
  • Enter a port that you specified in the NxLog configuration (this must be unique)
  • Click Launch

You should now start to see the logs pouring in. vCenter does generate a LOT of logs, so you may want to keep an eye on your syslog server as it could get overloaded with data.

Hope this helped you, any issues or questions please let me know over on my Discord

Steve

This post is licensed under CC BY 4.0 by the author.

vCenter 6.0 VCSA Deployment

Dell VMware 5.5 FCoE Errors

diff --git a/posts/assigning-send-as-permissions-to-a-user/index.html b/posts/assigning-send-as-permissions-to-a-user/index.html index de048234c..9ff11388e 100644 --- a/posts/assigning-send-as-permissions-to-a-user/index.html +++ b/posts/assigning-send-as-permissions-to-a-user/index.html @@ -1 +1 @@ - Assigning Send As Permissions to a user | TotalDebug
Home Assigning Send As Permissions to a user
Post
Cancel

Assigning Send As Permissions to a user

1323129600
1666884241

It was brought to my attention that following the steps listed in KB327000, which applies to Exchange 2000 and 2003, to assign a user Send As permission as another user did not appear to work. I too tried to follow the steps and found that they did not work. I know this feature works, so I went looking around for other documentation on this and found KB281208, which applies to Exchange 5.5 and 2000. Following the steps in KB281208 properly gave a user Send As permission as another user. But I found the steps listed in KB281208 were not complete either. The additional step that I performed was to remove all other permissions other than Send As. Here are the modified steps for KB281208 that I performed:

  1. Start Active Directory Users and Computers; click Start, point to Programs, point to Administrative Tools, and then click Active Directory Users and Computers.
  2. On the View menu, make sure that Advanced Features is selected.
  3. Double-click the user that you want to grant send as rights for, and then click the Security tab.
  4. Click Add, click the user that you want to give send as rights to, and then check send as under allow in the Permissions area.
  5. Remove all other permissions granted by default so only the send as permission is granted.
  6. Click OK to close the dialog box.

So after I verified that the steps for KB281208 worked, I was curious as to why the steps for KB327000 did not work. What I found was that Step #7 of KB327000 applied the permission to User Objects instead of This Object Only. Here are the modified steps for KB327000 that I performed:

  1. On an Exchange computer, click Start, point to Programs, point to Microsoft Exchange, and then click Active Directory Users and Computers.
  2. On the View menu, click to select Advanced Features.
  3. Expand Users, right-click the MailboxOwner object where you want to grant the permission, and then click Properties.
  4. Click the Security tab, and then click Advanced.
  5. In the Access Control Settings for MailboxOwner dialog box, click Add.
  6. In the Select User, Computer, or Group dialog box, click the user account or the group that you want to grant Send as permissions to, and then click OK.
  7. In the Permission Entry for MailboxOwner dialog box, click This Object Only in the Apply onto list.
  8. In the Permissions list, locate Send As, and then click to select the Allow check box.
  9. Click OK three times to close the dialog boxes.

The KB articles were updated to include correct information. But, if you had problems with this in the past, this might be why!

This post is licensed under CC BY 4.0 by the author.

How To View and Kill Processes On Remote Windows Computers

Office 365 Scan to Email

+ Assigning Send As Permissions to a user | TotalDebug
Home Assigning Send As Permissions to a user
Post
Cancel

Assigning Send As Permissions to a user

1323129600
1666884241

It was brought to my attention that following the steps listed in KB327000, which applies to Exchange 2000 and 2003, to assign a user Send As permission as another user did not appear to work. I too tried to follow the steps and found that they did not work. I know this feature works, so I went looking around for other documentation on this and found KB281208, which applies to Exchange 5.5 and 2000. Following the steps in KB281208 properly gave a user Send As permission as another user. But I found the steps listed in KB281208 were not complete either. The additional step that I performed was to remove all other permissions other than Send As. Here are the modified steps for KB281208 that I performed:

  1. Start Active Directory Users and Computers; click Start, point to Programs, point to Administrative Tools, and then click Active Directory Users and Computers.
  2. On the View menu, make sure that Advanced Features is selected.
  3. Double-click the user that you want to grant send as rights for, and then click the Security tab.
  4. Click Add, click the user that you want to give send as rights to, and then check send as under allow in the Permissions area.
  5. Remove all other permissions granted by default so only the send as permission is granted.
  6. Click OK to close the dialog box.

So after I verified that the steps for KB281208 worked, I was curious as to why the steps for KB327000 did not work. What I found was that Step #7 of KB327000 applied the permission to User Objects instead of This Object Only. Here are the modified steps for KB327000 that I performed:

  1. On an Exchange computer, click Start, point to Programs, point to Microsoft Exchange, and then click Active Directory Users and Computers.
  2. On the View menu, click to select Advanced Features.
  3. Expand Users, right-click the MailboxOwner object where you want to grant the permission, and then click Properties.
  4. Click the Security tab, and then click Advanced.
  5. In the Access Control Settings for MailboxOwner dialog box, click Add.
  6. In the Select User, Computer, or Group dialog box, click the user account or the group that you want to grant Send as permissions to, and then click OK.
  7. In the Permission Entry for MailboxOwner dialog box, click This Object Only in the Apply onto list.
  8. In the Permissions list, locate Send As, and then click to select the Allow check box.
  9. Click OK three times to close the dialog boxes.

The KB articles were updated to include correct information. But, if you had problems with this in the past, this might be why!

This post is licensed under CC BY 4.0 by the author.

How To View and Kill Processes On Remote Windows Computers

Office 365 Scan to Email

diff --git a/posts/automating-proxmox-with-terraform-ansible/index.html b/posts/automating-proxmox-with-terraform-ansible/index.html index 34242042f..0beffa497 100644 --- a/posts/automating-proxmox-with-terraform-ansible/index.html +++ b/posts/automating-proxmox-with-terraform-ansible/index.html @@ -1,4 +1,4 @@ - Automating deployments using Terraform with Proxmox and ansible | TotalDebug
Home Automating deployments using Terraform with Proxmox and ansible
Post
Cancel

Automating deployments using Terraform with Proxmox and ansible

1683363809
1691482830

Over the years my home lab has grown and become more and more difficult to maintain, especially because some servers I build and forget as they function so well.

I have found recently though that moving to newer versions of operating systems can be difficult for the servers that I can’t easily containerise at the moment.

For this reason I have moved over to using Terraform with Proxmox and ansible.

Telemate developed a Terraform provider that maps Terraform functionality to the Proxmox API, so start by defining the use of that provider in provider.tf

1
+ Automating deployments using Terraform with Proxmox and ansible | TotalDebug
Home Automating deployments using Terraform with Proxmox and ansible
Post
Cancel

Automating deployments using Terraform with Proxmox and ansible

1683363809
1691482830

Over the years my home lab has grown and become more and more difficult to maintain, especially because some servers I build and forget as they function so well.

I have found recently though that moving to newer versions of operating systems can be difficult for the servers that I can’t easily containerise at the moment.

For this reason I have moved over to using Terraform with Proxmox and ansible.

Telemate developed a Terraform provider that maps Terraform functionality to the Proxmox API, so start by defining the use of that provider in provider.tf

1
 2
 3
 4
@@ -272,4 +272,4 @@
 ansible_user = ""
 

Don’t commit this file to Git as it contains sensitive information

Any variables in vars.tf that have a default value don’t need to be defined in the credential file if the default value is sufficient.
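
One simple way to make sure the credential file never ends up in the repository is to ignore it explicitly (a minimal sketch; run it from the directory that holds your Terraform files):

# Keep credential.auto.tfvars out of version control
echo "credential.auto.tfvars" >> .gitignore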

The Cloud-Init template

The configuration that is used utilises a cloud-init template, check out my previous post (Proxmox template with cloud image and cloud init) where I cover how to set this up for use in Proxmox with Terraform.

Usage

Now all of the files we require are created, let’s get it running:

  1. Install Terraform and Ansible

    1
     
     apt install -y terraform ansible
    -
  2. Enter the directory where your Terraform files reside
  3. Run terraform init, this will initialize your Terraform configuration and pull all the required providers.
  4. Ensure that you have the credential.auto.tfvars file created and with your variables populated
  5. Run terraform plan -out plan and if everything seems good terraform apply.

Use terraform apply --auto-approve to automatically apply without a prompt

To destroy the infrastructure, run terraform destroy
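
Putting the steps above together, a typical run from the Terraform directory looks something like this sketch (assuming provider.tf, vars.tf and credential.auto.tfvars are already in place):

terraform init                 # pull the required providers
terraform plan -out plan       # review the planned changes
terraform apply plan           # apply the saved plan, or: terraform apply --auto-approve
terraform destroy              # tear the infrastructure down again when finished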

Final Thoughts

There is so much more potential using Terraform and Ansible. I have just scratched the surface, but you could automate everything up to firewall configuration as well; this is something I still need to look into, but it would be great to deploy and configure the firewall for each individual device.

If you have any cool ideas for using Terraform and Ansible please let me know in the comments below!

Until next time…

This post is licensed under CC BY 4.0 by the author.

Use Python pandas NOW for your big datasets

Last4Solar - My solar nightmare!

+
  2. Enter the directory where your Terraform files reside
  3. Run terraform init, this will initialize your Terraform configuration and pull all the required providers.
  4. Ensure that you have the credential.auto.tfvars file created and with your variables populated
  5. Run terraform plan -out plan and if everything seems good terraform apply.

Use terraform apply --auto-approve to automatically apply without a prompt

To destroy the infrastructure, run terraform destroy

Final Thoughts

There is so much more potential using Terraform and Ansible. I have just scratched the surface, but you could automate everything up to firewall configuration as well; this is something I still need to look into, but it would be great to deploy and configure the firewall for each individual device.

If you have any cool ideas for using Terraform and Ansible please let me know in the comments below!

Until next time…

This post is licensed under CC BY 4.0 by the author.

Use Python pandas NOW for your big datasets

Last4Solar - My solar nightmare!

diff --git a/posts/bulk-configure-vcenter-alarms-powercli/index.html b/posts/bulk-configure-vcenter-alarms-powercli/index.html index 514332664..537735309 100644 --- a/posts/bulk-configure-vcenter-alarms-powercli/index.html +++ b/posts/bulk-configure-vcenter-alarms-powercli/index.html @@ -1,4 +1,4 @@ - Bulk configure vCenter Alarms with PowerCLI | TotalDebug
    Home Bulk configure vCenter Alarms with PowerCLI
    Post
    Cancel

    Bulk configure vCenter Alarms with PowerCLI

    1452988800
    1614629284

    I was recently asked if it was possible to update vCenter alarms in bulk with email details. So I set about writing the below script; basically this script will go through looking for any alarms that match the name you specify and set the email as required.

    This is a really basic script and can easily be modified to set alarms how you want them.

    1
    + Bulk configure vCenter Alarms with PowerCLI | TotalDebug
    Home Bulk configure vCenter Alarms with PowerCLI
    Post
    Cancel

    Bulk configure vCenter Alarms with PowerCLI

    1452988800
    1614629284

    I was recently asked if it was possible to update vCenter alarms in bulk with email details. So I set about writing the below script; basically this script will go through looking for any alarms that match the name you specify and set the email as required.

    This is a really basic script and can easily be modified to set alarms how you want them.

    1
     2
     3
     4
    @@ -36,4 +36,4 @@
     }
     

    To edit multiple alarms at once simply change the $alarms variable as below:

    1
     
    $alarms = @("Test Alarm1", "Test Alarm2")
    -

    One thing you will probably notice is that we set the “Yellow” to “Red” status after everything else; the reason for this is that it is set by default when creating the alarm definition, and we need to unset it before resetting it with the required notification type.

    This post is licensed under CC BY 4.0 by the author.

    VMware ESXi Embedded Host Client Installation – Updated

    How to check if a VM disk is Thick or Thin provisioned

    +

    One thing you will probably notice is that we set the “Yellow” to “Red” status after everything else; the reason for this is that it is set by default when creating the alarm definition, and we need to unset it before resetting it with the required notification type.

    This post is licensed under CC BY 4.0 by the author.

    VMware ESXi Embedded Host Client Installation – Updated

    How to check if a VM disk is Thick or Thin provisioned

    diff --git a/posts/centos-67-ipsecl2tp-vpn-client-unifi-usg-l2tp-server/index.html b/posts/centos-67-ipsecl2tp-vpn-client-unifi-usg-l2tp-server/index.html index 654c4dd62..6f0b06672 100644 --- a/posts/centos-67-ipsecl2tp-vpn-client-unifi-usg-l2tp-server/index.html +++ b/posts/centos-67-ipsecl2tp-vpn-client-unifi-usg-l2tp-server/index.html @@ -1,4 +1,4 @@ - CentOS 6/7 IPSec/L2TP VPN client to UniFi USG L2TP Server | TotalDebug
    Home CentOS 6/7 IPSec/L2TP VPN client to UniFi USG L2TP Server
    Post
    Cancel

    CentOS 6/7 IPSec/L2TP VPN client to UniFi USG L2TP Server

    1502051446
    1666888493

    Working with CentOS quite a lot, I have spent time looking for configurations that work for various issues; one I have seen recently that took me a long time to resolve, and had very poor documentation around the net, was setting up an L2TP VPN.

    In Windows or iOS it’s a nice simple setup where you enter all the required details and it sorts out the IPsec and L2TP VPN for you; in CentOS this is much different.

    First we need to add the EPEL Repository:

    1
    + CentOS 6/7 IPSec/L2TP VPN client to UniFi USG L2TP Server | TotalDebug
    Home CentOS 6/7 IPSec/L2TP VPN client to UniFi USG L2TP Server
    Post
    Cancel

    CentOS 6/7 IPSec/L2TP VPN client to UniFi USG L2TP Server

    1502051446
    1666888493

Working with CentOS quite a lot, I have spent time looking for configurations that work for various issues. One I came across recently that took me a long time to resolve, and that had very poor documentation around the net, was setting up an L2TP VPN.

In Windows or iOS it's a nice simple setup: you enter all the required details and it sorts out the IPsec and L2TP VPN for you. In CentOS this is much different.

First we need to add the EPEL repository:

    1
     
    yum -y install epel-release
     

    Now we need to install the software:

    1
     
    sudo yum -y install xl2tpd openswan
    @@ -248,4 +248,4 @@
     then
             sudo route add -net xxx.xxx.xxx.xxx/xx dev ppp0
     fi
    -

    This can then be created as a cron job to make sure the vpn is always up and running.

    This post is licensed under CC BY 4.0 by the author.

    Upgrade your Linux UniFi Controller in minutes!

    JUNOS: Monitor Log Files

    +

    This can then be created as a cron job to make sure the vpn is always up and running.
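As a rough sketch, assuming the check script above is saved as /usr/local/bin/check-vpn.sh (a placeholder path, not necessarily where you keep it), a cron entry running it every five minutes could look like this:

# /etc/cron.d/check-vpn - re-establish the L2TP tunnel if it has dropped (paths are examples)
*/5 * * * * root /usr/local/bin/check-vpn.sh >/dev/null 2>&1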

    This post is licensed under CC BY 4.0 by the author.

    Upgrade your Linux UniFi Controller in minutes!

    JUNOS: Monitor Log Files

    diff --git a/posts/centos-8-teaming-with-wifi-hidden-ssid-using-nmcli/index.html b/posts/centos-8-teaming-with-wifi-hidden-ssid-using-nmcli/index.html index 4f3834199..0036dbf85 100644 --- a/posts/centos-8-teaming-with-wifi-hidden-ssid-using-nmcli/index.html +++ b/posts/centos-8-teaming-with-wifi-hidden-ssid-using-nmcli/index.html @@ -1,4 +1,4 @@ - CentOS 8 Teaming with WiFi Hidden SSID using nmcli | TotalDebug
    Home CentOS 8 Teaming with WiFi Hidden SSID using nmcli
    Post
    Cancel

    CentOS 8 Teaming with WiFi Hidden SSID using nmcli

    1572649200
    1666884241

I have had a lot of issues when setting up teaming with WiFi, mainly because of the lack of documentation around this. I'm guessing that teaming Ethernet and WiFi is not a common occurrence, especially with a hidden SSID.

As part of my home systems I am utilising an old laptop as my Home Assistant server; this gives me battery backup and network teaming, so if my switch dies the WiFi link still works.

Let's get to the meat and potatoes!

    So the first thing that we need to do is check our devices are available:

    1
    + CentOS 8 Teaming with WiFi Hidden SSID using nmcli | TotalDebug
    Home CentOS 8 Teaming with WiFi Hidden SSID using nmcli
    Post
    Cancel

    CentOS 8 Teaming with WiFi Hidden SSID using nmcli

    1572649200
    1666884241

I have had a lot of issues when setting up teaming with WiFi, mainly because of the lack of documentation around this. I'm guessing that teaming Ethernet and WiFi is not a common occurrence, especially with a hidden SSID.

As part of my home systems I am utilising an old laptop as my Home Assistant server; this gives me battery backup and network teaming, so if my switch dies the WiFi link still works.

Let's get to the meat and potatoes!

    So the first thing that we need to do is check our devices are available:

    1
     2
     3
     4
    @@ -140,4 +140,4 @@
             down count: 0
     runner:
       active port: eno1
    -

    More information on runners can be found here

    This post is licensed under CC BY 4.0 by the author.

    Continuous Integration and Deployment

    I won a Ender 3 3D Printer and i'm addicted

    +

    More information on runners can be found here
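If you want to verify the failover behaviour shown above, the team state can be inspected at any time; this is a hedged example assuming the team interface was named team0:

# show the runner, ports and which port is currently active
teamdctl team0 state

# confirm the team and port connections are up in NetworkManager
nmcli connection show --active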

    This post is licensed under CC BY 4.0 by the author.

    Continuous Integration and Deployment

    I won a Ender 3 3D Printer and i'm addicted

    diff --git a/posts/centos-public-private-key-auth/index.html b/posts/centos-public-private-key-auth/index.html index 0e9d0339a..15ae64d72 100644 --- a/posts/centos-public-private-key-auth/index.html +++ b/posts/centos-public-private-key-auth/index.html @@ -1,4 +1,4 @@ - CentOS Use Public/Private Keys for Authentication | TotalDebug
    Home CentOS Use Public/Private Keys for Authentication
    Post
    Cancel

    CentOS Use Public/Private Keys for Authentication

    1390176000
    1666884241

    The following Tutorial walks you through how to setup authentication using a key pair to negotiate the connection, stopping the requirement for passwords.

    1.First, create a public/private key pair on the client that you will use to connect to the server (you will need to do this from each client machine from which you connect):

    1
    + CentOS Use Public/Private Keys for Authentication | TotalDebug
    Home CentOS Use Public/Private Keys for Authentication
    Post
    Cancel

    CentOS Use Public/Private Keys for Authentication

    1390176000
    1666884241

    The following Tutorial walks you through how to setup authentication using a key pair to negotiate the connection, stopping the requirement for passwords.

    1.First, create a public/private key pair on the client that you will use to connect to the server (you will need to do this from each client machine from which you connect):

    1
     
    ssh-keygen -t rsa
     

Leave the passphrase blank if you don't want to receive a prompt for it.

This will create two files in your ~/.ssh directory: id_rsa and id_rsa.pub. The first, id_rsa, is your private key and the second, id_rsa.pub, is your public key.

    1. Now set permissions on your private key:
    1
     2
    @@ -12,4 +12,4 @@
     chmod 600 ~/.ssh/authorized_keys
     

    The above permissions are required if StrictModes is set to yes in /etc/ssh/sshd_config (the default).

    1. Ensure the correct SELinux contexts are set:
    1
     
    restorecon -Rv ~/.ssh
    -

    Now when you login to the server you shouldn’t be prompted for a password (unless you entered a passphrase). By default, ssh will first try to authenticate using keys. If no keys are found or authentication fails, then ssh will fall back to conventional password authentication.

If you want access to and from several servers, you will need to complete this process on each client and master server.

    If you have any issues with setting this up, please let me know over on my Discord.

    This post is licensed under CC BY 4.0 by the author.

    How to view which Virtual Machines have Snapshots in VMware

    Setup rSnapshot backups on CentOS

    +

    Now when you login to the server you shouldn’t be prompted for a password (unless you entered a passphrase). By default, ssh will first try to authenticate using keys. If no keys are found or authentication fails, then ssh will fall back to conventional password authentication.

If you want access to and from several servers, you will need to complete this process on each client and master server.
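For example, assuming one of those servers is reachable as user@server2.example.com (a placeholder, not a real host), the public key can be pushed to it with ssh-copy-id:

# append ~/.ssh/id_rsa.pub to the remote authorized_keys with the correct permissions
ssh-copy-id -i ~/.ssh/id_rsa.pub user@server2.example.com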

    If you have any issues with setting this up, please let me know over on my Discord.

    This post is licensed under CC BY 4.0 by the author.

    How to view which Virtual Machines have Snapshots in VMware

    Setup rSnapshot backups on CentOS

    diff --git a/posts/centos-server-hardening-tips/index.html b/posts/centos-server-hardening-tips/index.html index 943056a5f..9b9d15783 100644 --- a/posts/centos-server-hardening-tips/index.html +++ b/posts/centos-server-hardening-tips/index.html @@ -1,4 +1,4 @@ - CentOS Server Hardening Tips | TotalDebug
    Home CentOS Server Hardening Tips
    Post
    Cancel

    CentOS Server Hardening Tips

    1401318000
    1666884241

    This article provides various hardening tips for your Linux server.

    1. Minimise Packages to Minimise Vulnerability

Do you really want all sorts of services installed? It's recommended to avoid installing packages that are not required, to reduce your exposure to vulnerabilities and the risk of one service compromising others on your server. Find and remove or disable unwanted services to minimise the attack surface. Use the chkconfig command to find out which services are running on runlevel 3.

    1
    + CentOS Server Hardening Tips | TotalDebug
    Home CentOS Server Hardening Tips
    Post
    Cancel

    CentOS Server Hardening Tips

    1401318000
    1666884241

    This article provides various hardening tips for your Linux server.

    1. Minimise Packages to Minimise Vulnerability

Do you really want all sorts of services installed? It's recommended to avoid installing packages that are not required, to reduce your exposure to vulnerabilities and the risk of one service compromising others on your server. Find and remove or disable unwanted services to minimise the attack surface. Use the chkconfig command to find out which services are running on runlevel 3.

    1
     
    /sbin/chkconfig --list |grep '3:on'
     

    Once you’ve found any unwanted services that are running, disable them using the following command:

    1
     
    chkconfig serviceName off
    @@ -84,4 +84,4 @@
     
    sh /usr/local/ddos/ddos.sh
     
    Restart DDos Deflate
    1
     
    sh /usr/local/ddos/ddos.sh -c
    -

    14. Install DenyHosts

    DenyHosts is a security tool written in python that monitors server access logs to prevent brute force attacks on a virtual server. The program works by banning IP addresses that exceed a certain number of failed login attempts.


This list is not yet complete; I am constantly adding new security tips to it. Should you have any you think I should include, please comment below and I will add them.

    This post is licensed under CC BY 4.0 by the author.

    Teamspeak 3 with MySQL on CentOS 6.x (before 3.0.11.1)

    Email Report Virtual Machines with Snapshots

    +

    14. Install DenyHosts

    DenyHosts is a security tool written in python that monitors server access logs to prevent brute force attacks on a virtual server. The program works by banning IP addresses that exceed a certain number of failed login attempts.
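As a rough sketch, assuming the EPEL repository is enabled and still carries the package for your release, installing and enabling it looks like this:

# install DenyHosts from EPEL, enable it at boot and start it
yum -y install denyhosts
chkconfig denyhosts on
service denyhosts start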


This list is not yet complete; I am constantly adding new security tips to it. Should you have any you think I should include, please comment below and I will add them.

    This post is licensed under CC BY 4.0 by the author.

    Teamspeak 3 with MySQL on CentOS 6.x (before 3.0.11.1)

    Email Report Virtual Machines with Snapshots

    diff --git a/posts/check-vm-disk-thick-thin-provisioned/index.html b/posts/check-vm-disk-thick-thin-provisioned/index.html index 7f8f3bab1..18ccc91f1 100644 --- a/posts/check-vm-disk-thick-thin-provisioned/index.html +++ b/posts/check-vm-disk-thick-thin-provisioned/index.html @@ -1,3 +1,3 @@ - How to check if a VM disk is Thick or Thin provisioned | TotalDebug
    Home How to check if a VM disk is Thick or Thin provisioned
    Post
    Cancel

    How to check if a VM disk is Thick or Thin provisioned

    1458086400
    1614629284

    There are multiple ways to tell if a virtual machine has thick or thin provisioned VM Disk. Below are some of the ways I am able to see this information:

    VI Client (thick client)

    • Select the Virtual Machine
    • Choose Edit Settings
    • Select the disk you wish to check
    • look under Type

    Web Client

    Select your Host in Host and Cluster inventory -> Related Objects -> Virtual machines tab

    • Select your Host in Host and Cluster
    • click Related Objects
    • click Virtual machines tab

    PowerCLI

    RvTools

    • Launch RV Tools and enter the vCenter IP Address or Name
    • Enter the login details or tick use windows credentials
    • Go to the “vDisk” tab
    This post is licensed under CC BY 4.0 by the author.

    Bulk configure vCenter Alarms with PowerCLI

    Failed to connect to VMware Lookup Service, SSL certificate verification failed

    diff --git a/posts/cisco-asdm-java-runtime-device-conenction/index.html b/posts/cisco-asdm-java-runtime-device-conenction/index.html index 4243b8af9..1ffae4c82 100644 --- a/posts/cisco-asdm-java-runtime-device-conenction/index.html +++ b/posts/cisco-asdm-java-runtime-device-conenction/index.html @@ -1 +1 @@ - Cisco ASDM Java Runtime Device Conenction | TotalDebug
    Home Cisco ASDM Java Runtime Device Conenction
    Post
    Cancel

    Cisco ASDM Java Runtime Device Conenction

    1392940800
    1614629284

    I have recently had a lot of issues with Cisco ASDM on new installs of Windows 7 and upwards.

    After lots of research and a bit of digging I have found a way to resolve this issue.

    1. Install Java Runtime Environment 6 Update 7

    2. Install ASDM onto the computer

3. Edit the properties of the ASDM shortcut.

4. Change the beginning of the target from:

    C:\windows\system\java.exe

    TO:

    C:\Program Files (x86)\Java\jre1.6.0_07\bin\javaw.exe

    This should resolve the issue with version 7.1(1) not connecting to devices.

    This post is licensed under CC BY 4.0 by the author.

    Setup rSnapshot backups on CentOS

    Migrate TeamSpeak 3 from SQLite to MySQL

    + Cisco ASDM Java Runtime Device Conenction | TotalDebug
    Home Cisco ASDM Java Runtime Device Conenction
    Post
    Cancel

    Cisco ASDM Java Runtime Device Conenction

    1392940800
    1614629284

    I have recently had a lot of issues with Cisco ASDM on new installs of Windows 7 and upwards.

    After lots of research and a bit of digging I have found a way to resolve this issue.

    1. Install Java Runtime Environment 6 Update 7

    2. Install ASDM onto the computer

3. Edit the properties of the ASDM shortcut.

4. Change the beginning of the target from:

    C:\windows\system\java.exe

    TO:

    C:\Program Files (x86)\Java\jre1.6.0_07\bin\javaw.exe

    This should resolve the issue with version 7.1(1) not connecting to devices.

    This post is licensed under CC BY 4.0 by the author.

    Setup rSnapshot backups on CentOS

    Migrate TeamSpeak 3 from SQLite to MySQL

    diff --git a/posts/configuring-homer-dashboard/index.html b/posts/configuring-homer-dashboard/index.html index 0304d9f2e..7abb19fd5 100644 --- a/posts/configuring-homer-dashboard/index.html +++ b/posts/configuring-homer-dashboard/index.html @@ -1,4 +1,4 @@ - Configuring Homer Dashboard | TotalDebug
    Home Configuring Homer Dashboard
    Post
    Cancel

    Configuring Homer Dashboard

    1665911700

    In my last article I talked about how to setup Homer dashboard with Docker, now I will walk through some of the features and how to use them.

    Main Features

Some of Homer's main features are:

    • Yaml file configuration
    • Search
    • Grouping
    • Theme customisation
    • Service Health Checks
    • Keyboard shortcuts

    Configuration

To begin configuration, navigate to the Homer data folder that we created in the previous article, dockerfiles\homer\data. You will store all the files you require here, but first open config.yml.

    The initial configuration gives you an idea of how to layout your dashboard, each section has a great explanation on how to use it.

    One thing that isn’t covered is the service checks, we will look at that later.

    To setup a basic section and URL you would need something like this:

    1
    + Configuring Homer Dashboard | TotalDebug
    Home Configuring Homer Dashboard
    Post
    Cancel

    Configuring Homer Dashboard

    1665911700

    In my last article I talked about how to setup Homer dashboard with Docker, now I will walk through some of the features and how to use them.

    Main Features

Some of Homer's main features are:

    • Yaml file configuration
    • Search
    • Grouping
    • Theme customisation
    • Service Health Checks
    • Keyboard shortcuts

    Configuration

To begin configuration, navigate to the Homer data folder that we created in the previous article, dockerfiles\homer\data. You will store all the files you require here, but first open config.yml.

    The initial configuration gives you an idea of how to layout your dashboard, each section has a great explanation on how to use it.

    One thing that isn’t covered is the service checks, we will look at that later.

    To setup a basic section and URL you would need something like this:

    1
     2
     3
     4
    @@ -18,4 +18,4 @@
             tag: "media"
             url: "https://192.168.1.100:32400"
             target: "_blank"
    -

    To add more items, just copy the first item and change its details for the second service that you wish to link out to.

    For custom icons, you need to add the files to the tools folder and then update the logo line in the configuration.

    I recommend checking out dashboard-icons which contains a huge list of icons that work great with Homer.

    Service Checks

    Additional checks can be added to an item, these are called Custom Services, some applications have direct integration, others can only use ping. A full list of the supported services and how to configure them is listed here

    Custom Themes

    You can add custom CSS to homer in order to have a personal look similar to the one I have used from Walkxcode called homer-theme

    Easier Updates

Sometimes updating via the terminal using nano/vim can be a pain. I personally use VS Code for the majority of my editing, so I set up Remote SSH, which allows me to connect to my Docker server's file system and edit the configuration files directly in VS Code.

Hopefully this information was useful for you. If you have any questions about this article, share your thoughts in the discussion below or head over to my Discord.

    This post is licensed under CC BY 4.0 by the author.

    Homer dashboard with Docker

    Creating a standalone zigbee2mqtt hub with alpine linux

    +

    To add more items, just copy the first item and change its details for the second service that you wish to link out to.

    For custom icons, you need to add the files to the tools folder and then update the logo line in the configuration.

    I recommend checking out dashboard-icons which contains a huge list of icons that work great with Homer.

    Service Checks

    Additional checks can be added to an item, these are called Custom Services, some applications have direct integration, others can only use ping. A full list of the supported services and how to configure them is listed here

    Custom Themes

    You can add custom CSS to homer in order to have a personal look similar to the one I have used from Walkxcode called homer-theme

    Easier Updates

Sometimes updating via the terminal using nano/vim can be a pain. I personally use VS Code for the majority of my editing, so I set up Remote SSH, which allows me to connect to my Docker server's file system and edit the configuration files directly in VS Code.

Hopefully this information was useful for you. If you have any questions about this article, share your thoughts in the discussion below or head over to my Discord.

    This post is licensed under CC BY 4.0 by the author.

    Homer dashboard with Docker

    Creating a standalone zigbee2mqtt hub with alpine linux

    diff --git a/posts/continuous-integration-and-deployment/index.html b/posts/continuous-integration-and-deployment/index.html index 9e5098062..96cd5ffaf 100644 --- a/posts/continuous-integration-and-deployment/index.html +++ b/posts/continuous-integration-and-deployment/index.html @@ -1 +1 @@ - Continuous Integration and Deployment | TotalDebug
    Home Continuous Integration and Deployment
    Post
    Cancel

    Continuous Integration and Deployment

    1570748400
    1655991118

    I have recently been looking into CI and CD, mainly for use at home with my various projects etc. but also to further my knowledge.

Over the years I have built up quite an estate of servers that has become more and more difficult to manage and maintain. Typically I will spend a long time researching and deploying a solution, but when it breaks weeks / months later I struggle to remember how it was all built.

    There must be a better way!

So now I'm looking for the best way to deploy / re-deploy and test all of my servers and services with minimum effort, and without breaking them if I do something wrong.

I started by building out Ansible playbooks, one for each of my servers. This works great for deploying my servers with all the apps that I require; however, it doesn't help with things like Home Assistant configuration changes. If I change my config I have to do it via Atom with a remote plugin that FTPs the files on change. This works… but if I make a mistake I take Home Assistant offline, which doesn't go down well with the family!

    After this I thought how can I update my configuration, keep it backed up, have the ability to roll it back and also test it before I put it on my server?

    So I have now started using GitHub to store my configuration, this gives me a backup in case my server dies and also helps the HA Community see examples of the configuration for their own deployments.

I also want to check the new configuration when it gets committed to Git, but before I download it to Home Assistant; for this I use GitLab. Whenever GitLab detects a commit on the Git repository it starts a pipeline that checks my latest configuration for various things:

    • MarkdownLint - Checks any files with markdown in to make sure it is valid
    • YAMLlint - Checks YAML files for formatting and validation
    • JSONlint - Checks any JSON files for formatting and validation
    • HA Stable / Dev / Beta - My Home Assistant configuration is then checked against the different builds

    By doing all of the above checks I will know that the code works as expected and I can also tell that it will work with all the current releases of HomeAssistant.

    Once the configuration has been checked the pipeline will trigger a webhook back to my Home Assistant server which then pulls the latest commit from GitHub and restarts HomeAssistant.

    Now I have gone from roughly 15 / 30 minutes for testing and troubleshooting, along with potential outages down to around 2 minutes and no long outage for my Home Assistant.

    Conclusion

By doing this I have saved myself 13 / 28 minutes per configuration change; when you add that up over weeks / months of changes, I have very quickly saved a day's worth of configuration work! If you then add the time saved by using Ansible, I can deploy a brand new Home Assistant server, fully configured and functional, in around 10 minutes.

    This post is licensed under CC BY 4.0 by the author.

    UniFi L2TP: set a static IP for a specific user (built-in Radius Server)

    CentOS 8 Teaming with WiFi Hidden SSID using nmcli

    + Continuous Integration and Deployment | TotalDebug
    Home Continuous Integration and Deployment
    Post
    Cancel

    Continuous Integration and Deployment

    1570748400
    1655991118

    I have recently been looking into CI and CD, mainly for use at home with my various projects etc. but also to further my knowledge.

Over the years I have built up quite an estate of servers that has become more and more difficult to manage and maintain. Typically I will spend a long time researching and deploying a solution, but when it breaks weeks / months later I struggle to remember how it was all built.

    There must be a better way!

So now I'm looking for the best way to deploy / re-deploy and test all of my servers and services with minimum effort, and without breaking them if I do something wrong.

I started by building out Ansible playbooks, one for each of my servers. This works great for deploying my servers with all the apps that I require; however, it doesn't help with things like Home Assistant configuration changes. If I change my config I have to do it via Atom with a remote plugin that FTPs the files on change. This works… but if I make a mistake I take Home Assistant offline, which doesn't go down well with the family!

    After this I thought how can I update my configuration, keep it backed up, have the ability to roll it back and also test it before I put it on my server?

    So I have now started using GitHub to store my configuration, this gives me a backup in case my server dies and also helps the HA Community see examples of the configuration for their own deployments.

I also want to check the new configuration when it gets committed to Git, but before I download it to Home Assistant; for this I use GitLab. Whenever GitLab detects a commit on the Git repository it starts a pipeline that checks my latest configuration for various things:

    • MarkdownLint - Checks any files with markdown in to make sure it is valid
    • YAMLlint - Checks YAML files for formatting and validation
    • JSONlint - Checks any JSON files for formatting and validation
    • HA Stable / Dev / Beta - My Home Assistant configuration is then checked against the different builds

    By doing all of the above checks I will know that the code works as expected and I can also tell that it will work with all the current releases of HomeAssistant.

    Once the configuration has been checked the pipeline will trigger a webhook back to my Home Assistant server which then pulls the latest commit from GitHub and restarts HomeAssistant.
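The script behind that webhook isn't shown in this post; as a minimal sketch (the configuration path and service name below are assumptions, not my actual setup), it could be as simple as:

#!/bin/bash
# pull the latest committed configuration and restart Home Assistant
cd /home/homeassistant/.homeassistant || exit 1
git pull origin master
sudo systemctl restart home-assistant@homeassistant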

    Now I have gone from roughly 15 / 30 minutes for testing and troubleshooting, along with potential outages down to around 2 minutes and no long outage for my Home Assistant.

    Conclusion

By doing this I have saved myself 13 / 28 minutes per configuration change; when you add that up over weeks / months of changes, I have very quickly saved a day's worth of configuration work! If you then add the time saved by using Ansible, I can deploy a brand new Home Assistant server, fully configured and functional, in around 10 minutes.

    This post is licensed under CC BY 4.0 by the author.

    UniFi L2TP: set a static IP for a specific user (built-in Radius Server)

    CentOS 8 Teaming with WiFi Hidden SSID using nmcli

    diff --git a/posts/cookiecutter-automate-project-creation/index.html b/posts/cookiecutter-automate-project-creation/index.html index 447321098..5c6822a6a 100644 --- a/posts/cookiecutter-automate-project-creation/index.html +++ b/posts/cookiecutter-automate-project-creation/index.html @@ -1,4 +1,4 @@ - Cookiecutter: Automate project creation! | TotalDebug
    Home Cookiecutter: Automate project creation!
    Post
    Cancel

    Cookiecutter: Automate project creation!

    1636498800
    1666884241

As I move closer to the world of development in my career, I have been looking for more efficient ways to spend my time, and for ways to help my colleagues and myself follow the programming, documentation and best-practice standards we have set.

When we create a new project there are many repetitive tasks, such as creating pyproject.toml, directory structures and documentation folders. These tasks are time-consuming and prone to user error.

    Some context

Starting a repository for a new project is always a chore, especially when working in large teams where others are collaborating with you. You have to follow the same standards and coding practices to ensure all developers know what is happening.

    Working in large teams means that with many different projects and repositories it is very likely that none of them will follow the same base structure that is expected. To help alleviate this problem and fulfil these expectations I created project templates that anyone can follow to ensure all base projects are the same.

    What is Cookiecutter

    Cookiecutter is a CLI tool built in Python that creates a project from boilerplate templates (mainly available on Github). It uses the templating system Jinja2 to replace and customize folders and/or files names, as well as their content.

    Although built with Python, you are not limited to templating Python projects, it can easily be implemented with other programming languages. However, to do this you will need to know or learn some Jinja and if you want to implement hooks this will need to be done in Python.

    Why use cookiecutter

    Well simply put, to save time building new project repositories, to avoid missing files or commit checks and probably one important step, to make life easier for new team members who will be expected to create projects.

We also use it as a way to enforce standards, providing the developer with the structure needed to ensure the rules are followed: write documentation, run tests, follow specific syntax standards. By giving them that base structure as boilerplate, it becomes much easier for developers to stay within the standards.

    In certain projects you may have a lot of repetitive code, such as creating Flask websites, with a cookiecutter template, you would be able to duplicate that code with ease and little time spent.

    How to use Cookiecutter

    Cookiecutter is super simple to use, you can either use one of the many templates that already exist online, or you can create one that suits your own needs.

    You can access templates from various locations:

    • Git repository
    • Local folder
    • Zip file

    If working with Git repositories, you can even start a template from any branch!

To try out Cookiecutter, it first needs to be installed:

    1
    + Cookiecutter: Automate project creation! | TotalDebug
    Home Cookiecutter: Automate project creation!
    Post
    Cancel

    Cookiecutter: Automate project creation!

    1636498800
    1666884241

As I move closer to the world of development in my career, I have been looking for more efficient ways to spend my time, and for ways to help my colleagues and myself follow the programming, documentation and best-practice standards we have set.

When we create a new project there are many repetitive tasks, such as creating pyproject.toml, directory structures and documentation folders. These tasks are time-consuming and prone to user error.

    Some context

Starting a repository for a new project is always a chore, especially when working in large teams where others are collaborating with you. You have to follow the same standards and coding practices to ensure all developers know what is happening.

    Working in large teams means that with many different projects and repositories it is very likely that none of them will follow the same base structure that is expected. To help alleviate this problem and fulfil these expectations I created project templates that anyone can follow to ensure all base projects are the same.

    What is Cookiecutter

    Cookiecutter is a CLI tool built in Python that creates a project from boilerplate templates (mainly available on Github). It uses the templating system Jinja2 to replace and customize folders and/or files names, as well as their content.

    Although built with Python, you are not limited to templating Python projects, it can easily be implemented with other programming languages. However, to do this you will need to know or learn some Jinja and if you want to implement hooks this will need to be done in Python.

    Why use cookiecutter

    Well simply put, to save time building new project repositories, to avoid missing files or commit checks and probably one important step, to make life easier for new team members who will be expected to create projects.

We also use it as a way to enforce standards, providing the developer with the structure needed to ensure the rules are followed: write documentation, run tests, follow specific syntax standards. By giving them that base structure as boilerplate, it becomes much easier for developers to stay within the standards.

    In certain projects you may have a lot of repetitive code, such as creating Flask websites, with a cookiecutter template, you would be able to duplicate that code with ease and little time spent.

    How to use Cookiecutter

    Cookiecutter is super simple to use, you can either use one of the many templates that already exist online, or you can create one that suits your own needs.

    You can access templates from various locations:

    • Git repository
    • Local folder
    • Zip file

    If working with Git repositories, you can even start a template from any branch!
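For example, assuming Cookiecutter is already installed (covered below) and the template repository has a hypothetical develop branch, that branch can be selected with the --checkout flag:

# generate a project from a specific branch of the template
cookiecutter gh:totaldebug/python-package-template --checkout develop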

To try out Cookiecutter, it first needs to be installed:

    1
     
    pip install -U cookiecutter
     

    Once installed run the following command:

    1
     
    cookiecutter gh:totaldebug/python-package-template
    @@ -62,4 +62,4 @@
     
     if not re.match(MODULE_REGEX, module_name):
         print('ERROR: The project slug (%s) is not a valid Python module name. Please do not use a - and use _ instead' % module_name)
    -

    As you can see from the examples, you can either create a very simple template or add Jinja / Python for more complex and error validation.

    Final Thoughts

Cookiecutter has saved me a lot of time in the creation of projects; a lot of the boring template work is taken out of starting a new project, which is always a bonus.

    Now all of my projects start in a good standard and should be easier to keep that way.

    If you would like to check out cookiecutter you could start by checking my python-package-template

    I have added things like Github actions and pre-commits to check work along with other python best practices that I hope to cover in my next article.

Hopefully some of this information was useful for you. If you have any questions about this article or want to share your thoughts, head over to my Discord.

    This post is licensed under CC BY 4.0 by the author.

    Sqitch, Sensible database change management

    Creating the perfect Python project

    +

    As you can see from the examples, you can either create a very simple template or add Jinja / Python for more complex and error validation.

    Final Thoughts

Cookiecutter has saved me a lot of time in the creation of projects; a lot of the boring template work is taken out of starting a new project, which is always a bonus.

    Now all of my projects start in a good standard and should be easier to keep that way.

    If you would like to check out cookiecutter you could start by checking my python-package-template

    I have added things like Github actions and pre-commits to check work along with other python best practices that I hope to cover in my next article.

Hopefully some of this information was useful for you. If you have any questions about this article or want to share your thoughts, head over to my Discord.

    This post is licensed under CC BY 4.0 by the author.

    Sqitch, Sensible database change management

    Creating the perfect Python project

    diff --git a/posts/creating-standalone-zigbee2mqtt-hub-with-alpine-linux/index.html b/posts/creating-standalone-zigbee2mqtt-hub-with-alpine-linux/index.html index f7beeb37e..11215b168 100644 --- a/posts/creating-standalone-zigbee2mqtt-hub-with-alpine-linux/index.html +++ b/posts/creating-standalone-zigbee2mqtt-hub-with-alpine-linux/index.html @@ -1,4 +1,4 @@ - Creating a standalone zigbee2mqtt hub with alpine linux | TotalDebug
    Home Creating a standalone zigbee2mqtt hub with alpine linux
    Post
    Cancel

    Creating a standalone zigbee2mqtt hub with alpine linux

    1671113580

I have begun sorting out my smart home again. I let it go to ruin a year or so ago, and now that I'm getting solar installed I want to increase my automation to make life easier and to utilise the solar more efficiently once it's in.

    As part of my automation I used to run deconz with some zigbee IKEA Tradfri lights around the house, I found deconz limiting at the time and it doesn’t seem to have progressed much, whereas zigbee2mqtt seems to have moved a long way and has a lot of support.

I also had the issue that Home Assistant now runs on a virtual machine in my loft, where the ConBee II signal didn't reach my devices. To combat this I wanted to utilise an old Raspberry Pi and create a Zigbee hub that is easy to maintain in a set-and-forget fashion: if it stops working, reboot it and it works again.

    This is when I came up with ZigQt, an Alpine overlay that will fully configure a Zigbee2mqtt controller on a Raspberry Pi in a stateless manner. Through this article I will show you how to setup this great little ZigQt hub

    Hardware

    For this I have used the following hardware:

    • Raspberry Pi 3b plus
    • POE+ Hat (Optional)
    • Micro SD Card
    • Conbee II (can use other zigbee dongles)

    OS Installation

    For the OS I have used Alpine Linux, by default Alpine is a diskless OS, meaning it loads the whole OS into memory and this makes it lightning fast.

    Create a bootable MicroSD card with two partitions

    The goal is to have a MicroSD card containing two partitions:

    • The system partition: A fat32 partition, with boot and lba flags, on a small part of the MicroSD card, enough to store the system and the applications (suggested 512MB to 2GB).
    • The storage partition: A ext4 partition occupying the rest of the MicroSD card capacity, to use as persistent storage for any configuration data that may be needed.

Creating the partitions (assuming you're using Linux)

    Mount the SD card (this should be automated, if not, you probably know how to do that and you probably don’t need that tutorial)

    List your disks:

    1
    + Creating a standalone zigbee2mqtt hub with alpine linux | TotalDebug
    Home Creating a standalone zigbee2mqtt hub with alpine linux
    Post
    Cancel

    Creating a standalone zigbee2mqtt hub with alpine linux

    1671113580

I have begun sorting out my smart home again. I let it go to ruin a year or so ago, and now that I'm getting solar installed I want to increase my automation to make life easier and to utilise the solar more efficiently once it's in.

    As part of my automation I used to run deconz with some zigbee IKEA Tradfri lights around the house, I found deconz limiting at the time and it doesn’t seem to have progressed much, whereas zigbee2mqtt seems to have moved a long way and has a lot of support.

I also had the issue that Home Assistant now runs on a virtual machine in my loft, where the ConBee II signal didn't reach my devices. To combat this I wanted to utilise an old Raspberry Pi and create a Zigbee hub that is easy to maintain in a set-and-forget fashion: if it stops working, reboot it and it works again.

    This is when I came up with ZigQt, an Alpine overlay that will fully configure a Zigbee2mqtt controller on a Raspberry Pi in a stateless manner. Through this article I will show you how to setup this great little ZigQt hub

    Hardware

    For this I have used the following hardware:

    • Raspberry Pi 3b plus
    • POE+ Hat (Optional)
    • Micro SD Card
    • Conbee II (can use other zigbee dongles)

    OS Installation

    For the OS I have used Alpine Linux, by default Alpine is a diskless OS, meaning it loads the whole OS into memory and this makes it lightning fast.

    Create a bootable MicroSD card with two partitions

    The goal is to have a MicroSD card containing two partitions:

    • The system partition: A fat32 partition, with boot and lba flags, on a small part of the MicroSD card, enough to store the system and the applications (suggested 512MB to 2GB).
    • The storage partition: A ext4 partition occupying the rest of the MicroSD card capacity, to use as persistent storage for any configuration data that may be needed.

Creating the partitions (assuming you're using Linux)

    Mount the SD card (this should be automated, if not, you probably know how to do that and you probably don’t need that tutorial)

    List your disks:

    1
     2
     
    sudo fdisk -l
     Disk /dev/sda: 7624 MB, 7994343424 bytes, 15613952 sectors
    @@ -56,4 +56,4 @@
       address 10.42.0.10
       netmask 255.255.255.0
       gateway 10.42.0.1
    -

The lo interface is recommended, but after it you only need to add the specific interface you plan on using, e.g. eth0, wlan0 or usb0.

    interfaces sample

    Wireless Network Configuration

    If using wireless, you will need to create a wpa_supplicant.conf file.

    Zigbee2MQTT Configuration

A default zigbee2mqtt configuration is created during install; however, this may not suit your needs, in which case you can create a custom configuration.yaml file. Further configuration options can be found here

    Further customisation

    This repository may be forked/cloned/downloaded. The main script file is headless.sh. Execute ./make.sh to rebuild zigqt.apkovl.tar.gz with any of the changes made.

    On your Pi

    Initial Boot

    Each time the hub reboots, the initial boot sequence will be run, this ensures that the OS is the same on every boot greatly reducing the risk of changes to the OS causing issues with the hub.

    The following directories are mapped to persistent storage:

    • /var
    • /etc/zigbee2mqtt

    This ensures certain configuration is not lost on reboot.

    User/Password management

The root user has no password by default. It isn't currently possible to update the password without breaking the way the overlay works; however, in theory you could launch a copy of Alpine Linux without the zigqt overlay, set up a password and an alternative user, run lbu commit to save the changes and then merge the required files with those in zigqt.apkovl.tar.gz

    If I manage to figure out an easier way to do this I will be sure to update this article.

    Zigbee2mqtt

    If everything has worked, zigbee2mqtt should be accessible at the following address: http://zigqt.local:8080.

    Any configuration changes made in the web interface will be saved to the persistent storage, so will still be in effect after a reboot.

    Updates

    To update to newer versions, simply reboot, the latest available zigbee2mqtt will be installed.

    Final thoughts

    At the moment this is the best solution I could think of to provide a fully functioning and maintenance free version of Zigbee2mqtt on a standalone Raspberry Pi.

    I hope to have a solution for the user and password management someday, but if you know a way to get around this please do let me know.

    This post is licensed under CC BY 4.0 by the author.

    Configuring Homer Dashboard

    Home Assistant medication notification using Node-RED

    +

The lo interface is recommended, but after it you only need to add the specific interface you plan on using, e.g. eth0, wlan0 or usb0.

    interfaces sample

    Wireless Network Configuration

    If using wireless, you will need to create a wpa_supplicant.conf file.
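A minimal sketch of that file, generated with wpa_passphrase (the SSID and passphrase are placeholders, and the file's final location depends on the zigqt repository's instructions):

# write a basic wpa_supplicant.conf containing the hashed pre-shared key
wpa_passphrase "MySSID" "MyPassphrase" > wpa_supplicant.conf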

    Zigbee2MQTT Configuration

A default zigbee2mqtt configuration is created during install; however, this may not suit your needs, in which case you can create a custom configuration.yaml file. Further configuration options can be found here

    Further customisation

    This repository may be forked/cloned/downloaded. The main script file is headless.sh. Execute ./make.sh to rebuild zigqt.apkovl.tar.gz with any of the changes made.

    On your Pi

    Initial Boot

    Each time the hub reboots, the initial boot sequence will be run, this ensures that the OS is the same on every boot greatly reducing the risk of changes to the OS causing issues with the hub.

    The following directories are mapped to persistent storage:

    • /var
    • /etc/zigbee2mqtt

    This ensures certain configuration is not lost on reboot.

    User/Password management

The root user has no password by default. It isn't currently possible to update the password without breaking the way the overlay works; however, in theory you could launch a copy of Alpine Linux without the zigqt overlay, set up a password and an alternative user, run lbu commit to save the changes and then merge the required files with those in zigqt.apkovl.tar.gz

    If I manage to figure out an easier way to do this I will be sure to update this article.

    Zigbee2mqtt

    If everything has worked, zigbee2mqtt should be accessible at the following address: http://zigqt.local:8080.

    Any configuration changes made in the web interface will be saved to the persistent storage, so will still be in effect after a reboot.

    Updates

    To update to newer versions, simply reboot, the latest available zigbee2mqtt will be installed.

    Final thoughts

    At the moment this is the best solution I could think of to provide a fully functioning and maintenance free version of Zigbee2mqtt on a standalone Raspberry Pi.

    I hope to have a solution for the user and password management someday, but if you know a way to get around this please do let me know.

    This post is licensed under CC BY 4.0 by the author.

    Configuring Homer Dashboard

    Home Assistant medication notification using Node-RED

    diff --git a/posts/creating-the-perfect-python-project/index.html b/posts/creating-the-perfect-python-project/index.html index 12b282d4c..adc51296b 100644 --- a/posts/creating-the-perfect-python-project/index.html +++ b/posts/creating-the-perfect-python-project/index.html @@ -1,4 +1,4 @@ - Creating the perfect Python project | TotalDebug
    Home Creating the perfect Python project
    Post
    Cancel

    Creating the perfect Python project

    1647990000
    1655154889

Working on a new project, it's always exciting to jump straight in and get coding without any setup time. However, spending a small amount of time setting the project up with the best tools and practices will lead to a standardised and aligned coding experience for developers.

    In this article I will go through what I consider to be the best python project setup. Please follow along, or if you prefer to jump straight in, you can use cookiecutter to generate a new project following these standards, install poetry then create a new project.

    Poetry: Dependency Management

    Poetry is a Python dependency management and packaging system that makes package management easy!

    Poetry comes with all the features you would require to manage a project’s packages, it removes the need to freeze and potentially include packages that are not required for the specific project. Poetry only adds the libraries that you require for that specific project.

    No more need for the unmanageable requirements.txt file.

    Poetry will also add a venv to ensure only the required packages are loaded. with one simple command poetry shell you enter the venv with all the required packages.

Let's get set up with Poetry:

    1
    + Creating the perfect Python project | TotalDebug
    Home Creating the perfect Python project
    Post
    Cancel

    Creating the perfect Python project

    1647990000
    1655154889

Working on a new project, it's always exciting to jump straight in and get coding without any setup time. However, spending a small amount of time setting the project up with the best tools and practices will lead to a standardised and aligned coding experience for developers.

    In this article I will go through what I consider to be the best python project setup. Please follow along, or if you prefer to jump straight in, you can use cookiecutter to generate a new project following these standards, install poetry then create a new project.

    Poetry: Dependency Management

    Poetry is a Python dependency management and packaging system that makes package management easy!

    Poetry comes with all the features you would require to manage a project’s packages, it removes the need to freeze and potentially include packages that are not required for the specific project. Poetry only adds the libraries that you require for that specific project.

    No more need for the unmanageable requirements.txt file.

    Poetry will also add a venv to ensure only the required packages are loaded. with one simple command poetry shell you enter the venv with all the required packages.

Let's get set up with Poetry:
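A rough sketch of that setup, assuming the official installer script and using a placeholder project name:

# install Poetry itself
curl -sSL https://install.python-poetry.org | python3 -

# create a project skeleton, add a dependency and enter the virtualenv
poetry new my-project
cd my-project
poetry add requests
poetry shell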

    1
     2
     3
     4
    @@ -216,4 +216,4 @@
     2
     
poetry add pre-commit --dev
     pre-commit install -t pre-commit -t commit-msg
    -

    Now when you run a commit you will see each hook running, this will then show any errors prior to committing, you can then fix the issues and try the commit again.

You can also see I have conventional-pre-commit applied with the -t commit-msg flag; this enforces the use of conventional commit messages for all commits, ensuring that our commit messages all follow the same standard.

    Example pre-commit Output

    Final Thoughts

This method of utilising Cookiecutter and pre-commit hooks has saved me a lot of time. I think there is more to be explored with pre-commit hooks, such as running tests for my code; that will come with time on my development journey.

With these methods I know my commit messages are tidy and my code is cleaner than before. It's a great start, with more to come.

    I also execute these as github actions on my projects, that way anyone else who contributes but doesn’t install the pre-commit hooks will be held accountable to resolve any issues prior to merging their pull requests.

Hopefully some of this information was useful for you. If you have any questions about this article or want to share your thoughts, head over to my Discord.

    This post is licensed under CC BY 4.0 by the author.

    Cookiecutter: Automate project creation!

    Type hinting and checking in Python

    +

    Now when you run a commit you will see each hook running, this will then show any errors prior to committing, you can then fix the issues and try the commit again.

You can also see I have conventional-pre-commit applied with the -t commit-msg flag; this enforces the use of conventional commit messages for all commits, ensuring that our commit messages all follow the same standard.

    Example pre-commit Output

    Final Thoughts

This method of utilising Cookiecutter and pre-commit hooks has saved me a lot of time. I think there is more to be explored with pre-commit hooks, such as running tests for my code; that will come with time on my development journey.

With these methods I know my commit messages are tidy and my code is cleaner than before. It's a great start, with more to come.

    I also execute these as github actions on my projects, that way anyone else who contributes but doesn’t install the pre-commit hooks will be held accountable to resolve any issues prior to merging their pull requests.

Hopefully some of this information was useful for you. If you have any questions about this article or want to share your thoughts, head over to my Discord.

    This post is licensed under CC BY 4.0 by the author.

    Cookiecutter: Automate project creation!

    Type hinting and checking in Python

    diff --git a/posts/dell-vmware-5-5-fcoe-errors/index.html b/posts/dell-vmware-5-5-fcoe-errors/index.html index 798a9bd9f..bba6bec5d 100644 --- a/posts/dell-vmware-5-5-fcoe-errors/index.html +++ b/posts/dell-vmware-5-5-fcoe-errors/index.html @@ -1,4 +1,4 @@ - Dell VMware 5.5 FCoE Errors | TotalDebug
    Home Dell VMware 5.5 FCoE Errors
    Post
    Cancel

    Dell VMware 5.5 FCoE Errors

    1437346800
    1614629284

    Recently I have seen an issue after upgrading some of our Dell R6xx hosts to 5.5 U2, they started showing FCoE in the storage adapters and booting took a really long time.

    I looked into this and found that the latest Dell ESXi image also includes Drivers and scripts that enable the FCoE interfaces on cards that support it.

    To see if you have this problem check the below steps:

On boot, press ALT + F12; this shows what ESXi is doing during boot. You will then begin to see the following errors multiple times:

    1
    + Dell VMware 5.5 FCoE Errors | TotalDebug
    Home Dell VMware 5.5 FCoE Errors
    Post
    Cancel

    Dell VMware 5.5 FCoE Errors

    1437346800
    1614629284

    Recently I have seen an issue after upgrading some of our Dell R6xx hosts to 5.5 U2, they started showing FCoE in the storage adapters and booting took a really long time.

    I looked into this and found that the latest Dell ESXi image also includes Drivers and scripts that enable the FCoE interfaces on cards that support it.

    To see if you have this problem check the below steps:

On boot, press ALT + F12; this shows what ESXi is doing during boot. You will then begin to see the following errors multiple times:

    1
     2
     
    FIP VLAN ID unavail. Retry VLAN discovery
     fcoe_ctlr_vlan_request() is done
    @@ -12,4 +12,4 @@
     rm 99bnx2fc.sh
     esxcli fcoe nic disable -n=vmnic4
     esxcli fcoe nic disable -n=vmnic5
    -

    This will remove the FCoE VIB, delete a script that runs to check for the VIB and then disable fcoe on the vmnics required.

    Hopefully this will help someone else as it took me a long time to find this solution and resolve the issue.

    This post is licensed under CC BY 4.0 by the author.

    Add vCenter Logs to Syslog Server (GrayLog2)

    VMware Large Snapshot Safe Removal

    +

    This will remove the FCoE VIB, delete a script that runs to check for the VIB and then disable fcoe on the vmnics required.
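To confirm the change has taken effect after a reboot, the FCoE namespace in esxcli can be queried; assuming the fix worked, both of these should come back empty:

esxcli fcoe adapter list
esxcli fcoe nic list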

    Hopefully this will help someone else as it took me a long time to find this solution and resolve the issue.

    This post is licensed under CC BY 4.0 by the author.

    Add vCenter Logs to Syslog Server (GrayLog2)

    VMware Large Snapshot Safe Removal

    diff --git a/posts/deploy-exe-using-batch-check-os-version-and-if-the-update-is-already-installed/index.html b/posts/deploy-exe-using-batch-check-os-version-and-if-the-update-is-already-installed/index.html index 7e815818b..b7451ad7f 100644 --- a/posts/deploy-exe-using-batch-check-os-version-and-if-the-update-is-already-installed/index.html +++ b/posts/deploy-exe-using-batch-check-os-version-and-if-the-update-is-already-installed/index.html @@ -1 +1 @@ - Deploy .exe using batch check os version and if the update is already installed. | TotalDebug
    Home Deploy .exe using batch check os version and if the update is already installed.
    Post
    Cancel

    Deploy .exe using batch check os version and if the update is already installed.

    1328659200
    1614629284

OK, so I had an issue: Microsoft released an update for Windows XP that I needed to install, but they didn't provide an MSI, so I couldn't deploy it using GPO, which was a real pain.

    Instead I created a script that would check the OS Version and see if the update was already installed.

1. First we hide the script from users: @ECHO Off
2. Then we check they are running the correct OS (for Windows 7 the string would be “Version 6.1”): ver | find "Windows XP" >NUL if errorlevel 1 goto end
3. Check to see if the update is already installed (change the reg location depending on the update): reg QUERY “HKEY\_LOCAL\_MACHINE\SOFTWARE\Microsoft\Updates\Windows XP\SP20\KB943729” >NUL 2>NUL if errorlevel 1 goto install_update goto end
4. Then, if it is the correct OS and the update isn’t installed, run the exe: :install_update \\PUT\_YOUR\_SHARE\_PATH\_HERE\Windows-KB943729-x86-ENU.exe /passive /norestart
5. End (this is added so that the script stops if the criteria are not met, preventing errors): :end
6. You can then add this to a Group Policy so it can be deployed
    This post is licensed under CC BY 4.0 by the author.

    Active Sync Error EventID 3005 Unexpected Exchange Mailbox Server Error

    Upgrading a Cisco Catalyst 3560 Switch

    + Deploy .exe using batch check os version and if the update is already installed. | TotalDebug
    Home Deploy .exe using batch check os version and if the update is already installed.
    Post
    Cancel

    Deploy .exe using batch check os version and if the update is already installed.

    1328659200
    1614629284

OK, so I had an issue: Microsoft released an update for Windows XP that I needed to install, but they didn't provide an MSI, so I couldn't deploy it using GPO, which was a real pain.

    Instead I created a script that would check the OS Version and see if the update was already installed.

1. First we hide the script output from users: @ECHO OFF
2. Then we check that the correct OS is running (for Windows 7 you would match “Version 6.1” instead): ver | find "Windows XP" >NUL followed by if errorlevel 1 goto end
3. Check whether the update is already installed (change the reg location depending on the update): reg QUERY "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Updates\Windows XP\SP20\KB943729" >NUL 2>NUL then if errorlevel 1 goto install_update and goto end
4. Then, if it is the correct OS and the update isn’t installed, run the exe: :install_update \\PUT_YOUR_SHARE_PATH_HERE\Windows-KB943729-x86-ENU.exe /passive /norestart
5. End (this is added so that the script stops if the criteria are not met before the update is installed, preventing errors): :end
6. You can then add this to a Group Policy to allow it to be deployed; the complete script is sketched below
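Putting the steps together, a minimal sketch of the whole batch file might look like this (the share path and KB registry key are the same placeholders used in the steps above):

@ECHO OFF
REM Only continue on Windows XP
ver | find "Windows XP" >NUL
if errorlevel 1 goto end
REM Skip the install if the update's registry key already exists
reg QUERY "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Updates\Windows XP\SP20\KB943729" >NUL 2>NUL
if errorlevel 1 goto install_update
goto end
:install_update
\\PUT_YOUR_SHARE_PATH_HERE\Windows-KB943729-x86-ENU.exe /passive /norestart
:end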
    This post is licensed under CC BY 4.0 by the author.

    Active Sync Error EventID 3005 Unexpected Exchange Mailbox Server Error

    Upgrading a Cisco Catalyst 3560 Switch

    diff --git a/posts/docker-install-on-centos-basic-docker-commands/index.html b/posts/docker-install-on-centos-basic-docker-commands/index.html index eeeea9b19..62fcec90b 100644 --- a/posts/docker-install-on-centos-basic-docker-commands/index.html +++ b/posts/docker-install-on-centos-basic-docker-commands/index.html @@ -1 +1 @@ - Docker install on CentOS & basic Docker commands | TotalDebug
    Home Docker install on CentOS & basic Docker commands
    Post
    Cancel

    Docker install on CentOS & basic Docker commands

    1526849787
    1680258820

    In this video I will take you through installing Docker on CentOS and some of the most common basic commands you will need to work with Docker.

    This post is licensed under CC BY 4.0 by the author.

    What is Docker? - Overview

    vCloud Director 8.10 – Renew SSL Certificates

    + Docker install on CentOS & basic Docker commands | TotalDebug
    Home Docker install on CentOS & basic Docker commands
    Post
    Cancel

    Docker install on CentOS & basic Docker commands

    1526849787
    1680258820

    In this video I will take you through installing Docker on CentOS and some of the most common basic commands you will need to work with Docker.
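As a rough outline of the sort of commands covered (this sketch assumes the docker-ce repository; the video may use a slightly different method):

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce
sudo systemctl enable --now docker
docker run hello-world   # verify the install works
docker ps -a             # list all containers
docker images            # list local images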

    This post is licensed under CC BY 4.0 by the author.

    What is Docker? - Overview

    vCloud Director 8.10 – Renew SSL Certificates

    diff --git a/posts/docker-overlay2-with-centos-for-production/index.html b/posts/docker-overlay2-with-centos-for-production/index.html index e92bf2950..c373d249e 100644 --- a/posts/docker-overlay2-with-centos-for-production/index.html +++ b/posts/docker-overlay2-with-centos-for-production/index.html @@ -1,4 +1,4 @@ - Docker Overlay2 with CentOS for production | TotalDebug
    Home Docker Overlay2 with CentOS for production
    Post
    Cancel

    Docker Overlay2 with CentOS for production

    1588633200
    1655154889

The following short article runs through how to set up Docker to use overlay2 with CentOS for use in production.

    Pre-Requisites

• Add an extra drive to CentOS (this could also be free space on the existing disk)
    • Have docker installed (services stopped)

    Setup

    First we need to find our new disk:

    + Docker Overlay2 with CentOS for production | TotalDebug
    Home Docker Overlay2 with CentOS for production
    Post
    Cancel

    Docker Overlay2 with CentOS for production

    1588633200
    1655154889

The following short article runs through how to set up Docker to use overlay2 with CentOS for use in production.

    Pre-Requisites

• Add an extra drive to CentOS (this could also be free space on the existing disk)
    • Have docker installed (services stopped)

    Setup

    First we need to find our new disk:

fdisk -l

Once we have our new disk, we can start to create our logical volume:
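The exact commands are elided in this diff, but a typical sequence looks something like the following (the device name /dev/sdb and the volume group and volume names are assumptions; adjust them for your host):

pvcreate /dev/sdb
vgcreate docker /dev/sdb
lvcreate -l 100%FREE -n dockerlv docker
mkfs.xfs -n ftype=1 /dev/docker/dockerlv   # ftype=1 is required for overlay2 on XFS
mkdir -p /var/lib/docker
mount /dev/docker/dockerlv /var/lib/docker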

    @@ -20,4 +20,4 @@
     
    systemctl start docker
     

To test that this has worked, run the following; you should see that you are now using overlay2 as the storage driver:

docker info
    -
    This post is licensed under CC BY 4.0 by the author.

    3d Printer Axes Calibration

    Use GitHub pages with unsupported plugins

    +
    This post is licensed under CC BY 4.0 by the author.

    3d Printer Axes Calibration

    Use GitHub pages with unsupported plugins

    diff --git a/posts/email-report-virtual-machines-snapshots/index.html b/posts/email-report-virtual-machines-snapshots/index.html index a381ef3a6..f7ba37459 100644 --- a/posts/email-report-virtual-machines-snapshots/index.html +++ b/posts/email-report-virtual-machines-snapshots/index.html @@ -1,3 +1,3 @@ - Email Report Virtual Machines with Snapshots | TotalDebug
    Home Email Report Virtual Machines with Snapshots
    Post
    Cancel

    Email Report Virtual Machines with Snapshots

    1403046000
    1666884241

I have recently had an issue with people leaving snapshots on VMs for too long, causing large snapshots and poor performance on the virtual machines.

    I decided that I needed a way of reporting on which virtual machines had snapshots present, when they were created and how big they are.

The attached PowerCLI script does just that! It will log on to vCenter, check all of the virtual machines for snapshots, and then send an email report to the email address specified.

This script supports the get-help command and tab completion of parameters.

    Get-VMSnapshotReport

    To use this script simply use the following commands:

    + Email Report Virtual Machines with Snapshots | TotalDebug
    Home Email Report Virtual Machines with Snapshots
    Post
    Cancel

    Email Report Virtual Machines with Snapshots

    1403046000
    1666884241

I have recently had an issue with people leaving snapshots on VMs for too long, causing large snapshots and poor performance on the virtual machines.

    I decided that I needed a way of reporting on which virtual machines had snapshots present, when they were created and how big they are.

The attached PowerCLI script does just that! It will log on to vCenter, check all of the virtual machines for snapshots, and then send an email report to the email address specified.

This script supports the get-help command and tab completion of parameters.

    Get-VMSnapshotReport

    To use this script simply use the following commands:

./Get-VMSnapShotReport.ps1 -vCenter "my.vcenter.com" -user username -password YourPassword  -OlderThan 48 -EmailTo "user@domain.com" -EmailFrom "user@domain.com" -EmailSubject "My Snapshot Report" -EmailServer "mail.domain.com"
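The script itself is attached rather than reproduced here, but the core of the approach is roughly this simplified PowerCLI sketch (not the full script):

# Connect, gather snapshots older than the threshold, then email the report
Connect-VIServer -Server $vCenter -User $user -Password $password
$report = Get-VM | Get-Snapshot |
    Where-Object { $_.Created -lt (Get-Date).AddHours(-$OlderThan) } |
    Select-Object VM, Name, Created, SizeGB
Send-MailMessage -To $EmailTo -From $EmailFrom -Subject $EmailSubject -SmtpServer $EmailServer -Body ($report | Out-String)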
    -

The only thing that I ask is that when using this script you keep my name and website present in the notes. If there are any improvements you think I could make, please let me know.

    This post is licensed under CC BY 4.0 by the author.

    CentOS Server Hardening Tips

    How to setup an NFS mount on CentOS 6

    +

The only thing that I ask is that when using this script you keep my name and website present in the notes. If there are any improvements you think I could make, please let me know.

    This post is licensed under CC BY 4.0 by the author.

    CentOS Server Hardening Tips

    How to setup an NFS mount on CentOS 6

    diff --git a/posts/failed-connect-vmware-lookup-service-ssl-certificate-verification-failed/index.html b/posts/failed-connect-vmware-lookup-service-ssl-certificate-verification-failed/index.html index c91911beb..4a21e1359 100644 --- a/posts/failed-connect-vmware-lookup-service-ssl-certificate-verification-failed/index.html +++ b/posts/failed-connect-vmware-lookup-service-ssl-certificate-verification-failed/index.html @@ -1,5 +1,5 @@ - Failed to connect to VMware Lookup Service, SSL certificate verification failed | TotalDebug
    Home Failed to connect to VMware Lookup Service, SSL certificate verification failed
    Post
    Cancel

    Failed to connect to VMware Lookup Service, SSL certificate verification failed

    1459292400
    1614629284

    Recently I have been playing in my lab with VCSA and vCNS, I found that when I tried to connect to the vCenter I received this error:

    + Failed to connect to VMware Lookup Service, SSL certificate verification failed | TotalDebug
    Home Failed to connect to VMware Lookup Service, SSL certificate verification failed
    Post
    Cancel

    Failed to connect to VMware Lookup Service, SSL certificate verification failed

    1459292400
    1614629284

    Recently I have been playing in my lab with VCSA and vCNS, I found that when I tried to connect to the vCenter I received this error:

    Failed to connect to VMware Lookup Service.
     SSL certificate verification failed.
    -

I was stuck for a little while as to why I was getting this error. Then I noticed that the SSL certificate had a different name to the appliance, because the appliance had been deployed and then renamed. Luckily for me, the fix for this is very simple!

• Go to http://<VCSA address>:5480
• Click the “Admin” tab
• Change “Certificate regeneration enabled” to yes; this is done either with a toggle button to the right or a radio button, depending on the VCSA version.
• Restart the vCenter Appliance
• Once the appliance reboots it will re-generate the certificates
• Change “Certificate regeneration enabled” back to no, again via the toggle or radio button.

Try to reconnect your appliance / application to vCenter and it should now work with no problems.

    This post is licensed under CC BY 4.0 by the author.

    How to check if a VM disk is Thick or Thin provisioned

    vCloud Director 8.0 for Service Providers

    +

I was stuck for a little while as to why I was getting this error. Then I noticed that the SSL certificate had a different name to the appliance, because the appliance had been deployed and then renamed. Luckily for me, the fix for this is very simple!

• Go to http://<VCSA address>:5480
• Click the “Admin” tab
• Change “Certificate regeneration enabled” to yes; this is done either with a toggle button to the right or a radio button, depending on the VCSA version.
• Restart the vCenter Appliance
• Once the appliance reboots it will re-generate the certificates
• Change “Certificate regeneration enabled” back to no, again via the toggle or radio button.

Try to reconnect your appliance / application to vCenter and it should now work with no problems.

    This post is licensed under CC BY 4.0 by the author.

    How to check if a VM disk is Thick or Thin provisioned

    vCloud Director 8.0 for Service Providers

    diff --git a/posts/folder-redirection-permissions-my-documents-start-menu-desktop/index.html b/posts/folder-redirection-permissions-my-documents-start-menu-desktop/index.html index d9aa3e68f..614425cc6 100644 --- a/posts/folder-redirection-permissions-my-documents-start-menu-desktop/index.html +++ b/posts/folder-redirection-permissions-my-documents-start-menu-desktop/index.html @@ -1 +1 @@ - Folder redirection permissions. My Documents / Start Menu / Desktop | TotalDebug
    Home Folder redirection permissions. My Documents / Start Menu / Desktop
    Post
    Cancel

    Folder redirection permissions. My Documents / Start Menu / Desktop

    1343602800
    1666884241

How to correctly set up folder redirection permissions for My Documents, Start Menu and Desktop. I have worked on many company computer systems where this hadn’t been done correctly, resulting in full access to all files and folders; as an outsider I had access to other people’s My Documents from my laptop without even being on the domain! Following this article will stop that happening to your data.

    When creating the redirection share, limit access to the share to only users that need access.

Because redirected folders contain personal information, such as documents and EFS certificates, care should be taken to protect them as well as possible. In general:

    • Restrict the share to only users that need access. Create a security group for users that have redirected folders on a particular share, and limit access to only those users.
    • When creating the share, hide the share by putting a $ after the share name. This will hide the share from casual browsers; the share will not be visible in My Network Places.
    • Only give users the minimum amount of permissions needed. The permissions needed are shown in the tables below:

    Table 12 NTFS Permissions for Folder Redirection Root Folder

    User Account Minimum permissions required
    Creator/Owner Full Control, Subfolders And Files Only
    Administrator None
    Security group of users needing to put data on share. List Folder/Read Data, Create Folders/Append Data – This Folder Only
    Everyone No Permissions
    Local System Full Control, This Folder, Subfolders And Files

    Table 13 Share level (SMB) Permissions for Folder Redirection Share

    User Account Default Permissions Minimum permissions required
    Everyone Full Control No Permissions
    Security group of users needing to put data on share. N/A Full Control,

    Table 14 NTFS Permissions for Each Users Redirected Folder

    User Account Default Permissions Minimum permissions required
    %Username% Full Control, Owner Of Folder Full Control, Owner Of Folder
    Local System Full Control Full Control
    Administrators No Permissions No Permissions
    Everyone No Permissions No Permissions

    Always use the NTFS Filesystem for volumes holding users data.

    For the most secure configuration, configure servers hosting redirected files to use the NTFS File System. Unlike FAT, NTFS supports Discretionary access control lists (DACLs) and system access control lists (SACLs), which control who can perform operations on a file and what events will trigger logging of actions performed on a file.

    Let the system create folders for each user.

    To ensure that Folder Redirection works optimally, create only the root share on the server, and let the system create the folders for each user. Folder Redirection will create a folder for the user with appropriate security.

If you must create folders for the users, ensure that you have the correct permissions set. Also note that if pre-creating folders you must clear the “Grant the user exclusive rights to XXX” checkbox on the Settings tab of the Folder Redirection page. If you don’t clear this checkbox, then Folder Redirection will first check a pre-existing folder to ensure the user is the owner. If the folder is pre-created by the administrator, this check will fail and redirection will be aborted. Folder Redirection will then log an event in the Application event log:

    Error: Folder Redirection

    Event ID: 101

    Event Message:

    Failed to perform redirection of folder XXXX. The new directories for the redirected folder could not be created. The folder is configured to be redirected to \server\share, the final expanded path was \server\share\XXX .

    The following error occurred:

    This security ID may not be assigned as the owner of this object.

    It is strongly recommended that you do not pre-create folders, and allow Folder Redirection to create the folder for the user.

Ensure correct permissions are set if redirecting to a user’s home directory.

Windows Server 2003 and Windows XP allow you to redirect a user’s My Documents folder to their home directory. When redirecting to the home directory, the default security checks are not made – ownership and the existing directory security are not checked, and any existing permissions are not changed – it is assumed that the permissions on the user’s home directory are set appropriately.

If you are redirecting to a user’s home directory, be sure that the permissions on the user’s home directory are set appropriately for your organization.

    This post is licensed under CC BY 4.0 by the author.

    How to turn on automatic logon to a domain with Windows XP, Windows 7 and Server 2008

    Your client does not support opening this list with windows explorer

    + Folder redirection permissions. My Documents / Start Menu / Desktop | TotalDebug
    Home Folder redirection permissions. My Documents / Start Menu / Desktop
    Post
    Cancel

    Folder redirection permissions. My Documents / Start Menu / Desktop

    1343602800
    1666884241

How to correctly set up folder redirection permissions for My Documents, Start Menu and Desktop. I have worked on many company computer systems where this hadn’t been done correctly, resulting in full access to all files and folders; as an outsider I had access to other people’s My Documents from my laptop without even being on the domain! Following this article will stop that happening to your data.

    When creating the redirection share, limit access to the share to only users that need access.

Because redirected folders contain personal information, such as documents and EFS certificates, care should be taken to protect them as well as possible. In general:

    • Restrict the share to only users that need access. Create a security group for users that have redirected folders on a particular share, and limit access to only those users.
    • When creating the share, hide the share by putting a $ after the share name. This will hide the share from casual browsers; the share will not be visible in My Network Places.
    • Only give users the minimum amount of permissions needed. The permissions needed are shown in the tables below:

    Table 12 NTFS Permissions for Folder Redirection Root Folder

    User Account Minimum permissions required
    Creator/Owner Full Control, Subfolders And Files Only
    Administrator None
    Security group of users needing to put data on share. List Folder/Read Data, Create Folders/Append Data – This Folder Only
    Everyone No Permissions
    Local System Full Control, This Folder, Subfolders And Files

    Table 13 Share level (SMB) Permissions for Folder Redirection Share

    User Account Default Permissions Minimum permissions required
    Everyone Full Control No Permissions
    Security group of users needing to put data on share. N/A Full Control,

    Table 14 NTFS Permissions for Each Users Redirected Folder

    User Account Default Permissions Minimum permissions required
    %Username% Full Control, Owner Of Folder Full Control, Owner Of Folder
    Local System Full Control Full Control
    Administrators No Permissions No Permissions
    Everyone No Permissions No Permissions
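As a practical illustration, applying the Table 12 root-folder permissions with icacls would look roughly like this (the path D:\RedirectedFolders and the group name RedirectedUsers are hypothetical; substitute your own):

icacls D:\RedirectedFolders /inheritance:r
icacls D:\RedirectedFolders /grant "CREATOR OWNER:(OI)(CI)(IO)F"
icacls D:\RedirectedFolders /grant "SYSTEM:(OI)(CI)F"
icacls D:\RedirectedFolders /grant "DOMAIN\RedirectedUsers:(RD,AD)"

Note that the redirected-users group gets only list/create rights on the root folder itself, matching the “This Folder Only” requirement, and Everyone is simply not granted anything.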

    Always use the NTFS Filesystem for volumes holding users data.

    For the most secure configuration, configure servers hosting redirected files to use the NTFS File System. Unlike FAT, NTFS supports Discretionary access control lists (DACLs) and system access control lists (SACLs), which control who can perform operations on a file and what events will trigger logging of actions performed on a file.

    Let the system create folders for each user.

    To ensure that Folder Redirection works optimally, create only the root share on the server, and let the system create the folders for each user. Folder Redirection will create a folder for the user with appropriate security.

If you must create folders for the users, ensure that you have the correct permissions set. Also note that if pre-creating folders you must clear the “Grant the user exclusive rights to XXX” checkbox on the Settings tab of the Folder Redirection page. If you don’t clear this checkbox, then Folder Redirection will first check a pre-existing folder to ensure the user is the owner. If the folder is pre-created by the administrator, this check will fail and redirection will be aborted. Folder Redirection will then log an event in the Application event log:

    Error: Folder Redirection

    Event ID: 101

    Event Message:

    Failed to perform redirection of folder XXXX. The new directories for the redirected folder could not be created. The folder is configured to be redirected to \server\share, the final expanded path was \server\share\XXX .

    The following error occurred:

    This security ID may not be assigned as the owner of this object.

    It is strongly recommended that you do not pre-create folders, and allow Folder Redirection to create the folder for the user.

Ensure correct permissions are set if redirecting to a user’s home directory.

Windows Server 2003 and Windows XP allow you to redirect a user’s My Documents folder to their home directory. When redirecting to the home directory, the default security checks are not made – ownership and the existing directory security are not checked, and any existing permissions are not changed – it is assumed that the permissions on the user’s home directory are set appropriately.

If you are redirecting to a user’s home directory, be sure that the permissions on the user’s home directory are set appropriately for your organization.

    This post is licensed under CC BY 4.0 by the author.

    How to turn on automatic logon to a domain with Windows XP, Windows 7 and Server 2008

    Your client does not support opening this list with windows explorer

    diff --git a/posts/fortigate-and-ldap-4-0-mr3-patch1/index.html b/posts/fortigate-and-ldap-4-0-mr3-patch1/index.html index 9277f50a2..bde4ed26b 100644 --- a/posts/fortigate-and-ldap-4-0-mr3-patch1/index.html +++ b/posts/fortigate-and-ldap-4-0-mr3-patch1/index.html @@ -1 +1 @@ - Fortigate and LDAP 4.0 MR3 Patch1 | TotalDebug
    Home Fortigate and LDAP 4.0 MR3 Patch1
    Post
    Cancel

    Fortigate and LDAP 4.0 MR3 Patch1

    1315782000
    1614629284

    Hi Guys,

I have been setting up a lot of FortiGates recently, and on my first few I had issues with the LDAP settings. I found it tricky to remember the correct settings, and typing out the long LDAP strings can also be fiddly and cause typos.

1. Log on to the FortiGate and go to Users -> Remote -> LDAP (Create New)
2. Fill in a Name for the connector
3. Fill in the IP Address of the server that has LDAP installed
4. Change the Common Name Identifier to: sAMAccountName
5. Enter the Distinguished Name; if your domain was domain.local the distinguished name would be: DC=domain,DC=local
6. Make your Bind Type Regular
7. In the User DN box you must type the full path to the user, e.g. if your user is domain.local/users/service accounts/fortigate you would need the following: CN=fortigate,OU=Service Accounts,OU=Users,OU=MyBusiness,DC=domain,DC=local
8. Type the password for your service account

This should be all that you require. One thing to keep an eye on is typos in the User DN; these will stop you from being able to log on with the SSL-VPN, or anything else for that matter!

    If you get an error in the logs for SSL-VPN saying no_matching_policy then you will have a typo somewhere.

    This post is licensed under CC BY 4.0 by the author.

    Server 2003 Reinstall Terminal Services Licensing.

    How To View and Kill Processes On Remote Windows Computers

    + Fortigate and LDAP 4.0 MR3 Patch1 | TotalDebug
    Home Fortigate and LDAP 4.0 MR3 Patch1
    Post
    Cancel

    Fortigate and LDAP 4.0 MR3 Patch1

    1315782000
    1614629284

    Hi Guys,

I have been setting up a lot of FortiGates recently, and on my first few I had issues with the LDAP settings. I found it tricky to remember the correct settings, and typing out the long LDAP strings can also be fiddly and cause typos.

1. Log on to the FortiGate and go to Users -> Remote -> LDAP (Create New)
2. Fill in a Name for the connector
3. Fill in the IP Address of the server that has LDAP installed
4. Change the Common Name Identifier to: sAMAccountName
5. Enter the Distinguished Name; if your domain was domain.local the distinguished name would be: DC=domain,DC=local
6. Make your Bind Type Regular
7. In the User DN box you must type the full path to the user, e.g. if your user is domain.local/users/service accounts/fortigate you would need the following: CN=fortigate,OU=Service Accounts,OU=Users,OU=MyBusiness,DC=domain,DC=local
8. Type the password for your service account (the equivalent CLI configuration is sketched below)
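For reference, the same settings from the FortiGate CLI look roughly like this (the connector name and server IP are placeholders):

config user ldap
    edit "MyLDAP"
        set server "192.168.1.10"
        set cnid "sAMAccountName"
        set dn "DC=domain,DC=local"
        set type regular
        set username "CN=fortigate,OU=Service Accounts,OU=Users,OU=MyBusiness,DC=domain,DC=local"
        set password <service-account-password>
    next
end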

This should be all that you require. One thing to keep an eye on is typos in the User DN; these will stop you from being able to log on with the SSL-VPN, or anything else for that matter!

    If you get an error in the logs for SSL-VPN saying no_matching_policy then you will have a typo somewhere.

    This post is licensed under CC BY 4.0 by the author.

    Server 2003 Reinstall Terminal Services Licensing.

    How To View and Kill Processes On Remote Windows Computers

    diff --git a/posts/graylog2-centos-installation/index.html b/posts/graylog2-centos-installation/index.html index 55370103d..6b547c35a 100644 --- a/posts/graylog2-centos-installation/index.html +++ b/posts/graylog2-centos-installation/index.html @@ -1,4 +1,4 @@ - Graylog2 CentOS Installation | TotalDebug
    Home Graylog2 CentOS Installation
    Post
    Cancel

    Graylog2 CentOS Installation

    1421712000
    1614629284

    I recently required a syslog server that was easy to use with a web interface to monitor some customers firewalls. I had been looking at Splunk but due to the price of this product it was not a viable option for what I required.

    After a little searching I came across Graylog2 which is an open source alternative to Splunk and is totally free! You only need to pay if you would like support from them.

    So here is how I setup the server and got it working on my CentOS Server.

    Download and install the Public Signing Key:

    + Graylog2 CentOS Installation | TotalDebug
    Home Graylog2 CentOS Installation
    Post
    Cancel

    Graylog2 CentOS Installation

    1421712000
    1614629284

    I recently required a syslog server that was easy to use with a web interface to monitor some customers firewalls. I had been looking at Splunk but due to the price of this product it was not a viable option for what I required.

    After a little searching I came across Graylog2 which is an open source alternative to Splunk and is totally free! You only need to pay if you would like support from them.

    So here is how I setup the server and got it working on my CentOS Server.

    Download and install the Public Signing Key:

rpm --import https://packages.elasticsearch.org/GPG-KEY-elasticsearch

    Create the following file /etc/yum.repos.d/elasticsearch.repo
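The file contents are not shown in full here, but a typical elasticsearch.repo from that era looked something like this (the 1.4 branch is an assumption; use whichever release your Graylog2 version supports):

[elasticsearch]
name=Elasticsearch repository
baseurl=http://packages.elasticsearch.org/elasticsearch/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1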

    @@ -56,4 +56,4 @@
    service graylog2-server start
     service graylog2-web start
    -

    Troubleshooting

    Logs are stored in the following locations: /var/log/elasticsearch/*.log /var/log/graylog2-server/*.log /var/log/graylog2-web/*.log

Any errors in here should be quite easy to resolve. If you have any issues please let me know and I will assist where possible.

    This post is licensed under CC BY 4.0 by the author.

    Teamspeak 3 with MySQL on CentOS 6.x (3.0.11.1 Onwards)

    Graylog2 Cisco ASA / Cisco Catalyst

    +

    Troubleshooting

    Logs are stored in the following locations: /var/log/elasticsearch/*.log /var/log/graylog2-server/*.log /var/log/graylog2-web/*.log

Any errors in here should be quite easy to resolve. If you have any issues please let me know and I will assist where possible.

    This post is licensed under CC BY 4.0 by the author.

    Teamspeak 3 with MySQL on CentOS 6.x (3.0.11.1 Onwards)

    Graylog2 Cisco ASA / Cisco Catalyst

    diff --git a/posts/graylog2-cisco-asa-cisco-catalyst/index.html b/posts/graylog2-cisco-asa-cisco-catalyst/index.html index f472b04fb..514bdb844 100644 --- a/posts/graylog2-cisco-asa-cisco-catalyst/index.html +++ b/posts/graylog2-cisco-asa-cisco-catalyst/index.html @@ -1,4 +1,4 @@ - Graylog2 Cisco ASA / Cisco Catalyst | TotalDebug
    Home Graylog2 Cisco ASA / Cisco Catalyst
    Post
    Cancel

    Graylog2 Cisco ASA / Cisco Catalyst

    1421798400
    1614629284

In order to correctly log Cisco devices in Graylog2, set up the configuration below.

    This has now been added to the Graylog Marketplace https://marketplace.graylog.org/

    Cisco ASA Configuration:

    + Graylog2 Cisco ASA / Cisco Catalyst | TotalDebug
    Home Graylog2 Cisco ASA / Cisco Catalyst
    Post
    Cancel

    Graylog2 Cisco ASA / Cisco Catalyst

    1421798400
    1614629284

In order to correctly log Cisco devices in Graylog2, set up the configuration below.

    This has now been added to the Graylog Marketplace https://marketplace.graylog.org/

    Cisco ASA Configuration:
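The full lines are not shown here, but syslog forwarding from an ASA to a Graylog2 server typically looks something like this sketch (the interface name and server IP are placeholders):

logging enable
logging timestamp
logging trap informational
logging host inside 192.168.1.50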

    @@ -12,4 +12,4 @@
     
    "regex_value": ">(.+?)%"
     

    To this:

"regex_value": ">: (.+?):"
    -

    Cisco-ASA-Extractor.json

    This post is licensed under CC BY 4.0 by the author.

    Graylog2 CentOS Installation

    Understanding Resource Pools in VMware

    +

    Cisco-ASA-Extractor.json

    This post is licensed under CC BY 4.0 by the author.

    Graylog2 CentOS Installation

    Understanding Resource Pools in VMware

    diff --git a/posts/home-assistant-medication-notification-node-red/index.html b/posts/home-assistant-medication-notification-node-red/index.html index 4a63a675c..a0096e835 100644 --- a/posts/home-assistant-medication-notification-node-red/index.html +++ b/posts/home-assistant-medication-notification-node-red/index.html @@ -1,4 +1,4 @@ - Home Assistant medication notification using Node-RED | TotalDebug
    Home Home Assistant medication notification using Node-RED
    Post
    Cancel

    Home Assistant medication notification using Node-RED

    1673032920

For around 4 years I have had to take medication for Rheumatoid Arthritis once every two weeks; I always forget when I last took the medication and end up skipping doses, which causes me pain.

Because of this I decided I needed a way to log when I take my medication, and then get a notification on my phone when I’m due to take it again.

    I ended up creating a workflow in Node-RED that will do the following after I scan an NFC tag located on my fridge where I keep the medication:

• Update an input_datetime in Home Assistant with the current date and time
• Check every 60 minutes whether the medication date is over 13 days ago
• On Monday, check if it’s been 10 days since the last medication, then send a notification reminding me to take my medication that week
• After 14 days, if the input_datetime hasn’t been updated, send a notification to my mobile and TV every hour until it is reset.
    Medication Workflow

Let’s look at how I made this.

    Home Assistant Configuration

Some changes need to be made within Home Assistant to make this work.

    Input Datetime

    Adding the input_datetime entity requires editing the configuration.yaml file directly.

    Add the following to your configuration:

    + Home Assistant medication notification using Node-RED | TotalDebug
    Home Home Assistant medication notification using Node-RED
    Post
    Cancel

    Home Assistant medication notification using Node-RED

    1673032920

For around 4 years I have had to take medication for Rheumatoid Arthritis once every two weeks; I always forget when I last took the medication and end up skipping doses, which causes me pain.

Because of this I decided I needed a way to log when I take my medication, and then get a notification on my phone when I’m due to take it again.

    I ended up creating a workflow in Node-RED that will do the following after I scan an NFC tag located on my fridge where I keep the medication:

• Update an input_datetime in Home Assistant with the current date and time
• Check every 60 minutes whether the medication date is over 13 days ago
• On Monday, check if it’s been 10 days since the last medication, then send a notification reminding me to take my medication that week
• After 14 days, if the input_datetime hasn’t been updated, send a notification to my mobile and TV every hour until it is reset.
    Medication Workflow

Let’s look at how I made this.

    Home Assistant Configuration

Some changes need to be made within Home Assistant to make this work.

    Input Datetime

    Adding the input_datetime entity requires editing the configuration.yaml file directly.

    Add the following to your configuration:
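The exact entity is not shown in full here, but an input_datetime for this purpose would look roughly like the following (the entity name is an assumption):

input_datetime:
  medication_last_taken:
    name: Medication Last Taken
    has_date: true
    has_time: true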

    @@ -20,4 +20,4 @@
     
    {"message":"You need to take your medication this week.","title":"This Week: Take Medication","data":{"color":"#2DF56D"}}
     

    Notify every 60 minutes after 14 days

    This workflow is essentially the same as the 10 day notification with a few tweaks so you can copy the previous workflow and make these changes:

    In the inject node select interval or interval between times then every X minutes and select all the days you want it to run.

    In the function node change the value to 1209600000 for 14 days, or as required for your notification.
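As an illustration, a Node-RED function node for the 14-day check could look something like this (a sketch, not the exact node from my flow, and it assumes the last-taken timestamp arrives in msg.payload):

// Pass the message on only if the last dose was more than 14 days ago
const lastTaken = new Date(msg.payload).getTime();
const fourteenDays = 1209600000; // 14 * 24 * 60 * 60 * 1000 ms
if (Date.now() - lastTaken > fourteenDays) {
    return msg;   // continue to the notify nodes
}
return null;      // otherwise stop the flow here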

    I also amended the message, and added an additional notify to my TV, this way it will popup on my TV every 60 minutes to annoy me into getting my medication.

    {"message":"You need to take your medication.","title":"Take Medication","data":{"color":"#2DF56D"}}
    -

    That is everything done, you can now deploy and test.

    Final thoughts

    I have now been using this workflow for around 2 months and it has been working great.

    The notifications to the TV even annoy my wife which really does make me get my medication out quicker!

    If you have any ideas on how I could improve this workflow further, please leave a comment.

    This post is licensed under CC BY 4.0 by the author.

    Creating a standalone zigbee2mqtt hub with alpine linux

    How I host this site

    +

    That is everything done, you can now deploy and test.

    Final thoughts

    I have now been using this workflow for around 2 months and it has been working great.

    The notifications to the TV even annoy my wife which really does make me get my medication out quicker!

    If you have any ideas on how I could improve this workflow further, please leave a comment.

    This post is licensed under CC BY 4.0 by the author.

    Creating a standalone zigbee2mqtt hub with alpine linux

    How I host this site

    diff --git a/posts/homer-dashboard-with-docker/index.html b/posts/homer-dashboard-with-docker/index.html index 2c90e68d3..c2a13f4b1 100644 --- a/posts/homer-dashboard-with-docker/index.html +++ b/posts/homer-dashboard-with-docker/index.html @@ -1,4 +1,4 @@ - Homer dashboard with Docker | TotalDebug
    Home Homer dashboard with Docker
    Post
    Cancel

    Homer dashboard with Docker

    1665871020
    1665934897

Recently I decided to get my home network in order. One of the things I realised was that I spend a lot of time trying to remember the IP addresses or URLs for services within my home, especially ones that I access infrequently.

    At one point I did have a dashboard that was HTML but I never updated it and I decided to remove it a year or so ago.

After sitting on YouTube for a few hours watching rubbish I came across Homer, a simple-to-use Docker container that hosts an easily configurable dashboard with customisable designs.

Homer is configured using YAML, making it very familiar to me, having used Docker for a number of years now.

    Directory setup

In order to use Homer with Docker, I first created a directory to store the configuration file and any other assets such as images. Mine are on an NFS share, but this would work the same for local files. My file structure is as follows:

    + Homer dashboard with Docker | TotalDebug
    Home Homer dashboard with Docker
    Post
    Cancel

    Homer dashboard with Docker

    1665871020
    1665934897

Recently I decided to get my home network in order. One of the things I realised was that I spend a lot of time trying to remember the IP addresses or URLs for services within my home, especially ones that I access infrequently.

    At one point I did have a dashboard that was HTML but I never updated it and I decided to remove it a year or so ago.

After sitting on YouTube for a few hours watching rubbish I came across Homer, a simple-to-use Docker container that hosts an easily configurable dashboard with customisable designs.

Homer is configured using YAML, making it very familiar to me, having used Docker for a number of years now.

    Directory setup

In order to use Homer with Docker, I first created a directory to store the configuration file and any other assets such as images. Mine are on an NFS share, but this would work the same for local files. My file structure is as follows:
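The directory tree itself is not reproduced in full here, but the idea is a single assets folder mounted into the container. Assuming the official b4bz/homer image and a hypothetical NFS path, it looks roughly like this:

/mnt/nfs/homer/assets/        # config.yml plus any icons/images live here

docker run -d --name homer -p 8080:8080 -v /mnt/nfs/homer/assets:/www/assets b4bz/homer:latest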

    @@ -56,4 +56,4 @@
     
    http://<docker-host-ip-address>:<port>
     

    So in my case this would be:

http://172.16.20.4:8080
    -

    If everything has worked as expected you should see the following demo dashboard:

    Homer default demo dashboard

    For more information on how to configure this dashboard check out this article where I cover the configuration of the dashboards in more detail.

    Hopefully this information was useful for you, If you have any questions about this article, share your thoughts and comment in the discussion below or head over to my Discord.

    This post is licensed under CC BY 4.0 by the author.

    Proxmox Template with Cloud Image and Cloud Init

    Configuring Homer Dashboard

    +

    If everything has worked as expected you should see the following demo dashboard:

    Homer default demo dashboard

    For more information on how to configure this dashboard check out this article where I cover the configuration of the dashboards in more detail.

    Hopefully this information was useful for you, If you have any questions about this article, share your thoughts and comment in the discussion below or head over to my Discord.

    This post is licensed under CC BY 4.0 by the author.

    Proxmox Template with Cloud Image and Cloud Init

    Configuring Homer Dashboard

    diff --git a/posts/how-i-host-this-site/index.html b/posts/how-i-host-this-site/index.html index 367f4aa99..cdbce3d55 100644 --- a/posts/how-i-host-this-site/index.html +++ b/posts/how-i-host-this-site/index.html @@ -1,4 +1,4 @@ - How I host this site | TotalDebug
    Home How I host this site
    Post
    Cancel

    How I host this site

    1677352920
    1677361873

    My site isn’t anything special, but I thought I’d like to share how I create and host things for others who may be interested in sharing their own words with a very simple and easy to maintain structure.

    Motivation for hosting this site

    I have hosted a blog site in some form for the past 10+ years. The idea being to share my experience with others and hopefully help others with some of the issues I have come across through my career working for one of the largest MSPs in the world.

Sharing on social media has come more recently, but this site still serves as the main location for all of my content. Even more so in the past few months, social media platforms have shown that they are not a certain thing: accounts get suspended, ownership changes kill services, rules change, and so on. By hosting my own site I have total control over the content with no risk of losing anything, which for me is well worthwhile.

    Ultimately, I wanted this site to be one place where you can always find my projects, regardless of what other platforms may do. I don’t make any money off my content so keeping it low cost is important. Seeing others lose their work due to account issues or frustration with a platform served as motivation to own my content.

If you are a content creator, I encourage you to stay platform agnostic, allowing you to easily recover if for some reason an account is suspended.

    The site

Let’s get to the bones of it: this site is built using Jekyll, a Ruby-based tool that converts Markdown into a static website. For my use case it was the perfect fit.

    Here are some of the things that I like about it:

• Small footprint - I used WordPress for my last site, but found it was massively bloated for my needs, along with update issues and other administrative overheads. Jekyll being static removes a lot of this complexity.
• Security - WordPress and its plugins, due to their popularity, see a lot of vulnerabilities exploited. Another benefit of a static site generated by Jekyll is that this risk is significantly reduced.
• CDN Friendly - Having static content means that the site can be cached, handling incredible loads at low cost across the globe.
• Simple Format - Using Markdown for all of the posts means that the content is pretty easy to move around. The posts can be used with other frameworks or easily converted to different formats if needed.
• Git Friendly - I hold my entire site in Git, so backups are easy, along with the history of any changes.

Jekyll also supports additional features through plugins, like RSS, sitemaps, metadata, pagination and much more. If there isn’t a plugin to meet your needs, it’s simple to create something.

It’s also incredibly fast at building a site and generates predictable, easy-to-host results. If you haven’t looked at Jekyll, you might give it a whirl!

    Jekyll though needs two things to make it really work:

    1. A way to build the site
    2. A place to host the site

    Building the site

Building a Jekyll site is easy: you just run jekyll build. But to make things even easier, I utilize GitHub Actions to automate the builds and deploy whenever changes happen.

Using Actions is pretty simple and the documentation is great, so it is easy to learn. Here is my workflow for this site:

    + How I host this site | TotalDebug
    Home How I host this site
    Post
    Cancel

    How I host this site

    1677352920
    1677361873

    My site isn’t anything special, but I thought I’d like to share how I create and host things for others who may be interested in sharing their own words with a very simple and easy to maintain structure.

    Motivation for hosting this site

    I have hosted a blog site in some form for the past 10+ years. The idea being to share my experience with others and hopefully help others with some of the issues I have come across through my career working for one of the largest MSPs in the world.

Sharing on social media has come more recently, but this site still serves as the main location for all of my content. Even more so in the past few months, social media platforms have shown that they are not a certain thing: accounts get suspended, ownership changes kill services, rules change, and so on. By hosting my own site I have total control over the content with no risk of losing anything, which for me is well worthwhile.

    Ultimately, I wanted this site to be one place where you can always find my projects, regardless of what other platforms may do. I don’t make any money off my content so keeping it low cost is important. Seeing others lose their work due to account issues or frustration with a platform served as motivation to own my content.

If you are a content creator, I encourage you to stay platform agnostic, allowing you to easily recover if for some reason an account is suspended.

    The site

Let’s get to the bones of it: this site is built using Jekyll, a Ruby-based tool that converts Markdown into a static website. For my use case it was the perfect fit.

    Here are some of the things that I like about it:

• Small footprint - I used WordPress for my last site, but found it was massively bloated for my needs, along with update issues and other administrative overheads. Jekyll being static removes a lot of this complexity.
• Security - WordPress and its plugins, due to their popularity, see a lot of vulnerabilities exploited. Another benefit of a static site generated by Jekyll is that this risk is significantly reduced.
• CDN Friendly - Having static content means that the site can be cached, handling incredible loads at low cost across the globe.
• Simple Format - Using Markdown for all of the posts means that the content is pretty easy to move around. The posts can be used with other frameworks or easily converted to different formats if needed.
• Git Friendly - I hold my entire site in Git, so backups are easy, along with the history of any changes.

Jekyll also supports additional features through plugins, like RSS, sitemaps, metadata, pagination and much more. If there isn’t a plugin to meet your needs, it’s simple to create something.

It’s also incredibly fast at building a site and generates predictable, easy-to-host results. If you haven’t looked at Jekyll, you might give it a whirl!

    Jekyll though needs two things to make it really work:

    1. A way to build the site
    2. A place to host the site

    Building the site

Building a Jekyll site is easy: you just run jekyll build. But to make things even easier, I utilize GitHub Actions to automate the builds and deploy whenever changes happen.

Using Actions is pretty simple and the documentation is great, so it is easy to learn. Here is my workflow for this site:
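The full workflow file is not reproduced here, but a minimal sketch consistent with the four steps described below would be along these lines (the action versions and Ruby version are assumptions, not necessarily what this site uses):

name: Build and deploy
on:
  push:
    branches: [master]
  workflow_dispatch:

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      # 1. Checkout the repository
      - uses: actions/checkout@v3
      # 2. Setup Ruby (bundler-cache also installs the gems)
      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: '3.1'
          bundler-cache: true
      # 3. Install dependencies & build the site
      - name: Build site
        run: bundle exec jekyll build
      # 4. Deploy the generated site to GitHub Pages
      - name: Deploy
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./_site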

    @@ -74,4 +74,4 @@
             with:
               github_token: $
               publish_dir: ./_site
    -

    There are four steps:

    1. Checkout the repository
    2. Setup Ruby
    3. Install Dependencies & Build Site - Installs all of the site dependencies / plugins etc. and then builds the site static content
    4. Deploy - Deploys the generated site to GitHub Pages

As you can see, this runs whenever changes are pushed to the master branch, or I can manually run the workflow with workflow_dispatch.

    Hosting

The last thing we need is somewhere to host the site. The beauty of this is that Jekyll just creates static HTML content, so loads of options are available. In my case, to keep costs down, I use GitHub Pages: it’s totally free, comes with SSL certificates and seems to perform well enough for most small static sites.

    If you wanted something more performant, you could use Amazon S3, Digital Ocean Spaces Object Storage, or some other cloud-based solution.

    Final Thoughts

I understand that this site is basic, but keeping it this way helps me focus on other things; I have no need to worry about patching or the onslaught of spam. It just works! Since it’s hosted on GitHub Pages I don’t need to worry about the hosting, but should the site be suspended for some reason, I can easily take my content and move it elsewhere with little hassle.

Hopefully, if you’re looking to create new content and save yourself some hassle, you’ll consider this option. The point (for me at least) is just to share what I think is cool and what I work on.

    This post is licensed under CC BY 4.0 by the author.

    Home Assistant medication notification using Node-RED

    Use Python pandas NOW for your big datasets

    +

    There are four steps:

    1. Checkout the repository
    2. Setup Ruby
    3. Install Dependencies & Build Site - Installs all of the site dependencies / plugins etc. and then builds the site static content
    4. Deploy - Deploys the generated site to GitHub Pages

As you can see, this runs whenever changes are pushed to the master branch, or I can manually run the workflow with workflow_dispatch.

    Hosting

The last thing we need is somewhere to host the site. The beauty of this is that Jekyll just creates static HTML content, so loads of options are available. In my case, to keep costs down, I use GitHub Pages: it’s totally free, comes with SSL certificates and seems to perform well enough for most small static sites.

    If you wanted something more performant, you could use Amazon S3, Digital Ocean Spaces Object Storage, or some other cloud-based solution.

    Final Thoughts

I understand that this site is basic, but keeping it this way helps me focus on other things; I have no need to worry about patching or the onslaught of spam. It just works! Since it’s hosted on GitHub Pages I don’t need to worry about the hosting, but should the site be suspended for some reason, I can easily take my content and move it elsewhere with little hassle.

Hopefully, if you’re looking to create new content and save yourself some hassle, you’ll consider this option. The point (for me at least) is just to share what I think is cool and what I work on.

    This post is licensed under CC BY 4.0 by the author.

    Home Assistant medication notification using Node-RED

    Use Python pandas NOW for your big datasets

    diff --git a/posts/how-to-make-the-shutdown-button-unavailable-with-group-policy/index.html b/posts/how-to-make-the-shutdown-button-unavailable-with-group-policy/index.html index 117efdf30..5dda1b5b2 100644 --- a/posts/how-to-make-the-shutdown-button-unavailable-with-group-policy/index.html +++ b/posts/how-to-make-the-shutdown-button-unavailable-with-group-policy/index.html @@ -1 +1 @@ - How to Make the Shutdown Button Unavailable with Group Policy | TotalDebug
    Home How to Make the Shutdown Button Unavailable with Group Policy
    Post
    Cancel

    How to Make the Shutdown Button Unavailable with Group Policy

    1310116500
    1666901265

    You can use Group Policy Editor to make the Shutdown button unavailable in the Log On to Windows dialog box that appears when you press CTRL+ALT+DELETE on the Welcome to Windows screen.

    To Edit the Local Policy on a Windows 2000-Based Computer

    To make the Shutdown button unavailable on a standalone Windows 2000-based computer:

    1. Click Start, and then click Run.
    2. In the Open box, type gpedit.msc, and then click OK.
    3. Expand Computer Configuration, expand Windows Settings, expand Security Settings, expand Local Policies, and then click Security Options.
4. In the right pane, double-click Shutdown: Allow system to be shut down without having to log on.
5. Click Disabled, and then click OK. NOTE: If domain-level policy settings are defined, they may override this local policy setting.
    6. Quit Group Policy Editor.
    7. Restart the computer.

    To Edit the Group Policy in a Domain

To edit a domain-wide policy to make the Shutdown button unavailable:

1. Start the Active Directory Users and Computers snap-in. To do this, click Start, point to Programs, point to Administrative Tools, and then click Active Directory Users and Computers.
2. In the console, right-click your domain, and then click Properties.
3. Click the Group Policy tab.
4. In the Group Policy Object Links box, click the group policy for which you want to apply this setting. For example, click Default Domain Policy.
5. Click Edit.
6. Expand User Configuration, expand Administrative Templates, and then click Start Menu & Taskbar.
    7. In the right pane, double-click Disable and remove the Shut Down command.
    8. Click Enabled, and then click OK.
    9. Quit the Group Policy editor, and then click OK.

    Troubleshooting

    Group Policy changes are not immediately enforced. Group Policy background processing can take up to 5 minutes to be refreshed on domain controllers, and up to 120 minutes to be refreshed on client computers. To force background processing of Group Policy settings, use the Secedit.exe tool. To do this:

    1. Click Start, and then click Run.
    2. In the Open box, type cmd, and then click OK.
    3. Type secedit /refreshpolicy user_policy, and then press ENTER.
    4. Type secedit /refreshpolicy machine_policy, and then press ENTER.
    5. Type exit, and then press ENTER to quit the command prompt.
    This post is licensed under CC BY 4.0 by the author.

    -

    Synchronise time with external NTP server on Windows Server

    + How to Make the Shutdown Button Unavailable with Group Policy | TotalDebug
    Home How to Make the Shutdown Button Unavailable with Group Policy
    Post
    Cancel

    How to Make the Shutdown Button Unavailable with Group Policy

    1310116500
    1666901265

    You can use Group Policy Editor to make the Shutdown button unavailable in the Log On to Windows dialog box that appears when you press CTRL+ALT+DELETE on the Welcome to Windows screen.

    To Edit the Local Policy on a Windows 2000-Based Computer

    To make the Shutdown button unavailable on a standalone Windows 2000-based computer:

    1. Click Start, and then click Run.
    2. In the Open box, type gpedit.msc, and then click OK.
    3. Expand Computer Configuration, expand Windows Settings, expand Security Settings, expand Local Policies, and then click Security Options.
4. In the right pane, double-click Shutdown: Allow system to be shut down without having to log on.
5. Click Disabled, and then click OK. NOTE: If domain-level policy settings are defined, they may override this local policy setting.
    6. Quit Group Policy Editor.
    7. Restart the computer.

    To Edit the Group Policy in a Domain

To edit a domain-wide policy to make the Shutdown button unavailable:

1. Start the Active Directory Users and Computers snap-in. To do this, click Start, point to Programs, point to Administrative Tools, and then click Active Directory Users and Computers.
2. In the console, right-click your domain, and then click Properties.
3. Click the Group Policy tab.
4. In the Group Policy Object Links box, click the group policy for which you want to apply this setting. For example, click Default Domain Policy.
5. Click Edit.
6. Expand User Configuration, expand Administrative Templates, and then click Start Menu & Taskbar.
    7. In the right pane, double-click Disable and remove the Shut Down command.
    8. Click Enabled, and then click OK.
    9. Quit the Group Policy editor, and then click OK.

    Troubleshooting

    Group Policy changes are not immediately enforced. Group Policy background processing can take up to 5 minutes to be refreshed on domain controllers, and up to 120 minutes to be refreshed on client computers. To force background processing of Group Policy settings, use the Secedit.exe tool. To do this:

    1. Click Start, and then click Run.
    2. In the Open box, type cmd, and then click OK.
    3. Type secedit /refreshpolicy user_policy, and then press ENTER.
    4. Type secedit /refreshpolicy machine_policy, and then press ENTER.
    5. Type exit, and then press ENTER to quit the command prompt.
    This post is licensed under CC BY 4.0 by the author.

    -

    Synchronise time with external NTP server on Windows Server

    diff --git a/posts/how-to-recreate-all-virtual-directories-for-exchange-2007/index.html b/posts/how-to-recreate-all-virtual-directories-for-exchange-2007/index.html index b64db72b5..0e096bdf0 100644 --- a/posts/how-to-recreate-all-virtual-directories-for-exchange-2007/index.html +++ b/posts/how-to-recreate-all-virtual-directories-for-exchange-2007/index.html @@ -1,4 +1,4 @@ - How to recreate all Virtual Directories for Exchange 2007 | TotalDebug
    Home How to recreate all Virtual Directories for Exchange 2007
    Post
    Cancel

    How to recreate all Virtual Directories for Exchange 2007

    1353542400
    1666888493

Here you will find all the commands that will help you recreate all the Virtual Directories for Exchange 2007. You can also use just a few of them, but never delete or create them in IIS; this has to be done from the Exchange Management Shell (not to be confused with Windows PowerShell):

First, write down the information that you get back (for example, whether it is “Default Web Site” or “SBS Web Applications”, and which InternalURL or ExternalURL is configured):

Open the Exchange Management Shell with elevated permissions and run the following commands:

    + How to recreate all Virtual Directories for Exchange 2007 | TotalDebug
    Home How to recreate all Virtual Directories for Exchange 2007
    Post
    Cancel

    How to recreate all Virtual Directories for Exchange 2007

    1353542400
    1666888493

Here you will find all the commands that will help you recreate all the Virtual Directories for Exchange 2007. You can also use just a few of them, but never delete or create them in IIS; this has to be done from the Exchange Management Shell (not to be confused with Windows PowerShell):

    First you shall write down the information what you will get (for example: if it “Default Web Site” or “SBS Web Applications” and if they have the information, what INTERNURL or External URL is configured):

    – Open Exchange Management Shell with elevated permission – Run the following commands:

    1
     2
     3
     4
    @@ -89,4 +89,4 @@
     .\appcmd.exe set config "XXXXXXX/oab" "-section:windowsAuthentication" "-useKernelMode:False" /commit:apphost
     

    Run: iisreset /noforce

    You must rerun the Internet Address Management Wizard to stamp the new virtual directories with the proper external URL, and you may also have to check the certificates.

    Troubleshooting for useKernelMode

    %windir%\system32\inetsrv\appcmd.exe set config /section:system.webServer/security/authentication/windowsAuthentication /useKernelMode:false
     

    With the following command you should see something like this:

    %windir%\system32\inetsrv\appcmd.exe list config /section:system.webServer/security/authentication/windowsAuthentication

    This post is licensed under CC BY 4.0 by the author.

    diff --git a/posts/how-to-turn-on-automatic-logon-to-a-domain-with-windows-xp-windows-7-and-server-2008/index.html b/posts/how-to-turn-on-automatic-logon-to-a-domain-with-windows-xp-windows-7-and-server-2008/index.html index a876f0971..ab26630b4 100644 --- a/posts/how-to-turn-on-automatic-logon-to-a-domain-with-windows-xp-windows-7-and-server-2008/index.html +++ b/posts/how-to-turn-on-automatic-logon-to-a-domain-with-windows-xp-windows-7-and-server-2008/index.html @@ -1,3 +1,3 @@ - How to turn on automatic logon to a domain with Windows XP, Windows 7 and Server 2008 | TotalDebug

    How to turn on automatic logon to a domain with Windows XP, Windows 7 and Server 2008

    1341356400
    1614629284

    I had a requirement for some of our security camera servers to log in automatically. On a normal standalone computer this is easy, but on a domain it gets more complicated.

    So how did I overcome this?

    I found a very useful Microsoft KB article and adapted it to work with a domain account, see below for my adapted version.

    1. Click Start, click Run, type regedit, and then click OK.
    2. Locate the following registry key:
      HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon
    3. Double-click DefaultDomainName, type your domain name, and click OK.
    4. Double-click DefaultUserName, type your username, and click OK.
    5. Double-click DefaultPassword, type your password, and click OK.
    6. Double-click the AutoAdminLogon entry, type 1 in the Value Data box, and then click OK.
    7. If any of the above entries do not exist, create them as follows (a scripted sketch of all of the values follows this list):
      1. In Registry Editor, click Edit, click New, and then click String Value.
      2. Type the name of the missing entry (for example, DefaultPassword) as the value name, and then press ENTER.
      3. Double-click the newly created value, and then type the appropriate data in the Value Data box.
    8. Restart the computer / server and watch it log on!
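
    If you would rather script these values, a minimal sketch using reg add (CONTOSO, camerasvc and the password are placeholders; note that DefaultPassword is stored in the registry in plain text) would be:

    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultDomainName /t REG_SZ /d CONTOSO /f
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d camerasvc /f
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d "YourPasswordHere" /f
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1 /f
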
    This post is licensed under CC BY 4.0 by the author.


    diff --git a/posts/how-to-view-and-kill-processes-on-remote-windows-computers/index.html b/posts/how-to-view-and-kill-processes-on-remote-windows-computers/index.html index ed1cbf58f..f57e69d63 100644 --- a/posts/how-to-view-and-kill-processes-on-remote-windows-computers/index.html +++ b/posts/how-to-view-and-kill-processes-on-remote-windows-computers/index.html @@ -1,2 +1,2 @@ - How To View and Kill Processes On Remote Windows Computers | TotalDebug

    How To View and Kill Processes On Remote Windows Computers

    1315987200
    1666901265

    Windows provides several methods to view processes remotely on another computer. Terminal Server is one way, or you can use the command-line utility pslist from the Microsoft Sysinternals site. While both options are good alternatives, Windows XP and Vista provide a built-in utility for viewing and killing processes on remote computers using the Tasklist and Taskkill commands.

    Both tasklist.exe and taskkill.exe can be found in the %SYSTEMROOT%\System32 directory (typically C:\Windows\System32).

    To view processes on a remote computer, you will need to know a username and password on the computer whose processes you want to view. Once you have the user account information, the syntax for using tasklist is as follows:

    tasklist.exe /S SYSTEM /U USERNAME /P PASSWORD

    (To view all tasklist options, type tasklist /? at the command prompt)

    To execute, click on Start \ Run… and in the Run window type cmd to open a command prompt. Then type the tasklist command, substituting SYSTEM with the name of the remote computer whose processes you want to view, and USERNAME and PASSWORD with an account and password on the remote computer.

    If you are in a domain environment and have administrator rights on the remote computer, you may not need to specify a username and password.
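
    Since the title promises killing processes too, a hedged example of ending a process on the remote machine with taskkill (reusing the SYSTEM, USERNAME and PASSWORD placeholders above, with notepad.exe as an example process) would be:

    taskkill /S SYSTEM /U USERNAME /P PASSWORD /IM notepad.exe

    Add /F to force-terminate it, or use /PID with a process ID taken from the tasklist output.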

    This post is licensed under CC BY 4.0 by the author.


    diff --git a/posts/i-won-a-ender-3-3d-printer-and-im-addicted/index.html b/posts/i-won-a-ender-3-3d-printer-and-im-addicted/index.html index 45c36e10f..6d0df4bf7 100644 --- a/posts/i-won-a-ender-3-3d-printer-and-im-addicted/index.html +++ b/posts/i-won-a-ender-3-3d-printer-and-im-addicted/index.html @@ -1 +1 @@ - I won a Ender 3 3D Printer and i'm addicted | TotalDebug

    I won a Ender 3 3D Printer and i'm addicted

    1578870000
    1655154889

    About 6 months ago I entered a competition with DrZzs (highly recommend his channel for home automation) and BangGood to win a Creality Ender 3 3D Printer.

    To my surprise, a few weeks later I received an email from Banggood stating that I had won and asking me to email over my address; at first I thought it was just a spam email.

    After a few weeks of waiting the printer arrived. I couldn’t believe it, I had just got a £300 printer for FREE!

    On with the build!

    I then unboxed and went through building the printer. I followed the instructions which were very comprehensive (other than a few confusing sentences).

    It took me roughly 2 hours to build the printer.

    I ran through a test print, printing out a benchy to make sure that everything was working as expected. It looked perfect, so I ordered some more filament for future projects.

    The addiction begins…

    So now it starts, I spend the rest of my time glued to thingiverse deciding what to print! Although my son made this easier by constantly bugging me to print him a mini combine harvester. This was a difficult print, the thing prints as one, but the combine kept on fusing to the harvester so it wouldn’t spin. After a lot of calibration I finally made it work and he was delighted to get a new toy for free!

    Now I have the printer running, and have printed multiple useful prints, Laptop wall mount, Google Home wall mount, Microphone stand for my large desk etc. (and more toys)

    Time for the upgrades!

    The Ender3 is a brilliant little printer for the price, however it does have some issues that can be easily resolved with a few upgrades.

    Printed Upgrades

    Upper Filament Guide – Unfortunately not a great upgrade, and one that I scrapped. I have put this here to recommend avoiding this. (I have better options below) It is supposed to keep the filament further away from the printer and stop it wearing the extruder arm. However I found that it made horrible squeaking noises so that was out.

    Lower Filament Guide – This brilliant little print stops the filament from rubbing against Z Screw and getting grease on it which ruins prints. I recommend the linked guide as it doesn’t curl over the filament, again I found any that curled over the top would rub and cause a horrible squeak.

    Fan Cover – I found that little bits of filament would drop from the hot end into the fan that was open on the Ender 3 case; this covers it up and also adds markings showing the directions for bed up and down.

    Hero Me Gen3 remix – Parts don’t get cooled fast enough with the standard cooler, therefore I printed this one, it focuses the air perfectly under the nozzle for really good cooling.

    This version is for the new Ender3’s as they have smaller screws than the older models.

    Extruder Knob – This print was one I didn’t know I needed until I printed it. It allows for easier manual retraction and extrusion: twisting the knob moves the gear to feed filament through easily.

    If you plan to upgrade to the MK8 Dual Gear Extruder Arm this part won’t fit due to the larger gears.

    Side Spool Holder – I had issues with the filament dragging and getting stuck due to the sharp angle that it was pulling at, this causes unnecessary wear on the extruder arm and gears. I found this side spool holder which moves the spool to the side of the printer, next to the extruder arm causing much less force to be required when extruding and less rubbing on the extruder arm.

    Spool Holder – This spool holder uses bearings to allow the filament to roll around much easier, reducing the drag on the extruder and in turn reducing the wear on the stepper motor.

    Purchased Upgrades

    SKR 1.3 - A great upgrade to silence those stepper motors. Not only that, the SKR has a 32-bit chip, which means more space for new features and faster gcode processing. The TMC2209 Stepper Motor Drivers really do make a massive difference when it comes to the noise of the printer.

    Now the only annoying thing is the fans… still on my to-do list.

    Capricorn Bowden Tube – This Bowden Tube appears to be much better than the one shipped with the Ender 3, it is much more slick to the touch and a little more sturdy, this means the filament passes through it with ease, also the tight diameter means the filament has little room to flex and cause retraction issues. I found that the shipped bowden tube had also melted at the end and had filament stuck to it which leads me to believe it wasn’t installed very well at the factory.

    MK8 Dual Gear Extruder Arm – My extruder arm broke within a few months of use, I didn’t notice until I was having bad under extrusion and also seeing slippage on the extruder gear. I took the arm apart to clean the gear and found a tiny crack near the screw for the idler wheel, this was enough to stop the arm working at all due to the flex it added. This can be combated by printing a new extruder arm but I decided to upgrade to a dual gear option which results in:

    • Even less potential slippage
    • Stronger spring for the extruder arm
    • Metal body stops wear from filament rub

    PEI Magnetic Bed – This is an excellent upgrade from the stock bed, makes prints super smooth on the bed and sticks really well. I haven’t had a single failed print due to adhesion on this surface, also you can easily take stuff off once cooled without much effort or by flexing the magnetic plate for larger prints.

    Conclusion

    All in, I think I have spent no more than £100 on upgrades and have a brilliant printer; the prints that I get out now are near perfect. Bed adhesion is excellent with the stock bed, however I ripped mine by overheating a print during testing and am awaiting a new PEI magnetic build plate, which I think will be my last upgrade for a little while!

    It would also be great to hear about anyone else’s experience with this printer.

    This post is licensed under CC BY 4.0 by the author.


    diff --git a/posts/install-configure-and-add-repository-with-git-on-centos-7/index.html b/posts/install-configure-and-add-repository-with-git-on-centos-7/index.html index 3dcb1fa36..d293d81a5 100644 --- a/posts/install-configure-and-add-repository-with-git-on-centos-7/index.html +++ b/posts/install-configure-and-add-repository-with-git-on-centos-7/index.html @@ -1,4 +1,4 @@ - Install, Configure and add a repository with Git on CentOS 7 | TotalDebug

    Install, Configure and add a repository with Git on CentOS 7

    1523086606
    1665773049

    Git is an open-source version control system (VCS). It’s commonly used for source code management by developers to allow them to track changes to code bases throughout the product lifecycle, with sites like GitHub offering a social coding experience, and multiple popular projects utilising its great functionality and availability for open-source sharing.

    First off, let’s make sure that CentOS is up to date:

     
    yum update -y
     

    Then we can install Git; it couldn’t be simpler, just run the command below:

    1
     
    yum install -y git
    @@ -28,4 +28,4 @@
     git push -u origin master
     

    Cloning a Repository

    In some cases you may already have a repository that you would like to clone and then change the existing code; that is simple to do too. Get the clone URL from GitHub or any other Git hosting service and type the following:

    1
     
    git clone <URL TO REPOSITORY>

    This will then download the contents of the repository onto your CentOS server.
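
    If Git has not been configured on the server yet, a minimal sketch of setting your commit identity (the name and email below are placeholders) is:

    git config --global user.name "Your Name"
    git config --global user.email "you@example.com"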

    Hopefully this tutorial has been useful for you. Please feel free to ask any questions you may have, or let me know if you would like a more in-depth article on further Git functions.

    This post is licensed under CC BY 4.0 by the author.


    diff --git a/posts/install-freeradius-centos-7-with-daloradius-for-management/index.html b/posts/install-freeradius-centos-7-with-daloradius-for-management/index.html index 6782ebd02..4101618ae 100644 --- a/posts/install-freeradius-centos-7-with-daloradius-for-management/index.html +++ b/posts/install-freeradius-centos-7-with-daloradius-for-management/index.html @@ -1,4 +1,4 @@ - Install FreeRadius on CentOS 7 with DaloRadius for management – Updated | TotalDebug

    Install FreeRadius on CentOS 7 with DaloRadius for management – Updated

    1485907200
    1614629284

    I have recently purchased a load of Ubiquiti UniFi equipment; as part of this I have the UniFi USG, which requires a RADIUS server for user authentication in order to deploy a user VPN. This article will run through how to install and set this up.

    I will be using FreeRADIUS as it is the most commonly used and supports most common authentication protocols.

    Disable SELinux: vi /etc/sysconfig/selinux

     
    SELINUX=disabled
     

    First we need to update our CentOS server and install the required applications:

    1
     2
    @@ -166,4 +166,4 @@
     systemctl restart nginx
     

    Access the web interface:

    1
     
    http://FQDN_IP_OF_SERVER/daloradius/login.php

    Default Login: User: Administrator Pass: radius
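
    To sanity-check the server from the same box, one hedged option (assuming a test user exists in the RADIUS database and that the default localhost client secret of testing123 from clients.conf is still in place) is radtest, which ships with FreeRADIUS:

    radtest testuser testpassword 127.0.0.1 0 testing123

    An Access-Accept reply indicates authentication is working; an Access-Reject usually means the username or password doesn’t match.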

    This post is licensed under CC BY 4.0 by the author.


    diff --git a/posts/install-unifi-controller-centos-7/index.html b/posts/install-unifi-controller-centos-7/index.html index eb38a241b..9a8d32552 100644 --- a/posts/install-unifi-controller-centos-7/index.html +++ b/posts/install-unifi-controller-centos-7/index.html @@ -1,4 +1,4 @@ - Install UniFi Controller on CentOS 7 | TotalDebug

    Install UniFi Controller on CentOS 7

    1485942625
    1666888493

    This is a short simple guide to assist users with installing the Ubiquiti UniFi Controller required for all UniFi devices on a CentOS 7 Server.

    First we need to update our CentOS server and disable SELinux:

     
    yum -y update
    @@ -90,4 +90,4 @@
     2
     
    rm -rf ~/UniFi.unix.zip
     systemctl reboot

    Once the server is back online you should be able to access the controller via the URL: https://FQDN_or_IP:8443. Follow the simple wizard to complete the setup of your controller. I would also recommend you register with Ubiquiti when you set up the controller, as this will allow you to manage it remotely on a mobile device.
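
    If firewalld is enabled on the server you may also need to open the controller ports before the URL above will load; a minimal sketch, assuming the default 8443 web UI port and 8080 for device inform traffic, is:

    firewall-cmd --permanent --add-port=8443/tcp
    firewall-cmd --permanent --add-port=8080/tcp
    firewall-cmd --reload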

    Credit to: https://deviantengineer.com

    This post is licensed under CC BY 4.0 by the author.


    diff --git a/posts/jekyll-post-series-links/index.html b/posts/jekyll-post-series-links/index.html index cfb53a80d..12da1bd05 100644 --- a/posts/jekyll-post-series-links/index.html +++ b/posts/jekyll-post-series-links/index.html @@ -1,4 +1,4 @@ - Add series links to Jekyll posts | TotalDebug

    Add series links to Jekyll posts

    1686298000
    1691482830

    Creating blog posts for my website, I sometimes find that I want to create multiple articles as part of a series, usually because I have done some research and reached a stage that makes sense as an article in itself, something like my recent post on Proxmox Template with Cloud Image and Cloud Init.

    Rather than having to manually link to other articles related to the series, I thought it would be better to have a section at the top that lists all articles related to the series.

    The metadata

    For this to work each post that is required to be part of a series should contain some metadata with a name for that series. For example:

    1
     2
     3
     
    ---
    @@ -46,4 +46,4 @@
     {% endif %}
     

    This works as follows (a rough illustrative sketch of such an include is shown after the list):

    1. Check that the page has the series metadata
    2. Get all posts that have series that match. Sort these in ascending order.
    3. Display a card with:
      1. The series name
      2. How many parts there are in the series
      3. Clickable link to the other posts in the series
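
    For reference, a rough sketch of what such a post-series.html include could look like (this is an illustrative reconstruction, not the exact file used on this site; it assumes the series front matter key described above):

    {% if page.series %}
      {% assign posts_in_series = site.posts | where: "series", page.series | sort: "date" %}
      <div class="post-series">
        <p>Part of the series: {{ page.series }} ({{ posts_in_series | size }} parts)</p>
        <ol>
          {% for p in posts_in_series %}
            <li><a href="{{ p.url | relative_url }}">{{ p.title }}</a></li>
          {% endfor %}
        </ol>
      </div>
    {% endif %}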

    Add to layout

    We have everything we need to get this working, but we need to add it to the layout for our posts. Edit the post.html file and add the include as follows:

    1
     
    {% include post-series.html %}

    You can add this anywhere you would like it to appear in your post. For my website, I have it appear after the meta but before the article begins, as per the screenshot below:

    Series Example

    Final Thoughts

    We now have a great new feature on our blog that is super easy to add to the website. This is one of the reasons I love using Jekyll for my website: so much is possible with very little effort.

    There are many additional features that could be added with this small snippet, for example you could create a page that shows all of the series that you have or you could add the series to a menu rather than just the top of the post page.

    Hope this helped!

    This post is licensed under CC BY 4.0 by the author.


    diff --git a/posts/killing-a-windows-service-that-seems-to-hang-on-stopping/index.html b/posts/killing-a-windows-service-that-seems-to-hang-on-stopping/index.html index ec5f69491..018bed633 100644 --- a/posts/killing-a-windows-service-that-seems-to-hang-on-stopping/index.html +++ b/posts/killing-a-windows-service-that-seems-to-hang-on-stopping/index.html @@ -1,4 +1,4 @@ - Killing a Windows service that hangs on "stopping" | TotalDebug

    Killing a Windows service that hangs on "stopping"

    1311030000
    1666884241

    It sometimes happens (and it’s not a good sign most of the time): you’d like to stop a Windows Service, and when you issue the stop command through the SCM (Service Control Manager) or by using the ServiceProcess classes in the .NET Framework or by other means (net stop, Win32 API), the service remains in the state of stopping and never reaches the stopped phase. It’s pretty simple to simulate this behaviour by creating a Windows Service in C# (or any .NET language whatsoever) and adding an infinite loop in the Stop method. The only way to stop the service is by killing the process then. However, sometimes it’s not clear what the process name or ID is (e.g. when you’re running a service hosting application that can cope with multiple instances such as SQL Server Notification Services). The way to do it is as follows:

    1. Go to the command-prompt and query the service (e.g. the SMTP service) by using sc:
         
          sc queryex SMTPSvc
         
        • This will give you the following information:
        1
         2
        @@ -26,4 +26,4 @@
          or something like this (the state will mention stopping).
         
        • Over here you can find the process identifier (PID), so it’s pretty easy to kill the associated process either by using the task manager or by using taskkill:
          1
           
          taskkill /PID 388 /F

          where the /F flag is needed to force the process kill (first try without the flag).
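
    If you prefer PowerShell, a rough equivalent of the lookup-and-kill approach (assuming the same SMTPSvc service name; Win32_Service exposes the ProcessId of the hosting process) is:

    $svc = Get-CimInstance Win32_Service -Filter "Name='SMTPSvc'"
    Stop-Process -Id $svc.ProcessId -Force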

      This post is licensed under CC BY 4.0 by the author.


    diff --git a/posts/last4solar-my-solar-nightmare/index.html b/posts/last4solar-my-solar-nightmare/index.html index c3f33c25f..e44559273 100644 --- a/posts/last4solar-my-solar-nightmare/index.html +++ b/posts/last4solar-my-solar-nightmare/index.html @@ -1 +1 @@ - Last4Solar - My solar nightmare! | TotalDebug

    Last4Solar - My solar nightmare!

    1685740415
    1690657760

    At the end of 2022 I came into some inheritance, with the massive energy price increase in the UK I decided to spend this money on a solar and battery installation on my house.

    With the decreasing price of solar systems and the increase in energy prices, the payback period is getting shorter, making the option much more reasonable.

    Choosing a company for the installation

    I spent a long time looking at different solar companies in my area; the majority had extremely long waiting times, around 6+ months, before they could install. I then found YouTube recommendations for a local company that had brilliant TrustPilot reviews; they had been around for quite some time and their website was very informative.

    Most other companies I saw looked like single-person outfits, which for me wasn’t really an ideal option as aftercare may become an issue, especially if they retire or stop doing solar. In hindsight, this shouldn’t have been a deciding factor.

    At this point I had decided that I would go with First4Solar, they were the clear winners in price and the initial delivery…

    The quoting process

    After getting in touch with First4Solar they asked some initial questions:

    1. What our yearly power consumption was
    2. The orientation of our roof
    3. Where we would like the panels
    4. If we wanted battery storage and how much
    5. What access was like to the roofs

    I told them all of this, additionally asking for automated whole-house failover in case of a grid failure, as many panels as possible on the roof, and around 11kWh of battery storage.

    So far so good.

    Until I got the first quote, which didn’t match what I had asked for. I assumed that this was just a miscommunication; they had provided a quote for a standard package:

    • 16x 415W Jinko N-Type Panels
    • GivEnergy 9.5kWh Battery with 100% DOD
    • 3.6kW Charge / Discharge
    • Can be used with off-peak tariffs
    • Emergency Power Supply Compatible (manual)

    The issue I had with this was that I wanted as many panels as possible on the roof, which this didn’t provide. When adding the extra panels, the GivEnergy inverter wasn’t powerful enough: it would basically be at full capacity based on the above, and in my eyes it’s always better to have a bigger inverter to support the addition of more panels or other technology in the future. There was no room to grow here, which again wasn’t what I requested.

    Part of their reasoning for using the GivEnergy inverter was that it’s under 3.6kW, which means no DNO G99 application is required, speeding up the process.

    I then asked for a custom quote that doesn’t follow the standard package, they then quoted for the following:

    • 24x Jinko Tiger 415W N-Type Black Framed Mono
    • SolaX G4 7.5kW Hybrid Inverter
    • SolaX Triple Power HV 5.8kWh (Master) V2
    • SolaX Triple Power HV 5.8kWh (Slave) V2
    • EPS - Manual Changeover

    Again you can see they didn’t add the automated failover, however they did say that if the power draw of the house was higher than the inverter could handle, it would stop until the load dropped, which I agreed made sense as I could shut off non-essential items, but not really what I wanted as I have servers to keep running.

    In the end this second quote is what I went with after they persuaded me that it was best, I guess they were the experts so I would go with their recommendation.

    Paying the deposit

    Now we are at the stage of paying the deposit, all seemed good and even the director called me to explain the process and to get my deposit paid ASAP to not miss my install date. So I called to try pay by card but “the card machine wasn’t working” and they recommend paying by BACS anyway. At the time I thought nothing of it, so I transferred the money via BACS.

    This was the first mistake that I made, however I was totally unaware at the time.

    The second mistake was believing that, once I had paid, I would automatically be registered with HEIS so my deposit would be secure and covered by the HEIS insurance. This turned out not to be the case; I should have been contacted by HEIS to confirm the registration.

    Radio Silence

    After paying the deposit (7th Dec 2022) I had total radio silence, If I contacted them the person I needed was “on another call” or “off sick” or one of many other excuses.

    I finally managed to speak with someone who gave me an install date of 4th & 5th May 2023, which they would later call and re-arrange for the 18th & 19th May 2023.

    I then received a further call to re-arrange again, but told them if they moved the install date I would cancel the order and go elsewhere. After this I never heard from them again.

    There was no mention of the DNO G98/99 application and the install was about two weeks off, so again I tried contacting them around ten times which eventually got me through to someone who said it was in hand and they were waiting for the DNO to get back to them, I have contacted the DNO to see if they got the application and am currently awaiting a call back, although I suspect that there was never an application sent.

    Concern starts to set in

    Other family members were also having issues: dates being set back and different excuses every time. After a while the bad reviews started to pour into TrustPilot; at this point I knew something was wrong and began to try to get a refund, but all the phones had stopped working and were going to automated systems, even though, surprisingly, the sales line was still working and deposits were still being taken from customers!

    I never got a refund and never spoke to another person at F4S, but I did find a Facebook group with hundreds of people who were having the exact same issues.

    Is this fraud?

    The company took my deposit at a point where they must have known they were trading while insolvent, but continued to take people’s money. I have heard from other customers that their credit cards were used to pay other suppliers of F4S.

    I raised this with HEIS, where I found out I was not registered, so my £3.5k was not covered by their insurance and I would need to take it up with the bank. The bank also would not touch this, as I had made a BACS payment (an expensive mistake to make).

    The Takeover

    So now we are at a state where I’m £3.5k down and potentially no longer able to afford a solar install, but there was some light at the end of the tunnel.

    A company called Contact Solar had purchased the customer list and agreed to do what they could to help the customers that F4S had left in limbo and without their deposits. This was an absolutely brilliant thing for them to do for all these customers, however I had concerns that with reports of 1500+ customers they would struggle to keep up with the installs and I could be waiting months again, this really did unsettle me.

    That said, I had no evidence that this was the case and Contact Solar provided a very competitive price and a good proposal based on the money left on the contract.

    Decision Time

    I now had to choose between Contact Solar

    • 24 x 405w JA Solar Panels
    • SunSynk 8kw Hybrid Inverter
    • 10.64kWh Battery Storage

    or ese group

    • 24 x 405w Longi Solar Panels
    • Lux Powertek LXP 7.6Kw Inverter
    • 12.8kWh Greenlinx LXP LV-L3.2-1p battery storage
    • Pigeon proofing
    • 15 Year maintenance plan

    As you can see, both offered quite a good deal, with reputable equipment that integrates well with my Home Assistant and Octopus Energy.

    The main difference I found was the Depth of Discharge (DoD): on the SunSynk batteries it was 90% but on the Greenlinx batteries it’s 100%; plus the batteries are larger, and the addition of the 15-year maintenance from ESE just made the deal a little better for me.

    I also found ESE seemed to have more time to answer my questions; they would always call back when they said and were very helpful. With Contact Solar it was all via email, which had slow responses, and there wasn’t much of a personal touch that made you feel like they wanted your business.

    So as you can probably tell, my business went to ese group.

    Sales Aftercare

    On 1st June 2023 I was at the stage of paying a deposit, getting DNO Approval, scheduling the survey and installation dates.

    After agreeing to continue I was passed over to make payment, recommended by ESE to pay by Credit Card for the added protection (I would have insisted on this anyway, but it was nice that they mentioned the added protection it offers and that it was their recommended payment method).

    Once I had paid I was asked when I would like the install; I asked for ASAP due to being so delayed by my previous provider. The install date provided was the 22nd June! Not even four weeks and they could have the installation completed; crazy to think, after how long I had been waiting, that they could do this so quickly.

    I agreed and was told I would get a call the following day to arrange a survey, sure enough at 09:30 I had a call to say that someone was in the area today doing another survey and could come do mine after, the surveyor came to check everything and said that it would either be him or one of the other members of the team that would carry out the install.

    The Install

    I have now had my install completed, the team came and had the inverter, batteries and 1st string of panels installed on the first day, on the second day they got the final string installed and everything was done. The job was very tidy and looks great.

    I did have one issue where the battery kept draining itself and power usage was jumping all over the place, but on investigation I found that the CT Clamp was on upside down, easy mistake to make and it was easy enough for me to fix.

    The electrician got the dongle hooked up to my WiFi and helped set up the app on my phone. I did however have to contact ESE, as by default they don’t leave the ESS Greenlinx battery dongles in; this means it’s not possible to monitor the batteries individually, and should this be required in the future I would be stuck if ESE went into administration. (I’m now waiting for these to be shipped to me)

    Aftercare

    Generally ESE have been great, the install was quick and efficient.

    However I do believe I was slightly misled by them. I went with them as the contract stated I would be able to use any energy provider to get SEG payments via the Flexi-Orb certificate, which they said is the same as MCS but an alternative.

    The way this was worded led me to believe I would be fine with ANY energy supplier; that’s not the case. Currently only five energy suppliers accept Flexi-Orb, and Octopus is not one of them. This is partially my fault as I didn’t double-check or specifically ask if Octopus would accept it, but I believe the contract shouldn’t state that SEG payments can be from any supplier I choose when in fact what they meant was any supplier that accepts Flexi-Orb certificates.

    The results

    Here you can see my solar, battery usage along with my import / export from the grid.

    Solar graph

    The first full month of the install I have spent £6.95 on electricity and that was with some quite bad weather days. I’m still not on an export tariff so unsure how much additional funds this will yield but judging by the amount I have generated it should be a nice bit of money which will hopefully cover the standing charges.

    Things to check / be aware of

    To ensure that you don’t get stuck in the same situation, there are a few things I would highly recommend you take into consideration:

    1. Pay your deposit by Credit Card.
    2. If they insist on a Bank Transfer, REFUSE!
      1. They may give excuses like the card machine isn’t working; if they don’t offer to let you pay another day, walk away! (I can’t stress this enough!)
      2. Bank Transfer (BACS) payments are not protected by your bank; only credit card payments are.
    3. Not all solar companies are MCS Certified, some issue Flexi-Orb Certificates instead.
      1. Not all energy suppliers accept flexi-orb, at the time of writing only five of the major suppliers accept them
        1. E.ON
        2. Scottish Power
        3. British Gas
        4. SSE
        5. OVO Energy
      2. It is likely that once Flexi-Orb is accredited that all energy suppliers will accept it, but at time of writing this is something to be aware of.
    4. HEIS will email you within 48 hours to confirm registration and your cover
      1. If you don’t get an email from HEIS, contact them immediately to ensure you have been registered, failure to do so will mean you are not protected by them.
      2. Honestly, from what I have heard, HEIS protection isn’t worth the paper it’s written on.
    5. You are only covered by HEIS for 120 days; if your install is going to be after this time, you won’t be covered.
      1. If your install gets delayed and will breach the 120 days, contact HEIS and ask what could be done to ensure you are still protected
    6. You will need a DNO application approved before installation:
      1. DNO G98 - For installs under 16A per phase, which is the equivalent of 3.68kWp for a single-phase supply
      2. DNO G99 - For installs greater than 16A per phase
    7. Ensure the electrician is qualified ideally being registered with NICEIC
      1. Without a qualified electrician doing the install you will be unable to get a valid certificate
    8. Ensure you are provided with the necessary electrical certificates to ensure your install is legal
      1. Without this you would need to have an electrician do an EICR on this
    9. Don’t rely on your installer doing things correctly, check everything

    Final Thoughts

    My overall solar experience has been stressful to say the least. I’m glad that it’s finally getting sorted, but it’s been a horrible situation for me and the other 1,000 First 4 Solar customers who have been conned out of money, likely never to see it again!

    If you are looking for solar I have been very impressed with ese group so far. Obviously I will update this based on the install, but my Dad had his completed by them and they did a great job.

    Were you impacted by this nightmare? Let me know in the comments how your install went.

    This post is licensed under CC BY 4.0 by the author.

    Automating deployments using Terraform with Proxmox and ansible

    Add series links to Jekyll posts

    + Last4Solar - My solar nightmare! | TotalDebug
    Home Last4Solar - My solar nightmare!
    Post
    Cancel

    Last4Solar - My solar nightmare!

    1685740415
    1690657760

    At the end of 2022 I came into some inheritance, with the massive energy price increase in the UK I decided to spend this money on a solar and battery installation on my house.

    With the decreasing price of solar systems and the increase in price, the return on investment is getting smaller and making the option much more reasonable.

    Choosing a company for the installation

    I spent a long time looking at different solar companies in my area, the majority had extremely long waiting times around 6+ months before they could install. I then found YouTube recommendations for a local company that had brilliant TrustPilot reviews, they had been around for quite some time and their website was very informative

    Most other companies I saw looked like a single person outfit which for me wasn’t really an ideal option as aftercare may become an issue especially if they retire or stop doing solar. in hindsight this shouldn’t have been a deciding factor.

    At this point I had decided that I would go with First4Solar, they were the clear winners in price and the initial delivery…

    The quoting process

    After getting in touch with First4Solar they asked some initial questions:

    1. What our yearly power consumption was
    2. The orientation of our roof
    3. Where we would like the panels
    4. If we wanted battery storage and how much
    5. What access was like to the roofs

    I told them all of this, additionally asking for automated whole house failover in case of a grid failure and as many panels as possible on the roof along with around 11kw batteries.

    So far so good.

    Until I got the first quote, which didn’t match what I had asked for, I assumed that this was just a miss-communication, they had provided a quote for a standard package:

    • 16x 415W Jinko N-Type Panels
    • GivEnergy 9.5kWh Battery with 100% DOD
    • 3.6kW Charge / Discharge
    • Can be used with off-peak tariffs
    • Emergency Power Supply Compatible (manual)

    The issue I had with this was I wanted as many panels as possible on the roof, which this didn’t do, when adding the extra panels, the GivEnergy inverter wasn’t powerful enough, it would basically be at full capacity based on the above, and in my eyes its always better to have a bigger inverter to support the addition of more panels or other technology in the future, there was no growing room here which again wasn’t what I requested.

    Part of their reasoning for using the GivEnergy inverter was that its under 3.6kWh which means no DNO G99 application is required speeding up the process.

    I then asked for a custom quote that doesn’t follow the standard package, they then quoted for the following:

    • 24x Jinko Tiger 415W N-Type Black Framed Mono
    • SolaX G4 7.5kW Hybrid Inverter
    • SolaX Triple Power HV 5.8kWh (Master) V2
    • SolaX Triple Power HV 5.8kWh (Slave) V2
    • EPS - Manual Changeover

    Again you can see they didn’t add the automated failover. They did say that if the power draw of the house was higher than the inverter could handle, it would stop until the load dropped, which I agreed made sense as I could shut off non-essential items, but it wasn’t really what I wanted as I have servers to keep running.

    In the end this second quote is what I went with after they persuaded me that it was best; I figured they were the experts, so I went with their recommendation.

    Paying the deposit

    Now we are at the stage of paying the deposit. All seemed good, and the director even called me to explain the process and to get my deposit paid ASAP so I wouldn’t miss my install date. I called to try to pay by card, but “the card machine wasn’t working” and they recommended paying by BACS anyway. At the time I thought nothing of it, so I transferred the money via BACS.

    This was the first mistake that I made, however I was totally unaware at the time.

    The second mistake was believing them when told that, once paid, I would automatically be registered with HEIS so my deposit would be secure and covered by the HEIS insurance. This turned out not to be the case; HEIS should have contacted me to confirm the registration.

    Radio Silence

    After paying the deposit (7th Dec 2022) I had total radio silence. If I contacted them, the person I needed was “on another call” or “off sick”, or I was given one of many other excuses.

    I finally managed to speak with someone who gave me an install date of 4th & 5th May 2023, which they would later call and re-arrange for the 18th & 19th May 2023.

    I then received a further call to re-arrange again, but I told them that if they moved the install date I would cancel the order and go elsewhere. After this I never heard from them again.

    There was no mention of the DNO G98/99 application and the install was about two weeks off, so again I tried contacting them around ten times, which eventually got me through to someone who said it was in hand and they were waiting for the DNO to get back to them. I have contacted the DNO to see if they received the application and am currently awaiting a call back, although I suspect an application was never sent.

    Concern starts to set in

    Other family members were also having issues, with dates being set back and different excuses every time. After a while the bad reviews started to pour into TrustPilot; at this point I knew something was wrong and began to try to get a refund, but all the phones had stopped working and were going to automated systems, although surprisingly the sales line was still working and deposits were still being taken from customers!

    I never got a refund and never spoke to another person at F4S. I did find a Facebook group with hundreds of people who were having the exact same issues.

    Is this fraud?

    The company took my deposit at a point where they must have known they were trading while insolvent, but they continued to take people’s money. I have heard from other customers that their credit card payments were used to pay other suppliers of F4S.

    I raised this with HEIS, where I found out I was not registered, so my £3.5k was not covered by their insurance and I would need to take it up with the bank. The bank also would not touch this as I had made a BACS payment (an expensive mistake to make).

    The Takeover

    So now we are at a point where I’m £3.5k down and potentially no longer able to afford a solar install, but there was some light at the end of the tunnel.

    A company called Contact Solar had purchased the customer list and agreed to do what they could to help the customers that F4S had left in limbo and without their deposits. This was an absolutely brilliant thing for them to do for all these customers, however I had concerns that with reports of 1500+ customers they would struggle to keep up with the installs and I could be waiting months again, this really did unsettle me.

    That said, I had no evidence that this was the case and Contact Solar provided a very competitive price and a good proposal based on the money left on the contract.

    Decision Time

    I now had to choose between Contact Solar

    • 24 x 405w JA Solar Panels
    • SunSynk 8kw Hybrid Inverter
    • 10.64kWh Battery Storage

    or ESE Group

    • 24 x 405w Longi Solar Panels
    • Lux Powertek LXP 7.6Kw Inverter
    • 12.8kWh Greenlinx LXP LV-L3.2-1p battery storage
    • Pigeon proofing
    • 15 Year maintenance plan

    As you can see, both offered quite a good deal, with reputable equipment that integrates well with my Home Assistant and Octopus Energy setup.

    The main difference I found was that the Depth of Discharge (DoD) on the SunSynk batteries was 90% but on the Greenlinx batteries it’s 100%; plus the batteries are larger, and the addition of the 15-year maintenance plan from ESE just made the deal a little better for me.

    I also found ESE seemed to have more time to answer my questions; they would always call back when they said they would and were very helpful. With Contact Solar it was all via email, with slow responses, and there wasn’t much of a personal touch to make you feel like they wanted your business.

    So, as you can probably tell, my business went to ESE Group.

    Sales Aftercare

    On 1st June 2023 I was at the stage of paying a deposit, getting DNO Approval, scheduling the survey and installation dates.

    After agreeing to continue I was passed over to make payment, recommended by ESE to pay by Credit Card for the added protection (I would have insisted on this anyway, but it was nice that they mentioned the added protection it offers and that it was their recommended payment method).

    Once I had paid I was asked when I would like the install; I asked for ASAP due to being so delayed by my previous provider. The install date provided was the 22nd June! Not even four weeks away and they could have the installation completed; crazy to think, after how long I had been waiting, that they could do this so quickly.

    I agreed and was told I would get a call the following day to arrange a survey. Sure enough, at 09:30 I had a call to say that someone was in the area that day doing another survey and could come to do mine afterwards. The surveyor came to check everything and said that either he or one of the other members of the team would carry out the install.

    The Install

    I have now had my install completed. The team came and had the inverter, batteries and first string of panels installed on the first day; on the second day they got the final string installed and everything was done. The job was very tidy and looks great.

    I did have one issue where the battery kept draining itself and power usage was jumping all over the place, but on investigation I found that the CT clamp was on upside down: an easy mistake to make, and it was easy enough for me to fix.

    The electrician got the dongle hooked up to my WiFi and helped set up the app on my phone. I did however have to contact ESE as, by default, they don’t leave the ESS Greenlinx battery dongles in; this means it’s not possible to individually monitor the batteries, and should this be required in the future I would be stuck if ESE went into administration. (I’m now waiting for these to be shipped to me.)

    Aftercare

    Generally ESE have been great, the install was quick and efficient.

    However, I do believe I was slightly misled by them. I went with them because the contract stated I would be able to use any energy provider to get SEG payments via the Flexi-Orb certificate, which they said is the same as MCS but an alternative.

    The way this was worded led me to believe I would be fine with ANY energy supplier. That’s not the case: currently only five energy suppliers accept Flexi-Orb, and Octopus is not one of them. This is partially my fault as I didn’t double check or specifically ask if Octopus would accept it, but I believe the contract shouldn’t state that SEG payments can be from any supplier I choose when in fact what they meant was any supplier that accepts Flexi-Orb certificates.

    The results

    Here you can see my solar, battery usage along with my import / export from the grid.

    Solar graph

    In the first full month since the install I have spent £6.95 on electricity, and that was with some quite bad weather days. I’m still not on an export tariff so I’m unsure how much this will yield, but judging by the amount I have generated it should be a nice bit of money which will hopefully cover the standing charges.

    Things to check / be aware of

    To ensure that you don’t get stuck in the same situation, there are a few things I would highly recommend you take into consideration:

    1. Pay your deposit by Credit Card.
    2. If they insist on a Bank Transfer, REFUSE!
      1. They may give excuses like the card machine isn’t working, if they don’t offer you to pay another day, walk away! (I can’t stress this enough!)
      2. Bank transfer (BACS) payments are not protected by your bank; only credit card payments are.
    3. Not all solar companies are MCS Certified, some issue Flexi-Orb Certificates instead.
      1. Not all energy suppliers accept flexi-orb, at the time of writing only five of the major suppliers accept them
        1. E.ON
        2. Scottish Power
        3. British Gas
        4. SSE
        5. OVO Energy
      2. It is likely that once Flexi-Orb is accredited that all energy suppliers will accept it, but at time of writing this is something to be aware of.
    4. HEIS will email you within 48 hours to confirm registration and your cover
      1. If you don’t get an email from HEIS, contact them immediately to ensure you have been registered, failure to do so will mean you are not protected by them.
      2. Honestly, from what I have heard, HEIS protection isn’t worth the paper it’s written on.
    5. You are only covered by HEIS for 120 days; if your install is going to be after this time, you won’t be covered.
      1. If your install gets delayed and will breach the 120 days, contact HEIS and ask what could be done to ensure you are still protected
    6. You will need a DNO application approved before installation:
      1. DNO G98 - For installs under 16A per phase, which is the equivalent of 3.68kWp for a single-phase supply (230V × 16A = 3,680W)
      2. DNO G99 - For installs greater than 16A per phase
    7. Ensure the electrician is qualified, ideally registered with the NICEIC
      1. Without a qualified electrician doing the install you will be unable to get a valid certificate
    8. Ensure you are provided with the necessary electrical certificates to ensure your install is legal
      1. Without these you would need to have an electrician carry out an EICR on the installation
    9. Don’t rely on your installer doing things correctly, check everything

    Final Thoughts

    My overall solar experience has been stressful to say the least. I’m glad that it’s finally getting sorted, but it’s been a horrible situation for me and the other 1000 First 4 Solar customers who have been conned out of money, likely never to see it again!

    If you are looking for solar I have been very impressed with ESE Group so far; obviously I will update this based on the install, but my Dad had his completed by them and they did a great job.

    Were you impacted by this nightmare? Let me know in the comments how your install went.

    This post is licensed under CC BY 4.0 by the author.

    Automating deployments using Terraform with Proxmox and ansible

    Add series links to Jekyll posts

    diff --git a/posts/managing-application-settings-in-php/index.html b/posts/managing-application-settings-in-php/index.html index 58286f88d..4ddcff1be 100644 --- a/posts/managing-application-settings-in-php/index.html +++ b/posts/managing-application-settings-in-php/index.html @@ -1,4 +1,4 @@ - Managing Application Settings in PHP | TotalDebug
    Home Managing Application Settings in PHP
    Post
    Cancel

    Managing Application Settings in PHP

    1388880000
    1666884241

    There are multiple ways to save application settings/configurations in PHP. You can save them in INI, XML or PHP files as well as a database table. I prefer a combination of the latter two; saving the database connection details in a PHP file and the rest in a database table.

    The advantage of using this approach over the others will be apparent when developing downloadable scripts, as updates will not need to modify a configuration file of an already setup script.

    To start create a table containing 3 fields: auto increment ID, setting name and setting value:

    CREATE TABLE IF NOT EXISTS `settings` (
    + Managing Application Settings in PHP | TotalDebug
    Home Managing Application Settings in PHP
    Post
    Cancel

    Managing Application Settings in PHP

    1388880000
    1666884241

    There are multiple ways to save application settings/configurations in PHP. You can save them in INI, XML or PHP files as well as a database table. I prefer a combination of the latter two; saving the database connection details in a PHP file and the rest in a database table.

    The advantage of using this approach over the others will be apparent when developing downloadable scripts, as updates will not need to modify a configuration file of an already setup script.

    To start create a table containing 3 fields: auto increment ID, setting name and setting value:

    CREATE TABLE IF NOT EXISTS `settings` (
       `setting_id` int(11) NOT NULL AUTO_INCREMENT,
       `setting` varchar(50) NOT NULL,
       `value` varchar(500) NOT NULL,
    @@ -119,4 +119,4 @@
     $mail->Host = $setting['email_server'];
     $mail->Port = $setting['email_port'];
     ?>
    -

    This code does not filter the values sent to SaveSetting(). To prevent SQL injection and XSS attacks please make sure you check the values before saving them and also after reading them using GetSetting().

    This post is licensed under CC BY 4.0 by the author.

    How to recreate all Virtual Directories for Exchange 2007

    PHP Notice: Undefined index

    +

    This code does not filter the values sent to SaveSetting(). To prevent SQL injection and XSS attacks please make sure you check the values before saving them and also after reading them using GetSetting().

    This post is licensed under CC BY 4.0 by the author.

    How to recreate all Virtual Directories for Exchange 2007

    PHP Notice: Undefined index

    diff --git a/posts/mapping-a-network-drive-in-nt4-with-logon-credentials/index.html b/posts/mapping-a-network-drive-in-nt4-with-logon-credentials/index.html index 28dcf6d7c..a828f773e 100644 --- a/posts/mapping-a-network-drive-in-nt4-with-logon-credentials/index.html +++ b/posts/mapping-a-network-drive-in-nt4-with-logon-credentials/index.html @@ -1 +1 @@ - Mapping a network drive in NT4 with logon credentials | TotalDebug
    Home Mapping a network drive in NT4 with logon credentials
    Post
    Cancel

    Mapping a network drive in NT4 with logon credentials

    1311807600
    1614629284

    Ok so today I had a customer come to me saying that when they map a network drive in NT4 the user details don’t get remembered when the pc is rebooted.

    Here is a simple solution to the issue we have been having:

    net use I: \\SERVERNAME\SHARENAME /User:DOMAIN\username password

    Run this at startup or as a logon script and the issue will be no more.

    This post is licensed under CC BY 4.0 by the author.

    Send on Behalf and Send As

    Warning: Cannot modify header information – headers already sent by…

    + Mapping a network drive in NT4 with logon credentials | TotalDebug
    Home Mapping a network drive in NT4 with logon credentials
    Post
    Cancel

    Mapping a network drive in NT4 with logon credentials

    1311807600
    1614629284

    Ok so today I had a customer come to me saying that when they map a network drive in NT4 the user details don’t get remembered when the pc is rebooted.

    Here is a simple solution to the issue we have been having:

    net use I: \\SERVERNAME\SHARENAME /User:DOMAIN\username password

    Run this at startup or as a logon script and the issue will be no more.

    This post is licensed under CC BY 4.0 by the author.

    Send on Behalf and Send As

    Warning: Cannot modify header information – headers already sent by…

    diff --git a/posts/migrate-teamspeak-3-sqlite-mysql/index.html b/posts/migrate-teamspeak-3-sqlite-mysql/index.html index 30f2963d6..29cd6353a 100644 --- a/posts/migrate-teamspeak-3-sqlite-mysql/index.html +++ b/posts/migrate-teamspeak-3-sqlite-mysql/index.html @@ -1,4 +1,4 @@ - Migrate TeamSpeak 3 from SQLite to MySQL | TotalDebug
    Home Migrate TeamSpeak 3 from SQLite to MySQL
    Post
    Cancel

    Migrate TeamSpeak 3 from SQLite to MySQL

    1398639600
    1614629284

    One of the things I wanted to do was migrate my TeamSpeak server from SQLite to MySQL, so I created the steps below, which make the migration easy.

    1. Stop the TeamSpeak Server

    2. Run the following command to export the configuration:

    1
    + Migrate TeamSpeak 3 from SQLite to MySQL | TotalDebug
    Home Migrate TeamSpeak 3 from SQLite to MySQL
    Post
    Cancel

    Migrate TeamSpeak 3 from SQLite to MySQL

    1398639600
    1614629284

    One of the things I wanted to do was migrate my TeamSpeak server from SQLite to MySQL, so I created the steps below, which make the migration easy.

    1. Stop the TeamSpeak Server

    2. Run the following command to export the configuration:

    1
     
    sqlite3 ts3server.sqlitedb .dump | grep -v "sqlite_sequence" | grep -v "COMMIT;" | grep -v "BEGIN TRANSACTION;" | grep -v "PRAGMA " | sed 's/autoincrement/auto_increment/Ig' | sed 's/"/`/Ig' > ts3_export.sql
     

    This will export the SQLite configuration in MySQL Format to a file called ts3_export.sql
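
    Before importing, it can be worth a quick sanity check that the dump no longer contains any SQLite-specific statements (a minimal check, assuming the export file name above; the grep should return no output):

    head -n 20 ts3_export.sql
    grep -iE "sqlite_sequence|PRAGMA|BEGIN TRANSACTION" ts3_export.sql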

    3. Import the configuration to MySQL:
    1
     
    mysql -u username -p database_name < ts3_export.sql
    @@ -54,4 +54,4 @@
     password=ts3password
     database=ts3db
     socket=
    -
    1. Start TeamSpeak and it should now be working on MySQL
    This post is licensed under CC BY 4.0 by the author.

    Cisco ASDM Java Runtime Device Conenction

    Teamspeak 3 with MySQL on CentOS 6.x (before 3.0.11.1)

    +
    1. Start TeamSpeak and it should now be working on MySQL
    This post is licensed under CC BY 4.0 by the author.

    Cisco ASDM Java Runtime Device Conenction

    Teamspeak 3 with MySQL on CentOS 6.x (before 3.0.11.1)

    diff --git a/posts/mikrotik-openvpn-server-with-linux-client/index.html b/posts/mikrotik-openvpn-server-with-linux-client/index.html index b3ec974fe..2a29931a4 100644 --- a/posts/mikrotik-openvpn-server-with-linux-client/index.html +++ b/posts/mikrotik-openvpn-server-with-linux-client/index.html @@ -1,4 +1,4 @@ - Mikrotik OpenVPN Server with Linux Client | TotalDebug
    Home Mikrotik OpenVPN Server with Linux Client
    Post
    Cancel

    Mikrotik OpenVPN Server with Linux Client

    1438124400
    1666884241

    I spent quite some time trying to get the OpenVPN Server working on the Mikrotik Router with a Linux client; it caused some pain and I didn’t want others to go through that. I have therefore written this guide, taking you from certificate creation all the way to VPN connectivity.

    For this tutorial I will use SSH to my Mikrotik (you can use a WinBox terminal); I have chosen not to use the WinBox GUI for the configuration as it’s easier to deploy this way.

    Certificate Creation

    First we need to create our certificate templates on our Mikrotik.

    1
    + Mikrotik OpenVPN Server with Linux Client | TotalDebug
    Home Mikrotik OpenVPN Server with Linux Client
    Post
    Cancel

    Mikrotik OpenVPN Server with Linux Client

    1438124400
    1666884241

    I spent quite some time trying to get the OpenVPN Server working on the Mikrotik Router with a Linux client; it caused some pain and I didn’t want others to go through that. I have therefore written this guide, taking you from certificate creation all the way to VPN connectivity.

    For this tutorial I will use SSH to my Mikrotik (you can use a WinBox terminal); I have chosen not to use the WinBox GUI for the configuration as it’s easier to deploy this way.

    Certificate Creation

    First we need to create our certificate templates on our Mikrotik.

    1
     2
     
    /certificate
     add name=ca-template country=GB locality=Leeds organization=SpottedHyena state=WestYorkshire common-name="home.server.co.uk" key-size=2048 unit=IT
    @@ -92,4 +92,4 @@
     
    openvpn /etc/openvpn/MyVpn.ovpn
     

    In the second SSH Window run:

    1
     
    tail -f /var/log/openvpn.log
    -

    Watch the log closely, you will see errors in here which will help with troubleshooting any issues.

    Troubleshooting

    Compression: At the time of writing compression is not supported by Mikrotik, please make sure no LZO lines are present in the configuration.

    Certificates: Check that your certificate and key were imported properly and that your client is configured to trust the self-signed certificate or the CA you used.
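
    Two quick checks from the Linux client can rule out both of the above (a minimal sketch; the certificate path is an assumption, so adjust the filenames to match the ones you exported earlier):

    grep -i lzo /etc/openvpn/MyVpn.ovpn    # should return nothing, as Mikrotik does not support LZO compression
    openssl x509 -in /etc/openvpn/client.crt -noout -subject -issuer -dates    # confirm the client certificate details and validity period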

    Security

    There are some security improvements that could be made to this configuration, however this is to get you up and running.

    1. Limit the port access to a specific Source IP Address so that only you can connect
    2. Configure better passwords, the ones shown are examples only
    3. Consider using a separate bridge so that the VPN has its own filters and rules
    4. Change the security of the firewall-auth.txt and home.up files to 600

    Hopefully this will be helpful to someone out there.

    If you have any issues add a comment below and I will get back to you ASAP.

    This post is licensed under CC BY 4.0 by the author.

    VMware Transparent Page Sharing TPS

    VMware ESXi Embedded Host Client Installation – Updated

    +

    Watch the log closely, you will see errors in here which will help with troubleshooting any issues.

    Troubleshooting

    Compression: At the time of writing compression is not supported by Mikrotik, please make sure no LZO lines are present in the configuration.

    Certificates: Check that your certificate and key were imported properly and that your client is configured to trust the self-signed certificate or the CA you used.

    Security

    There are some security improvements that could be made to this configuration, however this is to get you up and running.

    1. Limit the port access to a specific Source IP Address so that only you can connect
    2. Configure better passwords, the ones shown are examples only
    3. Consider using a separate bridge so that the VPN has its own filters and rules
    4. Change the security of the firewall-auth.txt and home.up files to 600

    Hopefully this will be helpful to someone out there.

    If you have any issues add a comment below and I will get back to you ASAP.

    This post is licensed under CC BY 4.0 by the author.

    VMware Transparent Page Sharing TPS

    VMware ESXi Embedded Host Client Installation – Updated

    diff --git a/posts/numa-and-vnuma-made-simple/index.html b/posts/numa-and-vnuma-made-simple/index.html index 30148d923..2475afe81 100644 --- a/posts/numa-and-vnuma-made-simple/index.html +++ b/posts/numa-and-vnuma-made-simple/index.html @@ -1 +1 @@ - NUMA and vNUMA made simple! | TotalDebug
    Home NUMA and vNUMA made simple!
    Post
    Cancel

    NUMA and vNUMA made simple!

    1426464000
    1614629284

    What is NUMA?

    Most modern CPUs, such as Intel’s Nehalem and AMD’s veteran Opteron, are NUMA architectures. NUMA stands for Non-Uniform Memory Access. Each CPU gets assigned its own “local” memory; the CPU and memory together form a NUMA node (as shown in the diagram below).

    Memory access time can differ depending on the memory’s location relative to a processor, because a CPU can access its own memory faster than remote memory, creating higher latency when remote memory is required.

    In short, NUMA links multiple small, high-performing nodes together inside a single server.

    NUMA Diagram

    What is vNUMA

    vNUMA stands for Virtual Non-Uniform Memory Access. ESX has been NUMA-aware since 2002, with VMware ESX 1.5 introducing memory management features to improve locality on NUMA hardware. This works very well for placing VMs on local memory for resources being used by that VM, particularly for VMs that are smaller than the NUMA node. Large VMs, however, will start to see performance issues as they breach a single node; these VMs will require some additional help with resource scheduling.

    When enabled, vNUMA exposes the physical NUMA topology to the VM’s OS. This provides performance improvements within the VM by allowing the OS and programs to best utilise the NUMA optimisations. VMs will then benefit from NUMA, even if the VM itself is larger than the physical NUMA nodes.

    • An administrator can enable / disable vNUMA on a VM using advanced vNUMA Controls
    • If a VM has more than eight vCPUs, vNUMA is auto enabled
    • If CPU Hot Add is enabled, vNUMA is Disabled
    • The operating system must be NUMA Aware

    How to determine the size of a NUMA node

    In most cases the easiest way to estimate a NUMA node’s boundaries is by dividing the amount of physical RAM by the number of logical processors (cores); this is a very loose guideline. Further information on determining the specific NUMA node setup can be found here.
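
    If you want to confirm the host’s actual NUMA layout rather than estimating it, ESXi can report the node count directly from an SSH session (a quick check, assuming ESXi 5.x or later where this esxcli namespace exists):

    esxcli hardware memory get    # reports total physical memory and the NUMA node count for the host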

    What happens with vNUMA during vMotion?

    A VM will initially have its vNUMA topology built when it is powered on; each time it reboots this will be reapplied depending on the host it sits upon. In the case of a vMotion, the vNUMA topology will stay the same until the VM is rebooted, at which point it will be re-evaluated. This is another great argument for making sure all hardware in a cluster is the same, as it will avoid NUMA mismatches which could cause severe performance issues.

    Check if a VM is using resources from another NUMA node

    If you start to see performance issues with VMs then I would recommend running this test to make sure that the VM isn’t using resources from other nodes.

    1. SSH to the ESXi host that the VM resides on
    2. Type esxtop and press enter
    3. Press “m”
    4. Press “f”
    5. Press “G” until a * shows next to NUMA STATS
    6. Look at the N%L column; this shows the percentage of the VM’s memory that is local to its NUMA node. If it is lower than 100, the VM is using memory from another NUMA node, see the example shown below: Numa Usage

    As you can see, we have multiple VMs using memory from other NUMA nodes. These VMs were showing slower performance than expected; once we sized them correctly they stopped spanning NUMA nodes and this resolved our issues.

    Conclusion

    NUMA plays a vital part in understanding performance within virtual environments. VMware ESXi 5.0 and above have extended capabilities for VMs with intelligent NUMA scheduling and improved VM-level optimisation with vNUMA. It is important to understand how both NUMA and vNUMA work when sizing any virtual machine, as getting this wrong can have a detrimental effect on your environment’s performance.

    This post is licensed under CC BY 4.0 by the author.

    Offline Upgrade ESXi 5.5 to 6.0

    vCenter 6.0 VCSA Deployment

    + NUMA and vNUMA made simple! | TotalDebug
    Home NUMA and vNUMA made simple!
    Post
    Cancel

    NUMA and vNUMA made simple!

    1426464000
    1614629284

    What is NUMA?

    Most modern CPUs, such as Intel’s Nehalem and AMD’s veteran Opteron, are NUMA architectures. NUMA stands for Non-Uniform Memory Access. Each CPU gets assigned its own “local” memory; the CPU and memory together form a NUMA node (as shown in the diagram below).

    Memory access time can differ depending on the memory’s location relative to a processor, because a CPU can access its own memory faster than remote memory, creating higher latency when remote memory is required.

    In short, NUMA links multiple small, high-performing nodes together inside a single server.

    NUMA Diagram

    What is vNUMA

    vNUMA stands for Virtual Non-Uniform Memory Access. ESX has been NUMA-aware since 2002, with VMware ESX 1.5 introducing memory management features to improve locality on NUMA hardware. This works very well for placing VMs on local memory for resources being used by that VM, particularly for VMs that are smaller than the NUMA node. Large VMs, however, will start to see performance issues as they breach a single node; these VMs will require some additional help with resource scheduling.

    When enabled, vNUMA exposes the physical NUMA topology to the VM’s OS. This provides performance improvements within the VM by allowing the OS and programs to best utilise the NUMA optimisations. VMs will then benefit from NUMA, even if the VM itself is larger than the physical NUMA nodes.

    • An administrator can enable / disable vNUMA on a VM using advanced vNUMA Controls
    • If a VM has more than eight vCPUs, vNUMA is auto enabled
    • If CPU Hot Add is enabled, vNUMA is Disabled
    • The operating system must be NUMA Aware

    How to determine the size of a NUMA node

    In most cases the easiest way to estimate a NUMA node’s boundaries is by dividing the amount of physical RAM by the number of logical processors (cores); this is a very loose guideline. Further information on determining the specific NUMA node setup can be found here.

    What happens with vNUMA during vMotion?

    A VM will initially have its vNUMA topology built when it is powered on; each time it reboots this will be reapplied depending on the host it sits upon. In the case of a vMotion, the vNUMA topology will stay the same until the VM is rebooted, at which point it will be re-evaluated. This is another great argument for making sure all hardware in a cluster is the same, as it will avoid NUMA mismatches which could cause severe performance issues.

    Check if a VM is using resources from another NUMA node

    If you start to see performance issues with VMs then I would recommend running this test to make sure that the VM isn’t using resources from other nodes.

    1. SSH to the ESXi host that the VM resides on
    2. Type esxtop and press enter
    3. Press “m”
    4. Press “f”
    5. Press “G” until a * shows next to NUMA STATS
    6. Look at the N%L column; this shows the percentage of the VM’s memory that is local to its NUMA node. If it is lower than 100, the VM is using memory from another NUMA node, see the example shown below: Numa Usage

    As you can see, we have multiple VMs using memory from other NUMA nodes. These VMs were showing slower performance than expected; once we sized them correctly they stopped spanning NUMA nodes and this resolved our issues.

    Conclusion

    NUMA plays a vital part in understanding performance within virtual environments. VMware ESXi 5.0 and above have extended capabilities for VMs with intelligent NUMA scheduling and improved VM-level optimisation with vNUMA. It is important to understand how both NUMA and vNUMA work when sizing any virtual machine, as getting this wrong can have a detrimental effect on your environment’s performance.

    This post is licensed under CC BY 4.0 by the author.

    Offline Upgrade ESXi 5.5 to 6.0

    vCenter 6.0 VCSA Deployment

    diff --git a/posts/office-365-scan-to-email/index.html b/posts/office-365-scan-to-email/index.html index 24858de53..987085677 100644 --- a/posts/office-365-scan-to-email/index.html +++ b/posts/office-365-scan-to-email/index.html @@ -1 +1 @@ - Office 365 Scan to Email | TotalDebug
    Home Office 365 Scan to Email
    Post
    Cancel

    Office 365 Scan to Email

    1325808000
    1614629284

    OK, so this one had me stumped for a LONG time trying to figure out how to get scanners to authenticate to Office 365. In the end I found out that the scanner I was using wasn’t supported in this way, so I found this workaround; hope it helps you!

    You basically need to create an SMTP relay on a local server / computer to forward your scans to, then set the SMTP relay up as below, which will then do the authentication part for you.

     

    SMTP relay settings for Office 365

    To configure an SMTP relay in Office 365, you need the following:

    • A user who has an Exchange Online mailbox
    • The SMTP set to port 587
    • Transport Layer Security (TLS) encryption enabled
    • The mailbox server name

    To obtain SMTP settings information, follow these steps:

    1. Sign in to Outlook Web App.
    2. Click Options, and then click See All Options.
    3. Click Account, click My Account, and then in the Account Information area, click Settings for POP, IMAP, and SMTP access.Note the SMTP settings information that is displayed on this page.

    Configure Internet Information Services (IIS)

    To configure Internet Information Services (IIS) so that your LOB programs can use the SMTP relay, follow these steps:

    1. Create a user who has an Exchange Online mailbox. To do this, use one of the following methods:
      • Create the user in Active Directory Domain Services, run directory synchronization, and then activate the user by using an Exchange Online license.Note The user must not have an on-premises mailbox.
      • Create the user by using the Office 365 portal or by using Microsoft Online Services PowerShell Module, and then assign the user an Exchange Online license.
    2. Configure the IIS SMTP relay server. To do this, follow these steps:
      1. Install IIS on an internal server. During the installation, select the option to install the SMTP components.
      2. In Internet Information Services (IIS) Manager, expand the Default SMTP Virtual Server, and then click Domains.
      3. Right-click Domains, click New, click Domain, and then click Remote.
      4. In the Name box, type *.com, and then click Finish.
    3. Double-click the domain that you just created.
    4. Click to select the Allow incoming mail to be relayed to this domain check box.
    5. In the Route domain area, click Forward all mail to smart host, and then in the box, type the mailbox server name.
    6. Click Outbound Security, and then configure the following settings:
      1. Click Basic Authentication.
      2. In the User name box, type the user name of the Office 365 mailbox user.
      3. In the Password box, type the password of the Office 365 mailbox user.
      4. Click to select the TLS encryption check box, and then click OK.
    7. Right-click the Default SMTP Virtual Server node, and then click Properties.
    8. On the Delivery tab, click Outbound Connections.
    9. In the TCP Port box, type 587, and then click OK.
    10. Click Outbound Security, and then configure the following settings:
      1. Click Basic Authentication.
      2. In the User name box, type the user name of the Office 365 mailbox user.
      3. In the Password box, type the password of the Office 365 mailbox user.
      4. Click to select the TLS encryption check box, and then click OK.
    11. On the Access tab, click Authentication, click to select the Anonymous access check box, and then click OK.
    12. On the Relay tab, select Only the list below, type the IP addresses of the client computers that will be sending the email messages, and then click OK.
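
    Once the relay is configured, it is worth testing it from one of the allowed client IPs before pointing the scanner at it. A rough manual test using telnet (hypothetical host name and addresses, substitute your own):

    telnet relayserver.local 25
    # then type the SMTP conversation by hand:
    #   EHLO scanner.local
    #   MAIL FROM:<scans@yourdomain.com>
    #   RCPT TO:<you@yourdomain.com>
    #   DATA
    #   Subject: relay test
    #
    #   test message
    #   .
    #   QUIT
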
    This post is licensed under CC BY 4.0 by the author.

    Assigning Send As Permissions to a user

    Active Sync Error EventID 3005 Unexpected Exchange Mailbox Server Error

    + Office 365 Scan to Email | TotalDebug
    Home Office 365 Scan to Email
    Post
    Cancel

    Office 365 Scan to Email

    1325808000
    1614629284

    OK, so this one had me stumped for a LONG time trying to figure out how to get scanners to authenticate to Office 365. In the end I found out that the scanner I was using wasn’t supported in this way, so I found this workaround; hope it helps you!

    You basically need to create an SMTP relay on a local server / computer to forward your scans to, then set the SMTP relay up as below, which will then do the authentication part for you.

     

    SMTP relay settings for Office 365

    To configure an SMTP relay in Office 365, you need the following:

    • A user who has an Exchange Online mailbox
    • The SMTP set to port 587
    • Transport Layer Security (TLS) encryption enabled
    • The mailbox server name

    To obtain SMTP settings information, follow these steps:

    1. Sign in to Outlook Web App.
    2. Click Options, and then click See All Options.
    3. Click Account, click My Account, and then in the Account Information area, click Settings for POP, IMAP, and SMTP access.Note the SMTP settings information that is displayed on this page.

    Configure Internet Information Services (IIS)

    To configure Internet Information Services (IIS) so that your LOB programs can use the SMTP relay, follow these steps:

    1. Create a user who has an Exchange Online mailbox. To do this, use one of the following methods:
      • Create the user in Active Directory Domain Services, run directory synchronization, and then activate the user by using an Exchange Online license.Note The user must not have an on-premises mailbox.
      • Create the user by using the Office 365 portal or by using Microsoft Online Services PowerShell Module, and then assign the user an Exchange Online license.
    2. Configure the IIS SMTP relay server. To do this, follow these steps:
      1. Install IIS on an internal server. During the installation, select the option to install the SMTP components.
      2. In Internet Information Services (IIS) Manager, expand the Default SMTP Virtual Server, and then click Domains.
      3. Right-click Domains, click New, click Domain, and then click Remote.
      4. In the Name box, type *.com, and then click Finish.
    3. Double-click the domain that you just created.
    4. Click to select the Allow incoming mail to be relayed to this domain check box.
    5. In the Route domain area, click Forward all mail to smart host, and then in the box, type the mailbox server name.
    6. Click Outbound Security, and then configure the following settings:
      1. Click Basic Authentication.
      2. In the User name box, type the user name of the Office 365 mailbox user.
      3. In the Password box, type the password of the Office 365 mailbox user.
      4. Click to select the TLS encryption check box, and then click OK.
    7. Right-click the Default SMTP Virtual Server node, and then click Properties.
    8. On the Delivery tab, click Outbound Connections.
    9. In the TCP Port box, type 587, and then click OK.
    10. Click Outbound Security, and then configure the following settings:
      1. Click Basic Authentication.
      2. In the User name box, type the user name of the Office 365 mailbox user.
      3. In the Password box, type the password of the Office 365 mailbox user.
      4. Click to select the TLS encryption check box, and then click OK.
    11. On the Access tab, click Authentication, click to select the Anonymous access check box, and then click OK.
    12. On the Relay tab, select Only the list below, type the IP addresses of the client computers that will be sending the email messages, and then click OK.
    This post is licensed under CC BY 4.0 by the author.

    Assigning Send As Permissions to a user

    Active Sync Error EventID 3005 Unexpected Exchange Mailbox Server Error

    diff --git a/posts/offline-upgrade-esxi-5-5-to-6-0/index.html b/posts/offline-upgrade-esxi-5-5-to-6-0/index.html index 456d7da45..e1dc1eab4 100644 --- a/posts/offline-upgrade-esxi-5-5-to-6-0/index.html +++ b/posts/offline-upgrade-esxi-5-5-to-6-0/index.html @@ -1,7 +1,7 @@ - Offline Upgrade ESXi 5.5 to 6.0 | TotalDebug
    Home Offline Upgrade ESXi 5.5 to 6.0
    Post
    Cancel

    Offline Upgrade ESXi 5.5 to 6.0

    1426377600
    1614629284

    This is a very short and sweet article documenting the offline upgrade process from 5.5 to 6.0

    1. Download the ESXi 6.0 Offline Bundle from the VMware website.
    2. Upload the file to the local datastore of the ESXi Host.
    3. Enable SSH on the ESXi Host
    4. Connect to the ESXi Host and run the below command:
    1
    + Offline Upgrade ESXi 5.5 to 6.0 | TotalDebug
    Home Offline Upgrade ESXi 5.5 to 6.0
    Post
    Cancel

    Offline Upgrade ESXi 5.5 to 6.0

    1426377600
    1614629284

    This is a very short and sweet article documenting the offline upgrade process from 5.5 to 6.0

    1. Download the ESXi 6.0 Offline Bundle from the VMware website.
    2. Upload the file to the local datastore of the ESXi Host.
    3. Enable SSH on the ESXi Host
    4. Connect to the ESXi Host and run the below command:
    1
     
    esxcli storage filesystem list
     
    5. You should now have a list of your datastores; copy the mount point and add it to the below command:
    1
     
    esxcli software vib install -d <path_to_bundle.zip>
     
    6. Wait until the upgrade has completed, then enter the following to reboot the host:
    1
     
    reboot
    -

    The host will reboot and you will now be able to connect with your client; you will be prompted to download the latest client and then you will be away!
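
    If you want to confirm the upgrade from the same SSH session once the host is back up (assuming SSH is still enabled after the reboot), the version can be checked directly:

    vmware -v    # should now report VMware ESXi 6.0.0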

    This post is licensed under CC BY 4.0 by the author.

    VMware Distributed Switches (dvSwitch)

    NUMA and vNUMA made simple!

    +

    The host will reboot and you will now be able to connect with your client; you will be prompted to download the latest client and then you will be away!

    This post is licensed under CC BY 4.0 by the author.

    VMware Distributed Switches (dvSwitch)

    NUMA and vNUMA made simple!

    diff --git a/posts/php-notice-undefined-index/index.html b/posts/php-notice-undefined-index/index.html index 63540e60a..29da62d23 100644 --- a/posts/php-notice-undefined-index/index.html +++ b/posts/php-notice-undefined-index/index.html @@ -1,4 +1,4 @@ - PHP Notice: Undefined index | TotalDebug
    Home PHP Notice: Undefined index
    Post
    Cancel

    PHP Notice: Undefined index

    1388880000
    1666884241

    I have had a few occasions when coding where I get the error PHP Notice: Undefined index. I found the below solution to this issue, which is an extremely simple fix!

    How to Fix

    One simple answer – isset()!

    The isset() function in PHP determines whether a variable is set and is not NULL. It returns a Boolean value: if the variable is set it will return true, and if the variable is not set or its value is NULL it will return false. More details on this function can be found in the PHP Manual.

    Example

    Let us consider an example. Below is the HTML code for a comment form in a blog.

    1
    + PHP Notice: Undefined index | TotalDebug
    Home PHP Notice: Undefined index
    Post
    Cancel

    PHP Notice: Undefined index

    1388880000
    1666884241

    I have had a few occasions when coding where I get the error PHP Notice: Undefined index. I found the below solution to this issue, which is an extremely simple fix!

    How to Fix

    One simple answer – isset()!

    The isset() function in PHP determines whether a variable is set and is not NULL. It returns a Boolean value: if the variable is set it will return true, and if the variable is not set or its value is NULL it will return false. More details on this function can be found in the PHP Manual.

    Example

    Let us consider an example. Below is the HTML code for a comment form in a blog.

    1
     2
     3
     4
    @@ -54,4 +54,4 @@
     
    www.someexample.com/comments.php
     www.someexample.com/comments.php?action=add
     www.someexample.com/comments.php?action=delete
    -

    All these URLs go to the same page, but each performs a different task. So when I try to access the page through the first URL, it will give me the ‘Undefined index’ notice since the parameter ‘action’ is not set.

    We can fix this using the isset() function too, but in this instance we can just ignore it by hiding the notices like this: error_reporting(E_ALL ^ E_NOTICE);

    You can also turn off error reporting in your php.ini file or .htaccess file, but it is not considered a wise move if you are still in the testing stage.

    This is another simple solution in PHP for a common complex problem. Hope it is useful.

    This is an example only; my form has no security hardening. Use at your own risk.

    This post is licensed under CC BY 4.0 by the author.

    Managing Application Settings in PHP

    Use Google Authenticator for 2FA with SSH

    +

    All these URLs go to the same page, but each performs a different task. So when I try to access the page through the first URL, it will give me the ‘Undefined index’ notice since the parameter ‘action’ is not set.

    We can fix this using the isset() function too, but in this instance we can just ignore it by hiding the notices like this: error_reporting(E_ALL ^ E_NOTICE);

    You can also turn off error reporting in your php.ini file or .htaccess file, but it is not considered a wise move if you are still in the testing stage.

    This is another simple solution in PHP for a common complex problem. Hope it is useful.

    This is an example only; my form has no security hardening. Use at your own risk.

    This post is licensed under CC BY 4.0 by the author.

    Managing Application Settings in PHP

    Use Google Authenticator for 2FA with SSH

    diff --git a/posts/proxmox-template-with-cloud-image-and-cloud-init/index.html b/posts/proxmox-template-with-cloud-image-and-cloud-init/index.html index 1e4243b90..57a6f20e2 100644 --- a/posts/proxmox-template-with-cloud-image-and-cloud-init/index.html +++ b/posts/proxmox-template-with-cloud-image-and-cloud-init/index.html @@ -1,4 +1,4 @@ - Proxmox Template with Cloud Image and Cloud Init | TotalDebug
    Home Proxmox Template with Cloud Image and Cloud Init
    Post
    Cancel

    Proxmox Template with Cloud Image and Cloud Init

    1664909640
    1686233406

    Updated to latest Ubuntu image & Added enable for qemu service

    Using Cloud images and Cloud init with Proxmox is the quickest, most efficient way to deploy servers at this time. Cloud images are small, cloud-certified images that have cloud-init pre-installed and are ready to accept configuration.

    Cloud images and Cloud init also work with Proxmox, and if you combine this with Terraform you have a fully automated deployment model. See Automating deployments using Terraform with Proxmox and ansible for instructions on how to do this.

    Guide

    Download image

    First you will need to choose an Ubuntu Cloud Image

    Rather than downloading this, copy the URL.

    Then SSH into your Proxmox server and run wget with the URL you just copied, similar to below:

    1
    + Proxmox Template with Cloud Image and Cloud Init | TotalDebug
    Home Proxmox Template with Cloud Image and Cloud Init
    Post
    Cancel

    Proxmox Template with Cloud Image and Cloud Init

    1664909640
    1686233406

    Updated to latest Ubuntu image & Added enable for qemu service

    Using Cloud images and Cloud init with Proxmox is the quickest, most efficient way to deploy servers at this time. Cloud images are small, cloud-certified images that have cloud-init pre-installed and are ready to accept configuration.

    Cloud images and Cloud init also work with Proxmox, and if you combine this with Terraform you have a fully automated deployment model. See Automating deployments using Terraform with Proxmox and ansible for instructions on how to do this.

    Guide

    Download image

    First you will need to choose an Ubuntu Cloud Image

    Rather than downloading this, copy the URL.

    Then SSH into your Proxmox server and run wget with the URL you just copied, similar to below:

    1
     
    wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
     

    This will download the image onto your proxmox server ready for use.
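
    Optionally, you can verify the download against Ubuntu’s published checksums before using it (a quick check; the SHA256SUMS file sits alongside the image on cloud-images.ubuntu.com):

    wget https://cloud-images.ubuntu.com/jammy/current/SHA256SUMS
    sha256sum --check --ignore-missing SHA256SUMS    # should report jammy-server-cloudimg-amd64.img: OK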

    Install packages

    The qemu-guest-agent is not installed on the cloud images, so we need a way to inject it into our image file. This can be done with a great tool called virt-customize, which is installed with the package libguestfs-tools. libguestfs is a set of tools for accessing and modifying virtual machine (VM) disk images.

    Install:

    1
     
    sudo apt update -y && sudo apt install libguestfs-tools -y
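
    With libguestfs-tools installed, the agent can then be injected into the downloaded image. The exact command used later in this guide may differ slightly, but a typical invocation looks like this (assuming the jammy image filename from the wget step above):

    sudo virt-customize -a jammy-server-cloudimg-amd64.img --install qemu-guest-agent
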
    @@ -42,4 +42,4 @@
     2
     
    sudo qm stop 999 && sudo qm destroy 999
     rm jammy-server-cloudimg-amd64.img
    -

    References

    https://registry.terraform.io/modules/sdhibit/cloud-init-vm/proxmox/latest/examples/ubuntu_single_vm

    Closing

    Hopefully this information was useful for you. If you have any questions about this article or want to share your thoughts, head over to my Discord.

    This post is licensed under CC BY 4.0 by the author.

    Type hinting and checking in Python

    Homer dashboard with Docker

    +

    References

    https://registry.terraform.io/modules/sdhibit/cloud-init-vm/proxmox/latest/examples/ubuntu_single_vm

    Closing

    Hopefully this information was useful for you. If you have any questions about this article or want to share your thoughts, head over to my Discord.

    This post is licensed under CC BY 4.0 by the author.

    Type hinting and checking in Python

    Homer dashboard with Docker

    diff --git a/posts/send-on-behalf-and-send-as/index.html b/posts/send-on-behalf-and-send-as/index.html index dd9ccc98c..52cd6a399 100644 --- a/posts/send-on-behalf-and-send-as/index.html +++ b/posts/send-on-behalf-and-send-as/index.html @@ -1,7 +1,7 @@ - Send on Behalf and Send As | TotalDebug
    Home Send on Behalf and Send As
    Post
    Cancel

    Send on Behalf and Send As

    1311758400
    1666901265

    Send on Behalf and Send As are similar in fashion. Send on Behalf will allow a user to send as another user while showing the recipient that it was sent from a specific user on behalf of another user. What this means is that the recipient is cognizant of who actually initiated the sending message, regardless of who it was sent on behalf of. This may not be what you are looking to accomplish. In many cases, you may want to send as another person and you do not want the recipient to be cognizant of who initiated the message. Of course, a possible downside to this is that if the recipient replies, it may go to a user who did not initiate the sent message and might be confused depending on the circumstances. Send As can be useful in a scenario where you are sending as a mail-enabled distribution group. If someone replies, it will go to that distribution group which ultimately gets sent to every user who is a part of that distribution group. This article explains how to use both methods.

    Send on Behalf

    There are three ways to configure Send on Behalf. The first method is by using Outlook Delegates which allows a user to grant another user to Send on Behalf of their mailbox. The second method is having an Exchange Administrator go into the Exchange Management Shell (EMS) and grant a specific user to Send on Behalf of another user. The third and final method is using the Exchange Management Console (EMC).

    Outlook Delegates

    There are two major steps in order to use Outlook Delegates. The first is to select the user and add them as a delegate. You then must share your mailbox with that user.

    1. Go to Tools and choose Options
    2. Go to the Delegates Tab and click Add
    3. Select the user you wish to grant access to and click Add and then OK

    There are more options you can choose from once you select OK after adding that user. Nothing in the next window is necessary to grant send on behalf.

    1. When back at the main Outlook window, in the Folder List, choose your mailbox at the root level. This will appear as Mailbox – Full Name
    2. Right-click and choose Change Sharing Permissions
    3. Click the Add button
    4. Select the user you wish to grant access to and click Add and then OK
    5. In the permissions section, you must grant the user at minimum, Non-editing Author.

    Exchange Management Shell (EMS)

    This is a fairly simple process to complete. It consists of running only the following command and you are finished. The command is as follows:

    Set-Mailbox UserMailbox -GrantSendOnBehalfTo UserWhoSends

    Exchange Management Console (EMC)

    1. Go to Recipient Management and choose Mailbox
    2. Choose the mailbox and choose Properties in Action Pane
    3. Go to the Mail Flow Settings Tab and choose Delivery Options
    4. Click the Add button
    5. Select the user you wish to grant access to and click Add and then OK

    Send As

    As of Exchange 2007 SP1, there are two ways to configure SendAs. The first method is having an Exchange Administrator go into the Exchange Management Shell (EMS) and grant a specific user to SendAs of another user. The second and final method (added in SP1) is using the Exchange Management Console (EMC).

    Exchange Management Shell (EMS)

    The first method is to grant a specific user the ability to SendAs as another user. It consists of running only the following command and you are finished. The command is as follows:

    1
    + Send on Behalf and Send As | TotalDebug
    Home Send on Behalf and Send As
    Post
    Cancel

    Send on Behalf and Send As

    1311758400
    1666901265

    Send on Behalf and Send As are similar in fashion. Send on Behalf will allow a user to send as another user while showing the recipient that it was sent from a specific user on behalf of another user. What this means is that the recipient is cognizant of who actually initiated the sending message, regardless of who it was sent on behalf of. This may not be what you are looking to accomplish. In many cases, you may want to send as another person and you do not want the recipient to be cognizant of who initiated the message. Of course, a possible downside to this is that if the recipient replies, it may go to a user who did not initiate the sent message and might be confused depending on the circumstances. Send As can be useful in a scenario where you are sending as a mail-enabled distribution group. If someone replies, it will go to that distribution group which ultimately gets sent to every user who is a part of that distribution group. This article explains how to use both methods.

    Send on Behalf

    There are three ways to configure Send on Behalf. The first method is by using Outlook Delegates which allows a user to grant another user to Send on Behalf of their mailbox. The second method is having an Exchange Administrator go into the Exchange Management Shell (EMS) and grant a specific user to Send on Behalf of another user. The third and final method is using the Exchange Management Console (EMC).

    Outlook Delegates

    There are two major steps in order to use Outlook Delegates. The first is to select the user and add them as a delegate. You then must share your mailbox with that user.

    1. Go to Tools and choose Options
    2. Go to the Delegates Tab and click Add
    3. Select the user you wish to grant access to and click Add and then OK

    There are more options you can choose from once you select OK after adding that user. Nothing in the next window is necessary to grant send on behalf.

    1. When back at the main Outlook window, in the Folder List, choose your mailbox at the root level. This will appear as Mailbox – Full Name
    2. Right-click and choose Change Sharing Permissions
    3. Click the Add button
    4. Select the user you wish to grant access to and click Add and then OK
    5. In the permissions section, you must grant the user at minimum, Non-editing Author.

    Exchange Management Shell (EMS)

    This is a fairly simple process to complete. It consists of running only the following command and you are finished. The command is as follows:

    Set-Mailbox UserMailbox -GrantSendOnBehalfTo UserWhoSends

    Exchange Management Console (EMC)

    1. Go to Recipient Management and choose Mailbox
    2. Choose the mailbox and choose Properties in Action Pane
    3. Go to the Mail Flow Settings Tab and choose Delivery Options
    4. Click the Add button
    5. Select the user you wish to grant access to and click Add and then OK

    Send As

    As of Exchange 2007 SP1, there are two ways to configure SendAs. The first method is having an Exchange Administrator go into the Exchange Management Shell (EMS) and grant a specific user to SendAs of another user. The second and final method (added in SP1) is using the Exchange Management Console (EMC).

    Exchange Management Shell (EMS)

    The first method is to grant a specific user the ability to SendAs as another user. It consists of running only the following command and you are finished. The command is as follows:

    Add-ADPermission UserMailbox -ExtendedRights Send-As -user UserWhoSends
    -

    Exchange Management Console (EMC)

    1. Go to Recipient Management and choose Mailbox
    2. Choose the mailbox and choose Manage Send As Permissions in Action Pane
    3. Select the user you wish to grant access to and click Add and then OK

    Miscellaneous Information

    No “From:” Button

    In order for a user to Send on Behalf or Send As another user, their Outlook profile must be configured to show a From: button. By default, Outlook does not show the From: button. In order to configure a user’s Outlook profile to show the From: button:

    Replies

If you are sending as another user, the recipient might reply. By default, Outlook sets the reply address to whoever is configured as the sending address, so if I am user A sending on behalf of user B, the reply address will be set to user B. If you are the user initiating the message, you can manually set a different reply address in your Outlook profile.

    Conflicting Methods

If you are configuring Send on Behalf permissions on the Exchange server, ensure that the user is not trying to use Outlook Delegates at the same time. Recently, at a client, I was given the task of configuring Send As as well as Send on Behalf. As I was configuring Send As on the server, I found out that the client was attempting to use Outlook Delegates at the same time, and Send As would not work. Once the user had been removed from Outlook Delegates, along with their permissions at the root level of the mailbox (the level that appears as Mailbox – Full Name), Send As began to work. So keep in mind: if you are configuring Send As or Send on Behalf, use only one method for a specific user.

    SendAs Disappearing

If the user is in a Protected Group, a process in Active Directory called SDProp will come by every hour and remove Send As permissions on users in these protected groups. The security rights configured on these accounts are determined by the security rights assigned on the adminSDHolder object, which exists in each domain. The important part to remember is that every hour, inheritance on these protected accounts is removed and Send As is wiped away.

A good blog article explaining what adminSDHolder, SDProp and Protected Groups are is located here.


    diff --git a/posts/server-2003-reinstall-terminal-services-licensing/index.html b/posts/server-2003-reinstall-terminal-services-licensing/index.html index 2c5d0b2dc..ada16200b 100644 --- a/posts/server-2003-reinstall-terminal-services-licensing/index.html +++ b/posts/server-2003-reinstall-terminal-services-licensing/index.html @@ -1 +1 @@ - Server 2003 Reinstall Terminal Services Licensing. | TotalDebug
Server 2003 Reinstall Terminal Services Licensing.

I came across an issue today where I needed to reinstall Terminal Services Licensing, but when you do this the licensing is lost and needs to be re-applied.

I managed to resolve this by copying the licensing database to a different folder, re-installing Terminal Services Licensing, and then copying it back.

1. Stop the Terminal Services Licensing service
2. Copy c:\windows\system32\LServer\TLSLic.edb
3. Paste the database to a different location
4. Uninstall Terminal Services Licensing from Add/Remove Windows Components
5. Re-install Terminal Services Licensing
6. Stop the Terminal Services Licensing service
7. Copy the TLSLic.edb back to c:\windows\system32\LServer\, overwriting the new database that is in there
8. Start the Terminal Services Licensing service

    Now you will notice that TS Licensing is working and all of your licences still work.
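
If you need to do this more than once, the copy-out and copy-back steps can be scripted from a command prompt. This is only a rough sketch; the service name is taken from the steps above and may differ on your build, so check it in services.msc first:

```shell
net stop "Terminal Services Licensing"
copy c:\windows\system32\LServer\TLSLic.edb c:\TLSLic-backup.edb
rem ... uninstall and re-install Terminal Services Licensing via Add/Remove Windows Components, then:
net stop "Terminal Services Licensing"
copy /Y c:\TLSLic-backup.edb c:\windows\system32\LServer\TLSLic.edb
net start "Terminal Services Licensing"
```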

You CANNOT move this database to another server; it is registered to that licensing server!


    diff --git a/posts/setup-nfs-mount-centos-6/index.html b/posts/setup-nfs-mount-centos-6/index.html index 9f5f6f7e3..b895c33c6 100644 --- a/posts/setup-nfs-mount-centos-6/index.html +++ b/posts/setup-nfs-mount-centos-6/index.html @@ -1,4 +1,4 @@ - How to setup an NFS mount on CentOS 6 | TotalDebug
How to setup an NFS mount on CentOS 6

    About NFS (Network File System) Mounts

    NFS mounts allow sharing a directory between several servers. This has the advantage of saving disk space, as the directory is only kept on one server, and others can connect to it over the network. When setting up mounts, NFS is most effective for permanent fixtures that should always be accessible.

    Setup

    An NFS mount is set up between at least two servers. The machine hosting the shared directory is called the server, while the ones that connect to it are clients.

    This tutorial will take you through setting up the NFS server.

The setup should be performed as root:

    sudo su -
     

    Setting up the NFS Server

    1. Install the required software and start services

    First we use yum to install the required nfs programs.

    yum install nfs-utils nfs-utils-lib
    @@ -16,4 +16,4 @@
     
    /home           12.33.44.555(rw,sync,no_root_squash,no_subtree_check)
     

    These settings achieve the following:

1. rw: This option allows the client to both read and write within the shared directory
2. sync: Sync confirms requests to the shared directory only once the changes have been committed
3. no_subtree_check: This option disables subtree checking. When a shared directory is a subdirectory of a larger filesystem, NFS scans every directory above it in order to verify its permissions and details. Disabling the subtree check may increase the reliability of NFS, but it reduces security
4. no_root_squash: This option allows the client's root user to connect to the designated directory as root (normally root is mapped to an anonymous user)

    Once completed save the file and exit it, then run the following command to export the settings:

exportfs -a

    You now have a fully functioning NFS server. If there is anything you think I have missed from this tutorial please comment below.
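
This post only covers the server side, but a quick check from a client is worth noting. A minimal sketch, assuming nfs-utils is installed on the client and using a placeholder server address and mount point:

```shell
# List the directories the server is exporting
showmount -e nfs-server.example.com

# Mount the exported /home on the client
mkdir -p /mnt/nfs/home
mount -t nfs nfs-server.example.com:/home /mnt/nfs/home
```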


    diff --git a/posts/setup-rsnapshot-backups-centos/index.html b/posts/setup-rsnapshot-backups-centos/index.html index a6ac27f75..7c8e29f78 100644 --- a/posts/setup-rsnapshot-backups-centos/index.html +++ b/posts/setup-rsnapshot-backups-centos/index.html @@ -1,4 +1,4 @@ - Setup rSnapshot backups on CentOS | TotalDebug
Setup rSnapshot backups on CentOS

In this article I will be talking you through how to use rSnapshot and rSync to back up your server, with an email alert when the backup has completed showing what has been backed up.

    1. You must first have rSync and rSnapshot installed:
    yum -y install rsync rsnapshot
     
    1. Once installed you will then need to create the correct configuration files for your server. Here is an example of what I use (save as backup_config.conf):
    @@ -62,4 +62,4 @@
     0 0 * * * /usr/bin/rsnapshot -c /etc/rsnapshot/mp-vps01.conf daily
     

    If you would like email alerts use the following:

0 0 * * * /usr/bin/rsnapshot -c /etc/rsnapshot/mp-vps01.conf daily | mail -s "My Backup Job" your@email.co.uk

If the backup fails the email will be empty. I still haven't figured out how to get the errors emailed as well, so if you know how, please let me know in the comments!
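
One possible fix, untested here, is to redirect stderr into the pipe so that any rsnapshot errors end up in the mail body as well:

```shell
0 0 * * * /usr/bin/rsnapshot -c /etc/rsnapshot/mp-vps01.conf daily 2>&1 | mail -s "My Backup Job" your@email.co.uk
```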


    diff --git a/posts/setup-ubiquiti-unifi-usg-remote-user-vpn/index.html b/posts/setup-ubiquiti-unifi-usg-remote-user-vpn/index.html index 8936c714c..5b0f634f6 100644 --- a/posts/setup-ubiquiti-unifi-usg-remote-user-vpn/index.html +++ b/posts/setup-ubiquiti-unifi-usg-remote-user-vpn/index.html @@ -1 +1 @@ - Setup Ubiquiti UniFi USG Remote User VPN | TotalDebug
Setup Ubiquiti UniFi USG Remote User VPN

I have recently had loads of trouble setting up a Ubiquiti UniFi USG remote user VPN. The USG requires a RADIUS server in order to function correctly; that setup is covered in the freeRADIUS Setup article.

    Once RADIUS is setup the easy part is configuring the USG through the UniFi controller.

1. First you will need to log in to your UniFi Controller
2. Go to the settings
3. Then select Networks
4. Create a new network
5. Add a name for the VPN
6. Select Remote User VPN for the Purpose
7. Enter an IP address with CIDR, e.g. 192.168.10.1/24
8. Enter the IP address of your RADIUS server
9. Enter the port for your RADIUS server (default is 1812)
10. Enter your RADIUS server's secret key / password
11. Click Save

    That is all you need to do!

In version 5.3.11 and below, L2TP is not supported, which means the VPN will not work with iPhones / iPads etc.; this is supposed to be resolved in the next release.


    diff --git a/posts/snapshot-changes-vsphere-6-0/index.html b/posts/snapshot-changes-vsphere-6-0/index.html index 6601115ce..6211d453e 100644 --- a/posts/snapshot-changes-vsphere-6-0/index.html +++ b/posts/snapshot-changes-vsphere-6-0/index.html @@ -1 +1 @@ - Snapshot changes in vSphere 6.0 | TotalDebug
Snapshot changes in vSphere 6.0

    This is something that I was unaware of until recently when I was looking into the usage of V-Vols. It appears that VMware have made some major improvements to the ways we handle snapshots and consolidate them in vSphere 6.0 with VVols. Most people who use VMware are aware of limitations with snapshots on VMs that have heavy IO or large snapshots attached to them. In a large number of cases we see snapshots fail to remove and then require hours of downtime to actually consolidate.

Previously we would take a snapshot; this would make the VMDK read-only and create a new delta file that all new changes would be written to. This file would continue to grow and could potentially end up as big as the VM's allocated space. Depending on the size of the snapshot we would also take helper snapshots, or "Safe Removal Snapshots"; these would allow us to lower the IO on the large snapshot so that the VM didn't see as big an impact when consolidating the first, larger snapshot. We could then remove the helper snapshot, although in some cases the IO was too high for this to work. This could cause VMware to "Stun" the server, effectively freezing IO and allowing the snapshot removal to take over, causing downtime for our end users.

Eventually, if you were unable to merge the snapshots into the base disk, the server would need to be powered down and the snapshot removed; this could take hours…

    In vSphere 6.0 with VVols this has totally changed!

[Image: newsnapshots]

As you can see, we now take a snapshot but the base disk stays read/write, and multiple delta files are created containing the changed original data. This means that when we remove the snapshot, all we need to do is tell VMware to delete the deltas; there is no need to write it all back to the base VMDK as it's already there. This technique was first implemented for the VMware Mirror Driver in vMotion, and VMware have now utilised it to provide a near seamless snapshot capability in v6.0, stopping large amounts of downtime altogether. There should no longer be any noticeable stun time, as we are only removing the references to the snapshot.

    Interesting piece of information that I thought some of you might find useful.

    UPDATE:

I decided to do a test of snapshot removal times, using the same VM on both a VVol and a normal datastore and writing a 10 GB file to each in the same manner. The snapshot on the VVol took 3 seconds to remove; the one on the normal datastore took just over 3 minutes. This doesn't sound like a lot, but this is a lab VM with no load; imagine a 100 GB snapshot under heavy load!
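
For anyone wanting to repeat the comparison, the removal can be timed from PowerCLI. A rough sketch only, assuming an existing PowerCLI connection; the VM and snapshot names are placeholders and this is not the exact method used above:

```powershell
# Time how long it takes to remove a snapshot called "test" from a VM
Measure-Command {
    Get-VM -Name "MyTestVM" | Get-Snapshot -Name "test" | Remove-Snapshot -Confirm:$false
}
```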

    So it looks like there are huge benefits to be had with VVol moving forwards.


    diff --git a/posts/sqitch-sensible-database-change-management/index.html b/posts/sqitch-sensible-database-change-management/index.html index 64991d4c1..1ff91ebd8 100644 --- a/posts/sqitch-sensible-database-change-management/index.html +++ b/posts/sqitch-sensible-database-change-management/index.html @@ -1,7 +1,7 @@ - Sqitch, Sensible database change management | TotalDebug
Sqitch, Sensible database change management

    Overview

Recently I have been working on a few projects that utilise PostgreSQL databases. As the projects have grown, our team has found it increasingly difficult to manage all of the database changes between dev / staging / prod without missing parts of functions or table columns, especially over long development periods.

Due to this I spent the past month looking into many different ways to manage this, and we ended up landing on Sqitch. It wasn't the first product tested, and below I will run through some of the others that I found and the issues we saw with them.

    Expectations

    So what did our team expect would be delivered by the database change management tool?

    Well here is the list:

• Native SQL support
• No limitations on SQL functionality
• Open source, or a feature-rich community edition that is well supported
• Easily managed version control, ideally without the need for new SQL files for each change
• Ability to roll back changes to specific versions
• Unix command line utility for easy automation

    The testing phase

    Over about a month I tested the following products:

    Flyway

Flyway was very close to being the chosen product: it met most of our requirements, with a few limitations, and was the best I had found up to that point.

    Pros:

    • Uses native SQL
    • Easy file naming

    Cons:

    • A new file is required for every change, this would lead to hundreds of version files
• Inability to roll back to a specific version in time
    • Heavily limited functionality on the community edition
    • More complex implementation

Liquibase

Liquibase was looking great until I discovered that the main language used is XML. SQL is supported, however most documentation is XML based, and I didn't have the time to spend learning the XML format only to eventually find out that some specific feature we use isn't supported by it.

All in all, I found that it was more complex to get started with than Flyway, and the documentation wasn't the best.

    Pros:

    • More features in the free version than Flyway
    • Diff feature to compare two databases
    • Rollback is free
    • Utilises one file for migrations

    Cons:

    • XML is the primary language used
    • Targeted rollback is an addon

SQLAlchemy

As this is an ORM it was removed from the running fairly quickly. There is no native SQL support, which means a high chance of missing SQL functionality; one such feature was the ability to create and update Postgres functions.

    Pros:

    • Uses Python so can be baked into projects
    • Development Teams don’t need to know/learn SQL

    Cons:

    • Functionality limited to what the developers implement
    • Risk of compatibility issues in the future
    • No support for native SQL files

    Sqitch

Sqitch was the last option on the table. I found this tool while searching YouTube, when a very early version was being presented.

The idea of Sqitch is to use version control to track the changes in files, which was perfect for our requirements. It meant I could update existing SQL files and Sqitch would know a change had been made and could then deploy it.

One downside to this plan is that not all of these features are implemented yet, although the developers working on the project are making massive strides and I feel it won't be long until they have achieved the original goal they set out for.

    Pros:

    • Uses native SQL
• Utilises a git-like version control system
    • You always edit the original file
    • Open source allowing you to customise as needed
    • Very responsive community
    • Ability to support almost any database

    Cons:

    • Some expected features are not implemented yet
    • No commercial support, only community based

    Implementation

Now that we have tested and decided that Sqitch is the product for us, it's time to implement the solution.

Installation is super simple: it's written in Perl so it can be installed on almost any system, or you can use it within a Docker container.

I won't cover the installation as it's easy enough and documented well on the Sqitch website.

One thing that I would recommend is to change the default location of the files. By default Sqitch will add deploy, revert and verify directories to the root directory, and your SQL goes inside these directories. I prefer to have them in a separate directory to keep the root directory tidy; to do this you would run a command similar to the one below when initialising your repository:

     
    sqitch init myApp --top-dir sql --uri https://github.com/totaldebug/sqitch_demo --engine pg
     

This command tells Sqitch that you want to init a Sqitch project within the directory sql, for the GitHub repository sqitch_demo, with the engine pg (PostgreSQL). There are other options and databases supported, all listed here.

    Once you have initialised the project you are ready to add a change. The basic pattern is:

    • Create a branch
    • Add SQL changes
    • Modify the code as needed
    • Commit
    • Merge to master

So when first starting out you would want to create the schema. To do this you would (a condensed command-line sketch follows this list):

    1. Create a branch in your Git repo
    2. Run sqitch add appschema
    3. Edit sql/deploy/appschema.sql, sql/revert/appschema.sql and sql/verify/appschema.sql
    4. Run sqitch deploy db:pg://user@127.0.0.1:5432/sqitch_demo to deploy the changes
    5. Edit any code as normal
    6. Run any tests
    7. Commit your changes
    8. Merge the changes back to the main branch
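
That cycle, condensed into commands, might look like the following; the branch name, note text and connection string are assumptions for illustration:

```shell
git checkout -b add-appschema
sqitch add appschema -n "Add schema for the application"
# edit sql/deploy/appschema.sql, sql/revert/appschema.sql and sql/verify/appschema.sql
sqitch deploy db:pg://user@127.0.0.1:5432/sqitch_demo
sqitch verify db:pg://user@127.0.0.1:5432/sqitch_demo
git add . && git commit -m "Add appschema change"
git checkout master && git merge add-appschema
```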

    In order to ensure that your revert SQL is working as expected, it is a good idea to revert and redeploy your changes:

    sqitch rebase --onto @HEAD^ -y
     

    This command will revert the last change, and redeploy it to the database. This is essentially a shorter way of running:

sqitch revert --to @HEAD^ -y && sqitch deploy db:pg://user@127.0.0.1:5432/sqitch_demo

    When the deploy command is issued, sqitch will run down the plan file and execute each change that is required.

    If this is the first time deploying Sqitch to a database, it will automatically create all the required tables to track future deployments and changes.

    Conclusion

I've barely scratched the surface of Sqitch's capabilities. Considering how long Git and change management have been around, it's amazing that it's taken this long for someone to get it right. If you are having issues with managing database change, I highly suggest that you try Sqitch.


    diff --git a/posts/synchronize-time-with-external-ntp-server-on-windows-server/index.html b/posts/synchronize-time-with-external-ntp-server-on-windows-server/index.html index 1cd15ee70..bdb361d64 100644 --- a/posts/synchronize-time-with-external-ntp-server-on-windows-server/index.html +++ b/posts/synchronize-time-with-external-ntp-server-on-windows-server/index.html @@ -1,4 +1,4 @@ - Synchronise time with external NTP server on Windows Server | TotalDebug
Synchronise time with external NTP server on Windows Server

Time synchronization is an important aspect for all computers on the network. By default, client computers get their time from a Domain Controller, and the Domain Controller gets its time from the domain's PDC Operations Master. Therefore the PDC must synchronize its time from an external source. I usually use the servers listed on the NTP Pool Project website. Before you begin, don't forget to open the default UDP port 123 (inbound and outbound) on your firewall.

    First, locate your PDC Server. Open the command prompt and type:

    netdom /query fsmo
     

Log in to your PDC server and open the command prompt. Run the following command:

    net stop w32time
    @@ -10,4 +10,4 @@
     
    w32tm /resync /nowait
     

To check that the command has worked, run the following:

w32tm /query /configuration

When doing this on SBS you may get an access denied error; if you do, remove /reliable:yes from the command in step 3.
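
The configuration command that step 3 refers to is not shown in this diff hunk; on a typical PDC it looks something like the following. The peer list here is an assumption based on the NTP Pool Project servers mentioned earlier, so adjust it for your environment:

```shell
w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
```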


    diff --git a/posts/teamspeak-3-centos-7-using-mariadb-database-3-0-12-4/index.html b/posts/teamspeak-3-centos-7-using-mariadb-database-3-0-12-4/index.html index 68e5f45df..9261e9a9e 100644 --- a/posts/teamspeak-3-centos-7-using-mariadb-database-3-0-12-4/index.html +++ b/posts/teamspeak-3-centos-7-using-mariadb-database-3-0-12-4/index.html @@ -1,4 +1,4 @@ - Teamspeak 3 on CentOS 7 using MariaDB Database (3.0.12.4) | TotalDebug
Teamspeak 3 on CentOS 7 using MariaDB Database (3.0.12.4)

This tutorial takes you through setting up Teamspeak 3 on CentOS 7. I will also be going through using a MariaDB database for the backend and a custom system service script.

We are using MariaDB as MySQL no longer ships with CentOS, and MariaDB is a fork of MySQL.

Check out the video on YouTube:

    A few prerequisites that will be required before proceeding with this tutorial:

    yum update -y
     yum install wget perl net-tools mariadb mariadb-server -y
    @@ -142,4 +142,4 @@
     
    firewall-cmd --zone=public --add-port=30033/tcp --permanent
     

    Reload the firewall:

firewall-cmd --reload

Now connect with your TS3 client. The first person to log on will be asked to provide a privilege key; enter the one retrieved during the installation.


    diff --git a/posts/teamspeak-3-mysql-centos-6-x-3-0-11-1-onwards/index.html b/posts/teamspeak-3-mysql-centos-6-x-3-0-11-1-onwards/index.html index d4cbeccdb..c8e06b9b1 100644 --- a/posts/teamspeak-3-mysql-centos-6-x-3-0-11-1-onwards/index.html +++ b/posts/teamspeak-3-mysql-centos-6-x-3-0-11-1-onwards/index.html @@ -1,4 +1,4 @@ - Teamspeak 3 with MySQL on CentOS 6.x (3.0.11.1 Onwards) | TotalDebug
Teamspeak 3 with MySQL on CentOS 6.x (3.0.11.1 Onwards)

By default Teamspeak 3 uses a SQLite database. Most people tend to use this, however for those of us that prefer MySQL there is a way to change it.

Follow this small tutorial to create a Teamspeak 3 Server on CentOS 6.x using a MySQL Database!

First we need to install or upgrade MySQL. Install:

    @@ -259,4 +259,4 @@
     chkconfig --add teamspeak
     chkconfig teamspeak on
     service teamspeak start

    diff --git a/posts/teamspeak-3-mysql-centos-6-x/index.html b/posts/teamspeak-3-mysql-centos-6-x/index.html index d55c50c74..ad8045bcd 100644 --- a/posts/teamspeak-3-mysql-centos-6-x/index.html +++ b/posts/teamspeak-3-mysql-centos-6-x/index.html @@ -1,4 +1,4 @@ - Teamspeak 3 with MySQL on CentOS 6.x (before 3.0.11.1) | TotalDebug
Teamspeak 3 with MySQL on CentOS 6.x (before 3.0.11.1)

    As of Version 3.0.11.1 this tutorial is no longer applicable. I will soon re-write this to accommodate the latest version.

By default Teamspeak 3 uses a SQLite database. Most people tend to use this, however for those of us that prefer MySQL there is a way to change it.

Follow this small tutorial to create a Teamspeak 3 Server on CentOS 6.x using a MySQL Database!

VIDEO AVAILABLE HERE. First we need to have MySQL installed:

    yum install mysql-server mysql-common
     

To use a MySQL database, you need to install additional libraries that are not available from the default repositories. Download MySQL-shared-compat-6.0.11-0.rhel5.x86_64.rpm (this is the 64-bit version; if you are on a 32-bit system, you'll need to find it elsewhere) and install it:

    yum localinstall MySQL-shared-compat-6.0.11-0.rhel5.x86_64.rpm
    @@ -242,4 +242,4 @@
     chkconfig --add teamspeak
     chkconfig teamspeak on
     service teamspeak start

    diff --git a/posts/teamspeak-3-recovering-privilege-key-first-startup-mysqlmariadb/index.html b/posts/teamspeak-3-recovering-privilege-key-first-startup-mysqlmariadb/index.html index cfcaa4d19..d53674fec 100644 --- a/posts/teamspeak-3-recovering-privilege-key-first-startup-mysqlmariadb/index.html +++ b/posts/teamspeak-3-recovering-privilege-key-first-startup-mysqlmariadb/index.html @@ -1,4 +1,4 @@ - Teamspeak 3 Recovering privilege key after first startup (MySQL/MariaDB Only) | TotalDebug
Teamspeak 3 Recovering privilege key after first startup (MySQL/MariaDB Only)

When deploying a Teamspeak 3 server, one thing that is vital on first startup is to make a note of the privilege key. But what do you do if, for some reason, you didn't write it down?

    In this article I will show you how to retrieve it!

1. Log in to your Teamspeak3 server
    2. Connect to SQL:
      mysql -uyouruser -p
3. Select your TS3 database:
  USE <DatabaseName>;

4. Select the tokens table:
  SELECT * FROM tokens;

5. You should see a privilege key; copy the value from the token_key column
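
If you prefer a one-liner, the same lookup can be run non-interactively; the user and database name are placeholders, as above:

```shell
mysql -u youruser -p -e "SELECT token_key FROM tokens;" your_ts3_database
```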

It's as simple as that! The privilege key can only be used once; when it has been used it will be removed from the tokens table.


    diff --git a/posts/the-missing-manual-part-1-veeam-b-r-direct-san-backups/index.html b/posts/the-missing-manual-part-1-veeam-b-r-direct-san-backups/index.html index f43936be7..91ceadba9 100644 --- a/posts/the-missing-manual-part-1-veeam-b-r-direct-san-backups/index.html +++ b/posts/the-missing-manual-part-1-veeam-b-r-direct-san-backups/index.html @@ -1 +1 @@ - The Missing Manual Part 1: Veeam B & R Direct SAN Backups | TotalDebug
The Missing Manual Part 1: Veeam B & R Direct SAN Backups

One thing that I had problems with the first time I installed Veeam was the ability to back up virtual machines directly from the SAN, meaning that instead of proxying the data through an ESXi host, the data would flow from the SAN to the backup server directly. The benefits of this process are very clear… reduced CPU and network load on the ever so valuable ESXi resources.

The problem is that by default this just doesn't work with Veeam if you haven't properly set up your backup server. I will try to keep this process simple and vendor agnostic (from a SAN point of view).

The first step to making the vStorage API "SAN backup" work is to make sure your backup server has the Microsoft iSCSI initiator installed. It is already installed by default on Windows Server 2008; however, for Windows Server 2003 you will need to go to Microsoft to download the latest version.

You will need to configure your SAN to allow the IQN address of the iSCSI initiator to have access to the volumes on the SAN… this process is different for each vendor. The initiator's IQN can be found on the Configuration tab of the iSCSI Initiator control panel.

After installing the MS iSCSI initiator and setting up your SAN, we need to configure it to see the SAN volumes; do this by opening the "iSCSI Initiator" option from Control Panel. At the top of the main tab there is a field where you can put your SAN's IP address; enter that now and then press Quick Connect. Shortly, a list of all of the volumes that your backup server has access to should appear; once they do, select each one and press the "Connect" button. Because the volumes are formatted as VMFS, Windows will not show them in My Computer, but if you go to Disk Management inside Computer Management you should now see that the backup server can see these volumes.

    Update: A note from the Veeam Team “One thing that we (Veeam) recommends is to disable automount on your Windows backup server. To do this open up a command prompt and enter in diskpart. Hit enter and then type “Automount disable”. This is to ensure that the Windows server doesn’t try and format the volumes at all. However, before any of this is done if you can through your SAN software, give the Veeam Backup server Read-Only access to your VMFS volumes.”
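
In practice the Veeam advice above boils down to two commands from a command prompt on the backup server:

```shell
C:\> diskpart
DISKPART> automount disable
DISKPART> exit
```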

After performing these steps go ahead and configure Veeam to use the SAN backup option, and you should notice (especially if you have separate NICs for the SAN network) that all of your data is moving through the SAN directly to the backup server without proxying through the ESXi hosts.


    diff --git a/posts/two-factor-authentication-worth-really-add-security/index.html b/posts/two-factor-authentication-worth-really-add-security/index.html index cd58b37d8..7c036b6f9 100644 --- a/posts/two-factor-authentication-worth-really-add-security/index.html +++ b/posts/two-factor-authentication-worth-really-add-security/index.html @@ -1 +1 @@ - Two-Factor Authentication: is it worth it, does it really add more security? | TotalDebug
Two-Factor Authentication: is it worth it, does it really add more security?

As we all move to a digital age, adding more and more personal information to the internet, security has become a real issue. In recent years there have been hack attempts on well-known brands, including LastPass, LinkedIn, Twitter and Adobe.

This has cast a light on the problems that passwords bring and how vulnerable users are as a result. Most of these companies are now implementing Two-Factor authentication, but is it really as secure as we are led to believe? What are its pitfalls?

In this article I'm going to go through some of the pros and cons relating to Two-Factor authentication (or 2FA).

    What is 2FA?

Simply put, Two-Factor authentication / multi-factor authentication is the ability to employ multiple layers of authentication; in most cases this would be your password plus a token that expires after a short period of time.

    Other types of authentication could include but are not limited to:

• Fingerprint recognition
• Retinal scanners
• Face recognition

    Try this example: You have a house, with a safe, inside is a gold bar. The safe has a combination on it that only you know and the house has a door that is locked, only you have the key for this door. It takes two steps of “authentication” to get into the safe and retrieve your gold.

    If you added more doors with different locks this would add more “authentication” and it would make the house harder to enter to get to the safe.

    How does it work?

There are multiple ways that 2FA tokens work; one method is time based. Both the server and the client take the current time, e.g. 15:15, turn it into a number such as 1515, and run it through an algorithm that hashes it into a multi-digit code. Because both devices use the same algorithm (and, in practice, a shared secret established when the account is set up), they generate the same code as long as their clocks match. This is obviously a very simplified explanation, but it shows how both the server and client can generate the same codes securely.

To set up 2FA, in most cases the website you are using will show a QR code that you scan into an app such as Authy or Google Authenticator. The app then displays a numeric token for a short period, typically 30 seconds, before it expires and a new code is generated. After you have entered your conventional username and password you will be prompted for your token; once it is entered you will be authenticated into your account. If you don’t submit the token before it expires, authentication fails and you need to enter the new token.

    How Secure is 2FA?

Like any security mechanism, 2FA can be hacked or compromised; however, with two layers of authentication we make it much harder for an attacker to gain access to our accounts. Most people use the same password across multiple websites; with this method, even if someone does get that password, without the 2FA token they aren’t getting into your accounts.

Not all deployments of 2FA are equally secure; this comes down to the algorithms used and any reliance on third-party servers to generate the 2FA tokens. The right type of 2FA really depends on the application and the users who will be using it. Hardware-based 2FA is much more secure than software-based, but it relies on third-party hardware.

    Conclusion

Personally, I believe that 2FA should be used wherever possible; if you have a smartphone that can run one of the 2FA applications, I see no reason to avoid it. It makes your accounts and personal information more secure and, most importantly, harder to hack!



    diff --git a/posts/type-hinting-and-checking-in-python/index.html b/posts/type-hinting-and-checking-in-python/index.html index 4110fa86e..ef98ef2ff 100644 --- a/posts/type-hinting-and-checking-in-python/index.html +++ b/posts/type-hinting-and-checking-in-python/index.html @@ -1,4 +1,4 @@ - Type hinting and checking in Python | TotalDebug

    Type hinting and checking in Python


Type hinting is a formal solution for statically indicating the type of a value within your Python code. It was specified by PEP 484 and introduced in Python 3.5.

Type hints help to structure your projects better; however, they are just hints and they don’t affect runtime behaviour.

As your code base gets larger, or you utilise unfamiliar libraries, type hints can help with debugging and stop mistakes being made when writing new code. When using an IDE such as VSCode (with extensions) or PyCharm, you will be presented with a warning each time an incorrect type is used.

    Pros and Cons

    Adding Type hints comes with some great pros:

    • Great to assist in the documentation of your code
    • Enable IDEs to provide better autocomplete functionality
    • Help discover errors during development
    • Force you to think about what type should be used and returned, enabling better design decisions.

    However, there are also some downsides to type hinting:

    • Adds development time
    • Only works with Python 3.5+. (although this shouldn’t be an issue now)
    • Can cause a minor start-up delay in code that uses it especially when using the typing module
    • Code can be harder to write, especially for complex types

When should type hinting be added?

• Large projects with multiple developers
• Design and development of libraries, where type hints will help developers who are not familiar with the library
• If you plan on writing tests, it is recommended to use type hinting

    Function Typing

    Type hints can be added to a function as follows:

    • After each parameter, add a colon and a data type
• After the function signature, add an arrow -> and the return data type

    A function with type hints should look similar to the one below:

    def add_numbers(num1: int, num2: int) -> int:
       return num1 + num2
@@ -14,4 +14,4 @@

from typing import Union

def add_numbers(num1: Union[int, float], num2: Union[int, float]) -> Union[int, float]:
  return num1 + num2

With this updated example, calling add_numbers(1.1, 1.2) works without error and the type hints no longer produce a warning.
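
As an aside, on Python 3.10 and newer the same hint can be written with the built-in union syntax instead of typing.Union; a small sketch (not from the original example) is shown below.

def add_numbers(num1: int | float, num2: int | float) -> int | float:
    return num1 + num2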

    Static Type Checking - Mypy

Mypy will run against your code and print out any type errors that are found. Mypy doesn’t need to execute the code; it simply analyses it, much the same as a linter would.

    If no type hinting is present in the code, no errors will be produced by Mypy.

Mypy can be run against a single file or an entire folder. I also utilise pre-commit hooks, which won’t allow code to be committed if there are any errors present, and I have introduced these checks with GitHub Actions to ensure any contributions to my projects follow the same requirements.
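
As a quick illustration of the kind of mistake Mypy catches, here is a small sketch; the file name and the exact wording of the error are illustrative only.

# example.py - a deliberate type error for Mypy to find
def add_numbers(num1: int, num2: int) -> int:
    return num1 + num2

result = add_numbers(1, "2")  # wrong: the second argument is a str, not an int

# Running "mypy example.py" reports something like:
#   error: Argument 2 to "add_numbers" has incompatible type "str"; expected "int"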

    Final Thoughts

Type hints are a great way to ensure your code is used in the correct manner and to reduce the risk of errors being introduced during development. Although they are not required by Python, I feel that type hints should be added to all projects, as they assist with clean code and reduce errors.

    The following resources are great for additional help with type hinting:



    diff --git a/posts/ubiquiti-unifi-usg-content-filter-configuration/index.html b/posts/ubiquiti-unifi-usg-content-filter-configuration/index.html index 784816f39..d2fa27607 100644 --- a/posts/ubiquiti-unifi-usg-content-filter-configuration/index.html +++ b/posts/ubiquiti-unifi-usg-content-filter-configuration/index.html @@ -1,4 +1,4 @@ - Ubiquiti UniFi USG Content Filter Configuration | TotalDebug

    Ubiquiti UniFi USG Content Filter Configuration


Recently I had a requirement to set up a content filter on the USG for a client. I couldn’t find much information online, so I have decided to write this article to show others how to do it.

First we need to log on to the USG via SSH; on Windows I recommend PuTTY.

    Once we have logged in, run the below command:

     
    update webproxy blacklists
     

This will download all of the content filter categories to the USG; this can take some time, as the download is approximately 100MB (70-80MB of which is the “adult” category).

When this has completed, run the following:
    @@ -60,4 +60,4 @@
                     }
             }
     }

Save this information into a file on your controller:

    • File Location: /opt/UniFi/data/sites/[site name/default]/
    • File Name: config.gateway.json

Once you have done this, whenever you make any changes to your USG the content filtering will be re-applied.

    Hopefully this article has assisted you with your configuration. Any questions please let me know.



    diff --git a/posts/understanding-resource-pools-vmware/index.html b/posts/understanding-resource-pools-vmware/index.html index da12a6baa..5a15fed21 100644 --- a/posts/understanding-resource-pools-vmware/index.html +++ b/posts/understanding-resource-pools-vmware/index.html @@ -1,3 +1,3 @@ - Understanding Resource Pools in VMware | TotalDebug

    Understanding Resource Pools in VMware


    It is my experience that resource pools are nearly a four letter word in the virtualization world. Typically I see a look of fear or confusion when I bring up the topic, or I see people using them as folders. Even with some other great resources out there that discuss this topic, a lack of education remains on how resource pools work, and what they do. In this post, I’ll give you my spin on some of the ideals behind a resource pool, and then discuss ways to properly balance resource pools by hand and with the help of some PowerShell scripts I have created for you.

    What is a Resource Pool?

A VMware resource pool is a way of guaranteeing, or providing a higher priority for, a VM’s CPU and/or memory; the priority set on the pool is then split equally between the individual VMs in that pool.

    Who Needs Resource Pools?

    You can’t make a resource pool on a cluster unless you have DRS running. So, if your license level excludes DRS, you can’t use resource pools. If you are graced with the awesomeness of DRS, you may need a resource pool if you want to give different types of workloads different priorities for two scenarios:

    • For when memory and CPU resources become constrained on the cluster.
    • For when a workload needs a dedicated amount of resources at all times.

    Now, this isn’t to say that a resource pool is the only way to accomplish these things – you can use per VM shares and reservations. But, these values sometimes reset when a VM vMotions to another host, and frankly it’s a bit of an administrative nightmare to manage resource settings on the VMs individually.

    I personally like resource pools and use them often in a mixed workload environment. If you don’t have the luxury of a dedicated management cluster, resource pools are an easy way to dedicate resources to your vCenter, VUM, DB, and other “virtual infrastructure management” (VIM) VMs.

    Why People Fear Resource Pools

People fear resource pools because they are mysterious. OK, maybe not that mysterious, but they are a bit awkward at first. One common misuse I see quite a lot is treating them as folders to sort VMs, rather than as a performance control. They are also easy to misunderstand, and thus misuse.

    Where Did I Get The Numbers?

    Let’s start with the resource pools. You’ll notice 3 points for each pool – the shares (high, normal or low), the amount of shares for RAM, and the amount of shares for CPU. Here is the math (supporting document):

• RAM is calculated like this: [Cluster RAM in MB] * [20 for High, 10 for Normal, 5 for Low]
• Our cluster has 100 GB of RAM, so the math is: 102,400 MB of RAM * 20 = 2,048,000 for High and 102,400 MB of RAM * 5 = 512,000 for Low
• CPU is calculated like this: [Cluster CPU Cores] * [2,000 for High, 1,000 for Normal, 500 for Low]
• Our cluster has 100 CPU cores, so the math is: 100 * 2,000 = 200,000 for High and 100 * 500 = 50,000 for Low

Based on this, the Production resource pool has roughly 80% of the shares. However, when you divide a pool’s shares by the number of VMs that live in it, you start to see the problem: at a per-VM level, Test ends up with more than twice the entitlement that Production has.
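
As a rough illustration of that division (the VM counts below are hypothetical, since the original graphic is not reproduced here), the per-VM entitlement is simply the pool’s shares divided by the number of VMs in the pool:

# Hypothetical counts: 80 VMs in Production, 5 VMs in Test
production_per_vm = 2_048_000 / 80      # 25,600 RAM shares per Production VM
test_per_vm = 512_000 / 5               # 102,400 RAM shares per Test VM
print(test_per_vm / production_per_vm)  # 4.0 - each Test VM carries four times the weight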

    This script will calculate the Per VM resource allocation for you:

    Get-ResourcePoolSharesReport

The script has many options and will calculate what the share value should be when you use the -RecommendedShares parameter.

    Maintaining the Balance

So now you are thinking: oh no! My resource pools are totally wrong and this could be causing all my performance issues. So how do you keep the balance?

The trick to keeping your resource pools balanced is to work it out backwards and never, ever use the default High, Normal and Low share values on the pools themselves. Decide the weight of your per-VM shares first. Let’s say that I want my Test VMs to receive a much lower share weight than my Production VMs. Shares are an arbitrary value that just determines weight; they aren’t a magic number, so you could create your own values, but I prefer to stick with the VMware per-VM defaults so you know where you stand. So, let’s give each Test VM the Low weighting (500 shares per vCPU and 5 shares per MB of RAM) and each Production VM the High weighting (2,000 shares per vCPU and 20 shares per MB of RAM). I would then size the resource pools using these calculations:

[Total amount of VM RAM in Pool] * [shares] = [Required RAM Shares]
[Total amount of VM vCPU in Pool] * [shares] = [Required CPU Shares]

    I would recommend having all virtual machines in a resource pool to avoid any issues with balancing your load. If you don’t want to do that then make sure you set your custom shares according to the VMware standards.

Our resource pools: Production would get 90,000 * 20 = 1,800,000 shares of RAM and 90 * 2,000 = 180,000 shares of CPU; Test would get 10,000 * 5 = 50,000 shares of RAM and 10 * 500 = 5,000 shares of CPU.
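
To make the arithmetic concrete, here is a short Python sketch of the same backwards calculation. The weights are VMware’s default High and Low values per vCPU and per MB of RAM, and the pool sizes are the example numbers from above rather than values read from vCenter.

WEIGHTS = {"high": {"cpu": 2000, "ram_mb": 20}, "low": {"cpu": 500, "ram_mb": 5}}

def pool_shares(total_vm_ram_mb: int, total_vm_vcpus: int, weight: str) -> dict:
    """Work the pool share values out backwards from the VMs inside the pool."""
    w = WEIGHTS[weight]
    return {
        "ram_shares": total_vm_ram_mb * w["ram_mb"],
        "cpu_shares": total_vm_vcpus * w["cpu"],
    }

print(pool_shares(90_000, 90, "high"))  # {'ram_shares': 1800000, 'cpu_shares': 180000}
print(pool_shares(10_000, 10, "low"))   # {'ram_shares': 50000, 'cpu_shares': 5000}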

Much easier, right? Note: if the number of VMs in the resource pool changes, you’ll need to update the resource pool share values to reflect the added VMs. Your options are to manually update the pool whenever the number of VMs inside changes (no fun) or to use PowerCLI!

    Using PowerCLI to Balance Resource Pool Shares

    Now let’s do some coding. This very basic script will connect to the vCenter server and cluster specified and look at the resource pools within. It then reports on the number of VMs contained within and offers to adjust the shares value based on an input you provide. It confirms before making any changes:

    Set-ResourcePoolShares

    Script Usage:

     
    .\Set-ResourcePoolShares.ps1 -vcenter "vCenter.domain.com" -cluster "your-cluster"

    I am also in the process of writing some more resource pool scripts that will email a report should you have any pools not at the correct resource levels.

    You can find all of my various scripts in my GitHub repository

    Final Thoughts

I hope this has helped you understand when to use resource pools and has cleared up some of the confusion around them. It is a big chunk of information to swallow in one bite, and I’m sure there are a lot of other opinions floating around out there that won’t agree with mine; I’m OK with that. One thing that would be a great feature is the ability to set per-VM shares on the resource pool and let the pool automatically adjust as its membership changes.

    Any comments and views are appreciated so please share.



    diff --git a/posts/unifi-l2tp-set-a-static-ip-for-a-specific-user-built-in-radius-server/index.html b/posts/unifi-l2tp-set-a-static-ip-for-a-specific-user-built-in-radius-server/index.html index 566e87dc8..f88620662 100644 --- a/posts/unifi-l2tp-set-a-static-ip-for-a-specific-user-built-in-radius-server/index.html +++ b/posts/unifi-l2tp-set-a-static-ip-for-a-specific-user-built-in-radius-server/index.html @@ -1,4 +1,4 @@ - UniFi L2TP: set a static IP for a specific user (built-in Radius Server) | TotalDebug

    UniFi L2TP: set a static IP for a specific user (built-in Radius Server)


When using my L2TP VPN with UniFi, I realised that it was sometimes assigning a different IP address to my client when it connected.

This wouldn’t normally be a problem if the remote client was only talking to my internal network; however, I run a server that my internal network communicates out to via IP address, so if this changes it all stops working.

This article walks through how to set up a static IP address for an L2TP client.

First we need to get a dump of our configuration from the USG; to do this, SSH to the USG and run a dump:

     
    mca-ctrl -t dump-cfg
     

    Once we have this I recommend copying it into your favourite text editor. We want to delete everything except the following:

    @@ -52,4 +52,4 @@
     
    /opt/UniFi/data/sites/default/
     

Once in this directory, create a new file called config.gateway.json and paste the above configuration into it.

    To test the new configuration file you can run this command:

     
    python -m json.tool config.gateway.json

    You shouldn’t see any errors if this is correct.

We can now re-provision the USG, which will pick up the configuration from the Controller and update the VPN settings.



    diff --git a/posts/upgrading-a-cisco-catalyst-3560-switch/index.html b/posts/upgrading-a-cisco-catalyst-3560-switch/index.html index 1efbc4472..c2e6da9d7 100644 --- a/posts/upgrading-a-cisco-catalyst-3560-switch/index.html +++ b/posts/upgrading-a-cisco-catalyst-3560-switch/index.html @@ -1,4 +1,4 @@ - Upgrading a Cisco Catalyst 3560 Switch | TotalDebug

    Upgrading a Cisco Catalyst 3560 Switch


    Here are my notes on upgrading a Catalyst 3560. I plugged in a laptop to the serial console and an ethernet cable into port 1 (technically interface Gigabit Ethernet 0/1). Here is the official Cisco documentation I followed. It’s for the 3550, but the Cisco support engineer said that it’s close enough.

    First Hurdle: VLAN Mismatch Error

I quickly got a bunch of errors stating “Native VLAN Mismatch: discovered on Gigabit Ethernet 0/1.” The far end of the link to the new switch is on VLAN 1, so to fix this error I moved port 1 from VLAN 3 to VLAN 1. These are the commands I ran.

    @@ -84,4 +84,4 @@
     switch#reload
     

    Upon reboot:

     
    switch#show ver

    5. Drank a celebratory drink. Coffee of course, because I was still at work.



    diff --git a/posts/use-git-like-a-pro/index.html b/posts/use-git-like-a-pro/index.html index 92b87eb78..a62952a0a 100644 --- a/posts/use-git-like-a-pro/index.html +++ b/posts/use-git-like-a-pro/index.html @@ -1,4 +1,4 @@ - Use Git like a pro! | TotalDebug

    Use Git like a pro!


    Over the past few months I have been using Git & GitHub more frequently, both in my professional and personal work, with this came many questions about what the “correct” way is to use Git.

There are obviously many ways to create workflows using Git; below is the way that I have started to manage my workflow. This is likely to change over time, as it is only my first workflow, but it is a start!

    What to solve?

    There are many things that I didn’t like about the way I used Git in the past and so these are some of the issues I am aiming to solve:

    • Versioning
    • Standardised git commit messages
    • How best to utilise Branches
    • When should Pull Requests be used
    • How can the workflow be Automated

    Why solve them?

Well, this is quite straightforward: to improve the readability of my Git repos, especially in open source projects, but also to keep my mind clear and organised.

    How were these issues solved?

Below I have split out each area, explaining how I solved the issues I was experiencing.

    Versioning

Versioning was something that I never thought about; I incremented versions whenever I wanted to, based on what I thought was right.

Then I started writing code professionally and was introduced to the Semantic Versioning specification.

    This made much more sense by adding a relationship between each different increment.

A version number is MAJOR.MINOR.PATCH, incremented as below:

• MAJOR version when changes are made that would break previous functionality.
    • MINOR version when functionality is added in a backwards compatible manner.
    • PATCH version where you make backwards compatible bug fixes.

By using this method, people are able to easily identify what type of change has been implemented and whether it is likely to break their current project.
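
As a small illustration of the idea, here is a sketch of a helper that bumps a MAJOR.MINOR.PATCH string; the function name and behaviour are mine for illustration, not part of the specification itself.

def bump(version: str, change: str) -> str:
    """Increment a MAJOR.MINOR.PATCH version string for a given change type."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":    # breaking change
        return f"{major + 1}.0.0"
    if change == "minor":    # backwards compatible feature
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # backwards compatible bug fix

print(bump("1.4.2", "minor"))  # -> 1.5.0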

    Conventional Commits

My commit records were… well… a total mess. Looking at other repos this is quite common, and not many projects follow a standard. I was looking for a better way to write commit messages that just make sense and are easy to read; in my research I found a standard called Conventional Commits.

Conventional Commits is a specification for adding human- and machine-readable meaning to commit messages; this allows changelogs to be created through automation and makes it easier for a human to tell what has changed!

The specification is really simple, so it doesn’t take much to get your head around:

    @@ -16,4 +16,4 @@
     
    <issue number>-<short_description>
     

    Example:

     
    311-softLimit

By doing this I am able to quickly link a branch to a specific issue in the project. Branches also enable me to make multiple commits in smaller increments, which I then merge into master using Pull Requests.

    Pull Requests

I now utilise Pull Requests to move my branch into master. The pull request runs various checks using GitHub Actions, depending on the project type. These would be things like:

    • Version check: confirm that the version in the project files has been incremented since the last release
    • Tests: Check that the code functions as expected
    • Linting: Check that the code still adheres to the relevant standards

With all of my repos I only enable “Allow squash merging”; this allows me to create one good commit message that covers the issues fixed by the specific branch being merged, rather than all the commits from the development lifecycle (keeping my master commits clean).

    Version Tags

    Once I have completed all of the pull requests for a specific release I will then add a version tag to the master.

    This version tag creates a point in time reference along with triggering my release automation once it is pushed.

    Automated Workflow

In order to streamline my delivery to release, I have started to utilise GitHub Actions; this gives me endless automation capabilities.

    Currently I utilise Actions for the following:

    • Linting
    • Tests
    • Version Checks
    • ChangeLog Generation
    • Release creation
• Push to external artifact repositories (e.g. Docker Hub, Ansible Galaxy etc.)

The changelog and release process is something that I have only just started automating. I was manually writing out my changelog for every new release, which was time consuming and required a lot of manual back and forth to confirm what had changed; not an issue whilst a project is small, but as it grows that would quickly become out of control.

    Final Thoughts

I believe that, for the work I am doing at this time, this is the best workflow for me. If you have any thoughts on ways it could be further improved, please let me know over on my Discord.



    diff --git a/posts/use-github-pages-with-unsupported-plugins/index.html b/posts/use-github-pages-with-unsupported-plugins/index.html index dd8011584..ecf444fe6 100644 --- a/posts/use-github-pages-with-unsupported-plugins/index.html +++ b/posts/use-github-pages-with-unsupported-plugins/index.html @@ -1,4 +1,4 @@ - Use GitHub pages with unsupported plugins | TotalDebug

    Use GitHub pages with unsupported plugins


I have recently migrated my website over to GitHub Pages; however, in doing so I have found that there are some limitations, the main one being that not all Jekyll plugins are supported.

Due to this I needed to find a workaround, which I wanted to share with you all.

    Advantages of this method

    Control over gemset

    • Jekyll Version - Instead of using the version forced upon you by GitHub, you can use any version you want
    • Plugins - You can use any Jekyll plugins irrespective of them being supported by GitHub

    Workflow Management

    • Customization - By using GitHub Actions, you are able to customize the build steps however you need them
    • Logging - The build log is visible and can be adjusted, so it is much easier to debug errors

    Setting up the GitHub Action

GitHub Actions are created by adding a YAML file in the .github/workflows directory. Here we will create our action using the Jekyll Action from the Marketplace.

    Create a workflow file github-pages.yml, then add the below information:

    @@ -30,4 +30,4 @@
           - uses: helaili/jekyll-action@2.0.1
             env:
               JEKYLL_PAT: $

    This workflow is doing the following:

    • We trigger on.push to master, or by a manual dispatch workflow_dispatch
    • The checkout action clones your repository.
    • Our action is specified along with the required version helaili/jekyll-action@2.0.1
• We set an environment variable for the action to use, JEKYLL_PAT, a Personal Access Token

    Providing permissions

    The action needs permissions to push the Jekyll data to your gh-pages branch (this will be created if it doesn’t exist)

    In order to do this, you must create a GitHub Personal Access Token on your GitHub profile, then set this as an environment variable using Secrets.

    1. On your GitHub profile, under Developer Settings, go to the Personal Access Tokens section.
    2. Create a token. Give it a name like “GitHub Actions” and ensure it has permissions to public_repos (or the entire repo scope for private repository) — necessary for the action to commit to the gh-pages branch.
    3. Copy the token value.
    4. Go to your repository’s Settings and then the Secrets tab.
5. Create a secret named JEKYLL_PAT (important) and paste your token in as the value

    Deployment

    On pushing changes onto master the action will be triggered and the build will start.

    You can watch the progress by looking at the actions that are currently running via your repository

    If all goes well you should see a green build status on the gh-pages branch.

If this is a new repository you will also need to set up Pages to use the new gh-pages branch instead of master; this can be found in the repository settings.



    diff --git a/posts/use-google-authenticator-ssh/index.html b/posts/use-google-authenticator-ssh/index.html index e75ffd8ee..86e64f379 100644 --- a/posts/use-google-authenticator-ssh/index.html +++ b/posts/use-google-authenticator-ssh/index.html @@ -1,4 +1,4 @@ - Use Google Authenticator for 2FA with SSH | TotalDebug

    Use Google Authenticator for 2FA with SSH


By default, SSH uses password authentication, and most SSH hardening guides recommend using SSH keys instead. However, SSH keys still only provide a single factor of authentication, even though they are much more secure. Just as someone can guess a password or obtain it from another source, they can also steal your private SSH key and then access all of the data that key has access to.

In this guide, we will set up Two-Factor Authentication (2FA), meaning that more than one factor is required to authenticate or log in. Any attacker would then need to compromise multiple devices, such as your computer and your phone, to gain access.

    Prerequisites

    To follow this tutorial, you will need:

    • One CentOS 8 or Ubuntu server with a sudo non-root user and SSH key
    • A phone or tablet with an OATH-TOTP app, like Authy or Google Authenticator

    Install chrony to synchronize the system clock

This step is very important: because of the way 2FA works, the time must be accurate on the server. Run the following commands to install and set up chrony:

    @@ -106,4 +106,4 @@
     - : ALL : ALL
     

    Local login attempts from 10.0.0.0/24 will not require two-factor authentication, while all others do. Now we need to edit the ssh daemon configuration file.

    Please keep in mind that this could add a security risk if not locked down sufficiently

    Restart the SSH daemon:

     
    systemctl restart sshd

    Final Thoughts

This how-to guide has taken you through adding 2FA using Google Authenticator with your computer and your phone, making your system considerably more secure. A brute-force attack via SSH is now much more difficult.




    Use Python pandas NOW for your big datasets


    Over the past few years I have been working on processing large analytical data sets requiring various manipulations to produce statistics for analysis and business improvement.

I quickly found that processing data of this size was slow, with some runs taking over 11 hours, and this would only get worse as the data grew.

Most of the processing required multiple nested for loops and adding columns to JSON-formatted data; this was computationally expensive, and multi-threaded processing wouldn’t help in these scenarios.

I knew there had to be a faster way to process this data, and so I looked into using pandas.

    What is pandas?

    pandas is a software library written for the Python programming language for data manipulation and analysis. In particular, it offers data structures and operations for manipulating numerical tables and time series.

    Test results

I ran some testing on 100 rows of data, once using for loops and once using pandas. With for loops the test took 19.09s to complete; with pandas, an impressive 1.21s, an improvement of 17.88s. When I run this on the full dataset, which currently sits at around 16,500 rows, it takes 33.15 seconds, an impressive improvement over a full run with for loops (which I had to cancel after 3 hours, as it took too long for my requirements).
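To illustrate the kind of change involved, here is a minimal Python sketch of replacing a row-by-row loop with a single vectorised left merge; the column names and data are hypothetical, not the real dataset:

import pandas as pd

# Hypothetical example data, not the real dataset.
orders = pd.DataFrame({"customer_id": [1, 2, 3], "total": [10.0, 20.0, 15.0]})
customers = pd.DataFrame({"customer_id": [1, 2], "region": ["UK", "US"]})

# Loop approach: look up each customer's region one row at a time.
regions = []
for _, row in orders.iterrows():
    match = customers.loc[customers["customer_id"] == row["customer_id"], "region"]
    regions.append(match.iloc[0] if not match.empty else None)
orders["region"] = regions

# pandas approach: one left merge does the same lookup for every row at once.
merged = orders.drop(columns=["region"]).merge(customers, on="customer_id", how="left")
print(merged)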

    Pandas first steps

    Install and import

    Pandas is an easy package to install. Open up your terminal program (for Mac users) or command line (for PC users) and install it using either of the following commands:


poetry add pandas
     

    OR

    pip install pandas
             how="left",
         )
     

    Final Thoughts

There is much more that can be done with pandas and DataFrames; this just scratches the surface and gives a very basic overview. The main reason for writing this article is to show what a difference pandas makes to performance; if you aren’t using it for your data yet, I recommend that you do!

    This post is licensed under CC BY 4.0 by the author.


    Automating deployments using Terraform with Proxmox and ansible


    Using CloneZilla to migrate multiple disk server


    Overview

I recently decided to migrate all of my home servers from VMware ESXi to Proxmox; many factors were at play, but the main one was that newer versions of ESXi don’t support my hardware.

For a normal migration I would just use CloneZilla’s remote-source to remote-dest feature. However, I could only get this to work for a single source disk, which is fine for the majority of my servers, but some of them have multiple disks, and that became an issue.

    What was the problem?

At its core, CloneZilla is designed to clone a single disk to multiple other disks, and you can do this in many different ways. However, if you have a machine with multiple disks, it is not possible to do this in the traditional way that most online tutorials show you.

    I really struggled to find any information on this subject, and most of my research turned up how to clone a single disk to multiple disks rather than how to clone multiple disks to multiple disks!

It’s easy to see how this could be difficult for CloneZilla: how would it know which two destination disks to clone the source data to? Without some form of GUI where you pair up all the disks, it would be difficult.

    The solution

In order to overcome this issue, I created a CloneZilla image, which was saved onto an NFS share. Once complete, I was able to load the image on the destination machine; as there were only two disks in the destination server, the image was applied without any issue, and on boot I could see that both disks had been cloned over from the image.

The only thing I didn’t like about this is that I had to first create the image and then deploy it. When I only have one server to clone and no need for an image, it would be nice for CloneZilla to implement something in remote-source / remote-dest that allows this functionality.

    Final Thoughts

CloneZilla is an excellent tool for performing these migrations; it’s very easy to use and clones the images quite quickly. In my opinion it is much easier than the other solutions provided on the Proxmox website; in fact, other methods using the OVF Tool never worked for me (there are also lots of reports of other users having the same issues), which is why I ended up going with CloneZilla.

    If you have had any experience with Proxmox migrations using CloneZilla or have a trick that makes the OVF Tool migrations work please let me know over on my Discord.

    This post is licensed under CC BY 4.0 by the author.




    vCenter 6.0 VCSA Deployment


This article covers the deployment of the vCenter 6.0 VCSA; you will see that this process is radically different from previous versions.

1. Download VCSA 6.0 from the VMware website.
2. Mount the ISO on your computer.
3. Go to the VCSA folder and install the VMware Client Integration Plugin.
4. Launch vcsa-setup.html from the ISO.
5. You will be prompted to Install or Upgrade; choose Install.
6. Accept the terms and click Next.
7. Enter the FQDN / IP and user details of the target ESXi host to connect to.
8. Wait for validation, then click Yes on the certificate warning.
9. Enter the appliance name and root password. VCSA Root Password
10. Select the install type. There are now two choices: you can deploy the appliance as one virtual machine or two; when deploying as two virtual machines, one is the Platform Services Controller and the second is vCenter Server. Install Type

11. Select the SSO type. You have the choice of setting up a new SSO domain or joining an existing one if you already have one in place. sso_type

12. Select the size of the appliance; this ranges from Tiny (10 hosts, 100 VMs) to Large (1,000 hosts and 10,000 VMs). Appliance Size

13. Select the datastore you would like vCenter to reside on, and tick “Enable Thin Disk Mode” if you want the appliance to be thin provisioned. Select Datastore
14. Select the database type: either an embedded database or an Oracle database. Select Database Type
15. Fill in the network settings as required, choosing the correct network / IP addressing for your network. Network Settings
16. vCenter will now begin to deploy. Deploying vCenter

You should now have a fully working vCenter Server Appliance 6.0. This install process is much improved from previous versions and makes it much easier for basic users to get the appliance deployed.

    This post is licensed under CC BY 4.0 by the author.




    vCloud Director 8.0 for Service Providers


As most of you will now be aware, VMware decided to end availability of vCloud Director and shift to allowing only service providers to utilise the product.

Originally the idea was that organisations would use vCloud Director for test environments, but as the “cloud” becomes cheaper and companies move their hosting out to 3rd-party providers, it makes sense for VMware to push consumers towards hosted platforms for cheaper billing and better support.

    With the release of the vRealize product suite we see the new Automation product that allows users to automate deployments on hosted vCloud platforms which is a great step forwards.

    So what’s new in vCloud Director 8.0?

    1. vSphere 6.0 Support: Support for vSphere 6.0 in backward compatibility mode.
    2. NSX support: Support for NSX 6.1.4 in backward compatibility mode. This means that tenants’ consumption capability is unchanged and remains at the vCloud Networking and Security feature level of vCloud Director 5.6.
    3. Organization virtual data center templates: Allows system administrators to create organization virtual data center templates, including resource delegation, that organization users can deploy to create new organization virtual data centers.
    4. vApp enhancements: Enhancements to vApp functionality, including the ability to reconfigure virtual machines within a vApp, and network connectivity and virtual machine capability during vApp instantiation.
    5. OAuth support for identity sources: Support for OAuth2 tokens.
    6. Tenant throttling: This prevents a single tenant from consuming all of the resources for a single instance of vCloud director. Ensuring fairness of execution and scheduling among tenants.

So not much has changed, even though the version number has jumped quite dramatically. One thing that I will be interested in seeing is whether the NSX support adds much more functionality, and what the upgrade paths are from vCNS to NSX for existing providers.

    This post is licensed under CC BY 4.0 by the author.




    vCloud Director 8.10 – Renew SSL Certificates


Today I had to renew the SSL certificates for a vCloud Director 8.10 cell, as they had expired.

I could not find a working guide explaining the steps, so this post covers everything required to replace expiring / expired certificates with new ones.

    First Cell Steps

First, let’s check that the cell doesn’t have any running jobs:

/opt/vmware/vcloud-director/bin/cell-management-tool -u <AdminUser> cell --status
     

    You will be prompted for your administrator account password.

    Once you have done this you should see the following output:

     
./cell-management-tool -u <AdminUser> cell --shutdown
     
    service vmware-vcd start
    This post is licensed under CC BY 4.0 by the author.



    vCloud Director and vCenter Proxy Service Failure


    Over the past couple of weeks I have spent some time working with VMware vCloud Director 5.1. I will also be producing multiple other guides for vCloud Director as I use it more over the coming months.

One issue that we have hit a few times was that a vCD cell stopped working properly (in a multi-cell environment). I could log into the vCD provider and organization portals, but the deployment of vApps would run for an abnormally long time and then fail after 20 minutes.

The first thing I tried in order to resolve this issue was reconnecting vCenter to vCloud; in the past this has been the solution to this type of problem. However, I noticed two problems:

    Problem #1: Performing a Reconnect on the vCenter Server object resulted in Error performing operation and Unable to find the cell running this listener.

    Problem #2: None of the cells have a vCenter proxy service running on the cell server.

I then stumbled upon some SQL queries that I wasn’t too sure about, so I passed these over to VMware, and they confirmed that this is the correct action to take and that it is non-destructive. The steps below take you through resolving this issue:

    1. Stop all your Cells
    service vmware-vcd stop
     
2. Backup the entire vCloud SQL database. This is just a precaution.
3. Run the below query in SQL Management Studio:
     go
     
4. Start one of your cells and verify that the issue is resolved.
    service vmware-vcd start
5. Start the remaining cells.

The script should run successfully, wiping out all rows in each of the named tables.

    I was now able to restart the vCD cell and my problems were gone. Everything was working again. All errors have vanished.

These [vCenter Proxy Service] issues are usually caused by a disconnect from the database, causing the tables to become stale. vCD constantly needs the ability to write to the database, and when it cannot, the cell ends up in a state similar to the one that you have seen. The qrtz tables contain information that controls the coordinator service and lets it know when the coordinator needs to be dropped and restarted, for cell-to-cell failover to another cell in a multi-cell environment. When the tables are purged, it forces the cell on start-up to recheck its status and start the coordinator service. In your situation, corrupt records in the table were preventing this from happening, so clearing them forced the cell to recheck and restart the coordinator.

    This post is licensed under CC BY 4.0 by the author.




    How to view which Virtual Machines have Snapshots in VMware


This is a question that I have been asked quite a lot recently. I have found multiple ways to do this, but the two below are the ones that I have used and find the most suitable.

    1. Using vSphere Client
      1. In vCenter go to: Home > Inventory > Datastores and Datastore Clusters
      2. Select your cluster in the left panel
      3. Choose “Storage Views” tab in the right pane.
      4. Sort by “Snapshot Space”
  5. Anything with more than 0.00 B has a snapshot present.
    2. Using Power CLI
      1. Connect to vCenter with PowerCLI
  2. Run this command (a variant that also shows snapshot size and creation date is sketched below): Get-VM | Get-Snapshot | Format-List VM, Name
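A hedged PowerCLI sketch of that variant, using only the standard snapshot cmdlets and properties:

# List every snapshot with its parent VM, name, creation date and size in GB.
Get-VM | Get-Snapshot | Select-Object VM, Name, Created, SizeGB | Format-Table -AutoSize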

    You may also be interested in this article: Email Report Virtual Machine Snapshots

    This post is licensed under CC BY 4.0 by the author.




    VMware Distributed Switches (dvSwitch)


In this article I am going to take you through what a distributed switch (dvSwitch) is and how it is used. I will also talk about why backing them up is so important, then show you how to back up by hand and with the help of some PowerShell scripts I have created for you.

    What is a distributed switch?

A distributed switch (dvSwitch) is very similar to a standard vSwitch; the main difference is that the switch is managed by vCenter instead of by the individual ESXi hosts. The ESXi/ESX 4.x and ESXi 5.x hosts that belong to a dvSwitch do not need further configuration to be compliant.

Distributed switches provide similar functionality to vSwitches. A dvPortgroup is a set of dvPorts, the dvSwitch equivalent of a portgroup, which is a set of ports in a vSwitch. Configuration is inherited from dvSwitch to dvPortgroup, just as from vSwitch to portgroup.

    Virtual machines, Service Console interfaces (vswif), and VMKernel interfaces can be connected to dvPortgroups just as they could be connected to portgroups in vSwitches.

    This means that if you have 100 ESXi Hosts you only need to configure the PortGroups once and then add the ESXi Hosts to the dvSwitch rather than configuring the networking individually on each host.
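As a rough illustration of that workflow, here is a hedged PowerCLI sketch using the standard VDS cmdlets; the switch, datacenter and host names are hypothetical:

# Create the dvSwitch once at the datacenter level (names are hypothetical).
New-VDSwitch -Name "dvSwitch01" -Location (Get-Datacenter -Name "DC01")

# Add a host, then create a port group that every attached host will see.
Get-VDSwitch -Name "dvSwitch01" | Add-VDSwitchVMHost -VMHost (Get-VMHost -Name "esx01.lab.local")
Get-VDSwitch -Name "dvSwitch01" | New-VDPortgroup -Name "VM-VLAN100" -VlanId 100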

    How Do You Use a dvSwitch?

Below I have created an example of a two-host cluster using a dvSwitch. The dvSwitch is first configured on vCenter and then hosts are added to the dvSwitch. Adding a host to a dvSwitch will then push the network configuration to the host.

Once a host is added to the dvSwitch, you only need to assign the VMKs and IP addresses for it to begin functioning correctly. If you have migrated from a vSwitch, you can migrate the VMKs across, saving additional configuration.

    vdSwitch diagram

As you can see from the image, there are a few differences from a standard switch. You now have “dvUplinks”; these are virtual vmnics for the physical network cards that are associated with the same service. For example, management on host A could be vmnic0 whereas on host B it could be vmnic8; without dvUplinks we would not be able to assign the same service to different vmnics on each host.

After you get your head around dvUplinks, everything else falls into place; the rest of the dvSwitch is the same as a standard switch (other than the extra features).

VMKs are host specific due to the requirement for an IP address; these cannot be allocated on a pool basis, which is a shame. You have to manually add VMKs by going to the host network configuration, selecting vSphere Distributed Switch and then selecting Manage Virtual Adapters; this will then allow you to add / remove / migrate VMKs to and from specific port groups.

    Pros & Cons

    There are only a few pros and cons to distributed switches, I have listed all the ones I am aware of below: (if you know any more please leave a comment!)

    Pros

    • Private VLAN’s
    • Netflow – ability for NetFlow collectors to collect data from the dvSwitch to determine what network device is talking and what protocols they are using
    • SPAN and LLDP – allows for port mirroring and traffic analysis of network traffic using protocol analyzers
    • Easy to add a new host
    • Easy to add a new port group to all hosts
    • Load Based Teaming, Load Balancing without the IP Hash worry.

    Cons

    • If vCenter fails there is no way to manage your dvSwitch
    • Requires an Enterprise Plus License

    Different Features

    These features are available with both types of virtual switches:

    • Can forward L2 frames
    • Can segment traffic into VLANs
    • Can use and understand 802.1q VLAN encapsulation
    • Can have more than one uplink (NIC Teaming)
    • Can have traffic shaping for the outbound (TX) traffic

    These features are available only with a Distributed Switch:

    • Can shape inbound (RX) traffic
    • Has a central unified management interface through vCenter Server
    • Supports Private VLANs (PVLANs)
    • Provides potential customization of Data and Control Planes

    vSphere 5.x provides these improvements to Distributed Switch functionality:

    • Increased visibility of inter-virtual machine traffic through Netflow
    • Improved monitoring through port mirroring (dvMirror)
    • Support for LLDP (Link Layer Discovery Protocol), a vendor-neutral protocol.
    • The enhanced link aggregation feature provides choice in hashing algorithms and also increases the limit on number of link aggregation groups
    • Additional port security is enabled through traffic filtering support.
    • Improved single-root I/O virtualization (SR-IOV) support and 40GB NIC support.

    Automated dvSwitch Backup Script

    Below is a script that I have written that allows automated backups of your dvSwitches.

    Get-dvSwitchBackup

I also have many other scripts available for use on my GitHub.
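If you only need a one-off manual backup rather than the script above, a hedged PowerCLI sketch using the built-in VDS export cmdlet (switch name and destination path are hypothetical) looks like this:

# Export the dvSwitch configuration, including its port groups, to a zip file.
Get-VDSwitch -Name "dvSwitch01" | Export-VDSwitch -Destination "C:\Backups\dvSwitch01.zip"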

    Final Thoughts

vSphere Distributed Virtual Switches are definitely the correct choice for companies that have the license. Is it worth buying the licensing just for dvSwitch? I wouldn’t say so, unless you require one of the specific features only dvSwitch supports. When your environment starts to grow, I would say they are vital to saving time deploying hosts and re-configuring networks. I would recommend that you only use one or the other and don’t use a hybrid configuration; in a hybrid mode you are adding more configuration for your team and also added complexity that is not required. As long as you always have a backup of your dvSwitch, you will not have any issues with loss of configuration.

    If you have anything to add please comment below, all feedback is appreciated.

    This post is licensed under CC BY 4.0 by the author.




    VMware ESXi Embedded Host Client Installation – Updated


In this article I will be showing you the new ESXi Embedded Host Client. This has been long awaited by many users of the free ESXi host and allows much better management of the host.

    Check out the latest version in this video:

    Installation

    The easiest way to install a VIB is to download it directly on the ESXi host.

    If your ESXi host has internet access, follow these steps:

    1. Enable SSH on your ESXi host, using DCUI or the vSphere web client.
    2. Connect to the host using an SSH Client such as putty
    3. Run the below command:
    esxcli software vib install -v http://download3.vmware.com/software/vmw-tools/esxui/esxui-2976804.vib
     

If the VIB installation completes successfully, you should now be able to navigate a web browser to https://<host>/ui and the login page should be displayed.

    Usage

    The login page is the same one used for vCenter Server, On logging in you will also see the menu structures follow this look and feel.

    From the interface you are able to do most of the features seen in the old VI Client. It is very responsive (compared to the vCenter versions) and seems to work well.

One feature that is a little frustrating is the inability to edit the settings of a powered-on virtual machine, so you would either need to use the command line, the old VI Client, or power off the VM.

A few things that are still “under construction” are:

• Host Management: Authentication, Certificates, Profiles, Power Management, Resource Reservation, Security, Swap, and the Host -> Manage -> Virtual Machines view
• Virtual Machine: Log Browser
• Networking
• Monitor Tasks

    Removal

    To remove the ESXi embedded host client from your ESXi host, you will need to use esxcli and have root privileges on the host.

1. Connect to the host using an SSH client such as PuTTY.
    2. Log into the host and run the following command:
    esxcli software vib remove -n esx-ui

    If you have any comments, tips or tricks, please let me know over on my Discord

    This post is licensed under CC BY 4.0 by the author.




    VMware Large Snapshot Safe Removal


    One of the great virtualization and VMware features is the ability to take snapshots of a virtual machine. The snapshot feature allows an IT administrator to make a restore point of a virtual machine, with the option to make it crash consistent. This feature is particularly useful when performing upgrades or testing, as if anything goes wrong during the process, you can quickly go back to a stable point in time (when the snapshot was taken).

Snapshots are great for quick, short-term restores, but can have devastating effects on an environment if kept long term. There are a number of reasons why snapshots should not be kept long term or used as backups; one of the main issues is I/O performance (VMware KB 1008885). A list of best practices for snapshots can be found in VMware KB 1025279. This article shows one method to remove snapshots in a way that minimizes impact.

    Noticing High I/O

As mentioned earlier, one of the disasters that can occur when leaving a snapshot active for too long is very heavy I/O. After taking a look at the virtual machine, the “Revert to Current Snapshot” option is available, so a snapshot exists.

    Before deleting the snapshot, check the size of the deltas to get an idea of how long the removal process will take. To do this select your virtual machine, right click the datastore and click browse.

    From the datastore select the folder matching your virtual machine name.

As you can see from the delta (000001.vmdk), the snapshots are large. If this were a non-critical server or a small snapshot, I would just delete it; in this example the snapshot exists on a business-critical server, so I will take the precautions below.

    Why Take Precautions

    Although snapshot removal has been substantially improved in newer versions, it is still possible in 5.1 to stun the VM and in 5.5 to fail the removal and require consolidation. For a business critical application such as Microsoft SQL / Exchange that must remain active, the snapshot removal process cannot be cancelled once it has been initiated.

One example I experienced when I had first started working with VMware: I noticed one of our IT staff had taken a snapshot on our Exchange server and had left it there for around 2 weeks. It was then decided we would remove the snapshot… big mistake! About 3 hours into the snapshot removal, our phones were ringing off the hook; our Exchange server had become unresponsive and users could no longer access their mail. For the next 3 hours VMware was removing the snapshot and no one was able to use email.

    Removing a Large Snapshot

As crazy as this may seem, to remove the large snapshot we must first create a new snapshot… yes, you did read that correctly. The reason for this is that it stops VMware writing to the old snapshot delta, allowing VMware to write it back to the main VMDK without interruption. We are then left with a much smaller new snapshot that can be easily removed.

    Uncheck the “Snapshot the Virtual machine’s memory” option and name this: Safe Snapshot Removal. By unchecking the box shown below, this will assist in removing the “Safe Snapshot” once the other snapshot is removed, as we are not expecting to restore to this snapshot it is not required.

    We now have 2 snapshots, one from the upgrade (the old large snapshot) and our new Safe Removal Snapshot.

    Next, remove the large “Upgrade” snapshot. This will roll the snapshot back into the parent and will no longer cause any downtime. Note that this can potentially cause greater I/O penalties, so calculate the risks before proceeding with this method.

    Once the Upgrade snapshot has been deleted, I verify that the Safe Removal Snapshot is fairly small. If not, repeat the process. If it is, the Safe Removal Snapshot can be deleted.
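For reference, the same create-then-remove sequence can also be scripted with PowerCLI; a hedged sketch, with the VM and snapshot names purely hypothetical:

# Take the new "safe" snapshot without memory so the old delta stops growing.
New-Snapshot -VM "EXCH01" -Name "Safe Snapshot Removal" -Memory:$false

# Remove the old, large snapshot first, then the small safety snapshot.
Get-Snapshot -VM "EXCH01" -Name "Upgrade" | Remove-Snapshot -Confirm:$false
Get-Snapshot -VM "EXCH01" -Name "Safe Snapshot Removal" | Remove-Snapshot -Confirm:$false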

    This post is licensed under CC BY 4.0 by the author.



    diff --git a/posts/vmware-transparent-page-sharing-tps/index.html b/posts/vmware-transparent-page-sharing-tps/index.html index b79acb1d0..31133304e 100644 --- a/posts/vmware-transparent-page-sharing-tps/index.html +++ b/posts/vmware-transparent-page-sharing-tps/index.html @@ -1 +1 @@ - VMware Transparent Page Sharing TPS | TotalDebug
    Home VMware Transparent Page Sharing TPS
    Post
    Cancel

    VMware Transparent Page Sharing TPS

    1437519600
    1666884241

    What is TPS?

    Transparent Page Sharing (TPS) is a host process that leverage’s Virtual Machine Monitor (VMM) component of the VMkernel to scan physical host memory to identify duplicate VM memory pages. The benefits of TPS are that it allows a host to reduce memory usage so you can allow more VMs onto a host, as memory is often one of the most constrained resources on a host. TPS is basically de-duplication for RAM and works at the 4KB block level.

    In some situations multiple virtual machines will have identical sets of memory content, TPS allows these sets to be De-duplicated thus using less overall memory on the host. As you can see from the image above, this displays a host with TPS Enabled and one with TPS Disabled. As you can see TPS uses much less memory where blocks are duplicated.

    What has changed?

VMware recently acknowledged a vulnerability in their TPS feature that could, in very specific scenarios, allow VMs to access memory pages of other VMs running on the same host. It is important to note that this vulnerability is not easily exploitable and the risk is very low, so most environments should not be impacted by it. However, VMware have been cautious and released patches that disable this feature by default in the following updates:

ESXi 5.5, Patch ESXi550-201501001
ESXi 5.1, Update 3
ESXi 5.0, Patch ESXi500-201502001

All versions of vSphere are vulnerable to the exploit, but VMware is only patching the 5.x versions as the 4.x versions are no longer supported. These patches only disable TPS, which was previously enabled by default; they do not fix the vulnerability. VMware states in the KB article that administrators may revert to the previous behaviour if they so wish.

The benefits that TPS provides will vary in each environment depending on VM workloads, so if you need to be PCI compliant or are concerned about security you will probably want to leave TPS disabled. You can view the effectiveness of TPS in vCenter by looking at the shared and sharedcommon memory counters to see how much it is benefiting you.
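If you do decide to revert, the change is made through the Mem.ShareForceSalting advanced setting that these patches introduce. The commands below are a rough sketch from the ESXi shell based on the documented behaviour of that setting; check VMware's KB for your exact build before changing it.

# Show the current salting behaviour (2 = inter-VM sharing disabled, the new default after the patches)
esxcli system settings advanced list -o /Mem/ShareForceSalting

# Revert to the previous behaviour and allow pages to be shared across all VMs - only if you accept the risk
esxcli system settings advanced set -o /Mem/ShareForceSalting -i 0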


    Warning: Cannot modify header information – headers already sent by…


OK, so today I was doing some PHP coding and hit the dreaded header error, which caused me a bit of a headache as I needed to redirect some pages. After a bit of searching I managed to find an alternative to using:

     
header("Location: index.php");
     

So to get rid of the error that this produces, simply change it to an alternative such as the JavaScript redirect below:

printf("<script>location.href='errorpage.html';</script>");

I used this approach as I found it worked best with my program; however, other workarounds may work just as well for your application.


    What is Docker? - Overview


    In this video I talk about what Docker is, how it can be used and how containerisation differs from virtualisation.

For anyone just getting into Docker, this video and my Docker series will take you through that journey.


    Your client does not support opening this list with windows explorer


When using Office 365 and SharePoint 2010 you may find that trying to open a library in Explorer results in this error:

    “Your client does not support opening this list with windows explorer”

There are a few simple things to check:

1. Use the 32-bit (x86) version of Internet Explorer, not x64.
2. Make sure the URL is in the Trusted Sites list within Internet Options, under Security.
3. If it is a Windows Server, make sure the Desktop Experience feature is installed.
4. Make sure the WebClient service is started.

    If all of these things are met then your issue should now be resolved.


    Upgrade your Linux UniFi Controller in minutes!


Ubiquiti provide a Controller package for Linux distributions other than Debian, but only the Debian build is shown on their site, so if you’re running CentOS or another Linux distribution you’ll have to use the generic controller package. The upgrade process is very simple! (I have also written this script that makes it even quicker.)

    I previously explained how to install your own UniFi Controller on CentOS in this article. Once you have it up and running, it’s even easier to upgrade to a newer version. The process takes less than 3 minutes with these steps.

This upgrade was tested going from version 5.3.11 to 5.4.11, but the process should be the same for all versions.

    UPDATE: I have also upgraded 5.4.11 to 5.5.11 with no issues

    Stop the UniFi Controller service:

    systemctl stop unifi

    Take a backup of the current unifi folder:

    cp -R /opt/UniFi/ /opt/UniFi_bak/

    Download the new version:

    cd ~ && wget http://dl.ubnt.com/unifi/5.4.11/UniFi.unix.zip

    Unzip the downloaded file into the correct directory:

    unzip -q UniFi.unix.zip -d /opt

Copy the old data back into the UniFi folder; this allows historical data to be kept:

    cp -R /opt/UniFi_bak/data/ /opt/UniFi/data/

    Restart the UniFi Controller service:

    systemctl start unifi

Wait a little while for your controller to load back up. Once it has, you can log in as normal and all of your historical data should still be visible.
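For convenience, the steps above can also be rolled into one small script. This is just a sketch that mirrors the commands in this post; adjust the version number and paths for your environment:

#!/bin/bash
# Consolidated UniFi Controller upgrade - same steps as above
VERSION=5.4.11

systemctl stop unifi
cp -R /opt/UniFi/ /opt/UniFi_bak/
cd ~ && wget http://dl.ubnt.com/unifi/${VERSION}/UniFi.unix.zip
unzip -q UniFi.unix.zip -d /opt
cp -R /opt/UniFi_bak/data/ /opt/UniFi/data/
systemctl start unifi

# Optional: confirm the service has come back up before logging in
systemctl status unifi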

That’s it, you’re done. Simple!
