Hi, I'm Steve, a Full Stack Systems Engineer from Leeds, UK.
I created this website as a place to share articles about my interests and findings from working as a Developer, Sys Admin and Solution Architect.
My interest in IT started when I was very young: my uncle worked for a large IT company and I was always fascinated by his work. I would build my own computers with him and eventually built them for friends and family too.
I started my career in IT straight from high school with an apprenticeship at Kodak, working in one of their datacenters. I then moved on to small IT resellers, and then to a small Leeds-based data center and hosting provider, where I honed my skills and continued to expand my knowledge.
Now working for one of the world's largest service providers, I spend my time on projects that ensure the business runs as efficiently as possible by implementing automation tooling and improving processes.
"The world always seems brighter when you've just made something that wasn't there before." - Neil Gaiman
Outside of work, I like to spend time on home automation and 3D printing projects and, of course, with my beautiful wife and children.
Responsible for the design, function and future of the business's products and support tooling. Working closely within the functional team and with external stakeholders to qualify and document new products, both hardware and software.
Identifying areas of improvement both within the team and the larger organization. Designing, engineering, architecting and integrating tooling to reduce total cost, support customers and minimize human error.
Role:
Identify areas of improvement for existing or new tooling
Manage approved projects and implement within assigned timelines
Provide support and guidance to other engineers
Point of escalation for support teams and Full Stack Systems Engineers
A small team of engineers set up to help standardise, train on and implement new workflows or applications to improve efficiency within the business. Also an escalation point for support teams.
Role:
Improve business workflows and tooling to provide more efficient service
Independently manage assigned tasks and communicate with task stakeholders
Responsible for leading a team of 13 Windows Engineers, providing 3rd line support to our customer base.
Role:
Management of the Windows Engineering team, including performance reviews and 1-to-1s
Planning of on-call rotas and shift patterns
Ticket queue reporting and ensuring SLAs are adhered to
Technologies: InfluxDB, Grafana, ServiceNow
Technical Lead
2015 — 2017 @ Datapipe (Acquired by Rackspace Technology)
Adapt was a Managed Service Provider, also providing Co-Location and internet circuits to customers.
Role:
Overall technical responsibility for a number of large enterprise businesses.
Work closely with the Account Director and Service Manager to co-ordinate teams and deliver the best possible service.
Track and progress problems and projects to enable continued improvement.
Achieved:
Worked with the company's two largest customers to ensure retention after a period of issues; one customer went on to expand their contract, adding an additional £1.2m as a result of this effort.
Provided a full CMDB device and relationship mapping to allow enhanced support and impact analysis.
Executed monthly usage analysis to provide feedback on potential savings or performance bottlenecks. This resulted in a major bottleneck being found in part of the infrastructure and mitigation being implemented.
Implemented a billing audit to ensure devices were being billed accurately each month.
Resolved several major problems that were causing poor performance for customers.
When working with JUNOS switches you may want to monitor the logs over a period of time without reloading them every few minutes and scrolling to the bottom.
These few commands show you how to do exactly that.
In order to start the monitoring run the following command:
monitor start <log-file-name>
Here is an example command:
monitor start messages
Any changes to the log file will automatically be posted to your screen.
If you want to filter the logs to only show records with certain words then use the following command:
monitor start messages | match error
In order to stop the logs:
monitor stop
Hopefully this article will assist you in viewing your logs with more ease.
One of the most difficult things I found out about 3D printing is that you must calibrate your printer! This wasn't something I was aware of; I assumed that once everything was tightened it would just work. I was so wrong!
The good news is that it's quite a simple process once you know how, and in this article I'm going to share how I calibrate my printer and get perfect prints almost every time.
I use an Ender 3 with a lot of upgrades, but the process is the same for almost all 3D printers, so you should be able to follow this article without issue.
What you will need:
3D printer
Correctly tensioned belts (they should make a nice twang sound)
Ruler (calipers sometimes get in the way but you may be ok)
Tape or marker
Filament
Something to take notes on
Axes Diagram:
Setup Software:
First we need to gather all the current settings. To do this you must send a command to the printer, which can be done with either:
Pronterface
Plug the USB cable into the printer and a computer, then launch Pronterface; it should auto-detect the printer. Then click Connect.
You can now enter commands in the right window next to the Send button
Octoprint
Once OctoPrint is set up, go to the Terminal tab, where you can enter commands.
Gather Initial Info:
Issue the command M92, then press Enter or hit Send. You should see something like this:
echo: M92 X80.00 Y80.00 Z400.00 E93.00
Make a note of this information somewhere as we will be referring back to these values quite often.
Now we can begin to calibrate each of our motors.
X&Z-Axis Calibration
First, home your X and Z axes. I will use the stop switch as the measuring point as this doesn't move; however, you can use any fixed point on the relevant axis.
Measure the distance from the stop switch to the edge of the moving part (X = printhead, Z = gantry). If yours is touching the stop switch then the distance is 0mm.
Now tell your printer to move the axis 100mm (you can use a smaller or larger number as the calculation will still work; the further you move the axis, the more accurate your calibration should be). With your calipers, measure from the stop switch to the same point on the printhead and write down the measurement as "ActualDistance". You will need to do this for both the X and Z axes.
If you measured 100mm then you don't need to do anything else; your axis is calibrated. However, you likely won't get exactly 100mm, so we will need to adjust for this.
E Axis Calibration
There are two ways that you can calibrate the E axis: with the HotEnd attached or without. Personally I prefer to remove the bowden tube from the extruder and measure that way, as I find it much more accurate. Some people prefer to heat the HotEnd and let the filament flow through it.
First, remove your filament and disconnect the bowden tube, then push the filament through the extruder until you just see the end of it flush with the edge where the bowden tube attaches.
Now send 100mm to the E axis to extrude (you will need to heat the HotEnd or it won't work).
Once this finishes, measure with your calipers the distance from the end of the filament to the extruder. This should be 100mm; if not, make a note of the measurement (ActualDistance).
Calculations
In order to calibrate an axis we need the following calculation; it is the same no matter which axis you are working on:
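Here is the steps-per-mm calculation with a worked example (the measured value below is made up purely for illustration):

```
NewStepsPerMM = CurrentStepsPerMM * (TargetDistance / ActualDistance)

Example (illustrative numbers):
  CurrentStepsPerMM = 80.00    ; X value reported by M92
  TargetDistance    = 100 mm   ; what we asked the printer to move
  ActualDistance    = 98.5 mm  ; what we actually measured
  NewStepsPerMM     = 80.00 * (100 / 98.5) = 81.22

M92 X81.22   ; set the new steps/mm (use Y, Z or E for the other axes)
M500         ; save the configuration
```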
We also add an M500, which saves the configuration. If you want to make sure the values have saved, restart your printer and issue M92 again; you should see the new values.
One of our customers was getting the below error and it took ages to find a solution so I thought I would post it here.
Unexpected Exchange mailbox Server error: Server: [server.domain] User: [useremail] HTTP status code: [503]. Verify that the Exchange mailbox Server is working correctly.
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
This is how I fixed the issue:
Open IIS
Right click Default-Website
Click Properties
Click advanced
Review the sites listed; most likely you will see host headers and an IP address.
Click add
IP address = (all unassigned)
TCP Port = 80
Host Header Value = (Blank)
Click OK
Delete the entry with the host headers and IP address assigned.
This should resolve the issue. Please comment if you have any problems doing this.
This post is licensed under CC BY 4.0 by the author.
In this article I will be showing you how to send vCenter logs to a syslog server. I currently use GrayLog2, as it's a great free syslog server and does everything that I require.
First we want to install NxLog on our vCenter Server; this will be our syslog client.
To configure NxLog, go to C:\Program Files (x86)\nxlog\conf and edit nxlog.conf with a text editor.
Add the following configuration to the file:
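A minimal sketch of such an nxlog.conf, tailing the vpxd log and shipping it as GELF over UDP; the log path, GrayLog2 host and port are assumptions to adjust for your environment:

```
<Extension gelf>
    Module      xm_gelf
</Extension>

# Tail the vCenter vpxd log (path is an assumption; check your vCenter version)
<Input vpxd>
    Module      im_file
    File        "C:\\ProgramData\\VMware\\VMware VirtualCenter\\Logs\\vpxd-*.log"
</Input>

# Ship to GrayLog2 as GELF over UDP (host/port are placeholders, match your GELF input)
<Output graylog_vpxd>
    Module      om_udp
    Host        192.168.1.50
    Port        12201
    OutputType  GELF
</Output>

<Route vpxd_route>
    Path        vpxd => graylog_vpxd
</Route>
```

Repeat the Input/Output/Route blocks for any other vCenter logs you want to collect, using a unique port for each.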
Once this configuration has been completed we need to configure an input in GrayLog2 for each of our NxLog outputs. My example just shows how to do this for the VPXD log, but it is the same for any log.
Login to GrayLog2 Web Interface
Go To System > Inputs
Select GELF UDP from the dropdown
Click Launch New Input
Tick Global Input or a specific GrayLog2 Server depending on your setup
Enter a Title e.g. VPXD Logs
Enter a port that you specified in the NxLog configuration (this must be unique)
Click Launch
You should now start to see the logs pouring in. vCenter generates a LOT of logs, so you may want to keep an eye on your syslog server as it could get overloaded with data.
Hope this helped you; if you have any issues or questions, please let me know over on my Discord.
Steve
This post is licensed under CC BY 4.0 by the author.
It was brought to my attention that following the steps listed in KB327000, which applies to Exchange 2000 and 2003, to assign a user Send As permission as another user did not appear to work. I too tried to follow the steps and found that they did not work. I know this feature works, so I went looking for other documentation and found KB281208, which applies to Exchange 5.5 and 2000. Following the steps in KB281208 properly gave a user Send As permission as another user. But I found the steps listed in KB281208 were not complete either. The additional step that I performed was to remove all other permissions other than Send As. Here are the modified steps for KB281208 that I performed:
Start Active Directory Users and Computers; click Start, point to Programs, point to Administrative Tools, and then click Active Directory Users and Computers.
On the View menu, make sure that Advanced Features is selected.
Double-click the user that you want to grant Send As rights for, and then click the Security tab.
Click Add, click the user that you want to give Send As rights to, and then check Send As under Allow in the Permissions area.
Remove all other permissions granted by default so only the Send As permission is granted.
Click OK to close the dialog box.
So after I verified that the steps for KB281208 worked, I was curious as to why the steps for KB327000 did not work. What I found was that Step 7 of KB327000 applied the permission to User Objects instead of This Object Only. Here are the modified steps for KB327000 that I performed:
On an Exchange computer, click Start, point to Programs, point to Microsoft Exchange, and then click Active Directory Users and Computers.
On the View menu, click to select Advanced Features.
Expand Users, right-click the MailboxOwner object where you want to grant the permission, and then click Properties.
Click the Security tab, and then click Advanced.
In the Access Control Settings for MailboxOwner dialog box, click Add.
In the Select User, Computer, or Group dialog box, click the user account or the group that you want to grant Send as permissions to, and then click OK.
In the Permission Entry for MailboxOwner dialog box, click This Object Only in the Apply onto list.
In the Permissions list, locate Send As, and then click to select the Allow check box.
Click OK three times to close the dialog boxes.
The KB articles were updated to include correct information. But, if you had problems with this in the past, this might be why!
Over the years my home lab has grown and become more and more difficult to maintain, especially because I build some servers and then forget about them as they function so well.
I have found recently, though, that moving to newer operating system versions can be difficult for the servers that I can't easily containerise at the moment.
For this reason I have moved over to using Terraform with Proxmox and Ansible.
Telmate developed a Terraform provider that maps Terraform functionality to the Proxmox API, so start by defining the use of that provider in provider.tf:
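A minimal provider.tf along these lines should work; the provider version, variable names and API URL below are assumptions to adapt for your environment:

```hcl
terraform {
  required_providers {
    proxmox = {
      source  = "Telmate/proxmox"
      version = ">= 2.9.0"
    }
  }
}

provider "proxmox" {
  pm_api_url      = var.pm_api_url   # e.g. https://proxmox.example.local:8006/api2/json
  pm_user         = var.pm_user      # e.g. terraform@pve
  pm_password     = var.pm_password
  pm_tls_insecure = true             # set to false if your API certificate is trusted
}
```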
ansible_user=""
Don’t commit this file to Git as it contains sensitive information
Any variables in vars.tf that have a default value don’t need to be defined in the credential file if the default value is sufficient.
The Cloud-Init template
The configuration used here utilises a cloud-init template; check out my previous post (Proxmox template with cloud image and cloud init), where I cover how to set this up for use in Proxmox with Terraform.
Usage
Now that all of the files we require are created, let's get it running:
Install Terraform and Ansible
apt install -y terraform ansible
Enter the directory where your Terraform files reside
Run terraform init; this will initialize your Terraform configuration and pull all the required providers.
Ensure that you have the credential.auto.tfvars file created and populated with your variables.
Run terraform plan -out plan and, if everything looks good, terraform apply.
Use terraform apply --auto-approve to automatically apply without a prompt
To destroy the infrastructure, run terraform destroy
Final Thoughts
There is so much more potential in using Terraform and Ansible. I have just scratched the surface, but you could automate everything up to firewall configuration as well. This is something I still need to look into, but it would be great to deploy and configure the firewall based on each individual device.
If you have any cool ideas for using Terraform and Ansible please let me know in the comments below!
I was recently asked if it was possible to update vCenter alarms in bulk with email details, so I set about writing the script below. Basically, the script looks for any alarms that match the name you specify and sets the email action as required.
This is a really basic script and can easily be modified to set alarms however you want them.
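The approach boils down to the sketch below; the alarm name, email address and cmdlet parameters are placeholders and may need adjusting for your PowerCLI version:

```powershell
# Connect first, e.g. Connect-VIServer vcenter.example.local
$alarms = @("Test Alarm1")
$mailTo = "alerts@example.local"   # placeholder address

foreach ($name in $alarms) {
    $alarm = Get-AlarmDefinition -Name $name

    # Remove any existing email actions so we start from a clean state
    Get-AlarmAction -AlarmDefinition $alarm -ActionType SendEmail |
        Remove-AlarmAction -Confirm:$false

    # Add a new email action for this alarm
    $action = New-AlarmAction -AlarmDefinition $alarm -Email -To $mailTo -Subject "vCenter alarm: $name"

    # Add the Green->Yellow trigger
    New-AlarmActionTrigger -AlarmAction $action -StartStatus "Green" -EndStatus "Yellow"

    # A Yellow->Red trigger is created by default with the action, so remove it
    # and re-add it last (see the note at the end of the post)
    Get-AlarmActionTrigger -AlarmAction $action |
        Where-Object { $_.StartStatus -eq "Yellow" -and $_.EndStatus -eq "Red" } |
        Remove-AlarmActionTrigger -Confirm:$false
    New-AlarmActionTrigger -AlarmAction $action -StartStatus "Yellow" -EndStatus "Red"
}
```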
To edit multiple alarms at once simply change the $alarms variable as below:
$alarms = @("Test Alarm1","Test Alarm2")
One thing you will probably notice is that we set the "Yellow" to "Red" status after everything else. The reason for this is that it is set by default when creating the alarm definition, and we need to unset it before resetting it with the required notification type.
This post is licensed under CC BY 4.0 by the author.
Working with CentOS quite a lot, I have spent time looking for configurations that work for various issues. One I have seen recently, which took me a long time to resolve and had very poor documentation around the net, was setting up an L2TP VPN.
In Windows or iOS it's a nice simple setup where you enter all the required details and it sorts out the IPsec and L2TP VPN for you; in CentOS this is much different.
First we need to add the EPEL repository:
yum -y install epel-release
Now we need to install the software:
sudo yum -y install xl2tpd openswan
At the end of the setup, a small check script adds the required route over ppp0 once the tunnel is up:
then
sudo route add -net xxx.xxx.xxx.xxx/xx dev ppp0
fi
This can then be created as a cron job to make sure the VPN is always up and running.
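As a sketch of how that check might be wrapped and scheduled, something like the following could run from cron (the script path, connection name and control socket are assumptions, not taken from the original script):

```bash
#!/bin/bash
# /usr/local/bin/vpn-keepalive.sh - illustrative only
if ! /sbin/ip link show ppp0 > /dev/null 2>&1; then
    ipsec auto --up my-l2tp-connection                           # bring up the IPsec tunnel (openswan)
    echo "c my-l2tp-connection" > /var/run/xl2tpd/l2tp-control   # start the L2TP session
    sleep 5
fi

# Cron entry (e.g. in /etc/cron.d/vpn-keepalive) to run the check every minute:
# * * * * * root /usr/local/bin/vpn-keepalive.sh > /dev/null 2>&1
```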
I have had a lot of issues when setting up teaming with WiFi, mainly because of the lack of documentation around this; I'm guessing that teaming Ethernet and WiFi is not a common occurrence, especially with a hidden SSID.
As part of my home systems I am utilising an old laptop as my Home Assistant server. This allows for battery backup and network teaming: if my switch dies, my WiFi will still work, and so on.
Let's get to the meat and potatoes!
The first thing that we need to do is check our devices are available:
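For example, with NetworkManager's nmcli (device names will differ on your hardware):

```bash
# List the devices NetworkManager can see (ethernet, wifi, etc.)
nmcli device status

# Confirm the WiFi radio is enabled
nmcli radio wifi
```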
When the team is up, the end of the team state output confirms the active port:
down count: 0
runner:
active port: eno1
The following tutorial walks you through how to set up authentication using a key pair to negotiate the connection, removing the requirement for passwords.
1. First, create a public/private key pair on the client that you will use to connect to the server (you will need to do this from each client machine from which you connect):
ssh-keygen -t rsa
Leave the passphrase blank if you don't want to be prompted for it.
This will create two files in your ~/.ssh directory: id_rsa and id_rsa.pub. The first, id_rsa, is your private key and the second, id_rsa.pub, is your public key.
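The copy-to-server step typically looks like this (a sketch; the user and host are placeholders):

```bash
# Copy the public key to the server (creates ~/.ssh/authorized_keys if needed)
ssh-copy-id user@server

# Or append it manually, then tighten the permissions sshd expects
cat ~/.ssh/id_rsa.pub | ssh user@server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
ssh user@server "chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys"
```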
The above permissions are required if StrictModes is set to yes in /etc/ssh/sshd_config (the default).
Ensure the correct SELinux contexts are set:
restorecon -Rv ~/.ssh
Now when you login to the server you shouldn’t be prompted for a password (unless you entered a passphrase). By default, ssh will first try to authenticate using keys. If no keys are found or authentication fails, then ssh will fall back to conventional password authentication.
If you want access to and from multiple servers, you will need to complete this process on each client server and master server.
If you have any issues with setting this up, please let me know over on my Discord.
This article provides various hardening tips for your Linux server.
1. Minimise Packages to Minimise Vulnerability
Do you really want all sorts of services installed? It's recommended to avoid installing packages that are not required, to avoid vulnerabilities. This may minimise risks that could compromise other services on your server. Find and remove or disable unwanted services from the server to minimise vulnerability. Use the chkconfig command to find out which services are running on runlevel 3.
/sbin/chkconfig --list | grep '3:on'
Once you’ve found any unwanted services that are running, disable them using the following command:
chkconfig serviceName off
sh /usr/local/ddos/ddos.sh
Restart DDoS Deflate:
sh /usr/local/ddos/ddos.sh -c
14. Install DenyHosts
DenyHosts is a security tool written in Python that monitors server access logs to prevent brute-force attacks on a virtual server. The program works by banning IP addresses that exceed a certain number of failed login attempts.
This list is not yet complete; I am constantly adding new security tips to it. Should you have any you think I should include, please comment below and I will add them.
There are multiple ways to tell if a virtual machine has thick or thin provisioned VM Disk. Below are some of the ways I am able to see this information:
VI Client (thick client)
Select the Virtual Machine
Choose Edit Settings
Select the disk you wish to check
Look under Type
Web Client
Select your Host in Host and Cluster inventory -> Related Objects -> Virtual machines tab
Select your Host in Host and Cluster
click Related Objects
click Virtual machines tab
PowerCLI
Launch PowerCLI
Type: Connect-VIServer
Run this command:
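Something along these lines will report the provisioning type of every disk (a sketch, not necessarily the exact one-liner from the original post):

```powershell
# StorageFormat shows Thin, Thick or EagerZeroedThick for each virtual disk
Get-VM | Get-HardDisk | Select-Object Parent, Name, CapacityGB, StorageFormat
```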
In my last article I talked about how to set up the Homer dashboard with Docker; now I will walk through some of its features and how to use them.
Main Features
Some of Homer's main features are:
YAML file configuration
Search
Grouping
Theme customisation
Service Health Checks
Keyboard shortcuts
Configuration
To begin configuration, navigate to the Homer data folder that we created in the previous article (dockerfiles\homer\data). You will store all the files you require here, but first open config.yml.
The initial configuration gives you an idea of how to lay out your dashboard, and each section has a great explanation of how to use it.
One thing that isn't covered is the service checks; we will look at those later.
To set up a basic section and URL you would need something like this:
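A minimal example in config.yml might look like this (the names, icon, logo and URL are placeholders rather than the post's original snippet):

```yaml
services:
  - name: "Applications"
    icon: "fas fa-cloud"
    items:
      - name: "Example App"
        logo: "assets/tools/example.png"
        subtitle: "A short description"
        url: "https://example.local"
        target: "_blank"
```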
To add more items, just copy the first item and change its details for the second service that you wish to link out to.
For custom icons, you need to add the files to the tools folder and then update the logo line in the configuration.
I recommend checking out dashboard-icons which contains a huge list of icons that work great with Homer.
Service Checks
Additional checks can be added to an item; these are called Custom Services. Some applications have direct integration, while others can only use ping. A full list of the supported services and how to configure them is listed here.
Custom Themes
You can add custom CSS to Homer to give it a personal look, similar to the one I use from Walkxcode called homer-theme.
Easier Updates
Sometimes updating via the terminal using nano/vim can be a pain. I personally use VS Code for the majority of my editing, so I set up Remote SSH, which allows me to connect to my Docker server's file system and edit the configuration files directly in VS Code.
Hopefully this information was useful for you. If you have any questions about this article, share your thoughts in the discussion below or head over to my Discord.
I have recently been looking into CI and CD, mainly for use at home with my various projects etc. but also to further my knowledge.
Over the years I have built up quite an estate of servers that have become more difficult to manage and maintain over time; mostly I will spend a long time researching and deploying a solution, but when it breaks weeks or months later I struggle to remember how it was all built.
There must be a better way!
So now I'm looking for the best way to deploy, re-deploy and test all of my servers and services with minimum effort, and without breaking them if I do something wrong.
I started by building out Ansible playbooks, one for each of my servers. This works great for deploying my servers with all the apps that I require; however, it doesn't help with things like Home Assistant configuration changes. If I change my config I have to do it via Atom with a remote plugin that FTPs the changes across. This works… but if I make a mistake I take Home Assistant offline, which doesn't go down well with the family!
After this I thought how can I update my configuration, keep it backed up, have the ability to roll it back and also test it before I put it on my server?
So I have now started using GitHub to store my configuration. This gives me a backup in case my server dies and also helps the HA community see examples of the configuration for their own deployments.
I also want to check the new configuration when it gets committed to Git, but before I download it to Home Assistant; for this I use GitLab. Whenever GitLab detects a commit on the Git repository it will begin a pipeline that checks my latest configuration for various things (a sketch of such a pipeline follows the list):
MarkdownLint - Checks any files containing Markdown to make sure they are valid
YAMLlint - Checks YAML files for formatting and validation
JSONlint - Checks any JSON files for formatting and validation
HA Stable / Dev / Beta - My Home Assistant configuration is then checked against the different builds
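As a rough illustration, a .gitlab-ci.yml for a couple of these checks could look something like the sketch below; the stage names, images and commands are indicative only:

```yaml
stages:
  - lint
  - check

yamllint:
  stage: lint
  image: python:3-slim
  script:
    - pip install yamllint
    - yamllint .

ha-stable:
  stage: check
  image: homeassistant/home-assistant:stable
  script:
    - python -m homeassistant --script check_config --config .
```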
By doing all of the above checks I will know that the code works as expected and I can also tell that it will work with all the current releases of HomeAssistant.
Once the configuration has been checked the pipeline will trigger a webhook back to my Home Assistant server which then pulls the latest commit from GitHub and restarts HomeAssistant.
Now I have gone from roughly 15 to 30 minutes of testing and troubleshooting, along with potential outages, down to around 2 minutes and no long outage for my Home Assistant.
Conclusion
By doing this I have saved myself 13 to 28 minutes per configuration change; when you add that up over weeks and months of changes, I have very quickly saved a day's worth of configuration work! If you then add the time saved by using Ansible, I can deploy a brand new Home Assistant server in around 10 minutes, fully configured and functional.
As I move closer to the world of development within my career, I have been looking for more efficient ways to spend my time, along with helping my colleagues and myself follow the programming, documentation and best practices we have set.
When we create a new project there are many repetitive tasks, such as creating pyproject.toml, directory structures and documentation folders; these tasks are time consuming, repetitive and prone to user error.
Some context
Starting a new repository for a new project is always a chore, especially when working with large teams where others are collaborating with you. You have to follow the same standards and coding practices to ensure all developers know what is happening.
Working in large teams means that with many different projects and repositories it is very likely that none of them will follow the same base structure that is expected. To help alleviate this problem and fulfil these expectations I created project templates that anyone can follow to ensure all base projects are the same.
What is Cookiecutter
Cookiecutter is a CLI tool built in Python that creates a project from boilerplate templates (mainly available on GitHub). It uses the Jinja2 templating system to replace and customize folder and/or file names, as well as their content.
Although built with Python, you are not limited to templating Python projects; it can easily be used with other programming languages. However, to do this you will need to know or learn some Jinja, and if you want to implement hooks these will need to be written in Python.
Why use cookiecutter
Simply put: to save time building new project repositories, to avoid missing files or commit checks and, probably most importantly, to make life easier for new team members who will be expected to create projects.
We also use it as a way to enforce standards, providing the developer with the structure needed to ensure the rules are followed: write documentation, perform tests and follow specific syntax standards. By giving them the base structure as boilerplate code, it makes it easier for developers to follow standards.
In certain projects you may have a lot of repetitive code, such as when creating Flask websites; with a Cookiecutter template you can duplicate that code with ease and little time spent.
How to use Cookiecutter
Cookiecutter is super simple to use: you can either use one of the many templates that already exist online, or you can create one that suits your own needs.
You can access templates from various locations:
Git repository
Local folder
Zip file
If working with Git repositories, you can even start a template from any branch!
To try out Cookiecutter, it first needs to be installed:
pip install -U cookiecutter
Once installed run the following command:
cookiecutter gh:totaldebug/python-package-template
if not re.match(MODULE_REGEX, module_name):
    print('ERROR: The project slug (%s) is not a valid Python module name. Please do not use a - and use _ instead' % module_name)
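At the simpler end of the spectrum, a minimal cookiecutter.json looks something like this (an illustrative sketch, not the template's actual file):

```json
{
  "project_name": "My Package",
  "project_slug": "{{ cookiecutter.project_name.lower().replace(' ', '_').replace('-', '_') }}",
  "author_name": "Your Name"
}
```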
As you can see from the examples, you can either create a very simple template or add Jinja / Python for more complex templates and error validation.
Final Thoughts
Cookiecutter has saved me a lot of time in the creation of projects; a lot of the boring template work is taken out of starting a new project, which is always a bonus.
Now all of my projects start to a good standard and should be easier to keep that way.
If you would like to check out Cookiecutter, you could start with my python-package-template.
I have added things like GitHub Actions and pre-commit checks, along with other Python best practices that I hope to cover in my next article.
Hopefully some of this information was useful for you. If you have any questions about this article or want to share your thoughts, head over to my Discord.
I have begun sorting out my smart home again. I let it run to ruin a year or so ago, and now that I'm getting solar installed I want to increase my automation to make life easier and use my solar more efficiently once it's installed.
As part of my automation I used to run deCONZ with some Zigbee IKEA Tradfri lights around the house. I found deCONZ limiting at the time and it doesn't seem to have progressed much, whereas zigbee2mqtt seems to have moved a long way and has a lot of support.
I also had the issue that Home Assistant now runs on a virtual machine in my loft, where the Conbee II signal didn't reach my devices. To combat this I wanted to utilise an old Raspberry Pi and create a Zigbee hub that is easy to maintain in a set-and-forget fashion: if it stops working, reboot it and it works again.
This is when I came up with ZigQt, an Alpine overlay that will fully configure a Zigbee2mqtt controller on a Raspberry Pi in a stateless manner. In this article I will show you how to set up this great little ZigQt hub.
Hardware
For this I have used the following hardware:
Raspberry Pi 3b plus
POE+ Hat (Optional)
Micro SD Card
Conbee II (can use other zigbee dongles)
OS Installation
For the OS I have used Alpine Linux. By default Alpine is a diskless OS, meaning it loads the whole OS into memory, which makes it lightning fast.
Create a bootable MicroSD card with two partitions
The goal is to have a MicroSD card containing two partitions:
The system partition: a FAT32 partition, with boot and lba flags, on a small part of the MicroSD card, enough to store the system and the applications (suggested 512MB to 2GB).
The storage partition: an ext4 partition occupying the rest of the MicroSD card capacity, to use as persistent storage for any configuration data that may be needed.
Creating the partitions (assuming you're using Linux)
Mount the SD card (this should be automatic; if not, you probably know how to do that and you probably don't need this tutorial).
List your disks:
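For example (device names will vary; the MicroSD card often appears as /dev/mmcblk0 or /dev/sdX):

```bash
# Show block devices and their partitions to identify the MicroSD card
lsblk
```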
A default zigbee2mqtt configuration is created during install; however, this may not suit your needs, in which case you can create a custom configuration.yaml file. Further configuration options can be found here.
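As an illustration, a custom configuration.yaml could look something like this (the MQTT address and serial port are placeholders for your own setup):

```yaml
homeassistant: true
permit_join: false
mqtt:
  base_topic: zigbee2mqtt
  server: mqtt://192.168.1.10:1883
serial:
  port: /dev/ttyACM0   # the Conbee II usually appears as ttyACM0
frontend:
  port: 8080
```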
Further customisation
This repository may be forked/cloned/downloaded. The main script file is headless.sh. Execute ./make.sh to rebuild zigqt.apkovl.tar.gz with any of the changes made.
On your Pi
Initial Boot
Each time the hub reboots, the initial boot sequence is run. This ensures that the OS is the same on every boot, greatly reducing the risk of changes to the OS causing issues with the hub.
The following directories are mapped to persistent storage:
/var
/etc/zigbee2mqtt
This ensures certain configuration is not lost on reboot.
User/Password management
The root user has no password by default. It isn't currently possible to update the password without breaking the way the overlay works; however, in theory you could launch a copy of Alpine Linux without the ZigQt overlay, set up a password and an alternative user, run lbu commit to save the changes, and then merge the required files with those in zigqt.apkovl.tar.gz.
If I manage to figure out an easier way to do this I will be sure to update this article.
Zigbee2mqtt
If everything has worked, zigbee2mqtt should be accessible at the following address: http://zigqt.local:8080.
Any configuration changes made in the web interface will be saved to the persistent storage, so will still be in effect after a reboot.
Updates
To update to a newer version, simply reboot; the latest available zigbee2mqtt will be installed.
Final thoughts
At the moment this is the best solution I could come up with to provide a fully functioning and maintenance-free version of zigbee2mqtt on a standalone Raspberry Pi.
I hope to have a solution for the user and password management someday, but if you know a way to get around this please do let me know.
Creating the perfect Python project
Working on a new project, it's always exciting to jump straight in and get coding without any setup time. However, spending a small amount of time setting up the project with the best tools and practices will lead to a standardised and aligned coding experience for developers.
In this article I will go through what I consider to be the best Python project setup. Please follow along, or if you prefer to jump straight in, you can use cookiecutter to generate a new project following these standards: install Poetry, then create a new project.
Poetry: Dependency Management
Poetry is a Python dependency management and packaging system that makes package management easy!
Poetry comes with all the features you would need to manage a project's packages; it removes the need to freeze your environment and potentially include packages that are not required for the specific project. Poetry only adds the libraries that you require for that project.
No more need for the unmanageable requirements.txt file.
Poetry will also create a venv to ensure only the required packages are loaded. With one simple command, poetry shell, you enter the venv with all the required packages.
Let's get set up with Poetry.
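As a quick sketch of that workflow (the project and package names below are just examples):
# Install Poetry using the official installer
curl -sSL https://install.python-poetry.org | python3 -
# Create a new project skeleton and enter it
poetry new my-project
cd my-project
# Add runtime and development dependencies (recorded in pyproject.toml; --group needs Poetry 1.2+)
poetry add requests
poetry add --group dev black pytest
# Install everything into a dedicated virtual environment and activate it
poetry install
poetry shell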
Now when you run a commit you will see each hook running; any errors are shown before the commit is made, so you can fix the issues and try the commit again.
You can also see that I have conventional-pre-commit applied with the -t commit-msg hook type; this enforces the use of Conventional Commit messages for all commits, ensuring that our commit messages all follow the same standard.
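For reference, this is roughly how the hooks get wired up once a .pre-commit-config.yaml is in place; a sketch of standard pre-commit usage rather than the exact commands from this project:
# Install pre-commit into the project environment
poetry add --group dev pre-commit
# Install the git hooks: the default pre-commit hook plus the commit-msg
# hook type needed by conventional-pre-commit
poetry run pre-commit install
poetry run pre-commit install -t commit-msg
# Optionally run every hook against the whole repository once
poetry run pre-commit run --all-files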
Final Thoughts
This method of utilising cookiecutter and pre-commit hooks has saved me a lot of time. I think there is more to be explored with pre-commit hooks, such as adding tests for my code; that will come with time on my development journey.
With these methods I know my commit messages are tidy and my code is cleaner than before; it's a great start with more to come.
I also execute these as GitHub Actions on my projects, so anyone else who contributes but doesn't install the pre-commit hooks will be held accountable for resolving any issues prior to merging their pull requests.
Hopefully some of this information was useful for you. If you have any questions about this article or want to share your thoughts, head over to my Discord.
Recently I have seen an issue after upgrading some of our Dell R6xx hosts to ESXi 5.5 U2: they started showing FCoE in the storage adapters and booting took a really long time.
I looked into this and found that the latest Dell ESXi image also includes drivers and scripts that enable the FCoE interfaces on cards that support it.
To see whether you have this problem, check the following:
On boot, press ALT + F12 to show what ESXi is doing during startup; you will see the following errors repeated multiple times:
FIP VLAN ID unavail. Retry VLAN discovery
fcoe_ctlr_vlan_request() is done
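Before applying the fix below, it can be useful to confirm which NICs are FCoE-capable and which VIB ships the driver. A rough sketch from the ESXi shell; the bnx2fc pattern is taken from the 99bnx2fc.sh script name and is an assumption about the driver involved:
# List FCoE-capable NICs and any activated FCoE adapters
esxcli fcoe nic list
esxcli fcoe adapter list
# Find the VIB that provides the FCoE driver (name will vary by image)
esxcli software vib list | grep -iE 'fcoe|bnx2fc'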
rm 99bnx2fc.sh
esxcli fcoe nic disable -n=vmnic4
esxcli fcoe nic disable -n=vmnic5
This will remove the FCoE VIB, delete a script that runs to check for the VIB, and then disable FCoE on the required vmnics.
Hopefully this will help someone else, as it took me a long time to find this solution and resolve the issue.
Deploy .exe using batch: check OS version and if the update is already installed
OK, so I had an issue: Microsoft released an update for Windows XP that I needed to install, but they didn't provide an MSI, so I couldn't deploy it using GPO, which was a real pain.
Instead I created a script that checks the OS version and whether the update is already installed.
First we hide the script output from users:
@ECHO OFF
Then we check that the machine is running the correct OS (for Windows 7 you would search for "Version 6.1" instead):
ver | find "Windows XP" >NUL
if errorlevel 1 goto end
Check to see if the update is installed (change the registry location depending on the update):
reg QUERY "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Updates\Windows XP\SP20\KB943729" >NUL 2>NUL
if errorlevel 1 goto install_update
goto end
Then, if it is the correct OS and the update isn't installed, run the exe:
:install_update
\\PUT_YOUR_SHARE_PATH_HERE\Windows-KB943729-x86-ENU.exe /passive /norestart
End (this is added so that the script stops if the criteria are not met, preventing errors):
:end
You can then add this to a Group Policy to allow it to be deployed.
I have recently had an issue with people leaving snapshots on VMs for too long, causing large snapshots and poor performance on the virtual machines.
I decided that I needed a way of reporting on which virtual machines had snapshots present, when they were created and how big they are.
The attached PowerCLI script does just that! It will log on to vCenter, check all of the virtual machines for snapshots, and then send an email report to the email address specified.
This script supports the get-help command and tab completion of parameters.
The only thing that I ask is that when using this script you keep my name and website present in the notes; if there are any improvements you think I could make, please let me know.
Recently I have been playing in my lab with VCSA and vCNS, and I found that when I tried to connect to the vCenter I received this error:
Failed to connect to VMware Lookup Service.
SSL certificate verification failed.
I was stuck for a little while as to why I was getting this error, then I noticed that the SSL certificate had a different name to the appliance, due to it being deployed and then renamed. Luckily for me the fix is very simple!
Go to http://<appliance-address>:5480
Click the "Admin" tab
Change "Certificate regeneration enabled" to yes; this is done with either a toggle button on the right or a radio button, depending on the VCSA version.
Restart the vCenter Appliance
Once the appliance reboots it will regenerate the certificates
Change "Certificate regeneration enabled" back to no, using the same toggle or radio button.
Try to reconnect your appliance / application to vCenter and it should work with no problems.
How to correctly set up folder redirection permissions for My Documents, Start Menu and Desktop. I have worked on many company computer systems where this hadn't been done correctly, resulting in full access to all files and folders; as an outsider I had access to other people's My Documents from my laptop without even being on the domain! Following this article will stop that happening to your data.
When creating the redirection share, limit access to the share to only users that need access.
Because redirected folders contain personal information, such as documents and EFS certificates, care should be taken to protect them as well as possible. In general:
Restrict the share to only users that need access. Create a security group for users that have redirected folders on a particular share, and limit access to only those users.
When creating the share, hide the share by putting a $ after the share name. This will hide the share from casual browsers; the share will not be visible in My Network Places.
Only give users the minimum amount of permissions needed. The permissions needed are shown in the tables below:
Table 12: NTFS Permissions for the Folder Redirection Root Folder (minimum permissions required per account)
Creator/Owner: Full Control, Subfolders And Files Only
Administrator: None
Security group of users needing to put data on the share: List Folder/Read Data, Create Folders/Append Data – This Folder Only
Everyone: No Permissions
Local System: Full Control, This Folder, Subfolders And Files
Table 13: Share level (SMB) Permissions for the Folder Redirection Share (default and minimum permissions per account)
Everyone: default Full Control; minimum No Permissions
Security group of users needing to put data on the share: default N/A; minimum Full Control
Table 14: NTFS Permissions for Each User's Redirected Folder (default and minimum permissions per account)
%Username%: default Full Control, Owner Of Folder; minimum Full Control, Owner Of Folder
Local System: default Full Control; minimum Full Control
Administrators: default No Permissions; minimum No Permissions
Everyone: default No Permissions; minimum No Permissions
Always use the NTFS filesystem for volumes holding users' data.
For the most secure configuration, configure servers hosting redirected files to use the NTFS File System. Unlike FAT, NTFS supports Discretionary access control lists (DACLs) and system access control lists (SACLs), which control who can perform operations on a file and what events will trigger logging of actions performed on a file.
Let the system create folders for each user.
To ensure that Folder Redirection works optimally, create only the root share on the server, and let the system create the folders for each user. Folder Redirection will create a folder for the user with appropriate security.
If you must create folders for the users, ensure that you have the correct permissions set. Also note that if pre-creating folders you must clear the "Grant the user exclusive rights to XXX" checkbox on the Settings tab of the Folder Redirection page. If you don't clear this checkbox, Folder Redirection will first check the pre-existing folder to ensure the user is the owner. If the folder was pre-created by the administrator, this check will fail and redirection will be aborted. Folder Redirection will then log an event in the Application event log:
Error: Folder Redirection
Event ID: 101
Event Message:
Failed to perform redirection of folder XXXX. The new directories for the redirected folder could not be created. The folder is configured to be redirected to \\server\share, the final expanded path was \\server\share\XXX.
The following error occurred:
This security ID may not be assigned as the owner of this object.
It is strongly recommended that you do not pre-create folders, and allow Folder Redirection to create the folder for the user.
Ensure correct permissions are set if redirecting to a user's home directory.
Windows Server 2003 and Windows XP allow you to redirect a user's My Documents folder to their home directory. When redirecting to the home directory, the default security checks are not made: ownership and the existing directory security are not checked, and any existing permissions are not changed. It is assumed that the permissions on the user's home directory are set appropriately.
If you are redirecting to a user's home directory, be sure that the permissions on the user's home directory are set appropriately for your organization.
I have been setting up a lot of FortiGates recently, and on my first few I had issues with the LDAP settings. I found it tricky to remember the correct settings, and typing out the long LDAP strings can also be fiddly and cause typos.
Log on to the FortiGate and go to Users -> Remote -> LDAP (Create New)
Fill in a Name for the connector
Fill in the IP Address of the server that has LDAP Installed
Change the Common Name Identifier to: sAMAccountName
Enter the Distinguished Name; if your domain were domain.local, the distinguished name would be: DC=domain,DC=local
Make your Bind Type Regular
In the User DN box you must type the full path to the user, e.g. if your user is domain.local/users/service accounts/fortigate you would need the following: CN=fortigate,OU=Service Accounts,OU=Users,OU=MyBusiness,DC=domain,DC=local
Type the password for your service account
This should be all that you require. One thing to keep an eye on is typos in the User DN; these will stop you from being able to log on with the SSL-VPN, or anything else for that matter!
If you get an error in the logs for the SSL-VPN saying no_matching_policy, then you have a typo somewhere.
I recently required a syslog server that was easy to use, with a web interface, to monitor some customers' firewalls. I had been looking at Splunk, but due to the price it was not a viable option for what I required.
After a little searching I came across Graylog2, which is an open source alternative to Splunk and is totally free! You only need to pay if you would like support from them.
So here is how I set up the server and got it working on my CentOS server.
Home Assistant medication notification using Node-RED
For around 4 years I have had to take medication for Rheumatoid Arthritis once every two weeks. I always forget when I last took the medication and end up skipping a dose, which causes me pain.
Because of this, I decided I needed a way to log when I take my medication, and then a notification on my phone when I'm due to take it again.
I ended up creating a workflow in Node-RED that will do the following after I scan an NFC tag located on my fridge where I keep the medication:
Update an input_datetime in Home Assistant with the current date and time
Check every 60 minutes whether the medication date is over 13 days ago
On Monday, check if it's been 10 days since the last medication, then send a notification reminding me to take my medication that week
After 14 days, if the input_datetime hasn't been updated, send a notification to my mobile and TV every hour until it is reset.
Let's look at how I made this.
Home Assistant Configuration
Some changes need to be made within Home Assistant to make this work.
Input Datetime
Adding the input_datetime entity requires editing the configuration.yaml file directly.
Add the following to your configuration:
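As a rough sketch, an input_datetime entry looks like the following; the entity name medication_last_taken and its options are my own example rather than the exact entity used in these flows, shown here as a shell snippet appended to configuration.yaml:
# Append an example input_datetime helper to Home Assistant's configuration.yaml
# (entity name and friendly name are placeholders - match them to your Node-RED flow)
cat >> configuration.yaml <<'EOF'
input_datetime:
  medication_last_taken:
    name: Medication last taken
    has_date: true
    has_time: true
EOF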
The weekly (10-day) reminder sends a notification payload like this:
{"message":"You need to take your medication this week.","title":"This Week: Take Medication","data":{"color":"#2DF56D"}}
Notify every 60 minutes after 14 days
This workflow is essentially the same as the 10-day notification with a few tweaks, so you can copy the previous workflow and make these changes:
In the inject node, select interval or interval between times, then every X minutes, and select all the days you want it to run.
In the function node, change the value to 1209600000 for 14 days, or as required for your notification.
I also amended the message and added an additional notify node for my TV; this way it will pop up on my TV every 60 minutes to annoy me into getting my medication. The payload looks like this:
{"message":"You need to take your medication.","title":"Take Medication","data":{"color":"#2DF56D"}}
That is everything done; you can now deploy and test.
Final thoughts
I have now been using this workflow for around 2 months and it has been working great.
The notifications to the TV even annoy my wife which really does make me get my medication out quicker!
If you have any ideas on how I could improve this workflow further, please leave a comment.
Recently I have decided to get my home network in order. One of the things I realised was that I spend a lot of time trying to remember the IP addresses or URLs for services within my home, especially ones that I access infrequently.
At one point I did have a dashboard that was plain HTML, but I never updated it and I decided to remove it a year or so ago.
After sitting on YouTube for a few hours watching rubbish, I came across Homer, a simple-to-use Docker container that hosts an easily configurable dashboard with customisable designs.
Homer is configured using YAML, making it very familiar to me, having used Docker for a number of years now.
Directory setup
In order to use Homer with Docker, I first created a directory to store the configuration file and any other assets such as images. Mine is on an NFS share, but this would be the same for local files. My file structure is as follows:
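Something along these lines will do it; the paths, port and restart policy here are examples rather than my exact layout, and b4bz/homer is the image published by the Homer project:
# Create the directory that will hold Homer's config and icons
mkdir -p /opt/homer/assets
# Run the Homer container, mapping the assets directory into the container
docker run -d \
  --name homer \
  -p 8080:8080 \
  -v /opt/homer/assets:/www/assets \
  --restart unless-stopped \
  b4bz/homer
Once the container is up, the dashboard should be reachable at the address below.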
http://<docker-host-ip-address>:<port>
So in my case this would be:
http://172.16.20.4:8080
If everything has worked as expected you should see the Homer demo dashboard.
For more information on how to configure this dashboard, check out this article where I cover the configuration of the dashboards in more detail.
Hopefully this information was useful for you. If you have any questions about this article, share your thoughts and comment in the discussion below, or head over to my Discord.
How I host this site
My site isn’t anything special, but I thought I’d like to share how I create and host things for others who may be interested in sharing their own words with a very simple and easy to maintain structure.
Motivation for hosting this site
I have hosted a blog site in some form for the past 10+ years. The idea is to share my experience and hopefully help others with some of the issues I have come across through my career working for one of the largest MSPs in the world.
Sharing on social media has come more recently, but this site still serves as the main location for all of my content. Even more so in the past few months, social media platforms have shown that they are not a certain thing: accounts get suspended, ownership changes kill services, rules change, and so on. By hosting my own site I have total control over the content with no risk of losing anything, which for me is well worthwhile.
Ultimately, I wanted this site to be one place where you can always find my projects, regardless of what other platforms may do. I don't make any money off my content, so keeping it low cost is important. Seeing others lose their work due to account issues or frustration with a platform served as motivation to own my content.
If you are a content creator, I encourage you to stay platform agnostic, allowing you to easily recover if for some reason an account is suspended.
The site
Let's get to the bones of it: this site is built using Jekyll, a Ruby-based tool that converts Markdown into a static website. For my use case it was the perfect fit.
Here are some of the things that I like about it:
Small footprint - I used WordPress for my last site, but found it massively bloated for my needs, along with update issues and other administrative overheads. Being static, Jekyll removes a lot of this complexity.
Security - WordPress and its plugins, due to their popularity, see a lot of vulnerabilities exploited. With a static site generated by Jekyll this risk is significantly reduced.
CDN Friendly - Having static content means that the site is able to be cached, handling incredible loads at low cost across the globe.
Simple Format - Using markdown for all of the posts means that the content is pretty easy to move around. They can be used with other frameworks or easily converted to different formats if needed.
Git Friendly - I hold my entire site in Git, so backups are easy along with the history of any changes.
Jekyll also supports additional features through plugins, like RSS, sitemaps, metadata, pagination and much more. If there isn't a plugin to meet your needs, it's simple to create something.
It's also incredibly fast at building a site and generates predictable, easy-to-host results. If you haven't looked at Jekyll, you might give it a whirl!
Jekyll though needs two things to make it really work:
A way to build the site
A place to host the site
Building the site
Building a Jekyll site is easy, you just run jekyll build (a quick local sketch is shown below), but to make things even easier I utilize GitHub Actions to automate the builds and deploy whenever changes happen.
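For a local build, assuming the usual Gemfile and Bundler setup (a minimal sketch rather than my exact commands):
# Install the site's Ruby dependencies
bundle install
# Build the static site into _site/
bundle exec jekyll build
# Or serve it locally with live reload while writing
bundle exec jekyll serve --livereload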
Using Actions is pretty simple and the documentation is great, so it's easy to learn. Here is my workflow for this site:
My site isn’t anything special, but I thought I’d like to share how I create and host things for others who may be interested in sharing their own words with a very simple and easy to maintain structure.
Motivation for hosting this site
I have hosted a blog site in some form for the past 10+ years. The idea being to share my experience with others and hopefully help others with some of the issues I have come across through my career working for one of the largest MSPs in the world.
Sharing on social media has come more recently, but this site still serves as the main location for all of my content. Even more so in the past few months, social media platforms have shown that they are not a certain thing. Accounts get suspended, ownership changes kill services, rules change, etc. hosting my own site I have total control over the content with no risk of losing anything, which for me is well worthwhile.
Ultimately, I wanted this site to be one place where you can always find my projects, regardless of what other platforms may do. I don’t make any money off my content so keeping it low cost is important. Seeing others lose their work due to account issues or frustration with a platform served as motivation to own my content.
If you are a content creator, I encourage you to keep platform agnostic, allowing you to easily recover if for some reason an account is suspended.
The site
Lets get to the bones of it, this site is built using Jekyll, a Ruby based tool that is able to convert markdown into a static website, for my use case it was the perfect fit.
Here are some of the things that I like about it:
Small footprint - I used Wordpress for my last site, but found it was massively bloated for my needs. Along with update issues and other administrative overheads. Jekyll being static removes a lot of this complexity.
Security - Wordpress and its plugins, due to its popularity sees a lot of vulnerabilities exploited. Again another benefit of using a static site generated by Jekyll this risk is significantly reduced.
CDN Friendly - Having static content means that the site is able to be cached, handling incredible loads at low cost across the globe.
Simple Format - Using markdown for all of the posts means that the content is pretty easy to move around. They can be used with other frameworks or easily converted to different formats if needed.
Git Friendly - I hold my entire site in Git, so backups are easy along with the history of any changes.
Jekyll also supports additional features through plugins, like RSS, Sitemaps, metadata, pagination and much more. If there isn’t a plugin to meet your needs its simple to create something.
It’s also incredibly fast at building a site and generates predicable, easy to host results. If you haven’t looked at Jekyll, you might give it a whirl!
Jekyll though needs two things to make it really work:
A way to build the site
A place to host the site
Building the site
Building a Jekyll site is easy, you just run jekyll build, but to make things even easier I utilize GitHub actions to automate the builds and deploy whenever changes happen.
Using actions is pretty simple, the documentation is also great so easy to learn, here is my workflow for this site:
Install Dependencies & Build Site - Installs all of the site dependencies / plugins etc. and then builds the site static content
Deploy - Deploys the generated site to GitHub Pages
As you can see, this runs whenever changes are pushed to the master branch, or I can manually run the workflow with workflow_dispatch.
Hosting
The last thing we need is somewhere to host the site. The beauty of this is that Jekyll just creates static HTML content, so loads of options are available. In my case, to keep costs down, I use GitHub Pages: it's totally free, comes with SSL certificates and performs well enough for most small static sites.
If you wanted something more performant, you could use Amazon S3, Digital Ocean Spaces Object Storage, or some other cloud-based solution.
Final Thoughts
I understand that this site is basic, but keeping it this way helps me focus on other things; I have no need to worry about patching or the onslaught of spam. It just works! Since it's hosted on GitHub Pages I don't need to worry about the hosting, but should the site be suspended for some reason, I can easily take my content and move it elsewhere with little hassle.
Hopefully, if you're looking to create new content and save yourself some hassle, you'll consider this option. The point (for me at least) is just to share what I think is cool and what I work on.
How to Make the Shutdown Button Unavailable with Group Policy
You can use Group Policy Editor to make the Shutdown button unavailable in the Log On to Windows dialog box that appears when you press CTRL+ALT+DELETE on the Welcome to Windows screen.
To Edit the Local Policy on a Windows 2000-Based Computer
To make the Shutdown button unavailable on a standalone Windows 2000-based computer:
Click Start, and then click Run.
In the Open box, type gpedit.msc, and then click OK.
Expand Computer Configuration, expand Windows Settings, expand Security Settings, expand Local Policies, and then click Security Options.
In the right pane, double-click Shutdown: Allow system to be shut down without having to log on.
Click Disabled, and then click OK. NOTE: If domain-level policy settings are defined, they may override this local policy setting.
Quit Group Policy Editor.
Restart the computer.
To Edit the Group Policy in a Domain
To edit a domain-wide policy to make the Shutdown button unavailable:
Start the Active Directory Users and Computers snap-in. To do this, click Start, point to Programs, point to Administrative Tools, and then click Active Directory Users and Computers.
In the console, right-click your domain, and then click Properties.
Click the Group Policy tab.
In the Group Policy Object Links box, click the group policy for which you want to apply this setting. For example, click Default Domain Policy.
Click Edit.
Expand User Configuration, expand Administrative Templates, and then click Start Menu & Taskbar.
In the right pane, double-click Disable and remove the Shut Down command.
Click Enabled, and then click OK.
Quit the Group Policy editor, and then click OK.
Troubleshooting
Group Policy changes are not immediately enforced. Group Policy background processing can take up to 5 minutes to be refreshed on domain controllers, and up to 120 minutes to be refreshed on client computers. To force background processing of Group Policy settings, use the Secedit.exe tool. To do this:
Click Start, and then click Run.
In the Open box, type cmd, and then click OK.
Type secedit /refreshpolicy user_policy, and then press ENTER.
Type secedit /refreshpolicy machine_policy, and then press ENTER.
Type exit, and then press ENTER to quit the command prompt.
How to recreate all Virtual Directories for Exchange 2007
Here you will find all the commands that will help you recreate all of the Virtual Directories for Exchange 2007. You can also use just a few of them, but never delete or create them in IIS; this has to be done from the Exchange Management Shell (not to be confused with the Windows PowerShell).
First, write down the information you currently have (for example: whether it is "Default Web Site" or "SBS Web Applications", and what InternalURL or ExternalURL is configured).
Open the Exchange Management Shell with elevated permissions and run the following commands:
You must rerun the Internet Address Management Wizard to stamp the new virtual directories with the proper external URL, and you may also need to check the certificates.
Troubleshooting for useKernelMode
Kernel-mode authentication can be disabled with:
%windir%\system32\inetsrv\appcmd.exe set config /section:system.webServer/security/authentication/windowsAuthentication /useKernelMode:false
You can verify the current setting with the following command:
%windir%\system32\inetsrv\appcmd.exe list config /section:system.webServer/security/authentication/windowsAuthentication
How to turn on automatic logon to a domain with Windows XP, Windows 7 and Server 2008
I had a requirement for some of our security camera servers to log in automatically. On a normal standalone computer this is easy, but on a domain it gets more complicated.
So how did I overcome this?
I found a very useful Microsoft KB article and adapted it to work with a domain account; see below for my adapted version.
Click Start, click Run, type regedit, and then click OK.
Locate the following registry key:
How To View and Kill Processes On Remote Windows Computers
Windows provides several methods to view processes remotely on another computer. Terminal Server is one way, or you can use the command line utility pslist from the Microsoft Sysinternals site. While both are good alternatives, Windows XP and Vista provide a built-in utility for viewing and killing processes on remote computers using the Tasklist and Taskkill commands.
Both tasklist.exe and taskkill.exe can be found in the %SYSTEMROOT%\System32 (typically C:\Windows\System32) directory.
To view processes on a remote computer, you will need to know a username and password on the computer whose processes you want to view. Once you have the user account information, the syntax for tasklist is as follows:
tasklist.exe /S SYSTEM /U USERNAME /P PASSWORD
(To view all tasklist options, type tasklist /? at the command prompt)
To execute, click on Start \ Run… and in the Run window type cmd to open a command prompt. Then type the tasklist command, substituting SYSTEM with the remote computer whose processes you want to view, and USERNAME and PASSWORD with an account/password on the remote computer.
If you are in a domain environment and have Administrator rights to the remote computer, you may not need to specify a username and password.
I won an Ender 3 3D Printer and I'm addicted
About 6 months ago I entered a competition with DrZzs (I highly recommend his channel for home automation) and Banggood to win a Creality Ender 3 3D printer.
To my surprise, a few weeks later I received an email from Banggood stating that I had won and asking me to email over my address; at first I thought that it was just a spam email.
After a few weeks of waiting the printer arrived. I couldn't believe it, I had just got a £300 printer for FREE!
On with the build!
I then unboxed and went through building the printer. I followed the instructions which were very comprehensive (other than a few confusing sentences).
It took me roughly 2 hours to build the printer.
I ran through a test print, printing out a Benchy to make sure that everything was working as expected. It looked perfect, so I ordered some more filament for future projects.
The addiction begins…
So now it starts: I spend the rest of my time glued to Thingiverse deciding what to print! Although my son made this easier by constantly bugging me to print him a mini combine harvester. This was a difficult print; the thing prints as one piece, but the combine kept fusing to the harvester so it wouldn't spin. After a lot of calibration I finally made it work, and he was delighted to get a new toy for free!
Now I have the printer running, and have printed multiple useful prints, Laptop wall mount, Google Home wall mount, Microphone stand for my large desk etc. (and more toys)
Time for the upgrades!
The Ender 3 is a brilliant little printer for the price; however, it does have some issues that can be easily resolved with a few upgrades.
Printed Upgrades
Upper Filament Guide – Unfortunately not a great upgrade, and one that I scrapped. I have put this here to recommend avoiding it (I have better options below). It is supposed to keep the filament further away from the printer and stop it wearing the extruder arm; however, I found that it made horrible squeaking noises, so that was out.
Lower Filament Guide – This brilliant little print stops the filament from rubbing against the Z screw and picking up grease, which ruins prints. I recommend the linked guide as it doesn’t curl over the filament; again, I found any that curled over the top would rub and cause a horrible squeak.
Fan Cover – I found that little bits of filament would drop from the hot end into the open fan on the Ender 3 case; this covers it up and also labels the directions for moving the bed up and down.
Hero Me Gen3 remix – Parts don’t get cooled fast enough with the standard cooler, so I printed this one; it focuses the air perfectly under the nozzle for really good cooling.
This version is for the newer Ender 3s as they have smaller screws than the older models.
Extruder Knob – This print was one I didn’t know I needed until I printed it. It allows for easier retraction and extrusion: twisting the knob turns the gear to easily feed filament.
If you plan to upgrade to the MK8 Dual Gear Extruder Arm this part won’t fit due to the larger gears.
Side Spool Holder – I had issues with the filament dragging and getting stuck due to the sharp angle it was pulling at, which causes unnecessary wear on the extruder arm and gears. This side spool holder moves the spool to the side of the printer, next to the extruder arm, so much less force is required when extruding and there is less rubbing on the extruder arm.
Spool Holder – This spool holder uses bearings to allow the filament to roll around much more easily, reducing the drag on the extruder and in turn the wear on the stepper motor.
Purchased Upgrades
SKR 1.3 – A great upgrade to silence those stepper motors; not only that, the SKR has a 32-bit chip, which means more space for new features and faster G-code processing. The TMC2209 stepper motor drivers really do make a massive difference to the noise of the printer.
Now the only annoying thing is the fans… still on my to-do list.
Capricorn Bowden Tube – This Bowden tube appears to be much better than the one shipped with the Ender 3; it is much more slick to the touch and a little more sturdy, so the filament passes through it with ease. The tighter internal diameter also means the filament has little room to flex and cause retraction issues. I found that the shipped Bowden tube had also melted at the end and had filament stuck to it, which leads me to believe it wasn’t installed very well at the factory.
MK8 Dual Gear Extruder Arm – My extruder arm broke within a few months of use; I didn’t notice until I was getting bad under-extrusion and seeing slippage on the extruder gear. I took the arm apart to clean the gear and found a tiny crack near the screw for the idler wheel; this was enough to stop the arm working at all due to the flex it added. This can be combated by printing a new extruder arm, but I decided to upgrade to a dual gear option, which results in:
Even less potential slippage
Stronger spring for the extruder arm
Metal body stops wear from filament rub
PEI Magnetic Bed – This is an excellent upgrade from the stock bed, makes prints super smooth on the bed and sticks really well. I haven’t had a single failed print due to adhesion on this surface, also you can easily take stuff off once cooled without much effort or by flexing the magnetic plate for larger prints.
Conclusion
All in, I think I have spent no more than £100 on upgrades and have a brilliant printer; the prints that I get out now are near perfect. Bed adhesion is excellent with the stock bed, however I ripped mine by overheating a print during testing and am awaiting a new PEI magnetic build plate, which I think will be my last upgrade for a little while!
It would also be great to hear about anyone else’s experience with this printer.
Install, Configure and add a repository with Git on CentOS 7
Git is an open source version control system (VCS). It’s commonly used for source code management by developers to allow them to track changes to code bases throughout the product lifecycle, with sites like GitHub offering a social coding experience and multiple popular projects utilising its great functionality and availability for open source sharing.
First off, let’s make sure that CentOS is up to date:
yum update -y
Then we can install Git, it couldn’t be simpler, just run the below command:
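Git is available in the CentOS base repositories, so a single yum command is all that is needed, and git --version confirms the install afterwards:
yum install -y git
git --version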
In some cases you may already have a repository that you would like to clone and then change the existing code; that is simple to do too. Get the URL for the clone from GitHub or any other Git hosting service and type the following:
git clone <URL TO REPOSITORY>
This will then download the contents of the repository onto your CentOS server.
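If this is the first time Git is being used on the server, it is also worth setting a commit identity before changing any code; a quick sketch, where the name and email are placeholders:
git config --global user.name "Your Name"
git config --global user.email "you@example.com"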
Hopefully this tutorial has been useful for you, please feel free to ask me any questions that you may have, or if you would like a more in depth article for further functions of Git.
Install FreeRADIUS on CentOS 7 with daloRADIUS for management – Updated
I have recently purchased a load of Ubiquiti UniFi equipment; as part of this I have the UniFi USG, which, in order to deploy a user VPN, requires a RADIUS server for user authentication. This article will run through how to install and set this up.
I will be using FreeRADIUS as it is the most commonly used and supports most common authentication protocols.
Disable SELinux by editing /etc/sysconfig/selinux and setting:
SELINUX=disabled
First we need to update our CentOS server and install the required applications:
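The exact package list is not shown above; as an assumption, a typical set for FreeRADIUS with a MariaDB backend (plus EPEL for the daloRADIUS dependencies) would look something like this:
yum -y update
yum -y install epel-release
yum -y install freeradius freeradius-utils freeradius-mysql mariadb-server mariadb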
systemctl restart nginx
Access the web interface:
http://FQDN_IP_OF_SERVER/daloradius/login.php
Default Login: User: Administrator Pass: radius
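Before pointing the UniFi USG at the new server, it is worth confirming that RADIUS answers locally. A quick sketch using radtest from freeradius-utils, where the username and password are placeholders and testing123 is the default shared secret for localhost in clients.conf:
radtest testuser testpassword localhost 0 testing123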
This is a short simple guide to assist users with installing the Ubiquiti UniFi Controller required for all UniFi devices on a CentOS 7 Server.
First we need to update our CentOS server and disable SELinux:
yum -y update
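SELinux can be disabled from the shell as well as by editing the file; a minimal sketch, where setenforce turns it off immediately and the sed makes the change permanent:
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config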
rm -rf ~/UniFi.unix.zip
systemctl reboot
Once the server is back online you should be able to access the controller via the URL: https://FQDN_or_IP:8443. Follow the simple wizard to complete the setup of your controller. I would also recommend you register with Ubiquiti when you set up the controller, as this will allow you to manage it remotely on a mobile device.
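If firewalld is enabled on the server, the controller ports will also need opening; a minimal sketch covering the web UI (8443) and device inform (8080) ports:
firewall-cmd --permanent --add-port=8443/tcp
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --reload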
Creating blog posts for my website, I sometimes find that I want to create multiple articles as part of a series, usually because I have done some research and got to a stage that makes sense to have an article to itself, something like my recent post on Proxmox Template with Cloud Image and Cloud Init.
Rather than having to manually link to other articles related to the series, I thought it would be better to have a section at the top that lists all articles related to the series.
The metadata
For this to work each post that is required to be part of a series should contain some metadata with a name for that series. For example:
---
{% endif %}
This works as follows:
Check that the page has the series metadata
Get all posts that have series that match. Sort these in ascending order.
Display a card with:
The series name
How many parts there are in the series
Clickable link to the other posts in the series
Add to layout
We have everything we need to get this working, but we need to add it to the layout for our posts. Edit the post.html file and add the include as follows:
{% include post-series.html %}
You can add this anywhere you would like it to appear on your post. For my website, I have it appear after the meta but before the article begins, as per the screenshot below:
Final Thoughts
We now have a great new feature on our blog, super easy to add to the website. This is one of the reasons I love using Jekyll for my website: so much is possible with very little effort.
There are many additional features that could be added with this small snippet, for example you could create a page that shows all of the series that you have or you could add the series to a menu rather than just the top of the post page.
Killing a Windows service that hangs on "stopping"
It sometimes happens (and it’s not a good sign most of the time): you’d like to stop a Windows Service, and when you issue the stop command through the SCM (Service Control Manager) or by using the ServiceProcess classes in the .NET Framework or by other means (net stop, Win32 API), the service remains in the state of stopping and never reaches the stopped phase. It’s pretty simple to simulate this behaviour by creating a Windows Service in C# (or any .NET language whatsoever) and adding an infinite loop in the Stop method. The only way to stop the service is by killing the process then. However, sometimes it’s not clear what the process name or ID is (e.g. when you’re running a service hosting application that can cope with multiple instances such as SQL Server Notification Services). The way to do it is as follows:
Go to the command-prompt and query the service (e.g. the SMTP service) by using sc:
sc queryex SMTPSvc
This will give you the following information:
or something like this (the state will mention stopping).
Over here you can find the process identifier (PID), so it’s pretty easy to kill the associated process either by using the task manager or by using taskkill:
taskkill /PID 388 /F
where the /F flag is needed to force the process kill (first try without the flag).
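As an alternative way to find the PID, tasklist can filter on the service name; a quick sketch using the same SMTP service:
tasklist /FI "SERVICES eq SMTPSvc"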
At the end of 2022 I came into some inheritance, with the massive energy price increase in the UK I decided to spend this money on a solar and battery installation on my house.
With the decreasing price of solar systems and the increase in energy prices, the payback period is getting shorter, making the option much more reasonable.
Choosing a company for the installation
I spent a long time looking at different solar companies in my area; the majority had extremely long waiting times, around six months or more, before they could install. I then found YouTube recommendations for a local company that had brilliant Trustpilot reviews; they had been around for quite some time and their website was very informative.
Most other companies I saw looked like single-person outfits, which for me wasn’t really an ideal option as aftercare may become an issue, especially if they retire or stop doing solar. In hindsight this shouldn’t have been a deciding factor.
At this point I had decided that I would go with First4Solar; they were the clear winners on price and the initial delivery…
The quoting process
After getting in touch with First4Solar they asked some initial questions:
What our yearly power consumption was
The orientation of our roof
Where we would like the panels
If we wanted battery storage and how much
What access was like to the roofs
I told them all of this, additionally asking for automated whole-house failover in case of a grid failure and as many panels as possible on the roof, along with around 11kWh of battery storage.
So far so good.
Until I got the first quote, which didn’t match what I had asked for. I assumed that this was just a miscommunication; they had provided a quote for a standard package:
16x 415W Jinko N-Type Panels
GivEnergy 9.5kWh Battery with 100% DOD
3.6kW Charge / Discharge
Can be used with off-peak tariffs
Emergency Power Supply Compatible (manual)
The issue I had with this was that I wanted as many panels as possible on the roof, which this didn’t deliver. When adding the extra panels, the GivEnergy inverter wasn’t powerful enough; it would basically be at full capacity based on the above, and in my eyes it’s always better to have a bigger inverter to support the addition of more panels or other technology in the future. There was no room to grow here, which again wasn’t what I requested.
Part of their reasoning for using the GivEnergy inverter was that it’s under the 3.68kW limit, which means no DNO G99 application is required, speeding up the process.
I then asked for a custom quote that didn’t follow the standard package, and they quoted for the following:
24x Jinko Tiger 415W N-Type Black Framed Mono
SolaX G4 7.5kW Hybrid Inverter
SolaX Triple Power HV 5.8kWh (Master) V2
SolaX Triple Power HV 5.8kWh (Slave) V2
EPS - Manual Changeover
Again you can see they didn’t add the automated failover; however, they did say that if the power draw of the house was higher than the inverter could handle, it would stop until the load dropped. I agreed this made sense, as I could shut off non-essential items, but it wasn’t really what I wanted as I have servers to keep running.
In the end this second quote is what I went with after they persuaded me that it was best; I guessed they were the experts, so I went with their recommendation.
Paying the deposit
Now we were at the stage of paying the deposit. All seemed good, and even the director called me to explain the process and to get my deposit paid ASAP so as not to miss my install date. So I called to try to pay by card, but “the card machine wasn’t working” and they recommended paying by BACS anyway. At the time I thought nothing of it, so I transferred the money via BACS.
This was the first mistake that I made, however I was totally unaware at the time.
The second mistake was believing that, once I had paid, I would automatically be registered with HEIS so my deposit would be secure and covered by the HEIS insurance. This turned out not to be the case; I should have been contacted by HEIS.
Radio Silence
After paying the deposit (7th Dec 2022) I had total radio silence; if I contacted them, the person I needed was “on another call” or “off sick” or one of many other excuses.
I finally managed to speak with someone who gave me an install date of 4th & 5th May 2023, which they would later call and re-arrange for the 18th & 19th May 2023.
I then received a further call to re-arrange again, but told them if they moved the install date I would cancel the order and go elsewhere. After this I never heard from them again.
There was no mention of the DNO G98/99 application and the install was about two weeks off, so again I tried contacting them, around ten times, which eventually got me through to someone who said it was in hand and they were waiting for the DNO to get back to them. I have contacted the DNO to see if they got the application and am currently awaiting a call back, although I suspect that an application was never sent.
Concern starts to set in
Other family members were also having issues, with dates being set back and different excuses every time. After a while the bad reviews started to pour into Trustpilot; at this point I knew something was wrong and began trying to get a refund, but all the phones had stopped working and were going to automated systems, even though, surprisingly, the sales line was still working and deposits were still being taken from customers!
I never got a refund and never spoke to another person at F4S. I did find a Facebook group with hundreds of people who were having the exact same issues.
Is this fraud?
The company took my deposit at a point where they must have known they were trading while insolvent, but continued to take people’s money; I have heard from other customers that their credit cards were used to pay other suppliers of F4S.
I raised this with HEIS, where I found out I was not registered, so my £3.5k was not covered by their insurance and I would need to take it up with the bank. The bank also would not touch this as I had made a BACS payment (an expensive mistake to make).
The Takeover
So now I was £3.5k down and potentially no longer able to afford a solar install, but there was some light at the end of the tunnel.
A company called Contact Solar had purchased the customer list and agreed to do what they could to help the customers that F4S had left in limbo and without their deposits. This was an absolutely brilliant thing for them to do for all these customers, however I had concerns that with reports of 1500+ customers they would struggle to keep up with the installs and I could be waiting months again, this really did unsettle me.
That said, I had no evidence that this was the case and Contact Solar provided a very competitive price and a good proposal based on the money left on the contract.
As you can see, both offered quite a good deal, with reputable equipment that integrates well with my Home Assistant and Octopus Energy.
The main difference I found was the Depth of Discharge (DoD): on the SunSynk batteries it was 90%, but on the Greenlinx batteries it’s 100%; plus the batteries are larger, and the addition of the 15-year maintenance from ESE just made the deal a little better for me.
I also found ESE seemed to have more time to answer my questions; they would always call back when they said and were very helpful. With Contact Solar it was all via email, which had slow responses, and there wasn’t much of a personal touch that made you feel like they wanted your business.
So, as you can probably tell, my business went to ESE Group.
Sales Aftercare
On 1st June 2023 I was at the stage of paying a deposit, getting DNO Approval, scheduling the survey and installation dates.
After agreeing to continue I was passed over to make payment, with ESE recommending I pay by credit card for the added protection (I would have insisted on this anyway, but it was nice that they mentioned the added protection it offers and that it was their recommended payment method).
Once I had paid I was asked when I would like the install; I asked for ASAP due to being so delayed by my previous provider. The install date provided was the 22nd of June! Not even four weeks and they could have the installation completed; crazy to think, given how long I had been waiting, that they could do it so quickly.
I agreed and was told I would get a call the following day to arrange a survey. Sure enough, at 09:30 I had a call to say that someone was in the area that day doing another survey and could come and do mine afterwards. The surveyor came to check everything and said that it would be either him or one of the other members of the team that would carry out the install.
The Install
I have now had my install completed, the team came and had the inverter, batteries and 1st string of panels installed on the first day, on the second day they got the final string installed and everything was done. The job was very tidy and looks great.
I did have one issue where the battery kept draining itself and power usage was jumping all over the place, but on investigation I found that the CT clamp was on upside down; an easy mistake to make and easy enough for me to fix.
The electrician got the dongle hooked up to my WiFi and helped set up the app on my phone. I did, however, have to contact ESE as by default they don’t leave the ESS Greenlinx battery dongles in; this means it’s not possible to individually monitor the batteries, and should this be required in the future I would be stuck if ESE went into administration. (I’m now waiting for these to be shipped to me.)
Aftercare
Generally ESE have been great, the install was quick and efficient.
However, I do believe I was slightly misled by them. I went with them as the contract stated I would be able to use any energy provider to get SEG payments via the Flexi-Orb certificate, which they said is the same as MCS but an alternative.
The way this was worded led me to believe I would be fine with ANY energy supplier; that’s not the case. Currently only five energy suppliers accept Flexi-Orb, and Octopus is not one of these. This is partially my fault as I didn’t double check or specifically ask if Octopus would accept it, but I believe that the contract shouldn’t state SEG payments can be from any supplier I choose when in fact what they meant was any supplier that accepts Flexi-Orb certificates.
The results
Here you can see my solar, battery usage along with my import / export from the grid.
In the first full month after the install I spent £6.95 on electricity, and that was with some quite bad weather days. I’m still not on an export tariff, so I’m unsure how much this will yield, but judging by the amount I have generated it should be a nice bit of money which will hopefully cover the standing charges.
Things to check / be aware of
To ensure that you don’t get stuck in the same situation, there are a few things I would highly recommend you take into consideration:
Pay your deposit by Credit Card.
If they insist on a bank transfer, REFUSE!
They may give excuses like the card machine isn’t working; if they don’t offer to let you pay another day, walk away! (I can’t stress this enough!)
Bank transfer (BACS) payments are not protected by your bank; only credit card payments are.
Not all solar companies are MCS certified; some issue Flexi-Orb certificates instead.
Not all energy suppliers accept Flexi-Orb; at the time of writing only five of the major suppliers accept them:
E.ON
Scottish Power
British Gas
SSE
OVO Energy
It is likely that once Flexi-Orb is accredited, all energy suppliers will accept it, but at the time of writing this is something to be aware of.
HEIS will email you within 48 hours to confirm registration and your cover
If you don’t get an email from HEIS, contact them immediately to ensure you have been registered; failure to do so will mean you are not protected by them.
Honestly, from what I have heard, HEIS protection isn’t worth the paper it’s written on.
You are only covered by HEIS for 120 days; if your install is going to be after this time, you won’t be covered.
If your install gets delayed and will breach the 120 days, contact HEIS and ask what can be done to ensure you are still protected.
You will need a DNO application approved before installation:
DNO G98 - For installs under 16A per phase, which is the equivalent of 3.68kWp for a single-phase supply
DNO G99 - For installs greater than 16A per phase
Ensure the electrician is qualified, ideally registered with the NICEIC.
Without a qualified electrician doing the install you will be unable to get a valid certificate.
Ensure you are provided with the necessary electrical certificates so your install is legal.
Without these you would need to have an electrician carry out an EICR.
Don’t rely on your installer doing things correctly; check everything.
Final Thoughts
My overall solar experience has been stressful to say the least. I’m glad that it’s finally getting sorted, but it’s been a horrible situation for me and the other 1,000+ First 4 Solar customers who have been conned out of money, likely never to see it again!
If you are looking for solar, I have been very impressed with ESE Group so far; obviously I will update this based on the install, but my Dad had his completed by them and they did a great job.
Were you impacted by this nightmare? Let me know in the comments how your install went.
Managing Application Settings in PHP
There are multiple ways to save application settings/configurations in PHP. You can save them in INI, XML or PHP files as well as a database table. I prefer a combination of the latter two; saving the database connection details in a PHP file and the rest in a database table.
The advantage of using this approach over the others will be apparent when developing downloadable scripts, as updates will not need to modify a configuration file of an already setup script.
To start, create a table containing three fields: an auto-increment ID, a setting name and a setting value:
CREATE TABLE IF NOT EXISTS `settings` (
`setting_id` int(11) NOT NULL AUTO_INCREMENT,
`setting` varchar(50) NOT NULL,
`value` varchar(500) NOT NULL,
$mail->Host = $setting['email_server'];
$mail->Port = $setting['email_port'];
?>
This code does not filter the values sent to SaveSetting(). To prevent SQL injection and XSS attacks please make sure you check the values before saving them and also after reading them using GetSetting().
OK, so today I had a customer come to me saying that when they map a network drive in NT4, the user details don’t get remembered when the PC is rebooted.
Here is a simple solution to the issue we have been having:
net use I: \\SERVERNAME\SHARENAME /User:DOMAIN\username password
Run this at startup or as a logon script and the issue will be no more.
Mikrotik OpenVPN Server with Linux Client
I spent quite some time trying to get the OpenVPN server working on the Mikrotik router with a Linux client. It caused some pain and I didn’t want others to go through that, so I have written this guide, taking you from certificate creation all the way to VPN connectivity.
For this tutorial I will be using SSH to my Mikrotik (you can use a WinBox terminal); I have chosen not to use the WinBox GUI for the configuration as it’s easier to deploy this way.
Certificate Creation
First we need to create our certificate templates on our Mikrotik.
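As a rough sketch of what certificate templates look like on RouterOS (the names, key size and validity below are illustrative assumptions rather than the exact commands from the original post, and each certificate still needs signing afterwards with /certificate sign):
/certificate add name=ca-template common-name=ca key-size=2048 days-valid=3650 key-usage=key-cert-sign,crl-sign
/certificate add name=server-template common-name=server key-size=2048 days-valid=3650 key-usage=digital-signature,key-encipherment,tls-server
/certificate add name=client-template common-name=client key-size=2048 days-valid=3650 key-usage=tls-client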
Watch the log closely; you will see errors in here which will help with troubleshooting any issues.
Troubleshooting
Compression: At the time of writing compression is not supported by Mikrotik, please make sure no LZO lines are present in the configuration.
Certificates: Check that your certificate and key were imported properly and that your client is configured to trust the self-signed certificate or the CA you used.
Security
There are some security improvements that could be made to this configuration, however this is to get you up and running.
Limit the port access to a specific source IP address so that only you can connect
Configure better passwords; the ones shown are examples only
Consider using a separate bridge so that the VPN has its own filters and rules
Change the permissions on the firewall-auth.txt and home.up files to 600 (see the example below)
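For that last point, on the Linux client this is a single chmod; a minimal sketch, assuming the files sit alongside the OpenVPN client config in /etc/openvpn:
chmod 600 /etc/openvpn/firewall-auth.txt /etc/openvpn/home.up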
Hopefully this will be helpful to someone out there.
If you have any issues add a comment below and I will get back to you ASAP.
Most modern CPUs, such as Intel’s Nehalem and AMD’s veteran Opteron, are NUMA architectures. NUMA stands for Non-Uniform Memory Access. Each CPU gets assigned its own “local” memory; CPU and memory together form a NUMA node (as shown in the diagram below).
Memory access time can differ due to the memory location relative to a processor, because a CPU can access its own memory faster than remote memory, creating higher latency when remote memory is required.
In short, NUMA links multiple small high-performing nodes together inside a single server.
What is vNUMA
vNUMA stands for Virtual Non-Uniform Memory Access. ESX has been NUMA-aware since 2002, with VMware ESX 1.5 introducing memory management features to improve locality on NUMA hardware. This works very well for placing VMs on local memory for the resources being used by that VM, particularly for VMs that are smaller than the NUMA node. Large VMs, however, will start to see performance issues as they breach a single node; these VMs will require some additional help with resource scheduling.
When enabled, vNUMA exposes the physical NUMA topology to the VM operating system. This provides performance improvements within the VM by allowing the OS and programs to best utilise the NUMA optimisations. VMs will then benefit from NUMA even if the VM itself is larger than the physical NUMA nodes.
An administrator can enable / disable vNUMA on a VM using advanced vNUMA Controls
If a VM has more than eight vCPUs, vNUMA is auto enabled
If CPU Hot Add is enabled, vNUMA is Disabled
The operating system must be NUMA Aware
How to determine the size of a NUMA node
In most cases the easiest way to determine a NUMA node’s boundaries is by dividing the amount of physical RAM by the number of logical processors (cores); this is a very loose guideline. Further information on determining the specific NUMA node setup can be found in the documentation for your particular hardware.
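On an ESXi host the figures for that calculation can be read straight from the shell, which makes the estimate easy to sanity-check; for example, from an SSH session on the host, esxcli reports the physical memory, NUMA node count and core counts:
esxcli hardware memory get
esxcli hardware cpu global get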
A VM will initially have its vNUMA topology built when it is powered on; each time it reboots this will be reapplied depending on the host it sits on. In the case of a vMotion, the vNUMA topology will stay the same until the VM is rebooted, at which point it will be re-evaluated. This is another great argument for making sure all hardware in a cluster is the same, as it will avoid NUMA mismatches, which could cause severe performance issues.
Check if a VM is using resources from another NUMA node
If you start to see performance issues with VMs then I would recommend running this test to make sure that the VM isn’t using resources from other nodes.
SSH to the ESXi host that the VM resides on
Type esxtop and press enter
Press “m”
Press “f”
Press “G” until a * shows next to NUMA STATS
Look at the column N%L; this shows the percentage of the VM’s memory that is local. If it is lower than 100, the VM is using resources from another NUMA node, as in the example shown below:
As you can see, we had multiple VMs using different NUMA nodes. These VMs were showing slower performance than expected; once we sized them correctly they stopped spanning NUMA nodes and this resolved our issues.
Conclusion
NUMA plays a vital part in understanding performance within virtual environments. VMware ESXi 5.0 and above have extended capabilities for VMs with intelligent NUMA scheduling and improved VM-level optimisation with vNUMA. It is important to understand how both NUMA and vNUMA work when sizing any virtual machines, as this can have a detrimental effect on your environment’s performance.
This post is licensed under CC BY 4.0 by the author.
Most modern CPU’s, Intel new Nehalem’s and AMD’s veteran Opteron are NUMA architectures. NUMA stands for Non-Uniform Memory Access. Each CPU get assigned its own “local” memory, CPU and memory together form a NUMA node (as shown in the diagram below).
Memory access time can differ due to the memory location relative to a processor, because a CPU can access it own memory faster than remote memory thus creating higher latency if remote memory is required.
In short NUMA links multiple small high performing nodes together inside a single server.
What is vNUMA
vNUMA stands for Virtual Non-Uniform Memory Access, ESX has been NUMA-aware singe 2002, with VMware ESX 1.5 Introducing memory management features to improve locality on NUMA hardware. This works very well for placing VMs on local memory for resources being used by that VM, particularly for VMs that are smaller than the NUMA node. Large VMs, however, will start to see performance issues as they breach a single node, these VMs will require some additional help with resource scheduling.
When enabled, vNUMA exposes the physical NUMA topology to the VM's operating system. This provides performance improvements within the VM by allowing the OS and applications to make best use of NUMA optimisations. VMs then benefit from NUMA even if the VM itself is larger than the physical NUMA nodes.
An administrator can enable or disable vNUMA on a VM using the advanced vNUMA controls
If a VM has more than eight vCPUs, vNUMA is enabled automatically
If CPU Hot Add is enabled, vNUMA is disabled
The operating system must be NUMA-aware
How to determine the size of a NUMA node
In most cases the easiest way to estimate a NUMA node's boundaries is to divide the total physical RAM by the number of physical CPU sockets (each socket is usually one NUMA node); this is only a very loose guideline. Further information on determining the specific NUMA node setup can be found here:
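On an ESXi host you can also read the physical layout directly. As a quick check, assuming SSH access to the host, the following esxcli commands report the NUMA node count and the socket/core counts:
esxcli hardware memory get    # reports Physical Memory and NUMA Node Count
esxcli hardware cpu global get    # reports CPU Packages, CPU Cores and CPU Threads
Dividing the physical memory by the reported NUMA node count gives the approximate memory per node.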
A VM initially has its vNUMA topology built when it is first powered on, and this is re-evaluated on each reboot based on the host it is running on. In the case of a vMotion, the vNUMA topology stays the same until the VM is rebooted, at which point it is re-evaluated. This is another great argument for keeping all hardware in a cluster identical, as it avoids NUMA mismatches that could cause severe performance issues.
Check if a VM is using resources from another NUMA node
If you start to see performance issues with VMs, I would recommend running this test to make sure that the VM isn't using resources from other NUMA nodes.
SSH to the ESXi host that the VM resides on
Type esxtop and press enter
Press “m”
Press “f”
Press “G” until a * shows next to NUMA STATS
Look at the N%L column; this shows the percentage of the VM's memory that is local to its NUMA home node. If it is lower than 100, the VM is using memory from another NUMA node, as in the example shown below:
As you can see, we have multiple VMs using different NUMA nodes. These VMs were showing slower performance than expected; once we sized them correctly they stopped spanning NUMA nodes and this resolved our issues.
Conclusion
NUMA plays a vital part in understanding performance within virtual environments. VMware ESXi 5.0 and above have extended capabilities for VMs, with intelligent NUMA scheduling and improved VM-level optimisation through vNUMA. It is important to understand how both NUMA and vNUMA work when sizing any virtual machine, as getting this wrong can have a detrimental effect on your environment's performance.
OK, so this one had me stumped for a long time while trying to figure out how to get scanners to authenticate to Office 365. In the end I found out that the scanner I was using wasn't supported in this format, so I found this workaround; hope it helps you!
You basically need to create an SMTP relay on a local server or computer to forward your scans to, then set the SMTP relay up as described below; the relay then handles the authentication part for you.
SMTP relay settings for Office 365
To configure an SMTP relay in Office 365, you need the following:
A user who has an Exchange Online mailbox
SMTP configured to use port 587
Transport Layer Security (TLS) encryption enabled
The mailbox server name
To obtain SMTP settings information, follow these steps:
Sign in to Outlook Web App.
Click Options, and then click See All Options.
Click Account, click My Account, and then in the Account Information area, click Settings for POP, IMAP, and SMTP access. Note the SMTP settings information that is displayed on this page.
Configure Internet Information Services (IIS)
To configure Internet Information Services (IIS) so that your LOB programs can use the SMTP relay, follow these steps:
Create a user who has an Exchange Online mailbox. To do this, use one of the following methods:
Create the user in Active Directory Domain Services, run directory synchronization, and then activate the user by using an Exchange Online license. Note: the user must not have an on-premises mailbox.
Create the user by using the Office 365 portal or by using Microsoft Online Services PowerShell Module, and then assign the user an Exchange Online license.
Configure the IIS SMTP relay server. To do this, follow these steps:
a. Install IIS on an internal server. During the installation, select the option to install the SMTP components.
b. In Internet Information Services (IIS) Manager, expand the Default SMTP Virtual Server, and then click Domains.
c. Right-click Domains, click New, click Domain, and then click Remote.
d. In the Name box, type *.com, and then click Finish.
Double-click the domain that you just created.
Click to select the Allow incoming mail to be relayed to this domain check box.
In the Route domain area, click Forward all mail to smart host, and then in the box, type the mailbox server name.
Click Outbound Security, and then configure the following settings:
a. Click Basic Authentication.
b. In the User name box, type the user name of the Office 365 mailbox user.
c. In the Password box, type the password of the Office 365 mailbox user.
d. Click to select the TLS encryption check box, and then click OK.
Right-click the Default SMTP Virtual Server node, and then click Properties.
On the Delivery tab, click Outbound Connections.
In the TCP Port box, type 587, and then click OK.
Click Outbound Security, and then configure the following settings:
a. Click Basic Authentication.
b. In the User name box, type the user name of the Office 365 mailbox user.
c. In the Password box, type the password of the Office 365 mailbox user.
d. Click to select the TLS encryption check box, and then click OK.
On the Access tab, click Authentication, click to select the Anonymous access check box, and then click OK.
On the Relay tab, select Only the list below, type the IP addresses of the client computers that will be sending the email messages, and then click OK.
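Once the relay is in place, it is worth sending a test message through it from one of the allowed client IPs before pointing the scanner at it. A minimal PowerShell sketch, assuming the relay is reachable as relay01.yourdomain.local and the addresses are placeholders:
Send-MailMessage -SmtpServer "relay01.yourdomain.local" -Port 25 -From "scanner@yourdomain.com" -To "you@yourdomain.com" -Subject "SMTP relay test" -Body "Relayed via the local IIS SMTP server to Office 365"
If the message arrives, the scanner can be configured to use the relay server on port 25 with no authentication.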
Wait until the upgrade has completed, then enter the following to reboot the host:
reboot
The host will reboot and you will then be able to connect with your client; you will be prompted to download the latest client and then you will be away!
I have hit this a few times when coding: the error "PHP Notice: Undefined Index". I found the solution below, which is an extremely simple fix!
How to Fix
One simple answer – isset() !
The isset() function in PHP determines whether a variable is set and is not NULL. It returns a Boolean value: true if the variable is set, and false if the variable is NULL or not set. More details on this function can be found in the PHP Manual.
Example
Let us consider an example. Below is the HTML code for a comment form in a blog.
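The original form markup is not included in this copy of the post, so the snippet below is only an illustrative sketch; the file name and field names are assumptions. It shows a comment form posting back to itself and an isset() check on the submit field, which is exactly where the undefined index notice would otherwise appear:
<!-- comment.php: a minimal comment form posting back to itself -->
<form action="comment.php" method="post">
  <input type="text" name="name" placeholder="Your name">
  <textarea name="comment"></textarea>
  <input type="submit" name="submit" value="Post comment">
</form>
<?php
// Without the isset() check, loading the page before submitting the form
// raises "PHP Notice: Undefined index: submit".
if (isset($_POST['submit'])) {
    $name    = $_POST['name'];
    $comment = $_POST['comment'];
    // ... validate and save the comment here ...
}
?>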
All these URLs go to the same page, but each one performs a different task. So when I try to access the page through the first URL, it gives the 'Undefined index' notice, since the 'action' parameter is not set.
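A small sketch of the isset() fix for that case (the 'view' default is an assumption, not from the original post):
<?php
// Fall back to a safe default when the 'action' parameter is absent.
$action = isset($_GET['action']) ? $_GET['action'] : 'view';
?>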
We can fix this using the isset() function too, but in this instance we can also just ignore it by hiding the notices like this: error_reporting(E_ALL ^ E_NOTICE);
You can also turn off error reporting in your php.ini file or .htaccess file, but that is not considered a wise move if you are still in the testing stage.
This is another simple solution in PHP for a common complex problem. Hope it is useful.
This is an example only; my form has no security hardening. Use at your own risk.
Update: moved to the latest Ubuntu image and added enabling of the qemu-guest-agent service.
Using cloud images and cloud-init with Proxmox is the quickest, most efficient way to deploy servers at this time. Cloud images are small, cloud-certified images that have cloud-init pre-installed and are ready to accept configuration.
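The download command itself is not shown in this capture; as a sketch, pulling a current Ubuntu LTS cloud image onto the Proxmox host might look like this (adjust the release name and URL to whichever image you actually want):
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img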
This will download the image onto your Proxmox server ready for use.
Install packages
The qemu-guest-agent is not installed in the cloud images, so we need a way to inject it into our image file. This can be done with a great tool called virt-customize, which is installed with the libguestfs-tools package. libguestfs is a set of tools for accessing and modifying virtual machine (VM) disk images.
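A minimal sketch of that step on the Proxmox host, assuming the image downloaded above (the image filename is an assumption):
apt-get install -y libguestfs-tools
virt-customize -a jammy-server-cloudimg-amd64.img --install qemu-guest-agent
virt-customize mounts the image, installs the package inside it and writes the changes back, so every VM cloned from this image will have the agent available.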
Send on Behalf and Send As are similar in fashion. Send on Behalf allows a user to send as another user while showing the recipient that the message was sent by a specific user on behalf of another user. In other words, the recipient is aware of who actually initiated the message, regardless of who it was sent on behalf of. This may not be what you are looking to accomplish; in many cases you may want to send as another person without the recipient knowing who initiated the message. Of course, a possible downside is that if the recipient replies, the reply may go to a user who did not initiate the message, which can cause confusion depending on the circumstances. Send As can be useful when you are sending as a mail-enabled distribution group: if someone replies, the reply goes to that distribution group and ultimately to every member of it. This article explains how to use both methods.
Send on Behalf
There are three ways to configure Send on Behalf. The first method is Outlook Delegates, which allows a user to grant another user the ability to Send on Behalf of their mailbox. The second method is having an Exchange Administrator use the Exchange Management Shell (EMS) to grant a specific user Send on Behalf of another user. The third and final method is using the Exchange Management Console (EMC).
Outlook Delegates
There are two major steps to using Outlook Delegates. The first is to select the user and add them as a delegate. You must then share your mailbox with that user.
Go to Tools and choose Options
Go to the Delegates Tab and click Add
Select the user you wish to grant access to, click Add, and then click OK
There are more options you can choose from once you select OK after adding that user. Nothing in the next window is necessary to grant send on behalf.
When back at the main Outlook window, in the Folder List, choose your mailbox at the root level. This will appear as Mailbox – Full Name
Right-click and choose Change Sharing Permissions
Click the Add button
Select the user you wish to grant access to, click Add, and then click OK
In the permissions section, you must grant the user, at minimum, Non-editing Author.
Exchange Management Shell (EMS)
This is a fairly simple process to complete. It consists of running a single command and you are finished. The command is as follows:
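The command itself is missing from this capture of the post. A typical EMS command for granting Send on Behalf looks like this (the mailbox and user names are placeholders):
Set-Mailbox -Identity "UserB" -GrantSendOnBehalfTo "UserA"
The steps that follow use the third method, the Exchange Management Console, to achieve the same result.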
Choose the mailbox and choose Properties in Action Pane
Go to the Mail Flow Settings Tab and choose Delivery Options
Click the Add button
Select the user you wish to grant access to, click Add, and then click OK
Send As
As of Exchange 2007 SP1, there are two ways to configure Send As. The first method is having an Exchange Administrator use the Exchange Management Shell (EMS) to grant a specific user Send As rights for another user. The second and final method (added in SP1) is using the Exchange Management Console (EMC).
Exchange Management Shell (EMS)
The first method is to grant a specific user the ability to Send As another user. It consists of running a single command and you are finished. The command is as follows:
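Again, the command itself is missing from this capture. A typical EMS command for granting Send As looks like this (names are placeholders; exact parameters can vary between Exchange versions):
Add-ADPermission -Identity "UserB" -User "UserA" -ExtendedRights "Send As"
The EMC steps below achieve the same thing through the GUI.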
Choose the mailbox and choose Manage Send As Permissions in Action Pane
Select the user you wish to grant access to, click Add, and then click OK
Miscellaneous Information
No “From:” Button
In order for a user to Send on Behalf of or Send As another user, their Outlook profile must be configured to show the From: field, which Outlook does not display by default.
Replies
If you are sending as another user, the recipient might reply. By default, Outlook sets the reply address to whichever address is configured as the sending address; so if I am user A sending on behalf of user B, the reply address will be set to user B. If you are the user initiating the message, you can manually configure the reply address in your Outlook profile.
Conflicting Methods
If you are configuring Send on Behalf permissions on the Exchange Server, ensure that the user is not trying to use Outlook Delegates at the same time. Recently, at a client, I was given the task of configuring both Send As and Send on Behalf. As I was configuring Send As on the server, I found out that the client was attempting to use Outlook Delegates at the same time, and Send As would not work. Once the delegate was removed from Outlook Delegates and that user's permissions were removed at the root level of the mailbox (the level that appears as Mailbox – Full Name), Send As began to work. So keep in mind: if you are configuring Send As or Send on Behalf, use only one method for a specific user.
SendAs Disappearing
If a user is in a Protected Group, a process in Active Directory called SDProp will come by every hour and remove Send As permissions on users in these protected groups. The security rights configured on these accounts are determined by the security rights assigned on the adminSDHolder object, which exists in each domain. The important part to remember is that every hour, inheritance on these protected accounts is removed and Send As is wiped away.
A good blog article explaining what adminSDHolder and SDProp are, and what Protected Groups are, can be found here.
I came across an issue today where I needed to reinstall Terminal Services Licensing, but when you do this the licensing is lost and needs to be re-applied.
I managed to resolve this by copying the licensing database to a different folder, re-installing Terminal Services Licensing, and then copying it back.
Stop the Terminal Services Licensing service
Copy c:\windows\system32\LServer\TLSLic.edb
Paste the database to a different location
Uninstall Terminal Services Licensing from Add/Remove Windows Components
Re-install Terminal Services Licensing
Stop the Terminal Services Licensing service
Copy TLSLic.edb back to c:\windows\system32\LServer\, overwriting the new database that is in there
Start the Terminal Services Licensing service
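The same steps can be scripted from an elevated command prompt; this is only a sketch, and the service name and backup path are assumptions that may differ on your Windows version (check with sc query):
net stop TermServLicensing
copy C:\Windows\System32\LServer\TLSLic.edb D:\Backup\TLSLic.edb
rem ... uninstall and re-install Terminal Services Licensing here ...
net stop TermServLicensing
copy /Y D:\Backup\TLSLic.edb C:\Windows\System32\LServer\TLSLic.edb
net start TermServLicensing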
Now you will notice that TS Licensing is working and all of your licences are still in place.
You CANNOT move this database to another server; it is registered to that specific licensing server!
How to setup an NFS mount on CentOS 6
NFS mounts allow sharing a directory between several servers. This has the advantage of saving disk space, as the directory is only kept on one server, and others can connect to it over the network. When setting up mounts, NFS is most effective for permanent fixtures that should always be accessible.
Setup
An NFS mount is set up between at least two servers. The machine hosting the shared directory is called the server, while the ones that connect to it are clients.
This tutorial will take you through setting up the NFS server.
The setup should be carried out as root:
sudo su -
Setting up the NFS Server
1. Install the required software and start services
First we use yum to install the required nfs programs.
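The install commands and the example /etc/exports entry did not survive in this copy of the post; as a sketch for CentOS 6, assuming we want to export /home to a single client at 10.0.0.2:
yum -y install nfs-utils nfs-utils-lib
service rpcbind start
service nfs start
chkconfig rpcbind on
chkconfig nfs on
echo '/home 10.0.0.2(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
The options used in the exports line are the ones described below.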
rw: This option allows the client to both read and write within the shared directory
sync: Sync confirms requests to the shared directory only once the changes have been committed.
no_subtree_check: This option prevents the subtree checking. When a shared directory is the subdirectory of a larger filesystem, nfs performs scans of every directory above it, in order to verify its permissions and details. Disabling the subtree check may increase the reliability of NFS, but reduce security.
no_root_squash: This option disables root squashing, allowing the root user on the client to access the shared directory as root
Once completed, save and exit the file, then run the following command to export the settings:
exportfs -a
You now have a fully functioning NFS server. If there is anything you think I have missed from this tutorial please comment below.
Setup rSnapshot backups on CentOS
In this article I will talk you through how to use rsnapshot and rsync to back up your server, with an email alert when the backup has completed showing what has been backed up.
You must first have rSync and rSnapshot installed:
yum -y install rsync rsnapshot
Once installed, you will need to create the correct configuration file for your server. Here is an example of what I use (save it as backup_config.conf):
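The configuration file itself was lost from this copy of the post, so what follows is only an illustrative sketch of an rsnapshot configuration rather than the author's original. Note that rsnapshot requires the fields to be separated by tabs, and older versions use the keyword interval instead of retain:
config_version	1.2
snapshot_root	/backups/snapshots/
cmd_rsync	/usr/bin/rsync
retain	daily	7
retain	weekly	4
backup	/etc/	localhost/
backup	/var/www/	localhost/
A cron entry along these lines then runs the backup and emails the output (the address and paths are placeholders):
30 2 * * * /usr/bin/rsnapshot -c /root/backup_config.conf daily 2>&1 | mail -s "rsnapshot daily backup" you@example.com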
If the backup fails, the email will be empty. I still haven't figured out how to get the errors emailed instead; if you know how, please let me know in the comments!
I have recently had a lot of trouble setting up a Ubiquiti UniFi USG remote user VPN. The USG requires a RADIUS server in order to function correctly; the following article covers that setup: freeRADIUS Setup.
Once RADIUS is set up, the easy part is configuring the USG through the UniFi controller.
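Before pointing the USG at RADIUS, it can be worth confirming the server actually answers authentication requests. A sketch using the radtest utility that ships with FreeRADIUS (the addresses, credentials and shared secret are placeholders):
radtest vpnuser 'vpnpassword' 192.168.1.10:1812 0 sharedsecret
A reply of Access-Accept means the USG should be able to authenticate users against it.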
First you will need to login to your UniFi Controller
Go to the settings
Then select networks
Create a new network
Add a name for the VPN
Select Remote User VPN for the Purpose
Enter an IP address with CIDR, e.g. 192.168.10.1/24
Enter the IP Address for your RADIUS Server
Enter the port for your RADIUS Server (Default is 1812)
Enter your RADIUS server's secret key/password
Click Save
That is all you need to do!
In version 5.3.11 and below, L2TP is not supported, which means the VPN will not work with iPhones, iPads, etc.; this is supposed to be resolved in the next release.
This is something I was unaware of until recently, when I was looking into the use of VVols. It appears VMware has made some major improvements to the way snapshots are handled and consolidated in vSphere 6.0 with VVols. Most people who use VMware are aware of the limitations of snapshots on VMs with heavy IO or large snapshots attached to them: in a large number of cases we see snapshots fail to remove and then require hours of downtime to actually consolidate.
Previously, taking a snapshot would make the VMDK read-only and create a new delta file that all new changes were written to. This file would continue to grow and could potentially end up as big as the VM's allocated space. Depending on the size of the snapshot we would also take helper snapshots, or "safe removal snapshots"; these lowered the IO on the large snapshot so that the VM didn't see as big an impact when consolidating the first, larger snapshot. We could then remove the helper snapshot, although in some cases the IO was too high for this to work. This could cause VMware to "stun" the server, effectively freezing IO so the snapshot removal could take over, causing downtime for our end users.
Eventually, if you were unable to merge the snapshots into the base disk, the server would need to be powered down and the snapshot removed, which could take hours…
In vSphere 6.0 with VVols this has totally changed!
As you can see, we now take a snapshot but the base disk remains read/write, and multiple delta files are created containing the original data that has since changed. This means that when we remove the snapshot, all we need to do is tell VMware to delete the deltas; there is no need to write everything back to the base VMDK, as it is already there. This technique was first implemented for the VMware mirror driver in vMotion, and VMware has now used it to provide near-seamless snapshots in v6.0, removing large amounts of downtime altogether. There should no longer be any noticeable stun time, as we are only removing the references to the snapshot.
Interesting piece of information that I thought some of you might find useful.
UPDATE:
I decided to test snapshot removal times using the same VM on both a VVol and a normal datastore, writing a 10 GB file to each in the same manner. The snapshot on the VVol took 3 seconds to remove; the one on the normal datastore took just over 3 minutes. This may not sound like a lot, but this is a lab VM with no load; imagine a 100 GB snapshot under heavy load!
So it looks like there are huge benefits to be had with VVols moving forward.
Recently I have been working on a few projects that use PostgreSQL databases. As the projects have grown, our team has found it increasingly difficult to manage all of the database changes between dev, staging and prod without missing parts of functions or table columns, especially over long development periods.
Because of this I spent the past month looking into different ways to manage this, and we ended up landing on Sqitch. It wasn't the first product tested; below I run through some of the others I looked at and the issues we saw with them.
Expectations
So what did our team expect would be delivered by the database change management tool?
Well here is the list:
Native SQL support
No limitations on SQL functionality
Open Source, or have a feature rich community edition that is well supported
Easily managed version control, ideally without the need for a new SQL file for each change
Ability to roll back changes to specific versions
Unix command line utility for easy automation
The testing phase
Over about a month I tested the following products:
Flyway
Flyway was very close to being the chosen product; it met most of our requirements with a few limitations, and it was the best I had found up to that point.
Pros:
Uses native SQL
Easy file naming
Cons:
A new file is required for every change, which would lead to hundreds of version files
Inability to roll back to a specific version in time
Heavily limited functionality on the community edition
More complex implementation
Liquibase
Liquibase was looking great until I discovered that the main language used is XML. SQL is supported, but most of the documentation is XML based, and I didn't have the time to spend learning the XML format only to eventually find that some specific feature we use isn't supported by it.
All in all, I found it more complex to get started with than Flyway, and the documentation wasn't the best.
Pros:
More features in the free version than Flyway
Diff feature to compare two databases
Rollback is free
Utilises one file for migrations
Cons:
XML is the primary language used
Targeted rollback is an addon
SQLAlchemy
As this is an ORM it was removed from the running fairly quickly. There is no native SQL support, which means a high chance of missing SQL functionality; one such feature for us was the ability to create and update Postgres functions.
Pros:
Uses Python so can be baked into projects
Development Teams don’t need to know/learn SQL
Cons:
Functionality limited to what the developers implement
Risk of compatibility issues in the future
No support for native SQL files
Sqitch
Sqitch was the last option on the table; I found this tool on YouTube when a very early version was being presented.
The idea of Sqitch is to use version control to track the changes in files, which was perfect for our requirements. It meant I could update existing SQL files and Sqitch would know a change had been made and could then deploy it.
One downside is that not all of these features are implemented yet, although the developers working on the project are making massive strides and I feel it won't be long until they achieve the original goal they set out for.
Pros:
Uses native SQL
Utilises a Git-like version control system
You always edit the original file
Open source allowing you to customise as needed
Very responsive community
Ability to support almost any database
Cons:
Some expected features are not implemented yet
No commercial support, only community based
Implementation
Now that we have tested and decided that Sqitch is the product for us, it's time to implement the solution.
Installation is super simple: it's written in Perl so it can be installed on almost any system, or you can run it from a Docker container.
I won't cover the installation as it's easy enough and well documented on the Sqitch website.
One thing I would recommend is changing the default location of the files. By default Sqitch will add deploy, revert and verify directories to the root of the repository, and your SQL goes inside these directories. I prefer to keep these in a separate directory so the root stays tidy; to do this you would run a command similar to the one below when initialising your repository:
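The command itself is missing from this capture; based on the description that follows, it would be something along these lines (the GitHub URI is a placeholder, and on older Sqitch releases the top directory may need to be set with sqitch config core.top_dir sql instead of the --top-dir flag):
sqitch init sqitch_demo --uri https://github.com/example/sqitch_demo/ --engine pg --top-dir sql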
This command tells Sqitch that you want to initialise a Sqitch project within the directory sql, for the GitHub repository sqitch_demo, using the pg (PostgreSQL) engine. The other options and supported databases are all listed here.
Once you have initialised the project you are ready to add a change. The basic pattern is:
Create a branch
Add SQL changes
Modify the code as needed
Commit
Merge to master
So when first starting out you would want to create the schema. To do this you would:
Create a branch in your Git repo
Run sqitch add appschema
Edit sql/deploy/appschema.sql, sql/revert/appschema.sql and sql/verify/appschema.sql (a sketch of what might go in these files follows this list)
Run sqitch deploy db:pg://user@127.0.0.1:5432/sqitch_demo to deploy the changes
Edit any code as normal
Run any tests
Commit your changes
Merge the changes back to the main branch
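As a minimal sketch of those three files for a simple schema change (this mirrors the pattern used in the Sqitch PostgreSQL tutorial rather than the author's actual project; the verify query is just one common way to confirm the schema exists):
-- sql/deploy/appschema.sql
BEGIN;
CREATE SCHEMA appschema;
COMMIT;
-- sql/revert/appschema.sql
BEGIN;
DROP SCHEMA appschema;
COMMIT;
-- sql/verify/appschema.sql
SELECT pg_catalog.has_schema_privilege('appschema', 'usage');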
In order to ensure that your revert SQL is working as expected, it is a good idea to revert and redeploy your changes:
sqitch rebase --onto @HEAD^ -y
This command will revert the last change, and redeploy it to the database. This is essentially a shorter way of running:
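In other words, it is roughly equivalent to running these two commands back to back (the -y simply skips the confirmation prompt):
sqitch revert --to @HEAD^ -y
sqitch deploy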
When the deploy command is issued, sqitch will run down the plan file and execute each change that is required.
If this is the first time deploying Sqitch to a database, it will automatically create all the required tables to track future deployments and changes.
Conclusion
I've barely scratched the surface of Sqitch's capabilities. Considering how long Git and change management have been around, it's amazing that it has taken this long for someone to get it right. If you are having issues with managing database changes, I highly suggest you try Sqitch.
Synchronise time with external NTP server on Windows Server
Time synchronization is an important aspect for all computers on the network. By default, client computers get their time from a Domain Controller, and the Domain Controller gets its time from the domain's PDC Operations Master; therefore the PDC must synchronize its time from an external source. I usually use the servers listed at the NTP Pool Project website. Before you begin, don't forget to open the default UDP port 123 (inbound and outbound) on your firewall.
First, locate your PDC Server. Open the command prompt and type:
netdom query fsmo
Log in to your PDC server and open the command prompt, then run the following command:
net stop w32time
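The configuration command itself did not survive in this copy of the post; the usual form, using the NTP Pool servers mentioned above as an example peer list, is roughly:
w32tm /config /syncfromflags:manual /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org" /reliable:yes /update
net start w32time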
w32tm /resync /nowait
To check that the command has worked, run the following:
w32tm /query /configuration
When doing this on SBS you may get an access denied error; if you do, remove /reliable:yes from the command in step 3.
Teamspeak 3 on CentOS 7 using MariaDB Database (3.0.12.4)
This tutorial takes you through setting up TeamSpeak 3 on CentOS 7. I will also cover using a MariaDB database for the backend and a custom service script.
We are using MariaDB because MySQL no longer ships with CentOS; MariaDB is a fork of MySQL.
Checkout the video at YouTube:
A few prerequisites that will be required before proceeding with this tutorial:
To use a MySQL database, you need to install additional libraries that are not available from the default repositories. Download MySQL-shared-compat-6.0.11-0.rhel5.x86_64.rpm (this is the 64-bit version; if you are on a 32-bit system you'll need to find that elsewhere) and install it:
yum localinstall MySQL-shared-compat-6.0.11-0.rhel5.x86_64.rpm
chkconfig --add teamspeak
chkconfig teamspeak on
service teamspeak start
Teamspeak 3: Recovering the privilege key after first startup (MySQL/MariaDB only)
When deploying a TeamSpeak 3 server, one vital thing on first startup is to make a note of the privilege key. But what do you do if, for some reason, you didn't write it down?
In this article I will show you how to retrieve it!
Login to your Teamspeak3 server
Connect to SQL:
mysql -uyouruser -p
Select your TS3 Database:
USE <DatabaseName>;
Select the tokens table:
SELECT * FROM tokens;
You should see a privilege key; copy the value in the token_key column.
It's as simple as that! The privilege key can only be used once; when it has been used, it is removed from the tokens table.
The Missing Manual Part 1: Veeam B & R Direct SAN Backups
One thing that I had problems with the first time I installed Veeam was the ability to back up virtual machines directly from the SAN, meaning that instead of proxying the data through an ESXi host, the data flows from the SAN to the backup server directly. The benefits of this are very clear: reduced CPU and network load on the ever-so-valuable ESXi resources.
The problem is that by default this just doesn't work with Veeam if you haven't properly set up your backup server. I will try to keep this process simple and vendor-agnostic (from a SAN point of view).
The first step to making the vStorage API "SAN backup" work is to make sure your backup server has the Microsoft iSCSI initiator installed. It is installed by default on Windows Server 2008; for Windows Server 2003 you will need to download the latest version from Microsoft.
You will need to configure your SAN to allow the IQN address of the iSCSI initiator to access the volumes on the SAN; this process is different for each vendor. See the screenshot for how to find the IQN in the Configuration tab of the iSCSI initiator.
After installing the MS iSCSI initiator and setting up your SAN, we need to configure it to see the SAN volumes. Do this by opening the "iSCSI Initiator" option from Control Panel. At the top of the main tab there is a field where you can enter your SAN's IP address; enter it now and press Quick Connect. Shortly, a list of all the volumes your backup server has access to should appear; select each one and press the "Connect" button. Because the volumes are formatted as VMFS, Windows will not show them in My Computer, but if you go to Disk Management inside Computer Management you should now see that the backup server can see these volumes.
Update: a note from the Veeam team: "One thing that we (Veeam) recommend is to disable automount on your Windows backup server. To do this open up a command prompt and enter diskpart. Hit enter and then type 'automount disable'. This is to ensure that the Windows server doesn't try to format the volumes at all. However, before any of this is done, if you can, give the Veeam backup server read-only access to your VMFS volumes through your SAN software."
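For reference, that diskpart step looks like this when run from an elevated command prompt on the backup server:
diskpart
DISKPART> automount disable
DISKPART> exit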
After performing these steps, go ahead and configure Veeam to use the SAN backup option, and you should notice (especially if you have separate NICs for the SAN network) that all of your data moves through the SAN directly to the backup server without proxying through the ESXi hosts.
- Two-Factor Authentication: is it worth it, does it really add more security? | TotalDebug
As we all move to a digital age, adding more and more personal information to the internet, security has become a real issue; in recent years there have been hacks and attempted hacks on well-known brands, including LastPass, LinkedIn, Twitter and Adobe.
This has cast a light on the problems that passwords bring and how vulnerable users are as a result. Most of these companies are now implementing two-factor authentication, but is it really as secure as we are led to believe? What are its pitfalls?
In this article I’m going to go through some of the pros and cons of two-factor authentication (or 2FA).
What is 2FA?
Simply put, two-factor (or multi-factor) authentication is the ability to employ multiple layers of authentication; in most cases this would be your password plus a token that expires after a short period of time.
Other types of authentication could include but are not limited to:
Fingerprint recognition
Retinal scanning
Face recognition
Try this example: you have a house with a safe inside it, and inside the safe is a gold bar. The safe has a combination that only you know, and the house has a locked door that only you have the key for. It takes two steps of “authentication” to get into the safe and retrieve your gold.
If you added more doors with different locks this would add more “authentication” and it would make the house harder to enter to get to the safe.
How does it work?
There are multiple ways that 2FA tokens work; one method is time based. Both the server and the client take the current time, e.g. 15:15, turn it into a number, 1515, and run it through an algorithm that hashes it into a multi-digit code. Both devices use the same algorithm, so as long as their clocks match they generate the same code. This is obviously a very simplified explanation, but it shows how the server and client can independently generate matching codes securely.
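As a rough illustration of how a time-based token can be derived on both sides, here is a loose Python sketch of the RFC 6238 approach (this is for illustration only, not any provider’s exact implementation; the secret value is a placeholder):
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    # Both sides agree on how many 30-second intervals have passed since the epoch
    counter = int(time.time() // interval)
    # HMAC the counter with the shared secret that was set up when scanning the QR code
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes of the digest and reduce them to a short numeric code
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Server and client run the same function with the same secret and get the same code
print(totp(b"shared-secret-from-qr-code"))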
To set up 2FA, in most cases the website you are using will show a QR code that you can scan into an app such as Authy or Google Authenticator. The app will then display a numeric token for around 30 seconds before it expires and a new code is generated. After you have entered your conventional username and password you will be prompted for your token; once entered, you are authenticated into your account. If you don’t submit the token before it expires, authentication fails and you need to enter the new token.
How Secure is 2FA?
Like any security mechanism, there are ways it can be hacked or compromised; however, with two layers of authentication we make it much harder for an attacker to gain access to our accounts. Most people use the same password across multiple websites; with 2FA, even if someone does get that password but doesn’t have the 2FA token, they aren’t getting into your accounts.
Not all deployments of 2FA are as secure as others; this comes down to the algorithms used and any reliance on third-party servers to generate the 2FA tokens. The type of 2FA used really depends on the application and the users that will be using it. Hardware-based 2FA is much more secure than software-based, but it relies on third-party hardware.
Conclusion
Personally, I believe that 2FA should be used wherever possible. If you have a smartphone that can install one of the 2FA applications, I see no reason to avoid it. It makes your accounts and personal information more secure and, most importantly, harder to hack!
Type hinting and checking in Python
Type hinting is a formal solution that statically indicates the type of a value within your Python code. It was specified by PEP 484 and introduced in Python 3.5.
Type hints help you structure your projects better; however, they are just hints and they don’t impact the runtime.
As your code base gets larger, or you utilise unfamiliar libraries, type hints can help with debugging and stop mistakes from being made when writing new code. When using an IDE such as VSCode (with extensions) or PyCharm, you will be presented with warning messages each time an incorrect type is used.
Pros and Cons
Adding Type hints comes with some great pros:
Great to assist in the documentation of your code
Enable IDEs to provide better autocomplete functionality
Help discover errors during development
Force you to think about what type should be used and returned, enabling better design decisions.
However, there are also some downsides to type hinting:
Adds development time
Only works with Python 3.5+ (although this shouldn’t be an issue now)
Can cause a minor start-up delay in code that uses it, especially when using the typing module
Code can be harder to write, especially for complex types
When should type hinting be added:
Large projects with multiple developers
Design and development of libraries, type hints will help developers that are not familiar with the library
If you plan on writing tests it is recommended to use type hinting
Function Typing
Type hints can be added to a function as follows:
After each parameter, add a colon and a data type
After the closing parenthesis of the function, add an arrow -> and the return data type
A function with type hints should look similar to the one below:
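For illustration, the updated version of the function discussed in the next paragraph, with its parameters and return value hinted as float, would look something like this (the function body is an assumption):
def add_numbers(a: float, b: float) -> float:
    # float hints mean a call such as add_numbers(1.1, 1.2) satisfies the type checker
    return a + b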
With this updated example, if we use add_numbers(1.1, 1.2) the call works without error and the type hints do not produce a warning.
Static Type Checking - Mypy
Mypy will run against your code and print out any type errors that are found. Mypy doesn’t need to execute the code; it simply runs through it, much the same as a linter would.
If no type hinting is present in the code, no errors will be produced by Mypy.
Mypy can be run against a single file or an entire folder. I also utilise pre-commit hooks, which won’t allow code to be committed if any errors are present, and I have introduced the same checks in GitHub Actions to ensure any contributions to my projects follow these requirements.
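For example, checking a single file or a whole folder from the command line (the file and folder names are placeholders):
pip install mypy
mypy my_script.py
mypy src/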
Final Thoughts
Type hints are a great way to ensure your code is used in the correct manner and to reduce the risk of errors being introduced during development. Although they are not required by Python, I feel that type hints should be added to all projects as it assists with clean code and reduces errors.
The following resources are great for additional help with type hinting:
Recently I had a requirement to set up a content filter on the USG for a client. I couldn’t find much information online, so I have decided to write this article to show others how to do it.
First we need to log on to the USG via SSH; on Windows I recommend PuTTY.
Once we have logged in, run the below command:
update webproxy blacklists
This will download all of the content filter categories to the USG; this can take some time as it is approximately 100 MB (70-80 MB of which is the “adult” category).
When this is completed run the following:
Save this information into a file on your controller
It is my experience that resource pools are nearly a four-letter word in the virtualization world. Typically I see a look of fear or confusion when I bring up the topic, or I see people using them as folders. Even with some other great resources out there that discuss this topic, a lack of education remains on how resource pools work and what they do. In this post, I’ll give you my spin on some of the ideas behind a resource pool, and then discuss ways to properly balance resource pools by hand and with the help of some PowerShell scripts I have created for you.
What is a Resource Pool?
A VMware resource pool is a way of guaranteeing, or providing higher priority to, a VM’s CPU and/or memory; the priority set at the pool level is then split equally between the individual VMs in that pool.
Who Needs Resource Pools?
You can’t make a resource pool on a cluster unless you have DRS running. So, if your license level excludes DRS, you can’t use resource pools. If you are graced with the awesomeness of DRS, you may need a resource pool if you want to give different types of workloads different priorities for two scenarios:
For when memory and CPU resources become constrained on the cluster.
For when a workload needs a dedicated amount of resources at all times.
Now, this isn’t to say that a resource pool is the only way to accomplish these things – you can use per VM shares and reservations. But, these values sometimes reset when a VM vMotions to another host, and frankly it’s a bit of an administrative nightmare to manage resource settings on the VMs individually.
I personally like resource pools and use them often in a mixed workload environment. If you don’t have the luxury of a dedicated management cluster, resource pools are an easy way to dedicate resources to your vCenter, VUM, DB, and other “virtual infrastructure management” (VIM) VMs.
Why People Fear Resource Pools
People fear resource pools because they are mysterious. OK, maybe not that mysterious, but they are a bit awkward at first. One common misuse I see quite a lot is treating them as folders to sort VMs, rather than as a performance control. They are also easy to misunderstand, and thus easy to misuse.
Where Did I Get The Numbers?
Let’s start with the resource pools. You’ll notice 3 points for each pool – the shares (high, normal or low), the amount of shares for RAM, and the amount of shares for CPU. Here is the math (supporting document):
RAM is calculated like this: [Cluster RAM in MB] * [20 for High, 10 for Normal, 5 for Low]
Our cluster has 100 GB of RAM (grey section) and so the math is: 102,400 MB of RAM * 20 = 2,048,000 for High and 102,400 MB of RAM * 5 = 512,000 for Low
CPU is calculated like this: [Cluster CPU Cores] * [2,000 for High, 1,000 for Normal, 500 for Low]
Our cluster has 100 CPU cores (grey section) and so the math is: 100 * 2,000 = 200,000 for High and 100 * 500 = 50,000 for Low
Based on this, the Production resource pool has roughly 80% of the shares. However, when you divide those shares for the resource pool by the number of VMs that live in the resource pool, you start to see the problem. The bottom part of the graphic shows the entitlements at a Per VM level. Test has more than twice what Production has when looking at individual VMs.
This script will calculate the per-VM resource allocation for you:
The script has many options and will calculate what the share value should be when you use the -RecommendedShares parameter.
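The original script isn’t reproduced here, but a simplified PowerCLI sketch of the per-VM calculation it performs could look like this (the cluster name is a placeholder, and you must already be connected with Connect-VIServer):
# Report how many CPU and RAM shares each VM in a pool is actually entitled to
Get-Cluster "MyCluster" | Get-ResourcePool |
    Where-Object { $_.Name -ne "Resources" } |   # skip the hidden root pool
    ForEach-Object {
        $vmCount = ($_ | Get-VM | Measure-Object).Count
        if ($vmCount -eq 0) { return }           # nothing to divide in an empty pool
        [PSCustomObject]@{
            Pool           = $_.Name
            VMs            = $vmCount
            CpuSharesPerVM = [math]::Round($_.NumCpuShares / $vmCount)
            MemSharesPerVM = [math]::Round($_.NumMemShares / $vmCount)
        }
    } | Format-Table -AutoSize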
Maintaining the Balance
So now you are thinking: oh no! My resource pools are totally wrong and this could be causing all my performance issues. So how do you keep the balance?
The trick to keeping your resource pools balanced is to work it out backwards and never, ever use the default High, Normal and Low share values on the pool itself. Decide the weight of your per-VM shares first. Let’s say that I want my Test VMs to receive a much lower share weight than Production. Shares are an arbitrary value that just determines weight; they aren’t a magic number, so you could create your own values, but I prefer to stick with the VMware per-VM defaults so you know where you stand. So, let’s give Test VMs 500 shares per vCPU and 5 shares per MB of RAM (the Low defaults), and Production VMs 2,000 shares per vCPU and 20 shares per MB of RAM (the High defaults). I would then change the resource pools using these calculations: [Total VM RAM in pool, MB] * [per-MB shares] = [required RAM shares], and [Total VM vCPUs in pool] * [per-vCPU shares] = [required CPU shares].
I would recommend having all virtual machines in a resource pool to avoid any issues with balancing your load. If you don’t want to do that then make sure you set your custom shares according to the VMware standards.
Our resource pools: Production would get 90,000 MB * 20 = 1,800,000 shares of RAM and 90 vCPUs * 2,000 = 180,000 shares of CPU; Test would get 10,000 MB * 5 = 50,000 shares of RAM and 10 vCPUs * 500 = 5,000 shares of CPU.
Much easier, right? Note: if the number of VMs in the resource pool changes, you’ll need to update the resource pool share values to reflect the added VMs. Your options are to manually update the pool whenever its membership changes (no fun) or to use PowerCLI!
Using PowerCLI to Balance Resource Pool Shares
Now let’s do some coding. This very basic script connects to the vCenter server and cluster specified and looks at the resource pools within. It then reports on the number of VMs contained in each and offers to adjust the share values based on the input you provide, confirming before making any changes:
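Again, this is not the original script, just a simplified sketch of the same idea (server name, cluster name and share values are placeholders):
# Recalculate custom shares for every pool in a cluster from its current VM membership
Connect-VIServer -Server "vcenter.example.local"

$perVmCpuShares = [int](Read-Host "Shares per vCPU (e.g. 2000 High / 1000 Normal / 500 Low)")
$perVmMemShares = [int](Read-Host "Shares per MB of RAM (e.g. 20 High / 10 Normal / 5 Low)")

foreach ($pool in Get-Cluster "MyCluster" | Get-ResourcePool | Where-Object { $_.Name -ne "Resources" }) {
    $vms = @($pool | Get-VM)
    if ($vms.Count -eq 0) { continue }

    # Work the pool shares out backwards from what its member VMs actually contain
    $cpuShares = [int]($perVmCpuShares * ($vms | Measure-Object -Property NumCpu -Sum).Sum)
    $memShares = [int]($perVmMemShares * ($vms | Measure-Object -Property MemoryMB -Sum).Sum)

    Write-Host ("{0}: {1} VMs -> {2} CPU shares, {3} RAM shares" -f $pool.Name, $vms.Count, $cpuShares, $memShares)
    if ((Read-Host "Apply these values to $($pool.Name)? (y/n)") -eq "y") {
        Set-ResourcePool -ResourcePool $pool `
            -CpuSharesLevel Custom -NumCpuShares $cpuShares `
            -MemSharesLevel Custom -NumMemShares $memShares | Out-Null
    }
}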
I am also in the process of writing some more resource pool scripts that will email a report should you have any pools not at the correct resource levels.
I hope this has helped you understand when to use resource pools and has cleared up some of the confusion around them. It is a big chunk of information to swallow in one bite, and I’m sure there are plenty of other opinions floating around out there that won’t agree with mine; I’m OK with that. One feature that would be great is the ability to set per-VM shares on the resource pool and let the pool automatically adjust for membership changes.
Any comments and views are appreciated so please share.
UniFi L2TP: set a static IP for a specific user (built-in Radius Server)
When using my L2TP VPN with the UniFi USG I realised that it was sometimes assigning a different IP address to my client when it connected.
This wouldn’t normally be a problem if the remote client was only talking to my internal network; however, I run a server that my internal network communicates out to via IP address, so if this changes it all stops working.
This article walks through how to set up a static IP address for an L2TP client.
First we need to get a dump of our configuration from the USG; to do this, SSH to the USG and run:
mca-ctrl -t dump-cfg
Once we have this I recommend copying it into your favourite text editor. We want to delete everything except the following:
/opt/UniFi/data/sites/default/
Once in this directory, create a new file called config.gateway.json and paste the above configuration into it.
To test the new configuration file you can run this command:
python -m json.tool config.gateway.json
You shouldn’t see any errors if this is correct.
We can now re-provision the USG, which will pick up the configuration from the controller and update the VPN settings.
Upgrading a Cisco Catalyst 3560 Switch
Here are my notes on upgrading a Catalyst 3560. I plugged in a laptop to the serial console and an ethernet cable into port 1 (technically interface Gigabit Ethernet 0/1). Here is the official Cisco documentation I followed. It’s for the 3550, but the Cisco support engineer said that it’s close enough.
First Hurdle: VLAN Mismatch Error
I quickly got a bunch of errors stating “Native VLAN Mismatch: discovered on Gigabit Ethernet 0/1.” The far end of the new switch is on VLAN 1. To fix this error, I moved port 1 from VLAN 3 to VLAN 1. These are the commands I ran.
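Moving the port into VLAN 1 looks something along these lines (IOS prompts shown for context):
switch# configure terminal
switch(config)# interface gigabitEthernet 0/1
switch(config-if)# switchport access vlan 1
switch(config-if)# end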
switch#reload
Upon reboot:
switch#show ver
5. Drank a celebratory drink. Coffee of course, because I was still at work.
Over the past few months I have been using Git & GitHub more frequently, both in my professional and personal work, and with this came many questions about what the “correct” way to use Git is.
There are obviously many ways to create workflows using Git; however, below is the way I have started to manage mine. This is likely to change over time, as it is only my first workflow, but it is a start!
What to solve?
There are many things that I didn’t like about the way I used Git in the past and so these are some of the issues I am aiming to solve:
Versioning
Standardised git commit messages
How best to utilise Branches
When should Pull Requests be used
How can the workflow be Automated
Why solve them?
Well, this is quite straightforward: to improve the readability of my Git repos, especially in open source projects, but also to keep my mind clear and organised.
How were these issues solved?
Below I have split each area to solve out, this explains how I solved the issues I was experiencing.
Versioning
Versioning was something I never really thought about; I incremented versions when I wanted to, based on what I thought was right.
Then I started doing code professionally and was introduced to the Semantic Versioning specification.
This made much more sense by adding a relationship between each different increment.
A version number would be MAJOR.MINOR.PATCH, Increments as below:
MAJOR version when changes are made that would break previous functionality.
MINOR version when functionality is added in a backwards compatible manner.
PATCH version when you make backwards compatible bug fixes.
By using this method, people are able to easily identify what type of change has been implemented and whether it is likely to break their current project.
Conventional Commits
My commit records were… well… a total mess. Looking at other repos this is quite common, and not many projects follow a standard. I was looking for a better way to write commit messages that just make sense and are easy to read, and in my research I found a standard called Conventional Commits.
Conventional Commits is a specification for adding human- and machine-readable meaning to commit messages; this allows changelogs to be created through automation and makes it easier for a human to tell what has changed!
The specification is really simple, so it doesn’t take much to get your head around:
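The structure the specification defines is:
<type>[optional scope]: <description>

[optional body]

[optional footer(s)]
A commit message might then read something like feat(api): add bulk export endpoint, or fix: correct minor typos in the docs.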
Branches are named using the following format:
<issue number>-<short_description>
Example:
311-softLimit
By doing this I am able to quickly link a branch to a specific issue in the project. Branches also enable me to make multiple commits in smaller increments, which I then merge into master using pull requests.
Pull Requests
I now utilise pull requests to merge my branches into master. Each pull request runs various checks using GitHub Actions, depending on the project type. These include things like:
Version check: confirm that the version in the project files has been incremented since the last release
Tests: Check that the code functions as expected
Linting: Check that the code still adheres to the relevant standards
With all of my repos I only enable “Allow squash merging”; this allows me to create one good commit message that covers the issues fixed by the branch being merged, rather than keeping all the commits from the development lifecycle (keeping my master history clean).
Version Tags
Once I have completed all of the pull requests for a specific release I will then add a version tag to the master.
This version tag creates a point-in-time reference and triggers my release automation once it is pushed.
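Creating and pushing an annotated version tag looks like this (the version number is just an example):
git tag -a v1.2.0 -m "Release v1.2.0"
git push origin v1.2.0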
Automated Workflow
In order to streamline delivery through to release I have started to utilise GitHub Actions, which gives me almost endless automation capabilities.
Currently I utilise Actions for the following:
Linting
Tests
Version Checks
ChangeLog Generation
Release creation
Push to external artifact repositories (e.g. Docker Hub, Ansible Galaxy, etc.)
The changelog and release automation is something I have only just started doing. Previously I was manually writing out the changelog for each new release, which was time consuming and required a lot of back and forth to confirm what had changed; not an issue whilst a project is small, but as it grows that quickly becomes unmanageable.
Final Thoughts
I believe that, for the work I am doing at this time, this is the best workflow for me. If you have any thoughts on ways it could be further improved, please let me know over on my Discord.
Use GitHub pages with unsupported plugins
I have recently migrated my website over to GitHub Pages; however, in doing so I have found that there are some limitations, the main one being that not all Jekyll plugins are supported.
Due to this I needed to find a workaround, which I wanted to share with you all.
Advantages of this method
Control over gemset
Jekyll Version - Instead of using the version forced upon you by GitHub, you can use any version you want
Plugins - You can use any Jekyll plugins irrespective of them being supported by GitHub
Workflow Management
Customization - By using GitHub Actions, you are able to customize the build steps however you need them
Logging - The build log is visible and can be adjusted, so it is much easier to debug errors
Setting up the GitHub Action
GitHub Actions are created by adding a YAML file in the .github/workflows directory. Here we will create our workflow using the Jekyll Action from the Marketplace.
Create a workflow file github-pages.yml, then add the below information:
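A workflow along these lines matches the steps explained below (the checkout action version is an assumption; the Jekyll action version and JEKYLL_PAT secret are the ones referenced in this post):
name: Build and deploy Jekyll site

on:
  push:
    branches:
      - master
  workflow_dispatch:

jobs:
  jekyll:
    runs-on: ubuntu-latest
    steps:
      # Clone the repository so the action has the site source to build
      - uses: actions/checkout@v2
      # Build the site with any plugins in the Gemfile and push the result to gh-pages
      - uses: helaili/jekyll-action@2.0.1
        env:
          JEKYLL_PAT: ${{ secrets.JEKYLL_PAT }}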
We trigger on push to master, or by a manual dispatch (workflow_dispatch).
The checkout action clones your repository.
Our action is specified along with the required version: helaili/jekyll-action@2.0.1.
We set an environment variable, JEKYLL_PAT, a Personal Access Token for the action to use.
Providing permissions
The action needs permissions to push the Jekyll data to your gh-pages branch (this will be created if it doesn’t exist)
In order to do this, you must create a GitHub Personal Access Token on your GitHub profile, then set this as an environment variable using Secrets.
On your GitHub profile, under Developer Settings, go to the Personal Access Tokens section.
Create a token. Give it a name like “GitHub Actions” and ensure it has the public_repo scope (or the entire repo scope for a private repository), which is necessary for the action to commit to the gh-pages branch.
Copy the token value.
Go to your repository’s Settings and then the Secrets tab.
Create a secret named JEKYLL_PAT (the name is important) and paste your token in as the value.
Deployment
On pushing changes onto master the action will be triggered and the build will start.
You can watch the progress by looking at the actions that are currently running via your repository
If all goes well you should see a green build status on the gh-pages branch.
If this is a new repository you will also need to set up Pages to use the new gh-pages branch instead of master; this can be found in the repository settings.
Use Google Authenticator for 2FA with SSH
By default, SSH uses password authentication, and most SSH hardening guides recommend using SSH keys instead. However, an SSH key is still only a single factor, even though it is much more secure. Just as someone can guess a password or obtain it from another source, they can also steal your private SSH key and then access all the data that key has access to.
In this guide we will set up two-factor authentication (2FA), meaning that more than one factor is required to authenticate or log in. Any attacker would then need to compromise multiple devices, such as your computer and your phone, to get access.
Prerequisites
To follow this tutorial, you will need:
One CentOS 8 or Ubuntu server with a sudo non-root user and SSH key
A phone or tablet with an OATH-TOTP app, like Authy or Google Authenticator
Install chrony to synchronize the system clock
This step is very important: due to the way 2FA works, the time must be accurate on the server. Run the following commands to install and enable chrony:
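On CentOS 8 that would typically be:
sudo dnf install chrony
sudo systemctl enable --now chronyd
Or on Ubuntu:
sudo apt install chrony
sudo systemctl enable --now chrony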
The final rule in the file is a catch-all deny:
- : ALL : ALL
With this in place, local login attempts from 10.0.0.0/24 will not require two-factor authentication, while all others will. Now we need to edit the SSH daemon configuration file.
Please keep in mind that this could add a security risk if not locked down sufficiently
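A typical set of sshd_config directives for this kind of PAM-based, keyboard-interactive 2FA looks something like the following; treat it as an assumption rather than the exact lines from the original setup:
# /etc/ssh/sshd_config
ChallengeResponseAuthentication yes
UsePAM yes
AuthenticationMethods publickey,keyboard-interactive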
Restart the SSH daemon:
systemctl restart sshd
Final Thoughts
This how-to guide has taken you through adding 2FA using Google Authenticator on your phone alongside your computer, making your system considerably more secure; a brute-force attack via SSH is now much more difficult.
Use Python pandas NOW for your big datasets
Over the past few years I have been working on processing large analytical data sets requiring various manipulations to produce statistics for analysis and business improvement.
I quickly found that processing data of this size was slow, some jobs taking over 11 hours to complete, which would only get worse as the data grew.
Most of the processing required multiple nested for loops and the addition of columns to JSON-formatted data; this had large processing requirements, and multi-threaded processing wouldn’t help in these scenarios.
I knew there had to be a better way to process this data faster, and so I looked into using pandas.
What is pandas?
pandas is a software library written for the Python programming language for data manipulation and analysis. In particular, it offers data structures and operations for manipulating numerical tables and time series.
Test results
I ran two tests on 100 rows of data, one using for loops and one using pandas. With for loops the test took 19.09s to complete; with pandas, an impressive 1.21s, an improvement of 17.88s. When I run this on the full dataset, which currently sits at around 16,500 rows, it takes 33.15 seconds, an impressive improvement on a full run with for loops (which I had to cancel after 3 hours as it took too long for my requirements).
Pandas first steps
Install and import
Pandas is an easy package to install. Open up your terminal program (for Mac users) or command line (for PC users) and install it using either of the following commands:
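Typically that is one of:
pip install pandas
or, if you use the Anaconda distribution:
conda install pandas
As a taste of the kind of vectorised operation that replaces a nested for loop, the snippet below adds a derived column in a single step (the file and column names are made up for illustration):
import pandas as pd

# Load JSON-formatted records straight into a DataFrame
df = pd.read_json("records.json")

# Add a derived column in one vectorised operation instead of looping row by row
df["total_cost"] = df["unit_price"] * df["quantity"]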
There is much more that can be done with pandas and DataFrames; this just scratches the surface and gives a very basic overview. The main reason for writing this article is to show what a difference in performance pandas makes; if you aren’t using it for your data yet, I recommend that you do!
Using CloneZilla to migrate a multiple disk server
I recently decided to migrate all of my home servers from VMware ESXi to Proxmox; there were many factors at play, but the main one is that new versions of ESXi don’t support my hardware.
For a normal migration I would just use CloneZilla’s remote-source to remote-dest feature, however I could only get this to work for a single source disk, which is fine for the majority of my servers, however I do have some with multiple disks which became an issue.
What was the problem?
At its core, CloneZilla is designed to clone a single disk to one or more other disks, and you can do this in many different ways; however, if you have a machine with multiple disks then it is not possible to do this in the traditional way that most online tutorials show you.
I really struggled to find any information on this subject, and most of my research turned up how to clone a single disk to multiple disks rather than how to clone multiple disks to multiple disks!
It’s easy to see how this could be difficult for CloneZilla; how would it know which disks on the destination to clone the source data to? Without some form of GUI where you pair up all the disks it would be difficult.
The solution
In order to overcome this issue, I created a CloneZilla image and saved it to an NFS share. Once complete, I was able to load the image on the destination machine; as there were only two disks in the destination server the image was applied without any issue, and on boot I could see that both disks had been cloned over from the image.
The only thing I didn’t like about this is that I had to first create the image and then deploy it. When I only have one server to clone and no need for an image, it would be nice for CloneZilla to implement something in remote-source / remote-dest that allows this functionality.
Final Thoughts
CloneZilla is an excellent tool for performing these migrations; it’s very easy to use and clones the images quite quickly. In my opinion it is much easier than the other solutions provided on the Proxmox website; in fact, other methods using the OVF Tool never worked for me (and there are lots of reports of other users having the same issues), which is why I ended up going with CloneZilla.
If you have had any experience with Proxmox migrations using CloneZilla or have a trick that makes the OVF Tool migrations work please let me know over on my Discord.
This article covers the deployment of the vCenter 6.0 VCSA; you will see that this process is radically different from previous versions.
Download VCSA 6.0 from the VMware Website.
Mount the ISO on your computer.
go to the VCSA folder and install the VMware Client Intergration Plugin.
launch vcsa-setup.html from the ISO.
you will be prompted to Install or Upgrade, Choose Install
Accept the terms and click next
Enter the FQDN / IP and user details for an ESXi Host
wait for validation then click yes on the certificate warning.
Enter the appliance name and root password.
Select the install type, there are now 2 choices, you can either deploy the appliance as one virtual machine or two, when deploying as two virtual machines one would be the platform services controller and the second vCenter Server.
Select the SSO type. You have the choice of setting up a new SSO Domain or joining an existing one if you already have one in place.
select the size of the appliance, this ranges from Tiny (10 hosts, 100 VMs) to Large (1,000 hosts and 10,000 VMs)
Select the datastore you would like vCenter to reside on, tick “Enable Thin Disk Mode” if you want the Appliance to be Thin Provisioned.
Select the database type, either the embedded database or an Oracle database.
Fill in the Network settings as required, choosing the correct network / IP addressing required for your network.
vCenter will now begin to deploy.
You should now have a fully working vCenter Server Appliance 6.0. This install is much improved from previous versions and makes it much easier for basic users to get the appliance deployed.
This post is licensed under CC BY 4.0 by the author.
As most of you will now be aware, VMware decided to end availability of vCloud Director and shift to only allowing service providers to utilise the product.
Originally the idea was that organisations would use vCloud Director for test environments but as the “Cloud” becomes cheaper and companies move their hosting out to 3rd party providers it makes sense for VMware to push consumers towards hosted platforms for cheaper billing and better support.
With the release of the vRealize product suite we see the new Automation product that allows users to automate deployments on hosted vCloud platforms which is a great step forwards.
So what’s new in vCloud Director 8.0?
vSphere 6.0 Support: Support for vSphere 6.0 in backward compatibility mode.
NSX support: Support for NSX 6.1.4 in backward compatibility mode. This means that tenants’ consumption capability is unchanged and remains at the vCloud Networking and Security feature level of vCloud Director 5.6.
Organization virtual data center templates: Allows system administrators to create organization virtual data center templates, including resource delegation, that organization users can deploy to create new organization virtual data centers.
vApp enhancements: Enhancements to vApp functionality, including the ability to reconfigure virtual machines within a vApp, and network connectivity and virtual machine capability during vApp instantiation.
OAuth support for identity sources: Support for OAuth2 tokens.
Tenant throttling: This prevents a single tenant from consuming all of the resources for a single instance of vCloud director. Ensuring fairness of execution and scheduling among tenants.
So not much has changed, even though the version number has jumped quite dramatically. One thing that I will be interested in seeing is whether the NSX support adds much more functionality, and what the upgrade paths are from vCNS to NSX for existing providers.
This post is licensed under CC BY 4.0 by the author.
Today I had to renew the SSL certificates for a vCloud Director 8.10 cell; the existing certificates had expired.
I could not find a working guide explaining the steps so this post covers everything required to replace expiring / expired certificates with new ones.
First Cell Steps
First, let’s check that the cell doesn’t have any running jobs:
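Assuming a default installation path and the built-in administrator account (adjust both for your environment), the cell-management-tool can confirm this and then quiesce the cell before we touch the certificates:

```shell
# Show the cell's active job count and coordinator status
/opt/vmware/vcloud-director/bin/cell-management-tool -u administrator cell --status

# Once the job count reaches 0, stop the cell from taking on new work and shut it down cleanly
/opt/vmware/vcloud-director/bin/cell-management-tool -u administrator cell --quiesce true
/opt/vmware/vcloud-director/bin/cell-management-tool -u administrator cell --shutdown
```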
Over the past couple of weeks I have spent some time working with VMware vCloud Director 5.1. I will also be producing multiple other guides for vCloud Director as I use it more over the coming months.
One issue that we have hit a few times was the vCD cell stopped working properly (Multi-cell environment). I could log into the vCD provider and organization portals but the deployment of vApps would run for an abnormally long time and then fail after 20 minutes.
The first thing I tried in order to resolve this issue was reconnecting vCenter to vCloud; in the past this has been the solution to this type of problem. However, I noticed two problems:
Problem #1: Performing a Reconnect on the vCenter Server object resulted in Error performing operation and Unable to find the cell running this listener.
Problem #2: None of the cells have a vCenter proxy service running on the cell server.
I then stumbled upon some SQL queries that I wasn’t too sure about, so I passed these over to VMware and they confirmed that this is the correct action to take and that it is non-destructive. The steps below take you through resolving the issue:
Stop all your Cells
service vmware-vcd stop
Backup the entire vCloud SQL Database. This is just a precaution.
Run the below query in SQL Server Management Studio:
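The query clears the Quartz (qrtz) scheduler tables discussed further down. Treat the following as a hedged sketch only: the database name and exact table list vary between vCloud Director versions, so confirm them against VMware’s guidance for your build before running anything.

```sql
-- Hedged sketch: purge the Quartz scheduler tables in the vCloud Director database.
-- Database and table names are typical examples and may differ in your environment.
USE vcloud;
DELETE FROM qrtz_blob_triggers;
DELETE FROM qrtz_cron_triggers;
DELETE FROM qrtz_simple_triggers;
DELETE FROM qrtz_fired_triggers;
DELETE FROM qrtz_triggers;
DELETE FROM qrtz_calendars;
DELETE FROM qrtz_job_details;
DELETE FROM qrtz_scheduler_state;
GO
```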
Start one of your Cells and verify that the issue is resolved
service vmware-vcd start
Start the remaining cells.
The script should run successfully wiping out all rows in each of the named tables.
I was now able to restart the vCD cell and my problems were gone; everything was working again and all the errors had vanished.
These [vCenter Proxy Service] issues are usually caused by a disconnect from the database, causing the tables to become stale. vCD constantly needs the ability to write to the database, and when it cannot, the cell ends up in a state similar to the one that you have seen. The qrtz tables contain information that controls the coordinator service and lets it know when the coordinator should be dropped and restarted for cell-to-cell failover in a multi-cell environment. When the tables are purged, the cell is forced on start-up to recheck its status and start the coordinator service. In your situation, corrupt records in the table were preventing this from happening, so clearing them forced the cell to recheck and restart the coordinator.
This post is licensed under CC BY 4.0 by the author.
This is a question that I have been asked quite a lot recently. I have found multiple ways to do this, but two are ones that I have used and find the most suitable.
Using vSphere Client
In vCenter go to: Home > Inventory > Datastores and Datastore Clusters
Select your cluster in the left panel
Choose “Storage Views” tab in the right pane.
Sort by “Snapshot Space”
Anything with more than 0.00b has a snapshot present
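A scripted alternative is PowerCLI. As a rough sketch (assuming you are already connected with Connect-VIServer), something like this lists every VM that has a snapshot, along with its age and size:

```powershell
# List every virtual machine that currently has a snapshot, largest first.
Get-VM | Get-Snapshot |
    Select-Object VM, Name, Created,
        @{Name = 'SizeGB'; Expression = { [math]::Round($_.SizeGB, 2) }} |
    Sort-Object SizeGB -Descending |
    Format-Table -AutoSize
```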
In this article I am going to take you through what a distributed switch (dvSwitch) is and how it is used. I will also talk about why backing them up is so important, then show you how to back them up by hand and with the help of some PowerShell scripts I have created for you.
What is a distributed switch?
A distributed switch (dvSwitch) is very similar to a standard vSwitch; the main difference is that the switch is managed by vCenter instead of by the individual ESXi hosts. ESXi/ESX 4.x and ESXi 5.x hosts that belong to a dvSwitch do not need further configuration to be compliant.
Distributed switches provide similar functionality to vSwitches. dvPortgroups, which are sets of dvPorts, are the dvSwitch equivalent of portgroups, which are sets of ports in a vSwitch. Configuration is inherited from dvSwitch to dvPortgroup, just as from vSwitch to portgroup.
Virtual machines, Service Console interfaces (vswif), and VMKernel interfaces can be connected to dvPortgroups just as they could be connected to portgroups in vSwitches.
This means that if you have 100 ESXi Hosts you only need to configure the PortGroups once and then add the ESXi Hosts to the dvSwitch rather than configuring the networking individually on each host.
How Do You Use a dvSwitch?
Below I have created an example of a two-host cluster using a dvSwitch. The dvSwitch is first configured on vCenter and then hosts are added to it; adding a host to a dvSwitch will then push the network configuration to that host.
Once a host is added to the dvSwitch you only need to assign the VMKs and IP addresses for it to begin functioning correctly. If you have migrated from a vSwitch you can migrate the VMKs across, saving additional configuration.
As you can see from the image, there are a few differences from a standard switch. You now have “dvUplinks”: these are virtual vmnics for the physical network cards that are associated with the same service. For example, management on host A could be vmnic0 whereas on host B it could be vmnic8; without dvUplinks we would not be able to assign the same service to different vmnics on each host.
After you get your head around dvUplinks everything else falls into place; the rest of the dvSwitch is the same as a standard switch (other than features).
VMKs are host-specific due to the requirement for an IP address; they cannot be allocated on a pool basis, which is a shame. You have to manually add VMKs by going to the host network configuration, selecting vSphere Distributed Switch and then selecting Manage Virtual Adapters. This will then allow you to add / remove / migrate VMKs to and from specific port groups.
Pros & Cons
There are only a few pros and cons to distributed switches, I have listed all the ones I am aware of below: (if you know any more please leave a comment!)
Pros
Private VLANs
Netflow – ability for NetFlow collectors to collect data from the dvSwitch to determine what network device is talking and what protocols they are using
SPAN and LLDP – allows for port mirroring and traffic analysis of network traffic using protocol analyzers
Easy to add a new host
Easy to add a new port group to all hosts
Load Based Teaming, Load Balancing without the IP Hash worry.
Cons
If vCenter fails there is no way to manage your dvSwitch
Requires an Enterprise Plus License
Different Features
These features are available with both types of virtual switches:
Can forward L2 frames
Can segment traffic into VLANs
Can use and understand 802.1q VLAN encapsulation
Can have more than one uplink (NIC Teaming)
Can have traffic shaping for the outbound (TX) traffic
These features are available only with a Distributed Switch:
Can shape inbound (RX) traffic
Has a central unified management interface through vCenter Server
Supports Private VLANs (PVLANs)
Provides potential customization of Data and Control Planes
vSphere 5.x provides these improvements to Distributed Switch functionality:
Increased visibility of inter-virtual machine traffic through Netflow
Improved monitoring through port mirroring (dvMirror)
Support for LLDP (Link Layer Discovery Protocol), a vendor-neutral protocol.
The enhanced link aggregation feature provides choice in hashing algorithms and also increases the limit on number of link aggregation groups
Additional port security is enabled through traffic filtering support.
Improved single-root I/O virtualization (SR-IOV) support and 40GB NIC support.
Automated dvSwitch Backup Script
Below is a script that I have written that allows automated backups of your dvSwitches.
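What follows is a trimmed, illustrative outline of the approach rather than the full script: the cmdlets come from PowerCLI’s VDS module (for example Export-VDSwitch, which writes a backup that New-VDSwitch -BackupPath can later restore), so verify their availability and behaviour against your PowerCLI version.

```powershell
# Outline only: export every distributed switch to a timestamped backup file.
# Assumes PowerCLI with the VDS module and an existing vCenter connection.
$backupRoot = 'C:\Backups\dvSwitch'              # hypothetical backup location
$stamp      = Get-Date -Format 'yyyyMMdd-HHmm'
New-Item -ItemType Directory -Path $backupRoot -Force | Out-Null

foreach ($vds in Get-VDSwitch) {
    # Each export includes the switch configuration and its port groups.
    Export-VDSwitch -VDSwitch $vds -Destination (Join-Path $backupRoot "$($vds.Name)-$stamp.zip")
}
```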
I have also got many other scripts available for use here on my GitHub.
Final Thoughts
vSphere Distributed Virtual Switches are definitely the correct choice for companies that have the license. Is it worth buying the licensing just for dvSwitch? I wouldn’t say so, unless you require one of the specific features that only dvSwitch supports. When your environment starts to grow, I would say they are vital for saving time when deploying hosts and re-configuring networks. I would recommend that you only use one or the other and don’t use a hybrid configuration; in a hybrid mode you are adding more configuration for your team and also added complexity that is not required. As long as you always have a backup of your dvSwitch you will not have any issues with loss of configuration.
If you have anything to add please comment below, all feedback is appreciated.
This post is licensed under CC BY 4.0 by the author.
In this article I will be showing you the new ESXi Embedded Host Client. This has been long awaited by many users of the free ESXi host and allows much better management of the host.
Check out the latest version in this video:
Installation
The easiest way to install a VIB is to download it directly on the ESXi host.
If your ESXi host has internet access, follow these steps:
Enable SSH on your ESXi host, using DCUI or the vSphere web client.
Connect to the host using an SSH Client such as putty
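With SSH access in place, the install itself is a single esxcli command. The VIB URL below is illustrative, so grab the current link from the Host Client Fling page (or download the VIB and use a datastore path instead):

```shell
# Install the Host Client VIB straight from a URL (requires outbound HTTP from the host):
esxcli software vib install -v http://download3.vmware.com/software/vmw-tools/esxui/esxui-signed-latest.vib

# Or install from a copy placed on a datastore:
esxcli software vib install -v /vmfs/volumes/datastore1/esxui-signed.vib
```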
If the VIB installation completes successfully, you should now be able to navigate a web browser to https://<host>/ui and the login page should be displayed.
Usage
The login page is the same one used for vCenter Server; on logging in you will also see that the menu structures follow this look and feel.
From the interface you are able to use most of the features seen in the old VI Client. It is very responsive (compared to the vCenter versions) and seems to work well.
One feature that is a little frustrating is the inability to edit the settings of a powered-on virtual machine, so you would either need to use the command line, the old VI Client, or power off the VM.
A few things that are still “under construction” are:
Host Management
Authentication
Certificates
Profiles
Power Management
Resource Reservation
Security
Swap
Host -> Manage -> Virtual Machines View
Virtual Machine
Log Browser
Networking
Monitor Tasks
Removal
To remove the ESXi embedded host client from your ESXi host, you will need to use esxcli and have root privileges on the host.
Connect to the host using an SSH client such as PuTTY.
Log into the host and run the following command:
esxcli software vib remove -n esx-ui
If you have any comments, tips or tricks, please let me know over on my Discord
This post is licensed under CC BY 4.0 by the author.
One of the great virtualization and VMware features is the ability to take snapshots of a virtual machine. The snapshot feature allows an IT administrator to make a restore point of a virtual machine, with the option to make it crash consistent. This feature is particularly useful when performing upgrades or testing, as if anything goes wrong during the process, you can quickly go back to a stable point in time (when the snapshot was taken).
Snapshots are great for quick, short-term restores, but can have devastating effects on an environment if kept long term. There are a number of reasons why snapshots should not be kept long term or used as backups; one of the main issues is I/O performance (see VMware KB 1008885). A list of best practices for snapshots can be found in KB 1025279. This article shows one method to remove snapshots in a way that minimizes impact.
Noticing High I/O
As mentioned earlier, one of the disasters that can occur when leaving a snapshot active for too long is very heavy I/O. After taking a look at the virtual machine, the “Revert to Current Snapshot” option is available, so a snapshot exists.
Before deleting the snapshot, check the size of the deltas to get an idea of how long the removal process will take. To do this, select your virtual machine, right-click the datastore and click Browse.
From the datastore select the folder matching your virtual machine name.
As you can see from the delta (000001.vmdk), the snapshots are large. If this were a non-critical server or a small snapshot I would just delete it, but in this example the snapshot exists on a business-critical server, so I will take the precautions below.
Why Take Precautions
Although snapshot removal has been substantially improved in newer versions, it is still possible in 5.1 to stun the VM and in 5.5 to fail the removal and require consolidation. For a business critical application such as Microsoft SQL / Exchange that must remain active, the snapshot removal process cannot be cancelled once it has been initiated.
Here is one example from when I had first started working with VMware: I noticed one of our IT staff had taken a snapshot on our Exchange server and had left it there for around two weeks. It was then decided we would remove the snapshot… big mistake! About three hours into the snapshot removal our phones were ringing off the hook; the Exchange server had become unresponsive and users could no longer access their mail. For the next three hours VMware was removing the snapshot and no one was able to use email.
Removing a Large Snapshot
As crazy as this will seem, to remove the large snapshot we must first create a new snapshot… yes, you did read that correctly. The reason for this is that it stops VMware writing to the old snapshot delta, allowing that delta to be written back to the main VMDK without interruption. We are then left with a much smaller new snapshot that can be easily removed.
Uncheck the “Snapshot the Virtual machine’s memory” option and name the snapshot: Safe Snapshot Removal. Unchecking the box shown below helps when removing the “Safe Snapshot” once the other snapshot is removed; as we are not expecting to restore to this snapshot, the memory state is not required.
We now have 2 snapshots, one from the upgrade (the old large snapshot) and our new Safe Removal Snapshot.
Next, remove the large “Upgrade” snapshot. This will roll the snapshot back into the parent and will no longer cause any downtime. Note that this can potentially cause greater I/O penalties, so calculate the risks before proceeding with this method.
Once the Upgrade snapshot has been deleted, I verify that the Safe Removal Snapshot is fairly small. If not, repeat the process. If it is, the Safe Removal Snapshot can be deleted.
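The same sequence can be scripted if you prefer. A PowerCLI sketch of the steps above would look like this (the VM and snapshot names are hypothetical):

```powershell
# Safe removal of a large snapshot, scripted with PowerCLI.
$vm = Get-VM -Name 'SQL01'    # hypothetical business-critical VM

# 1. Take a small, memory-less snapshot so writes stop landing in the old delta.
New-Snapshot -VM $vm -Name 'Safe Snapshot Removal' -Memory:$false -Quiesce:$false

# 2. Remove the old, large snapshot; its delta is rolled back into the parent disks.
Get-Snapshot -VM $vm -Name 'Upgrade' | Remove-Snapshot -Confirm:$false

# 3. Check the remaining delta is small, then remove the safety snapshot as well.
Get-Snapshot -VM $vm -Name 'Safe Snapshot Removal' | Remove-Snapshot -Confirm:$false
```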
Transparent Page Sharing (TPS) is a host process that leverages the Virtual Machine Monitor (VMM) component of the VMkernel to scan physical host memory and identify duplicate VM memory pages. The benefit of TPS is that it allows a host to reduce memory usage so you can put more VMs onto a host, as memory is often one of the most constrained resources on a host. TPS is basically de-duplication for RAM and works at the 4KB block level.
In some situations multiple virtual machines will have identical sets of memory content; TPS allows these sets to be de-duplicated, thus using less overall memory on the host. The image above displays a host with TPS enabled and one with TPS disabled: TPS uses much less memory where blocks are duplicated.
What has changed?
VMware recently acknowledged a vulnerability in the TPS feature that could, in very specific scenarios, allow VMs to access memory pages of other VMs running on the same host. It is important to note that this vulnerability is not easily exploitable and the risk is really low, so most environments should not really be impacted by it. However, VMware have been cautious and released patches that disable this feature by default in subsequent updates.
All versions of vSphere are vulnerable to the exploit, but VMware is only patching the 5.x versions of vSphere as 4.x versions are no longer supported. These patches only disable TPS, which is currently enabled by default; they do not fix the vulnerability. VMware states in the KB article that administrators may revert to the previous behaviour if they wish.
The benefits that TPS provides will vary in each environment depending on VM workloads, so if you want to be PCI compliant or are paranoid about security you will probably want to leave TPS disabled. You can view the effectiveness of TPS in vCenter by looking at the shared and sharedcommon memory counters to see how much it is benefiting you.
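If you want to put numbers on that, a PowerCLI sketch along these lines pulls the relevant counters for a host and shows the advanced setting the KB uses to restore the old behaviour. The host name is hypothetical, and the salting values should be confirmed against the KB for your exact build.

```powershell
# How much memory is TPS saving on this host right now?
$esx = Get-VMHost -Name 'esx01.example.local'    # hypothetical host
Get-Stat -Entity $esx -Realtime -MaxSamples 1 -Stat 'mem.shared.average','mem.sharedcommon.average'

# Reverting to the pre-patch inter-VM sharing behaviour (per the VMware KB) is done via
# the Mem.ShareForceSalting advanced setting - 0 re-enables inter-VM page sharing.
Get-AdvancedSetting -Entity $esx -Name Mem.ShareForceSalting |
    Set-AdvancedSetting -Value 0 -Confirm:$false
```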
OK, so today I was doing some PHP coding and got the dreaded header error. It caused me a bit of a headache, as I needed to redirect some pages. After a bit of searching I managed to find an alternative to using:
header("Location: index.php");
So to get rid of the error that this produces simply change it to any of the below:
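Common drop-in alternatives (generic examples rather than anything specific to this site) push the redirect to the browser instead of relying on an HTTP header that can no longer be sent:

```php
<?php
// Option 1: meta refresh - works even after output has already been sent.
echo '<meta http-equiv="refresh" content="0; url=index.php">';

// Option 2: JavaScript redirect - also safe after output has started.
echo '<script>window.location.href = "index.php";</script>';

// Option 3: buffer all output by calling ob_start() at the very top of the script,
// so header("Location: index.php"); keeps working as normal.
```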
Upgrade your Linux UniFi Controller in minutes!
Posted Feb 25, 2017
Ubiquiti provide a Controller version for other distributions of Linux but only show Debian on their site; if you’re running CentOS or another Linux distribution, you’ll have to use the generic controller package. The upgrade process is so simple! (I have also written this script that makes it even quicker.)
I previously explained how to install your own UniFi Controller on CentOS in this article. Once you have it up and running, it’s even easier to upgrade to a newer version. The process takes less than 3 minutes with these steps.
This upgrade was tested on version 5.3.11 to 5.4.11 but should be the same for all versions
UPDATE: I have also upgraded 5.4.11 to 5.5.11 with no issues
Stop the UniFi Controller service:
systemctl stop unifi
Take a backup of the current unifi folder:
cp -R /opt/UniFi/ /opt/UniFi_bak/
Download the new version:
cd ~ && wget http://dl.ubnt.com/unifi/5.4.11/UniFi.unix.zip
Unzip the downloaded file into the correct directory:
unzip -q UniFi.unix.zip -d /opt
Copy the old data back into the UniFi folder, this allows historical data to be kept:
cp -R /opt/UniFi_bak/data/ /opt/UniFi/data/
Restart the UniFi Controller service:
systemctl start unifi
Wait a little while for your controller to load back up; once completed you can log in as normal and you should still have all your legacy data visible.
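If you would rather run the whole thing as one block, a minimal sketch that wraps the same steps looks like this (set VERSION to whatever release you are upgrading to):

```shell
#!/bin/bash
# Minimal sketch of the upgrade steps above - adjust VERSION to the target release.
set -e
VERSION="5.4.11"

systemctl stop unifi
cp -R /opt/UniFi/ /opt/UniFi_bak/
cd ~ && wget "http://dl.ubnt.com/unifi/${VERSION}/UniFi.unix.zip"
unzip -q -o UniFi.unix.zip -d /opt        # -o overwrites the existing install in place
cp -R /opt/UniFi_bak/data/ /opt/UniFi/data/
systemctl start unifi
```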