Updating my home lab part 2 – software

After ordering and setting up the new hardware for my home lab in part 1 of this blog post, it is now time to look at the software side of the house. How can I use the hardware as efficiently as possible and still run all the software components I want? I'll have to look at the two servers and their resources, and at how they will work together to provide a smoothly running environment. Another thing to look at is how some of the appliances can be slimmed down in their memory requirements: comparing my hardware against the default requirements quickly tells me things are not going to fit. In the end I also need some resources left over to act as fabric for my vRealize Automation installation.

Inventory

To fit everything in my home lab, where memory is the biggest constraint, I had to slim down some of the appliances to home lab size. This is not supported by VMware, but most of the components run just fine when you take some memory away from them. For vCenter I had already done this by trial and error: I ended up using 8 GB instead of the default 10 GB for a tiny deployment. In the case of vRA and IaaS I did some research by reading other people's blogs and forums, and settled on 8 GB and 2 GB respectively. When looking into NSX, a colleague recommended a site that talks about slimming down NSX, which I used as a guide for my own slimming down.

After taking all of this into consideration I ended up with the following list.

vSphere clusters

The idea is to have two separate environments: one to run my infrastructure components and another to run workloads deployed by vRealize Automation. To achieve this I will use two clusters. The first will hold my two physical ESXi hosts; the second will hold two nested ESXi hosts.

This is an overview of how things are going to be set up. As you can see the vCenter appliance is running in the prod cluster where the two physical ESXi hosts are located.

Handling the resources

Memory

The total memory available across my two systems is 48 GB, split between the Intel NUC (16 GB) and the ASRock DeskMini (32 GB). Let's have a look at how the VMs need to be split across the two systems.

I ended up with this list, and as you can see I tried to keep memory usage at 80% max. This leaves a little headroom for expansion or for when I need to upgrade vRA. Remember that the upgrade checks for a supported amount of memory in the vRA appliance.
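To sanity-check a split like this, a quick sketch such as the one below can flag a host that goes over the 80% budget. The vCSA, vRA and IaaS sizes are the slimmed-down values from this post; the NSX Manager and nested ESXi numbers are illustrative assumptions, not my actual list.

```python
# Rough memory-budget check for the two hosts.
# VM sizes in GB; vCSA/vRA/IaaS are the slimmed-down values from the post,
# the NSX Manager and nested ESXi sizes are illustrative placeholders.
hosts = {
    "intel-nuc": {"capacity_gb": 16, "vms": {"vcsa": 8, "iaas": 2}},
    "deskmini": {"capacity_gb": 32, "vms": {"vra": 8, "nsx-manager": 8,
                                            "nested-esxi-1": 4, "nested-esxi-2": 4}},
}

BUDGET = 0.80  # keep allocation at 80% max for headroom

def check(hosts):
    """Return {host: (allocated_gb, ratio, within_budget)}."""
    report = {}
    for name, host in hosts.items():
        allocated = sum(host["vms"].values())
        ratio = allocated / host["capacity_gb"]
        report[name] = (allocated, round(ratio, 2), ratio <= BUDGET)
    return report

for name, (allocated, ratio, ok) in check(hosts).items():
    print(f"{name}: {allocated} GB allocated ({ratio:.0%}) -> "
          f"{'OK' if ok else 'over budget'}")
```

Bumping any VM up (say, giving the nested hosts 6 GB each) immediately shows the DeskMini tipping over the 80% line, which is exactly the kind of thing the upgrade headroom is meant to absorb.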

CPU

Processing power is abundant, so there is no need for a lot of design work around CPUs. I am, however, making sure that I do not oversubscribe on the number of virtual cores versus physical cores.
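The same kind of quick check works for the 1:1 vCPU rule. The core counts and vCPU assignments below are assumptions for illustration, not the actual specs of my boxes:

```python
# Simple 1:1 vCPU oversubscription check per host.
# Core counts and vCPU assignments are illustrative assumptions.
physical_cores = {"intel-nuc": 4, "deskmini": 4}
vcpus = {
    "intel-nuc": {"vcsa": 2, "iaas": 1},
    "deskmini": {"vra": 1, "nsx-manager": 1,
                 "nested-esxi-1": 1, "nested-esxi-2": 1},
}

def oversubscribed(host):
    """True when the summed vCPUs exceed the host's physical cores."""
    return sum(vcpus[host].values()) > physical_cores[host]

for host in physical_cores:
    total = sum(vcpus[host].values())
    status = "oversubscribed" if oversubscribed(host) else "fits 1:1"
    print(f"{host}: {total} vCPU on {physical_cores[host]} cores -> {status}")
```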

Storage

As far as storage goes, I touched on this topic in part 1 of this blog post, where I stated that I will not use vSAN or shared storage. vSAN has a memory overhead that will never fit in my home lab. Shared storage is available in the form of an NFS share running on a QNAP NAS. I won't be using this for running VMs because the QNAP model I have does not support running VMs on it. I am, however, using it for ISOs and templates.

Back to local storage. The Intel NUC has two internal disks: a 120 GB SATA SSD and a 500 GB SATA hard disk. The ASRock DeskMini, on the other hand, has a 250 GB NVMe SSD and two 250 GB SATA SSDs.

The sizes in this list are based on actual consumption. All my VMs use thin-provisioned disks, so I am a bit overprovisioned, and I'm OK with that.
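With thin provisioning, the provisioned total can legitimately exceed what a datastore actually holds, as long as the real consumption stays under capacity. A small sketch makes the distinction concrete; every disk size here is a made-up example, not my actual layout:

```python
# Thin-provisioning check: provisioned capacity vs. what is really used.
# All sizes in GB; the numbers are illustrative, not my actual disks.
datastore_capacity = 250  # e.g. one 250 GB SATA SSD in the DeskMini
provisioned = {"vra": 140, "nsx-manager": 60, "ubuntu-1": 40, "ubuntu-2": 40}
actually_used = {"vra": 55, "nsx-manager": 25, "ubuntu-1": 12, "ubuntu-2": 15}

prov_total = sum(provisioned.values())    # total promised to the VMs
used_total = sum(actually_used.values())  # blocks really consumed on disk
ratio = prov_total / datastore_capacity

print(f"Provisioned {prov_total} GB on a {datastore_capacity} GB datastore "
      f"({ratio:.0%} of capacity), {used_total} GB actually in use")
```

Here 280 GB is promised on a 250 GB datastore, yet only 107 GB is consumed; the overprovisioning only becomes a problem if the thin disks grow toward their full provisioned sizes.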

Networking

Then a quick note on networking. At the moment I have no managed switch in my home lab backing this environment, so no VLANs will be used for now. A single distributed port group will connect everything to the network. As for NSX: for now it will only be used for the environment inside the nested ESXi hosts (where vRA will deploy workloads). Later on I might enable NSX for the rest of the environment to do some micro-segmentation.

Conclusion

That’s it for now. I have a good overview of how I want my home lab to work and how to set it up. I have been working on it for a bit and it is starting to take shape: currently I have both the physical and nested ESXi hosts running, and I have deployed the vRealize Automation appliance and NSX Manager. I still need to install the vRA IaaS components and set up NSX to work with the compute cluster (the nested ESXi hosts). In the future I also want to have a look at running my main production VMs as containers with vSphere Integrated Containers, but that is a project on its own.

4 Replies to “Updating my home lab part 2 – software”

  1. Amazing article, thank you for all the good info.
    I'm trying to build a home lab to practice NSX. I have a 2960 switch and an i7 PC with 48 GB of RAM and two 512 GB SSDs. Is this enough to run a full NSX lab?
    How many physical NICs do I need to run NSX?

    1. Hi,

      The resources you need are somewhat specific to your environment. If you really want to run NSX with all services and redundancy, 48 GB might not cut it.
      Have a look at this site I found, which gives you a general idea of what you need.

      Looking at the number of NICs: ideally you need more than one, to separate management and VXLAN traffic.

      Hope this helps.

  2. Hi Wesley,

    Congratulations, cool post. I have a quick question: What is the purpose of the Ubuntu VMs? Are they running in the Nested ESXi host?

    1. Hi,

Those Ubuntu VMs run various ‘production’ services I use internally. At the end of the post I comment on running some workloads as containers with vSphere Integrated Containers. That covers what is running in the Ubuntu VMs at the moment.
