Proxmox+Packer+Terraform+Ansible

Automating the continuous deployment of a virtual pentest machine using Proxmox, Packer, Terraform, Ansible and GitLab.


What on earth?

Setting up a new pentest VM for every project is tedious, time-consuming and error-prone. Thus, I've set out to build from scratch an automation that will:

  • Provision a release of Kali as a VM template (Packer - IaC)

  • Provision a staging and production version (Terraform - IaC)

  • Modify the VM via simple changes in GitLab (GitLab + Ansible - CI/CD Pipeline)

  • Do all that on a Proxmox server

To demonstrate a minimal setup combining Packer, Terraform, GitLab and Ansible, I've documented the initial configuration. Keep in mind that some best practices, such as using a vault and securing credentials, have been skipped for brevity. Should you choose to follow this guide, make sure to read the official documentation for best-practice recommendations!

In this guide, we won't look at how to install Proxmox or GitLab.

Overview

Use this chapter as a reference for how the components interact with each other.

Set Up a Base VM

We will call the base VM gitlab-runner, as it will perform all actions based on the instructions of a pipeline that we are going to set up later. It is the one system that we will have to maintain manually. We could automate that too, but that's out of scope.

  1. In Proxmox, create a new VM

  2. Once the VM is ready, we will need to install a few things

    # Install Packer and Terraform: https://developer.hashicorp.com/packer/install
    wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
    sudo apt update && sudo apt install packer terraform
    
    # We will also need git and curl for the GitLab Runner integration
    sudo apt install git curl
    curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
    sudo apt install gitlab-runner
    
    # And Ansible
    sudo apt install pipx sshpass
    pipx ensurepath
    source ~/.bashrc
    pipx install --include-deps ansible
    
    # In case your environment requires it:
    # Install the root certificates for Proxmox and GitLab
    # You can achieve this by exporting the root CA via a browser (URL bar, lock icon)
    sudo mv custom-ca.crt /usr/local/share/ca-certificates/custom-ca.crt
    sudo update-ca-certificates

That's it for the installation part. We will continue to make some changes and add things, but I recommend you create a snapshot of that VM now, so you can roll back in case anything goes wrong.
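If you prefer the Proxmox host shell over the GUI, qm can take the snapshot for you. A dry-run sketch (assuming the runner VM got ID 420, as suggested for the base VM):

```shell
# Dry-run sketch: print the qm command instead of executing it.
# VMID 420 matches the ID suggested for the gitlab-runner base VM.
VMID=420
SNAP_NAME="post-install-$(date +%Y%m%d)"
echo qm snapshot "$VMID" "$SNAP_NAME" --description "tools installed"
# Drop the leading 'echo' and run this on the Proxmox host to actually take the snapshot.
```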

Provision Kali with Packer

Packer is a tool for creating virtual images. Here, we will use it to automate downloading a Kali release from the official website, installing it in a VM and then creating a Proxmox template from it.

  1. Before we dive into Packer, we want to create an API user in Proxmox that Packer (and later also Terraform) can use to orchestrate any changes, like creating and deleting VMs

    1. Navigate to Datacenter > Permissions > Roles

    2. Create a new role and call it APIProvision for example (the name cannot start with PVE)

    3. We want to assign to this role only the permissions that the API really requires:

      # Feel free to experiment
      # In case of missing permissions, Proxmox will throw API errors,
      # though the messages don't always point to a permission problem - keep that in mind
      Datastore.Allocate
      Datastore.AllocateTemplate
      Datastore.Audit
      Datastore.AllocateSpace
      Pool.Allocate
      SDN.Use
      Sys.Audit
      Sys.Console
      Sys.Modify
      Sys.PowerMgmt
      VM.Allocate
      VM.Audit
      VM.Clone
      VM.Config.CDROM
      VM.Config.CPU
      VM.Config.Cloudinit
      VM.Config.Disk
      VM.Config.HWType
      VM.Config.Memory
      VM.Config.Network
      VM.Config.Options
      VM.Console
      VM.Migrate
      VM.Monitor
      VM.PowerMgmt
      VM.Snapshot
      VM.Snapshot.Rollback
    4. Next, navigate to Datacenter > Permissions > API Tokens and select Add

    5. Select a user (optimally it is a single-purpose API user but a normal account works too) and make sure that Privilege Separation is checked. This is especially important when using a normal account (or even root). When the checkbox is ticked, the API token will not inherit permissions from the user

    6. Set any token ID and click Add

    7. You will now see the complete token ID (i.e. <username>!<tokenname>) and the API secret - store both of them securely in your KeePass or elsewhere

    8. Lastly, go to Datacenter > Permissions and select Add

    9. Here, we merge the API token, the role, and a resource pool

      Select the created role, the created API token and the resource pool where you want to provision your systems - I will use a resource pool called Infrastructure that I had created under Permissions > Pools

If you lose your API secret, you will have to create a new API token. Delete the previous one and reuse the role.
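As a quick smoke test, the Proxmox API accepts the token via an Authorization header. A sketch with placeholder values (the token ID and secret below are made up, not from this guide):

```shell
# Placeholder credentials - substitute the values you stored earlier
TOKEN_ID='automation@pve!packer'
TOKEN_SECRET='0123cafe-feed-beef-0000-000000000000'
# Proxmox expects: Authorization: PVEAPIToken=<user@realm!tokenname>=<secret>
AUTH_HEADER="Authorization: PVEAPIToken=${TOKEN_ID}=${TOKEN_SECRET}"
echo "$AUTH_HEADER"
# Uncomment to query the version endpoint with it (replace <proxmox>):
# curl -k -H "$AUTH_HEADER" https://<proxmox>:8006/api2/json/version
```

If the credentials work, the version endpoint returns a small JSON document; a permissions error here saves you a long Packer run later.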

  1. Now we create our Kali Linux Packer script kali.pkr.hcl

    packer {
        required_plugins {
            proxmox = {
                version = ">= 1.2.2"
                source = "github.com/hashicorp/proxmox"
            }
        }
    }

    Then follows the description of exactly what we want to build with that plugin.

    Consult the linked documentation for an explanation of each field.

    # source <plugin> <name>
    source "proxmox-iso" "kali-template" {
        
        # Proxmox connection settings
        proxmox_url = "https://<proxmox>/api2/json"
        username = "<username!tokenname>" # for testing, just paste the values directly
        token = "<apitoken>"              # later we should store them securely
                                          # Just remember not to push secrets to GIT
      
        # Image metadata
        node = "pve"
    pool = "Infrastructure" # packer-proxmox does not support nested pools in 1.2.2
        vm_id = "421"
        vm_name = "kali-template"
        tags = "infrastructure"
        template_description = "Template created with Packer"
        
        # ISO source
        boot_iso {
          iso_url = "https://cdimage.kali.org/kali-2025.1c/kali-linux-2025.1c-installer-netinst-amd64.iso"
          iso_checksum = "sha256:<checksum-from-the-website>"
          iso_storage_pool = "local-iso"
          unmount = true
       }
       
       # VM settings
       ## Disk
       disk {
         type = "scsi"
         disk_size = "60G"
         storage_pool = "local-lvm"
       }
       ## CPU
       cores = "2"
       sockets = "2"
       ## Memory
       memory = 8192
       ## Network
       network_adapters {
         model = "virtio"
         bridge = "vmbr0" # must be reachable from gitlab-runner
         vlan_tag = "200" # optional
         firewall = true
       }
       
       # Enable Cloud Init
       cloud_init = true
       cloud_init_storage_pool = "local-lvm"
       qemu_agent = true
       
       # SSH
       ssh_timeout = "2h" # maximum time we allow for the boot+installation process
       ssh_username = "kali"
       ssh_password = "kali" # fine for testing, will be replaced with variables later
       
       # Preseed Kali via HTTP
       # See https://gitlab.com/kalilinux/recipes/kali-preseed-examples
       http_directory = "boot-cfg"
       
       boot_command = [
         # Switch to boot menu
         "<esc><wait>",
         # Utilize the preseed file
         "/install.amd/vmlinuz vga=788 auto=true priority=critical url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/kali-preseed.cfg initrd=/install.amd/initrd.gz --- quiet",
         "<enter>"
       ]
    }

    We will take a look at the mentioned boot-cfg directory and kali-preseed.cfg in just a second. First, we need a final item - the build:

    build {
        name = "proxmox" # you can choose one
        sources = ["sources.proxmox-iso.kali-template"] # put together from the source
        
        # Post installation steps
        provisioner "shell" {
            inline = [
                # Clean up the VM before turning it into a template
                "sudo truncate -s 0 /etc/machine-id",
                # Set up and configure cloud-init
                "echo 'datasource_list: [ NoCloud, ConfigDrive ]' | sudo tee /etc/cloud/cloud.cfg.d/99_pve.cfg",
                # Revert changes to the sudoers file that made sudo in this SSH session possible
                "sudo sed -i '/\\(^Defaults.*!requiretty$\\|^kali.*ALL$\\)/d' /etc/sudoers",
            ]
        }
    
    }

    Of course, we could extend this section. But we want to keep changes here to a minimum as we can use Ansible for any real configuration.
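The sed expression in the provisioner is easy to get wrong, so it is worth trying out locally against a throwaway file first. A minimal sketch (the sample sudoers content is illustrative):

```shell
# Build a throwaway sample resembling the lines the preseed added
cat > sudoers.sample <<'EOF'
Defaults        env_reset
Defaults        !requiretty
kali    ALL=(ALL)    NOPASSWD: ALL
EOF
# Same expression as in the provisioner (GNU sed; \| is a GNU BRE extension)
sed -i '/\(^Defaults.*!requiretty$\|^kali.*ALL$\)/d' sudoers.sample
cat sudoers.sample   # only the env_reset line should remain
```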

Another method is cloud-init, which provides a way of configuring basic system settings, such as credentials and IP configuration, via a standard interface. Proxmox supports cloud-init and lets you set these options easily via the GUI.

The default Kali ISO is perfect for customization with preseeding but does not come with cloud-init installed. For cloning and easy configuration later on, we do want cloud-init though. Thus, we install and activate it manually (alternatively, you may also experiment with the Kali Generic Cloud image).
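For orientation only: cloud-init consumes a user-data document at boot. Proxmox generates an equivalent document from the Cloud-Init tab settings, so you won't write one by hand here, but a minimal illustrative #cloud-config (all values are placeholders) looks like:

```yaml
#cloud-config
hostname: kali-staging
ssh_pwauth: true             # allow password SSH logins
users:
  - name: kali
    plain_text_passwd: kali  # placeholder, obviously not for production
    lock_passwd: false
    sudo: ALL=(ALL) NOPASSWD:ALL
```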

  1. Now, as mentioned previously, we create a folder called boot-cfg in the current directory

  2. # Locale and keyboard
    d-i debian-installer/locale string en_US
    d-i debian-installer/language string en
    d-i debian-installer/country string US
    d-i debian-installer/locale string en_US.UTF-8
    d-i localechooser/shortlist select US
    d-i localechooser/preferred-locale select en_US.UTF-8
    d-i localechooser/languagelist select en
    d-i keyboard-configuration/xkb-keymap select us
    # Network
    d-i netcfg/choose_interface select auto
    d-i netcfg/get_hostname string kali-template
    d-i netcfg/get_domain string unassigned-domain
    d-i netcfg/wireless_wep string
    # Mirrors
    d-i mirror/country string manual
    d-i mirror/http/hostname string http.kali.org
    d-i mirror/http/directory string /kali
    d-i mirror/http/proxy string
    # User
    d-i passwd/user-fullname string kali
    d-i passwd/username string kali
    d-i passwd/user-password password kali
    d-i passwd/user-password-again password kali
    # Date
    d-i clock-setup/utc boolean true
    d-i time/zone string US/Eastern
    d-i clock-setup/ntp boolean true
    # Partitions
    d-i partman-auto/method string regular
    d-i partman-auto-lvm/guided_size string max
    d-i partman-auto/choose_recipe select atomic
    d-i partman-partitioning/confirm_write_new_label boolean true
    d-i partman/choose_partition select finish
    d-i partman/confirm boolean true
    d-i partman/confirm_nooverwrite boolean true
    d-i partman-md/confirm boolean true
    #Packages
    tasksel tasksel/first multiselect standard,core,desktop-xfce,meta-default
    d-i pkgsel/include string qemu-guest-agent cloud-init
    # Grub
    d-i grub-installer/only_debian boolean true
    d-i grub-installer/with_other_os boolean true
    d-i grub-installer/bootdev string /dev/sda
    d-i finish-install/reboot_in_progress note
    # Post Install
    d-i preseed/late_command string \
    echo "kali    ALL=(ALL)    NOPASSWD: ALL" >> /target/etc/sudoers;\
    sed -i "s/env_reset/env_reset\nDefaults\t!requiretty/" /target/etc/sudoers;\
    in-target systemctl enable ssh
  3. With this config we can change the hostname, username, password, install packages such as cloud-init, etc.

  4. With everything prepared, we must run the following command once

    packer init kali.pkr.hcl

    This downloads the Proxmox plugin we specified in the beginning.

  5. From here on, we can use

    packer validate kali.pkr.hcl # to check for syntax errors, and
    packer build kali.pkr.hcl # to provision the kali template
    # The build command will download (and cache) the specified ISO
    # provision a VM, install the OS, run the build steps, shut the VM down
    # and create a template from it.
    # If we want another release, we just update the URL.
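If a build dies early in the installer, a malformed preseed file is a common culprit. A rough sanity check: every non-comment, non-blank line should begin with a package owner such as d-i or tasksel. A self-contained sketch using an inline sample (point the grep at your real boot-cfg/kali-preseed.cfg instead):

```shell
# Inline stand-in so the snippet is self-contained
cat > sample-preseed.cfg <<'EOF'
d-i debian-installer/locale string en_US
tasksel tasksel/first multiselect standard
EOF
# Flag any non-comment line that does not start with a known owner field
if grep -Ev '^[[:space:]]*(#|$)' sample-preseed.cfg \
   | grep -Evq '^(d-i|tasksel)[[:space:]]'; then
  echo "suspicious preseed lines found"
else
  echo "preseed lines look well-formed"
fi
```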

That sums it up for Packer and Proxmox. Feel free to adapt any step to your needs - in the end you should see a success message like the following one and a new template in your Proxmox host.

Finally, you can verify that everything is working by cloning the template in the Proxmox web interface. Before you start the new machine, switch to the Cloud-Init tab, set a username and password, then press Regenerate Image. Now you are ready to start the VM and log in with the credentials you just set. Note that cloud-init does a lot of things for us at boot time, including changing the hostname to the name of the new virtual machine.

You could configure the behaviour of cloud-init by modifying the file /etc/cloud/cloud.cfg, but that is out of scope for this series. By default, it will add the specified user as the only user and grant it root permissions.

Deploy Staging and Production VM with Terraform

Having a template available, we now want to clone it to create a staging and a production machine for our CI/CD pipeline. The staging VM will be used for feature and integration tests, while the production VM is going to be the one we use as a base image to deploy during engagements.

For this we are going to use Terraform, a tool that lets us declare the systems we want in our infrastructure. Based on that declaration, it uses the Proxmox API to build the environment.

  1. We start by creating a terraform directory on the gitlab-runner machine.

  2. Switch to the directory and create a kali-provision.tf

    The first part may look familiar. It works much like the Packer file.

    terraform {
        required_version = ">=1.11.4"
        
        required_providers {
            proxmox = {
                source = "telmate/proxmox"
                version = "3.0.1-rc8"
            }
        }
    }
    
    provider "proxmox" {
        pm_api_url = "https://<proxmox>/api2/json"
        pm_api_token_id = "<username!tokenname>" # paste the Proxmox API ID we created
        pm_api_token_secret = "<apitoken>"       # paste the token
                                            # We do this for testing
                                            # do not push this code to GIT
                                            # later we should store secrets appropriately
    }
    # These are going to be the VM names we want to setup, add any you like
    variable "targets" {
        type = list(string)
        default = [
            "staging",
            "prod"
        ]
    }
    
    resource "proxmox_vm_qemu" "kali-deployment" {
        count = length(var.targets)
    
        # General
        target_node = "pve"
        pool = "Infrastructure" # remember the nested pool issue from Packer? Same thing here
        tags = "infrastructure"
        agent = "1" # since we have qemu_guest_agent installed
        full_clone = true
        os_type = "cloud-init"
        
        # Cloud init
        ci_wait = "20" 
        ciuser = "kali"
        cipassword = "kali"
        
        # VM
        clone_id = "421" # ID of the template we created with Packer
        name = "kali-${var.targets[count.index]}" # generate all VM names
        vmid = "${422 + count.index}" # set to 0 if you want automatically assigned IDs
        desc = "Created from template ${timestamp()}"
        boot = "order=scsi0"
        ## Now, we already specified all this in the template
        ## but apparently, while in the GUI it was enough to click clone
        ## here we have to specify all settings again or they won't be added
        sockets = "2"
        cores = "2"
        memory = "8192"
        ## Set a static IP address
        ipconfig0 = "gw=192.168.0.1,ip=192.168.0.${22 + count.index}/24"
        
        network { # copy from template settings unless you want to change it
            id = 0
            bridge = "vmbr1"
            model = "virtio"
            tag = "200"
            firewall = true
        }
        
        disks { # copy from Hardware settings of the template
            scsi {
                scsi0 {
                    disk {
                        storage = "local-lvm"
                        size = "60G"
                    }
                }
            }
            ide {
                ide0 {
                    cloudinit {
                        storage = "local-lvm"
                    }
                }
            }
        }
        
        # This last part is optional (in case we need some post setup action)
        connection {
            type = "ssh"
            user = "kali"
            password = "kali"
            host = "192.168.0.${22+count.index}"
        }
    
        provisioner "remote-exec" {
            inline = [
                "sudo ip a"
            ]
        }
    }
  3. Save the file and then execute terraform init once - similar to what we did for Packer, this will prepare the environment and download the specified plugin

  4. Run terraform validate to check for any errors

  5. Then run terraform plan to check what Terraform plans to do next

  6. If all looks good (in our example, 2 resources should be created, none modified, none deleted), then we can run terraform apply (if you want to revert the VMs, use terraform destroy)

  7. Confirm your actions with yes and wait a few minutes while Terraform instantiates the VMs

That's already it for the Terraform part. Go ahead, login and test whatever requirements you have for your staging and production VM. Here, I am fine with the base installation and cloud-init being active. Next, I want to make sure that I can run Ansible playbooks against the VMs.
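As a small convenience, Terraform can echo what it created. A sketch of an output block you could append to kali-provision.tf (the resource and variable names match the file above; the IPs mirror the ipconfig0 scheme):

```hcl
# Prints a name -> address map after `terraform apply` (or via `terraform output kali_vms`)
output "kali_vms" {
  value = {
    for idx, name in var.targets :
    "kali-${name}" => "192.168.0.${22 + idx}"
  }
}
```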

Configure the VMs via Ansible

Now would be a great time to take a snapshot of the gitlab-runner VM.

So far we've automated the provisioning of the infrastructure. Next, we want to automate the customization of our Kali. This may include custom programs, files, UI changes - you name it.

For this, we are going to use Ansible, as it lets us define the exact state we want our Kali to have without having to script all of it in Bash or Python.

  1. Start by creating an ansible directory and navigate to it

  2. Create two directories: group_vars and inventory

  3. We want to create the following structure:

    + ~/ansible/
        +- ansible.cfg
        +- kali.yml
        +- group_vars/
        |    +-all.yml
        +- inventory/
             +- 01-staging.yml
             +- 02-prod.yml
  4. First, the ansible.cfg

    [defaults]
    host_key_checking = False
    interpreter_python = auto_silent
    
    [ssh_connection]
    pipelining = True
    ssh_args = -o ControlMaster=auto -o ServerAliveInterval=60
  5. Then the main playbook: kali.yml

    - name: Deployment
      hosts: kali
      tasks:
        - name: Ping
          ansible.builtin.ping:
        
        - name: Print
          ansible.builtin.debug:
            msg: We could do everything here
  6. Next we want to add the SSH username and password as variables to the group_vars/all.yml but before we do that, we encrypt the password with ansible-vault encrypt_string 'kali' --name 'ansible_password'

    Copy the output and open group_vars/all.yml

    ansible_user: kali
    ansible_password: !vault |
        ... <output of previous command> ...
  7. Finally, we define the inventory, where we specify the targets to run our playbook against

    inventory/01-staging.yml
    staging:
        hosts:
            kali:
                ansible_host: 192.168.0.22

    and

    inventory/02-prod.yml
    prod:
        hosts:
            kali:
                ansible_host: 192.168.0.23
  8. Run echo kali | ansible-playbook --vault-password-file /bin/cat -i inventory/01-staging.yml kali.yml

    Using /bin/cat is a trick to accept the password from the command line. Alternatively, use a password file. Later, we will replace the password on the command line with a more secure option.

You should now have a functioning Ansible setup. We can customize both the staging and prod machine via the respective inventory file.
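To make the playbook do something useful, tasks slot straight into kali.yml. An illustrative extension (the package names are examples, not from this guide; become works because the kali user has passwordless sudo):

```yaml
- name: Deployment
  hosts: kali
  become: true
  tasks:
    - name: Install extra tooling
      ansible.builtin.apt:
        name:
          - gobuster
          - seclists
        state: present
        update_cache: true

    - name: Drop a note into the user's home
      ansible.builtin.copy:
        dest: /home/kali/README.deploy
        content: "Provisioned via Ansible\n"
        owner: kali
        mode: "0644"
```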

Finally, what's left is the automation and deployment via GitLab and CI/CD pipelines.

Create a GitLab Pipeline

Pipelines in GitLab are created by creating/editing the .gitlab-ci.yml file. You can do so by navigating to Build > Pipeline editor - on the left side of the GitLab menu.

A very straightforward pipeline may look like this:

# Define all stages (stages run sequentially)
stages:
  - build
  - provision
  - configure

# Define single jobs (jobs of a same stage run in parallel, depending on how many runners you have)
build-job:
  # This job is called build-job and belongs to the stage "build", so it runs first
  stage: build
  tags:
    - debian # GitLab runners will use the tag to see if they can run this job
  script:
    # Here we specify the action
    # At the moment we simply do what we did manually before
    # But you could extend this arbitrarily to for example:
    #  - check for new Kali releases and update the packer script with a new ISO file
    #  - update Packer or the Proxmox plugin
    #  - provide credentials from GitLab secrets
    #  - only run this stage once a month and skip it otherwise
    - cd /home/gitlabrunner/packer
    - packer build kali.pkr.hcl

provision-job: 
  # This runs once the build stage is complete without errors
  stage: provision
  tags:
    - debian
  script:
    # Again, we could:
    #  - validate the script
    #  - validate the plan
    #  - configure credentials
    #  - take snapshots of the target machines after creating them
    - cd /home/gitlabrunner/terraform
    - terraform apply -auto-approve

configure-job:      
  stage: configure
  tags:
    - debian
  script:
    # This stage is very simplified and insecure, as we specify the password in the script
    # Optimally, we would deploy SSH keys; for now, we have to remove any stale
    # host keys whenever we redeploy our VMs, or else Ansible will refuse the
    # SSH connection.
    - ssh-keygen -f "/home/gitlabrunner/.ssh/known_hosts" -R "192.168.0.22"
    - ssh-keygen -f "/home/gitlabrunner/.ssh/known_hosts" -R "192.168.0.23"
    - cd /home/gitlabrunner/ansible
    # Ideally, we would use the GitLab vault instead,
    # run the playbook against the staging VM and only if it succeeds deploy the changes
    # to the production VM as well.
    - echo kali | ansible-playbook --vault-password-file=/bin/cat -i inventory/01-staging.yml kali.yml
    
# Though out of scope here, you could also add a stage with a script
# to convert the final production image into different VM formats in the end

With this basic pipeline configuration in place, we can attempt to run our pipeline automatically via a GitLab runner.

Note that, since credentials are still hardcoded, we have not yet uploaded all these files to GitLab. That would be our next step after removing all sensitive information. We could then edit all the scripts in Git and trigger a pipeline with our changes. Here, we skip that and use the static code on gitlab-runner.
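One of the extensions hinted at in the job comments - running the expensive build stage only on a schedule - is a small rules: addition. A sketch (assuming a monthly pipeline schedule has been configured under Build > Pipeline schedules):

```yaml
build-job:
  stage: build
  tags:
    - debian
  rules:
    # only run when triggered by a pipeline schedule (e.g. monthly)
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    - cd /home/gitlabrunner/packer
    - packer build kali.pkr.hcl
```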

Create a GitLab Runner

  1. In GitLab, navigate to Settings > CI/CD > Runners and select New project runner

  2. Select a tag if you like (here I've used debian to indicate a runner that runs on debian)

  3. Set a description and "Lock to current projects"

  4. Lastly, click on Create runner

  5. On the next page, select the operating system of the runner (here Linux) and follow the steps 1 to 3 as they are described on the page

  6. Once the gitlab-runner is running, it is ready to accept jobs from the pipeline

    You may want to experiment with installing the runner as a service instead of starting it manually every time.

And that concludes this minimalistic guide to Proxmox + Packer + Terraform + Ansible + GitLab. As announced at the top, do pay attention to all the secrets floating around in plaintext and move them to appropriate keystores (using GitLab secrets, for example).

Enjoy building your own automation pipeline!

To test everything, you can go to Build > Pipelines in GitLab and press New pipeline at the top right corner.

I will be using Debian here, but you are free to use whatever you want for your base VM. I will name it gitlab-runner and give it an ID of 420.

At the time of writing, neither Packer nor Proxmox seems incredibly stable. Here, I am working with Proxmox 8.4.1, Packer 1.12.0 and the Packer Proxmox plugin 1.2.2. While many things work just fine, keep in mind that small or minor changes in any of these tools can break everything and result in headache-inducing rabbit holes of troubleshooting. On that note, I reported a bug in the Packer Proxmox plugin that you may encounter too.

The first item in the file defines the plugin we require to talk to the Proxmox API (see the plugin documentation).

Theory for the upcoming section: Preseeding is one way (of many) of automating OS installations. It consists of a file that contains all the answers to the questions we would otherwise see during a live install. Debian provides good documentation on preseeding.

In that directory, we create the kali-preseed.cfg file; this one is based on the Kali example.

The d-i preseed/late_command option is documented upstream and allows us to enable SSH and prepare sudo so that Packer can log in via SSH once the setup is finished to perform final cleanup tasks.

The next part describes the infrastructure we want to build, using the cloud-init support we configured in the template. Refer to the plugin's documentation for details.

With this, we have configured a very basic Ansible setup. I highly encourage you to take a look at the Ansible documentation. While our playbook really only connects to the target and prints a message, we now have a base for easily including arbitrary Ansible roles. We also added some basic SSH optimizations and showcased the Ansible vault.

GitLab pipelines can be configured to run any job you want on specific conditions (like a merge request, a commit, or even manually). Here, I simply demonstrate how to utilize the pipeline feature to automate all our previous steps. Check out the GitLab CI/CD documentation for more details on customizing the pipeline for your needs.
