CI/CD for a Pentest VM
Automating the continuous deployment of a virtual pentest machine using Proxmox, Packer, Terraform, Ansible and GitLab.
What on earth?
Setting up a new pentest VM for every project is tedious, time-consuming and error-prone. Thus, I've set out to build from scratch an automation that will:
Provision a release of Kali as VM template (Packer - IaC)
Provision a staging and production version (Terraform - IaC)
Modify the VM via simple changes in GitLab (GitLab + Ansible - CI/CD Pipeline)
Do all that on a Proxmox server
I started out documenting the initial minimum setup and combination of Packer, Terraform, GitLab and Ansible. Over time I've repeatedly improved some components and updated this blog accordingly. For example, the first version used hardcoded credentials and offered little functionality in the pipeline.
By now, the examples include a complete end-to-end pipeline that fetches the latest Kali release, configures it, runs Ansible on it and releases it as a VMware-ready virtual machine via SFTP.
As Proxmox (and other software) will continue to evolve, examples will break over time but the architecture and overall setup should remain valuable for you as an orientation. Though I do not cover every consideration, this guide will provide you with all the basics required to build your own automation pipeline.
Should you choose to follow this guide, make sure to always read official documentation for up-to-date and best practice recommendations!
In this guide, we won't look at how to install Proxmox or GitLab.
Overview
Use this chapter as a reference to keep in mind which components interact with each other.

Set Up a Base VM (Gitlab Runner)
We will call the base VM gitlab-runner, as it will perform all actions based on the instructions of a pipeline that we are going to set up later. It is the one system that we will have to maintain manually. We could automate that too, but that's out of scope for now.
In Proxmox, create a new VM (top right corner "Create VM") inside your favourite resource pool
I will be using Debian here but you are free to use whatever you want for your base VM. I will name it `gitlab-runner` and give it an ID of `420`.
Supply enough disk space to allow conversion of VM backups later (~200 GB will suffice)
8192 MiB of RAM suffices (though more is better)
4 sockets, 4 cores
Choose a network config that will provide your VM with internet access and access to GitLab
Beware of proxy setups - this can cause you a serious amount of trouble later on. It's possible, but a pain to maintain, trust me. Also, packet inspection may flag and block Kali/Tool downloads, so take that into account early during setup.
Next, complete the ISO installation setup
The hostname will be `gitlab-runner`
Do not add a `root` user (leave the password blank to disable it)
Add a normal user (I will call it `user`) and set a strong password
Consider setting up a KeePass database for this project now because you will create more credentials and secrets.
Once the VM is ready, we will need to install a few things
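As an orientation, a minimal package set for a Debian base could be installed like this (a sketch, assuming HashiCorp's apt repository for Packer and Terraform — adjust the set to your pipeline's needs):

```shell
# Basic tooling for the pipeline jobs (assumed set - adapt as needed)
sudo apt update
sudo apt install -y git curl gnupg lsb-release ansible ansible-lint qemu-utils

# Packer and Terraform from HashiCorp's apt repository
curl -fsSL https://apt.releases.hashicorp.com/gpg | \
  sudo gpg --dearmor -o /usr/share/keyrings/hashicorp.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
  sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install -y packer terraform
```

The GitLab runner binary itself is installed later, following the instructions GitLab shows when you register a new runner.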
That's it for the installation part. We will continue to make some changes and add things but I recommend you create a snapshot of that VM now, so you can rollback in case anything goes wrong.
Additional Base VM Preparations
Right now we've laid the groundwork for a VM that will run our GitLab pipeline jobs. However, we need a few more things for later.
Certificates
In case you host your own GitLab instance, consider setting up a proper certificate and install the CA on the `gitlab-runner`.
Configure Proxmox with a proper backup storage
I will not cover the setup of a new storage. You require a storage that's large enough to hold at least one or two VM backups (200 GB will suffice). In Proxmox -> Server View -> Datacenter -> Storage, look for:
Content must contain `Backup`
Enabled must say `Yes`
Note down the `Path/Target` value
Then we configure a "Directory Mapping" that will mirror the backup directory into the runner VM
In Proxmox -> Server View -> Datacenter -> Directory Mappings -> Add:
Name: choose any
Path: this must match the name of the backup storage from the step before

Next we add a Virtiofs device to our `gitlab-runner` to include that directory mapping
Make sure to set `Cache` to `never` or you may experience disk exhaustion over time

Restart the runner VM to activate the new Virtiofs
Now we configure this backup directory and also add a second directory that will serve us for hosting purposes (it will contain the latest pentest VM release)
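Inside the guest, a Virtiofs share is mounted using its mapping name as the device tag. A sketch, assuming the mapping was named `backup` and example mount/release paths:

```shell
# Mount the Virtiofs share (tag = name of the Directory Mapping in Proxmox)
sudo mkdir -p /mnt/backup
echo 'backup /mnt/backup virtiofs defaults 0 0' | sudo tee -a /etc/fstab
sudo mount -a

# Second directory for hosting the latest pentest VM release via SFTP
sudo mkdir -p /srv/sftp/release
```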
To prevent the `sftp` user from being abused, we make sure that the SSH config in `/etc/ssh/sshd_config` contains the following lines. Only use the line permitting empty passwords if you deleted the `sftp` user's password intentionally (like in the example above).
Run `sudo systemctl restart ssh` to enable the changes.
Now, only `sftp` will be able to connect with `sftp sftp@<gitlab-runner-ip>` and nobody else.
Another thing that you may or may not benefit from: setting a static IP address for the `gitlab-runner` VM. This way you make sure that downloading the latest release is always possible with the same command later. If you have DNS set up correctly, you would not need this and could work with hostnames. To configure a static IP, edit `/etc/network/interfaces`. Activate the changes with `sudo reboot now` and confirm with `ip a`.
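The SSH restriction can be sketched as a `Match` block (user name and paths are examples; note that a `ChrootDirectory` must be owned by root and must not be writable by any other user):

```
# /etc/ssh/sshd_config (sketch)
Match User sftp
    PermitEmptyPasswords yes   # only if you deliberately removed the sftp user's password
    ChrootDirectory /srv/sftp
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```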
Provision Kali with Packer
Packer is a tool for creating virtual images. Here, we will use it to automate downloading a Kali release from the official website, installing it in a VM and then creating a Proxmox template from it.
At the moment of writing, neither Packer nor Proxmox seem incredibly stable. Here, I am working with Proxmox 8.4.1 (edit: guide updated for 9.1.4), Packer 1.12.0 (edit: guide updated for 1.14.3) and the Packer Proxmox plugin 1.2.2 (edit: guide updated for 1.2.3). While many things work just perfectly, keep in mind that small or minor changes in either of these tools can break everything and result in headache-inducing rabbit holes of troubleshooting. On that note, I reported a bug in the Packer Proxmox plugin that you may encounter too.
Before we dive into Packer, we want to create an API user in Proxmox that Packer (and later also Terraform) can use to orchestrate any changes, like creating and deleting VMs
Navigate to Datacenter > Permissions > Roles
Create a new role and call it `APIProvision`, for example (the name cannot start with `PVE`)
We want to assign only the permissions to this role that the API really requires:
Next, create a user under Datacenter > Permissions > Users > Add (with no groups) and store the password in your KeePass - you should not see/use this password ever again.
Next, navigate to Datacenter > Permissions > API Tokens and select `Add`
Select the user we created and make sure that `Privilege Separation` is checked. When the checkbox is ticked, the API token will not inherit permissions from the user.
Set any token ID and click `Add`
You will now see the complete token ID (i.e. `<username>!<tokenname>`) and the API secret - store both of them securely in your KeePass
Lastly, go to Datacenter > Permissions and select `Add`
Here, we merge the API token, the role, and a resource pool
Select the created role, the created API token and the resource pool where you want to provision your systems - I will use a resource pool called `Infrastructure` that I had created under Permissions > Pools
If you lose your API key, you will have to create a new API token. Delete the previous one and reuse the role.
Now we create our Kali Linux Packer script `kali.pkr.hcl`.
The first item in the file defines the plugin we require to talk to the Proxmox API (documentation).
Now we want to define some variables, so that we keep valuable secrets out of the code:
Then follows the description of exactly what we want to build with that plugin. Consult the linked documentation for an explanation of each field (these fields may change over time).
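Putting those pieces together, a skeleton of `kali.pkr.hcl` could look like the following. This is a sketch only: host, node, pool and credentials are placeholders, and field names can change between plugin versions, so cross-check the plugin documentation.

```hcl
packer {
  required_plugins {
    proxmox = {
      version = ">= 1.2.2"
      source  = "github.com/hashicorp/proxmox"
    }
  }
}

variable "proxmox_api_user" {
  type = string
}

variable "proxmox_api_token" {
  type      = string
  sensitive = true
}

source "proxmox-iso" "kali" {
  proxmox_url              = "https://<proxmox-host>:8006/api2/json"
  username                 = var.proxmox_api_user
  token                    = var.proxmox_api_token
  insecure_skip_tls_verify = true
  node                     = "<node>"
  pool                     = "Infrastructure"
  vm_name                  = "kali-template"
  http_directory           = "boot-cfg" # serves the preseed file to the installer
  ssh_username             = "kali"
  ssh_password             = "kali"
  # ISO selection, hardware sizing and boot_command omitted - see the plugin docs
}

build {
  sources = ["source.proxmox-iso.kali"]
}
```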
Theory for the upcoming section: Preseeding is one way (of many) for automating OS installations. It consists of a file that contains all the answers to the questions we would otherwise see in a live install. Debian has some good documentation here.
Another method is cloud-init which provides a way of configuring basic system configuration such as credentials and IP configuration via a standard interface. Proxmox supports cloud-init and allows you to set these configurations easily via the GUI.
The default Kali ISO is perfect for customization with preseeding but does not come with cloud-init installed. For cloning and easy configuration later on, we do want cloud-init though. Thus, we install and activate it manually (alternatively, you may also experiment with the Kali Generic Cloud image).
Now, as mentioned previously, we create a folder called `boot-cfg` in the current directory.
In that directory, we create the `kali-preseed.cfg` file; this one is based on the Kali example.
With this config we can change the hostname, username and password, install packages such as `cloud-init`, activate passwordless SSH for the Packer script, and so on.
The `d-i preseed/late_command` is documented here and allows us to enable SSH and prepare `sudo` so that Packer can log in via SSH once the setup is finished to perform final clean-up tasks.
With everything prepared, we must run `packer init` once. This downloads the Proxmox plugin we specified in the beginning.
From here on, we can use `packer build` to create the template.
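As an orientation, a heavily abridged `kali-preseed.cfg` might look like this (all values are examples; base your full file on the official Kali preseed example, which also contains partitioning and mirror settings omitted here):

```
# boot-cfg/kali-preseed.cfg (abridged sketch)
d-i debian-installer/locale string en_US.UTF-8
d-i netcfg/get_hostname string kali
d-i passwd/user-fullname string kali
d-i passwd/username string kali
d-i passwd/user-password password kali
d-i passwd/user-password-again password kali
d-i pkgsel/include string openssh-server cloud-init
d-i preseed/late_command string in-target systemctl enable ssh; \
    in-target sh -c "echo 'kali ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/kali"
```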
That sums it up for Packer and Proxmox. Feel free to adapt any step to your needs - in the end you should see a success message like the following one and a new template in your Proxmox host.

Finally, you can verify that everything is working by cloning the successfully crafted template in the Proxmox web interface. Subsequently, before you start the newly created machine, switch to the cloud-init tab of that VM and set a username and password. Then press Regenerate Image. Now you are ready to start the VM and log in with the credentials you just set. Note that cloud-init does a lot of things for us at boot time, including changing the hostname to the name of the new virtual machine. Just keep this in mind if you later try to change things via Ansible that cloud-init tries to manage (you can use Ansible to disable parts of cloud-init later on).

You could configure the behaviour of cloud-init by modifying the file /etc/cloud/cloud.cfg but that is out of scope for this blog. By default, it will add the specified user as the only user and grant it root permissions.
Deploy Staging and Production VMs with Terraform
Having a template available, we now want to clone it to create a staging and production machine for our CI/CD pipeline. The staging VM will be used for integration tests while the production VM is going to be the one we can use as base image to deploy during engagements.
For this we are going to use Terraform. Terraform is a tool that we can use to declare the systems that we want to have in our infrastructure. Based on that declaration, it will use the Proxmox API to build the environment.
We start by creating a `terraform` directory on the `gitlab-runner` machine. Switch to the directory and create a `kali-provision.tf`.
The first part may look familiar - it works much the same as in Packer.
The next part describes the infrastructure we want to build, using the cloud-init we configured for the template. Refer to the plugin's documentation here for details.
Save the file and then execute `terraform init` once - similar to what we did for Packer, this will prepare the environment and download the specified plugin.
Run `terraform validate` to check for any errors.
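As an orientation, a skeleton `kali-provision.tf` could look like the following. This sketch assumes the community Telmate/proxmox provider; names, node, credentials and the template name are placeholders, so adapt everything to the provider and environment you actually use.

```hcl
terraform {
  required_providers {
    proxmox = {
      source  = "telmate/proxmox" # one possible provider choice
      version = ">= 3.0"
    }
  }
}

variable "proxmox_api_user" {
  type = string
}

variable "proxmox_api_token" {
  type      = string
  sensitive = true
}

provider "proxmox" {
  pm_api_url          = "https://<proxmox-host>:8006/api2/json"
  pm_api_token_id     = var.proxmox_api_user
  pm_api_token_secret = var.proxmox_api_token
}

# One staging and one production clone of the Packer template
resource "proxmox_vm_qemu" "kali" {
  for_each    = toset(["kali-staging", "kali-prod"])
  name        = each.key
  target_node = "<node>"
  pool        = "Infrastructure"
  clone       = "kali-template"
  full_clone  = true
  ciuser      = "kali"
  cipassword  = "kali"
  ipconfig0   = "ip=dhcp"
}
```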
Keep in mind that we used variables in our Terraform script (similar to the Packer script). So for the following terraform <action> commands, you should add these command line parameters:
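For example (the variable names are illustrative and must match the ones declared in your `.tf` file):

```shell
terraform plan \
  -var "proxmox_api_user=<user>!<tokenname>" \
  -var "proxmox_api_token=<secret>"
```

The same `-var` flags apply to `terraform apply` and `terraform destroy`.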
Then run `terraform plan` to check what Terraform plans to do next.
If all looks good (in our example, 2 resources should be created, none modified, none deleted), then we can run `terraform apply` (if you want to revert the VMs, use `terraform destroy`).
Confirm your actions with `Yes` and wait a few minutes while Terraform instantiates the VMs.

That's already it for the Terraform part. Go ahead, login and test whatever requirements you have for your staging and production VM. Here, I am fine with the base installation and cloud-init being active. Next, I want to make sure that I can run Ansible playbooks against the VMs.
Configure the VMs via Ansible
Now would be a great time to take a snapshot of the gitlab-runner VM.
So far we've automated the provisioning of the infrastructure. Next, we want to automate the customization of our Kali. This may include custom programs, files, UI changes - you name it.
For this, we are going to use Ansible as it lets us define the exact state that we want our Kali to have without having to worry about scripting all of it with Bash and Python.
You can follow the next steps on the gitlab-runner VM for testing purposes. Later of course, this will have to go into our GitLab repository.
Start by creating an `ansible` directory and navigate to it.
Create three directories: `group_vars`, `roles` and `inventory`.
We want to create the following structure:
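One plausible layout that matches the files discussed in this section (the role name is just an example):

```
ansible/
├── ansible.cfg
├── .ansible-lint
├── kali.yml
├── group_vars/
│   └── all.yml
├── inventory/
│   └── hosts.yml
└── roles/
    ├── requirements.yml
    └── my-first-role/
        └── tasks/
            └── main.yml
```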
First, the `ansible.cfg`.
Then the `.ansible-lint` (configures linter rules).
Then the main playbook: `kali.yml`.
You can see that in the main playbook we gather all the roles that should be executed. Roles are automatically taken from the `roles` directory. Inside the `roles` directory, we use a `requirements.yml` that will let Ansible know which collections should be used. It is not strictly required, but good practice. It will be referenced later in the pipeline.
For test purposes you can create a role directory with just a `tasks/main.yml` (the other directories and files a role may contain are all optional).
Next we want to add the SSH username and password as variables to the `group_vars/all.yml`, but before we do that, we encrypt the password with `ansible-vault encrypt_string 'kali' --name 'ansible_password'`. This way we leverage the Ansible vault. It allows us to include encrypted secrets in our code, so we do not have to place cleartext credentials in the files. Encrypted variables are decrypted at runtime by Ansible.
Copy the output and open `group_vars/all.yml`.
Finally, we define the inventory, where we specify the targets to run our playbook against.
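Sketches of the files just described, collected in one place (the group name `kali`, the role name `my-first-role` and the IPs are assumptions; the vault block is a placeholder for your own `encrypt_string` output):

```yaml
# roles/requirements.yml - collections the playbook relies on (example)
collections:
  - name: community.general
---
# kali.yml - main playbook
- name: Configure Kali
  hosts: kali
  become: true
  roles:
    - my-first-role
---
# roles/my-first-role/tasks/main.yml - minimal test role that installs htop
- name: Install htop
  ansible.builtin.apt:
    name: htop
    state: present
---
# group_vars/all.yml - SSH credentials for the targets
ansible_user: kali
ansible_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  <paste the ciphertext lines from ansible-vault encrypt_string here>
---
# inventory/hosts.yml - the Terraform-provisioned targets (IPs are examples)
kali:
  hosts:
    kali-staging:
      ansible_host: 192.168.1.101
    kali-prod:
      ansible_host: 192.168.1.102
```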
With this, we have configured a very basic Ansible setup. I highly encourage you to take a look at the Ansible documentation. While our playbook only executes a single role that installs an application, we now have a base to include arbitrary Ansible roles easily. We also added some basic SSH optimizations and showcased the Ansible vault.
Run `echo kali | ansible-playbook --vault-password-file /bin/cat --limit kali-prod kali.yml`.
Using `/bin/cat` is a trick to accept the password from the command line. Alternatively, use a password file. Later, we will replace the password on the command line with a more secure option.
If the playbook succeeds, it should run the `my-first-role` role against the previously generated `kali-prod`, and when you log in to the VM via Proxmox, `htop` should now be installed.
You should now have a functioning Ansible setup. When everything works locally on the gitlab-runner VM, get ready to move all of the prior steps to CI/CD.
Finally, what's left is the automation and deployment via GitLab and CI/CD pipelines.
Create a GitLab Pipeline
GitLab pipelines can be configured to run any job you want on specific conditions (like a merge request, a commit, or even manually). Here, we create two pipelines that will automate all of the above.
Checkout the documentation for more details to customize the pipeline for your needs.
Pipelines in GitLab are created by creating/editing the .gitlab-ci.yml file. You can do so by navigating to Build > Pipeline editor - on the left side of the GitLab menu.
What follows is a complete pipeline example that reacts to two triggers:
Scheduled Pipeline Execution:
Whenever a scheduled pipeline (explained later) is run, the following stages and steps will be performed:
Query the Kali release page for the latest ISO file (and checksum)
Download the ISO and run Packer to create a Proxmox template from it
Build multiple VMs from that template using Terraform (at least a production and staging VM)
Create VM snapshots for all (non-production) VMs using the Proxmox API (this will allow you to reset the VMs again and again when you want to test new roles/features)
Execute the Ansible playbook against the production VM
Create a snapshot of the production VM (this snapshot can be used to rollback the production VM to the current clean release state - but usually, you do not use the production VM directly, see next step)
Backup the production VM and convert the Proxmox backup format to another virtual machine format (using `qemu-img convert`) before moving it to the `release` directory
In between all steps, the Proxmox API is used via `curl` to query and manipulate the VM status
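All of these calls follow the same pattern: an `Authorization` header carrying the API token, and JSON endpoints under `/api2/json`. A minimal sketch - the helper function, host, node and VM ID are illustrative, not part of the original scripts:

```shell
# Compose the Proxmox API token header value:
# PVEAPIToken=<user>@<realm>!<token-id>=<secret>
build_auth_header() {
  printf 'PVEAPIToken=%s=%s' "$1" "$2"
}

# Example query for a VM's current status (placeholders, not a live call):
# curl -k -H "Authorization: $(build_auth_header 'api@pve!provision' "$API_SECRET")" \
#   "https://<proxmox-host>:8006/api2/json/nodes/<node>/qemu/<vmid>/status/current"
```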
Merge Requests: This pipeline will always execute when a merge request is created (for example from a `feature` branch against the `dev` branch, or from `dev` against `main`):
Run `ansible-lint` on all Ansible code
Run the complete Ansible playbook against the staging VM
Should either of these steps fail, the merge request gets blocked (this way we can ensure that all changes introduced to our release do not break the existing Ansible code)
This pipeline is of course way bigger than a minimum working example. If you are just getting into CI/CD, I suggest you start with only the run_ansible_on_prod stage. It is the most straightforward one and simply automates what we've done manually before. If you have time to work through the entire setup, here is the directory layout that you should have in your GitLab repository by now:
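As an orientation, a plausible repository layout (the directory names are assumptions based on the files discussed throughout this guide):

```
.
├── .gitlab-ci.yml
├── ansible/
│   ├── ansible.cfg
│   ├── .ansible-lint
│   ├── kali.yml
│   ├── group_vars/
│   ├── inventory/
│   └── roles/
├── packer/
│   ├── kali.pkr.hcl
│   └── boot-cfg/
│       └── kali-preseed.cfg
├── terraform/
│   └── kali-provision.tf
└── ci/
    └── (helper scripts called from .gitlab-ci.yml)
```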
Below you will find the contents of all the scripts referenced in the .gitlab-ci.yml. We place the logic in separate files inside the ci directory to keep the actual pipeline configuration as clean as possible. You will notice that the scripts contain much more than just packer init and terraform apply. However, don't let the additional logic distract you. Much of what you will find in these scripts is curl commands that talk to the Proxmox API, starting/stopping VMs, creating snapshots and waiting for Proxmox jobs to finish. Most of it is "nice-to-have". When you boil it down, these scripts execute exactly what we ran before on the command line to execute packer, terraform and ansible.
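To give a rough idea of the shape, here is an abridged `.gitlab-ci.yml` sketch. Job names, stages and the `ci/` script paths are illustrative and far from the complete pipeline; the `rules` keys show how jobs are bound to the two triggers via `CI_PIPELINE_SOURCE`:

```yaml
# .gitlab-ci.yml (abridged sketch)
stages:
  - lint
  - build
  - configure

ansible_lint:
  stage: lint
  script:
    - ci/run-ansible-lint.sh
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

run_ansible_on_staging:
  stage: configure
  script:
    - ci/run-ansible.sh kali-staging
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

build_template:
  stage: build
  script:
    - ci/run-packer.sh
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"

run_ansible_on_prod:
  stage: configure
  script:
    - ci/run-ansible.sh kali-prod
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
```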
With this pipeline configuration in place, we can attempt to run our pipeline automatically via a GitLab runner.
Note that we referenced several GitLab variables. To let the pipeline run successfully, you must go to Settings > CI/CD and add project variables to your repository. Namely:
`GL_PROXMOX_API_USER` (can be visible)
`GL_PROXMOX_API_TOKEN` (masked and hidden)
`GL_ANSIBLE_VAULT` (masked and hidden)
Next, we configure the runner that will run this pipeline for us.
Create a GitLab Runner
In GitLab, navigate to Settings > CI/CD > Runners and select `New project runner`
Select a tag if you like (here I've skipped it because I only use one runner)
Set a description and "Lock to current projects"
Configure a global timeout (8 hours are more than enough for all our pipeline needs here)
Lastly, click on `Create runner`
On the next page, select the operating system of the runner (here Linux) and follow the steps 1 to 3 as they are described on the page
Note that on step `1` you should use `sudo` for the command!
Use `shell` in step 2
Ignore step 3
Once the `gitlab-runner` is running, it is ready to accept jobs from the pipeline
The End
And that concludes this guide to Proxmox + Packer + Terraform + Ansible + GitLab.
To test everything, you can go to Build > Pipeline schedules in GitLab and press New pipeline at the top right corner.
The image is an example of an earlier version of this blog post, where I only used three stages.

I hope this blog served as a well-rounded introduction to all these different technologies and gave you a good overview of how they can play together to form a fully automated pipeline.
Enjoy building your own automation pipeline!