Four playbooks. Zero manual steps. From bare node to configured VM.
This pipeline takes a Proxmox VE node and a stock Debian 13 netinst ISO and produces a fully configured, SSH-ready VM — with user accounts, base packages, and hardened SSH — by running four Ansible playbooks in sequence. No clicking in the Proxmox web UI, no interactive installer prompts, no manual inventory edits.
The end result is a Debian 13.4 VM with a locked-down ansible automation user, a configurable interactive end-user account, a base tool set, and SSH password auth disabled.
```
proxmox-ve-vms-ansible/
├── build-debian-preseed-iso.yml    # Remaster Debian netinst ISO with preseed
├── create-vm-from-iso-proxmox.yml  # Create/delete VM shells via Proxmox API
├── auto-install-debian.yml         # Boot + wait for install + discover IP
├── setup-debian-base.yml           # Post-install base config and user creation
├── fetch-iso.yml                   # Download a Debian ISO to Proxmox ISO store
├── preseed/
│   └── debian-preseed.cfg.j2       # Jinja2 preseed template (fully unattended d-i)
├── group_vars/all/
│   ├── main.yml                    # Proxmox API credentials (Ansible Vault)
│   ├── preseed_vars.yml            # All preseed + install variables
│   └── vms.yml                     # VM definitions (specs, ISO, preseed flags)
└── inventory/
    └── hosts.ini                   # proxmox-bms group + auto-populated new-debian-vms
```
All VM specs — CPU, RAM, disk size, ISO file, whether to preseed — live in a single vms.yml file. Adding a new VM is one YAML block.
| Requirement | Detail |
|---|---|
| Proxmox VE node | API token with VM.Allocate, VM.Config.*, Datastore.AllocateSpace |
| Debian 13 netinst ISO | debian-13.1.0-amd64-netinst.iso already in the Proxmox ISO store |
| Ansible control node | Ansible 2.15+, community.proxmox collection |
| SSH key pair | ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub on the control node |
To start fresh, this tag removes the old VM along with all of its storage volumes and automatically cleans up its inventory/hosts.ini entry. The tag is marked `never` in the playbook, so it only fires when explicitly requested — it will never run during a normal pipeline execution.
```
ansible-playbook -i inventory/hosts.ini \
  create-vm-from-iso-proxmox.yml --tags "removeVMs"
```
The delete goes through the raw Proxmox REST API with ?purge=1&destroy-unreferenced-disks=1 — the proxmox_kvm module's state: absent leaves storage volumes behind by default.
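A raw-API delete like this can be sketched as an `ansible.builtin.uri` task. The `proxmox_api_*` and `proxmox_node` variable names below are illustrative assumptions, not necessarily the ones defined in `main.yml`:

```yaml
# Sketch only: destroy a VM and purge all of its disks via the raw REST API.
# proxmox_api_host / proxmox_api_user / proxmox_api_token_id /
# proxmox_api_token_secret / proxmox_node are assumed variable names.
- name: Destroy VM and purge unreferenced disks
  ansible.builtin.uri:
    url: "https://{{ proxmox_api_host }}:8006/api2/json/nodes/{{ proxmox_node }}/qemu/{{ item.vmid }}?purge=1&destroy-unreferenced-disks=1"
    method: DELETE
    headers:
      Authorization: "PVEAPIToken={{ proxmox_api_user }}!{{ proxmox_api_token_id }}={{ proxmox_api_token_secret }}"
    validate_certs: false
  loop: "{{ vms }}"
```

The `purge=1` and `destroy-unreferenced-disks=1` query parameters are what guarantee the storage volumes go away with the VM.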
This playbook SSHs into the Proxmox node and runs entirely there. It remasters the stock Debian 13 netinst ISO into a custom version that installs Debian unattended without any human interaction.
What it does:
- Unpacks the stock ISO with `xorriso` into a temp working directory
- Renders the preseed template and drops `preseed.cfg` into the ISO root
- Patches the boot config to append `auto=true priority=critical file=/cdrom/preseed.cfg`, sets `timeout=1` and `default=0`, and marks the install entry `menu default`
- Removes the `spkgtk.cfg` speech-synthesis auto-timeout that would hijack the BIOS boot menu after 30 seconds

```
ansible-playbook -i inventory/hosts.ini build-debian-preseed-iso.yml
```
The resulting ISO is placed directly in the Proxmox ISO store at /var/lib/vz/template/iso/. The ISO only needs to be rebuilt when preseed variables change (e.g. SSH key rotation, locale, extra packages).
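The extract step can be sketched as a task on the Proxmox node. The `workdir` variable is an assumption for illustration:

```yaml
# Sketch only: unpack the stock netinst ISO into a working directory
# with xorriso's osirrox extraction mode. `workdir` is an assumed variable.
- name: Extract netinst ISO contents
  ansible.builtin.command: >
    xorriso -osirrox on
    -indev /var/lib/vz/template/iso/{{ preseed_iso_src_file }}
    -extract / {{ workdir }}/iso
  args:
    creates: "{{ workdir }}/iso/isolinux"
```

The `creates` guard keeps the extraction idempotent across re-runs of the build playbook.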
Talks to the Proxmox REST API from localhost — no SSH to the Proxmox node required. Creates the VM shell, provisions a 20 GB disk on local-lvm, mounts the preseed ISO as a CD-ROM on ide2, and sets boot order to ISO-first.
VM specs are defined in group_vars/all/vms.yml:
```yaml
- name: "ansible-debian-01"
  vmid: 510
  boot: "order=ide2;scsi0;net0"
  memory: 2048
  cores: 2
  disk_size_gb: 20
  storage: "local-lvm"
  iso_file: "debian-13-amd64-preseed.iso"
  preseed_install: true
```
```
ansible-playbook -i inventory/hosts.ini \
  create-vm-from-iso-proxmox.yml --tags "createVMs,createDisks,mountIso,bootOrder"
```
This is the core of the pipeline. Three plays run entirely from localhost via the Proxmox API — no SSH to the VM until the very end of PLAY 3.
The preseed's late_command ends with poweroff -f — the VM halts immediately after SSH key injection and sudo config, before the d-i "Installation complete" dialog can appear. This gives the playbook a clean, detectable signal to act on.
```
ansible-playbook -i inventory/hosts.ini auto-install-debian.yml
```
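The install-wait can be sketched as a status poll against the API, retrying until the preseed's `poweroff -f` flips the VM to stopped. Module namespace and `proxmox_api_*` variable names are assumptions:

```yaml
# Sketch only: poll VM status until the installer's `poweroff -f` fires.
# 120 retries x 20 s = 2400 s, matching preseed_ssh_wait_timeout's default.
- name: Wait for installer to power the VM off
  community.proxmox.proxmox_vm_info:
    api_host: "{{ proxmox_api_host }}"
    api_user: "{{ proxmox_api_user }}"
    api_token_id: "{{ proxmox_api_token_id }}"
    api_token_secret: "{{ proxmox_api_token_secret }}"
    node: "{{ proxmox_node }}"
    vmid: "{{ item.vmid }}"
  register: vm_state
  until: vm_state.proxmox_vms[0].status == 'stopped'
  retries: 120
  delay: 20
  loop: "{{ vms }}"
```

Because the stopped state only occurs after `late_command` completes, reaching it implies the SSH key and sudo config are already in place.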
Connects to the new VM as the ansible user over SSH and handles the post-install baseline. This playbook is fully idempotent — safe to re-run at any time.
- Runs an `apt` safe-upgrade
- Installs base packages: htop, btop, vim, curl, wget, git, tmux, net-tools, bash-completion, unzip, jq, qemu-guest-agent
- Sets vim as the default system editor via `update-alternatives`
- Hardens SSH: `PasswordAuthentication no`, `ChallengeResponseAuthentication no`
- Ensures the ansible user's NOPASSWD sudoers entry

```
ansible-playbook -i inventory/hosts.ini setup-debian-base.yml
```
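The SSH hardening step can be sketched with `lineinfile` plus a restart handler (task and handler names here are illustrative, not the playbook's own):

```yaml
# Sketch only: disable password-based SSH auth, validating the config
# before it is written so a typo can never lock you out.
- name: Disable SSH password authentication
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PasswordAuthentication'
    line: 'PasswordAuthentication no'
    validate: 'sshd -t -f %s'
  notify: Restart sshd

- name: Disable challenge-response authentication
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?ChallengeResponseAuthentication'
    line: 'ChallengeResponseAuthentication no'
    validate: 'sshd -t -f %s'
  notify: Restart sshd
```

The `validate` argument runs `sshd -t` against the candidate file, so an invalid config is rejected before it replaces the live one.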
preseed/debian-preseed.cfg.j2 is a Jinja2 template rendered by Ansible at ISO build time. Every value comes from preseed_vars.yml — nothing is hardcoded in the template itself.
- Locale, keymap, and timezone from `preseed_locale`, `preseed_keymap`, `preseed_timezone`
- Set `preseed_ip` per-VM in vms.yml for a static address
- Installer package selection: `openssh-server sudo qemu-guest-agent curl wget vim`
- `late_command` — runs on the installer host OS after the target system is installed:
  - Writes `/home/ansible/.ssh/authorized_keys` with the control node's public key
  - Sets permissions: `700` on `.ssh`, `600` on `authorized_keys`
  - Creates `/etc/sudoers.d/ansible` with `NOPASSWD:ALL`
  - Enables `qemu-guest-agent` via systemctl
  - Ends with `poweroff -f` — halts the VM immediately, triggering the Ansible poll to move forward

The `poweroff -f` at the end of `late_command` is what makes the whole pipeline reliable. It fires before the d-i "Installation complete / Press Continue to reboot" dialog, giving the playbook a deterministic signal instead of having to guess when the installer is done.
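Condensed, such a `late_command` might look like the fragment below. This is a sketch, not the repo's template: the real `debian-preseed.cfg.j2` substitutes variables from `preseed_vars.yml`, and the key string here is a placeholder:

```
# Sketch only — the actual template interpolates preseed_vars.yml values.
d-i preseed/late_command string \
    mkdir -p /target/home/ansible/.ssh ; \
    echo "<control-node public key>" > /target/home/ansible/.ssh/authorized_keys ; \
    in-target chown -R ansible:ansible /home/ansible/.ssh ; \
    in-target chmod 700 /home/ansible/.ssh ; \
    in-target chmod 600 /home/ansible/.ssh/authorized_keys ; \
    echo "ansible ALL=(ALL) NOPASSWD:ALL" > /target/etc/sudoers.d/ansible ; \
    in-target systemctl enable qemu-guest-agent ; \
    poweroff -f
```

Note the `in-target` prefix: those commands run chrooted into the freshly installed system, while the bare `poweroff -f` runs in the installer environment itself.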
Two accounts are created across the pipeline with different purposes:
| Account | Created by | Auth | Purpose |
|---|---|---|---|
| `ansible` | preseed `late_command` | SSH key only, NOPASSWD sudo, locked password | Ansible automation — never log in interactively |
| `cartman` (configurable) | `setup-debian-base.yml` | SSH key + sudo password | Interactive day-to-day use |
Set these in group_vars/all/preseed_vars.yml:
```yaml
vm_enduser_name: "cartman"
vm_enduser_groups: "sudo"
vm_enduser_shell: "/bin/bash"
vm_enduser_ssh_pub_key_file: "~/.ssh/id_rsa.pub"
```
To set a sudo password, generate a SHA-512 hash with openssl and vault-encrypt it:
```
openssl passwd -6 'yourpassword'
ansible-vault encrypt_string 'the-hash-output' --name 'vm_enduser_password_hash'
```
Add the vault block to group_vars/all/main.yml. Without it the account has a locked password — SSH key login only, which is fine if you don't need console access.
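The vault block in `main.yml` would take roughly this shape. The ciphertext below is a placeholder for whatever `ansible-vault encrypt_string` actually emits:

```yaml
# Sketch only: the ciphertext is a placeholder, not real vault output.
vm_enduser_password_hash: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  <ciphertext emitted by ansible-vault encrypt_string>
```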
| Variable | File | Default | Description |
|---|---|---|---|
| `preseed_iso_src_file` | preseed_vars.yml | `debian-13.1.0-amd64-netinst.iso` | Source netinst ISO filename |
| `preseed_iso_dest_file` | preseed_vars.yml | `debian-13-amd64-preseed.iso` | Output preseed ISO filename |
| `preseed_debian_suite` | preseed_vars.yml | `trixie` | Debian suite for APT mirror |
| `preseed_ansible_user` | preseed_vars.yml | `ansible` | Automation user created during install |
| `preseed_ssh_pub_key_file` | preseed_vars.yml | `~/.ssh/id_rsa.pub` | SSH public key injected for ansible user |
| `preseed_ssh_priv_key_file` | preseed_vars.yml | `~/.ssh/id_rsa` | Private key path written to inventory |
| `preseed_ssh_wait_timeout` | preseed_vars.yml | `2400` | Max seconds to wait for install (40 min) |
| `preseed_boot_wait_seconds` | preseed_vars.yml | `30` | Seconds after power-on before IP query |
| `vm_enduser_name` | preseed_vars.yml | `""` (disabled) | End-user login account name |
| `vm_enduser_groups` | preseed_vars.yml | `sudo` | Groups for the end-user account |
| `vm_enduser_password_hash` | main.yml (vault) | `!` (locked) | SHA-512 password hash for end-user sudo |
The full repository is available on GitHub: