Most Proxmox VE installs follow a similar path: ISO to USB, boot, walk through the installer, configure network, log in. The install itself takes 30-45 minutes if the hardware is ready.
What’s interesting is where it actually breaks. Most failures happen before the installer even starts: BIOS settings, network cabling, USB creation, wrong disk selection. The Proxmox VE installer itself is well-tested and rarely the problem. The mistakes that cost real time are the ones made before clicking “Install”. This guide on how to install Proxmox covers each decision point with operational context.
The walkthrough below covers Proxmox VE 9.1 (released November 2025, current stable as of mid-2026), with decisions called out where they matter and the failure modes that show up repeatedly in Proxmox forum installation threads.
After completing this guide, you’ll have:
- A working Proxmox VE 9.1 host on bare-metal hardware
- Web UI accessible at `https://your-ip:8006`
- Functional storage backend (LVM-thin or ZFS, depending on hardware)
- Networking configured and ready for VM deployment
TL;DR — critical install decisions when learning how to install Proxmox
- Wired Ethernet only (Wi-Fi unsupported)
- Disable RAID mode in BIOS (AHCI required for ZFS)
- Avoid `.local` hostnames (use an FQDN like `pve1.home.arpa`)
- Set a static IP during install (not DHCP)
- LVM-thin for low RAM, ZFS for mirrored storage
Who this guide is for
This guide on how to install Proxmox covers bare-metal installation through the official Proxmox VE installer ISO. Scope is small homelab and mini-PC deployments, single-node or first node of a small cluster. The intended reader is either setting up Proxmox for the first time or someone who hasn’t touched a Proxmox installer since 8.x: defaults around storage and networking changed meaningfully in 9.x.
Out of scope: enterprise multi-node clusters with FC SAN storage, automated deployment via PXE or Cloud-Init, complex partitioning schemes, or installation on top of an existing Debian system (covered briefly near the end). Those topics belong in dedicated articles. The how to install Proxmox flow described here is the path 95% of homelab readers actually need.
For hardware selection, see our mini PC for Proxmox guide. For post-install configuration, RAM planning, and storage architecture, see the linked articles at the end.
Most common first-time Proxmox mistakes
Five patterns show up consistently in Proxmox forum threads about new installs. All are preventable with five minutes of planning, and most come up in any honest discussion of how to install Proxmox.
Installing on a Wi-Fi-only system. The Proxmox VE installer does not support Wi-Fi, and Wi-Fi is unreliable as a management interface for a hypervisor anyway. Wired Ethernet is mandatory. If your hardware has only Wi-Fi, this is a hardware problem to solve before installing.
Using consumer RAID controllers before ZFS. Many motherboards ship with RAID mode enabled for SATA. ZFS needs the controller in AHCI mode. Hardware RAID hides the physical disks. ZFS needs direct disk access. This is a BIOS change, made before the installer runs.
Hostname mistakes. Two patterns. Using .local as a domain conflicts with mDNS and breaks resolution in unpredictable ways. Not setting an FQDN at all (hostname like pve instead of pve.home.arpa) causes subtle problems later. Use a proper FQDN even for single-host setups.
Skipping static IP assignment. The installer asks for network configuration. Accepting DHCP and “figuring it out later” is a recurring source of “can’t reach web UI” forum posts. Set a static IP during install.
Tiny boot SSD. Official minimum is 16GB. In practice, anything under 32GB causes problems within months as ISO templates, container images, and logs accumulate. 64GB is a reasonable floor for a homelab host.
Hardware requirements check
Before installing, verify your hardware against the realistic operational minimums. Proxmox VE has modest minimum requirements on paper, but what works in practice is higher.
CPU. Any 64-bit Intel or AMD processor with virtualization extensions (VT-x for Intel, AMD-V for AMD). These extensions are present on essentially every CPU from the last decade, but often disabled in BIOS. Enable them before installing. For PCI passthrough use cases, VT-d (Intel) or AMD-Vi (AMD) is also needed.
RAM. Official minimum is 1GB for Proxmox itself. Operationally this stops being realistic once real workloads run. 8GB is the practical floor for Proxmox plus a few VMs. 16GB+ is sensible for any real homelab. ZFS adds overhead for ARC (rough rule: 1GB ARC per 1TB of pool, configurable later). RAM planning details in our upcoming RAM sizing guide.
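The ARC ceiling can be adjusted after install if ZFS ends up holding too much memory. A minimal post-install sketch, assuming you want roughly a 4 GiB cap on a 16GB host; the module option and path follow standard ZFS-on-Linux conventions, and the 4 GiB target is an illustrative choice, not a recommendation for every pool:

```shell
# Run as root on the Proxmox host. zfs_arc_max is in bytes;
# 4 GiB here is an assumed target for a 16GB machine.
ARC_MAX=$((4 * 1024 * 1024 * 1024))              # 4294967296 bytes
echo "options zfs zfs_arc_max=${ARC_MAX}" > /etc/modprobe.d/zfs.conf
update-initramfs -u                               # applied on the next reboot
```

On ZFS-root installs the initramfs refresh matters; on LVM-thin installs without ZFS the setting is simply unused.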
Storage. Plan on at least a 32GB SSD for the boot drive. The Proxmox VE installer wipes whatever target disk you select; there is no “preserve existing data” option. Any data on the chosen disk is gone. Verify the disk selection twice before clicking through. For ZFS configurations, the installer needs to see all participating disks directly through the storage controller (AHCI mode, not RAID).
Network. Wired Ethernet only. Single 1GbE is fine for basic use. 2.5GbE or 10GbE matters for serious workloads, especially with VM migration or NFS-mounted storage. Wi-Fi adapters do not appear in the installer environment.
Check VT-x or AMD-V is enabled before going further. Reboot, enter BIOS (typically Del, F2, or F10 at POST), find the virtualization setting (often under CPU Configuration, Advanced, or Security), confirm it’s enabled. The complete BIOS checklist is in our mini PC for Proxmox guide.
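If some Linux already runs on the target machine (or you boot any live USB), the flag state is visible from userspace before you ever open the BIOS. A quick generic check, not Proxmox-specific:

```shell
# Count CPU threads advertising virtualization extensions.
# vmx = Intel VT-x, svm = AMD-V. An output of 0 means the extension
# is absent or disabled in BIOS.
grep -Ec 'vmx|svm' /proc/cpuinfo
```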
Download and verify the Proxmox VE ISO
Step one of how to install Proxmox is getting the right ISO. The current stable version is Proxmox VE 9.1 (released November 2025), based on Debian 13 “Trixie” with kernel 6.17. Proxmox 8.4 is still supported through August 2026: useful to know if you’re adding a node to an existing 8.x cluster, since mixed major versions create complications.
Download from the Proxmox VE downloads page. The file is around 1.5GB.
Verify the SHA256 checksum after downloading. The page lists the expected hash.
On Linux or macOS:
```shell
sha256sum proxmox-ve_9.1-1.iso
```
On Windows:
```shell
certutil -hashfile proxmox-ve_9.1-1.iso SHA256
```
The output should match the published hash exactly. If it doesn’t, re-download. Corrupted ISOs cause installer crashes and waste hours debugging “broken” hardware.
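`sha256sum` can also do the comparison for you via its `-c` mode, which avoids eyeballing 64 hex characters. The sketch below demonstrates the pattern on a throwaway file; for the real check, substitute the ISO filename and the hash published on the downloads page:

```shell
# Demonstrate the `sha256sum -c` workflow on a scratch file.
# Real usage: echo "<published-hash>  proxmox-ve_9.1-1.iso" | sha256sum -c -
# (note the two spaces between hash and filename)
printf 'demo' > /tmp/demo.iso
EXPECTED=$(sha256sum /tmp/demo.iso | cut -d' ' -f1)
echo "${EXPECTED}  /tmp/demo.iso" | sha256sum -c -    # prints "/tmp/demo.iso: OK"
```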
Create a bootable USB
Next stage of how to install Proxmox: getting the ISO onto a USB stick. Etcher works on all major platforms and is the lowest-friction option. Rufus on Windows is more flexible but needs configuration. The dd command on Linux is the most direct.
Linux:
```shell
dd bs=1M conv=fdatasync if=proxmox-ve_9.1-1.iso of=/dev/sdX
```
Replace `/dev/sdX` with your actual USB device. Get this wrong and you wipe the wrong drive. Run `lsblk` first to confirm which device is your USB stick.
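One way to make the `dd` target unambiguous before writing anything (the column names are standard `lsblk` output fields):

```shell
# List whole disks only (-d) with size, model, and transport.
# The USB stick shows "usb" in the TRAN column; everything else is
# an internal disk you do not want to overwrite.
lsblk -d -o NAME,SIZE,MODEL,TRAN
```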
Windows with Rufus: Open Rufus, select the ISO, select your USB device. Critically: when prompted about “ISO Image mode” vs “DD Image mode”, choose DD mode. ISO mode fails to boot the Proxmox VE installer on many systems.
macOS with Etcher: Open Etcher, select ISO, select USB device, write. Etcher handles the rest.
USB size: 8GB minimum; faster drives boot faster. Don’t reuse a USB stick with important data on it: the write process destroys whatever’s on the stick.
BIOS configuration before booting
Before the how to install Proxmox flow even starts, five BIOS settings need verifying. These are the ones that cause “installer won’t start” or “ZFS install failed” forum posts.
- VT-x / AMD-V enabled: virtualization itself
- VT-d / AMD-Vi enabled: IOMMU for PCI passthrough (enable now, decide later if needed)
- Secure Boot disabled: the Proxmox VE installer doesn’t play well with some Secure Boot configurations
- SATA mode set to AHCI: not RAID, especially if using ZFS
- Boot order: USB first, temporarily, so the installer boots
Save BIOS settings and reboot with the USB plugged in. The Proxmox VE installer welcome screen should appear within 30 seconds. If it doesn’t, the USB wasn’t created correctly or boot order didn’t save.
How to install Proxmox: the installer walkthrough
Select “Install Proxmox VE (Graphical)” from the welcome menu. The terminal UI option exists for headless installs but the graphical version is straightforward for first-time setup.
Accept the EULA. Standard step, nothing to decide.
Target disk and filesystem selection. This is the most important decision in the install. The installer asks which disk to install on, and clicking “Options” exposes the filesystem choice. The default is ext4 with LVM, which works for most simple cases.
Your choice depends on your hardware:
| If you have | Use | Why |
|---|---|---|
| Single SSD, 8-16GB RAM | LVM-thin (ext4) | Lower RAM overhead, simpler |
| 2 identical SSDs or NVMe drives | ZFS RAID1 (mirror) | Common homelab default, snapshots + redundancy |
| 4+ disks, 32GB+ RAM | ZFS RAID10 | Better random I/O for VMs + redundancy |
| Mini PC with only 8GB RAM | LVM-thin (ext4) | ZFS ARC overhead too aggressive at this RAM level |
| Existing data on target disk | STOP | Installer wipes — backup first, decide approach |
Two notes on this matrix. First, storage architecture decisions go deeper than this: pool layouts, special VDEVs, compression, ARC tuning. Those belong in our upcoming storage layout article; here we’re just picking which installer option to click. Second, ZFS is usually a good default for homelab virtualization if you have enough RAM, but it’s not universally better than LVM-thin. Lower-RAM systems run cleaner on LVM-thin.
Location, timezone, keyboard. Country and timezone matter for package mirror selection and log timestamps. Keyboard layout matters during emergency console access. Pick correctly.
Root password and email. Set a strong root password. This is the credential for both SSH and web UI. The email field is mandatory; it’s used for system alerts (failed backups, disk health warnings, package update notices). Use a real address.
Network configuration. This step has more wrong-answers-per-square-inch than any other.
- Management interface: Pick the wired NIC. If multiple NICs are present, pick the one connected to your network.
- Hostname (FQDN): Use a proper FQDN like `pve1.home.arpa` or `node1.lab.internal`. Avoid `.local`: it conflicts with mDNS and breaks resolution in unpredictable ways. Even for a single host, set a proper FQDN.
- IP address: Static, in CIDR notation. Example: `192.168.1.50/24`.
- Gateway: Your router’s IP, typically `192.168.1.1`.
- DNS server: Either your router or a public resolver like `1.1.1.1` or `9.9.9.9`.
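For reference, the installer writes these answers into `/etc/network/interfaces`. A sketch of the typical result using the example addresses above; the NIC name `enp3s0` is illustrative and yours will differ:

```
auto lo
iface lo inet loopback

iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.50/24
        gateway 192.168.1.1
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0
```

`vmbr0` is the Linux bridge your VMs attach to later; the physical NIC carries no IP of its own.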
Review the summary screen. Before clicking Install, three checks worth doing — these are the irreversible decisions:
⚠️ Before clicking Install:
- Verify target disk: all data on it will be destroyed
- Confirm static IP and gateway: wrong values mean unreachable host after reboot
- Confirm hostname / FQDN format: `.local` domains and bare hostnames cause weeks of weird issues
Click Install. The installer formats disks and copies packages. Usually 3-8 minutes. Reboot when prompted.
Should you install Proxmox on top of Debian?
There’s a second installation method when learning how to install Proxmox: install Debian 13 first, then add Proxmox VE packages on top. Officially supported, documented in the Proxmox wiki.
For most homelab cases, this is unnecessary complexity. The ISO installer handles partitioning, network configuration, and package selection in one step. Debian-on-top is common when repurposing an existing Debian server, or when you need custom partitioning schemes the ISO doesn’t expose.
The downside: troubleshooting becomes harder. You’re debugging Debian state plus Proxmox packages rather than a single tested installer path. Compatibility issues that don’t exist on ISO installs occasionally surface in Debian-on-top deployments.
If you’re not sure which path you need, the ISO is the right answer. Debian-on-top is for specific edge cases, not a general alternative.
First boot and first login
The how to install Proxmox process ends here, at first boot. After reboot, the system displays a login prompt and a URL like https://192.168.1.50:8006. Open that URL in a browser.
The browser warns about a self-signed certificate. This is normal. Proxmox generates a self-signed cert during setup. Accept the warning to proceed. Replacing the cert with a proper one (Let’s Encrypt or internal CA) is a post-install topic.
At the login screen, select Linux PAM as the authentication realm. Enter root as the username and the password you set during install. The web UI loads.
A popup appears: “You do not have a valid subscription for this server.” This is informational. Proxmox runs fine without a subscription. The subscription gives access to enterprise repositories with more conservative testing cycles. For homelab and small business use, the no-subscription repository is fine, but it requires a configuration change to suppress the popup. That fix lives in the post-install checklist (coming).
Click through the popup. The Datacenter view loads.
Quick sanity checks after install
Once the install completes, three commands confirm the system came up correctly. SSH to the host or use the Shell button in the web UI for the local node.
```shell
pveversion
```
Confirms you’re on the expected kernel and userspace version. Output should show pve-manager/9.1.x with kernel 6.17.x. Mismatch here usually means something went wrong during package installation. Worth investigating before adding workload.
```shell
ip a
```
Confirms the management bridge (vmbr0) exists, the management IP matches what you configured, and the physical NIC is up. Wrong interface name or missing bridge means VMs won’t get network connectivity later.
```shell
lsblk
```
Confirms the installer saw all expected disks. Missing disks suggest either a controller in the wrong mode (RAID instead of AHCI) or a hardware issue. Better to surface this now than after creating VMs.
If all three checks pass, the Proxmox install is in a clean state and ready for post-install configuration.
Common installation problems
Five problems show up repeatedly when people search for how to install Proxmox troubleshooting. Most are easily fixed if you know what to look for.
USB doesn’t boot. Symptom: system boots straight to the existing OS or shows “no bootable device”. Cause is usually BIOS-related: Secure Boot still enabled, USB not first in boot order, or the USB port disabled in BIOS. Re-verify BIOS settings. If the USB was created in Rufus with ISO mode instead of DD mode, the stick itself won’t boot; recreate it with DD mode.
“No bootable device” after install completes. Symptom: install finishes, system reboots, BIOS shows no bootable OS. Cause is usually boot order still pointing to USB (now removed) or GPT/MBR mismatch with BIOS expectations. Enter BIOS, set internal disk as first boot device, save and reboot.
Wrong NIC selected during install. Symptom: web UI unreachable after install. Modern Linux uses “predictable interface names” like `enp3s0` instead of `eth0`, and the NIC the installer picked may not match what you expected. SSH locally or use a serial console, run `ip a`, identify the correct interface, and edit `/etc/network/interfaces` to reflect reality.
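A sketch of that recovery from the local console. `ifreload` comes from ifupdown2, which Proxmox ships by default; a plain reboot applies the change too:

```shell
ip -br link                                     # spot the interface that is UP
grep -A4 'iface vmbr0' /etc/network/interfaces  # see which NIC the bridge uses
# Edit the bridge-ports line to the UP interface name, then apply:
ifreload -a
```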
Hostname `.local` causing resolution failures. Symptom: web UI loads slowly or intermittently, SSH hangs at hostname lookup, weird mDNS behavior. Fix: set a proper FQDN. Edit `/etc/hosts` and `/etc/hostname`, set the name to something like `pve1.home.arpa`, reboot.
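As a reference for the end state, following the Debian convention (short name in `/etc/hostname`, with `/etc/hosts` mapping the host’s IP to the FQDN first, then the short name). The IP and names here are illustrative:

```
# /etc/hostname
pve1

# /etc/hosts
127.0.0.1 localhost
192.168.1.50 pve1.home.arpa pve1
```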
ZFS install failed with errors about disks. Symptom: installer errors out when selecting ZFS RAID1 or RAID10. Cause is usually the SATA controller in RAID mode rather than AHCI. ZFS needs direct disk access. Reboot into BIOS, change SATA mode to AHCI, reboot installer. If the controller is true hardware RAID (Dell PERC, HP Smart Array), put it in HBA/IT mode instead.
Where to go next
The how to install Proxmox flow is just the foundation. Several immediate next steps make the host actually usable.
- Post-install configuration: disable enterprise repo, enable no-subscription repo, suppress nag screen, configure firewall basics, set up email alerts. This is the checklist that turns a fresh install into a usable host. (Coming next.)
- RAM sizing for your workload: how much RAM you actually need varies dramatically by use case. (Coming.)
- Storage architecture decisions: ZFS vs LVM-thin vs Ceph, pool layouts, what breaks in real homelabs. (Coming.)
For hardware context, see our mini PC for Proxmox guide covering specific hardware picks for homelab use. For platform comparison if you’re still deciding, see Proxmox vs ESXi Free 2026 and What is Proxmox VE.
More from the Proxmox series
This article is part of our Proxmox knowledge cluster. Related guides:
- Choosing a platform: Proxmox vs ESXi Free 2026 — comparison after Broadcom’s VMware changes.
- Understanding the platform: What is Proxmox VE? — KVM, LXC, and clustering on Debian explained.
- Hardware selection: Best Mini PC for Proxmox (2026) — real homelab picks and IOMMU/NIC pitfalls.
- Installation: How to Install Proxmox VE 9.1 — step-by-step guide with the ZFS vs LVM-thin decision matrix.
- Migration: ESXi to Proxmox Migration — VirtIO trap, Windows boot errors, ZFS gotchas.
More guides on post-install configuration, storage layouts, and networking are coming as part of the Proxmox foundation series.