The following steps guide you through the post-installation setup of Proxmox VE. At the time of writing I had installed Proxmox VE 8.2 with kernel 6.8.
For me, Plex runs in Docker inside an Ubuntu LXC, together with other services like Tdarr that need the iGPU. At the time of writing I could not pass the iGPU through to a Linux VM, only to the LXC.
* 32GB RAM
* Intel® Core™ i5-1340P
* 1TB M.2 SSD
You can use this link.
An internet connection is required when installing Proxmox VE!
apt update
apt full-upgrade
apt install pve-kernel-6.2
reboot
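After the reboot you can quickly confirm the upgrade took effect (an optional check with standard tools):
# show the Proxmox VE version and the kernel currently in use
pveversion
uname -r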
1. Delete local-lvm under Datacenter -> Storage
2. Open shell in Proxmox node:
* lvremove /dev/pve/data (answer y when prompted)
* lvresize -l +100%FREE /dev/pve/root
* resize2fs /dev/mapper/pve-root
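To verify the change (an optional check; the exact output differs per system):
# the data LV should be gone and root should now use the freed space
lvs
df -h /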
Use the Proxmox VE Post Install script provided by tteck
If the bash command is not working:
Shell:
1. nano /etc/resolv.conf
2. Change the nameserver to your router's DNS
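For example, if your router is at 192.168.1.1 (a placeholder; use your own router's IP), /etc/resolv.conf would contain:
nameserver 192.168.1.1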
You can use this link.
NOTE: Upgrade to Ubuntu 24.04 using this or that
VM shell:
Locate the root partition by running lsblk. Run cfdisk and expand your root partition into the available free space (present after resizing the VM disk from the Proxmox VE GUI and rebooting the VM) using the Resize option.
Next, get the path of the Logical Volume (LV) we want to alter using sudo lvdisplay. Note the LV Path value; we will use it in the next command.
Run sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv to extend the volume.
Lastly, run sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv to grow the filesystem to the desired size.
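Putting the sequence together, a minimal sketch assuming the default Ubuntu LVM layout (/dev/ubuntu-vg/ubuntu-lv) mentioned above:
# inspect the disk layout and grow the root partition interactively
lsblk
sudo cfdisk
# note the LV path, extend it into the freed space, then grow the filesystem
sudo lvdisplay
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv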
1. Use tteck Operating System script.
NOTE: Upgrade to Ubuntu 24.04 using this or that
2. Install Drivers in the LXC
apt install cifs-utils
apt install -y curl gnupg-utils
apt install net-tools
apt update && apt upgrade -y
3. Install dependencies in the LXC
apt-get update -y
apt-get install -y ocl-icd-libopencl1
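If you prefer, the packages from the two steps above can be installed in one go (same packages, just combined):
apt update && apt upgrade -y
apt install -y cifs-utils curl gnupg-utils net-tools ocl-icd-libopencl1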
Enable Intel Virtualization Technology and VT-d in the BIOS.
The host will lose access to the iGPU when passing it to a VM! The GPU is located in /dev/dri: card0 renderD128
1. run nano /etc/default/grub
2. Edit as follows: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
Also add the parameter i915.enable_gvt=1 to enable GVT-g on 5th-generation (Broadwell) through 10th-generation (Comet Lake) Intel Core CPUs (see the example after this list). Some newer CPUs, like the i5-1340P, use SR-IOV instead of GVT-g; at this moment I don't know how to enable SR-IOV on 13th-generation CPUs.
3. Save changes
4. update grub: update-grub
5. run nano /etc/modules
6. Add modules:
# Modules required for PCI passthrough
vfio
vfio_pci
vfio_virqfd
vfio_iommu_type1
# Modules required for Intel GVT.
kvmgt
vfio-mdev
i915
7. Save changes
8. Update the initramfs:
update-initramfs -u -k all
9. Reboot
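For a GVT-g capable CPU (Broadwell through Comet Lake) the resulting line in /etc/default/grub would look like the sketch below (keep any other options you already have there):
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt i915.enable_gvt=1"
After the reboot you can confirm the parameters were applied with cat /proc/cmdline.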
1. Open node's shell and run dmesg | grep -e DMAR -e IOMMU
[ 0.034902] ACPI: DMAR 0x0000000042791000 000088 (v02 INTEL EDK2 00000002 01000013)
[ 0.034960] ACPI: Reserving DMAR table memory at [mem 0x42791000-0x42791087]
[ 0.089314] DMAR: IOMMU enabled
[ 0.190011] DMAR: Host address width 39
[ 0.190013] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[ 0.190028] DMAR: dmar0: reg_base_addr fed90000 ver 4:0 cap 1c0000c40660462 ecap 29a00f0505e
[ 0.190032] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[ 0.190037] DMAR: dmar1: reg_base_addr fed91000 ver 5:0 cap d2008c40660462 ecap f050da
[ 0.190040] DMAR: RMRR base: 0x0000004c000000 end: 0x000000503fffff
[ 0.190045] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
[ 0.190047] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[ 0.190049] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.195377] DMAR-IR: Enabled IRQ remapping in x2apic mode
[ 1.033749] pci 0000:00:02.0: DMAR: Skip IOMMU disabling for graphics
[ 1.305721] DMAR: No ATSR found
[ 1.305722] DMAR: No SATC found
[ 1.305724] DMAR: IOMMU feature fl1gp_support inconsistent
[ 1.305726] DMAR: IOMMU feature pgsel_inv inconsistent
[ 1.305728] DMAR: IOMMU feature nwfs inconsistent
[ 1.305730] DMAR: IOMMU feature dit inconsistent
[ 1.305731] DMAR: IOMMU feature sc_support inconsistent
[ 1.305732] DMAR: IOMMU feature dev_iotlb_support inconsistent
[ 1.305734] DMAR: dmar0: Using Queued invalidation
[ 1.305741] DMAR: dmar1: Using Queued invalidation
[ 1.306955] DMAR: Intel(R) Virtualization Technology for Directed I/O
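To see how the devices are grouped before passing anything through, you can list the IOMMU groups (the loop below only reads sysfs and calls lspci):
# print every PCI device together with its IOMMU group number
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU group %s ' "$n"
    lspci -nns "${d##*/}"
done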
1. Run in the pve shell: nano /etc/modprobe.d/pve-blacklist.conf
2. If the file already contains the following, then it is fine:
# This file contains a list of modules which are not supported by Proxmox VE
# nvidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nvidiafb
- Run:
nano /etc/modprobe.d/iommu_unsafe_interrupts.conf
- Add line and save:
options vfio_iommu_type1 allow_unsafe_interrupts=1
- kvm.conf and vfio.conf in /etc/modprobe.d/ must be empty
- Reboot!
1. In the pve shell run lspci -nnv | grep VGA (run just lspci to see all devices)
2. Can check GPU kernel driver and modules: lspci -nnk | grep VGA -A 8
3. Default PCI is 00:02.0
00:02.0 VGA compatible controller [0300]: Intel Corporation Raptor Lake-P [Iris Xe Graphics] [8086:a7a0] (rev 04) (prog-if 00 [VGA controller])
Select the VM, Shutdown/Stop it, and go to Hardware. Add a new PCI Device. Select Raw Device and choose the iGPU (00:02.0 in this example). Select MDev Type: either i915-GVTg_V5_4 or i915-GVTg_V5_8 (I don't really know the difference). Start the VM!
Open the VM shell and run lspci -nnv | grep VGA to check that the GPU has passed through. Or check the directory with cd /dev/dri and ls:
by-path card0 renderD128
Good to go with hardware acceleration now.
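Optionally, you can confirm that hardware acceleration is really usable by installing vainfo inside the guest (an extra package not required by this guide; on Intel you may also need intel-media-va-driver-non-free) and checking that it lists codec profiles:
apt install -y vainfo
vainfo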
This works for Privileged containers:
1. Open a pve shell and edit the container config file: nano /etc/pve/lxc/[container_id].conf
2. Add lines: For Proxmox >= 7.0
#for transcoding
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
#for machine-learning
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/bus/usb/ dev/bus/usb/ none bind,optional,create=dir
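Inside the container you can then check that the devices are visible:
# the card and render node from the host should appear here
ls -l /dev/dri
If a service inside the container cannot access them, adding its user to the video and render groups usually helps (for example usermod -aG video,render <user>, where <user> is a placeholder for whatever account runs your media service).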
First you have to mount the drive/usb!!!
1. Edit: nano /etc/pve/lxc/[container_id].conf
2. Add the line: mp0: /mnt/usb,mp=/mnt/usb. You need to create the /mnt/usb directory on both the Proxmox host and in the LXC!
where:
/mnt/usb is the directory of the mounted drive on the Proxmox host
mp=/mnt/usb is the directory inside the LXC where the drive will be mounted
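A minimal sketch of the host-side mount that has to exist first, assuming an ext4 drive (YOUR-UUID is a placeholder; check the real UUID and filesystem type with blkid):
blkid
mkdir -p /mnt/usb
nano /etc/fstab   # add: UUID=YOUR-UUID /mnt/usb ext4 defaults,nofail 0 2
mount -a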
Open pve shell:
- Get drive UUID:
ls -n /dev/disk/by-uuid/
- Run:
/sbin/qm set [VM-ID] -virtioX /dev/disk/by-uuid/[UUID]
where X in -virtioX is a number from 0 to 15.
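For example, attaching a disk to VM 100 as its third virtio disk could look like this (the VM ID, slot and UUID are placeholders for your own values):
/sbin/qm set 100 -virtio2 /dev/disk/by-uuid/0a1b2c3d-ffff-4444-aaaa-1234567890ab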
Installing and configuring Powertop to optimize power consumption on the Proxmox VE host.
1. To install powertop run apt-get install -y powertop
2. Create a new systemd service that will run powertop after every reboot: nano /etc/systemd/system/powertop.service
[Unit]
Description=Auto-tune power savings (oneshot)
[Service]
Type=oneshot
ExecStart=/usr/sbin/powertop --auto-tune
RemainAfterExit=true
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl enable powertop.service
- WARNING!!! Enabling powertop.service may crash the Proxmox GUI. Consider disabling this service!
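If you do keep it enabled, you can check whether the tuning ran after a reboot with systemctl status powertop.service.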
This installer passes the HDDs' information from the host to the guest.
Proxmox VE shell:
1. Install Scrutiny. Linux variant: link
2. Create a new timer: nano /etc/systemd/system/scrutiny.timer
and add:
[Unit]
Description=Scrutiny scheduler
[Timer]
OnUnitActiveSec=120m
OnBootSec=120m
[Install]
WantedBy=timers.target
- Create a new service: nano /etc/systemd/system/scrutiny.service and add:
[Unit]
Description=Scrutiny job
[Service]
Type=oneshot
ExecStart=/opt/scrutiny/bin/scrutiny-collector-metrics-linux-amd64 run --api-endpoint "http://SCRUTINY_HOST:SCRUTINY_PORT"
- Replace SCRUTINY_HOST and SCRUTINY_PORT with the correct details for the existing Scrutiny instance. To enable the service, run the following commands in this order:
systemctl daemon-reload
systemctl enable scrutiny.service
systemctl enable scrutiny.timer
systemctl start scrutiny.timer
NOTE: The same steps need to be done inside the OMV VM so that the media drives report their SMART metrics to Scrutiny.
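To confirm the timer is active and trigger a first collection run manually (standard systemd commands):
systemctl list-timers scrutiny.timer
systemctl start scrutiny.service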
- Reserve IP for Proxmox VE