
Proxmox scripts

Proxmox VE

The following steps guide you through the post-installation setup of Proxmox VE. At the time of writing, Proxmox 8.2 with kernel 6.8 was installed.

Plex runs for me in Docker inside an Ubuntu LXC, together with other services such as Tdarr that need the iGPU. At the time of writing I could not pass the iGPU through to a Linux VM, only to an LXC.

Hardware

* 32GB RAM
* Intel® Core™ i5-1340P
* 1TB M.2 SSD

Proxmox VE installation

Can use this link.
An internet connection is necessary when installing Proxmox VE!

Post installation update (to support 13th gen Intel CPU or newer):

apt update
apt full-upgrade
apt install pve-kernel-6.2
reboot

Increase storage - post install

1. Delete local-lvm under Datacenter -> Storage
2. Open a shell on the Proxmox node and run:
* lvremove /dev/pve/data (confirm with y)
* lvresize -l +100%FREE /dev/pve/root
* resize2fs /dev/mapper/pve-root

Post installation script

Use the Proxmox VE Post Install script provided by tteck.

If the bash command is not working, open a shell and:
1. nano /etc/resolv.conf
2. change the nameserver to your router's DNS
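For reference, a minimal /etc/resolv.conf pointing at the router; 192.168.1.1 is a placeholder address, substitute your own router's IP:

```
# /etc/resolv.conf -- use the router as the DNS resolver.
# 192.168.1.1 is a placeholder; use your router's actual address.
nameserver 192.168.1.1
```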

Create Ubuntu VM

Can use this link.

NOTE: Upgrade to Ubuntu 24.04 using this or that

Expanding root drive

VM shell:

Locate the root partition by running lsblk. Run cfdisk and, using the Resize option, expand the root partition into the available free space (present after resizing the VM disk from the Proxmox VE GUI and rebooting the VM).

Next, get the path of the Logical Volume (LV) we want to alter using sudo lvdisplay. Note the LV Path value; we will use it in the next command.

Run sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv to extend the volume.

Lastly, run sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv to grow the filesystem to the desired size.
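The two device names above refer to the same volume: device-mapper escapes hyphens inside VG and LV names by doubling them, then joins the two names with a single hyphen. A small illustrative sketch of that mapping (the helper function is mine, not part of LVM):

```shell
# dm_name VG LV -> the /dev/mapper path that corresponds to /dev/VG/LV.
# Device-mapper doubles hyphens inside each name, then joins VG and LV
# with a single '-'.
dm_name() {
  vg=$(printf '%s' "$1" | sed 's/-/--/g')
  lv=$(printf '%s' "$2" | sed 's/-/--/g')
  printf '/dev/mapper/%s-%s\n' "$vg" "$lv"
}

dm_name ubuntu-vg ubuntu-lv   # -> /dev/mapper/ubuntu--vg-ubuntu--lv
dm_name pve root              # -> /dev/mapper/pve-root
```

This is why /dev/ubuntu-vg/ubuntu-lv and /dev/mapper/ubuntu--vg-ubuntu--lv are interchangeable in the commands above.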

Create Ubuntu 24.04 LXC

1. Use tteck Operating System script.

NOTE: Upgrade to Ubuntu 24.04 using this or that

2. Install packages in the LXC

  apt update && apt upgrade -y
  apt install -y cifs-utils curl gnupg-utils net-tools

3. Install dependencies in the LXC

  apt-get update -y
  apt-get install -y ocl-icd-libopencl1

Passthrough Intel iGPU

Enable Intel Virtualization Technology and VT-d in BIOS

The host will lose access to the iGPU when it is passed to a VM! The GPU is located in /dev/dri: card0, renderD128

Enabling PCI passthrough

Node Shell

1. Run nano /etc/default/grub
2. Edit as follows: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt". To enable GVT on 5th-generation (Broadwell) through 10th-generation (Comet Lake) Intel Core CPUs, also add the parameter i915.enable_gvt=1. Some newer CPUs, such as the i5-1340P, use SR-IOV instead of GVT; at the moment I don't know how to enable SR-IOV on a 13th-gen CPU.
3. Save changes
4. Update grub: update-grub
5. Run nano /etc/modules
6. Add modules:

  # Modules required for PCI passthrough
  vfio
  vfio_pci
  vfio_virqfd
  vfio_iommu_type1
  # Modules required for Intel GVT
  kvmgt
  vfio-mdev
  i915

7. Save changes
8. Update the initramfs: update-initramfs -u -k all
9. Reboot
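For reference, the relevant line in /etc/default/grub should end up looking like this (the i915.enable_gvt=1 flag applies only on GVT-capable CPUs, so drop it on newer SR-IOV parts):

```
# /etc/default/grub -- kernel command line with IOMMU passthrough enabled
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt i915.enable_gvt=1"
```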

Check IOMMU is enabled

1. Open node's shell and run dmesg | grep -e DMAR -e IOMMU

[    0.034902] ACPI: DMAR 0x0000000042791000 000088 (v02 INTEL  EDK2     00000002      01000013)
[    0.034960] ACPI: Reserving DMAR table memory at [mem 0x42791000-0x42791087]
[    0.089314] DMAR: IOMMU enabled
[    0.190011] DMAR: Host address width 39
[    0.190013] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.190028] DMAR: dmar0: reg_base_addr fed90000 ver 4:0 cap 1c0000c40660462 ecap 29a00f0505e
[    0.190032] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.190037] DMAR: dmar1: reg_base_addr fed91000 ver 5:0 cap d2008c40660462 ecap f050da
[    0.190040] DMAR: RMRR base: 0x0000004c000000 end: 0x000000503fffff
[    0.190045] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.190047] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.190049] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.195377] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    1.033749] pci 0000:00:02.0: DMAR: Skip IOMMU disabling for graphics
[    1.305721] DMAR: No ATSR found
[    1.305722] DMAR: No SATC found
[    1.305724] DMAR: IOMMU feature fl1gp_support inconsistent
[    1.305726] DMAR: IOMMU feature pgsel_inv inconsistent
[    1.305728] DMAR: IOMMU feature nwfs inconsistent
[    1.305730] DMAR: IOMMU feature dit inconsistent
[    1.305731] DMAR: IOMMU feature sc_support inconsistent
[    1.305732] DMAR: IOMMU feature dev_iotlb_support inconsistent
[    1.305734] DMAR: dmar0: Using Queued invalidation
[    1.305741] DMAR: dmar1: Using Queued invalidation
[    1.306955] DMAR: Intel(R) Virtualization Technology for Directed I/O
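The check can also be scripted; a small sketch that scans kernel log lines for the DMAR banner (demonstrated here against a captured sample line rather than live dmesg output):

```shell
# check_iommu reads kernel log lines on stdin and reports whether the
# DMAR driver announced that the IOMMU is enabled.
check_iommu() {
  if grep -q 'DMAR: IOMMU enabled'; then
    echo 'IOMMU enabled'
  else
    echo 'IOMMU not enabled'
  fi
}

# Live usage on the node: dmesg | check_iommu
# Demo against a captured log line:
echo '[    0.089314] DMAR: IOMMU enabled' | check_iommu
```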

Check module i915 is not blacklisted

1. Run in the pve shell: nano /etc/modprobe.d/pve-blacklist.conf
2. If the file looks like this, i915 is not blacklisted and all is fine:

# This file contains a list of modules which are not supported by Proxmox VE

# nvidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nvidiafb

3. Run: nano /etc/modprobe.d/iommu_unsafe_interrupts.conf
4. Add this line and save: options vfio_iommu_type1 allow_unsafe_interrupts=1
5. kvm.conf and vfio.conf in /etc/modprobe.d/ must be empty
6. Reboot!
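The unsafe-interrupts option can also be written without opening an editor; a sketch that appends the line only if it is not already present, so it is safe to re-run:

```shell
# Write the vfio option file in one shot (same effect as editing it by hand).
mkdir -p /etc/modprobe.d
line='options vfio_iommu_type1 allow_unsafe_interrupts=1'
conf=/etc/modprobe.d/iommu_unsafe_interrupts.conf
grep -qxF "$line" "$conf" 2>/dev/null || echo "$line" >> "$conf"
```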

Check iGPU

1. pve shell: lspci -nnv | grep VGA. Run plain lspci to see all devices
2. Check the GPU kernel driver and modules: lspci -nnk | grep VGA -A 8
3. The default PCI address is 00:02.0

00:02.0 VGA compatible controller [0300]: Intel Corporation Raptor Lake-P [Iris Xe Graphics] [8086:a7a0] (rev 04) (prog-if 00 [VGA controller])

Add iGPU to VM - Only for GVT!!!

Select the VM, Shutdown/Stop it, and go to Hardware. Add a new PCI Device: select Raw Device, select the iGPU (00:02.0 in this example), and select an MDev_Type, either i915-GTVg_V5_4 or i915-GTVg_V5_8 (I don't really know the difference). Start the VM!

Open the VM shell and run lspci -nnv | grep VGA to check that the GPU has passed through, or check the directory with ls /dev/dri:

by-path  card0  renderD128

Good to go with hardware acceleration now.

Add iGPU to LXC

This works for privileged containers:
1. Open a pve shell and edit the container config file: nano /etc/pve/lxc/[container_id].conf
2. Add the following lines (for Proxmox >= 7.0):

#for transcoding
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file

#for machine-learning
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/bus/usb/ dev/bus/usb/ none bind,optional,create=dir

Add mount point to LXC

First you have to mount the drive/USB!
1. Edit: nano /etc/pve/lxc/[container_id].conf
2. Add the line: mp0: /mnt/usb,mp=/mnt/usb. The directory /mnt/usb must exist on both the Proxmox host and in the LXC!

where:
/mnt/usb is the directory of the mounted drive on the Proxmox host
mp=/mnt/usb is the directory in the LXC where the drive is to be mounted
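For the mount itself, an /etc/fstab entry on the host keeps the drive mounted across reboots; the UUID below is a placeholder (take the real one from blkid), and ext4 is assumed:

```
# /etc/fstab on the Proxmox host -- placeholder UUID, ext4 assumed
UUID=1234-abcd  /mnt/usb  ext4  defaults,nofail  0  2
```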

Add drive to VM

Open pve shell:

  1. Get drive UUID: ls -n /dev/disk/by-uuid/
  2. Run: /sbin/qm set [VM-ID] -virtioX /dev/disk/by-uuid/[UUID]. Where X in -virtioX is a number from 0 to 15

Power optimization

Powertop

Installing and configuring Powertop to optimize power consumption on the Proxmox VE host.
1. To install powertop run apt-get install -y powertop
2. Create a new systemd service that will run powertop after every reboot: nano /etc/systemd/system/powertop.service

[Unit]
Description=Auto-tune power savings (oneshot)

[Service]
Type=oneshot
ExecStart=/usr/sbin/powertop --auto-tune
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
  3. systemctl daemon-reload
  4. systemctl enable powertop.service - WARNING! Enabling powertop.service may crash the Proxmox GUI. Consider disabling this service!

Hard Drive monitoring tool

Scrutiny spoke

This setup passes the HDD information from the host to the guest.

Proxmox VE shell:

1. Install Scrutiny. Linux variant: link
2. Create a new timer: nano /etc/systemd/system/scrutiny.timer and add:

  [Unit]
  Description=Scrutiny scheduler
  
  [Timer]
  OnUnitActiveSec=120m
  OnBootSec=120m
  
  [Install]
  WantedBy=timers.target
  3. Create a new service: nano /etc/systemd/system/scrutiny.service and add:

    [Unit]
    Description=Scrutiny job
     
    [Service]
    Type=oneshot
    ExecStart=/opt/scrutiny/bin/scrutiny-collector-metrics-linux-amd64 run --api-endpoint "http://SCRUTINY_HOST:SCRUTINY_PORT"
    
  4. Replace SCRUTINY_HOST and SCRUTINY_PORT with the correct details for the existing Scrutiny instance. To enable the service, run the following commands in this order:

      systemctl daemon-reload
      systemctl enable scrutiny.service
      systemctl enable scrutiny.timer
      systemctl start scrutiny.timer
    

NOTE: The same steps need to be done inside the OMV VM so that the media drives report their SMART metrics to Scrutiny.

ROUTER setup

  1. Reserve IP for Proxmox VE

References

  1. tteck
  2. sherbibv
