Rootmanual:Proxmox

Instructions for managing and using our setup of Proxmox.

NOTE: If your cursor gets stuck in the SPICE console, use Ctrl + Alt + R to release it.

Creating a virtual machine

  1. Connect to the management interface of one of the compute nodes (e.g. https://proxar.lysator.liu.se:8006/ or https://proxer.lysator.liu.se:8006/) and log in.
  2. Click Create VM in the top right corner.
  3. General tab:
    1. Set a name for the VM
    2. Choose compute node for the VM to run on
    3. Add it to the appropriate resource pool
  4. OS tab: Pick Linux 3.X/2.6 Kernel (l26)
  5. CD/DVD tab: pick an ISO to install from.
  6. Hard disk tab:
    1. Change Bus/Device to VIRTIO
    2. Decide on a disk size. The disk image will use this much space on the storage server. If you are unsure, pick a smaller size and increase it later (10GB should be plenty for the typical VM).
  7. CPU tab: Allocate CPUs/cores as appropriate.
  8. Memory tab: allocate memory as appropriate.
  9. Network tab:
    1. Change Model to VirtIO (paravirtualized)
    2. If the VM will need a public IP
      1. Choose bridged mode
      2. Set Bridge to vmbr0
    3. If the VM will not need a public IP
      1. Choose NAT
  10. Confirm tab: Click Finish.
  11. Select the new VM in the tree view.
  12. Go to the Hardware tab
  13. Edit Display, and set Graphic card to SPICE.
  14. Go to the Options tab
  15. Edit "Start at boot" and set to enable if the machine should start automatically.
  16. Start the machine, and perform installation as usual.
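
For reference, the same machine can also be created from a compute node's shell with the qm tool. A minimal sketch, assuming a free VMID of 105, the local-zfs storage, a bridged (public IP) network, and a hypothetical ISO on the proxstore ISO storage; adjust names, sizes and IDs to your case:

  # Create the VM definition (names, sizes and the ISO volume are examples only)
  qm create 105 --name example-vm --ostype l26 \
    --memory 2048 --cores 2 \
    --virtio0 local-zfs:10 \
    --net0 virtio,bridge=vmbr0 \
    --cdrom proxstore-iso-storage-nfs:iso/debian-12.iso \
    --vga qxl --onboot 1
  # Start it and perform the installation as usual
  qm start 105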

Using the console

There are two options: SPICE, and Java. Don't use Java (unless you can't help it).

A prerequisite for using SPICE is the remote-viewer program (part of virt-manager).

To connect to a VM, first start the VM, and then click SPICE in the top right corner in the Proxmox management interface. This downloads a file that you pass on the command line to remote-viewer.
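
For example, if the downloaded connection file ends up as ~/Downloads/pve-spice.vv (the exact file name varies by browser), the session is opened with:

  remote-viewer ~/Downloads/pve-spice.vv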


Storage

Add a new ISO to boot from

[Screenshot: 8C6MwuX.png]
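
ISO images are added by selecting an ISO-capable storage in the tree view and uploading the image through its content view, or by copying the file straight into a directory-backed storage's template/iso directory. A sketch of the latter, assuming the default path of the local storage and a hypothetical image name:

  scp debian-12.iso root@proxar.lysator.liu.se:/var/lib/vz/template/iso/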

Configuring iSCSI on a storage server

This section assumes Debian.

  • Follow the steps on http://zfsonlinux.org/debian.html
  • Create a zfs pool:
    • zpool create ${POOLNAME} raidz2 /dev/sd{b..d}
  • Create a block device in the zfs pool:
    • zfs create -V $SIZE ${POOLNAME}/vm-storage
  • Install tgt:
    • aptitude install tgt
    • If your tgt version is older than 1:1.0.17-1.1, you need to manually install the init script.
    • Set up tgt to start automatically:
      • update-rc.d tgt defaults
  • Create an iSCSI target:
    • tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.${YYYY}-${MM}.se.liu.lysator:${HOSTNAME}.vm-storage
  • Add a LUN to the target, backed by the previously created block device:
    • tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 -b /dev/zvol/${POOLNAME}/vm-storage
  • Make the target available on the network:
    • tgtadm --lld iscsi --mode target --op bind --tid 1 -I 10.44.1.0/24
  • Save the configuration:
    • tgt-admin --dump > /etc/tgt/targets.conf
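
To verify that the target and its LUN are exported as intended, list the live configuration:

  tgtadm --lld iscsi --mode target --op show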


Attaching new VM storage to the Proxmox cluster

  1. Go to Proxmox management interface.
  2. Click Datacenter in the treeview.
  3. Select the Storage tab.
  4. Click Add -> iSCSI
  5. Set the ID to ${STORAGE_SERVER_HOSTNAME}-vm-storage-iscsi (e.g. proxstore-vm-storage-iscsi).
  6. Enter portal (the IP of the storage server)
  7. Select the desired iSCSI target from the target dropdown.
  8. Uncheck the checkbox for Use LUNs directly.
  9. Click Add.
  10. Next, click Add -> LVM
  11. Set the ID to ${STORAGE_SERVER_HOSTNAME}-vm-storage-lvm (e.g. proxstore-vm-storage-lvm).
  12. For Base storage, choose the iSCSI volume you added previously.
  13. Check the checkbox for Shared.
  14. Click Add.
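
For reference, the iSCSI entry can also be added (and the end result checked) from the shell of any cluster node with pvesm; the LVM entry is easiest to add through the GUI as described above, which also handles creating the volume group on the chosen LUN. A sketch using the example IDs from this list, with placeholder address and target name:

  pvesm add iscsi proxstore-vm-storage-iscsi \
    --portal <storage server IP> \
    --target iqn.<YYYY>-<MM>.se.liu.lysator:<hostname>.vm-storage
  pvesm status    # both new storages should show up as active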


Attaching new ISO storage

  1. Go to Proxmox management interface.
  2. Click Datacenter in the treeview.
  3. Select the Storage tab.
  4. Click Add -> NFS.
  5. Set the ID to ${STORAGE_SERVER_HOSTNAME}-iso-storage-nfs (e.g. proxstore-iso-storage-nfs).
  6. Enter the IP of the storage server.
  7. Select the export from the Export dropdown.
  8. In the Content dropdown, select ISO and deselect Images.
  9. Set Max Backups to 0.
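
The equivalent from a node's shell, using pvesm (server address and export path are placeholders for the actual storage server):

  pvesm add nfs proxstore-iso-storage-nfs \
    --server <storage server IP> \
    --export <exported path> \
    --content iso \
    --maxfiles 0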

Moving a disk image to a new storage solution

If you want to move a disk from the internal ZFS file system (local-zfs) to one of the Ceph pools (trillian-vm or trillian-vm-ssd), it can be done through the web interface. If the disk image to be moved already resides on one of the Ceph pools, the process is more involved. An example of a move from trillian-vm to trillian-vm-ssd (this is how rag was moved):

  1. Stop the VM whose disk is to be moved.
  2. Make sure there are no snapshots.
  3. SSH into the server the VM has been running on.
  4. Run: /usr/bin/qemu-img create -f raw 'rbd:vm_ssd/vm-114-disk-1:mon_host=10.44.1.98;10.44.1.99;10.44.1.100:auth_supported=cephx:id=caspian:keyring=/etc/pve/priv/ceph/trillian-vm-ssd.keyring' 15G
  5. Verify that the new disk image exists: rbd list vm_ssd -c /etc/ceph/ceph.conf -k /etc/pve/priv/ceph/trillian-vm-ssd.keyring --id caspian
  6. Move the disk with: /usr/bin/qemu-img convert -p -n -f raw -O raw -t writeback 'rbd:vm/vm-114-disk-1:mon_host=10.44.1.98;10.44.1.99;10.44.1.100:auth_supported=cephx:id=caspian:keyring=/etc/pve/priv/ceph/trillian-vm.keyring:conf=/etc/ceph/ceph.conf' 'zeroinit:rbd:vm_ssd/vm-114-disk-1:mon_host=10.44.1.98;10.44.1.99;10.44.1.100:auth_supported=cephx:id=caspian:keyring=/etc/pve/priv/ceph/trillian-vm-ssd.keyring:conf=/etc/ceph/ceph.conf'
  7. Check the disk image with: rbd info vm_ssd/vm-114-disk-1 -c /etc/ceph/ceph.conf -k /etc/pve/priv/ceph/trillian-vm-ssd.keyring --id caspian
  8. The VM's configuration file must be edited so that it uses the new disk image: nano /etc/pve/qemu-server/101.conf
  9. Now the VM can be started again.
  10. Remove the old disk image once you are confident everything works.
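
For step 8, the change is normally a single line: the disk entry must point at the new storage. A hypothetical before/after, assuming the disk is attached as virtio0 (the exact option string depends on the VM):

  # before
  virtio0: trillian-vm:vm-114-disk-1,size=15G
  # after
  virtio0: trillian-vm-ssd:vm-114-disk-1,size=15G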