lxd

Instance types: https://github.com/dustinkirkland/instance-type

The essential ones:

t1.micro:
  cpu: 1.0
  mem: 0.613
t2.2xlarge:
  cpu: 8.0
  mem: 32.0
t2.large:
  cpu: 2.0
  mem: 8.0
t2.medium:
  cpu: 2.0
  mem: 4.0
t2.micro:
  cpu: 1.0
  mem: 1.0
t2.nano:
  cpu: 1.0
  mem: 0.5
t2.small:
  cpu: 1.0
  mem: 2.0
t2.xlarge:
  cpu: 4.0
  mem: 16.0
t3.2xlarge:
  cpu: 8.0
  mem: 32.0
t3.large:
  cpu: 2.0
  mem: 8.0
t3.medium:
  cpu: 2.0
  mem: 4.0
t3.micro:
  cpu: 2.0
  mem: 1.0
t3.nano:
  cpu: 2.0
  mem: 0.5
t3.small:
  cpu: 2.0
  mem: 2.0
t3.xlarge:
  cpu: 4.0
  mem: 16.0
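These names can be passed straight to `lxc launch` via `--type`, which maps them to `limits.cpu`/`limits.memory`. A sketch (image alias and instance names are placeholders):

```shell
# Launch a VM sized by an AWS-style instance type
lxc launch ubuntu:24.04 web1 --vm --type t2.medium

# Equivalent explicit limits, per the table above (2 CPUs, 4 GiB)
lxc launch ubuntu:24.04 web2 --vm -c limits.cpu=2 -c limits.memory=4GiB
```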

Launch a VM that boots from an ISO

To launch a VM that boots from an ISO, you must first create a VM. Let’s assume that we want to create a VM and install it from the ISO image. In this scenario, use the following command to create an empty VM:

lxc init iso-vm --empty --vm

The second step is to import an ISO image that can later be attached to the VM as a storage volume:

lxc storage volume import <pool> <path-to-image.iso> iso-volume --type=iso

Lastly, you need to attach the custom ISO volume to the VM using the following command:

lxc config device add iso-vm iso-volume disk pool=<pool> source=iso-volume boot.priority=10

The boot.priority configuration key ensures that the VM will boot from the ISO first. Start the VM and connect to the console as there might be a menu you need to interact with:

lxc start iso-vm --console

Once you’re done in the serial console, you need to disconnect from the console using ctrl+a-q, and connect to the VGA console using the following command:

lxc console iso-vm --type=vga

You should now see the installer. After the installation is done, you need to detach the custom ISO volume:

lxc storage volume detach <pool> iso-volume iso-vm

Now the VM can be rebooted, and it will boot from disk.
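Put together, the whole flow looks like this; the pool name `default` and the ISO path are assumptions, swap in your own:

```shell
# End-to-end sketch, assuming a storage pool named "default"
lxc init iso-vm --empty --vm
lxc storage volume import default ./install.iso iso-volume --type=iso
lxc config device add iso-vm iso-volume disk pool=default source=iso-volume boot.priority=10
lxc start iso-vm --console
# ...detach with ctrl+a-q, run the installer via the VGA console, then:
lxc stop -f iso-vm
lxc storage volume detach default iso-volume iso-vm
lxc start iso-vm
```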

Attach a physical disk from the host to a VM

lxc config device add vm1 disk1 disk source=/dev/sda
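A single partition can be attached the same way, optionally read-only; device and partition names here are examples:

```shell
# Attach one partition, read-only
lxc config device add vm1 disk2 disk source=/dev/sdb1 readonly=true

# The disk shows up as an extra block device inside the VM
lxc exec vm1 -- lsblk
```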

Running FreeBSD/OPNsense/pfSense

Add this to the VM config:

config:
  raw.qemu: |
    -cpu host
  raw.qemu.conf: |
    [device "dev-qemu_rng"]
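The same overrides can be set from the CLI instead of editing the config by hand (the VM name `bsd1` is a placeholder):

```shell
# Pass the host CPU model through and drop the virtio RNG device,
# which FreeBSD-based images are known to trip over
lxc config set bsd1 raw.qemu="-cpu host"
lxc config set bsd1 raw.qemu.conf='[device "dev-qemu_rng"]'
```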

Remote management and web UI

sudo snap set lxd ui.enable=true
sudo systemctl reload snap.lxd.daemon
lxc config set core.https_address :8443
lxc config trust add
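`lxc config trust add` prompts for a client name and prints a one-time trust token; pasting that token into the web UI login screen (or `lxc remote add` on another machine) enrolls the client. A quick check afterwards:

```shell
# List clients whose certificates are now trusted by the daemon
lxc config trust list
```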

Forward a port from a VM to the outside world

https://documentation.ubuntu.com/lxd/en/latest/reference/devices_proxy 

lxc config device set mature-kangaroo eth0 ipv4.address=10.117.170.194
lxc config device add mature-kangaroo ssh-forward proxy listen=tcp:8.8.8.8:2222 connect=tcp:10.117.170.194:22 nat=true
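To inspect or undo the forward later (names taken from the example above):

```shell
# Show all devices on the instance, then drop the proxy device
lxc config device show mature-kangaroo
lxc config device remove mature-kangaroo ssh-forward
```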

Alternatively, with a network forward:

root@example:~# lxc network forward port add lxdbr0 IP-HOST tcp PORT-HOST IP-VM PORT-VM
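Note that the listen address has to be claimed with a forward before ports can be added to it. A sketch with placeholder addresses and ports:

```shell
# Claim the external listen address on the bridge first
lxc network forward create lxdbr0 192.0.2.10
# Then map host port 2222 to port 22 on the VM
lxc network forward port add lxdbr0 192.0.2.10 tcp 2222 10.117.170.194 22
lxc network forward list lxdbr0
```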

Random

# images list
incus image list images: architecture=x86_64 type=virtual-machine

# error
root@huanan:~# incus launch images:almalinux/9 alma1 --vm -c limits.cpu=2 -c limits.memory=2GiB -d root,size=20GiB
Launching alma1
Error: Failed instance creation: Failed creating instance record: Add instance info to the database: This "instances" entry already exists
root@huanan:~# incus config device add alma1 agent disk source=agent:config
Device agent added to alma1

additional disks

lxc storage volume create hdd-storage iscsi-for-vm-export size=3000GiB --type=block

lxc config device add ftp iscsi-for-vm-export disk pool=hdd-storage source=iscsi-for-vm-export
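A fresh block volume arrives unformatted inside the VM. A sketch of bringing it into use; `/dev/sdb` is an assumption, so check `lsblk` first:

```shell
# Inside the VM: find, format, and mount the new block device
lxc exec ftp -- lsblk
lxc exec ftp -- mkfs.ext4 /dev/sdb
lxc exec ftp -- mkdir -p /mnt/export
lxc exec ftp -- mount /dev/sdb /mnt/export
```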


road to cluster storage

# sdb - iscsi disk
# os - ubuntu 24
iscsiadm -m discovery -t st -p X.X.X.X
iscsiadm -m node --targetname iqn.2023-09.com.example:stor.tgt2-incus --portal X.X.X.X --login
apt install lvm2-lockd dlm-controld

vim /etc/lvm/lvm.conf:
global {
        use_lvmlockd = 1
        # locking_type was removed in lvm2 2.03 (Ubuntu 24.04) and is no longer needed
}
systemctl status lvmlockd.service lvmlocks.service dlm.service
vgcreate --shared vg0 /dev/sdb
vgchange --lockstart
incus storage create pool1 lvmcluster source=vg0
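The iSCSI login and lockstart steps have to be repeated on every cluster member; the shared VG then becomes visible everywhere. A sketch of the per-node check (the target IQN is the one above, portal stays a placeholder):

```shell
# On each additional cluster member
iscsiadm -m node --targetname iqn.2023-09.com.example:stor.tgt2-incus --portal X.X.X.X --login
vgchange --lockstart
vgs vg0                # shared VG should show up with its lock type
incus storage list     # pool1 should be listed on every member
```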