wiki:Documentation/Manuals/ovirt


For vLabs, oVirt was installed as a self-hosted engine.

Setting up the virtualization host

To set up the self-hosted engine, the following steps must be done:

  • install the ovirt-hosted-engine-setup package
  • run the setup inside screen; it requires a long-running session, and screen is a good idea anyway because the setup takes quite a long time (see the sketch after this list)
  • choose where the self-hosted engine will be installed. By default it requires a 25 GB partition plus about 5.1 GB of overhead, so a separate 31 GB LUN was used. It cannot be placed on common storage, because once such storage is added in oVirt it cannot be used for storing other images.
  • any cluster name, data-center name, or storage pool name configured at this point will have no effect on oVirt.
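
A minimal sketch of these first steps, assuming an EL7 host with the oVirt 3.6 repositories already configured:

{{{
# install the setup package and run the deployment inside screen
yum install -y screen ovirt-hosted-engine-setup
screen -S he-deploy        # the setup takes a long time; screen survives SSH drops
hosted-engine --deploy
}}}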

Installing the OS on the VM

  • configure the initial boot of the VM for OS installation; for the vLabs oVirt installation, PXE boot was chosen. After the VM boots, the self-hosted setup provides a console socket and a password for connecting to it. The console can only be accessed locally, so SSH with X forwarding must be used (see the sketch after this list). Notice: rebooting the VM will shut it down.
  • after the OS is installed, the successful installation must be confirmed in the setup and the VM booted in normal mode.
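
A sketch of reaching the installer console, assuming the setup offers a VNC display on the virtualization host (the host name and display number are examples; the setup prints the actual address and a temporary password):

{{{
# the console is reachable only locally, hence SSH with X forwarding
ssh -X root@virt-host.example.org
remote-viewer vnc://localhost:5900
}}}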

Installing oVirt

  • perform the normal setup and configuration of oVirt (engine-setup);
  • install Let's Encrypt certificates (see the sketch after this list):
    • apache: /etc/pki/ovirt-engine/apache-ca.pem -> /etc/pki/ovirt-engine/ca-letsencrypt.pem, and replace /etc/pki/ovirt-engine/certs/apache.cer
    • websocket proxy: /etc/pki/ovirt-engine/certs/websocket-proxy.cer -> apache.cer
  • '''WARNING!''' Any changes must be made before proceeding with the self-hosted setup, because it connects to the ovirt-engine and reads its configuration only once. If a problem occurs, the installation must be redone from the very beginning.
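
A sketch of the certificate swap, assuming certbot has already issued a certificate for the engine FQDN (the /etc/letsencrypt paths and the FQDN are examples):

{{{
F=/etc/letsencrypt/live/engine.example.org
cp $F/chain.pem /etc/pki/ovirt-engine/ca-letsencrypt.pem
ln -sf /etc/pki/ovirt-engine/ca-letsencrypt.pem /etc/pki/ovirt-engine/apache-ca.pem
cp $F/cert.pem /etc/pki/ovirt-engine/certs/apache.cer
ln -sf /etc/pki/ovirt-engine/certs/apache.cer /etc/pki/ovirt-engine/certs/websocket-proxy.cer
systemctl restart httpd ovirt-websocket-proxy
}}}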

Adding the first host

  • proceed with the self-hosted setup. It will connect to the ovirt-engine and add the host to the cluster.

Adding additional hosts

  • install the ovirt-hosted-engine-setup package
  • run the self-hosted setup
  • choose the LUN holding the ovirt-engine
  • choose installation of an additional host
  • it will connect to an existing virtualization host via SSH and copy the answer file; a path to an answer file can also be provided as an option (see the sketch after this list).
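
A sketch for an additional host; --config-append is the hosted-engine option for supplying a saved answer file (the path is an example):

{{{
yum install -y ovirt-hosted-engine-setup
hosted-engine --deploy --config-append=/root/hosted-engine-answers.conf
}}}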

Managing the engine

If you need to reboot the ovirt-engine VM, you must first set global maintenance mode; this stops the monitoring of engine liveness. If you set local maintenance instead, the ha-agent won't run the engine on that host.

Starting, stopping, getting console access, and setting either maintenance mode can all be done via the hosted-engine command, as sketched below.
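
The common operations (commands as in oVirt 3.6, run on a virtualization host):

{{{
hosted-engine --set-maintenance --mode=global   # stop engine liveness monitoring
hosted-engine --vm-shutdown                     # cleanly stop the engine VM
hosted-engine --vm-start
hosted-engine --vm-status                       # HA state as seen by the agents
hosted-engine --add-console-password            # set a console password, then connect e.g. with remote-viewer
hosted-engine --set-maintenance --mode=none
}}}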

Balloon hook

By default the self-hosted VM does not get an enabled memory balloon device. To enable it you need to:

  • copy the 1 GB configuration partition (it has no normal name) to a file with dd
  • unpack it as a plain tar archive
  • modify vm.conf and add devices={device:memballoon,specParams:{model:none},type:balloon}
  • create the tar archive again
  • copy it back to the partition with dd (see the sketch after this list)
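
A sketch of the round trip, assuming the hosted-engine configuration volume shows up as an unnamed ~1 GB LV; <vg>/<conf-lv> is a placeholder, look the actual device up first (e.g. with lvs):

{{{
dd if=/dev/<vg>/<conf-lv> of=/root/heconf.tar
mkdir /root/heconf && tar -xf /root/heconf.tar -C /root/heconf
# edit /root/heconf/vm.conf here, then pack the archive back up and write it out
tar -cf /root/heconf.tar -C /root/heconf .
dd if=/root/heconf.tar of=/dev/<vg>/<conf-lv>
}}}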

CPU Hook

Modern Windows OSes require the core2duo CPU type, which oVirt does not have by default. To add this CPU type you need to modify the engine database (see the sketch after the listing):

{{{
table:        public.vdc_options (primary key pk_vdc_options)

option_id:    570
option_name:  ServerCPUList
option_value: 2:Intel Core 2 Duo Family:vmx,nx,model_Core2duo:Core2duo:x86_64; 3:Intel Conroe Family:vmx,nx,model_Conroe:Conroe:x86_64; 4:Intel Penryn Family:vmx,nx,model_Penryn:Penryn:x86_64; 5:Intel Nehalem Family:vmx,nx,model_Nehalem:Nehalem:x86_64; 6:Intel Westmere Family:aes,vmx,nx,model_Westmere:Westmere:x86_64; 7:Intel SandyBridge Family:vmx,nx,model_SandyBridge:SandyBridge:x86_64; 8:Intel Haswell-noTSX Family:vmx,nx,model_Haswell-noTSX:Haswell-noTSX:x86_64; 9:Intel Haswell Family:vmx,nx,model_Haswell:Haswell:x86_64; 10:Intel Broadwell-noTSX Family:vmx,nx,model_Broadwell-noTSX:Broadwell-noTSX:x86_64; 11:Intel Broadwell Family:vmx,nx,model_Broadwell:Broadwell:x86_64; 2:AMD Opteron G1:svm,nx,model_Opteron_G1:Opteron_G1:x86_64; 3:AMD Opteron G2:svm,nx,model_Opteron_G2:Opteron_G2:x86_64; 4:AMD Opteron G3:svm,nx,model_Opteron_G3:Opteron_G3:x86_64; 5:AMD Opteron G4:svm,nx,model_Opteron_G4:Opteron_G4:x86_64; 6:AMD Opteron G5:svm,nx,model_Opteron_G5:Opteron_G5:x86_64; 3:IBM POWER8:powernv,model_POWER8:POWER8:ppc64;

version:      3.6
}}}
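
A sketch of applying the change on the engine VM; the value is abbreviated here, use the full list from the listing above (engine-config -s ServerCPUList='...' --cver=3.6 should work as an alternative to the direct update):

{{{
# update the row directly in the engine database, then restart the engine
sudo -u postgres psql engine -c \
  "UPDATE vdc_options SET option_value = '<full list from above>' \
   WHERE option_name = 'ServerCPUList' AND version = '3.6';"
systemctl restart ovirt-engine
}}}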

Native client console