virt-v2v-output-rhv - Using virt-v2v to convert guests to oVirt or RHV
virt-v2v [-i* options] -o rhv-upload [-oc ENGINE_URL] -os STORAGE
[-op PASSWORD] [-of raw]
[-oo rhv-cafile=FILE]
[-oo rhv-cluster=CLUSTER]
[-oo rhv-proxy]
[-oo rhv-disk-uuid=UUID ...]
[-oo rhv-verifypeer]
virt-v2v [-i* options] -o rhv -os [esd:/path|/path]
virt-v2v [-i* options] -o vdsm
[-oo vdsm-image-uuid=UUID]
[-oo vdsm-vol-uuid=UUID]
[-oo vdsm-vm-uuid=UUID]
[-oo vdsm-ovf-output=DIR]
This page documents how to use
virt-v2v(1) to convert guests to an oVirt
or RHV management instance. There are three output modes that you can select,
but only -o rhv-upload should normally be used; the other two are
deprecated:
- -o rhv-upload -os STORAGE
- Full description: "OUTPUT TO RHV"
This is the modern method for uploading to oVirt/RHV via the REST API. It
requires oVirt/RHV ≥ 4.2.
- -o rhv -os esd:/path
- -o rhv -os /path
- Full description: "OUTPUT TO EXPORT STORAGE DOMAIN"
This is the old method for uploading to oVirt/RHV via the Export Storage
Domain (ESD). The ESD can be accessed either over NFS (using the -os
esd:/path form) or, if you have already NFS-mounted it somewhere, by
specifying the path to the mountpoint as -os /path.
The Export Storage Domain was deprecated in oVirt 4, and so we expect that
this method will stop working at some point in the future.
- -o vdsm
- This is the old method used internally by the RHV-M user
interface. It is not intended to be used directly by end users.
OUTPUT TO RHV
This new method to upload guests to oVirt or RHV directly via the REST API
requires oVirt/RHV ≥ 4.2.
You need to specify
-o rhv-upload as well as the following extra
parameters:
- -oc "https://ovirt-engine.example.com/ovirt-engine/api"
- The URL of the REST API which is usually the server name
with "/ovirt-engine/api" appended, but might be different if you
installed oVirt Engine on a different path.
You can optionally add a username and port number to the URL. If the
username is not specified then virt-v2v defaults to using
"admin@internal" which is the typical superuser account for
oVirt instances.
- -of raw
- Currently you must use -of raw and you cannot use
-oa preallocated.
These restrictions will be loosened in a future version.
- -op password-file
- A file containing a password to be used when connecting to
the oVirt engine. Note the file should contain the whole password,
without any trailing newline, and for security the file should have
mode 0600 so that others cannot read it (see the example after this list).
- -os "ovirt-data"
- The storage domain.
- -oo rhv-cafile=ca.pem
- The ca.pem file (Certificate Authority), copied from
/etc/pki/ovirt-engine/ca.pem on the oVirt engine.
If -oo rhv-verifypeer is enabled then this option can be used to
control which CA is used to verify the server’s identity. If this
option is not used then the system’s global trust store is
used.
- -oo rhv-cluster="CLUSTERNAME"
- Set the RHV Cluster Name. If not given it uses
"Default".
- -oo rhv-disk-uuid="UUID"
- This option can be used to manually specify UUIDs for the
disks when creating the virtual machine. If not specified, the oVirt
engine will generate random UUIDs for the disks. Please note that:
• you must pass as many -oo rhv-disk-uuid=UUID options as there are
disks in the guest
• the specified UUIDs must not conflict with the UUIDs of
existing disks
- -oo rhv-proxy
- Proxy the upload through oVirt Engine. This is slower than
uploading directly to the oVirt node but may be necessary if you do not
have direct network access to the nodes.
- -oo rhv-verifypeer
- Verify the oVirt/RHV server’s identity by checking
the server’s certificate against the Certificate Authority.
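For example, a complete conversion using this method might look like the
sketch below. The engine URL, storage domain, cluster name, CA file
location and guest name are illustrative and must be replaced with values
from your own environment:

# Create the password file with no trailing newline and restrict its
# permissions so that other users cannot read it.
printf '%s' 'ENGINEPASSWORD' > ovirt-admin-password
chmod 0600 ovirt-admin-password

# Convert a local libvirt guest and upload it via the REST API.
# ca.pem was copied from /etc/pki/ovirt-engine/ca.pem on the engine.
virt-v2v -i libvirt -ic qemu:///system guestname \
  -o rhv-upload \
  -oc https://ovirt-engine.example.com/ovirt-engine/api \
  -op ovirt-admin-password \
  -os ovirt-data \
  -of raw \
  -oo rhv-cafile=ca.pem \
  -oo rhv-cluster=Default \
  -oo rhv-verifypeer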
OUTPUT TO EXPORT STORAGE DOMAIN
This section only applies to the
-o rhv output mode. If you use virt-v2v
from the RHV-M user interface, then behind the scenes the import is managed by
VDSM using the
-o vdsm output mode (which end users should not try to
use directly).
You have to specify
-o rhv and an
-os option that points to the
RHV-M Export Storage Domain. You can either specify the NFS server and
mountpoint, eg. "-os rhv-storage:/rhv/export", or you can
mount that first and point to the directory where it is mounted, eg.
"-os /tmp/mnt". Be careful not to point to the Data Storage
Domain by accident as that will not work.
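As a sketch, using the example server name and paths from above (replace
them with your own):

# Let virt-v2v mount the Export Storage Domain over NFS itself:
virt-v2v -i libvirt -ic qemu:///system guestname \
  -o rhv -os rhv-storage:/rhv/export

# ...or mount it first (requires root) and point -os at the mountpoint:
mkdir -p /tmp/mnt
mount rhv-storage:/rhv/export /tmp/mnt
virt-v2v -i libvirt -ic qemu:///system guestname -o rhv -os /tmp/mnt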
On successful completion virt-v2v will have written the new guest to the Export
Storage Domain, but it will not yet be ready to run. It must be imported into
RHV using the UI before it can be used.
In RHV ≥ 2.2 this is done from the Storage tab. Select the export domain
the guest was written to. A pane will appear underneath the storage domain
list displaying several tabs, one of which is "VM Import". The
converted guest will be listed here. Select the appropriate guest and click
"Import". See the RHV documentation for additional details.
If you export several guests, then you can import them all at the same time
through the UI.
If you do not have an oVirt or RHV instance to test against, then you can test
conversions by creating a directory structure which looks enough like a RHV-M
Export Storage Domain to trick virt-v2v:
uuid=`uuidgen`
mkdir /tmp/rhv
mkdir /tmp/rhv/$uuid
mkdir /tmp/rhv/$uuid/images
mkdir /tmp/rhv/$uuid/master
mkdir /tmp/rhv/$uuid/master/vms
touch /tmp/rhv/$uuid/dom_md
virt-v2v [...] -o rhv -os /tmp/rhv
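After the conversion completes you can check what was written. The layout
mentioned in the comment is typical of an Export Storage Domain; the exact
UUIDs will differ:

# Disk images are written under images/ and the OVF metadata under
# master/vms/ inside the fake domain created above.
find /tmp/rhv/$uuid -type f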
When you export to the RHV-M Export Storage Domain, and then import that guest
through the RHV-M UI, you may encounter an import failure. Diagnosing these
failures is infuriatingly difficult as the UI generally hides the true reason
for the failure.
There are several log files of interest:
- /var/log/vdsm/import/
- In oVirt ≥ 4.1.0, VDSM preserves the virt-v2v log
file for 30 days in this directory.
This directory is found on the host which performed the conversion. The host
can be selected in the import dialog, or can be found under the
"Events" tab in oVirt administration.
- /var/log/vdsm/vdsm.log
- As above, this file is present on the host which performed
the conversion. It contains detailed error messages from low-level
operations executed by VDSM, and is useful if the error was not caused by
virt-v2v, but by VDSM.
- /var/log/ovirt-engine/engine.log
- This log file is stored on the RHV-M server. It contains
more detail for any errors caused by the oVirt GUI.
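For example, assuming the conversion host and the RHV-M server are
reachable over SSH (the host names below are illustrative), the logs can be
inspected like this:

# Preserved virt-v2v logs from recent imports (oVirt ≥ 4.1.0):
ssh root@rhv-node1.example.com 'ls -lt /var/log/vdsm/import/'

# Low-level VDSM errors on the same conversion host:
ssh root@rhv-node1.example.com 'grep -i error /var/log/vdsm/vdsm.log | tail -n 50'

# Errors recorded by the oVirt engine, on the RHV-M server:
ssh root@rhv-m.example.com 'tail -n 100 /var/log/ovirt-engine/engine.log'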
SEE ALSO
virt-v2v(1).
AUTHOR
Richard W.M. Jones
COPYRIGHT
Copyright (C) 2009-2020 Red Hat Inc.
BUGS
To get a list of bugs against libguestfs, use this link:
https://bugzilla.redhat.com/buglist.cgi?component=libguestfs&product=Virtualization+Tools
To report a new bug against libguestfs, use this link:
https://bugzilla.redhat.com/enter_bug.cgi?component=libguestfs&product=Virtualization+Tools
When reporting a bug, please supply:
• The version of libguestfs.
• Where you got libguestfs (eg. which Linux distro, compiled from
source, etc)
• Describe the bug accurately and give a way to reproduce it.
• Run libguestfs-test-tool(1) and paste the
complete, unedited output into the bug report.