
minikube config set driver kvm2

2022-06-09 20:41:00 Backend cloud

Reference: https://minikube.sigs.k8s.io/docs/drivers/kvm2/

A record of the problems encountered while switching minikube's driver to kvm2.

PROVIDER_KVM2_NOT_FOUND: The 'kvm2' provider was not found: exec: "virsh": executable file not found in $PATH

[[email protected] ~]$ minikube config set vm-driver kvm2
  These changes will take effect upon a minikube delete and then a minikube start
[[email protected] ~]$ minikube start --memory 4096
  minikube v1.25.2 on Centos 7.9.2009
  Using the kvm2 driver based on user configuration

  Exiting due to PROVIDER_KVM2_NOT_FOUND: The 'kvm2' provider was not found: exec: "virsh": executable file not found in $PATH
  Suggestion: Install libvirt
  Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
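Before installing anything, it helps to see what minikube is actually complaining about: the kvm2 driver shells out to `virsh`, so this error only means that binary is not on `$PATH`. A minimal sketch of the check (`have_cmd` is just an illustrative helper name):

```shell
#!/bin/sh
# have_cmd: tiny wrapper around POSIX `command -v` to test for a binary on $PATH.
have_cmd() { command -v "$1" >/dev/null 2>&1; }

if have_cmd virsh; then
  echo "virsh found at: $(command -v virsh)"
else
  echo "virsh missing: install the libvirt client tools first"
fi
```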
[[email protected] ~]$ cat /sys/module/kvm_intel/parameters/nested
N
[[email protected] ~]$ uname -r
3.10.0-1160.el7.x86_64
[[email protected] ~]$ lsmod|grep kvm_intel
kvm_intel             188740  0 
kvm                   637289  1 kvm_intel
#  kvm_intel being loaded means Intel VMX is enabled in the host BIOS; VMware also passes Intel VMX through to guests (this machine is a VMware VM). Next, confirm whether nested virtualization is enabled.
[[email protected] ~]$ logout
[[email protected] ~]# modprobe -r kvm_intel^C
[[email protected] ~]# lsmod|grep kvm_intel
kvm_intel             188740  0 
kvm                   637289  1 kvm_intel
[[email protected] ~]# modprobe -r kvm_intel
[[email protected] ~]# lsmod|grep kvm_intel
[[email protected] ~]# modprobe kvm_intel nested=1
[[email protected] ~]# lsmod|grep kvm_intel
kvm_intel             188740  0 
kvm                   637289  1 kvm_intel
[[email protected] ~]# cat /sys/module/kvm_intel/parameters/nested
Y
# Enable nested virtualization permanently
[[email protected] ~]# vi /etc/modprobe.d/kvm.conf
# Edit the file and add the following line
options kvm_intel nested=1
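One caveat when reading the parameter back: newer kernels report the nested parameter as 1/0 rather than Y/N. A small sketch that accepts both forms (`nested_enabled` is a made-up helper name, not part of any tool):

```shell
#!/bin/sh
# nested_enabled: succeed if a kvm nested-virt parameter value means "on".
# Older kernels print Y/N, newer ones print 1/0.
nested_enabled() {
  case "$1" in
    Y|y|1) return 0 ;;
    *)     return 1 ;;
  esac
}

# On a real Intel host (AMD uses kvm_amd instead) you would call it as:
#   nested_enabled "$(cat /sys/module/kvm_intel/parameters/nested)" && echo "nested on"
```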
[[email protected] ~]$ minikube config set vm-driver kvm2
  These changes will take effect upon a minikube delete and then a minikube start
[[email protected] ~]$ minikube delete
  "minikube" profile does not exist, trying anyways.
  Removed all traces of the "minikube" cluster.
[[email protected] ~]$ minikube start --memory 4096
  minikube v1.25.2 on Centos 7.9.2009
  Using the kvm2 driver based on user configuration

  Exiting due to PROVIDER_KVM2_NOT_FOUND: The 'kvm2' provider was not found: exec: "virsh": executable file not found in $PATH
  Suggestion: Install libvirt
  Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/

[[email protected] ~]$ sudo yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install^C

Exiting due to PR_KVM_USER_PERMISSION: libvirt group membership check failed

[[email protected] ~]$ minikube start --memory 4096
  minikube v1.25.2 on Centos 7.9.2009
  Using the kvm2 driver based on user configuration

  Exiting due to PR_KVM_USER_PERMISSION: libvirt group membership check failed:
user is not a member of the appropriate libvirt group
  Suggestion: Ensure that you are a member of the appropriate libvirt group (remember to relogin for group changes to take effect!)
  Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
  Related issues:
     https://github.com/kubernetes/minikube/issues/5617
     https://github.com/kubernetes/minikube/issues/10070

[[email protected] ~]$ sudo groupadd libvirt4minikube
[sudo] password for developer: 
[[email protected] ~]$ sudo usermod -a -G libvirt4minikube $USER
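Note that the transcript creates a custom group name (libvirt4minikube); minikube's docs generally assume the distro's own libvirt group. Whichever group is used, `usermod` only affects new sessions, so the membership check keeps failing until you log out and back in. A sketch for verifying membership (`in_group` is a hypothetical helper):

```shell
#!/bin/sh
# in_group: succeed if the space-separated group list ($1) contains group $2.
in_group() {
  for g in $1; do
    [ "$g" = "$2" ] && return 0
  done
  return 1
}

# Usage on a real host:
#   in_group "$(id -nG)" libvirt || echo "not in the libvirt group yet: re-login"
```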

Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory

[[email protected] ~]$ minikube start --memory 4096
  minikube v1.25.2 on Centos 7.9.2009
  Using the kvm2 driver based on user configuration

  Exiting due to PROVIDER_KVM2_ERROR: /bin/virsh domcapabilities --virttype kvm failed:
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
exit status 1
  Suggestion: Follow your Linux distribution instructions for configuring KVM
  Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/

[[email protected] ~]$ sudo systemctl start libvirtd
[[email protected] ~]$ sudo systemctl enable libvirtd

authentication unavailable: no polkit agent available to authenticate action 'org.libvirt.unix.manage'

[[email protected] ~]$ minikube start --memory 4096
  minikube v1.25.2 on Centos 7.9.2009
  Using the kvm2 driver based on user configuration

  Exiting due to PROVIDER_KVM2_ERROR: /bin/virsh domcapabilities --virttype kvm failed:
error: failed to connect to the hypervisor
error: authentication unavailable: no polkit agent available to authenticate action 'org.libvirt.unix.manage'
exit status 1
  Suggestion: Follow your Linux distribution instructions for configuring KVM
  Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
This error is mainly seen with WebVirtMgr installations. The fix is as follows:

1. Create the libvirtd group
groupadd libvirtd

2. Add the user to the group
sudo usermod -a -G libvirtd $USER

3. Set the group that may access the libvirtd service socket
vi /etc/libvirt/libvirtd.conf
# Change the following
# This is restricted to 'root' by default.
#unix_sock_group = "libvirt"
unix_sock_group = "libvirtd"

4. Grant the group management access: add a polkit policy for libvirt
vi /etc/polkit-1/localauthority/50-local.d/50-org.libvirtd-group-access.pkla
# Add the following
[libvirtd group Management Access]
Identity=unix-group:libvirtd
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes

5. Restart the service
service libvirtd restart

error creating VM: virError(Code=1, Domain=10, Message='internal error: qemu unexpectedly closed the monitor: Cannot set up guest memory 'pc.ram': Cannot allocate memory')

[[email protected] ~]$ minikube delete
  Deleting "minikube" in kvm2 ...
  Removed all traces of the "minikube" cluster.
[[email protected] ~]$ minikube start --memory 4096
  minikube v1.25.2 on Centos 7.9.2009
  Using the kvm2 driver based on user configuration
  Starting control plane node minikube in cluster minikube
  Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
  Deleting "minikube" in kvm2 ...
  StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: error creating VM: virError(Code=1, Domain=10, Message='internal error: qemu unexpectedly closed the monitor: Cannot set up guest memory 'pc.ram': Cannot allocate memory')
  Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
  Failed to start kvm2 VM. Running "minikube delete" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: error creating VM: virError(Code=1, Domain=10, Message='internal error: process exited while connecting to monitor: Cannot set up guest memory 'pc.ram': Cannot allocate memory')

  Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: error creating VM: virError(Code=1, Domain=10, Message='internal error: process exited while connecting to monitor: Cannot set up guest memory 'pc.ram': Cannot allocate memory')

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│      If the above advice does not help, please let us know:                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

Reference: https://github.com/kubernetes/minikube/issues/3634

The same error also occurs in OpenStack Nova when there is not enough memory; there, the fix is to adjust memory overcommit or use a smaller VM flavor. nova-conductor.log reports: ERROR nova.scheduler.utils [req-9880cb62-7a70-41aa-b6c0-db4ec5333e98 53a1cf0ad2924532aa4b7b0750dec282 0ab2dbde4f754b699e22461426cd0774 - - -] [instance: 36bb1220-f295-4205-ba2e-6e41f8b134b9] Error from last host: xiandian (node xiandian): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1926, in _do_build_and_run_instance\n filter_properties)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2116, in _build_and_run_instance\n instance_uuid=instance.uuid, reason=six.text_type(e))\n', u"RescheduledException: Build of instance 36bb1220-f295-4205-ba2e-6e41f8b134b9 was re-scheduled: internal error: process exited while connecting to monitor: 2019-05-20T17:38:19.473598Z qemu-kvm: cannot set up guest memory 'pc.ram': Cannot allocate memory\n\n"]

Cause: insufficient memory. Check memory usage with free, then lower the VM's memory.
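In other words, pick a --memory value the host can actually satisfy. A rough sketch of the arithmetic (the 1 GiB headroom and the `suggest_memory_mb` helper name are assumptions, not a minikube rule):

```shell
#!/bin/sh
# suggest_memory_mb: given MemAvailable in kB (the "available" column of `free`,
# or the MemAvailable line of /proc/meminfo), print a --memory value in MB that
# leaves some headroom for the host itself.
suggest_memory_mb() {
  available_kb=$1
  headroom_mb=1024   # leave ~1 GiB for the host (assumption)
  echo $(( available_kb / 1024 - headroom_mb ))
}

# With the `free` output above (available = 3777588 kB):
#   suggest_memory_mb 3777588   # → 2665, so --memory 2048 is a safe round number
```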

[[email protected] ~]$ free
              total        used        free      shared  buff/cache   available
Mem:        7990064     3911652      490180       12216     3588232     3777588
Swap:             0           0           0
[[email protected] ~]$ minikube start --memory 2048
  minikube v1.25.2 on Centos 7.9.2009
  Using the kvm2 driver based on user configuration
  Starting control plane node minikube in cluster minikube
  Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
  Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
     kubelet.housekeeping-interval=5m
     Generating certificates and keys ...
     Booting up control plane ...
     Configuring RBAC rules ...
  Verifying Kubernetes components...
     Using image gcr.io/k8s-minikube/storage-provisioner:v5
  Enabled addons: storage-provisioner, default-storageclass
  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

With vm-driver kvm2, minikube now launches successfully:

[[email protected] ~]$ minikube start --memory 4096
  minikube v1.25.2 on Centos 7.9.2009
  Using the kvm2 driver based on user configuration
  Downloading driver docker-machine-driver-kvm2:
    > docker-machine-driver-kvm2-...: 65 B / 65 B [----------] 100.00% ? p/s 0s
    > docker-machine-driver-kvm2-...: 11.62 MiB / 11.62 MiB  100.00% 8.31 MiB p
  Downloading VM boot image ...
    > minikube-v1.25.2.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
    > minikube-v1.25.2.iso: 237.06 MiB / 237.06 MiB [] 100.00% 8.84 MiB p/s 27s
  Starting control plane node minikube in cluster minikube
  Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
  Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
     kubelet.housekeeping-interval=5m
     Generating certificates and keys ...
     Booting up control plane ...
     Configuring RBAC rules ...
  Verifying Kubernetes components...
     Using image gcr.io/k8s-minikube/storage-provisioner:v5
  Enabled addons: default-storageclass, storage-provisioner
  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
[[email protected] ~]$ kubectl get pod -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-64897985d-dxggj            1/1     Running   0          4m46s
kube-system   etcd-minikube                      1/1     Running   0          4m59s
kube-system   kube-apiserver-minikube            1/1     Running   0          5m
kube-system   kube-controller-manager-minikube   1/1     Running   0          4m59s
kube-system   kube-proxy-9q4dg                   1/1     Running   0          4m46s
kube-system   kube-scheduler-minikube            1/1     Running   0          4m59s
kube-system   storage-provisioner                1/1     Running   0          4m57s
[[email protected] ~]$ kubectl get node
NAME       STATUS   ROLES                  AGE    VERSION
minikube   Ready    control-plane,master   5m7s   v1.23.3
[[email protected] ~]$ virt-host-validate
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpu' controller mount-point                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'cpuset' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'devices' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for cgroup 'blkio' controller mount-point                   : PASS
  QEMU: Checking for device assignment IOMMU support                         : PASS
  QEMU: Checking if IOMMU is enabled by kernel                               : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
   LXC: Checking for Linux >= 2.6.26                                         : PASS
   LXC: Checking for namespace ipc                                           : PASS
   LXC: Checking for namespace mnt                                           : PASS
   LXC: Checking for namespace pid                                           : PASS
   LXC: Checking for namespace uts                                           : PASS
   LXC: Checking for namespace net                                           : PASS
   LXC: Checking for namespace user                                          : PASS
   LXC: Checking for cgroup 'memory' controller support                      : PASS
   LXC: Checking for cgroup 'memory' controller mount-point                  : PASS
   LXC: Checking for cgroup 'cpu' controller support                         : PASS
   LXC: Checking for cgroup 'cpu' controller mount-point                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
   LXC: Checking for cgroup 'cpuset' controller support                      : PASS
   LXC: Checking for cgroup 'cpuset' controller mount-point                  : PASS
   LXC: Checking for cgroup 'devices' controller support                     : PASS
   LXC: Checking for cgroup 'devices' controller mount-point                 : PASS
   LXC: Checking for cgroup 'blkio' controller support                       : PASS
   LXC: Checking for cgroup 'blkio' controller mount-point                   : PASS
   LXC: Checking if device /sys/fs/fuse/connections exists                   : FAIL (Load the 'fuse' module to enable /proc/ overrides)

Usage

Start a cluster using the kvm2 driver:

minikube start --driver=kvm2

To make kvm2 the default driver:

minikube config set driver kvm2

Special features

The minikube start command supports 5 additional KVM-specific flags:

  • --gpu: Enable experimental NVIDIA GPU support in minikube
  • --hidden: Hide the hypervisor signature from the guest in minikube
  • --kvm-network: The KVM default network name
  • --network: The dedicated KVM private network name
  • --kvm-qemu-uri: The KVM qemu uri, defaults to qemu:///system
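The flags can be combined on a single start invocation; the values below are only illustrations, not recommendations:

```shell
#!/bin/sh
# Compose a start command using some of the KVM-specific flags (example values).
flags="--driver=kvm2 --kvm-network=default --hidden"
echo "minikube start $flags"
```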

Copyright notice
This article was written by [Backend cloud]; please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/160/202206092035010596.html