[VMware] Disk controller mappings #10454


Open · wants to merge 5 commits into main
Conversation

winterhazel
Member

Description

This is a refactor of the disk controller-related logic for VMware that also adds support for SATA and NVMe controllers.

A detailed description of these changes is available at https://cwiki.apache.org/confluence/display/CLOUDSTACK/Disk+Controller+Mappings.

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to change)
  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (improves an existing feature and functionality)
  • Cleanup (Code refactoring and cleanup, that may add test cases)
  • build/CI
  • test (unit or integration test code)

Feature/Enhancement Scale or Bug Severity

Feature/Enhancement Scale

  • Major
  • Minor

How Has This Been Tested?

The tests below were performed for VMs with the following rootDiskController and dataDiskController configurations:

  • osdefault/osdefault (converted to lsilogic/lsilogic)
  • ide/ide
  • pvscsi/pvscsi
  • sata/sata
  • nvme/nvme
  • sata/lsilogic
  • ide/osdefault
  • osdefault/ide
  1. VM deployment: I deployed one VM with each of the configurations. I verified in vCenter that they had the correct number of disk controllers, and that each volume was associated with the expected controller. The sata/lsilogic VM was the only one that had a data disk; the others only had a root disk.

  2. VM start: I stopped the VMs deployed in (1) and started them again. I verified in vCenter that they had the correct number of disk controllers, and that each volume was associated with the expected controller.

  3. Disk attachment: while the VMs were running, I tried to attach a data disk. All the data disks were attached successfully (except for the VMs using IDE as the data disk controller, which does not support hot-plugging disks; for these, I attached the disks after stopping the VM). I verified that all the disks were using the expected controller. Then, I stopped and started the VMs, and verified that they were still using the expected controllers. Finally, I stopped the VMs and detached the volumes. I verified that they were detached successfully.

  4. VM import: I unmanaged the VMs and imported them back. I verified that their settings were inferred successfully according to the existing disk controllers. Then, I started the VMs, and verified that the controllers and the volumes were configured correctly.
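The osdefault behavior exercised above (osdefault/osdefault resolving to lsilogic/lsilogic for the tested guest OS) can be sketched roughly as follows. The class and method names here are hypothetical illustrations, not the actual CloudStack code:

```java
// Hypothetical sketch: "osdefault" defers the controller choice to the
// guest OS recommendation, while any other configured value is used verbatim.
// In the tests above, osdefault resolved to lsilogic for the tested guest OS.
public class OsDefaultSketch {

    // Illustrative helper, not a real CloudStack method.
    public static String resolveController(String configured, String guestOsRecommended) {
        return "osdefault".equals(configured) ? guestOsRecommended : configured;
    }

    public static void main(String[] args) {
        System.out.println(resolveController("osdefault", "lsilogic")); // prints lsilogic
        System.out.println(resolveController("nvme", "lsilogic"));      // prints nvme
    }
}
```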

The next tests were performed using the following imported VMs:

  • osdefault/osdefault
  • ide/ide
  • nvme/nvme
  • sata/lsilogic
  1. Volume migration: I migrated the volumes from NFS to local storage, and verified that the migration finished successfully. Then, I started the VMs and verified that both the controllers and the disks were configured correctly.

  2. Volume resize: I expanded all of the disks, and verified in vCenter that their size was changed. Then, I started the VMs and verified that both the controllers and the disks were configured correctly.

  3. VM snapshot: I took some VM snapshots, started the VMs and verified that everything was ok. I changed the configurations of the VM using osdefault/osdefault to sata/sata and started the VM to begin the reconfiguration process. I verified that the disk controllers in use were not removed, and that the disks were still associated with the previous controllers; however, the SATA controllers were also created. The VM was working as expected. Finally, I deleted the VM snapshots.

  4. Template creation from volume: I created templates from the root disks. Then, I deployed VMs from the templates. I verified that all the VMs had the same disk controllers as the original VM, and that the only existing disk was correctly associated with the configured root disk controller.

  5. Template creation from volume snapshot: I took snapshots from the root disks, and created templates from the snapshots. Then, I deployed VMs from the templates. I verified that all the VMs had the same disk controllers as the original VM, and that the only existing disk was correctly associated with the configured root disk controller.

  6. VM scale: with the VMs stopped, I scaled them from the Small Instance to the Medium Instance offering. I verified that the offering was changed. I started the VMs, and verified that they were correctly reconfigured in vCenter.

Other tests:

  • System VM creation: after applying the patches, I recreated the SSVM and the CPVM. I verified that they were using a single LSI Logic controller. I also verified the controllers of a new VR and of an existing VR.

  • I attached 3 disks to the ide/ide VM. When trying to attach a 4th disk, I got an expected exception, as the IDE bus had reached the maximum number of devices (the 4th one was the CD/DVD drive).

  • I removed all the disks from the sata/lsilogic VM. I tried to attach the root disk again, and verified that it was attached successfully. I started the VM, and verified that it was configured correctly.

  • I attached 8 disks to the pvscsi/pvscsi VM, and verified that the 8th disk was successfully attached to device number 8 (device number 7 is reserved for the controller).
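The unit-number behavior from the last test (the 8th disk landing on device number 8 because unit 7 is reserved for the SCSI controller) can be illustrated with a small sketch. The helper below is hypothetical and not part of this PR; the reserved-unit convention itself is standard for VMware virtual SCSI buses:

```java
import java.util.Set;

// Illustrative sketch of unit-number assignment on a VMware virtual SCSI bus:
// unit 7 is occupied by the controller itself, so with units 0-6 taken by
// seven disks, the next (8th) disk skips 7 and lands on unit 8.
public class UnitNumberSketch {
    static final int SCSI_RESERVED_UNIT = 7;  // controller occupies unit 7
    static final int SCSI_MAX_UNITS = 16;     // units 0-15 on a virtual SCSI bus

    // Returns the next free SCSI unit number, skipping the reserved slot.
    public static int nextFreeScsiUnit(Set<Integer> occupied) {
        for (int unit = 0; unit < SCSI_MAX_UNITS; unit++) {
            if (unit == SCSI_RESERVED_UNIT || occupied.contains(unit)) {
                continue;
            }
            return unit;
        }
        throw new IllegalStateException("SCSI bus is full");
    }

    public static void main(String[] args) {
        // Seven disks already on units 0-6; the next disk gets unit 8.
        System.out.println(nextFreeScsiUnit(Set.of(0, 1, 2, 3, 4, 5, 6))); // prints 8
    }
}
```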

@winterhazel
Member Author

@blueorangutan package

@winterhazel winterhazel changed the title Disk controller mappings [VMware] Disk controller mappings Feb 24, 2025
@blueorangutan

@winterhazel a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.


codecov bot commented Feb 24, 2025

Codecov Report

Attention: Patch coverage is 63.14779% with 192 lines in your changes missing coverage. Please review.

Project coverage is 16.65%. Comparing base (fa85a75) to head (7476f60).
Report is 2 commits behind head on main.

Files with missing lines Patch % Lines
...he/cloudstack/storage/DiskControllerMappingVO.java 31.86% 61 Missing and 1 partial ⚠️
...oud/hypervisor/vmware/resource/VmwareResource.java 74.30% 31 Missing and 6 partials ⚠️
...m/cloud/hypervisor/vmware/mo/VirtualMachineMO.java 78.46% 23 Missing and 5 partials ⚠️
...tack/storage/dao/DiskControllerMappingDaoImpl.java 0.00% 20 Missing ⚠️
...com/cloud/hypervisor/vmware/util/VmwareHelper.java 86.86% 13 Missing ⚠️
.../com/cloud/agent/api/SecStorageVMSetupCommand.java 0.00% 6 Missing ⚠️
...esource/VmwareSecondaryStorageResourceHandler.java 0.00% 6 Missing ⚠️
...cloud/storage/resource/VmwareStorageProcessor.java 0.00% 5 Missing ⚠️
.../secondarystorage/SecondaryStorageManagerImpl.java 0.00% 4 Missing ⚠️
...oud/hypervisor/vmware/mo/HypervisorHostHelper.java 0.00% 4 Missing ⚠️
... and 4 more
Additional details and impacted files
@@             Coverage Diff              @@
##               main   #10454      +/-   ##
============================================
+ Coverage     16.58%   16.65%   +0.07%     
- Complexity    13870    13963      +93     
============================================
  Files          5719     5721       +2     
  Lines        507194   507098      -96     
  Branches      61573    61482      -91     
============================================
+ Hits          84093    84446     +353     
+ Misses       413681   413222     -459     
- Partials       9420     9430      +10     
Flag Coverage Δ
uitests 3.96% <ø> (ø)
unittests 17.53% <63.14%> (+0.07%) ⬆️


@blueorangutan

Packaging result [SF]: ✖️ el8 ✖️ el9 ✖️ debian ✖️ suse15. SL-JID 12549

@winterhazel
Member Author

@DaanHoogland it seems there were some merge issues in main. org.apache.cloudstack.backup.VeeamBackupProvider is missing some methods and imports.

@DaanHoogland
Contributor

@DaanHoogland it seems there were some merge issues in main. org.apache.cloudstack.backup.VeeamBackupProvider is missing some methods and imports.

I'll check and update

@DaanHoogland
Contributor

@winterhazel , please see #10457 . I have had no time (or infra) to test yet.

@winterhazel
Member Author

@blueorangutan package

@blueorangutan

@winterhazel a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 12586


This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.

@JoaoJandre
Contributor

@winterhazel could you fix the conflicts?

@winterhazel
Member Author

@blueorangutan package

@blueorangutan

@winterhazel a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 13603

@sureshanaparti sureshanaparti requested a review from nvazquez June 5, 2025 09:30

@Copilot Copilot AI left a comment


Pull Request Overview

This PR refactors VMware disk controller handling by introducing a dynamic disk controller mapping backed by a new database table, adds support for SATA and NVMe controllers, and updates all callers to use the new DiskControllerMappingVO model.

  • Centralize disk controller metadata in a new disk_controller_mapping table, DAO, VO, and loader logic.
  • Refactor VmwareResource and related classes to use DiskControllerMappingVO instead of static enums, with full end-to-end support in start/attach flows.
  • Update API, schema, manager, and tests to load and propagate supported disk controllers at runtime.
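As a rough illustration of the mapping idea described above, each entry in the new table ties a user-facing controller name to hypervisor bus metadata. The field names and values below are illustrative assumptions, not the actual `DiskControllerMappingVO` schema from engine/schema:

```java
import java.util.List;
import java.util.Optional;

// Hypothetical, in-memory sketch of the disk controller mapping concept.
// The real implementation stores these rows in the disk_controller_mapping
// table and loads them through DiskControllerMappingDao; names and device
// limits here are illustrative only.
public class MappingSketch {
    public record DiskControllerMapping(String name, String busName, int maxDeviceCount) {}

    static final List<DiskControllerMapping> VMWARE_MAPPINGS = List.of(
            new DiskControllerMapping("lsilogic", "scsi", 16),
            new DiskControllerMapping("pvscsi", "scsi", 16),
            new DiskControllerMapping("sata", "sata", 30),
            new DiskControllerMapping("nvme", "nvme", 15),
            new DiskControllerMapping("ide", "ide", 2));

    // Look up a mapping by its user-facing name.
    public static Optional<DiskControllerMapping> findByName(String name) {
        return VMWARE_MAPPINGS.stream().filter(m -> m.name().equals(name)).findFirst();
    }

    public static void main(String[] args) {
        System.out.println(findByName("sata").map(DiskControllerMapping::busName).orElse("unknown")); // prints sata
    }
}
```

Replacing the old static enums with rows like these is what lets callers such as `QueryManagerImpl` list supported controllers dynamically instead of hard-coding them.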

Reviewed Changes

Copilot reviewed 29 out of 29 changed files in this pull request and generated 3 comments.

Show a summary per file
File Description
vmware-base/src/main/java/com/cloud/hypervisor/vmware/mo/ClusterMO.java Change createBlankVm signature to use DiskControllerMappingVO.
vmware-base/src/main/java/com/cloud/hypervisor/vmware/mo/BaseMO.java Add protected no-arg constructor
vmware-base/pom.xml Add dependency on cloud-engine-schema
services/secondary-storage/controller/.../SecondaryStorageManagerImpl.java Inject DiskControllerMappingDao and set supported controllers on setup
server/src/main/java/com/cloud/api/query/QueryManagerImpl.java Replace hard-coded controller lists with dynamic DAO lookup
plugins/hypervisors/vmware/.../VmwareStorageProcessor.java Simplify attachVolume to use DiskControllerMappingVO
plugins/hypervisors/vmware/.../VmwareSecondaryStorageResourceHandler.java Handle new SecStorageVMSetupCommand
plugins/hypervisors/vmware/.../VmwareResource.java Major refactor to use DiskControllerMappingVO for disk controller ops
plugins/hypervisors/vmware/.../VmwareManagerImpl.java Load mappings at configuration time
plugins/hypervisors/vmware/.../VmwareVmImplementer.java Update default root-controller detail to use enum
engine/schema/.../schema-42010to42100.sql Create disk_controller_mapping table and seed default mappings
engine/schema/src/.../DiskControllerMappingDaoImpl.java & DiskControllerMappingDao.java New DAO for disk controller mapping
engine/schema/src/.../DiskControllerMappingVO.java New JPA entity for disk controller mapping
core/src/main/java/com/cloud/agent/api/SecStorageVMSetupCommand.java Add field to carry supported disk controllers

Comment on lines +2104 to +2105
String rootDiskControllerDetail = controllerInfo.get(VmDetailConstants.ROOT_DISK_CONTROLLER);
String dataDiskControllerDetail = controllerInfo.get(VmDetailConstants.DATA_DISK_CONTROLLER);

Copilot AI Jun 9, 2025


There’s no null-check for controllerInfo or its keys before calling get(...). Add a guard to avoid NPEs or provide a sensible default when controllerInfo is missing or incomplete.

Suggested change
String rootDiskControllerDetail = controllerInfo.get(VmDetailConstants.ROOT_DISK_CONTROLLER);
String dataDiskControllerDetail = controllerInfo.get(VmDetailConstants.DATA_DISK_CONTROLLER);
String rootDiskControllerDetail = null;
String dataDiskControllerDetail = null;
if (controllerInfo != null) {
    rootDiskControllerDetail = controllerInfo.getOrDefault(VmDetailConstants.ROOT_DISK_CONTROLLER, null);
    dataDiskControllerDetail = controllerInfo.getOrDefault(VmDetailConstants.DATA_DISK_CONTROLLER, null);
}

@@ -5320,8 +5325,9 @@ private void fillVMOrTemplateDetailOptions(final Map<String, List<String>> optio

if (HypervisorType.VMware.equals(hypervisorType)) {
options.put(VmDetailConstants.NIC_ADAPTER, Arrays.asList("E1000", "PCNet32", "Vmxnet2", "Vmxnet3"));
options.put(VmDetailConstants.ROOT_DISK_CONTROLLER, Arrays.asList("osdefault", "ide", "scsi", "lsilogic", "lsisas1068", "buslogic", "pvscsi"));
options.put(VmDetailConstants.DATA_DISK_CONTROLLER, Arrays.asList("osdefault", "ide", "scsi", "lsilogic", "lsisas1068", "buslogic", "pvscsi"));
List<String> availableDiskControllers = diskControllerMappingDao.listForHypervisor(HypervisorType.VMware).stream().map(DiskControllerMappingVO::getName).collect(Collectors.toList());

Copilot AI Jun 9, 2025


[nitpick] The list of controller names is not sorted, which can lead to inconsistent option ordering in the UI. Consider sorting alphabetically to ensure a predictable user experience.

Suggested change
List<String> availableDiskControllers = diskControllerMappingDao.listForHypervisor(HypervisorType.VMware).stream().map(DiskControllerMappingVO::getName).collect(Collectors.toList());
List<String> availableDiskControllers = diskControllerMappingDao.listForHypervisor(HypervisorType.VMware).stream()
.map(DiskControllerMappingVO::getName)
.sorted()
.collect(Collectors.toList());

if (currentBusName.startsWith(mapping.getBusName())) {
logger.debug("Choosing disk controller [{}] for virtual machine [{}] based on current bus name [{}].", mapping.getName(), vmMo, currentBusName);
return mapping;
}
}
}

Copilot AI Jun 9, 2025


[nitpick] Falling back silently to an arbitrary existing controller may be confusing in failure scenarios. Add a debug or warning log here to record which controller was chosen and why the bus name lookup failed.

Suggested change
}
}
logger.debug("Falling back to an arbitrary existing disk controller for virtual machine [{}] as no matching controller was found based on the current bus name.", vmMo);

@DaanHoogland
Contributor

@blueorangutan test keepEnv

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian test result (tid-13478)
Environment: kvm-ol8 (x2), Advanced Networking with Mgmt server ol8
Total time taken: 67978 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr10454-t13478-kvm-ol8.zip
Smoke tests completed. 141 look OK, 0 have errors, 0 did not run


This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.
