Compare commits

...

57 Commits

Author SHA1 Message Date
Daan Hoogland
619a4a9128 4.21/main Health Check, please don't merge this!
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-10-20 11:23:38 +02:00
Erik Böck
f63118c011
Add erikbocks as a collaborator (#11863) 2025-10-20 10:27:34 +02:00
Abhishek Kumar
4cdcde2fe7
server: do not return extension path to non root admins (#11856)
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-10-17 15:06:28 +05:30
Abhishek Kumar
d8766418e0 extensions: custom action entity access 2025-10-17 14:08:28 +05:30
Nicolas Vazquez
e7a55a766c
Fixes for Import VM Tasks listing (#11841)
* Fix import VM tasks pagination

* Fix UI for pagination and proper listing

* Fixes and improvements

* Polish UI

* Restore config.json

* Fix state on parameter description
2025-10-16 18:56:41 +05:30
Harikrishna Patnala
8b9f5fd8f9 Merge branch '4.20' 2025-10-16 13:39:40 +05:30
Abhishek Kumar
03a4b9f4fd
server,utils: improve js interpretation functionality
Make JS interpretation functionalities configurable via a hidden config
- js.interpretation.enabled
The default value is false, which disables such functionalities, i.e., new
heuristic rules cannot be added or updated.

For JsInterpretor, use --no-java --no-syntax-extensions args and a deny-all ClassFilter.
Replace string-spliced vars with ENGINE_SCOPE Bindings, use a fresh ScriptContext per run, and compile before eval.
Use a named daemon worker with hard timeouts and capture stdout.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-10-16 09:49:36 +02:00
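
The hardening described in this commit can be pictured with a minimal sketch, assuming the standalone Nashorn engine (org.openjdk.nashorn); the class and method names below are illustrative only, not CloudStack's actual JsInterpretor implementation, and the stdout capture mentioned in the commit is omitted.

```java
import java.util.Map;
import java.util.concurrent.*;
import javax.script.*;
import org.openjdk.nashorn.api.scripting.NashornScriptEngineFactory;

// Illustrative sketch only, not the CloudStack implementation.
public class SandboxedJsRunnerSketch {

    // Single, named daemon worker so every evaluation can be given a hard timeout.
    private final ExecutorService worker = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "js-interpretation-worker");
        t.setDaemon(true);
        return t;
    });

    // --no-java / --no-syntax-extensions plus a deny-all ClassFilter: scripts cannot reach any Java class.
    private final ScriptEngine engine = new NashornScriptEngineFactory().getScriptEngine(
            new String[] {"--no-java", "--no-syntax-extensions"},
            getClass().getClassLoader(),
            className -> false);

    public Object run(String script, Map<String, Object> variables, long timeoutMs) throws Exception {
        // Compile first, then evaluate against a fresh ScriptContext per run,
        // passing variables as ENGINE_SCOPE bindings instead of splicing them into the script source.
        CompiledScript compiled = ((Compilable) engine).compile(script);
        ScriptContext context = new SimpleScriptContext();
        Bindings bindings = engine.createBindings();
        bindings.putAll(variables);
        context.setBindings(bindings, ScriptContext.ENGINE_SCOPE);

        Future<Object> result = worker.submit(() -> compiled.eval(context));
        try {
            return result.get(timeoutMs, TimeUnit.MILLISECONDS); // hard timeout
        } catch (TimeoutException e) {
            result.cancel(true);
            throw e;
        }
    }
}
```
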
Abhishek Kumar
c8d44d92a7
api,server: fix entity access
Added access check for:
- createNetworkACL
- listNetworkACLs
- listResourceDetails
- listVirtualMachinesUsageHistory
- listVolumesUsageHistory

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-10-16 09:49:34 +02:00
Abhishek Kumar
eee43e534f
cloudutils: fix warning, error during kvm agent installation (#11318)
* cloudutils: fix warning, error during kvm agent installation

Fixes #10379

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* Update utilities.py

---------

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-10-15 20:31:00 +02:00
Pearl Dsilva
0e8b0b8e40
Allow counters to be created with same name, provider and source as a deleted one (#10223) 2025-10-15 13:06:36 +02:00
Wei Zhou
b82369c241
systemvm: fix duplicated "en_US.UTF-8 UTF-8" in /etc/locale.gen (#11823) 2025-10-15 11:42:24 +02:00
Pearl Dsilva
f4b6a74a94
Add support for CSI driver in CKS (#11419)
* Support creation of PVs (persistent volumes) in CloudStack projects

* add support for snapshot APIs for project role

* Add support to setup csi driver on k8s cluster creation

* fix deploy script

* update response

* fix table name

* fix linter

* show if csi driver is setup in cluster

* delete pvs whose reclaim policy is delete when cluster is destroyed

* update ref

* move changes to 4.22

* fix variables

* fix eof
2025-10-15 11:03:47 +05:30
Wei Zhou
4327871036
Routed: fix create network exception when auto-allocation is disabled (#11624)
* Routed: fix create network exception when auto-allocation is disabled for regular users

* routed: throw InvalidParameterValueException instead of CloudRuntimeException, which gives a vague message to regular users
2025-10-14 13:00:33 +02:00
Abhisar Sinha
046014b4c5
NAS BnR: Create Instance from Backup issues (#11754)
* add createCrossZoneInstanceEnabled to BackupOfferingResponse

* show use IP Address from Backup button when original instance is expunged

* Fix NPE in takeBackup if the VM template is deleted.

* Add since to Cross zone instance creation in BackupOfferingResponse.java

Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>

* Store and show Guest os type in the backup metadata

* show warning in create instance from backup form if guest os type is different

* show warning in create instance from backup form if guest os type is different

* backupvmexpunged -> isbackupvmexpunged

* review comments

* fix npe

* improve err msg

* err msg

---------

Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
2025-10-14 14:51:57 +05:30
Rohit Yadav
6f931dbd00
agent: increase timeout for host arch retrieval (#11254) (#11822)
Cherry-picked from 44f80648a9ea818e34997416aabbcd95cb03f847

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-10-14 10:53:45 +02:00
Harikrishna
c0a4392b05
Fix volume copy from primary to primary in simulator (#11836) 2025-10-14 14:01:44 +05:30
dahn
f71d3a8e9f
update the developers guide link on the API page (#11832)
Co-authored-by: Daan Hoogland <dahn@apache.org>
2025-10-14 10:13:31 +02:00
Manoj Kumar
9e535e35d2
Support xz format for template registration (#11786) 2025-10-14 09:13:12 +02:00
Abhishek Kumar
dfcbd2e977
server: consistent behaviour for list apis with project=-1 (#11767)
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-10-14 09:06:53 +02:00
julien-vaz
a574f7ac99
Add logs for host removal (#10423)
Co-authored-by: Julien Hervot de Mattos Vaz <julien.vaz@scclouds.com.br>
Co-authored-by: Bernardo De Marco Gonçalves <bernardomg2004@gmail.com>
2025-10-14 08:49:15 +02:00
CodeBleu
c9ce6e305c
ui: Allow edit source CIDR on load balancer rule (#11766) 2025-10-14 08:30:28 +02:00
Pearl Dsilva
5e7ae227d3
UI: Prevent exceptions when a disabled network service provider is viewed (#11413) 2025-10-14 08:24:59 +02:00
Abhishek Kumar
0ca63f36a5
api,server,ui: allow cleaning up external details for host and serviceoffering (#11548) 2025-10-13 16:21:43 +02:00
John Bampton
349feebd15
Standardize Markdown headings; enforce MD003 with markdownlint (#11688) 2025-10-13 17:37:32 +05:30
John Bampton
cdb0604e7b
pre-commit: enforce mixed-line-ending for all files (#11667) 2025-10-13 16:26:15 +05:30
John Bampton
e27528f8b2
Update GitHub Actions (#11664) 2025-10-13 16:25:58 +05:30
John Bampton
22ba8dd504
Remove misspelled file not found from rat excludes (#11665) 2025-10-13 16:23:09 +05:30
Vishesh
0ca267f516
Allow uploading of ISO for creating kubernetes supported versions (#9561) 2025-10-13 12:51:30 +02:00
Rohit Yadav
8464e46b53
PR #11778 with changes for main branch (#11781)
* systemvmtemplate: Bump Debian version to 12.12.0

* systemvmtemplate: bump version to 4.22

This bumps the systemvmtemplate version to 4.22 for use with the
main/4.22 branch.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>

---------

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
2025-10-13 15:09:25 +05:30
Pearl Dsilva
0e93ae3bdf
UI: Add validator for CIDR being passed (#11465) 2025-10-13 11:18:32 +02:00
John Bampton
a5a934dac1
pre-commit: add hooks check-illegal-windows-names and file-contents-sorter (#11662) 2025-10-13 13:59:42 +05:30
Layon
136ea3eafa
UI: Removal of UI blockage to access the changeOfferingForVolume API (#10135) 2025-10-13 10:27:36 +02:00
Rohit Yadav
1e23d6bc20
server: enable KVM volume and VM snapshot by default (#11446)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2025-10-13 09:53:11 +02:00
Wei Zhou
162c45f8fa
api/server: list networks by name (#11470)
* api/server: list networks by name

* Update api/src/main/java/org/apache/cloudstack/api/command/user/network/ListNetworksCmd.java
2025-10-13 12:39:13 +05:30
Vishesh
0b9afe77ca
Enforce distinct hostnames network (#10212)
* Check for unique hostnames for all networks in the vpc

* Address comments
2025-10-13 12:38:31 +05:30
John Bampton
cc6ee906d5
Markdown: add documentation on pre-commit usage (#11680) 2025-10-13 12:11:55 +05:30
Wei Zhou
86cad79c15
importvm: fix IP address allocation on Shared networks (#11811) 2025-10-13 08:16:46 +02:00
Nicolas Vazquez
b106d6e190
VMware to KVM Migrations improvements (#11594)
* Add source VM name on virt-v2v migration log entries

* Improve the feedback by displaying the running importing tasks

* Add source VM name prefix on more conversion logs

* Improve listing and also list completed tasks

* Pass extra parameters to virt-v2v if administrator allows via global setting

* Add Force converting directly to storage pool option

* Refactor based on review comments

* Add properties for env vars for the instance conversion

* Add separate component for Import VM Tasks

* applying copilot suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Fix importing unmanaged instances due to incorrect internal name

* Add VM prefix on each log operation for conversion

* Log the original VM name instead of the cloned VM in case of cloning

* Allow searching storage pool by UUID after conversion to support SharedMountPoint

* Fix search pools logic

* Improve UI and add checks for force convert to pool parameter

* Support Local storage when forceconverttopool is set to true

* Add config key for allowed extra params and add validation

* Fix params lists

* Fix compile error

* Remove extra stubbings

* Fix extra params execution

---------

Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
2025-10-10 20:00:29 -03:00
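
The "allowed extra params" validation mentioned above might look roughly like the following; the method, allow-list handling, and message are assumptions for illustration, not the PR's actual code.

```java
import java.util.Set;
import com.cloud.exception.InvalidParameterValueException;
import org.apache.commons.lang3.StringUtils;

// Hypothetical sketch: reject user-supplied virt-v2v flags that the administrator has not allow-listed.
final class ExtraParamsValidationSketch {
    static void validateExtraParams(String extraParams, Set<String> allowedFlags) {
        if (StringUtils.isBlank(extraParams)) {
            return; // nothing to validate
        }
        for (String token : extraParams.trim().split("\\s+")) {
            String flag = token.split("=", 2)[0];
            if (!allowedFlags.contains(flag)) {
                throw new InvalidParameterValueException(
                        "virt-v2v extra parameter not allowed by the administrator: " + flag);
            }
        }
    }
}
```
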
Harikrishna
b99a03092f
Added Extension for MaaS integration in CloudStack (#11613)
* Adding extension support for Baremetal MaaS

* Update engine/schema/src/main/resources/META-INF/db/schema-42100to42200.sql

---------

Co-authored-by: Rohit Yadav <rohityadav89@gmail.com>
2025-10-10 14:57:30 +05:30
Suresh Kumar Anaparti
df49c4f14b
UI: Move Backup Repository to Infrastructure (from Configuration) (#11738)
* UI: Move Backup Repository to Infrastructure (from Configuration)

* Updated nas doc help link
2025-10-10 13:25:05 +05:30
Abhishek Kumar
67250d99d4
ui: fix add host form state on submit (#11815) 2025-10-10 13:09:25 +05:30
Suresh Kumar Anaparti
2b1f0bbbdb
UI: Fix for cluster addition in VMware (#11812) 2025-10-10 12:35:41 +05:30
Pearl Dsilva
973819dad6
API: Add support to list all snapshot policies & backup schedules (#11587)
* API: Add support to list all snapshot policies & backup schedules

* Add support for backup policy listing without tying it to the vmid

* add tests for snapshot policy listing

* update tests for listbackupschedules

* remove trailing spaces and fix lint failure

* Add upgrade test

* remove unused import

* add create policy - snap/backup in the list view with resource (volume/vm) selection

* add translations

* refresh parent list

* remove unnecessary alert info

* fix checks for UI backup schedule list view

* fix checks for UI backup schedule list view

* add back access checks

* add since param

* fix failing test

* update snapshot policy and backup schedule ownership when VM is moved

* fix issue with showing vm selection

* fix unit test failure

* Update list snappolicy & backup schedule logic to list only those that belong to a proj or for root admin those that belong to it, unless listall & projid is passed

* fix test

* support snap / backup policy search using keyword

* fix tests
2025-10-09 17:22:17 +05:30
Suresh Kumar Anaparti
f67b738eb3
Migrate volume improvements, to bypass secondary storage when copy volume between pools is allowed directly (#11625)
* Migrate volume improvements, to bypass secondary storage when copy volume between pools is allowed directly

* Bypass secondary storage for copy volume between zone-wide pools and
- local storage on host in the same zone
- cluster-wide pools in the same zone

* Bypass secondary storage for volumes on ceph/rbd pool when the scope permits

* Fix dest disk format while migrating volume from ceph/rbd to nfs, and some code improvements

* unit tests

* Update suitable disk offering(s) for volume(s) after migrate VM with volumes when change in pool type (shared or local)

Currently, Migrate VM with volume(s) bypasses the service and disk offerings of the volumes, as the target pools for migration are specified,
which ignores the offerings. Offering change is required when pool type (shared or local) is changed, mainly
- when volume on shared pool is migrated to local pool
- when volume on local pool is migrated to shared pool

* Update with a proper message while migrating a volume when the target pool and offering type mismatch (both are not shared/local)

* Consider host scope first during endpoint selection while copying between primary storages

* Update disk offering count (for listDiskOfferings api) while removing offerings whose tags mismatch the storage tags
2025-10-09 16:00:46 +05:30
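
The bypass conditions listed above can be pictured with a simplified sketch; the helper below and its exact checks are illustrative assumptions, not the PR's implementation.

```java
import com.cloud.storage.Storage.StoragePoolType;
import com.cloud.storage.StoragePool;

// Hypothetical illustration of the bypass rule described above; not CloudStack's actual code.
final class VolumeCopyPathSketch {

    static boolean isZoneWide(StoragePool pool) {
        // A pool that is neither host-local nor tied to a cluster is zone-wide.
        return !pool.isLocal() && pool.getClusterId() == null;
    }

    static boolean canBypassSecondaryStorage(StoragePool src, StoragePool dest) {
        if (src.getDataCenterId() != dest.getDataCenterId()) {
            return false; // different zones: still stage through secondary storage
        }
        // Zone-wide source copied to local storage on a host, or to a cluster-wide pool, in the same zone.
        if (isZoneWide(src) && (dest.isLocal() || dest.getClusterId() != null)) {
            return true;
        }
        // Ceph/RBD source: direct copy when the pool scope permits (here: destination is zone-wide).
        if (src.getPoolType() == StoragePoolType.RBD && isZoneWide(dest)) {
            return true;
        }
        return false;
    }
}
```
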
Abhisar Sinha
4d95f08a3a
Delete template from storage pool instantly if no volume is using it (#11782) 2025-10-09 09:41:18 +02:00
Abhishek Kumar
a6ef24d167
server: consistent domainpath in api responses (#11589)
* server: consistent domainpath in api responses

Currently, some APIs return domainpath as 'ROOT/domain1/domain2' while
others return it as '/domain1/domain2'. This PR makes the response
consistent, like 'ROOT/domain1/domain2'.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* more changes

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

---------

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-10-09 13:06:28 +05:30
dahn
309b444205
pom.xml: update jetty version (#11793)
* update jetty

* Rollback jetty-maven-plugin version in pom.xml

Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>

---------

Co-authored-by: Daan Hoogland <dahn@apache.org>
Co-authored-by: Wei Zhou <weizhou@apache.org>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2025-10-09 08:39:45 +02:00
Wei Zhou
6089c161a6
Merge remote-tracking branch 'apache/4.20' 2025-10-08 15:40:33 +02:00
Wei Zhou
89d2b17461
storage: change storage pool to Up state when cancel storage migration (#11773)
* storage: change storage pool to Up state when cancel storage migration

* Update 11773: connect host to shared pool after cancelling storage migration

* Update 11773: update db only

* Update 11773: skip capacity update for storpool
2025-10-08 15:34:59 +02:00
Suresh Kumar Anaparti
b143ddc405
Sanitize the rbd file cmd parameter logs during qemu-img convert (through Script) (#11801) 2025-10-08 13:55:08 +02:00
Henrique Sato
cc3170577c
Add Hypervisor default as cache mode for disk offerings (#10282)
Co-authored-by: Henrique Sato <henrique.sato@scclouds.com.br>
2025-10-08 13:39:28 +02:00
Manoj Kumar
9f20979bce
UI: Fix primary storage for datastore cluster and retain traffic labels during zone deployment (#11760) 2025-10-08 13:38:03 +02:00
Abhishek Kumar
a15fbd9bcc
refactor: remove use of term entry-point from extensions code base (#11488)
Addresses #11483

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-10-08 15:42:43 +05:30
dahn
270d3f9a2d
UI: Deal with cross-site api call after login (#10533) 2025-10-08 10:42:00 +02:00
Wei Zhou
314c4591ec
systemvmtemplate: Bump Debian version to 12.12.0 (#11778) 2025-10-08 10:25:36 +02:00
Suresh Kumar Anaparti
09b63bc2e8
Storage pool response improvements (#10740)
* Return details of the storage pool in the response including url, and update capacityBytes and capacityIops if applicable while creating storage pool

* Added capacitybytes parameter to the storage pool response in sync with the capacityiops response parameter and createStoragePool cmd request parameter (existing disksizetotal parameter in the storage pool response can be deprecated)

* Don't keep url in details

* Persist the capacityBytes and capacityIops in the storage_pool_details table while creating storage pool as well, for consistency - as these are updated during update storage pool

* rebase with main fixes
2025-10-08 11:20:37 +05:30
Vishesh
d2615bb142
Add support for providing userdata to system VMs (#11654)
This PR adds support for specifying user data (cloud-init) for system VMs via zone-scoped global settings. This allows operators to customize the system VMs and set up monitoring, logging, or execute custom commands.

We set the user data from the global setting in /var/cache/cloud/cmdline and use the NoCloud datasource to process it. The cloud-init service is still disabled in the system VMs; the user data is executed as part of the cloud-postinit service, which runs the postinit.sh script.

Added global settings:
systemvm.userdata.enabled - Disabled by default. Needs to be enabled to utilize the feature.
console.proxy.vm.userdata - UUID of the User data to be used for Console Proxy
secstorage.vm.userdata - UUID of the User data to be used for Secondary Storage VM
virtual.router.userdata - UUID of the User data to be used for Virtual Routers
2025-10-08 10:44:26 +05:30
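
These zone-scoped settings would presumably be declared as ConfigKeys; below is a partial sketch following the constructor pattern visible later in this diff (the category, descriptions, and dynamic flag are assumptions, not the PR's actual declarations).

```java
import org.apache.cloudstack.framework.config.ConfigKey;

// Assumed declarations mirroring the ConfigKey pattern used elsewhere in this changeset.
interface SystemVmUserDataSettingsSketch {

    ConfigKey<Boolean> SystemVmUserDataEnabled = new ConfigKey<>("Advanced", Boolean.class,
            "systemvm.userdata.enabled", "false",
            "Enable passing registered user data to system VMs via the NoCloud datasource.",
            false, ConfigKey.Scope.Zone);

    ConfigKey<String> ConsoleProxyVmUserData = new ConfigKey<>("Advanced", String.class,
            "console.proxy.vm.userdata", null,
            "UUID of the user data applied to Console Proxy VMs.",
            false, ConfigKey.Scope.Zone);
}
```
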
325 changed files with 41464 additions and 34848 deletions

View File

@ -59,6 +59,7 @@ github:
- abh1sar
- rosi-shapeblue
- sudo87
- erikbocks
protected_branches: ~

View File

@ -18,9 +18,6 @@
# MD001/heading-increment Heading levels should only increment by one level at a time
MD001: false
# MD003/heading-style Heading style
MD003: false
# MD004/ul-style Unordered list style
MD004: false

View File

@ -375,6 +375,7 @@ propogate
provison
psudo
pyhsical
re-use
readabilty
readd
reccuring
@ -411,7 +412,6 @@ retriving
retrun
retuned
returing
re-use
rever
rocessor
runing

View File

@ -30,17 +30,17 @@ jobs:
build:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Set up JDK 17
uses: actions/setup-java@v4
uses: actions/setup-java@v5
with:
distribution: 'temurin'
java-version: '17'
cache: 'maven'
- name: Set up Python
uses: actions/setup-python@v5
uses: actions/setup-python@v6
with:
python-version: '3.10'
architecture: 'x64'

View File

@ -216,19 +216,19 @@ jobs:
smoke/test_list_volumes"]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Set up JDK 17
uses: actions/setup-java@v4
uses: actions/setup-java@v5
with:
distribution: 'temurin'
java-version: '17'
cache: 'maven'
- name: Set up Python
uses: actions/setup-python@v5
uses: actions/setup-python@v6
with:
python-version: '3.10'
architecture: 'x64'

View File

@ -32,12 +32,12 @@ jobs:
name: codecov
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Set up JDK 17
uses: actions/setup-java@v4
uses: actions/setup-java@v5
with:
distribution: 'temurin'
java-version: '17'

View File

@ -35,7 +35,7 @@ jobs:
language: ["actions"]
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:

View File

@ -47,7 +47,7 @@ jobs:
- name: Set Docker repository name
run: echo "DOCKER_REPOSITORY=apache" >> $GITHUB_ENV
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Set ACS version
run: echo "ACS_VERSION=$(grep '<version>' pom.xml | head -2 | tail -1 | cut -d'>' -f2 |cut -d'<' -f1)" >> $GITHUB_ENV

View File

@ -32,7 +32,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Check Out
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: Install
run: |
python -m pip install --upgrade pip

View File

@ -32,12 +32,12 @@ jobs:
name: Main Sonar JaCoCo Build
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Set up JDK17
uses: actions/setup-java@v4
uses: actions/setup-java@v5
with:
distribution: 'temurin'
java-version: '17'

View File

@ -30,9 +30,9 @@ jobs:
build:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Set up JDK 17
uses: actions/setup-java@v4
uses: actions/setup-java@v5
with:
java-version: '17'
distribution: 'adopt'

View File

@ -33,13 +33,13 @@ jobs:
name: Sonar JaCoCo Coverage
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
ref: "refs/pull/${{ github.event.number }}/merge"
fetch-depth: 0
- name: Set up JDK17
uses: actions/setup-java@v4
uses: actions/setup-java@v5
with:
distribution: 'temurin'
java-version: '17'

View File

@ -31,10 +31,10 @@ jobs:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Set up Node
uses: actions/setup-node@v3
uses: actions/setup-node@v5
with:
node-version: 16

1
.markdownlintignore Normal file
View File

@ -0,0 +1 @@
CHANGES.md

View File

@ -32,11 +32,12 @@ repos:
name: run gitleaks
description: detect hardcoded secrets
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v5.0.0
rev: v6.0.0
hooks:
#- id: check-added-large-files
- id: check-case-conflict
#- id: check-executables-have-shebangs
- id: check-illegal-windows-names
- id: check-merge-conflict
- id: check-shebang-scripts-are-executable
files: \.sh$
@ -50,7 +51,7 @@ repos:
exclude: >
(?x)
^scripts/vm/systemvm/id_rsa\.cloud$|
^server/src/test/java/org/apache/cloudstack/network/ssl/CertServiceTest.java$|
^server/src/test/java/org/apache/cloudstack/network/ssl/CertServiceTest\.java$|
^server/src/test/java/com/cloud/keystore/KeystoreTest\.java$|
^server/src/test/resources/certs/dsa_self_signed\.key$|
^server/src/test/resources/certs/non_root\.key$|
@ -61,13 +62,15 @@ repos:
^services/console-proxy/rdpconsole/src/test/doc/rdp-key\.pem$|
^systemvm/agent/certs/localhost\.key$|
^systemvm/agent/certs/realhostip\.key$|
^test/integration/smoke/test_ssl_offloading.py$
^test/integration/smoke/test_ssl_offloading\.py$
- id: end-of-file-fixer
exclude: \.vhd$
- id: file-contents-sorter
args: [--unique]
files: ^\.github/linters/codespell\.txt$
- id: fix-byte-order-marker
- id: forbid-submodules
- id: mixed-line-ending
exclude: \.cs$
- id: trailing-whitespace
files: \.(bat|cfg|cs|css|gitignore|header|in|install|java|md|properties|py|rb|rc|sh|sql|te|template|txt|ucls|vue|xml|xsl|yaml|yml)$|^cloud-cli/bindir/cloud-tool$|^debian/changelog$
args: [--markdown-linebreak-ext=md]

View File

@ -1,31 +1,28 @@
Contributing to Apache CloudStack (ACS)
=======================================
# Contributing to Apache CloudStack (ACS)
## Summary
Summary
-------
This document covers how to contribute to the ACS project. ACS uses GitHub PRs to manage code contributions.
These instructions assume you have a GitHub.com account, so if you don't have one you will have to create one. Your proposed code changes will be published to your own fork of the ACS project, and you will submit a Pull Request for your changes to be added.
_Let's get started!!!_
Bug fixes
---------
## Bug fixes
It's very important that we can easily track bug fix commits, so their hashes should remain the same in all branches.
Therefore, a pull request (PR) that fixes a bug, should be sent against a release branch.
This can be either the "current release" or the "previous release", depending on which ones are maintained.
Since the goal is a stable main, bug fixes should be "merged forward" to the next branch in order: "previous release" -> "current release" -> main (in other words: old to new)
Developing new features
-----------------------
## Developing new features
Development should be done in a feature branch, branched off of main.
Send a PR(steps below) to get it into main (2x LGTM applies).
PR will only be merged when main is open, will be held otherwise until main is open again.
No back porting / cherry-picking features to existing branches!
PendingReleaseNotes file
------------------------
## PendingReleaseNotes file
When developing a new feature or making a (major) change to an existing feature you are encouraged to append this to the PendingReleaseNotes file so that the Release Manager can
use this file as a source of information when compiling the Release Notes for a new release.
@ -33,8 +30,7 @@ When adding information to the PendingReleaseNotes file make sure that you write
Updating the PendingReleaseNotes file is preferably a part of the original Pull Request, but that is up to the developers' discretion.
Fork the code
-------------
## Fork the code
In your browser, navigate to: [https://github.com/apache/cloudstack](https://github.com/apache/cloudstack)
@ -51,8 +47,7 @@ $ git fetch upstream
$ git rebase upstream/main
```
Making changes
--------------
## Making changes
It is important that you create a new branch to make changes on and that you do not change the `main` branch (other than to rebase in changes from `upstream/main`). In this example I will assume you will be making your changes to a branch called `feature_x`. This `feature_x` branch will be created on your local repository and will be pushed to your forked repository on GitHub. Once this branch is on your fork you will create a Pull Request for the changes to be added to the ACS project.
@ -68,8 +63,7 @@ $ git commit -a -m "descriptive commit message for your changes"
> The `-b` specifies that you want to create a new branch called `feature_x`. You only specify `-b` the first time you checkout because you are creating a new branch. Once the `feature_x` branch exists, you can later switch to it with only `git checkout feature_x`.
Rebase `feature_x` to include updates from `upstream/main`
------------------------------------------------------------
## Rebase `feature_x` to include updates from `upstream/main`
It is important that you maintain an up-to-date `main` branch in your local repository. This is done by rebasing in the code changes from `upstream/main` (the official ACS project repository) into your local repository. You will want to do this before you start working on a feature as well as right before you submit your changes as a pull request. I recommend you do this process periodically while you work to make sure you are working off the most recent project code.
@ -89,8 +83,7 @@ $ git rebase main
> Now your `feature_x` branch is up-to-date with all the code in `upstream/main`.
Make a GitHub Pull Request to contribute your changes
-----------------------------------------------------
## Make a GitHub Pull Request to contribute your changes
When you are happy with your changes, and you are ready to contribute them, you will create a Pull Request on GitHub to do so. This is done by pushing your local changes to your forked repository (default remote name is `origin`) and then initiating a pull request on GitHub.
@ -114,8 +107,7 @@ To initiate the pull request, do the following:
If you are requested to make modifications to your proposed changes, make the changes locally on your `feature_x` branch, re-push the `feature_x` branch to your fork. The existing pull request should automatically pick up the change and update accordingly.
Cleaning up after a successful pull request
-------------------------------------------
## Cleaning up after a successful pull request
Once the `feature_x` branch has been committed into the `upstream/main` branch, your local `feature_x` branch and the `origin/feature_x` branch are no longer needed. If you want to make additional changes, restart the process with a new branch.
@ -129,6 +121,6 @@ $ git branch -D feature_x
$ git push origin :feature_x
```
Release Principles
------------------
## Release Principles
Detailed information about ACS release principles is available at https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+principles+for+Apache+CloudStack+4.6+and+up

43
PRE-COMMIT.md Normal file
View File

@ -0,0 +1,43 @@
# pre-commit
We run [pre-commit](https://pre-commit.com/) with
[GitHub Actions](https://github.com/apache/cloudstack/blob/main/.github/workflows/linter.yml) so installation on your
local machine is currently optional.
The `pre-commit` [configuration file](https://github.com/apache/cloudstack/blob/main/.pre-commit-config.yaml)
is in the repository root. Before you can run the hooks, you need to have `pre-commit` installed. `pre-commit` is a
[Python package](https://pypi.org/project/pre-commit/).
From the repository root run: `pip install -r requirements-dev.txt` to install `pre-commit` and after you install
`pre-commit` you will then need to install the pre-commit hooks by running `pre-commit install`.
The hooks run when running `git commit` and also from the command line with `pre-commit`. Some of the hooks will auto
fix the code after the hooks fail whilst most will print error messages from the linters. If a hook fails the overall
commit will fail, and you will need to fix the issues or problems and `git add` and `git commit` again. On `git commit`
the hooks will run mostly only against modified files so if you want to test all hooks against all files and when you
are adding a new hook you should always run:
`pre-commit run --all-files`
Sometimes you might need to skip a hook to commit because the hook is stopping you from committing or your computer
might not have all the installation requirements for all the hooks. The `SKIP` variable is comma separated for two or
more hooks:
`SKIP=codespell git commit -m "foo"`
The same applies when running pre-commit:
`SKIP=codespell pre-commit run --all-files`
Occasionally you can have more serious problems when using `pre-commit` with `git commit`. You can use `--no-verify` to
commit and stop `pre-commit` from checking the hooks. For example:
`git commit --no-verify -m "foo"`
If you are having major problems using `pre-commit` you can always uninstall it.
To run a single hook use `pre-commit run --all-files <hook_id>`
For example just run the `codespell` hook:
`pre-commit run --all-files codespell`

View File

@ -12,7 +12,7 @@
[![Apache CloudStack](tools/logo/apache_cloudstack.png)](https://cloudstack.apache.org/)
Apache CloudStack is open source software designed to deploy and manage large
Apache CloudStack is open-source software designed to deploy and manage large
networks of virtual machines, as a highly available, highly scalable
Infrastructure as a Service (IaaS) cloud computing platform. CloudStack is used
by a number of service providers to offer public cloud services, and by many

View File

@ -451,3 +451,9 @@ iscsi.session.cleanup.enabled=false
# If set to true, creates VMs as full clones of their templates on KVM hypervisor. Creates as linked clones otherwise.
# create.full.clone=false
# Instance conversion TMPDIR env var
#convert.instance.env.tmpdir=
# Instance conversion VIRT_V2V_TMPDIR env var
#convert.instance.env.virtv2v.tmpdir=

View File

@ -613,7 +613,7 @@ public class Agent implements HandlerFactory, IAgentControl, AgentStatusUpdater
}
protected String getAgentArch() {
String arch = Script.runSimpleBashScript(Script.getExecutableAbsolutePath("arch"), 1000);
String arch = Script.runSimpleBashScript(Script.getExecutableAbsolutePath("arch"), 2000);
logger.debug("Arch for agent: {} found: {}", _name, arch);
return arch;
}

View File

@ -794,6 +794,20 @@ public class AgentProperties{
*/
public static final Property<Boolean> VIRTV2V_VERBOSE_ENABLED = new Property<>("virtv2v.verbose.enabled", false);
/**
* Set env TMPDIR var for virt-v2v Instance Conversion from VMware to KVM
* Data type: String.<br>
* Default value: <code>null</code>
*/
public static final Property<String> CONVERT_ENV_TMPDIR = new Property<>("convert.instance.env.tmpdir", null, String.class);
/**
* Set env VIRT_V2V_TMPDIR var for virt-v2v Instance Conversion from VMware to KVM
* Data type: String.<br>
* Default value: <code>null</code>
*/
public static final Property<String> CONVERT_ENV_VIRTV2V_TMPDIR = new Property<>("convert.instance.env.virtv2v.tmpdir", null, String.class);
/**
* BGP controll CIDR
* Data type: String.<br>

View File

@ -172,4 +172,5 @@ public interface KubernetesCluster extends ControlledEntity, com.cloud.utils.fsm
Long getEtcdNodeCount();
Long getCniConfigId();
String getCniConfigDetails();
boolean isCsiEnabled();
}

View File

@ -70,6 +70,8 @@ public interface AutoScaleService {
Counter createCounter(CreateCounterCmd cmd);
Counter getCounter(long counterId);
boolean deleteCounter(long counterId) throws ResourceInUseException;
List<? extends Counter> listCounters(ListCountersCmd cmd);

View File

@ -37,7 +37,7 @@ public interface DiskOffering extends InfrastructureEntity, Identity, InternalId
State getState();
enum DiskCacheMode {
NONE("none"), WRITEBACK("writeback"), WRITETHROUGH("writethrough");
NONE("none"), WRITEBACK("writeback"), WRITETHROUGH("writethrough"), HYPERVISOR_DEFAULT("hypervisor_default");
private final String _diskCacheMode;
@ -69,6 +69,8 @@ public interface DiskOffering extends InfrastructureEntity, Identity, InternalId
boolean isCustomized();
boolean isShared();
void setDiskSize(long diskSize);
long getDiskSize();
@ -99,7 +101,6 @@ public interface DiskOffering extends InfrastructureEntity, Identity, InternalId
Long getBytesReadRateMaxLength();
void setBytesWriteRate(Long bytesWriteRate);
Long getBytesWriteRate();
@ -112,7 +113,6 @@ public interface DiskOffering extends InfrastructureEntity, Identity, InternalId
Long getBytesWriteRateMaxLength();
void setIopsReadRate(Long iopsReadRate);
Long getIopsReadRate();
@ -133,7 +133,6 @@ public interface DiskOffering extends InfrastructureEntity, Identity, InternalId
Long getIopsWriteRateMax();
void setIopsWriteRateMaxLength(Long iopsWriteRateMaxLength);
Long getIopsWriteRateMaxLength();

View File

@ -20,7 +20,6 @@ import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import com.cloud.user.UserData;
import org.apache.cloudstack.api.command.admin.cluster.ListClustersCmd;
import org.apache.cloudstack.api.command.admin.config.ListCfgGroupsByCmd;
import org.apache.cloudstack.api.command.admin.config.ListCfgsByCmd;
@ -72,6 +71,7 @@ import org.apache.cloudstack.api.command.user.vm.GetVMPasswordCmd;
import org.apache.cloudstack.api.command.user.vmgroup.UpdateVMGroupCmd;
import org.apache.cloudstack.config.Configuration;
import org.apache.cloudstack.config.ConfigurationGroup;
import org.apache.cloudstack.framework.config.ConfigKey;
import com.cloud.alert.Alert;
import com.cloud.capacity.Capacity;
@ -91,6 +91,7 @@ import com.cloud.storage.GuestOSHypervisor;
import com.cloud.storage.GuestOsCategory;
import com.cloud.storage.StoragePool;
import com.cloud.user.SSHKeyPair;
import com.cloud.user.UserData;
import com.cloud.utils.Pair;
import com.cloud.utils.Ternary;
import com.cloud.vm.InstanceGroup;
@ -104,6 +105,14 @@ import com.cloud.vm.VirtualMachine.Type;
public interface ManagementService {
static final String Name = "management-server";
ConfigKey<Boolean> JsInterpretationEnabled = new ConfigKey<>("Hidden"
, Boolean.class
, "js.interpretation.enabled"
, "false"
, "Enable/Disable all JavaScript interpretation related functionalities to create or update Javascript rules."
, false
, ConfigKey.Scope.Global);
/**
* returns the a map of the names/values in the configuration table
*
@ -509,4 +518,6 @@ public interface ManagementService {
boolean removeManagementServer(RemoveManagementServerCmd cmd);
void checkJsInterpretationAllowedIfNeededForParameterValue(String paramName, boolean paramValue);
}

View File

@ -18,6 +18,7 @@ package com.cloud.server;
public interface ResourceManagerUtil {
long getResourceId(String resourceId, ResourceTag.ResourceObjectType resourceType);
long getResourceId(String resourceId, ResourceTag.ResourceObjectType resourceType, boolean checkAccess);
String getUuid(String resourceId, ResourceTag.ResourceObjectType resourceType);
ResourceTag.ResourceObjectType getResourceType(String resourceTypeStr);
void checkResourceAccessible(Long accountId, Long domainId, String exceptionMessage);

View File

@ -180,6 +180,8 @@ public interface VolumeApiService {
*/
boolean doesStoragePoolSupportDiskOfferingTags(StoragePool destPool, String diskOfferingTags);
boolean validateConditionsToReplaceDiskOfferingOfVolume(Volume volume, DiskOffering newDiskOffering, StoragePool destPool);
Volume destroyVolume(long volumeId, Account caller, boolean expunge, boolean forceExpunge);
void destroyVolume(long volumeId);

View File

@ -85,7 +85,7 @@ public interface SnapshotApiService {
* the command that specifies the volume criteria
* @return list of snapshot policies
*/
Pair<List<? extends SnapshotPolicy>, Integer> listPoliciesforVolume(ListSnapshotPoliciesCmd cmd);
Pair<List<? extends SnapshotPolicy>, Integer> listSnapshotPolicies(ListSnapshotPoliciesCmd cmd);
boolean deleteSnapshotPolicies(DeleteSnapshotPoliciesCmd cmd);

View File

@ -16,11 +16,12 @@
// under the License.
package com.cloud.storage.snapshot;
import org.apache.cloudstack.acl.ControlledEntity;
import org.apache.cloudstack.api.Displayable;
import org.apache.cloudstack.api.Identity;
import org.apache.cloudstack.api.InternalIdentity;
public interface SnapshotPolicy extends Identity, InternalIdentity, Displayable {
public interface SnapshotPolicy extends ControlledEntity, Identity, InternalIdentity, Displayable {
long getVolumeId();

View File

@ -81,6 +81,34 @@ public abstract class AbstractGetUploadParamsCmd extends BaseCmd {
return projectId;
}
public void setName(String name) {
this.name = name;
}
public void setFormat(String format) {
this.format = format;
}
public void setZoneId(Long zoneId) {
this.zoneId = zoneId;
}
public void setChecksum(String checksum) {
this.checksum = checksum;
}
public void setAccountName(String accountName) {
this.accountName = accountName;
}
public void setDomainId(Long domainId) {
this.domainId = domainId;
}
public void setProjectId(Long projectId) {
this.projectId = projectId;
}
public GetUploadParamsResponse createGetUploadParamsResponse(UUID id, URL postURL, String metadata, String timeout, String signature) {
return new GetUploadParamsResponse(id, postURL, metadata, timeout, signature);
}

View File

@ -64,6 +64,7 @@ public class ApiConstants {
public static final String BACKUP_STORAGE_LIMIT = "backupstoragelimit";
public static final String BACKUP_STORAGE_TOTAL = "backupstoragetotal";
public static final String BACKUP_VM_OFFERING_REMOVED = "vmbackupofferingremoved";
public static final String IS_BACKUP_VM_EXPUNGED = "isbackupvmexpunged";
public static final String BACKUP_TOTAL = "backuptotal";
public static final String BASE64_IMAGE = "base64image";
public static final String BGP_PEERS = "bgppeers";
@ -134,6 +135,7 @@ public class ApiConstants {
public static final String CNI_CONFIG_ID = "cniconfigurationid";
public static final String CNI_CONFIG_DETAILS = "cniconfigdetails";
public static final String CNI_CONFIG_NAME = "cniconfigname";
public static final String CSI_ENABLED = "csienabled";
public static final String COMPONENT = "component";
public static final String CPU = "CPU";
public static final String CPU_CORE_PER_SOCKET = "cpucorepersocket";
@ -214,6 +216,7 @@ public class ApiConstants {
public static final String DURATION = "duration";
public static final String ELIGIBLE = "eligible";
public static final String EMAIL = "email";
public static final String ENABLE_CSI = "enablecsi";
public static final String END_ASN = "endasn";
public static final String END_DATE = "enddate";
public static final String END_IP = "endip";
@ -225,6 +228,7 @@ public class ApiConstants {
public static final String EVENT_TYPE = "eventtype";
public static final String EXPIRES = "expires";
public static final String EXTRA_CONFIG = "extraconfig";
public static final String EXTRA_PARAMS = "extraparams";
public static final String EXTRA_DHCP_OPTION = "extradhcpoption";
public static final String EXTRA_DHCP_OPTION_NAME = "extradhcpoptionname";
public static final String EXTRA_DHCP_OPTION_CODE = "extradhcpoptioncode";
@ -243,6 +247,8 @@ public class ApiConstants {
public static final String FIRSTNAME = "firstname";
public static final String FORCED = "forced";
public static final String FORCED_DESTROY_LOCAL_STORAGE = "forcedestroylocalstorage";
public static final String FORCE_CONVERT_TO_POOL = "forceconverttopool";
public static final String FORCE_DELETE_HOST = "forcedeletehost";
public static final String FORCE_MS_TO_IMPORT_VM_FILES = "forcemstoimportvmfiles";
public static final String FORCE_UPDATE_OS_TYPE = "forceupdateostype";
@ -578,6 +584,7 @@ public class ApiConstants {
public static final String SUITABLE_FOR_VM = "suitableforvirtualmachine";
public static final String SUPPORTS_STORAGE_SNAPSHOT = "supportsstoragesnapshot";
public static final String TARGET_IQN = "targetiqn";
public static final String TASKS_FILTER = "tasksfilter";
public static final String TEMPLATE_FILTER = "templatefilter";
public static final String TEMPLATE_ID = "templateid";
public static final String TEMPLATE_IDS = "templateids";
@ -1157,6 +1164,7 @@ public class ApiConstants {
public static final String OVM3_CLUSTER = "ovm3cluster";
public static final String OVM3_VIP = "ovm3vip";
public static final String CLEAN_UP_DETAILS = "cleanupdetails";
public static final String CLEAN_UP_EXTERNAL_DETAILS = "cleanupexternaldetails";
public static final String CLEAN_UP_PARAMETERS = "cleanupparameters";
public static final String VIRTUAL_SIZE = "virtualsize";
public static final String NETSCALER_CONTROLCENTER_ID = "netscalercontrolcenterid";

View File

@ -17,6 +17,7 @@
package org.apache.cloudstack.api.command.admin.autoscale;
import org.apache.cloudstack.context.CallContext;
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiCommandResourceType;
@ -89,9 +90,6 @@ public class CreateCounterCmd extends BaseAsyncCreateCmd {
if (ctr != null) {
this.setEntityId(ctr.getId());
this.setEntityUuid(ctr.getUuid());
CounterResponse response = _responseGenerator.createCounterResponse(ctr);
response.setResponseName(getCommandName());
this.setResponseObject(response);
} else {
throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, "Failed to create Counter with name " + getName());
}
@ -99,6 +97,11 @@ public class CreateCounterCmd extends BaseAsyncCreateCmd {
@Override
public void execute() {
CallContext.current().setEventDetails("Counter ID: " + getEntityId());
Counter ctr = _autoScaleService.getCounter(getEntityId());
CounterResponse response = _responseGenerator.createCounterResponse(ctr);
response.setResponseName(getCommandName());
this.setResponseObject(response);
}
@Override

View File

@ -72,6 +72,14 @@ public class UpdateHostCmd extends BaseCmd {
@Parameter(name = ApiConstants.EXTERNAL_DETAILS, type = CommandType.MAP, description = "Details in key/value pairs using format externaldetails[i].keyname=keyvalue. Example: externaldetails[0].endpoint.url=urlvalue", since = "4.21.0")
protected Map externalDetails;
@Parameter(name = ApiConstants.CLEAN_UP_EXTERNAL_DETAILS,
type = CommandType.BOOLEAN,
description = "Optional boolean field, which indicates if external details should be cleaned up or not " +
"(If set to true, external details removed for this host, externaldetails field ignored; " +
"if false or not set, no action)",
since = "4.22.0")
protected Boolean cleanupExternalDetails;
/////////////////////////////////////////////////////
/////////////////// Accessors ///////////////////////
/////////////////////////////////////////////////////
@ -112,6 +120,10 @@ public class UpdateHostCmd extends BaseCmd {
return convertExternalDetailsToMap(externalDetails);
}
public boolean isCleanupExternalDetails() {
return Boolean.TRUE.equals(cleanupExternalDetails);
}
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
/////////////////////////////////////////////////////

View File

@ -151,7 +151,7 @@ public class CreateDiskOfferingCmd extends BaseCmd {
@Parameter(name = ApiConstants.CACHE_MODE,
type = CommandType.STRING,
required = false,
description = "the cache mode to use for this disk offering. none, writeback or writethrough",
description = "the cache mode to use for this disk offering. none, writeback, writethrough or hypervisor default. If the hypervisor default cache mode is used on other hypervisors than KVM, it will fall back to none cache mode",
since = "4.14")
private String cacheMode;

View File

@ -190,7 +190,7 @@ public class CreateServiceOfferingCmd extends BaseCmd {
@Parameter(name = ApiConstants.CACHE_MODE,
type = CommandType.STRING,
required = false,
description = "the cache mode to use for this disk offering. none, writeback or writethrough",
description = "the cache mode to use for this disk offering. none, writeback, writethrough or hypervisor default. If the hypervisor default cache mode is used on other hypervisors than KVM, it will fall back to none cache mode",
since = "4.14")
private String cacheMode;

View File

@ -101,6 +101,14 @@ public class UpdateServiceOfferingCmd extends BaseCmd {
since = "4.21.0")
private Map externalDetails;
@Parameter(name = ApiConstants.CLEAN_UP_EXTERNAL_DETAILS,
type = CommandType.BOOLEAN,
description = "Optional boolean field, which indicates if external details should be cleaned up or not " +
"(If set to true, external details removed for this offering, externaldetails field ignored; " +
"if false or not set, no action)",
since = "4.22.0")
protected Boolean cleanupExternalDetails;
/////////////////////////////////////////////////////
/////////////////// Accessors ///////////////////////
/////////////////////////////////////////////////////
@ -205,6 +213,10 @@ public class UpdateServiceOfferingCmd extends BaseCmd {
return convertExternalDetailsToMap(externalDetails);
}
public boolean isCleanupExternalDetails() {
return Boolean.TRUE.equals(cleanupExternalDetails);
}
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
/////////////////////////////////////////////////////

View File

@ -159,6 +159,18 @@ public class ImportVmCmd extends ImportUnmanagedInstanceCmd {
description = "(only for importing VMs from VMware to KVM) optional - if true, forces MS to export OVF from VMware to temporary storage, else uses KVM Host if ovftool is available, falls back to MS if not.")
private Boolean forceMsToImportVmFiles;
@Parameter(name = ApiConstants.EXTRA_PARAMS,
type = CommandType.STRING,
since = "4.22",
description = "(only for importing VMs from VMware to KVM) optional - extra parameters to be passed on the virt-v2v command, if allowed by the administrator")
private String extraParams;
@Parameter(name = ApiConstants.FORCE_CONVERT_TO_POOL,
type = CommandType.BOOLEAN,
since = "4.22",
description = "(only for importing VMs from VMware to KVM) optional - if true, forces virt-v2v conversions to write directly on the provided storage pool (avoid using temporary conversion pool).")
private Boolean forceConvertToPool;
/////////////////////////////////////////////////////
/////////////////// Accessors ///////////////////////
/////////////////////////////////////////////////////
@ -248,6 +260,14 @@ public class ImportVmCmd extends ImportUnmanagedInstanceCmd {
return EventTypes.EVENT_VM_IMPORT;
}
public String getExtraParams() {
return extraParams;
}
public boolean getForceConvertToPool() {
return BooleanUtils.toBooleanDefaultIfNull(forceConvertToPool, false);
}
@Override
public String getEventDescription() {
String vmName = getName();

View File

@ -0,0 +1,116 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.api.command.admin.vm;
import com.cloud.exception.ConcurrentOperationException;
import com.cloud.exception.InsufficientCapacityException;
import com.cloud.exception.NetworkRuleConflictException;
import com.cloud.exception.ResourceAllocationException;
import com.cloud.exception.ResourceUnavailableException;
import com.cloud.user.Account;
import org.apache.cloudstack.acl.RoleType;
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.BaseListCmd;
import org.apache.cloudstack.api.Parameter;
import org.apache.cloudstack.api.ResponseObject;
import org.apache.cloudstack.api.ServerApiException;
import org.apache.cloudstack.api.response.AccountResponse;
import org.apache.cloudstack.api.response.HostResponse;
import org.apache.cloudstack.api.response.ImportVMTaskResponse;
import org.apache.cloudstack.api.response.ListResponse;
import org.apache.cloudstack.api.response.ZoneResponse;
import org.apache.cloudstack.context.CallContext;
import org.apache.cloudstack.vm.ImportVmTasksManager;
import javax.inject.Inject;
@APICommand(name = "listImportVmTasks",
description = "List running import virtual machine tasks from a unmanaged hosts into CloudStack",
responseObject = ImportVMTaskResponse.class,
responseView = ResponseObject.ResponseView.Full,
requestHasSensitiveInfo = false,
authorized = {RoleType.Admin},
since = "4.22")
public class ListImportVMTasksCmd extends BaseListCmd {
@Inject
public ImportVmTasksManager importVmTasksManager;
@Parameter(name = ApiConstants.ZONE_ID,
type = CommandType.UUID,
entityType = ZoneResponse.class,
required = true,
description = "the zone ID")
private Long zoneId;
@Parameter(name = ApiConstants.ACCOUNT_ID,
type = CommandType.UUID,
entityType = AccountResponse.class,
description = "the ID of the Account")
private Long accountId;
@Parameter(name = ApiConstants.VCENTER,
type = CommandType.STRING,
description = "The name/ip of vCenter. Make sure it is IP address or full qualified domain name for host running vCenter server.")
private String vcenter;
@Parameter(name = ApiConstants.CONVERT_INSTANCE_HOST_ID,
type = CommandType.UUID,
entityType = HostResponse.class,
description = "Conversion host of the importing task")
private Long convertHostId;
@Parameter(name = ApiConstants.TASKS_FILTER, type = CommandType.STRING, description = "Filter tasks by state, valid options are: All, Running, Completed, Failed")
private String tasksFilter;
public Long getZoneId() {
return zoneId;
}
public Long getAccountId() {
return accountId;
}
public String getVcenter() {
return vcenter;
}
public Long getConvertHostId() {
return convertHostId;
}
public String getTasksFilter() {
return tasksFilter;
}
@Override
public void execute() throws ResourceUnavailableException, InsufficientCapacityException, ServerApiException, ConcurrentOperationException, ResourceAllocationException, NetworkRuleConflictException {
ListResponse<ImportVMTaskResponse> response = importVmTasksManager.listImportVMTasks(this);
response.setResponseName(getCommandName());
setResponseObject(response);
}
@Override
public long getEntityOwnerId() {
Account account = CallContext.current().getCallingAccount();
if (account != null) {
return account.getId();
}
return Account.ACCOUNT_ID_SYSTEM;
}
}

View File

@ -24,7 +24,7 @@ import org.apache.cloudstack.acl.RoleType;
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.ApiErrorCode;
import org.apache.cloudstack.api.BaseCmd;
import org.apache.cloudstack.api.BaseListProjectAndAccountResourcesCmd;
import org.apache.cloudstack.api.Parameter;
import org.apache.cloudstack.api.ServerApiException;
import org.apache.cloudstack.api.response.BackupScheduleResponse;
@ -39,7 +39,6 @@ import com.cloud.exception.InsufficientCapacityException;
import com.cloud.exception.NetworkRuleConflictException;
import com.cloud.exception.ResourceAllocationException;
import com.cloud.exception.ResourceUnavailableException;
import com.cloud.utils.exception.CloudRuntimeException;
import java.util.ArrayList;
import java.util.List;
@ -48,10 +47,10 @@ import java.util.List;
description = "List backup schedule of a VM",
responseObject = BackupScheduleResponse.class, since = "4.14.0",
authorized = {RoleType.Admin, RoleType.ResourceAdmin, RoleType.DomainAdmin, RoleType.User})
public class ListBackupScheduleCmd extends BaseCmd {
public class ListBackupScheduleCmd extends BaseListProjectAndAccountResourcesCmd {
@Inject
private BackupManager backupManager;
BackupManager backupManager;
/////////////////////////////////////////////////////
//////////////// API parameters /////////////////////
@ -60,10 +59,16 @@ public class ListBackupScheduleCmd extends BaseCmd {
@Parameter(name = ApiConstants.VIRTUAL_MACHINE_ID,
type = CommandType.UUID,
entityType = UserVmResponse.class,
required = true,
description = "ID of the VM")
private Long vmId;
@Parameter(name = ApiConstants.ID,
type = CommandType.UUID,
entityType = BackupScheduleResponse.class,
description = "the ID of the backup schedule",
since = "4.22.0")
private Long id;
/////////////////////////////////////////////////////
/////////////////// Accessors ///////////////////////
/////////////////////////////////////////////////////
@ -72,6 +77,10 @@ public class ListBackupScheduleCmd extends BaseCmd {
return vmId;
}
public Long getId() {
return id;
}
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
/////////////////////////////////////////////////////
@ -79,19 +88,18 @@ public class ListBackupScheduleCmd extends BaseCmd {
@Override
public void execute() throws ResourceUnavailableException, InsufficientCapacityException, ServerApiException, ConcurrentOperationException, ResourceAllocationException, NetworkRuleConflictException {
try{
List<BackupSchedule> schedules = backupManager.listBackupSchedule(getVmId());
List<BackupSchedule> schedules = backupManager.listBackupSchedules(this);
ListResponse<BackupScheduleResponse> response = new ListResponse<>();
List<BackupScheduleResponse> scheduleResponses = new ArrayList<>();
if (!CollectionUtils.isNullOrEmpty(schedules)) {
for (BackupSchedule schedule : schedules) {
scheduleResponses.add(_responseGenerator.createBackupScheduleResponse(schedule));
}
response.setResponses(scheduleResponses, schedules.size());
response.setResponseName(getCommandName());
setResponseObject(response);
} else {
throw new CloudRuntimeException("No backup schedule exists for the VM");
}
response.setResponses(scheduleResponses, schedules.size());
response.setResponseName(getCommandName());
setResponseObject(response);
} catch (Exception e) {
throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, e.getMessage());
}

View File

@ -104,6 +104,29 @@ public class GetUploadParamsForIsoCmd extends AbstractGetUploadParamsCmd {
return osTypeId;
}
public void setBootable(Boolean bootable) {
this.bootable = bootable;
}
public void setDisplayText(String displayText) {
this.displayText = displayText;
}
public void setFeatured(Boolean featured) {
this.featured = featured;
}
public void setPublicIso(Boolean publicIso) {
this.publicIso = publicIso;
}
public void setExtractable(Boolean extractable) {
this.extractable = extractable;
}
public void setOsTypeId(Long osTypeId) {
this.osTypeId = osTypeId;
}
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////

View File

@ -53,6 +53,9 @@ public class ListNetworksCmd extends BaseListRetrieveOnlyResourceCountCmd implem
@Parameter(name = ApiConstants.ID, type = CommandType.UUID, entityType = NetworkResponse.class, description = "list networks by ID")
private Long id;
@Parameter(name = ApiConstants.NAME, type = CommandType.STRING, description = "list networks by name", since = "4.22.0")
private String name;
@Parameter(name = ApiConstants.ZONE_ID, type = CommandType.UUID, entityType = ZoneResponse.class, description = "the zone ID of the network")
private Long zoneId;
@ -125,6 +128,10 @@ public class ListNetworksCmd extends BaseListRetrieveOnlyResourceCountCmd implem
return id;
}
public String getName() {
return name;
}
public Long getZoneId() {
return zoneId;
}

View File

@ -23,7 +23,7 @@ import org.apache.cloudstack.acl.RoleType;
import org.apache.cloudstack.api.APICommand;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.BaseListCmd;
import org.apache.cloudstack.api.BaseListProjectAndAccountResourcesCmd;
import org.apache.cloudstack.api.Parameter;
import org.apache.cloudstack.api.response.ListResponse;
import org.apache.cloudstack.api.response.SnapshotPolicyResponse;
@ -34,7 +34,7 @@ import com.cloud.utils.Pair;
@APICommand(name = "listSnapshotPolicies", description = "Lists snapshot policies.", responseObject = SnapshotPolicyResponse.class,
requestHasSensitiveInfo = false, responseHasSensitiveInfo = false)
public class ListSnapshotPoliciesCmd extends BaseListCmd {
public class ListSnapshotPoliciesCmd extends BaseListProjectAndAccountResourcesCmd {
/////////////////////////////////////////////////////
@ -69,13 +69,14 @@ public class ListSnapshotPoliciesCmd extends BaseListCmd {
public Long getId() {
return id;
}
/////////////////////////////////////////////////////
/////////////// API Implementation///////////////////
/////////////////////////////////////////////////////
@Override
public void execute() {
Pair<List<? extends SnapshotPolicy>, Integer> result = _snapshotService.listPoliciesforVolume(this);
Pair<List<? extends SnapshotPolicy>, Integer> result = _snapshotService.listSnapshotPolicies(this);
ListResponse<SnapshotPolicyResponse> response = new ListResponse<SnapshotPolicyResponse>();
List<SnapshotPolicyResponse> policyResponses = new ArrayList<SnapshotPolicyResponse>();
for (SnapshotPolicy policy : result.first()) {

View File

@ -28,7 +28,6 @@ import org.apache.cloudstack.api.response.ProjectResponse;
import org.apache.cloudstack.api.response.VpnUsersResponse;
import org.apache.cloudstack.context.CallContext;
import com.cloud.domain.Domain;
import com.cloud.event.EventTypes;
import com.cloud.network.VpnUser;
import com.cloud.user.Account;
@ -110,7 +109,6 @@ public class AddVpnUserCmd extends BaseAsyncCreateCmd {
@Override
public void execute() {
VpnUser vpnUser = _entityMgr.findById(VpnUser.class, getEntityId());
Account account = _entityMgr.findById(Account.class, vpnUser.getAccountId());
try {
if (!_ravService.applyVpnUsers(vpnUser.getAccountId(), userName)) {
throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, "Failed to add vpn user");
@ -118,24 +116,10 @@ public class AddVpnUserCmd extends BaseAsyncCreateCmd {
} catch (Exception ex) {
throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, ex.getMessage());
}
VpnUsersResponse vpnResponse = new VpnUsersResponse();
vpnResponse.setId(vpnUser.getUuid());
vpnResponse.setUserName(vpnUser.getUsername());
vpnResponse.setAccountName(account.getAccountName());
// re-retrieve the vpnuser, as the call to `applyVpnUsers` might have changed the state
vpnUser = _entityMgr.findById(VpnUser.class, getEntityId());
vpnResponse.setState(vpnUser.getState().toString());
Domain domain = _entityMgr.findById(Domain.class, account.getDomainId());
if (domain != null) {
vpnResponse.setDomainId(domain.getUuid());
vpnResponse.setDomainName(domain.getName());
vpnResponse.setDomainPath(domain.getPath());
}
VpnUsersResponse vpnResponse = _responseGenerator.createVpnUserResponse(vpnUser);
vpnResponse.setResponseName(getCommandName());
vpnResponse.setObjectName("vpnuser");
setResponseObject(vpnResponse);
}

View File

@ -61,6 +61,10 @@ public class BackupOfferingResponse extends BaseResponse {
@Param(description = "zone name")
private String zoneName;
@SerializedName(ApiConstants.CROSS_ZONE_INSTANCE_CREATION)
@Param(description = "the backups with this offering can be used to create Instances on all Zones", since = "4.22.0")
private Boolean crossZoneInstanceCreation;
@SerializedName(ApiConstants.CREATED)
@Param(description = "the date this backup offering was created")
private Date created;
@ -97,6 +101,10 @@ public class BackupOfferingResponse extends BaseResponse {
this.zoneName = zoneName;
}
public void setCrossZoneInstanceCreation(Boolean crossZoneInstanceCreation) {
this.crossZoneInstanceCreation = crossZoneInstanceCreation;
}
public void setCreated(Date created) {
this.created = created;
}

View File

@ -123,6 +123,10 @@ public class BackupResponse extends BaseResponse {
@Param(description = "The backup offering corresponding to this backup was removed from the VM", since = "4.21.0")
private Boolean vmOfferingRemoved;
@SerializedName(ApiConstants.IS_BACKUP_VM_EXPUNGED)
@Param(description = "Indicates whether the VM from which the backup was taken is expunged or not", since = "4.22.0")
private Boolean isVmExpunged;
public String getId() {
return id;
}
@ -306,4 +310,8 @@ public class BackupResponse extends BaseResponse {
public void setVmOfferingRemoved(Boolean vmOfferingRemoved) {
this.vmOfferingRemoved = vmOfferingRemoved;
}
public void setVmExpunged(Boolean isVmExpunged) {
this.isVmExpunged = isVmExpunged;
}
}

View File

@ -62,6 +62,10 @@ public class GetUploadParamsResponse extends BaseResponse {
setObjectName("getuploadparams");
}
public UUID getId() {
return id;
}
public void setId(UUID id) {
this.id = id;
}

View File

@ -0,0 +1,257 @@
//
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
//
package org.apache.cloudstack.api.response;
import com.cloud.serializer.Param;
import com.google.gson.annotations.SerializedName;
import org.apache.cloudstack.api.ApiConstants;
import org.apache.cloudstack.api.BaseResponse;
import java.util.Date;
public class ImportVMTaskResponse extends BaseResponse {
@SerializedName(ApiConstants.ID)
@Param(description = "the ID of importing task")
private String id;
@SerializedName(ApiConstants.ZONE_ID)
@Param(description = "the Zone ID")
private String zoneId;
@SerializedName(ApiConstants.ZONE_NAME)
@Param(description = "the Zone name")
private String zoneName;
@SerializedName(ApiConstants.ACCOUNT)
@Param(description = "the account name")
private String accountName;
@SerializedName(ApiConstants.ACCOUNT_ID)
@Param(description = "the ID of account")
private String accountId;
@SerializedName(ApiConstants.VIRTUAL_MACHINE_ID)
@Param(description = "the ID of the imported VM (after task is completed)")
private String virtualMachineId;
@SerializedName(ApiConstants.DISPLAY_NAME)
@Param(description = "the display name of the importing VM")
private String displayName;
@SerializedName(ApiConstants.STATE)
@Param(description = "the state of the importing VM task")
private String state;
@SerializedName(ApiConstants.VCENTER)
@Param(description = "the vcenter name of the importing VM task")
private String vcenter;
@SerializedName(ApiConstants.DATACENTER_NAME)
@Param(description = "the datacenter name of the importing VM task")
private String datacenterName;
@SerializedName("sourcevmname")
@Param(description = "the source VM name")
private String sourceVMName;
@SerializedName("step")
@Param(description = "the current step on the importing VM task")
private String step;
@SerializedName("stepduration")
@Param(description = "the duration of the current step")
private String stepDuration;
@SerializedName(ApiConstants.DURATION)
@Param(description = "the total task duration")
private String duration;
@SerializedName(ApiConstants.DESCRIPTION)
@Param(description = "the current step description on the importing VM task")
private String description;
@SerializedName(ApiConstants.CONVERT_INSTANCE_HOST_ID)
@Param(description = "the ID of the host on which the instance is being converted")
private String convertInstanceHostId;
@SerializedName("convertinstancehostname")
@Param(description = "the name of the host on which the instance is being converted")
private String convertInstanceHostName;
@SerializedName(ApiConstants.CREATED)
@Param(description = "the create date of the importing task")
private Date created;
@SerializedName(ApiConstants.LAST_UPDATED)
@Param(description = "the last updated date of the importing task")
private Date lastUpdated;
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
public String getZoneId() {
return zoneId;
}
public void setZoneId(String zoneId) {
this.zoneId = zoneId;
}
public String getZoneName() {
return zoneName;
}
public void setZoneName(String zoneName) {
this.zoneName = zoneName;
}
public String getAccountName() {
return accountName;
}
public void setAccountName(String accountName) {
this.accountName = accountName;
}
public String getAccountId() {
return accountId;
}
public void setAccountId(String accountId) {
this.accountId = accountId;
}
public String getVirtualMachineId() {
return virtualMachineId;
}
public void setVirtualMachineId(String virtualMachineId) {
this.virtualMachineId = virtualMachineId;
}
public String getDisplayName() {
return displayName;
}
public void setDisplayName(String displayName) {
this.displayName = displayName;
}
public String getVcenter() {
return vcenter;
}
public void setVcenter(String vcenter) {
this.vcenter = vcenter;
}
public String getDatacenterName() {
return datacenterName;
}
public void setDatacenterName(String datacenterName) {
this.datacenterName = datacenterName;
}
public String getSourceVMName() {
return sourceVMName;
}
public void setSourceVMName(String sourceVMName) {
this.sourceVMName = sourceVMName;
}
public String getStep() {
return step;
}
public void setStep(String step) {
this.step = step;
}
public String getStepDuration() {
return stepDuration;
}
public void setStepDuration(String stepDuration) {
this.stepDuration = stepDuration;
}
public String getDuration() {
return duration;
}
public void setDuration(String duration) {
this.duration = duration;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public String getConvertInstanceHostId() {
return convertInstanceHostId;
}
public void setConvertInstanceHostId(String convertInstanceHostId) {
this.convertInstanceHostId = convertInstanceHostId;
}
public String getConvertInstanceHostName() {
return convertInstanceHostName;
}
public void setConvertInstanceHostName(String convertInstanceHostName) {
this.convertInstanceHostName = convertInstanceHostName;
}
public Date getCreated() {
return created;
}
public void setCreated(Date created) {
this.created = created;
}
public Date getLastUpdated() {
return lastUpdated;
}
public void setLastUpdated(Date lastUpdated) {
this.lastUpdated = lastUpdated;
}
public String getState() {
return state;
}
public void setState(String state) {
this.state = state;
}
}
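As an illustration only, a hypothetical mapper that fills this response from a task record might look as follows; the getters on the task side are assumptions, the setters are the ones defined above:

    ImportVMTaskResponse response = new ImportVMTaskResponse();
    response.setId(task.getUuid());                  // assumed accessor on the task entity
    response.setDisplayName(task.getDisplayName());  // assumed accessor
    response.setState(task.getState().name());       // TaskState enum rendered as text
    response.setStep(task.getStep().name());         // Step enum rendered as text
    response.setCreated(task.getCreated());          // assumed accessor
    response.setObjectName("importvmtask");          // assumed response object name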

View File

@ -197,7 +197,7 @@ public class ServiceOfferingResponse extends BaseResponseWithAnnotations {
private Boolean isCustomized;
@SerializedName("cacheMode")
@Param(description = "the cache mode to use for this disk offering. none, writeback or writethrough", since = "4.14")
@Param(description = "the cache mode to use for this disk offering. none, writeback, writethrough or hypervisor default", since = "4.14")
private String cacheMode;
@SerializedName("vspherestoragepolicy")

View File

@ -37,6 +37,10 @@ public class SnapshotPolicyResponse extends BaseResponseWithTagInformation {
@Param(description = "the ID of the disk volume")
private String volumeId;
@SerializedName("volumename")
@Param(description = "the name of the disk volume")
private String volumeName;
@SerializedName("schedule")
@Param(description = "time the snapshot is scheduled to be taken.")
private String schedule;
@ -87,6 +91,10 @@ public class SnapshotPolicyResponse extends BaseResponseWithTagInformation {
this.volumeId = volumeId;
}
public void setVolumeName(String volumeName) {
this.volumeName = volumeName;
}
public String getSchedule() {
return schedule;
}

View File

@ -77,19 +77,24 @@ public class StoragePoolResponse extends BaseResponseWithAnnotations {
@Param(description = "the name of the cluster for the storage pool")
private String clusterName;
@SerializedName(ApiConstants.CAPACITY_BYTES)
@Param(description = "bytes CloudStack can provision from this storage pool", since = "4.22.0")
private Long capacityBytes;
@Deprecated(since = "4.22.0")
@SerializedName("disksizetotal")
@Param(description = "the total disk size of the storage pool")
private Long diskSizeTotal;
@SerializedName("disksizeallocated")
@Param(description = "the host's currently allocated disk size")
@Param(description = "the pool's currently allocated disk size")
private Long diskSizeAllocated;
@SerializedName("disksizeused")
@Param(description = "the host's currently used disk size")
@Param(description = "the pool's currently used disk size")
private Long diskSizeUsed;
@SerializedName("capacityiops")
@SerializedName(ApiConstants.CAPACITY_IOPS)
@Param(description = "IOPS CloudStack can provision from this storage pool")
private Long capacityIops;
@ -288,6 +293,14 @@ public class StoragePoolResponse extends BaseResponseWithAnnotations {
this.clusterName = clusterName;
}
public Long getCapacityBytes() {
return capacityBytes;
}
public void setCapacityBytes(Long capacityBytes) {
this.capacityBytes = capacityBytes;
}
public Long getDiskSizeTotal() {
return diskSizeTotal;
}

View File

@ -41,7 +41,7 @@ import com.google.gson.annotations.SerializedName;
@SuppressWarnings("unused")
@EntityReference(value = {VirtualMachine.class, UserVm.class, VirtualRouter.class})
public class UserVmResponse extends BaseResponseWithTagInformation implements ControlledEntityResponse, SetResourceIconResponse {
public class UserVmResponse extends BaseResponseWithTagInformation implements ControlledViewEntityResponse, SetResourceIconResponse {
@SerializedName(ApiConstants.ID)
@Param(description = "the ID of the virtual machine")
private String id;

View File

@ -28,6 +28,7 @@ import org.apache.cloudstack.api.command.user.backup.CreateBackupCmd;
import org.apache.cloudstack.api.command.user.backup.CreateBackupScheduleCmd;
import org.apache.cloudstack.api.command.user.backup.DeleteBackupScheduleCmd;
import org.apache.cloudstack.api.command.user.backup.ListBackupOfferingsCmd;
import org.apache.cloudstack.api.command.user.backup.ListBackupScheduleCmd;
import org.apache.cloudstack.api.command.user.backup.ListBackupsCmd;
import org.apache.cloudstack.api.response.BackupResponse;
import org.apache.cloudstack.framework.config.ConfigKey;
@ -174,7 +175,7 @@ public interface BackupManager extends BackupService, Configurable, PluggableSer
* @param cmd the ListBackupScheduleCmd carrying the VM ID and list filters
* @return the backup schedules matching the command
*/
List<BackupSchedule> listBackupSchedule(Long vmId);
List<BackupSchedule> listBackupSchedules(ListBackupScheduleCmd cmd);
/**
* Deletes VM backup schedule for a VM

View File

@ -19,11 +19,12 @@ package org.apache.cloudstack.backup;
import java.util.Date;
import org.apache.cloudstack.acl.ControlledEntity;
import org.apache.cloudstack.api.InternalIdentity;
import com.cloud.utils.DateUtil;
public interface BackupSchedule extends InternalIdentity {
public interface BackupSchedule extends ControlledEntity, InternalIdentity {
Long getVmId();
DateUtil.IntervalType getScheduleType();
String getSchedule();

View File

@ -22,6 +22,8 @@ import org.apache.cloudstack.framework.config.Configurable;
import com.cloud.utils.component.Manager;
import java.io.IOException;
public interface UserDataManager extends Manager, Configurable {
String VM_USERDATA_MAX_LENGTH_STRING = "vm.userdata.max.length";
ConfigKey<Integer> VM_USERDATA_MAX_LENGTH = new ConfigKey<>("Advanced", Integer.class, VM_USERDATA_MAX_LENGTH_STRING, "32768",
@ -29,4 +31,14 @@ public interface UserDataManager extends Manager, Configurable {
String concatenateUserData(String userdata1, String userdata2, String userdataProvider);
String validateUserData(String userData, BaseCmd.HTTPMethod httpmethod);
/**
* Validates the given user data UUID for system VMs and returns the user data,
* compressed and base64-encoded, for the system VM to consume.
*
* @param userDataUuid
* @return a String containing the user data after compression and base64 encoding
* @throws IOException
*/
String validateAndGetUserDataForSystemVM(String userDataUuid) throws IOException;
}
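The javadoc above only states the contract. As a self-contained illustration of the general shape it describes (not the actual implementation), compressing and base64-encoding user data could look like this:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import java.util.zip.GZIPOutputStream;

    final class UserDataEncodingExample {
        // Illustrative only: gzip the plain-text user data, then base64-encode the compressed bytes
        static String compressAndEncode(String userData) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (GZIPOutputStream gzip = new GZIPOutputStream(bytes)) {
                gzip.write(userData.getBytes(StandardCharsets.UTF_8));
            }
            return Base64.getEncoder().encodeToString(bytes.toByteArray());
        }
    }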

View File

@ -0,0 +1,41 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.cloudstack.vm;
import org.apache.cloudstack.api.Identity;
import org.apache.cloudstack.api.InternalIdentity;
public interface ImportVmTask extends Identity, InternalIdentity {
enum Step {
Prepare, CloningInstance, ConvertingInstance, Importing, Completed
}
enum TaskState {
Running, Completed, Failed;
public static TaskState getValue(String state) {
for (TaskState s : TaskState.values()) {
if (s.name().equalsIgnoreCase(state)) {
return s;
}
}
throw new IllegalArgumentException("Invalid task state: " + state);
}
}
}
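A short usage sketch of the case-insensitive parser defined above; the fallback in the catch block is a caller's choice, not something this interface prescribes:

    ImportVmTask.TaskState state;
    try {
        // "running", "RUNNING" and "Running" all resolve to TaskState.Running
        state = ImportVmTask.TaskState.getValue(rawState);
    } catch (IllegalArgumentException e) {
        state = ImportVmTask.TaskState.Failed; // illustrative fallback
    }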

View File

@ -0,0 +1,38 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.vm;
import com.cloud.dc.DataCenter;
import com.cloud.host.Host;
import com.cloud.user.Account;
import org.apache.cloudstack.api.command.admin.vm.ListImportVMTasksCmd;
import org.apache.cloudstack.api.response.ImportVMTaskResponse;
import org.apache.cloudstack.api.response.ListResponse;
public interface ImportVmTasksManager {
ListResponse<ImportVMTaskResponse> listImportVMTasks(ListImportVMTasksCmd cmd);
ImportVmTask createImportVMTaskRecord(DataCenter zone, Account owner, long userId, String displayName,
String vcenter, String datacenterName, String sourceVMName,
Host convertHost, Host importHost);
void updateImportVMTaskStep(ImportVmTask importVMTaskVO, DataCenter zone, Account owner, Host convertHost,
Host importHost, Long vmId, ImportVmTask.Step step);
void updateImportVMTaskErrorState(ImportVmTask importVMTaskVO, ImportVmTask.TaskState state, String errorMsg);
}
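A hypothetical call sequence tying the three manager methods together (the real orchestration lives in the VM import code, which is not part of this hunk):

    ImportVmTask task = importVmTasksManager.createImportVMTaskRecord(zone, owner, userId, displayName,
            vcenter, datacenterName, sourceVMName, convertHost, importHost);
    try {
        importVmTasksManager.updateImportVMTaskStep(task, zone, owner, convertHost, importHost,
                null, ImportVmTask.Step.ConvertingInstance);
        // ... convert and import the instance ...
        importVmTasksManager.updateImportVMTaskStep(task, zone, owner, convertHost, importHost,
                importedVmId, ImportVmTask.Step.Completed);
    } catch (Exception e) {
        importVmTasksManager.updateImportVMTaskErrorState(task, ImportVmTask.TaskState.Failed, e.getMessage());
    }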

View File

@ -0,0 +1,98 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.api.command.user.backup;
import com.cloud.exception.InsufficientCapacityException;
import com.cloud.exception.NetworkRuleConflictException;
import com.cloud.exception.ResourceAllocationException;
import com.cloud.exception.ResourceUnavailableException;
import com.cloud.user.Account;
import org.apache.cloudstack.api.ResponseGenerator;
import org.apache.cloudstack.api.response.BackupScheduleResponse;
import org.apache.cloudstack.api.response.ListResponse;
import org.apache.cloudstack.backup.BackupManager;
import org.apache.cloudstack.backup.BackupSchedule;
import org.apache.cloudstack.context.CallContext;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.junit.MockitoJUnitRunner;
import java.util.ArrayList;
import java.util.List;
@RunWith(MockitoJUnitRunner.class)
public class ListBackupScheduleCmdTest {
@Mock
private BackupManager backupManager;
@Mock
private ResponseGenerator responseGenerator;
private ListBackupScheduleCmd cmd;
@Before
public void setUp() {
cmd = new ListBackupScheduleCmd();
cmd.backupManager = backupManager;
cmd._responseGenerator = responseGenerator;
}
@Test
public void testExecuteWithSchedules() throws ResourceUnavailableException, InsufficientCapacityException, ResourceAllocationException, NetworkRuleConflictException {
BackupSchedule schedule = Mockito.mock(BackupSchedule.class);
BackupScheduleResponse scheduleResponse = Mockito.mock(BackupScheduleResponse.class);
List<BackupSchedule> schedules = new ArrayList<>();
schedules.add(schedule);
Mockito.when(backupManager.listBackupSchedules(cmd)).thenReturn(schedules);
Mockito.when(responseGenerator.createBackupScheduleResponse(schedule)).thenReturn(scheduleResponse);
CallContext callContext = Mockito.mock(CallContext.class);
try (org.mockito.MockedStatic<CallContext> mocked = Mockito.mockStatic(CallContext.class)) {
mocked.when(CallContext::current).thenReturn(callContext);
cmd.execute();
}
ListResponse<?> response = (ListResponse<?>) cmd.getResponseObject();
Assert.assertNotNull(response);
Assert.assertEquals(1, response.getResponses().size());
Assert.assertEquals(scheduleResponse, response.getResponses().get(0));
}
@Test
public void testExecuteWithNoSchedules() {
Mockito.when(backupManager.listBackupSchedules(cmd)).thenReturn(new ArrayList<>());
CallContext callContext = Mockito.mock(CallContext.class);
try (org.mockito.MockedStatic<CallContext> mocked = Mockito.mockStatic(CallContext.class)) {
mocked.when(CallContext::current).thenReturn(callContext);
cmd.execute();
} catch (ResourceUnavailableException | InsufficientCapacityException | ResourceAllocationException |
NetworkRuleConflictException e) {
throw new RuntimeException(e);
}
ListResponse<?> response = (ListResponse<?>) cmd.getResponseObject();
Assert.assertNotNull(response);
Assert.assertEquals(0, response.getResponses().size());
}
}

View File

@ -0,0 +1,79 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.api.command.user.snapshot;
import com.cloud.storage.snapshot.SnapshotApiService;
import com.cloud.storage.snapshot.SnapshotPolicy;
import com.cloud.utils.Pair;
import org.apache.cloudstack.api.ResponseGenerator;
import org.apache.cloudstack.api.response.ListResponse;
import org.apache.cloudstack.api.response.SnapshotPolicyResponse;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.mockito.Mockito;
import java.util.ArrayList;
import java.util.List;
public class ListSnapshotPoliciesCmdTest {
private ListSnapshotPoliciesCmd cmd;
private SnapshotApiService snapshotService;
private ResponseGenerator responseGenerator;
@Before
public void setUp() {
cmd = new ListSnapshotPoliciesCmd();
snapshotService = Mockito.mock(SnapshotApiService.class);
responseGenerator = Mockito.mock(ResponseGenerator.class);
cmd._snapshotService = snapshotService;
cmd._responseGenerator = responseGenerator;
}
@Test
public void testExecuteWithPolicies() {
SnapshotPolicy policy = Mockito.mock(SnapshotPolicy.class);
SnapshotPolicyResponse policyResponse = Mockito.mock(SnapshotPolicyResponse.class);
List<SnapshotPolicy> policies = new ArrayList<>();
policies.add(policy);
Mockito.when(snapshotService.listSnapshotPolicies(cmd))
.thenReturn(new Pair<>(policies, 1));
Mockito.when(responseGenerator.createSnapshotPolicyResponse(policy))
.thenReturn(policyResponse);
cmd.execute();
ListResponse<?> response = (ListResponse<?>) cmd.getResponseObject();
Assert.assertNotNull(response);
Assert.assertEquals(1, response.getResponses().size());
Assert.assertEquals(policyResponse, response.getResponses().get(0));
}
@Test
public void testExecuteWithNoPolicies() {
Mockito.when(snapshotService.listSnapshotPolicies(cmd))
.thenReturn(new Pair<>(new ArrayList<>(), 0));
cmd.execute();
ListResponse<?> response = (ListResponse<?>) cmd.getResponseObject();
Assert.assertNotNull(response);
Assert.assertTrue(response.getResponses().isEmpty());
}
}

View File

@ -23,30 +23,37 @@ import com.cloud.hypervisor.Hypervisor;
public class ConvertInstanceCommand extends Command {
private RemoteInstanceTO sourceInstance;
private String originalVMName;
private Hypervisor.HypervisorType destinationHypervisorType;
private DataStoreTO conversionTemporaryLocation;
private String templateDirOnConversionLocation;
private boolean checkConversionSupport;
private boolean exportOvfToConversionLocation;
private int threadsCountToExportOvf = 0;
private String extraParams;
public ConvertInstanceCommand() {
}
public ConvertInstanceCommand(RemoteInstanceTO sourceInstance, Hypervisor.HypervisorType destinationHypervisorType, DataStoreTO conversionTemporaryLocation,
String templateDirOnConversionLocation, boolean checkConversionSupport, boolean exportOvfToConversionLocation) {
String templateDirOnConversionLocation, boolean checkConversionSupport, boolean exportOvfToConversionLocation, String sourceVMName) {
this.sourceInstance = sourceInstance;
this.destinationHypervisorType = destinationHypervisorType;
this.conversionTemporaryLocation = conversionTemporaryLocation;
this.templateDirOnConversionLocation = templateDirOnConversionLocation;
this.checkConversionSupport = checkConversionSupport;
this.exportOvfToConversionLocation = exportOvfToConversionLocation;
this.originalVMName = sourceVMName;
}
public RemoteInstanceTO getSourceInstance() {
return sourceInstance;
}
public String getOriginalVMName() {
return originalVMName;
}
public Hypervisor.HypervisorType getDestinationHypervisorType() {
return destinationHypervisorType;
}
@ -75,6 +82,14 @@ public class ConvertInstanceCommand extends Command {
this.threadsCountToExportOvf = threadsCountToExportOvf;
}
public String getExtraParams() {
return extraParams;
}
public void setExtraParams(String extraParams) {
this.extraParams = extraParams;
}
@Override
public boolean executeInSequence() {
return false;
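A hedged construction sketch showing where the new trailing sourceVMName argument and the extraParams setter fit; all values are placeholders:

    ConvertInstanceCommand convertCmd = new ConvertInstanceCommand(sourceInstance,
            Hypervisor.HypervisorType.KVM, conversionTemporaryLocation,
            templateDirOnConversionLocation, true /* checkConversionSupport */,
            false /* exportOvfToConversionLocation */, "source-vm-01" /* sourceVMName */);
    convertCmd.setExtraParams(extraParams); // free-form converter options; their format is not defined in this hunk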

View File

@ -27,17 +27,20 @@ public class ImportConvertedInstanceCommand extends Command {
private List<String> destinationStoragePools;
private DataStoreTO conversionTemporaryLocation;
private String temporaryConvertUuid;
private boolean forceConvertToPool;
public ImportConvertedInstanceCommand() {
}
public ImportConvertedInstanceCommand(RemoteInstanceTO sourceInstance,
List<String> destinationStoragePools,
DataStoreTO conversionTemporaryLocation, String temporaryConvertUuid) {
DataStoreTO conversionTemporaryLocation, String temporaryConvertUuid,
boolean forceConvertToPool) {
this.sourceInstance = sourceInstance;
this.destinationStoragePools = destinationStoragePools;
this.conversionTemporaryLocation = conversionTemporaryLocation;
this.temporaryConvertUuid = temporaryConvertUuid;
this.forceConvertToPool = forceConvertToPool;
}
public RemoteInstanceTO getSourceInstance() {
@ -56,6 +59,10 @@ public class ImportConvertedInstanceCommand extends Command {
return temporaryConvertUuid;
}
public boolean isForceConvertToPool() {
return forceConvertToPool;
}
@Override
public boolean executeInSequence() {
return false;

View File

@ -116,8 +116,8 @@ public class VolumeObjectTO extends DownloadableObjectTO implements DataTO {
iopsWriteRate = volume.getIopsWriteRate();
iopsWriteRateMax = volume.getIopsWriteRateMax();
iopsWriteRateMaxLength = volume.getIopsWriteRateMaxLength();
cacheMode = volume.getCacheMode();
hypervisorType = volume.getHypervisorType();
setCacheMode(volume.getCacheMode());
setDeviceId(volume.getDeviceId());
this.migrationOptions = volume.getMigrationOptions();
this.directDownload = volume.isDirectDownload();
@ -343,6 +343,10 @@ public class VolumeObjectTO extends DownloadableObjectTO implements DataTO {
}
public void setCacheMode(DiskCacheMode cacheMode) {
if (DiskCacheMode.HYPERVISOR_DEFAULT.equals(cacheMode) && !Hypervisor.HypervisorType.KVM.equals(hypervisorType)) {
this.cacheMode = DiskCacheMode.NONE;
return;
}
this.cacheMode = cacheMode;
}
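One detail worth noting in the constructor change above: the hypervisor type is assigned before setCacheMode(...) runs, because the new guard needs it to decide whether HYPERVISOR_DEFAULT is allowed. A comment-annotated restatement of those lines:

    hypervisorType = volume.getHypervisorType();   // must be populated first
    setCacheMode(volume.getCacheMode());           // HYPERVISOR_DEFAULT falls back to NONE unless the volume is on KVM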

View File

@ -24,6 +24,7 @@
/etc/cloudstack/management/config.json
/etc/cloudstack/extensions/Proxmox/proxmox.sh
/etc/cloudstack/extensions/HyperV/hyperv.py
/etc/cloudstack/extensions/MaaS/maas.py
/etc/default/cloudstack-management
/etc/security/limits.d/cloudstack-limits.conf
/etc/sudoers.d/cloudstack

View File

@ -106,6 +106,9 @@ public interface VirtualMachineManager extends Manager {
ConfigKey<Boolean> VmSyncPowerStateTransitioning = new ConfigKey<>("Advanced", Boolean.class, "vm.sync.power.state.transitioning", "true",
"Whether to sync power states of the transitioning and stalled VMs while processing VM power reports.", false);
ConfigKey<Boolean> SystemVmEnableUserData = new ConfigKey<>(Boolean.class, "systemvm.userdata.enabled", "Advanced", "false",
"Enable user data for system VMs. When enabled, the CPVM, SSVM, and Router system VMs will use the values from the global settings console.proxy.vm.userdata, secstorage.vm.userdata, and virtual.router.userdata, respectively, to provide cloud-init user data to the VM.",
true, ConfigKey.Scope.Zone, null);
interface Topics {
String VM_POWER_STATE = "vm.powerstate";
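A short hedged sketch of reading the new zone-scoped flag; the scoped accessor used here is assumed to follow the existing ConfigKey pattern:

    // true only if the operator enabled system VM user data for this zone
    boolean systemVmUserDataEnabled = VirtualMachineManager.SystemVmEnableUserData.valueIn(zone.getId());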

View File

@ -23,6 +23,7 @@ import java.util.Map;
import java.util.Set;
import com.cloud.exception.ResourceAllocationException;
import com.cloud.storage.Storage;
import com.cloud.utils.Pair;
import org.apache.cloudstack.engine.subsystem.api.storage.DataObject;
import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
@ -182,10 +183,10 @@ public interface VolumeOrchestrationService {
*/
DiskProfile importVolume(Type type, String name, DiskOffering offering, Long sizeInBytes, Long minIops, Long maxIops,
Long zoneId, HypervisorType hypervisorType, VirtualMachine vm, VirtualMachineTemplate template,
Account owner, Long deviceId, Long poolId, String path, String chainInfo);
Account owner, Long deviceId, Long poolId, Storage.StoragePoolType poolType, String path, String chainInfo);
DiskProfile updateImportedVolume(Type type, DiskOffering offering, VirtualMachine vm, VirtualMachineTemplate template,
Long deviceId, Long poolId, String path, String chainInfo, DiskProfile diskProfile);
Long deviceId, Long poolId, Storage.StoragePoolType poolType, String path, String chainInfo, DiskProfile diskProfile);
/**
* Unmanage VM volumes

View File

@ -19,6 +19,7 @@
package org.apache.cloudstack.engine.subsystem.api.storage;
import com.cloud.storage.ScopeType;
import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils;
public class ClusterScope extends AbstractScope {
private ScopeType type = ScopeType.CLUSTER;
@ -51,4 +52,9 @@ public class ClusterScope extends AbstractScope {
return this.zoneId;
}
@Override
public String toString() {
return String.format("ClusterScope %s", ReflectionToStringBuilderUtils.reflectOnlySelectedFields(
this, "zoneId", "clusterId", "podId"));
}
}

View File

@ -19,8 +19,10 @@
package org.apache.cloudstack.engine.subsystem.api.storage;
import com.cloud.storage.ScopeType;
import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils;
public class HostScope extends AbstractScope {
private ScopeType type = ScopeType.HOST;
private Long hostId;
private Long clusterId;
private Long zoneId;
@ -34,7 +36,7 @@ public class HostScope extends AbstractScope {
@Override
public ScopeType getScopeType() {
return ScopeType.HOST;
return this.type;
}
@Override
@ -49,4 +51,10 @@ public class HostScope extends AbstractScope {
public Long getZoneId() {
return zoneId;
}
@Override
public String toString() {
return String.format("HostScope %s", ReflectionToStringBuilderUtils.reflectOnlySelectedFields(
this, "zoneId", "clusterId", "hostId"));
}
}

View File

@ -24,8 +24,8 @@ import com.cloud.hypervisor.Hypervisor;
import com.cloud.storage.StoragePool;
public interface PrimaryDataStoreLifeCycle extends DataStoreLifeCycle {
public static final String CAPACITY_BYTES = "capacityBytes";
public static final String CAPACITY_IOPS = "capacityIops";
String CAPACITY_BYTES = "capacityBytes";
String CAPACITY_IOPS = "capacityIops";
void updateStoragePool(StoragePool storagePool, Map<String, String> details);
void enableStoragePool(DataStore store);

View File

@ -19,6 +19,7 @@
package org.apache.cloudstack.engine.subsystem.api.storage;
import com.cloud.storage.ScopeType;
import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils;
public class ZoneScope extends AbstractScope {
private ScopeType type = ScopeType.ZONE;
@ -39,4 +40,9 @@ public class ZoneScope extends AbstractScope {
return this.zoneId;
}
@Override
public String toString() {
return String.format("ZoneScope %s", ReflectionToStringBuilderUtils.reflectOnlySelectedFields(
this, "zoneId"));
}
}

View File

@ -302,6 +302,8 @@ public interface StorageManager extends StorageService {
Answer sendToPool(StoragePool pool, long[] hostIdsToTryFirst, Command cmd) throws StorageUnavailableException;
void updateStoragePoolHostVOAndBytes(StoragePool pool, long hostId, ModifyStoragePoolAnswer mspAnswer);
CapacityVO getSecondaryStorageUsedStats(Long hostId, Long zoneId);
CapacityVO getStoragePoolUsedStats(Long poolId, Long clusterId, Long podId, Long zoneId);

View File

@ -57,6 +57,13 @@ public interface TemplateManager {
+ "will validate if the provided URL is resolvable during the register of templates/ISOs before persisting them in the database.",
true);
ConfigKey<Boolean> TemplateDeleteFromPrimaryStorage = new ConfigKey<Boolean>("Advanced",
Boolean.class,
"template.delete.from.primary.storage", "true",
"Template when deleted will be instantly deleted from the Primary Storage",
true,
ConfigKey.Scope.Global);
static final String VMWARE_TOOLS_ISO = "vmware-tools.iso";
static final String XS_TOOLS_ISO = "xs-tools.iso";
@ -104,6 +111,8 @@ public interface TemplateManager {
*/
List<VMTemplateStoragePoolVO> getUnusedTemplatesInPool(StoragePoolVO pool);
void evictTemplateFromStoragePoolsForZones(Long templateId, List<Long> zoneId);
/**
* Deletes a template in the specified storage pool.
*

View File

@ -2859,6 +2859,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
}
volume.setPath(result.getPath());
volume.setPoolId(pool.getId());
volume.setPoolType(pool.getPoolType());
if (result.getChainInfo() != null) {
volume.setChainInfo(result.getChainInfo());
}
@ -5244,7 +5245,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
VmConfigDriveLabel, VmConfigDriveOnPrimaryPool, VmConfigDriveForceHostCacheUse, VmConfigDriveUseHostCacheOnUnsupportedPool,
HaVmRestartHostUp, ResourceCountRunningVMsonly, AllowExposeHypervisorHostname, AllowExposeHypervisorHostnameAccountLevel, SystemVmRootDiskSize,
AllowExposeDomainInMetadata, MetadataCustomCloudName, VmMetadataManufacturer, VmMetadataProductName,
VmSyncPowerStateTransitioning
VmSyncPowerStateTransitioning, SystemVmEnableUserData
};
}

View File

@ -4799,6 +4799,18 @@ public class NetworkOrchestrator extends ManagerBase implements NetworkOrchestra
}
});
if (selectedIp != null && GuestType.Shared.equals(network.getGuestType())) {
IPAddressVO ipAddressVO = _ipAddressDao.findByIpAndSourceNetworkId(network.getId(), selectedIp);
if (ipAddressVO != null && IpAddress.State.Free.equals(ipAddressVO.getState())) {
ipAddressVO.setState(IPAddressVO.State.Allocated);
ipAddressVO.setAllocatedTime(new Date());
Account account = _accountDao.findById(vm.getAccountId());
ipAddressVO.setAllocatedInDomainId(account.getDomainId());
ipAddressVO.setAllocatedToAccountId(account.getId());
_ipAddressDao.update(ipAddressVO.getId(), ipAddressVO);
}
}
final Integer networkRate = _networkModel.getNetworkRate(network.getId(), vm.getId());
final NicProfile vmNic = new NicProfile(vo, network, vo.getBroadcastUri(), vo.getIsolationUri(), networkRate, _networkModel.isSecurityGroupSupportedInNetwork(network),
_networkModel.getNetworkTag(vm.getHypervisorType(), network));
@ -4810,15 +4822,15 @@ public class NetworkOrchestrator extends ManagerBase implements NetworkOrchestra
if (network.getGuestType() == GuestType.L2) {
return null;
}
return dataCenter.getNetworkType() == NetworkType.Basic ?
getSelectedIpForNicImportOnBasicZone(ipAddresses.getIp4Address(), network, dataCenter):
return GuestType.Shared.equals(network.getGuestType()) ?
getSelectedIpForNicImportOnSharedNetwork(ipAddresses.getIp4Address(), network, dataCenter):
_ipAddrMgr.acquireGuestIpAddress(network, ipAddresses.getIp4Address());
}
protected String getSelectedIpForNicImportOnBasicZone(String requestedIp, Network network, DataCenter dataCenter) {
protected String getSelectedIpForNicImportOnSharedNetwork(String requestedIp, Network network, DataCenter dataCenter) {
IPAddressVO ipAddressVO = StringUtils.isBlank(requestedIp) ?
_ipAddressDao.findBySourceNetworkIdAndDatacenterIdAndState(network.getId(), dataCenter.getId(), IpAddress.State.Free):
_ipAddressDao.findByIp(requestedIp);
_ipAddressDao.findByIpAndSourceNetworkId(network.getId(), requestedIp);
if (ipAddressVO == null || ipAddressVO.getState() != IpAddress.State.Free) {
String msg = String.format("Cannot find a free IP to assign to VM NIC on network %s", network.getName());
logger.error(msg);

View File

@ -1423,7 +1423,7 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
String volumeToString = getVolumeIdentificationInfos(volume);
VolumeInfo vol = volFactory.getVolume(volume.getId());
if (vol == null){
if (vol == null) {
throw new CloudRuntimeException(String.format("Volume migration failed because volume [%s] is null.", volumeToString));
}
if (destPool == null) {
@ -2308,6 +2308,7 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
StoragePoolVO pool = _storagePoolDao.findByUuid(updatedDataStoreUUID);
if (pool != null) {
vol.setPoolId(pool.getId());
vol.setPoolType(pool.getPoolType());
}
}
_volsDao.update(volumeId, vol);
@ -2317,7 +2318,7 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
@Override
public DiskProfile importVolume(Type type, String name, DiskOffering offering, Long sizeInBytes, Long minIops, Long maxIops,
Long zoneId, HypervisorType hypervisorType, VirtualMachine vm, VirtualMachineTemplate template, Account owner,
Long deviceId, Long poolId, String path, String chainInfo) {
Long deviceId, Long poolId, Storage.StoragePoolType poolType, String path, String chainInfo) {
if (sizeInBytes == null) {
sizeInBytes = offering.getDiskSize();
}
@ -2358,6 +2359,7 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
vol.setFormat(getSupportedImageFormatForCluster(hypervisorType));
vol.setPoolId(poolId);
vol.setPoolType(poolType);
vol.setPath(path);
vol.setChainInfo(chainInfo);
vol.setState(Volume.State.Ready);
@ -2367,7 +2369,7 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
@Override
public DiskProfile updateImportedVolume(Type type, DiskOffering offering, VirtualMachine vm, VirtualMachineTemplate template,
Long deviceId, Long poolId, String path, String chainInfo, DiskProfile diskProfile) {
Long deviceId, Long poolId, Storage.StoragePoolType poolType, String path, String chainInfo, DiskProfile diskProfile) {
VolumeVO vol = _volsDao.findById(diskProfile.getVolumeId());
if (vm != null) {
@ -2401,6 +2403,7 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
vol.setFormat(getSupportedImageFormatForCluster(vm.getHypervisorType()));
vol.setPoolId(poolId);
vol.setPoolType(poolType);
vol.setPath(path);
vol.setChainInfo(chainInfo);
vol.setSize(diskProfile.getSize());

View File

@ -822,7 +822,7 @@ public class NetworkOrchestratorTest extends TestCase {
Mockito.when(network.getId()).thenReturn(networkId);
Mockito.when(dataCenter.getId()).thenReturn(dataCenterId);
Mockito.when(ipAddresses.getIp4Address()).thenReturn(requestedIp);
Mockito.when(testOrchestrator._ipAddressDao.findByIp(requestedIp)).thenReturn(ipAddressVO);
Mockito.when(testOrchestrator._ipAddressDao.findByIpAndSourceNetworkId(networkId, requestedIp)).thenReturn(ipAddressVO);
String ipAddress = testOrchestrator.getSelectedIpForNicImport(network, dataCenter, ipAddresses);
Assert.assertEquals(requestedIp, ipAddress);
}

View File

@ -241,7 +241,7 @@ public class VolumeOrchestratorTest {
volumeOrchestrator.importVolume(volumeType, name, diskOffering, sizeInBytes, null, null,
zoneId, hypervisorType, null, null, owner,
deviceId, poolId, path, chainInfo);
deviceId, poolId, Storage.StoragePoolType.NetworkFilesystem, path, chainInfo);
VolumeVO volume = volumeVOMockedConstructionConstruction.constructed().get(0);
Mockito.verify(volume, Mockito.never()).setInstanceId(Mockito.anyLong());

View File

@ -33,6 +33,8 @@ public interface HostDetailsDao extends GenericDao<DetailVO, Long> {
List<DetailVO> findByName(String name);
void removeExternalDetails(long hostId);
void replaceExternalDetails(long hostId, Map<String, String> details);
}

View File

@ -39,6 +39,7 @@ public class HostDetailsDaoImpl extends GenericDaoBase<DetailVO, Long> implement
protected final SearchBuilder<DetailVO> HostSearch;
protected final SearchBuilder<DetailVO> DetailSearch;
protected final SearchBuilder<DetailVO> DetailNameSearch;
protected final SearchBuilder<DetailVO> ExternalDetailSearch;
public HostDetailsDaoImpl() {
HostSearch = createSearchBuilder();
@ -53,6 +54,11 @@ public class HostDetailsDaoImpl extends GenericDaoBase<DetailVO, Long> implement
DetailNameSearch = createSearchBuilder();
DetailNameSearch.and("name", DetailNameSearch.entity().getName(), SearchCriteria.Op.EQ);
DetailNameSearch.done();
ExternalDetailSearch = createSearchBuilder();
ExternalDetailSearch.and("hostId", ExternalDetailSearch.entity().getHostId(), SearchCriteria.Op.EQ);
ExternalDetailSearch.and("name", ExternalDetailSearch.entity().getName(), SearchCriteria.Op.LIKE);
ExternalDetailSearch.done();
}
@Override
@ -133,6 +139,17 @@ public class HostDetailsDaoImpl extends GenericDaoBase<DetailVO, Long> implement
return listBy(sc);
}
@Override
public void removeExternalDetails(long hostId) {
TransactionLegacy txn = TransactionLegacy.currentTxn();
txn.start();
SearchCriteria<DetailVO> sc = ExternalDetailSearch.create();
sc.setParameters("hostId", hostId);
sc.setParameters("name", VmDetailConstants.EXTERNAL_DETAIL_PREFIX + "%");
remove(sc);
txn.commit();
}
@Override
public void replaceExternalDetails(long hostId, Map<String, String> details) {
if (details.isEmpty()) {
@ -149,11 +166,7 @@ public class HostDetailsDaoImpl extends GenericDaoBase<DetailVO, Long> implement
}
detailVOs.add(new DetailVO(hostId, name, value));
}
SearchBuilder<DetailVO> sb = createSearchBuilder();
sb.and("hostId", sb.entity().getHostId(), SearchCriteria.Op.EQ);
sb.and("name", sb.entity().getName(), SearchCriteria.Op.LIKE);
sb.done();
SearchCriteria<DetailVO> sc = sb.create();
SearchCriteria<DetailVO> sc = ExternalDetailSearch.create();
sc.setParameters("hostId", hostId);
sc.setParameters("name", VmDetailConstants.EXTERNAL_DETAIL_PREFIX + "%");
remove(sc);

View File

@ -24,6 +24,7 @@ import com.cloud.utils.db.Filter;
import com.cloud.utils.db.GenericDao;
public interface CounterDao extends GenericDao<CounterVO, Long> {
CounterVO findByNameProviderValue(String name, String value, String provider);
public List<CounterVO> listCounters(Long id, String name, String source, String provider, String keyword, Filter filter);
}

View File

@ -32,6 +32,7 @@ import com.cloud.utils.db.SearchCriteria.Op;
@Component
public class CounterDaoImpl extends GenericDaoBase<CounterVO, Long> implements CounterDao {
final SearchBuilder<CounterVO> AllFieldsSearch;
final SearchBuilder<CounterVO> CounterValueSearch;
protected CounterDaoImpl() {
AllFieldsSearch = createSearchBuilder();
@ -40,6 +41,21 @@ public class CounterDaoImpl extends GenericDaoBase<CounterVO, Long> implements C
AllFieldsSearch.and("source", AllFieldsSearch.entity().getSource(), Op.EQ);
AllFieldsSearch.and("provider", AllFieldsSearch.entity().getProvider(), Op.EQ);
AllFieldsSearch.done();
CounterValueSearch = createSearchBuilder();
CounterValueSearch.and("name", CounterValueSearch.entity().getName(), Op.EQ);
CounterValueSearch.and("value", CounterValueSearch.entity().getValue(), Op.EQ);
CounterValueSearch.and("provider", CounterValueSearch.entity().getProvider(), Op.EQ);
CounterValueSearch.done();
}
@Override
public CounterVO findByNameProviderValue(String name, String value, String provider) {
SearchCriteria<CounterVO> sc = CounterValueSearch.create();
sc.setParameters("name", name);
sc.setParameters("value", value);
sc.setParameters("provider", provider);
return findOneBy(sc);
}
@Override

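A hypothetical caller-side sketch of the new lookup. It assumes the base DAO excludes removed rows from searches, which is what allows a deleted counter's name, provider and value to be reused:

    CounterVO existing = counterDao.findByNameProviderValue(name, value, provider);
    if (existing != null) {
        throw new InvalidParameterValueException(String.format(
                "An active counter with name %s, provider %s and value %s already exists", name, provider, value));
    }
    // safe to persist a new CounterVO here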
View File

@ -18,6 +18,7 @@ package com.cloud.network.dao;
import java.util.List;
import java.util.Map;
import java.util.Set;
import com.cloud.network.Network;
import com.cloud.network.Network.GuestType;
@ -47,6 +48,12 @@ public interface NetworkDao extends GenericDao<NetworkVO, Long>, StateDao<State,
int getOtherPersistentNetworksCount(long id, String broadcastURI, boolean isPersistent);
List<NetworkVO> listByNetworkDomains(Set<String> uniqueNtwkDomains);
List<NetworkVO> listByNetworkDomainsAndAccountIds(Set<String> uniqueNtwkDomains, Set<Long> accountIds);
List<NetworkVO> listByNetworkDomainsAndDomainIds(Set<String> uniqueNtwkDomains, Set<Long> domainIds);
/**
* Retrieves the next available mac address in this network configuration.
*

View File

@ -86,6 +86,7 @@ public class NetworkDaoImpl extends GenericDaoBase<NetworkVO, Long>implements Ne
GenericSearchBuilder<NetworkVO, Long> GarbageCollectedSearch;
SearchBuilder<NetworkVO> PrivateNetworkSearch;
SearchBuilder<NetworkVO> NetworkDomainSearch;
@Inject
ResourceTagDao _tagsDao;
@ -198,6 +199,12 @@ public class NetworkDaoImpl extends GenericDaoBase<NetworkVO, Long>implements Ne
PersistentNetworkSearch.join("persistent", persistentNtwkOffJoin, PersistentNetworkSearch.entity().getNetworkOfferingId(), persistentNtwkOffJoin.entity().getId(), JoinType.INNER);
PersistentNetworkSearch.done();
NetworkDomainSearch = createSearchBuilder();
NetworkDomainSearch.and("networkDomains", NetworkDomainSearch.entity().getNetworkDomain(), Op.IN);
NetworkDomainSearch.and("accounts", NetworkDomainSearch.entity().getAccountId(), Op.IN);
NetworkDomainSearch.and("domains", NetworkDomainSearch.entity().getDomainId(), Op.IN);
NetworkDomainSearch.done();
PhysicalNetworkSearch = createSearchBuilder();
PhysicalNetworkSearch.and("physicalNetworkId", PhysicalNetworkSearch.entity().getPhysicalNetworkId(), Op.EQ);
PhysicalNetworkSearch.done();
@ -428,6 +435,29 @@ public class NetworkDaoImpl extends GenericDaoBase<NetworkVO, Long>implements Ne
return search(sc, null);
}
@Override
public List<NetworkVO> listByNetworkDomains(Set<String> uniqueNtwkDomains) {
SearchCriteria<NetworkVO> sc = NetworkDomainSearch.create();
sc.setParameters("networkDomains", uniqueNtwkDomains.toArray());
return search(sc, null);
}
@Override
public List<NetworkVO> listByNetworkDomainsAndAccountIds(Set<String> uniqueNtwkDomains, Set<Long> accountIds) {
SearchCriteria<NetworkVO> sc = NetworkDomainSearch.create();
sc.setParameters("networkDomains", uniqueNtwkDomains.toArray());
sc.setParameters("accounts", accountIds.toArray());
return search(sc, null);
}
@Override
public List<NetworkVO> listByNetworkDomainsAndDomainIds(Set<String> uniqueNtwkDomains, Set<Long> domainIds) {
SearchCriteria<NetworkVO> sc = NetworkDomainSearch.create();
sc.setParameters("networkDomains", uniqueNtwkDomains.toArray());
sc.setParameters("domains", domainIds.toArray());
return search(sc, null);
}
@Override
public String getNextAvailableMacAddress(final long networkConfigId, Integer zoneMacIdentifier) {
final SequenceFetcher fetch = SequenceFetcher.getInstance();

View File

@ -577,11 +577,11 @@ public class DiskOfferingVO implements DiskOffering {
@Override
public void setEncrypt(boolean encrypt) { this.encrypt = encrypt; }
@Override
public boolean isShared() {
return !useLocalStorage;
}
public boolean getDiskSizeStrictness() {
return diskSizeStrictness;
}

View File

@ -59,6 +59,12 @@ public class SnapshotPolicyVO implements SnapshotPolicy {
@Column(name = "uuid")
String uuid;
@Column(name = "account_id")
long accountId;
@Column(name = "domain_id")
long domainId;
@Column(name = "display", updatable = true, nullable = false)
protected boolean display = true;
@ -66,7 +72,7 @@ public class SnapshotPolicyVO implements SnapshotPolicy {
this.uuid = UUID.randomUUID().toString();
}
public SnapshotPolicyVO(long volumeId, String schedule, String timezone, IntervalType intvType, int maxSnaps, boolean display) {
public SnapshotPolicyVO(long volumeId, String schedule, String timezone, IntervalType intvType, int maxSnaps, long accountId, long domainId, boolean display) {
this.volumeId = volumeId;
this.schedule = schedule;
this.timezone = timezone;
@ -75,6 +81,8 @@ public class SnapshotPolicyVO implements SnapshotPolicy {
this.active = true;
this.display = display;
this.uuid = UUID.randomUUID().toString();
this.accountId = accountId;
this.domainId = domainId;
}
@Override
@ -160,4 +168,32 @@ public class SnapshotPolicyVO implements SnapshotPolicy {
public void setDisplay(boolean display) {
this.display = display;
}
@Override
public long getAccountId() {
return accountId;
}
public void setAccountId(long accountId) {
this.accountId = accountId;
}
@Override
public long getDomainId() {
return domainId;
}
public void setDomainId(long domainId) {
this.domainId = domainId;
}
@Override
public Class<?> getEntityType() {
return SnapshotPolicy.class;
}
@Override
public String getName() {
return null;
}
}

View File

@ -28,6 +28,7 @@ import javax.persistence.Temporal;
import javax.persistence.TemporalType;
import com.cloud.utils.db.GenericDaoBase;
import org.apache.cloudstack.utils.reflectiontostringbuilderutils.ReflectionToStringBuilderUtils;
/**
* Join table for storage pools and hosts
@ -100,4 +101,9 @@ public class StoragePoolHostVO implements StoragePoolHostAssoc {
this.localPath = localPath;
}
@Override
public String toString() {
return ReflectionToStringBuilderUtils.reflectOnlySelectedFields(this, "hostId", "poolId");
}
}

View File

@ -31,6 +31,8 @@ public interface DiskOfferingDao extends GenericDao<DiskOfferingVO, Long> {
List<DiskOfferingVO> listAllBySizeAndProvisioningType(long size, Storage.ProvisioningType provisioningType);
List<DiskOfferingVO> findCustomDiskOfferings();
List<DiskOfferingVO> listByStorageTag(String tag);
List<DiskOfferingVO> listAllActiveAndNonComputeDiskOfferings();
}

View File

@ -26,6 +26,7 @@ import java.util.List;
import javax.inject.Inject;
import javax.persistence.EntityExistsException;
import com.cloud.offering.DiskOffering;
import org.apache.cloudstack.resourcedetail.dao.DiskOfferingDetailsDao;
import org.springframework.stereotype.Component;
@ -45,6 +46,8 @@ public class DiskOfferingDaoImpl extends GenericDaoBase<DiskOfferingVO, Long> im
protected DiskOfferingDetailsDao detailsDao;
protected final SearchBuilder<DiskOfferingVO> UniqueNameSearch;
protected final SearchBuilder<DiskOfferingVO> ActiveAndNonComputeSearch;
private final String SizeDiskOfferingSearch = "SELECT * FROM disk_offering WHERE " +
"disk_size = ? AND provisioning_type = ? AND removed IS NULL";
@ -56,6 +59,11 @@ public class DiskOfferingDaoImpl extends GenericDaoBase<DiskOfferingVO, Long> im
UniqueNameSearch.and("name", UniqueNameSearch.entity().getUniqueName(), SearchCriteria.Op.EQ);
UniqueNameSearch.done();
ActiveAndNonComputeSearch = createSearchBuilder();
ActiveAndNonComputeSearch.and("state", ActiveAndNonComputeSearch.entity().getState(), SearchCriteria.Op.EQ);
ActiveAndNonComputeSearch.and("computeOnly", ActiveAndNonComputeSearch.entity().isComputeOnly(), SearchCriteria.Op.EQ);
ActiveAndNonComputeSearch.done();
_computeOnlyAttr = _allAttributes.get("computeOnly");
}
@ -164,4 +172,12 @@ public class DiskOfferingDaoImpl extends GenericDaoBase<DiskOfferingVO, Long> im
sc.setParameters("tagEndLike", "%," + tag);
return listBy(sc);
}
@Override
public List<DiskOfferingVO> listAllActiveAndNonComputeDiskOfferings() {
SearchCriteria<DiskOfferingVO> sc = ActiveAndNonComputeSearch.create();
sc.setParameters("state", DiskOffering.State.Active);
sc.setParameters("computeOnly", false);
return listBy(sc);
}
}

View File

@ -35,6 +35,8 @@ public interface VMTemplatePoolDao extends GenericDao<VMTemplateStoragePoolVO, L
List<VMTemplateStoragePoolVO> listByPoolIdAndState(long poolId, ObjectInDataStoreStateMachine.State state);
List<VMTemplateStoragePoolVO> listByPoolIdsAndTemplate(List<Long> poolIds, Long templateId);
List<VMTemplateStoragePoolVO> listByTemplateStatus(long templateId, VMTemplateStoragePoolVO.Status downloadState);
List<VMTemplateStoragePoolVO> listByTemplateStatus(long templateId, VMTemplateStoragePoolVO.Status downloadState, long poolId);

View File

@ -150,6 +150,16 @@ public class VMTemplatePoolDaoImpl extends GenericDaoBase<VMTemplateStoragePoolV
return findOneIncludingRemovedBy(sc);
}
@Override
public List<VMTemplateStoragePoolVO> listByPoolIdsAndTemplate(List<Long> poolIds, Long templateId) {
SearchCriteria<VMTemplateStoragePoolVO> sc = PoolTemplateSearch.create();
if (CollectionUtils.isNotEmpty(poolIds)) {
sc.setParameters("pool_id", poolIds.toArray());
}
sc.setParameters("template_id", templateId);
return listBy(sc);
}
@Override
public List<VMTemplateStoragePoolVO> listByTemplateStatus(long templateId, VMTemplateStoragePoolVO.Status downloadState) {
SearchCriteria<VMTemplateStoragePoolVO> sc = TemplateStatusSearch.create();

View File

@ -822,6 +822,7 @@ public class VolumeDaoImpl extends GenericDaoBase<VolumeVO, Long> implements Vol
if (volume.getState() != Volume.State.Destroy) {
volume.setState(Volume.State.Destroy);
volume.setPoolId(null);
volume.setPoolType(null);
volume.setInstanceId(null);
update(volume.getId(), volume);
remove(volume.getId());

View File

@ -16,6 +16,14 @@
// under the License.
package com.cloud.upgrade.dao;
import com.cloud.utils.exception.CloudRuntimeException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
public class Upgrade42100to42200 extends DbUpgradeAbstractImpl implements DbUpgrade, DbUpgradeSystemVmTemplate {
@Override
@ -27,4 +35,69 @@ public class Upgrade42100to42200 extends DbUpgradeAbstractImpl implements DbUpgr
public String getUpgradedVersion() {
return "4.22.0.0";
}
@Override
public InputStream[] getPrepareScripts() {
final String scriptFile = "META-INF/db/schema-42100to42200.sql";
final InputStream script = Thread.currentThread().getContextClassLoader().getResourceAsStream(scriptFile);
if (script == null) {
throw new CloudRuntimeException("Unable to find " + scriptFile);
}
return new InputStream[] {script};
}
@Override
public void performDataMigration(Connection conn) {
updateSnapshotPolicyOwnership(conn);
updateBackupScheduleOwnership(conn);
}
protected void updateSnapshotPolicyOwnership(Connection conn) {
// set account_id and domain_id in snapshot_policy table from volume table
String selectSql = "SELECT sp.id, v.account_id, v.domain_id FROM snapshot_policy sp, volumes v WHERE sp.volume_id = v.id AND (sp.account_id IS NULL AND sp.domain_id IS NULL)";
String updateSql = "UPDATE snapshot_policy SET account_id = ?, domain_id = ? WHERE id = ?";
try (PreparedStatement selectPstmt = conn.prepareStatement(selectSql);
ResultSet rs = selectPstmt.executeQuery();
PreparedStatement updatePstmt = conn.prepareStatement(updateSql)) {
while (rs.next()) {
long policyId = rs.getLong(1);
long accountId = rs.getLong(2);
long domainId = rs.getLong(3);
updatePstmt.setLong(1, accountId);
updatePstmt.setLong(2, domainId);
updatePstmt.setLong(3, policyId);
updatePstmt.executeUpdate();
}
} catch (SQLException e) {
throw new CloudRuntimeException("Unable to update snapshot_policy table with account_id and domain_id", e);
}
}
protected void updateBackupScheduleOwnership(Connection conn) {
// Set account_id and domain_id in backup_schedule table from vm_instance table
String selectSql = "SELECT bs.id, vm.account_id, vm.domain_id FROM backup_schedule bs, vm_instance vm WHERE bs.vm_id = vm.id AND (bs.account_id IS NULL AND bs.domain_id IS NULL)";
String updateSql = "UPDATE backup_schedule SET account_id = ?, domain_id = ? WHERE id = ?";
try (PreparedStatement selectPstmt = conn.prepareStatement(selectSql);
ResultSet rs = selectPstmt.executeQuery();
PreparedStatement updatePstmt = conn.prepareStatement(updateSql)) {
while (rs.next()) {
long scheduleId = rs.getLong(1);
long accountId = rs.getLong(2);
long domainId = rs.getLong(3);
updatePstmt.setLong(1, accountId);
updatePstmt.setLong(2, domainId);
updatePstmt.setLong(3, scheduleId);
updatePstmt.executeUpdate();
}
} catch (SQLException e) {
throw new CloudRuntimeException("Unable to update backup_schedule table with account_id and domain_id", e);
}
}
}

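The migration above issues one UPDATE per row; for very large snapshot_policy or backup_schedule tables the same backfill could be batched. A minimal sketch of that variant, reusing the SELECT/UPDATE statements shown above (the batch size of 500 is an arbitrary assumption):
protected void updateOwnershipBatched(Connection conn, String selectSql, String updateSql) throws SQLException {
    try (PreparedStatement selectPstmt = conn.prepareStatement(selectSql);
         ResultSet rs = selectPstmt.executeQuery();
         PreparedStatement updatePstmt = conn.prepareStatement(updateSql)) {
        int pending = 0;
        while (rs.next()) {
            updatePstmt.setLong(1, rs.getLong(2)); // account_id
            updatePstmt.setLong(2, rs.getLong(3)); // domain_id
            updatePstmt.setLong(3, rs.getLong(1)); // policy/schedule id
            updatePstmt.addBatch();
            if (++pending % 500 == 0) {            // flush every 500 rows
                updatePstmt.executeBatch();
            }
        }
        updatePstmt.executeBatch();                // flush the remainder
    }
}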
View File

@ -0,0 +1,270 @@
//
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
//
package com.cloud.vm;
import org.apache.cloudstack.vm.ImportVmTask;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;
import java.util.Date;
import java.util.UUID;
@Entity
@Table(name = "import_vm_task")
public class ImportVMTaskVO implements ImportVmTask {
public ImportVMTaskVO(long zoneId, long accountId, long userId, String displayName,
String vcenter, String datacenter, String sourceVMName, long convertHostId, long importHostId) {
this.zoneId = zoneId;
this.accountId = accountId;
this.userId = userId;
this.displayName = displayName;
this.vcenter = vcenter;
this.datacenter = datacenter;
this.sourceVMName = sourceVMName;
this.step = Step.Prepare;
this.uuid = UUID.randomUUID().toString();
this.convertHostId = convertHostId;
this.importHostId = importHostId;
}
public ImportVMTaskVO() {
}
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name = "id")
private long id;
@Column(name = "uuid")
private String uuid;
@Column(name = "zone_id")
private long zoneId;
@Column(name = "account_id")
private long accountId;
@Column(name = "user_id")
private long userId;
@Column(name = "vm_id")
private Long vmId;
@Column(name = "display_name")
private String displayName;
@Column(name = "vcenter")
private String vcenter;
@Column(name = "datacenter")
private String datacenter;
@Column(name = "source_vm_name")
private String sourceVMName;
@Column(name = "convert_host_id")
private long convertHostId;
@Column(name = "import_host_id")
private long importHostId;
@Column(name = "step")
private Step step;
@Column(name = "state")
private TaskState state;
@Column(name = "description")
private String description;
@Column(name = "duration")
private Long duration;
@Column(name = "created")
@Temporal(value = TemporalType.TIMESTAMP)
private Date created;
@Column(name = "updated")
@Temporal(value = TemporalType.TIMESTAMP)
private Date updated;
@Column(name = "removed")
@Temporal(value = TemporalType.TIMESTAMP)
private Date removed;
@Override
public long getId() {
return id;
}
public void setId(long id) {
this.id = id;
}
@Override
public String getUuid() {
return uuid;
}
public void setUuid(String uuid) {
this.uuid = uuid;
}
public long getZoneId() {
return zoneId;
}
public void setZoneId(long zoneId) {
this.zoneId = zoneId;
}
public long getAccountId() {
return accountId;
}
public void setAccountId(long accountId) {
this.accountId = accountId;
}
public long getUserId() {
return userId;
}
public void setUserId(long userId) {
this.userId = userId;
}
public Long getVmId() {
return vmId;
}
public void setVmId(Long vmId) {
this.vmId = vmId;
}
public String getDisplayName() {
return displayName;
}
public void setDisplayName(String displayName) {
this.displayName = displayName;
}
public String getVcenter() {
return vcenter;
}
public void setVcenter(String vcenter) {
this.vcenter = vcenter;
}
public String getDatacenter() {
return datacenter;
}
public void setDatacenter(String datacenter) {
this.datacenter = datacenter;
}
public String getSourceVMName() {
return sourceVMName;
}
public void setSourceVMName(String sourceVMName) {
this.sourceVMName = sourceVMName;
}
public long getConvertHostId() {
return convertHostId;
}
public void setConvertHostId(long convertHostId) {
this.convertHostId = convertHostId;
}
public long getImportHostId() {
return importHostId;
}
public void setImportHostId(long importHostId) {
this.importHostId = importHostId;
}
public Step getStep() {
return step;
}
public void setStep(Step step) {
this.step = step;
}
public TaskState getState() {
return state;
}
public void setState(TaskState state) {
this.state = state;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public Long getDuration() {
return duration;
}
public void setDuration(Long duration) {
this.duration = duration;
}
public Date getCreated() {
return created;
}
public void setCreated(Date created) {
this.created = created;
}
public Date getUpdated() {
return updated;
}
public void setUpdated(Date updated) {
this.updated = updated;
}
public Date getRemoved() {
return removed;
}
public void setRemoved(Date removed) {
this.removed = removed;
}
}

View File

@ -0,0 +1,31 @@
//
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
//
package com.cloud.vm.dao;
import com.cloud.utils.Pair;
import com.cloud.utils.db.GenericDao;
import com.cloud.vm.ImportVMTaskVO;
import org.apache.cloudstack.vm.ImportVmTask;
import java.util.List;
public interface ImportVMTaskDao extends GenericDao<ImportVMTaskVO, Long> {
Pair<List<ImportVMTaskVO>, Integer> listImportVMTasks(Long zoneId, Long accountId, String vcenter, Long convertHostId,
ImportVmTask.TaskState state, Long startIndex, Long pageSizeVal);
}

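A hedged sketch of how a caller might page through tasks with this DAO; the importVMTaskDao and zoneId references, and the page size of 20, are assumptions for illustration:
// Sketch: first page (20 entries) of import tasks in a zone, optionally filtered by state.
ImportVmTask.TaskState state = null; // pass a concrete TaskState to filter, or null for all states
Pair<List<ImportVMTaskVO>, Integer> page =
        importVMTaskDao.listImportVMTasks(zoneId, null, null, null, state, 0L, 20L);
List<ImportVMTaskVO> tasks = page.first();   // rows for this page
Integer totalCount = page.second();          // total matching rows, for pagination UIs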
View File

@ -0,0 +1,74 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.vm.dao;
import com.cloud.utils.Pair;
import com.cloud.utils.db.Filter;
import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.vm.ImportVMTaskVO;
import org.apache.cloudstack.vm.ImportVmTask;
import org.apache.commons.lang3.StringUtils;
import org.springframework.stereotype.Component;
import javax.annotation.PostConstruct;
import java.util.List;
@Component
public class ImportVMTaskDaoImpl extends GenericDaoBase<ImportVMTaskVO, Long> implements ImportVMTaskDao {
private SearchBuilder<ImportVMTaskVO> AllFieldsSearch;
public ImportVMTaskDaoImpl() {
}
@PostConstruct
void init() {
AllFieldsSearch = createSearchBuilder();
AllFieldsSearch.and("zoneId", AllFieldsSearch.entity().getZoneId(), SearchCriteria.Op.EQ);
AllFieldsSearch.and("accountId", AllFieldsSearch.entity().getAccountId(), SearchCriteria.Op.EQ);
AllFieldsSearch.and("vcenter", AllFieldsSearch.entity().getVcenter(), SearchCriteria.Op.EQ);
AllFieldsSearch.and("convertHostId", AllFieldsSearch.entity().getConvertHostId(), SearchCriteria.Op.EQ);
AllFieldsSearch.and("state", AllFieldsSearch.entity().getState(), SearchCriteria.Op.EQ);
AllFieldsSearch.done();
}
@Override
public Pair<List<ImportVMTaskVO>, Integer> listImportVMTasks(Long zoneId, Long accountId, String vcenter, Long convertHostId,
ImportVmTask.TaskState state, Long startIndex, Long pageSizeVal) {
SearchCriteria<ImportVMTaskVO> sc = AllFieldsSearch.create();
if (zoneId != null) {
sc.setParameters("zoneId", zoneId);
}
if (accountId != null) {
sc.setParameters("accountId", accountId);
}
if (StringUtils.isNotBlank(vcenter)) {
sc.setParameters("vcenter", vcenter);
}
if (convertHostId != null) {
sc.setParameters("convertHostId", convertHostId);
}
if (state != null) {
sc.setParameters("state", state);
}
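// Newest first: order by the created column descending, applying the caller's offset and page size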
Filter filter = new Filter(ImportVMTaskVO.class, "created", false, startIndex, pageSizeVal);
return searchAndCount(sc, filter);
}
}

View File

@ -68,10 +68,16 @@ public class BackupScheduleVO implements BackupSchedule {
@Column(name = "quiescevm")
Boolean quiesceVM = false;
@Column(name = "account_id")
Long accountId;
@Column(name = "domain_id")
Long domainId;
public BackupScheduleVO() {
}
public BackupScheduleVO(Long vmId, DateUtil.IntervalType scheduleType, String schedule, String timezone, Date scheduledTimestamp, int maxBackups, Boolean quiesceVM) {
public BackupScheduleVO(Long vmId, DateUtil.IntervalType scheduleType, String schedule, String timezone, Date scheduledTimestamp, int maxBackups, Boolean quiesceVM, Long accountId, Long domainId) {
this.vmId = vmId;
this.scheduleType = (short) scheduleType.ordinal();
this.schedule = schedule;
@ -79,6 +85,8 @@ public class BackupScheduleVO implements BackupSchedule {
this.scheduledTimestamp = scheduledTimestamp;
this.maxBackups = maxBackups;
this.quiesceVM = quiesceVM;
this.accountId = accountId;
this.domainId = domainId;
}
@Override
@ -161,4 +169,32 @@ public class BackupScheduleVO implements BackupSchedule {
public Boolean getQuiesceVM() {
return quiesceVM;
}
@Override
public Class<?> getEntityType() {
return BackupSchedule.class;
}
@Override
public String getName() {
return null;
}
@Override
public long getDomainId() {
return domainId;
}
@Override
public long getAccountId() {
return accountId;
}
public void setAccountId(Long accountId) {
this.accountId = accountId;
}
public void setDomainId(Long domainId) {
this.domainId = domainId;
}
}

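A minimal construction sketch for the extended constructor; the vmId, schedule string, timestamp, account and domain values are illustrative placeholders, not taken from this change:
// Sketch: a daily backup schedule that now records its owner at creation time.
BackupScheduleVO scheduleVO = new BackupScheduleVO(
        vmId,
        DateUtil.IntervalType.DAILY,
        schedule,            // interval-specific schedule string
        "UTC",
        scheduledTimestamp,  // next planned run
        8,                   // retain at most 8 backups
        false,               // do not quiesce the VM
        accountId,
        domainId);           // ownership persisted with the schedule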
View File

@ -24,7 +24,7 @@ import org.apache.cloudstack.backup.BackupOfferingVO;
import com.cloud.utils.db.GenericDao;
public interface BackupOfferingDao extends GenericDao<BackupOfferingVO, Long> {
BackupOfferingResponse newBackupOfferingResponse(BackupOffering policy);
BackupOfferingResponse newBackupOfferingResponse(BackupOffering policy, Boolean crossZoneInstanceCreation);
BackupOffering findByExternalId(String externalId, Long zoneId);
BackupOffering findByName(String name, Long zoneId);
}

View File

@ -50,7 +50,7 @@ public class BackupOfferingDaoImpl extends GenericDaoBase<BackupOfferingVO, Long> implements BackupOfferingDao {
}
@Override
public BackupOfferingResponse newBackupOfferingResponse(BackupOffering offering) {
public BackupOfferingResponse newBackupOfferingResponse(BackupOffering offering, Boolean crossZoneInstanceCreation) {
DataCenterVO zone = dataCenterDao.findById(offering.getZoneId());
BackupOfferingResponse response = new BackupOfferingResponse();
@ -64,6 +64,9 @@ public class BackupOfferingDaoImpl extends GenericDaoBase<BackupOfferingVO, Long> implements BackupOfferingDao {
response.setZoneId(zone.getUuid());
response.setZoneName(zone.getName());
}
if (Boolean.TRUE.equals(crossZoneInstanceCreation)) {
response.setCrossZoneInstanceCreation(true);
}
response.setCreated(offering.getCreated());
response.setObjectName("backupoffering");
return response;

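A hedged caller sketch for the changed signature; how the flag is computed is outside this hunk, so supportsCrossZoneInstanceCreation() below is a hypothetical placeholder:
// Sketch: the second argument drives the new crossZoneInstanceCreation field of the response.
Boolean crossZone = supportsCrossZoneInstanceCreation(offering); // hypothetical capability check
BackupOfferingResponse response = backupOfferingDao.newBackupOfferingResponse(offering, crossZone);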
View File

@ -171,4 +171,5 @@ public interface PrimaryDataStoreDao extends GenericDao<StoragePoolVO, Long> {
List<StoragePoolVO> findPoolsByStorageTypeAndZone(Storage.StoragePoolType storageType, Long zoneId);
List<StoragePoolVO> listByDataCenterIds(List<Long> dataCenterIds);
}

View File

@ -65,6 +65,7 @@ public class PrimaryDataStoreDaoImpl extends GenericDaoBase<StoragePoolVO, Long>
private final GenericSearchBuilder<StoragePoolVO, Long> StatusCountSearch;
private final SearchBuilder<StoragePoolVO> ClustersSearch;
private final SearchBuilder<StoragePoolVO> IdsSearch;
private final SearchBuilder<StoragePoolVO> DcsSearch;
@Inject
private StoragePoolDetailsDao _detailsDao;
@ -167,6 +168,9 @@ public class PrimaryDataStoreDaoImpl extends GenericDaoBase<StoragePoolVO, Long>
IdsSearch.and("ids", IdsSearch.entity().getId(), SearchCriteria.Op.IN);
IdsSearch.done();
DcsSearch = createSearchBuilder();
DcsSearch.and("dataCenterId", DcsSearch.entity().getDataCenterId(), SearchCriteria.Op.IN);
DcsSearch.done();
}
@Override
@ -320,6 +324,9 @@ public class PrimaryDataStoreDaoImpl extends GenericDaoBase<StoragePoolVO, Long>
pool = super.persist(pool);
if (details != null) {
for (Map.Entry<String, String> detail : details.entrySet()) {
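// Hide detail values whose key looks sensitive (contains "password" or "token") from display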
if (detail.getKey().toLowerCase().contains("password") || detail.getKey().toLowerCase().contains("token")) {
displayDetails = false;
}
StoragePoolDetailVO vo = new StoragePoolDetailVO(pool.getId(), detail.getKey(), detail.getValue(), displayDetails);
_detailsDao.persist(vo);
}
@ -924,6 +931,16 @@ public class PrimaryDataStoreDaoImpl extends GenericDaoBase<StoragePoolVO, Long>
return listBy(sc);
}
@Override
public List<StoragePoolVO> listByDataCenterIds(List<Long> dataCenterIds) {
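// Guard against an empty IN clause: return an empty result instead of querying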
if (CollectionUtils.isEmpty(dataCenterIds)) {
return Collections.emptyList();
}
SearchCriteria<StoragePoolVO> sc = DcsSearch.create();
sc.setParameters("dataCenterId", dataCenterIds.toArray());
return listBy(sc);
}
private SearchCriteria<StoragePoolVO> createStoragePoolSearchCriteria(Long storagePoolId, String storagePoolName,
Long zoneId, String path, Long podId, Long clusterId, Long hostId, String address, ScopeType scopeType,
StoragePoolStatus status, String keyword, String storageAccessGroup) {

View File

@ -309,4 +309,5 @@
<bean id="gpuCardDaoImpl" class="com.cloud.gpu.dao.GpuCardDaoImpl" />
<bean id="gpuDeviceDaoImpl" class="com.cloud.gpu.dao.GpuDeviceDaoImpl" />
<bean id="vgpuProfileDaoImpl" class="com.cloud.gpu.dao.VgpuProfileDaoImpl" />
<bean id="importVMTaskDaoImpl" class="com.cloud.vm.dao.ImportVMTaskDaoImpl" />
</beans>

View File

@ -0,0 +1,26 @@
-- Licensed to the Apache Software Foundation (ASF) under one
-- or more contributor license agreements. See the NOTICE file
-- distributed with this work for additional information
-- regarding copyright ownership. The ASF licenses this file
-- to you under the Apache License, Version 2.0 (the
-- "License"); you may not use this file except in compliance
-- with the License. You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing,
-- software distributed under the License is distributed on an
-- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-- KIND, either express or implied. See the License for the
-- specific language governing permissions and limitations
-- under the License.
-- in cloud
DROP PROCEDURE IF EXISTS `cloud`.`IDEMPOTENT_DROP_UNIQUE_KEY`;
CREATE PROCEDURE `cloud`.`IDEMPOTENT_DROP_UNIQUE_KEY` (
IN in_table_name VARCHAR(200),
IN in_index_name VARCHAR(200)
)
BEGIN
DECLARE CONTINUE HANDLER FOR 1091, 1025 BEGIN END;
SET @ddl = CONCAT('ALTER TABLE ', in_table_name, ' DROP KEY ', in_index_name);
PREPARE stmt FROM @ddl;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
END;

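In the upgrade scripts the procedure would typically be invoked with a plain CALL statement; from Java upgrade code the equivalent is a CallableStatement. A hedged sketch, assuming an open java.sql.Connection conn and placeholder table and key names:
// Sketch: drop a unique key only if it exists; MySQL errors 1091/1025 are swallowed by the handler above.
try (CallableStatement cs = conn.prepareCall("{CALL `cloud`.`IDEMPOTENT_DROP_UNIQUE_KEY`(?, ?)}")) {
    cs.setString(1, "`cloud`.`example_table`");   // placeholder table name
    cs.setString(2, "`uk_example`");              // placeholder unique key name
    cs.execute();
}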
Some files were not shown because too many files have changed in this diff.