Compare commits

...

71 Commits

Author SHA1 Message Date
Abhishek Kumar
da1c7cebf9
server: trim autoscale Windows VM hostname (#11327)
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
2025-12-15 15:52:32 +01:00
Abhishek Kumar
39d0d62fdd
api,server: normalize string empty value on config update (#11770)
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-12-15 15:43:00 +01:00
dahn
f570e16836
.github: initial version of Code Owners (#12253)
* initial version of Code Owners

* Update .github/CODEOWNERS

---------

Co-authored-by: Daan Hoogland <dahn@apache.org>
Co-authored-by: John Bampton <jbampton@users.noreply.github.com>
2025-12-15 11:04:45 +01:00
John Bampton
1919dcfb7c
pre-commit trailing-whitespace cleanup LICENSE/NOTICE (#12242) 2025-12-15 10:09:11 +01:00
John Bampton
f417c6b0a1
yamllint use extends: default (#12066) 2025-12-11 17:59:45 +01:00
John Bampton
78f9e6584b
UI(vue) + extras: fix bugs/spelling and standardize (#12073) 2025-12-11 16:41:50 +01:00
John Bampton
cfe96026dc
Standardize and auto add license headers to all Vue files with pre-commit (#12081) 2025-12-10 16:21:41 +01:00
Pearl Dsilva
3c6484792d
UI: Create Account form to set proper domain and role based on route (#12200) 2025-12-09 10:56:04 +01:00
dahn
51910cd260
Add license information to dependabot.yaml
Added Apache License information to dependabot.yaml
2025-12-08 16:48:18 +01:00
dahn
5151f8dc6a
java dependabot file (#11409)
Co-authored-by: Daan Hoogland <dahn@apache.org>
2025-12-08 16:33:10 +01:00
dahn
c81295439f
removed code in comments (#11145) 2025-12-08 16:31:48 +01:00
Suresh Kumar Anaparti
b0d74fe00c
Merge branch '4.22' 2025-12-05 18:59:03 +05:30
Suresh Kumar Anaparti
a0ba2aaf3f
Merge branch '4.20' into 4.22 2025-12-05 18:41:18 +05:30
Abhisar Sinha
4379666fb6
Proxmox Extension : Make settings such as storage, disk_size,... (#12174)
Make storage, disk-size and os-type configurable in the Proxmox extension

Doc PR: apache/cloudstack-documentation#601

---------

Co-authored-by: dahn <daan.hoogland@gmail.com>
2025-12-03 17:05:22 +05:30
Suresh Kumar Anaparti
e4414d1c44
Fix agent wait before reconnect (#12153) 2025-12-03 11:19:47 +05:30
Abhishek Kumar
26009659f9
Merge remote-tracking branch 'apache/4.22' 2025-12-01 13:07:45 +05:30
Abhishek Kumar
2941b518ba
Merge remote-tracking branch 'apache/4.20' into 4.22 2025-12-01 13:05:08 +05:30
dahn
f3a112fd9e
use upstream method for creating enums from strings (#12158)
Co-authored-by: Daan Hoogland <dahn@apache.org>
2025-12-01 08:33:14 +01:00
Abhishek Kumar
243f566a60
refactor: add null check for BroadcastDomainType retrievals (#11572)
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-12-01 08:19:09 +01:00
Wei Zhou
516012a0b4
ceph: fix offline volume migration between ceph pools (#12103) 2025-11-28 15:44:00 +01:00
Abhishek Kumar
44119cf34f
ui: fix dsiple managementservermetricsresponse - agentcount (#12148)
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-11-28 10:29:18 +01:00
John Bampton
db6147060b
Rename PRE-COMMIT.md to PRE_COMMIT.md and fix link (#12157) 2025-11-28 10:01:38 +01:00
Abhishek Kumar
f379d78963
ui: fix section search filter (#12146)
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-11-28 09:29:10 +01:00
Harikrishna
5798fb43a3
Fix upgrade files (#12155) 2025-11-27 15:56:26 +01:00
Daan Hoogland
4e61ddd1bc import 2025-11-26 13:01:52 +01:00
Daan Hoogland
9032fe3fb5 merge LTS branch 4.22 into main 2025-11-26 11:55:50 +01:00
Daan Hoogland
e23c7ef701 Merge release branch 4.20 to 4.22
* 4.20:
  fixed Password Exposure in IPMI Tool Command Execution (#12028)
  server: fix volume offering not updated after offering change (#12003)
  fix API Request Parameters Logged Credential Masking in ApiServer (#12020)
2025-11-26 11:31:27 +01:00
Abhisar Sinha
e33f4754f5
Fix DB upgrade script for 4.22 (#12111) 2025-11-26 09:25:41 +01:00
Abhishek Kumar
9ec8cc4186
api,server,ui: improve listing public ip for associate (#11591)
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-11-26 09:24:12 +01:00
João Jandre
8171d9568c
Block use of internal and external snapshots on KVM (#11039) 2025-11-24 11:39:19 +01:00
Wei Zhou
dba889ea3e
UI: fix list of zones if zone has icon (#12083) 2025-11-24 11:10:43 +01:00
John Bampton
6dc259c7da
Rename and standardize issue templates to .yml (#12082) 2025-11-14 21:22:12 +01:00
John Bampton
39126a4339
Standardize and auto add license headers for Shell files with pre-commit (#12070)
* Add shebang to shell scripts
2025-11-14 14:23:41 +01:00
John Bampton
aa18188d30
pre-commit: auto add license headers for all YAML files (#12069)
Fix and standardize one license header
2025-11-14 14:23:03 +01:00
John Bampton
4ed86a2627
pre-commit upgrade codespell; fix spelling; (#10144) 2025-11-14 14:17:10 +01:00
John Bampton
86ae1fee7f
Standardize and auto add license headers for SQL files with pre-commit (#12071) 2025-11-14 11:47:27 +01:00
Abhishek Kumar
21d844ba1c
ui: fix zone options for image instance deploy button (#12060)
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-11-14 10:43:42 +01:00
John Bampton
ac3b18095a
pre-commit use colored text in the CI for pass / fail / skipped (#11977) 2025-11-13 15:59:07 +05:30
John Bampton
fff4cafdca
ui(locales): remove duplicates and fix typos (#11872) 2025-11-13 11:17:02 +01:00
John Bampton
a5b455ff3a
pre-commit: auto add table of contents with doctoc (#11679)
https://github.com/thlorenz/doctoc?tab=readme-ov-file#usage-as-a-git-hook
https://github.com/thlorenz/doctoc/releases/tag/v2.2.0

Generates table of contents for Markdown files inside local git repository.

Links are compatible with anchors generated by github or other sites.

Added TOC to 3 Markdown files.

Never have to create a TOC again just run: `pre-commit run doctoc --all-files`

- CONTRIBUTING.md
- INSTALL.md
- README.md

So both Apache Airflow and Apache Sedona use `doctoc`:

eb4a8bc03c/.pre-commit-config.yaml (L32)
b0d86fda01/.pre-commit-config.yaml (L34)
2025-11-13 11:13:19 +01:00
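As background for the doctoc commit above: doctoc and GitHub derive each TOC link from its heading by lowercasing, stripping punctuation, and replacing spaces with hyphens. A rough sketch of that slug rule (an approximation, not doctoc's exact implementation):

```python
import re

def github_slug(heading):
    """Approximate GitHub/doctoc anchor generation for a Markdown heading."""
    s = heading.strip().lower()
    s = re.sub(r"[^\w\- ]", "", s)  # drop punctuation such as '?' and '/'
    return s.replace(" ", "-")

assert github_slug("Release Principles") == "release-principles"
assert github_slug("Who Uses CloudStack?") == "who-uses-cloudstack"
assert github_slug("Debian/Ubuntu") == "debianubuntu"
```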
John Bampton
8b034dc439
chore: rename workflow linter.yml to pre-commit.yml (#11647) 2025-11-13 15:22:49 +05:30
YoulongChen
028dd86945
fixed Password Exposure in IPMI Tool Command Execution (#12028) 2025-11-13 13:40:36 +05:30
Abhishek Kumar
dc8f465527
engine-schema: upgrade path for 4.23.0 (#12048)
Adds a 4.22.0 to 4.23.0 upgrade path.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-11-13 08:51:08 +01:00
dahn
e90e31d386
add isPerson check to query for AD (#11843) 2025-11-12 16:09:28 +01:00
Madhukar Mishra
f985a67f4d
Fixes:#7837: Add isolationMethods and vlan to TrafficTypeResponse (#8151)
Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: dahn <daan@onecht.net>
2025-11-12 15:49:52 +01:00
dahn
5f9e131198
Svgs (#12051) 2025-11-12 14:31:36 +05:30
Abhishek Kumar
f0a0936675
server: fix volume offering not updated after offering change (#12003)
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2025-11-12 09:51:51 +01:00
Abhisar Sinha
671d8ad704
Track volume usage data at a vm granularity as well (#11531)
Co-authored-by: Vishesh <8760112+vishesh92@users.noreply.github.com>
2025-11-12 09:32:01 +01:00
YoulongChen
81787b310e
fix API Request Parameters Logged Credential Masking in ApiServer (#12020) 2025-11-12 13:06:19 +05:30
Erik Böck
23fb0e2ccb
Update GUI Kubernetes logo (#11895) 2025-11-11 18:13:00 +01:00
Davi Torres
40c8bc528d
Keeping consistency with other error messages. (#11649)
Co-authored-by: Davi Torres <dtorres@simnet.ca>
Co-authored-by: dahn <daan.hoogland@gmail.com>
2025-11-11 15:33:07 +01:00
Pearl Dsilva
15439ede7d
UI: Update and reset domain level configuration (#11571) 2025-11-11 09:29:54 +01:00
Wei Zhou
50fe265017
Merge remote-tracking branch 'apache/4.20' into 4.22 2025-11-07 17:19:53 +01:00
Wei Zhou
d26122bf22
Veeam: use pre-defined object mapper (#10715) 2025-11-07 16:13:10 +01:00
Suresh Kumar Anaparti
2dd1e6d786
Enable UEFI on KVM hosts (by default), and configure with some default settings (#11740) 2025-11-07 14:54:02 +01:00
Phsm Qwerty
8c86f24261
enhancement: add instance info as Libvirt metadata (#11061) 2025-11-07 14:31:34 +01:00
Wei Zhou
2954e96947
Veeam: get templateId from vm instance if vm is created from ISO (#10705) 2025-11-07 11:55:27 +01:00
Manoj Kumar
c5c3cc40c1
consider Instance in Starting state for listPodsByUserConcentration (#11845) 2025-11-07 10:43:46 +01:00
Suresh Kumar Anaparti
9c0efb7072
DB setup: support db schema creation (with --schema-only) without force recreate option (#12004) 2025-11-07 09:37:11 +01:00
Suresh Kumar Anaparti
b8ec941ec1
uefi property typo (#11929) 2025-11-07 09:31:11 +01:00
Wei Zhou
8230f04a79
CKS: update cloud.kubernetes.cluster.network.offering to dynamic (#11847) 2025-11-06 11:13:53 +01:00
Pearl Dsilva
a50de029bf
Add empty Provider value in Network/VPC Offering form (#11982) 2025-11-06 11:09:00 +01:00
Suresh Kumar Anaparti
81b2c38be9
Merge branch '4.22' 2025-11-06 14:41:59 +05:30
Suresh Kumar Anaparti
ac8c200790
merge fix 2025-11-06 14:41:27 +05:30
Suresh Kumar Anaparti
5504b053e4
Merge branch '4.20' into 4.22 2025-11-06 14:37:38 +05:30
Harikrishna Patnala
dbda673e1f Updating pom.xml version numbers for release 4.23.0.0-SNAPSHOT
Signed-off-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
2025-11-05 16:54:39 +05:30
Harikrishna Patnala
e66926e6a4 Merge branch '4.22' 2025-11-05 16:52:20 +05:30
Harikrishna Patnala
d160731b9f Updating pom.xml version numbers for release 4.22.1.0-SNAPSHOT
Signed-off-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
2025-11-05 16:07:07 +05:30
Wei Zhou
15c2e50338
UI: fix typo Upload SSL certificate (#11869) 2025-11-03 15:36:52 +01:00
Wei Zhou
d53b6dbda4
api/test: fix storage pool update with only id (#11897) 2025-11-03 15:25:09 +01:00
Suresh Kumar Anaparti
e90e436ef8
UI: Enable listall (for Affinity Groups, SSH Keypairs, User Data) in deploy instance wizard for admin, and lists SSH Keypairs, User Data by domain/account (#11906) 2025-10-29 11:18:32 +01:00
480 changed files with 2986 additions and 2273 deletions

.github/CODEOWNERS (vendored, new file, 22 lines)

@@ -0,0 +1,22 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
/plugins/storage/volume/linstor @rp-
/plugins/storage/volume/storpool @slavkap
.pre-commit-config.yaml @jbampton
/.github/linters/ @jbampton
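
For readers unfamiliar with CODEOWNERS: each line maps a path pattern to the GitHub users who get requested for review when matching files change. A minimal sketch of the directory-prefix idea for the entries above (a hypothetical helper, not GitHub's actual matcher, which has more rules such as last-match-wins and glob patterns):

```python
# Directory entries from the CODEOWNERS file above.
OWNERS = {
    "plugins/storage/volume/linstor": ["@rp-"],
    "plugins/storage/volume/storpool": ["@slavkap"],
}

def owners_for(path):
    """Return owners whose directory prefix matches the changed path (sketch)."""
    return next((o for prefix, o in OWNERS.items()
                 if path.startswith(prefix + "/")), [])

assert owners_for("plugins/storage/volume/linstor/src/Main.java") == ["@rp-"]
assert owners_for("server/src/Foo.java") == []
```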


@@ -15,13 +15,14 @@
 # specific language governing permissions and limitations
 # under the License.
 ---
-extends: relaxed
+extends: default
 rules:
   line-length:
     max: 400 # Very forgiving for GitHub Actions and infrastructure files
   indentation: disable # Disable indentation checking for existing files
   comments: disable # Disable comment formatting checks
+  braces: disable
   brackets: disable # Disable bracket spacing checks
   colons:
     max-spaces-after: -1 # Allow any number of spaces after colon


@@ -4,6 +4,7 @@ acount
 actuall
 acuiring
 acumulate
+addin
 addreess
 addtion
 adminstrator
@@ -12,10 +13,8 @@ afrer
 afterall
 againt
 ags
-aktive
 algoritm
 allo
-alloacate
 allocted
 alocation
 alogrithm
@@ -65,6 +64,7 @@ bject
 boardcast
 bootstraper
 bu
+callin
 cant
 capabilites
 capablity
@@ -73,6 +73,7 @@ carrefully
 cavaet
 chaing
 checkd
+checkin
 childs
 choosen
 chould
@@ -93,7 +94,6 @@ confg
 configruation
 configuable
 conneciton
-connexion
 constrait
 constraits
 containg
@@ -101,9 +101,7 @@ contex
 continuesly
 contro
 controler
-controles
 controll
-convienient
 convinience
 coputer
 correcponding
@@ -158,13 +156,13 @@ differnet
 differnt
 direcotry
 directroy
-disale
 disbale
 discrepency
 disover
 dissapper
 dissassociated
 divice
+dockin
 doesn'
 doesnot
 doesnt
@@ -175,7 +173,6 @@ eanbled
 earch
 ect
 elemnt
-eles
 elments
 emmited
 enble
@@ -187,22 +184,19 @@ environmnet
 equivalant
 erro
 erronous
-everthing
 everytime
 excute
 execept
 execption
-exects
 execut
 executeable
 exeeded
 exisitng
 exisits
+existin
 existsing
-exitting
 expcted
 expection
-explaination
 explicitely
 faield
 faild
@@ -215,7 +209,6 @@ fillled
 findout
 fisrt
 fo
-folowing
 fowarding
 frist
 fro
@@ -234,6 +227,7 @@ hanling
 happend
 hasing
 hasnt
+havin
 hda
 hostanme
 hould
@@ -253,20 +247,14 @@ implmeneted
 implmentation
 incase
 includeing
-incosistency
 indecates
-indien
 infor
 informations
 informaton
-infrastrcuture
 ingore
-inital
 initalize
 initator
-initilization
 inspite
-instace
 instal
 instnace
 intefaces
@@ -284,12 +272,8 @@ ist
 klunky
 lable
 leve
-lief
 limite
-linke
 listner
-lokal
-lokales
 maintainence
 maintenace
 maintenence
@@ -298,7 +282,6 @@ mambers
 manaully
 manuel
 maxium
-mehtod
 mergable
 mesage
 messge
@@ -308,7 +291,6 @@ minumum
 mis
 modifers
 mor
-mot
 mulitply
 multipl
 multple
@@ -322,7 +304,7 @@ nin
 nodel
 nome
 noone
-nowe
+notin
 numbe
 numer
 occured
@@ -390,12 +372,9 @@ remaning
 remore
 remvoing
 renabling
-repeatly
 reponse
 reqest
 reqiured
-requieres
-requried
 reserv
 reserverd
 reseted
@@ -414,14 +393,13 @@ retuned
 returing
 rever
 rocessor
-roperty
 runing
 runnign
 sate
 scalled
-scipt
 scirpt
 scrip
-seconadry
 seconday
 seesion
 sepcified
@@ -434,12 +412,10 @@ settig
 sevices
 shoul
 shoule
-sie
 signle
 simplier
 singature
 skiping
-snaphsot
 snpashot
 specied
 specifed
@@ -450,7 +426,6 @@ standy
 statics
 stickyness
 stil
-stip
 storeage
 strat
 streched
@@ -459,7 +434,6 @@ succesfull
 successfull
 suceessful
 suces
-sucessfully
 suiteable
 suppots
 suppport
@@ -492,7 +466,6 @@ uncompressible
 uneccessarily
 unexepected
 unexpect
-unknow
 unkonw
 unkown
 unneccessary
@@ -500,14 +473,12 @@ unparseable
 unrecoginized
 unsupport
 unxpected
-updat
 uptodate
 usera
 usign
 usin
 utlization
 vaidate
-valiate
 valule
 valus
 varibles
@@ -516,8 +487,6 @@ verfying
 verifing
 virutal
 visable
-wakup
 wil
 wit
-wll
 wth
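
The word list above is kept sorted and de-duplicated; the repository enforces this with pre-commit's file-contents-sorter hook run with `--unique` (visible in the `.pre-commit-config.yaml` diff further down). The invariant can be sketched as:

```python
def is_sorted_unique(lines):
    """True when a word list is already sorted with no duplicate entries."""
    return lines == sorted(set(lines))

# New entries in this diff, in list order.
assert is_sorted_unique(["addin", "callin", "checkin"])
# Out-of-order or duplicated entries would fail the hook.
assert not is_sorted_unique(["checkin", "addin"])
assert not is_sorted_unique(["addin", "addin"])
```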

.github/workflows/dependabot.yaml (vendored, new file, 28 lines)

@@ -0,0 +1,28 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
  - package-ecosystem: "maven" # See documentation for possible values
    directory: "/" # Location of package manifests
    schedule:
      interval: "daily"


@@ -44,6 +44,6 @@ jobs:
           path: ~/.cache/pre-commit
           key: pre-commit|${{ env.PY }}|${{ hashFiles('.pre-commit-config.yaml') }}
       - name: Run pre-commit
-        run: pre-commit run --all-files
+        run: pre-commit run --color=always --all-files
       - name: Run manual pre-commit hooks
-        run: pre-commit run --all-files --hook-stage manual
+        run: pre-commit run --color=always --all-files --hook-stage manual


@@ -25,6 +25,12 @@ repos:
     hooks:
       - id: identity
       - id: check-hooks-apply
+  - repo: https://github.com/thlorenz/doctoc.git
+    rev: v2.2.0
+    hooks:
+      - id: doctoc
+        name: Add TOC for Markdown files
+        files: ^CONTRIBUTING\.md$|^INSTALL\.md$|^README\.md$
   - repo: https://github.com/oxipng/oxipng
     rev: v9.1.5
     hooks:
@@ -41,6 +47,11 @@ repos:
   - repo: https://github.com/Lucas-C/pre-commit-hooks
     rev: v1.5.5
     hooks:
+      - id: chmod
+        name: set file permissions
+        args: ['644']
+        files: \.md$
+        stages: [manual]
       - id: insert-license
         name: add license for all Markdown files
         files: \.md$
@@ -51,6 +62,44 @@ repos:
           - .github/workflows/license-templates/LICENSE.txt
           - --fuzzy-match-generates-todo
         exclude: ^(CHANGES|ISSUE_TEMPLATE|PULL_REQUEST_TEMPLATE)\.md$|^ui/docs/(full|smoke)-test-plan\.template\.md$
+      - id: insert-license
+        name: add license for all Shell files
+        description: automatically adds a licence header to all Shell files that don't have a license header
+        files: \.sh$
+        args:
+          - --comment-style
+          - '|#|'
+          - --license-filepath
+          - .github/workflows/license-templates/LICENSE.txt
+          - --fuzzy-match-generates-todo
+      - id: insert-license
+        name: add license for all SQL files
+        files: \.sql$
+        args:
+          - --comment-style
+          - '|--|'
+          - --license-filepath
+          - .github/workflows/license-templates/LICENSE.txt
+          - --fuzzy-match-generates-todo
+      - id: insert-license
+        name: add license for all Vue files
+        files: \.vue$
+        args:
+          - --comment-style
+          - '|//|'
+          - --license-filepath
+          - .github/workflows/license-templates/LICENSE.txt
+          - --fuzzy-match-generates-todo
+      - id: insert-license
+        name: add license for all YAML files
+        description: automatically adds a licence header to all YAML files that don't have a license header
+        files: \.ya?ml$
+        args:
+          - --comment-style
+          - '|#|'
+          - --license-filepath
+          - .github/workflows/license-templates/LICENSE.txt
+          - --fuzzy-match-generates-todo
   - repo: https://github.com/pre-commit/pre-commit-hooks
     rev: v6.0.0
     hooks:
@@ -84,7 +133,7 @@ repos:
             ^systemvm/agent/certs/realhostip\.key$|
             ^test/integration/smoke/test_ssl_offloading\.py$
       - id: end-of-file-fixer
-        exclude: \.vhd$
+        exclude: \.vhd$|\.svg$
       - id: file-contents-sorter
         args: [--unique]
         files: ^\.github/linters/codespell\.txt$
@@ -92,11 +141,11 @@ repos:
       - id: forbid-submodules
       - id: mixed-line-ending
       - id: trailing-whitespace
-        files: \.(bat|cfg|cs|css|gitignore|header|in|install|java|md|properties|py|rb|rc|sh|sql|te|template|txt|ucls|vue|xml|xsl|yaml|yml)$|^cloud-cli/bindir/cloud-tool$|^debian/changelog$
+        files: ^(LICENSE|NOTICE)$|\.(bat|cfg|cs|css|gitignore|header|in|install|java|md|properties|py|rb|rc|sh|sql|te|template|txt|ucls|vue|xml|xsl|yaml|yml)$|^cloud-cli/bindir/cloud-tool$|^debian/changelog$
         args: [--markdown-linebreak-ext=md]
         exclude: ^services/console-proxy/rdpconsole/src/test/doc/freerdp-debug-log\.txt$
   - repo: https://github.com/codespell-project/codespell
-    rev: v2.2.6
+    rev: v2.4.1
     hooks:
       - id: codespell
         name: run codespell
@@ -117,14 +166,6 @@ repos:
         args: [--config=.github/linters/.markdown-lint.yml]
         types: [markdown]
         files: \.(md|mdown|markdown)$
-  - repo: https://github.com/Lucas-C/pre-commit-hooks
-    rev: v1.5.5
-    hooks:
-      - id: chmod
-        name: set file permissions
-        args: ['644']
-        files: \.md$
-        stages: [manual]
   - repo: https://github.com/adrienverge/yamllint
     rev: v1.37.1
     hooks:
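
A note on how pre-commit applies the `files:` patterns above: each is a Python regular expression searched against a file's repository-relative path. An abridged sketch of the extended trailing-whitespace pattern (the extension list is shortened here for illustration; the full list is in the diff):

```python
import re

# Abridged form of the trailing-whitespace "files:" pattern, which this
# diff extends to also cover the top-level LICENSE and NOTICE files.
FILES = re.compile(r"^(LICENSE|NOTICE)$|\.(java|md|py|sh|sql|vue|yaml|yml)$")

assert FILES.search("LICENSE")          # newly covered by this change
assert FILES.search("ui/src/App.vue")   # extension match anywhere in the tree
assert not FILES.search("disk.vhd")     # not in the extension list
```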


@@ -21,6 +21,24 @@
 ## Summary
+
+<!-- START doctoc generated TOC please keep comment here to allow auto update -->
+<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
+
+- [Summary](#summary)
+- [Bug fixes](#bug-fixes)
+- [Developing new features](#developing-new-features)
+- [PendingReleaseNotes file](#pendingreleasenotes-file)
+- [Fork the code](#fork-the-code)
+- [Making changes](#making-changes)
+- [Rebase `feature_x` to include updates from `upstream/main`](#rebase-feature_x-to-include-updates-from-upstreammain)
+- [Make a GitHub Pull Request to contribute your changes](#make-a-github-pull-request-to-contribute-your-changes)
+- [Cleaning up after a successful pull request](#cleaning-up-after-a-successful-pull-request)
+- [Release Principles](#release-principles)
+<!-- END doctoc generated TOC please keep comment here to allow auto update -->
+
+## Summary
+
 This document covers how to contribute to the ACS project. ACS uses GitHub PRs to manage code contributions.
 
 These instructions assume you have a GitHub.com account, so if you don't have one you will have to create one. Your proposed code changes will be published to your own fork of the ACS project, and you will submit a Pull Request for your changes to be added.


@@ -26,9 +26,21 @@ or the developer [wiki](https://cwiki.apache.org/confluence/display/CLOUDSTACK/H
 Apache CloudStack developers use various platforms for development, this guide
 was tested against a CentOS 7 x86_64 setup.
 
-* [Setting up development environment](https://cwiki.apache.org/confluence/display/CLOUDSTACK/Setting+up+CloudStack+Development+Environment) for Apache CloudStack.
-* [Building](https://cwiki.apache.org/confluence/display/CLOUDSTACK/How+to+build+CloudStack) Apache CloudStack.
-* [Appliance based development](https://github.com/rhtyd/monkeybox)
+<!-- START doctoc generated TOC please keep comment here to allow auto update -->
+<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
+
+- [Setting up Development Environment](#setting-up-development-environment)
+- [Using jenv and/or pyenv for Version Management](#using-jenv-andor-pyenv-for-version-management)
+- [Getting the Source Code](#getting-the-source-code)
+- [Building](#building)
+- [To bring up CloudStack UI](#to-bring-up-cloudstack-ui)
+- [Building with non-redistributable plugins](#building-with-non-redistributable-plugins)
+- [Packaging and Installation](#packaging-and-installation)
+- [Debian/Ubuntu](#debianubuntu)
+- [RHEL/CentOS](#rhelcentos)
+- [Notes](#notes)
+<!-- END doctoc generated TOC please keep comment here to allow auto update -->
 
 ## Setting up Development Environment


@@ -20,7 +20,7 @@
 # pre-commit
 
 We run [pre-commit](https://pre-commit.com/) with
-[GitHub Actions](https://github.com/apache/cloudstack/blob/main/.github/workflows/linter.yml) so installation on your
+[GitHub Actions](https://github.com/apache/cloudstack/blob/main/.github/workflows/pre-commit.yml) so installation on your
 local machine is currently optional.
 
 The `pre-commit` [configuration file](https://github.com/apache/cloudstack/blob/main/.pre-commit-config.yaml)


@@ -31,6 +31,24 @@
 [![Apache CloudStack](tools/logo/apache_cloudstack.png)](https://cloudstack.apache.org/)
 
+<!-- START doctoc generated TOC please keep comment here to allow auto update -->
+<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
+
+- [Who Uses CloudStack?](#who-uses-cloudstack)
+- [Demo](#demo)
+- [Getting Started](#getting-started)
+- [Getting Source Repository](#getting-source-repository)
+- [Documentation](#documentation)
+- [News and Events](#news-and-events)
+- [Getting Involved and Contributing](#getting-involved-and-contributing)
+- [Reporting Security Vulnerabilities](#reporting-security-vulnerabilities)
+- [License](#license)
+- [Notice of Cryptographic Software](#notice-of-cryptographic-software)
+- [Star History](#star-history)
+- [Contributors](#contributors)
+<!-- END doctoc generated TOC please keep comment here to allow auto update -->
+
 Apache CloudStack is open source software designed to deploy and manage large
 networks of virtual machines, as a highly available, highly scalable
 Infrastructure as a Service (IaaS) cloud computing platform. CloudStack is used


@@ -0,0 +1,24 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# Configuration file for UEFI
guest.nvram.template.legacy=@GUESTNVRAMTEMPLATELEGACY@
guest.loader.legacy=@GUESTLOADERLEGACY@
guest.nvram.template.secure=@GUESTNVRAMTEMPLATESECURE@
guest.loader.secure=@GUESTLOADERSECURE@
guest.nvram.path=@GUESTNVRAMPATH@


@@ -24,7 +24,7 @@
     <parent>
         <groupId>org.apache.cloudstack</groupId>
         <artifactId>cloudstack</artifactId>
-        <version>4.22.0.0</version>
+        <version>4.23.0.0-SNAPSHOT</version>
     </parent>
     <dependencies>
         <dependency>
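
The bump shown here (and repeated across the module POMs in this compare) moves main from 4.22.0.0 to the 4.23.0.0-SNAPSHOT development version. A small sketch of how such Maven-style versions split into a numeric part and a qualifier (an illustrative helper, not CloudStack code):

```python
def parse_maven_version(version):
    """Split '4.23.0.0-SNAPSHOT' into ((4, 23, 0, 0), 'SNAPSHOT')."""
    base, _, qualifier = version.partition("-")
    return tuple(int(part) for part in base.split(".")), qualifier or None

assert parse_maven_version("4.23.0.0-SNAPSHOT") == ((4, 23, 0, 0), "SNAPSHOT")
assert parse_maven_version("4.22.0.0") == ((4, 22, 0, 0), None)
# Numeric tuples compare element-wise, so the new snapshot sorts after 4.22.
assert parse_maven_version("4.23.0.0-SNAPSHOT")[0] > parse_maven_version("4.22.0.0")[0]
```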


@@ -1322,7 +1322,6 @@ public class Agent implements HandlerFactory, IAgentControl, AgentStatusUpdater
                 processResponse((Response)request, task.getLink());
             } else {
                 //put the requests from mgt server into another thread pool, as the request may take a longer time to finish. Don't block the NIO main thread pool
-                //processRequest(request, task.getLink());
                 requestHandler.submit(new AgentRequestHandler(getType(), getLink(), request));
             }
         } catch (final ClassNotFoundException e) {
@@ -1332,13 +1331,14 @@ public class Agent implements HandlerFactory, IAgentControl, AgentStatusUpdater
             }
         } else if (task.getType() == Task.Type.DISCONNECT) {
             try {
-                // an issue has been found if reconnect immediately after disconnecting. please refer to https://github.com/apache/cloudstack/issues/8517
+                // an issue has been found if reconnect immediately after disconnecting.
                 // wait 5 seconds before reconnecting
+                logger.debug("Wait for 5 secs before reconnecting, disconnect task - {}", () -> getLinkLog(task.getLink()));
                 Thread.sleep(5000);
             } catch (InterruptedException e) {
             }
             shell.setConnectionTransfer(false);
-            logger.debug("Executing disconnect task - {}", () -> getLinkLog(task.getLink()));
+            logger.debug("Executing disconnect task - {} and reconnecting", () -> getLinkLog(task.getLink()));
             reconnect(task.getLink());
         } else if (task.getType() == Task.Type.OTHER) {
             processOtherTask(task);


@@ -117,7 +117,7 @@ public class AgentProperties{
     /**
      * Local storage path.<br>
-     * This property allows multiple values to be entered in a single String. The differente values must be separated by commas.<br>
+     * This property allows multiple values to be entered in a single String. The different values must be separated by commas.<br>
      * Data type: String.<br>
      * Default value: <code>/var/lib/libvirt/images/</code>
      */
@@ -134,7 +134,7 @@ public class AgentProperties{
     /**
      * MANDATORY: The UUID for the local storage pool.<br>
-     * This property allows multiple values to be entered in a single String. The differente values must be separated by commas.<br>
+     * This property allows multiple values to be entered in a single String. The different values must be separated by commas.<br>
      * Data type: String.<br>
      * Default value: <code>null</code>
      */

View File

@ -24,7 +24,7 @@
<parent> <parent>
<groupId>org.apache.cloudstack</groupId> <groupId>org.apache.cloudstack</groupId>
<artifactId>cloudstack</artifactId> <artifactId>cloudstack</artifactId>
<version>4.22.0.0</version> <version>4.23.0.0-SNAPSHOT</version>
</parent> </parent>
<dependencies> <dependencies>
<dependency> <dependency>

View File

@ -0,0 +1,182 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package com.cloud.agent.api.to;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
public class VirtualMachineMetadataTO {
// VM details
private final String name;
private final String internalName;
private final String displayName;
private final String instanceUuid;
private final Integer cpuCores;
private final Integer memory;
private final Long created;
private final Long started;
// Owner details
private final String ownerDomainUuid;
private final String ownerDomainName;
private final String ownerAccountUuid;
private final String ownerAccountName;
private final String ownerProjectUuid;
private final String ownerProjectName;
// Host and service offering
private final String serviceOfferingName;
private final List<String> serviceOfferingHostTags;
// zone, pod, and cluster details
private final String zoneName;
private final String zoneUuid;
private final String podName;
private final String podUuid;
private final String clusterName;
private final String clusterUuid;
// resource tags
private final Map<String, String> resourceTags;
public VirtualMachineMetadataTO(
String name, String internalName, String displayName, String instanceUuid, Integer cpuCores, Integer memory, Long created, Long started,
String ownerDomainUuid, String ownerDomainName, String ownerAccountUuid, String ownerAccountName, String ownerProjectUuid, String ownerProjectName,
String serviceOfferingName, List<String> serviceOfferingHostTags,
String zoneName, String zoneUuid, String podName, String podUuid, String clusterName, String clusterUuid, Map<String, String> resourceTags) {
/*
 * A failure while collecting metadata must not be fatal; the VM can still be started.
 * Unknown fields therefore get an explicit "unknown" value, so that gaps caused by
 * bugs on some execution paths remain visible and can be fixed.
 */
this.name = (name != null) ? name : "unknown";
this.internalName = (internalName != null) ? internalName : "unknown";
this.displayName = (displayName != null) ? displayName : "unknown";
this.instanceUuid = (instanceUuid != null) ? instanceUuid : "unknown";
this.cpuCores = (cpuCores != null) ? cpuCores : -1;
this.memory = (memory != null) ? memory : -1;
this.created = (created != null) ? created : 0;
this.started = (started != null) ? started : 0;
this.ownerDomainUuid = (ownerDomainUuid != null) ? ownerDomainUuid : "unknown";
this.ownerDomainName = (ownerDomainName != null) ? ownerDomainName : "unknown";
this.ownerAccountUuid = (ownerAccountUuid != null) ? ownerAccountUuid : "unknown";
this.ownerAccountName = (ownerAccountName != null) ? ownerAccountName : "unknown";
this.ownerProjectUuid = (ownerProjectUuid != null) ? ownerProjectUuid : "unknown";
this.ownerProjectName = (ownerProjectName != null) ? ownerProjectName : "unknown";
this.serviceOfferingName = (serviceOfferingName != null) ? serviceOfferingName : "unknown";
this.serviceOfferingHostTags = (serviceOfferingHostTags != null) ? serviceOfferingHostTags : new ArrayList<>();
this.zoneName = (zoneName != null) ? zoneName : "unknown";
this.zoneUuid = (zoneUuid != null) ? zoneUuid : "unknown";
this.podName = (podName != null) ? podName : "unknown";
this.podUuid = (podUuid != null) ? podUuid : "unknown";
this.clusterName = (clusterName != null) ? clusterName : "unknown";
this.clusterUuid = (clusterUuid != null) ? clusterUuid : "unknown";
this.resourceTags = (resourceTags != null) ? resourceTags : new HashMap<>();
}
public String getName() {
return name;
}
public String getInternalName() {
return internalName;
}
public String getDisplayName() {
return displayName;
}
public String getInstanceUuid() {
return instanceUuid;
}
public Integer getCpuCores() {
return cpuCores;
}
public Integer getMemory() {
return memory;
}
public Long getCreated() { return created; }
public Long getStarted() {
return started;
}
public String getOwnerDomainUuid() {
return ownerDomainUuid;
}
public String getOwnerDomainName() {
return ownerDomainName;
}
public String getOwnerAccountUuid() {
return ownerAccountUuid;
}
public String getOwnerAccountName() {
return ownerAccountName;
}
public String getOwnerProjectUuid() {
return ownerProjectUuid;
}
public String getOwnerProjectName() {
return ownerProjectName;
}
public String getserviceOfferingName() {
return serviceOfferingName;
}
public List<String> getserviceOfferingHostTags() {
return serviceOfferingHostTags;
}
public String getZoneName() {
return zoneName;
}
public String getZoneUuid() {
return zoneUuid;
}
public String getPodName() {
return podName;
}
public String getPodUuid() {
return podUuid;
}
public String getClusterName() {
return clusterName;
}
public String getClusterUuid() {
return clusterUuid;
}
public Map<String, String> getResourceTags() { return resourceTags; }
}

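The constructor above coalesces every null argument into an explicit sentinel ("unknown", -1, 0, or an empty collection) so that a metadata gap never blocks a VM start. The pattern in isolation, with illustrative helper names that are not part of CloudStack:

```java
import java.util.ArrayList;
import java.util.List;

public class Defaults {
    // Same defaulting pattern as VirtualMachineMetadataTO's constructor:
    // null fields become explicit sentinels that are easy to spot later.
    static String orUnknown(String value) {
        return (value != null) ? value : "unknown";
    }

    static <T> List<T> orEmpty(List<T> value) {
        return (value != null) ? value : new ArrayList<>();
    }

    public static void main(String[] args) {
        System.out.println(orUnknown(null));      // unknown
        System.out.println(orEmpty(null).size()); // 0
    }
}
```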
View File

@ -89,6 +89,7 @@ public class VirtualMachineTO {
private DeployAsIsInfoTO deployAsIsInfo; private DeployAsIsInfoTO deployAsIsInfo;
private String metadataManufacturer; private String metadataManufacturer;
private String metadataProductName; private String metadataProductName;
private VirtualMachineMetadataTO metadata;
public VirtualMachineTO(long id, String instanceName, VirtualMachine.Type type, int cpus, Integer speed, long minRam, long maxRam, BootloaderType bootloader, public VirtualMachineTO(long id, String instanceName, VirtualMachine.Type type, int cpus, Integer speed, long minRam, long maxRam, BootloaderType bootloader,
String os, boolean enableHA, boolean limitCpuUse, String vncPassword) { String os, boolean enableHA, boolean limitCpuUse, String vncPassword) {
@ -494,6 +495,14 @@ public class VirtualMachineTO {
this.metadataProductName = metadataProductName; this.metadataProductName = metadataProductName;
} }
public VirtualMachineMetadataTO getMetadata() {
return metadata;
}
public void setMetadata(VirtualMachineMetadataTO metadata) {
this.metadata = metadata;
}
@Override @Override
public String toString() { public String toString() {
return String.format("VM {id: \"%s\", name: \"%s\", uuid: \"%s\", type: \"%s\"}", id, name, uuid, type); return String.format("VM {id: \"%s\", name: \"%s\", uuid: \"%s\", type: \"%s\"}", id, name, uuid, type);

View File

@ -36,5 +36,4 @@ public interface HostStats {
public HostStats getHostStats(); public HostStats getHostStats();
public double getLoadAverage(); public double getLoadAverage();
// public double getXapiMemoryUsageKBs();
} }

View File

@ -78,7 +78,7 @@ public class Networks {
} }
@Override @Override
public String getValueFrom(URI uri) { public String getValueFrom(URI uri) {
return uri.getAuthority(); return uri == null ? null : uri.getAuthority();
} }
}, },
Vswitch("vs", String.class), LinkLocal(null, null), Vnet("vnet", Long.class), Storage("storage", Integer.class), Lswitch("lswitch", String.class) { Vswitch("vs", String.class), LinkLocal(null, null), Vnet("vnet", Long.class), Storage("storage", Integer.class), Lswitch("lswitch", String.class) {
@ -96,7 +96,7 @@ public class Networks {
*/ */
@Override @Override
public String getValueFrom(URI uri) { public String getValueFrom(URI uri) {
return uri.getSchemeSpecificPart(); return uri == null ? null : uri.getSchemeSpecificPart();
} }
}, },
Mido("mido", String.class), Pvlan("pvlan", String.class), Mido("mido", String.class), Pvlan("pvlan", String.class),
@ -177,7 +177,7 @@ public class Networks {
* @return the scheme as BroadcastDomainType * @return the scheme as BroadcastDomainType
*/ */
public static BroadcastDomainType getSchemeValue(URI uri) { public static BroadcastDomainType getSchemeValue(URI uri) {
return toEnumValue(uri.getScheme()); return toEnumValue(uri == null ? null : uri.getScheme());
} }
/** /**
@ -191,7 +191,7 @@ public class Networks {
if (com.cloud.dc.Vlan.UNTAGGED.equalsIgnoreCase(str)) { if (com.cloud.dc.Vlan.UNTAGGED.equalsIgnoreCase(str)) {
return Native; return Native;
} }
return getSchemeValue(new URI(str)); return getSchemeValue(str == null ? null : new URI(str));
} }
/** /**
@ -220,7 +220,7 @@ public class Networks {
* @return the host part as String * @return the host part as String
*/ */
public String getValueFrom(URI uri) { public String getValueFrom(URI uri) {
return uri.getHost(); return uri == null ? null : uri.getHost();
} }
/** /**
@ -243,7 +243,7 @@ public class Networks {
* @throws URISyntaxException the string is not even an uri * @throws URISyntaxException the string is not even an uri
*/ */
public static String getValue(String uriString) throws URISyntaxException { public static String getValue(String uriString) throws URISyntaxException {
return getValue(new URI(uriString)); return getValue(uriString == null ? null : new URI(uriString));
} }
/** /**

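Each hunk above adds the same guard: a null `URI` (or null URI string) now yields null instead of throwing a `NullPointerException`. A minimal sketch of the pattern; the class and method names here are illustrative, not CloudStack's:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class NullSafeUri {
    // Mirrors the "uri == null ? null : uri.getScheme()" guards added above:
    // a null URI propagates as null instead of an NPE.
    static String schemeOf(URI uri) {
        return uri == null ? null : uri.getScheme();
    }

    // String overload: a null string short-circuits before new URI(...) is
    // called, so neither an NPE nor a URISyntaxException is raised for null.
    static String schemeOf(String uriString) throws URISyntaxException {
        return schemeOf(uriString == null ? null : new URI(uriString));
    }

    public static void main(String[] args) throws URISyntaxException {
        System.out.println(schemeOf("vlan://123"));        // vlan
        System.out.println(schemeOf((String) null));       // null
    }
}
```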
View File

@ -41,4 +41,6 @@ public interface PhysicalNetworkTrafficType extends InternalIdentity, Identity {
String getHypervNetworkLabel(); String getHypervNetworkLabel();
String getOvm3NetworkLabel(); String getOvm3NetworkLabel();
String getVlan();
} }

View File

@ -108,8 +108,7 @@ public class LbStickinessMethod {
} }
public void addParam(String name, Boolean required, String description, Boolean isFlag) { public void addParam(String name, Boolean required, String description, Boolean isFlag) {
/* FIXME : UI is breaking if the capability string length is larger , temporarily description is commented out */ /* is this still a valid comment: FIXME : UI is breaking if the capability string length is larger , temporarily description is commented out */
// LbStickinessMethodParam param = new LbStickinessMethodParam(name, required, description);
LbStickinessMethodParam param = new LbStickinessMethodParam(name, required, " ", isFlag); LbStickinessMethodParam param = new LbStickinessMethodParam(name, required, " ", isFlag);
_paramList.add(param); _paramList.add(param);
return; return;
@ -133,7 +132,6 @@ public class LbStickinessMethod {
public void setDescription(String description) { public void setDescription(String description) {
/* FIXME : UI is breaking if the capability string length is larger , temporarily description is commented out */ /* FIXME : UI is breaking if the capability string length is larger , temporarily description is commented out */
//this.description = description;
this._description = " "; this._description = " ";
} }
} }

View File

@ -128,7 +128,7 @@ public class Storage {
public static enum TemplateType { public static enum TemplateType {
ROUTING, // Router template ROUTING, // Router template
SYSTEM, /* routing, system vm template */ SYSTEM, /* routing, system vm template */
BUILTIN, /* buildin template */ BUILTIN, /* builtin template */
PERHOST, /* every host has this template, don't need to install it in secondary storage */ PERHOST, /* every host has this template, don't need to install it in secondary storage */
USER, /* User supplied template/iso */ USER, /* User supplied template/iso */
VNF, /* VNFs (virtual network functions) template */ VNF, /* VNFs (virtual network functions) template */

View File

@ -150,7 +150,7 @@ public class UpdateCfgCmd extends BaseCmd {
ConfigurationResponse response = _responseGenerator.createConfigurationResponse(cfg); ConfigurationResponse response = _responseGenerator.createConfigurationResponse(cfg);
response.setResponseName(getCommandName()); response.setResponseName(getCommandName());
response = setResponseScopes(response); response = setResponseScopes(response);
response = setResponseValue(response, cfg); setResponseValue(response, cfg);
this.setResponseObject(response); this.setResponseObject(response);
} else { } else {
throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, "Failed to update config"); throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, "Failed to update config");
@ -161,15 +161,13 @@ public class UpdateCfgCmd extends BaseCmd {
* Sets the configuration value in the response. If the configuration is in the `Hidden` or `Secure` categories, the value is encrypted before being set in the response. * Sets the configuration value in the response. If the configuration is in the `Hidden` or `Secure` categories, the value is encrypted before being set in the response.
* @param response to be set with the configuration `cfg` value * @param response to be set with the configuration `cfg` value
* @param cfg to be used in setting the response value * @param cfg to be used in setting the response value
* @return the response with the configuration's value
*/ */
public ConfigurationResponse setResponseValue(ConfigurationResponse response, Configuration cfg) { public void setResponseValue(ConfigurationResponse response, Configuration cfg) {
String value = cfg.getValue();
if (cfg.isEncrypted()) { if (cfg.isEncrypted()) {
response.setValue(DBEncryptionUtil.encrypt(getValue())); value = DBEncryptionUtil.encrypt(value);
} else {
response.setValue(getValue());
} }
return response; response.setValue(value);
} }
/** /**

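The refactored `setResponseValue` above reads the configuration's value once, encrypts it only when the configuration is in the `Hidden` or `Secure` categories, and sets it once; the diff also replaces the old code's call to the command's `getValue()` with the configuration's own value. A sketch of that shape, using a stand-in for `DBEncryptionUtil.encrypt` (names illustrative):

```java
public class ResponseValueDemo {
    // Stand-in for DBEncryptionUtil.encrypt; the real implementation differs.
    static String encrypt(String s) {
        return "enc(" + s + ")";
    }

    // Read once, transform only when encrypted, return once -- the same
    // flow as the refactored setResponseValue above.
    static String responseValue(String value, boolean encrypted) {
        if (encrypted) {
            value = encrypt(value);
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(responseValue("secret", true));  // enc(secret)
        System.out.println(responseValue("plain", false));  // plain
    }
}
```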
View File

@ -1,4 +1,4 @@
// Licensedname = "listIsoPermissions", to the Apache Software Foundation (ASF) under one // Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file // or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information // distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file // regarding copyright ownership. The ASF licenses this file

View File

@ -153,6 +153,8 @@ public class UpdateStoragePoolCmd extends BaseCmd {
if (ObjectUtils.anyNotNull(name, capacityIops, capacityBytes, url, isTagARule, tags) || if (ObjectUtils.anyNotNull(name, capacityIops, capacityBytes, url, isTagARule, tags) ||
MapUtils.isNotEmpty(details)) { MapUtils.isNotEmpty(details)) {
result = _storageService.updateStoragePool(this); result = _storageService.updateStoragePool(this);
} else {
result = _storageService.getStoragePool(getId());
} }
if (enabled != null) { if (enabled != null) {

View File

@ -1,4 +1,4 @@
// Licensedname = "listTemplatePermissions", to the Apache Software Foundation (ASF) under one // Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file // or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information // distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file // regarding copyright ownership. The ASF licenses this file

View File

@ -26,14 +26,13 @@ import org.apache.cloudstack.api.BaseListCmd;
import org.apache.cloudstack.api.Parameter; import org.apache.cloudstack.api.Parameter;
import org.apache.cloudstack.api.response.ListResponse; import org.apache.cloudstack.api.response.ListResponse;
import org.apache.cloudstack.api.response.PhysicalNetworkResponse; import org.apache.cloudstack.api.response.PhysicalNetworkResponse;
import org.apache.cloudstack.api.response.ProviderResponse;
import org.apache.cloudstack.api.response.TrafficTypeResponse; import org.apache.cloudstack.api.response.TrafficTypeResponse;
import com.cloud.network.PhysicalNetworkTrafficType; import com.cloud.network.PhysicalNetworkTrafficType;
import com.cloud.user.Account; import com.cloud.user.Account;
import com.cloud.utils.Pair; import com.cloud.utils.Pair;
@APICommand(name = "listTrafficTypes", description = "Lists traffic types of a given physical network.", responseObject = ProviderResponse.class, since = "3.0.0", @APICommand(name = "listTrafficTypes", description = "Lists traffic types of a given physical network.", responseObject = TrafficTypeResponse.class, since = "3.0.0",
requestHasSensitiveInfo = false, responseHasSensitiveInfo = false) requestHasSensitiveInfo = false, responseHasSensitiveInfo = false)
public class ListTrafficTypesCmd extends BaseListCmd { public class ListTrafficTypesCmd extends BaseListCmd {

View File

@ -53,7 +53,7 @@ public class ListPublicIpAddressesCmd extends BaseListRetrieveOnlyResourceCountC
@Parameter(name = ApiConstants.ALLOCATED_ONLY, type = CommandType.BOOLEAN, description = "limits search results to allocated public IP addresses") @Parameter(name = ApiConstants.ALLOCATED_ONLY, type = CommandType.BOOLEAN, description = "limits search results to allocated public IP addresses")
private Boolean allocatedOnly; private Boolean allocatedOnly;
@Parameter(name = ApiConstants.STATE, type = CommandType.STRING, description = "lists all public IP addresses by state") @Parameter(name = ApiConstants.STATE, type = CommandType.STRING, description = "lists all public IP addresses by state. A comma-separated list of states can be passed")
private String state; private String state;
@Parameter(name = ApiConstants.FOR_VIRTUAL_NETWORK, type = CommandType.BOOLEAN, description = "the virtual network for the IP address") @Parameter(name = ApiConstants.FOR_VIRTUAL_NETWORK, type = CommandType.BOOLEAN, description = "the virtual network for the IP address")

View File

@ -1,4 +1,4 @@
// Licensedname = "listIsoPermissions", to the Apache Software Foundation (ASF) under one // Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file // or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information // distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file // regarding copyright ownership. The ASF licenses this file

View File

@ -1,4 +1,4 @@
// Licensedname = "listTemplatePermissions", to the Apache Software Foundation (ASF) under one // Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file // or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information // distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file // regarding copyright ownership. The ASF licenses this file

View File

@ -66,7 +66,7 @@ public class UpdateVpnConnectionCmd extends BaseAsyncCustomIdCmd {
@Override @Override
public String getEventDescription() { public String getEventDescription() {
return "Updating site-to-site VPN connection id= " + id; return "Updating site-to-site VPN connection ID = " + id;
} }
@Override @Override

View File

@ -63,7 +63,7 @@ public class UpdateVpnGatewayCmd extends BaseAsyncCustomIdCmd {
@Override @Override
public String getEventDescription() { public String getEventDescription() {
return "Update site-to-site VPN gateway id= " + id; return "Update site-to-site VPN gateway ID = " + id;
} }
@Override @Override

View File

@ -27,8 +27,6 @@ import org.apache.cloudstack.api.EntityReference;
import org.apache.cloudstack.network.tls.SslCert; import org.apache.cloudstack.network.tls.SslCert;
import com.cloud.serializer.Param; import com.cloud.serializer.Param;
//import org.apache.cloudstack.api.EntityReference;
@EntityReference(value = SslCert.class) @EntityReference(value = SslCert.class)
public class SslCertResponse extends BaseResponse { public class SslCertResponse extends BaseResponse {

View File

@ -56,6 +56,14 @@ public class TrafficTypeResponse extends BaseResponse {
@Param(description = "The network name label of the physical device dedicated to this traffic on a HyperV host") @Param(description = "The network name label of the physical device dedicated to this traffic on a HyperV host")
private String hypervNetworkLabel; private String hypervNetworkLabel;
@SerializedName(ApiConstants.VLAN)
@Param(description = "The VLAN id to be used for Management traffic by VMware host")
private String vlan;
@SerializedName(ApiConstants.ISOLATION_METHODS)
@Param(description = "isolation methods for the physical network traffic")
private String isolationMethods;
@SerializedName(ApiConstants.OVM3_NETWORK_LABEL) @SerializedName(ApiConstants.OVM3_NETWORK_LABEL)
@Param(description = "The network name of the physical device dedicated to this traffic on an OVM3 host") @Param(description = "The network name of the physical device dedicated to this traffic on an OVM3 host")
private String ovm3NetworkLabel; private String ovm3NetworkLabel;
@ -128,4 +136,20 @@ public class TrafficTypeResponse extends BaseResponse {
public void setOvm3Label(String ovm3Label) { public void setOvm3Label(String ovm3Label) {
this.ovm3NetworkLabel = ovm3Label; this.ovm3NetworkLabel = ovm3Label;
} }
public String getIsolationMethods() {
return isolationMethods;
}
public void setIsolationMethods(String isolationMethods) {
this.isolationMethods = isolationMethods;
}
public String getVlan() {
return vlan;
}
public void setVlan(String vlan) {
this.vlan = vlan;
}
} }

View File

@ -124,6 +124,10 @@ public interface BackupProvider {
*/ */
boolean supportsInstanceFromBackup(); boolean supportsInstanceFromBackup();
default boolean supportsMemoryVmSnapshot() {
return true;
}
/** /**
* Returns the backup storage usage (Used, Total) for a backup provider * Returns the backup storage usage (Used, Total) for a backup provider
* @param zoneId the zone for which to return metrics * @param zoneId the zone for which to return metrics

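`supportsMemoryVmSnapshot()` is added as a default interface method, so existing `BackupProvider` implementations keep compiling and inherit `true`; a provider opts out only by overriding. A self-contained sketch of the mechanism, with illustrative types that are not CloudStack's:

```java
public class DefaultMethodDemo {
    interface Provider {
        String name();

        // Default method: implementations written before this capability
        // existed inherit "true" without any code change.
        default boolean supportsMemoryVmSnapshot() {
            return true;
        }
    }

    // Predates the capability flag; compiles unchanged and reports true.
    static class LegacyProvider implements Provider {
        public String name() { return "legacy"; }
    }

    // Explicitly opts out by overriding the default.
    static class LimitedProvider implements Provider {
        public String name() { return "limited"; }

        @Override
        public boolean supportsMemoryVmSnapshot() { return false; }
    }

    public static void main(String[] args) {
        System.out.println(new LegacyProvider().supportsMemoryVmSnapshot());  // true
        System.out.println(new LimitedProvider().supportsMemoryVmSnapshot()); // false
    }
}
```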
View File

@ -34,4 +34,11 @@ public interface BackupService {
* @return backup provider * @return backup provider
*/ */
BackupProvider getBackupProvider(final Long zoneId); BackupProvider getBackupProvider(final Long zoneId);
/**
* Find backup provider by name
* @param name backup provider name
* @return backup provider
*/
BackupProvider getBackupProvider(final String name);
} }

View File

@ -37,6 +37,24 @@ public class NetworksTest {
public void setUp() { public void setUp() {
} }
@Test
public void nullBroadcastDomainTypeTest() throws URISyntaxException {
BroadcastDomainType type = BroadcastDomainType.getTypeOf(null);
Assert.assertEquals("a null uri should mean a broadcasttype of undecided", BroadcastDomainType.UnDecided, type);
}
@Test
public void nullBroadcastDomainTypeValueTest() {
URI uri = null;
Assert.assertNull(BroadcastDomainType.getValue(uri));
}
@Test
public void nullBroadcastDomainTypeStringValueTest() throws URISyntaxException {
String uriString = null;
Assert.assertNull(BroadcastDomainType.getValue(uriString));
}
@Test @Test
public void emptyBroadcastDomainTypeTest() throws URISyntaxException { public void emptyBroadcastDomainTypeTest() throws URISyntaxException {
BroadcastDomainType type = BroadcastDomainType.getTypeOf(""); BroadcastDomainType type = BroadcastDomainType.getTypeOf("");

View File

@ -0,0 +1,81 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.cloudstack.api.command.admin.config;
import org.apache.cloudstack.api.response.ConfigurationResponse;
import org.apache.cloudstack.config.Configuration;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.MockedStatic;
import org.mockito.Mockito;
import org.mockito.junit.MockitoJUnitRunner;
import com.cloud.utils.crypt.DBEncryptionUtil;
@RunWith(MockitoJUnitRunner.class)
public class UpdateCfgCmdTest {
private UpdateCfgCmd updateCfgCmd;
private MockedStatic<DBEncryptionUtil> mockedStatic;
@Before
public void setUp() {
updateCfgCmd = new UpdateCfgCmd();
mockedStatic = Mockito.mockStatic(DBEncryptionUtil.class);
}
@After
public void tearDown() {
mockedStatic.close();
}
@Test
public void setResponseValueSetsEncryptedValueWhenConfigurationIsEncrypted() {
ConfigurationResponse response = new ConfigurationResponse();
Configuration cfg = Mockito.mock(Configuration.class);
Mockito.when(cfg.isEncrypted()).thenReturn(true);
Mockito.when(cfg.getValue()).thenReturn("testValue");
Mockito.when(DBEncryptionUtil.encrypt("testValue")).thenReturn("encryptedValue");
updateCfgCmd.setResponseValue(response, cfg);
Assert.assertEquals("encryptedValue", response.getValue());
}
@Test
public void setResponseValueSetsPlainValueWhenConfigurationIsNotEncrypted() {
ConfigurationResponse response = new ConfigurationResponse();
Configuration cfg = Mockito.mock(Configuration.class);
Mockito.when(cfg.isEncrypted()).thenReturn(false);
Mockito.when(cfg.getValue()).thenReturn("testValue");
updateCfgCmd.setResponseValue(response, cfg);
Assert.assertEquals("testValue", response.getValue());
}
@Test
public void setResponseValueHandlesNullConfigurationValueGracefully() {
ConfigurationResponse response = new ConfigurationResponse();
Configuration cfg = Mockito.mock(Configuration.class);
Mockito.when(cfg.isEncrypted()).thenReturn(false);
Mockito.when(cfg.getValue()).thenReturn(null);
updateCfgCmd.setResponseValue(response, cfg);
Assert.assertNull(response.getValue());
}
}

View File

@ -78,10 +78,6 @@ public class ScaleVMCmdTest extends TestCase {
scaleVMCmd._responseGenerator = responseGenerator; scaleVMCmd._responseGenerator = responseGenerator;
UserVmResponse userVmResponse = Mockito.mock(UserVmResponse.class); UserVmResponse userVmResponse = Mockito.mock(UserVmResponse.class);
//List<UserVmResponse> list = Mockito.mock(UserVmResponse.class);
//list.add(userVmResponse);
//LinkedList<UserVmResponse> mockedList = Mockito.mock(LinkedList.class);
//Mockito.when(mockedList.get(0)).thenReturn(userVmResponse);
List<UserVmResponse> list = new LinkedList<UserVmResponse>(); List<UserVmResponse> list = new LinkedList<UserVmResponse>();
list.add(userVmResponse); list.add(userVmResponse);

View File

@ -25,7 +25,7 @@
<parent> <parent>
<groupId>org.apache.cloudstack</groupId> <groupId>org.apache.cloudstack</groupId>
<artifactId>cloudstack</artifactId> <artifactId>cloudstack</artifactId>
<version>4.22.0.0</version> <version>4.23.0.0-SNAPSHOT</version>
</parent> </parent>
<dependencies> <dependencies>
<dependency> <dependency>

View File

@ -24,7 +24,7 @@
<parent> <parent>
<groupId>org.apache.cloudstack</groupId> <groupId>org.apache.cloudstack</groupId>
<artifactId>cloudstack</artifactId> <artifactId>cloudstack</artifactId>
<version>4.22.0.0</version> <version>4.23.0.0-SNAPSHOT</version>
</parent> </parent>
<dependencies> <dependencies>
<dependency> <dependency>

View File

@ -629,9 +629,6 @@ public class HAProxyConfigurator implements LoadBalancerConfigurator {
} }
} }
result.addAll(gSection); result.addAll(gSection);
// TODO decide under what circumstances these options are needed
// result.add("\tnokqueue");
// result.add("\tnopoll");
result.add(blankLine); result.add(blankLine);
final List<String> dSection = Arrays.asList(defaultsSection); final List<String> dSection = Arrays.asList(defaultsSection);

View File

@ -417,8 +417,6 @@ public class VirtualRoutingResourceTest implements VirtualRouterDeployer {
// FIXME Check the json content // FIXME Check the json content
assertEquals(VRScripts.UPDATE_CONFIG, script); assertEquals(VRScripts.UPDATE_CONFIG, script);
assertEquals(VRScripts.NETWORK_ACL_CONFIG, args); assertEquals(VRScripts.NETWORK_ACL_CONFIG, args);
// assertEquals(args, " -d eth3 -M 01:23:45:67:89:AB -i 192.168.1.1 -m 24 -a Egress:ALL:0:0:192.168.0.1/24-192.168.0.2/24:ACCEPT:," +
// "Ingress:ICMP:0:0:192.168.0.1/24-192.168.0.2/24:DROP:,Ingress:TCP:20:80:192.168.0.1/24-192.168.0.2/24:ACCEPT:,");
break; break;
case 2: case 2:
assertEquals(VRScripts.UPDATE_CONFIG, script); assertEquals(VRScripts.UPDATE_CONFIG, script);
@ -464,8 +462,6 @@ public class VirtualRoutingResourceTest implements VirtualRouterDeployer {
private void verifyArgs(final SetupGuestNetworkCommand cmd, final String script, final String args) { private void verifyArgs(final SetupGuestNetworkCommand cmd, final String script, final String args) {
// TODO Check the contents of the json file // TODO Check the contents of the json file
//assertEquals(script, VRScripts.VPC_GUEST_NETWORK);
//assertEquals(args, " -C -M 01:23:45:67:89:AB -d eth4 -i 10.1.1.2 -g 10.1.1.1 -m 24 -n 10.1.1.0 -s 8.8.8.8,8.8.4.4 -e cloud.test");
} }
@Test @Test

debian/changelog vendored
View File

@ -1,12 +1,12 @@
cloudstack (4.22.0.0) unstable; urgency=low cloudstack (4.23.0.0-SNAPSHOT) unstable; urgency=low
* Update the version to 4.22.0.0 * Update the version to 4.23.0.0-SNAPSHOT
-- the Apache CloudStack project <dev@cloudstack.apache.org> Thu, 30 Oct 2025 19:23:55 +0530 -- the Apache CloudStack project <dev@cloudstack.apache.org> Thu, 30 Oct 2025 19:23:55 +0530
cloudstack (4.22.0.0-SNAPSHOT) unstable; urgency=low cloudstack (4.23.0.0-SNAPSHOT-SNAPSHOT) unstable; urgency=low
* Update the version to 4.22.0.0-SNAPSHOT * Update the version to 4.23.0.0-SNAPSHOT-SNAPSHOT
-- the Apache CloudStack project <dev@cloudstack.apache.org> Thu, Aug 28 11:58:36 2025 +0530 -- the Apache CloudStack project <dev@cloudstack.apache.org> Thu, Aug 28 11:58:36 2025 +0530

View File

@ -16,6 +16,7 @@
# under the License. # under the License.
/etc/cloudstack/agent/agent.properties /etc/cloudstack/agent/agent.properties
/etc/cloudstack/agent/uefi.properties
/etc/cloudstack/agent/environment.properties /etc/cloudstack/agent/environment.properties
/etc/cloudstack/agent/log4j-cloud.xml /etc/cloudstack/agent/log4j-cloud.xml
/etc/default/cloudstack-agent /etc/default/cloudstack-agent

View File

@ -23,7 +23,7 @@ case "$1" in
configure) configure)
OLDCONFDIR="/etc/cloud/agent" OLDCONFDIR="/etc/cloud/agent"
NEWCONFDIR="/etc/cloudstack/agent" NEWCONFDIR="/etc/cloudstack/agent"
CONFFILES="agent.properties log4j.xml log4j-cloud.xml" CONFFILES="agent.properties uefi.properties log4j.xml log4j-cloud.xml"
mkdir -m 0755 -p /usr/share/cloudstack-agent/tmp mkdir -m 0755 -p /usr/share/cloudstack-agent/tmp

debian/control vendored
View File

@@ -24,7 +24,7 @@ Description: CloudStack server library
 Package: cloudstack-agent
 Architecture: all
-Depends: ${python:Depends}, ${python3:Depends}, openjdk-17-jre-headless | java17-runtime-headless | java17-runtime | zulu-17, cloudstack-common (= ${source:Version}), lsb-base (>= 9), openssh-client, qemu-kvm (>= 2.5) | qemu-system-x86 (>= 5.2), libvirt-bin (>= 1.3) | libvirt-daemon-system (>= 3.0), iproute2, ebtables, vlan, ipset, python3-libvirt, ethtool, iptables, cryptsetup, rng-tools, rsync, lsb-release, ufw, apparmor, cpu-checker, libvirt-daemon-driver-storage-rbd, sysstat
+Depends: ${python:Depends}, ${python3:Depends}, openjdk-17-jre-headless | java17-runtime-headless | java17-runtime | zulu-17, cloudstack-common (= ${source:Version}), lsb-base (>= 9), openssh-client, qemu-kvm (>= 2.5) | qemu-system-x86 (>= 5.2), libvirt-bin (>= 1.3) | libvirt-daemon-system (>= 3.0), iproute2, ebtables, vlan, ipset, python3-libvirt, ethtool, iptables, cryptsetup, rng-tools, rsync, ovmf, swtpm, lsb-release, ufw, apparmor, cpu-checker, libvirt-daemon-driver-storage-rbd, sysstat
 Recommends: init-system-helpers
 Conflicts: cloud-agent, cloud-agent-libs, cloud-agent-deps, cloud-agent-scripts
 Description: CloudStack agent


@@ -25,7 +25,7 @@
     <parent>
         <groupId>org.apache.cloudstack</groupId>
         <artifactId>cloudstack</artifactId>
-        <version>4.22.0.0</version>
+        <version>4.23.0.0-SNAPSHOT</version>
     </parent>
     <dependencies>
         <dependency>


@@ -24,7 +24,7 @@
     <parent>
         <groupId>org.apache.cloudstack</groupId>
         <artifactId>cloud-engine</artifactId>
-        <version>4.22.0.0</version>
+        <version>4.23.0.0-SNAPSHOT</version>
         <relativePath>../pom.xml</relativePath>
     </parent>
     <dependencies>


@@ -24,7 +24,7 @@
     <parent>
         <groupId>org.apache.cloudstack</groupId>
         <artifactId>cloud-engine</artifactId>
-        <version>4.22.0.0</version>
+        <version>4.23.0.0-SNAPSHOT</version>
         <relativePath>../pom.xml</relativePath>
     </parent>
     <dependencies>


@@ -94,6 +94,14 @@ public class UsageEventUtils {
     }

+    public static void publishUsageEvent(String usageType, long accountId, long zoneId, long resourceId, String resourceName, Long offeringId, Long templateId,
+                                         Long size, String entityType, String entityUUID, Long vmId, boolean displayResource) {
+        if (displayResource) {
+            saveUsageEvent(usageType, accountId, zoneId, resourceId, offeringId, templateId, size, vmId, resourceName);
+        }
+        publishUsageEvent(usageType, accountId, zoneId, entityType, entityUUID);
+    }
+
     public static void publishUsageEvent(String usageType, long accountId, long zoneId, long resourceId, String resourceName, Long offeringId, Long templateId,
                                          Long size, Long virtualSize, String entityType, String entityUUID, Map<String, String> details) {
         saveUsageEvent(usageType, accountId, zoneId, resourceId, resourceName, offeringId, templateId, size, virtualSize, details);
@@ -202,6 +210,10 @@
         s_usageEventDao.persist(new UsageEventVO(usageType, accountId, zoneId, vmId, securityGroupId));
     }

+    public static void saveUsageEvent(String usageType, long accountId, long zoneId, long resourceId, Long offeringId, Long templateId, Long size, Long vmId, String resourceName) {
+        s_usageEventDao.persist(new UsageEventVO(usageType, accountId, zoneId, resourceId, offeringId, templateId, size, vmId, resourceName));
+    }
+
     private static void publishUsageEvent(String usageEventType, Long accountId, Long zoneId, String resourceType, String resourceUUID) {
         String configKey = "publish.usage.events";
         String value = s_configDao.getValue(configKey);
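The change above adds a publishUsageEvent overload that persists the usage record only for display-enabled resources while always sending the event-bus notification. A minimal, dependency-free sketch of that gating, using hypothetical stand-ins (`persisted` for the usage DAO, `published` for the event bus — neither is the real CloudStack API):

```java
import java.util.ArrayList;
import java.util.List;

public class UsageEventGatingSketch {
    // Hypothetical stand-ins for s_usageEventDao and the event bus.
    static final List<String> persisted = new ArrayList<>();
    static final List<String> published = new ArrayList<>();

    static void publishUsageEvent(String usageType, long vmId, boolean displayResource) {
        if (displayResource) {
            // corresponds to saveUsageEvent(...) in the real overload
            persisted.add(usageType + ":" + vmId);
        }
        // the bus notification is sent regardless of display state
        published.add(usageType);
    }

    public static void main(String[] args) {
        publishUsageEvent("VOLUME.CREATE", 42L, true);  // persisted and published
        publishUsageEvent("VOLUME.CREATE", 43L, false); // published only
        System.out.println(persisted.size() + " persisted, " + published.size() + " published");
    }
}
```

The effect is that hidden (non-display) resources still notify listeners but generate no billable usage row.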


@@ -230,7 +230,7 @@ public interface StorageManager extends StorageService {
     /**
      * should we execute in sequence not involving any storages?
-     * @return tru if commands should execute in sequence
+     * @return true if commands should execute in sequence
      */
     static boolean shouldExecuteInSequenceOnVmware() {
         return shouldExecuteInSequenceOnVmware(null, null);


@@ -61,7 +61,6 @@ public class VmWorkSerializer {
         // use java binary serialization instead
         //
         return JobSerializerHelper.toObjectSerializedString(work);
-        // return s_gson.toJson(work);
     }

     public static <T extends VmWork> T deserialize(Class<?> clazz, String workInJsonText) {
@@ -69,6 +68,5 @@
         // use java binary serialization instead
         //
         return (T)JobSerializerHelper.fromObjectSerializedString(workInJsonText);
-        // return (T)s_gson.fromJson(workInJsonText, clazz);
     }
 }


@@ -42,7 +42,7 @@ public interface VMSnapshotManager extends VMSnapshotService, Manager {
     boolean deleteAllVMSnapshots(long id, VMSnapshot.Type type);

     /**
-     * Sync VM snapshot state when VM snapshot in reverting or snapshoting or expunging state
+     * Sync VM snapshot state when VM snapshot in reverting or snapshotting or expunging state
      * Used for fullsync after agent connects
      *
      * @param vm, the VM in question


@@ -24,7 +24,7 @@
     <parent>
         <groupId>org.apache.cloudstack</groupId>
         <artifactId>cloud-engine</artifactId>
-        <version>4.22.0.0</version>
+        <version>4.23.0.0-SNAPSHOT</version>
         <relativePath>../pom.xml</relativePath>
     </parent>
     <dependencies>


@@ -1652,7 +1652,6 @@ public class AgentManagerImpl extends ManagerBase implements AgentManager, Handl
         final String reason = shutdown.getReason();
         logger.info("Host {} has informed us that it is shutting down with reason {} and detail {}", attache, reason, shutdown.getDetail());
         if (reason.equals(ShutdownCommand.Update)) {
-            // disconnectWithoutInvestigation(attache, Event.UpdateNeeded);
             throw new CloudRuntimeException("Agent update not implemented");
         } else if (reason.equals(ShutdownCommand.Requested)) {
             disconnectWithoutInvestigation(attache, Event.ShutdownRequested);
@@ -1753,7 +1752,6 @@
             }
         } catch (final UnsupportedVersionException e) {
             logger.warn(e.getMessage());
-            // upgradeAgent(task.getLink(), data, e.getReason());
         } catch (final ClassNotFoundException e) {
             final String message = String.format("Exception occurred when executing tasks! Error '%s'", e.getMessage());
             logger.error(message);


@@ -965,7 +965,6 @@ public class ClusteredAgentManagerImpl extends AgentManagerImpl implements Clust
         synchronized (_agentToTransferIds) {
             if (!_agentToTransferIds.isEmpty()) {
                 logger.debug("Found {} agents to transfer", _agentToTransferIds.size());
-                // for (Long hostId : _agentToTransferIds) {
                 for (final Iterator<Long> iterator = _agentToTransferIds.iterator(); iterator.hasNext(); ) {
                     final Long hostId = iterator.next();
                     final AgentAttache attache = findAttache(hostId);


@@ -213,7 +213,6 @@ public class EngineHostDaoImpl extends GenericDaoBase<EngineHostVO, Long> implem
         SequenceSearch = createSearchBuilder();
         SequenceSearch.and("id", SequenceSearch.entity().getId(), SearchCriteria.Op.EQ);
-        // SequenceSearch.addRetrieve("sequence", SequenceSearch.entity().getSequence());
         SequenceSearch.done();

         DirectlyConnectedSearch = createSearchBuilder();


@@ -903,7 +903,7 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
         // Save usage event and update resource count for user vm volumes
         if (vm.getType() == VirtualMachine.Type.User) {
             UsageEventUtils.publishUsageEvent(EventTypes.EVENT_VOLUME_CREATE, vol.getAccountId(), vol.getDataCenterId(), vol.getId(), vol.getName(), offering.getId(), null, size,
-                    Volume.class.getName(), vol.getUuid(), vol.isDisplayVolume());
+                    Volume.class.getName(), vol.getUuid(), vol.getInstanceId(), vol.isDisplayVolume());
             _resourceLimitMgr.incrementVolumeResourceCount(vm.getAccountId(), vol.isDisplayVolume(), vol.getSize(), offering);
         }
         DiskProfile diskProfile = toDiskProfile(vol, offering);
@@ -981,7 +981,7 @@
         }
         UsageEventUtils.publishUsageEvent(EventTypes.EVENT_VOLUME_CREATE, vol.getAccountId(), vol.getDataCenterId(), vol.getId(), vol.getName(), offeringId, vol.getTemplateId(), size,
-                Volume.class.getName(), vol.getUuid(), vol.isDisplayVolume());
+                Volume.class.getName(), vol.getUuid(), vol.getInstanceId(), vol.isDisplayVolume());
         _resourceLimitMgr.incrementVolumeResourceCount(vm.getAccountId(), vol.isDisplayVolume(), vol.getSize(), offering);
     }
@@ -1583,12 +1583,8 @@
             vm.addDisk(disk);
         }

-        //if (vm.getType() == VirtualMachine.Type.User && vm.getTemplate().getFormat() == ImageFormat.ISO) {
         if (vm.getType() == VirtualMachine.Type.User) {
             _tmpltMgr.prepareIsoForVmProfile(vm, dest);
-            //DataTO dataTO = tmplFactory.getTemplate(vm.getTemplate().getId(), DataStoreRole.Image, vm.getVirtualMachine().getDataCenterId()).getTO();
-            //DiskTO iso = new DiskTO(dataTO, 3L, null, Volume.Type.ISO);
-            //vm.addDisk(iso);
         }
     }


@@ -140,20 +140,12 @@ public class ProvisioningServiceImpl implements ProvisioningService {
     @Override
     public List<PodEntity> listPods() {
-        /*
-         * Not in use now, just commented out.
-         */
-        //List<PodEntity> pods = new ArrayList<PodEntity>();
-        //pods.add(new PodEntityImpl("pod-uuid-1", "pod1"));
-        //pods.add(new PodEntityImpl("pod-uuid-2", "pod2"));
         return null;
     }

     @Override
     public List<ZoneEntity> listZones() {
         List<ZoneEntity> zones = new ArrayList<ZoneEntity>();
-        //zones.add(new ZoneEntityImpl("zone-uuid-1"));
-        //zones.add(new ZoneEntityImpl("zone-uuid-2"));
         return zones;
     }


@@ -25,7 +25,7 @@
     <parent>
         <groupId>org.apache.cloudstack</groupId>
         <artifactId>cloudstack</artifactId>
-        <version>4.22.0.0</version>
+        <version>4.23.0.0-SNAPSHOT</version>
         <relativePath>../pom.xml</relativePath>
     </parent>
     <build>


@@ -24,7 +24,7 @@
     <parent>
         <groupId>org.apache.cloudstack</groupId>
         <artifactId>cloud-engine</artifactId>
-        <version>4.22.0.0</version>
+        <version>4.23.0.0-SNAPSHOT</version>
         <relativePath>../pom.xml</relativePath>
     </parent>
     <dependencies>
<dependencies> <dependencies>


@@ -36,7 +36,6 @@ public class ClusterVSMMapDaoImpl extends GenericDaoBase<ClusterVSMMapVO, Long>
     final SearchBuilder<ClusterVSMMapVO> VsmSearch;

     public ClusterVSMMapDaoImpl() {
-        //super();
         ClusterSearch = createSearchBuilder();
         ClusterSearch.and("clusterId", ClusterSearch.entity().getClusterId(), SearchCriteria.Op.EQ);
@@ -82,8 +81,6 @@
         TransactionLegacy txn = TransactionLegacy.currentTxn();
         txn.start();
         ClusterVSMMapVO cluster = createForUpdate();
-        //cluster.setClusterId(null);
-        //cluster.setVsmId(null);
         update(id, cluster);


@@ -75,6 +75,9 @@ public class UsageEventVO implements UsageEvent {
     @Column(name = "virtual_size")
     private Long virtualSize;

+    @Column(name = "vm_id")
+    private Long vmId;
+
     public UsageEventVO() {
     }
@@ -143,6 +146,18 @@
         this.offeringId = securityGroupId;
     }

+    public UsageEventVO(String usageType, long accountId, long zoneId, long resourceId, Long offeringId, Long templateId, Long size, Long vmId, String resourceName) {
+        this.type = usageType;
+        this.accountId = accountId;
+        this.zoneId = zoneId;
+        this.resourceId = resourceId;
+        this.offeringId = offeringId;
+        this.templateId = templateId;
+        this.size = size;
+        this.vmId = vmId;
+        this.resourceName = resourceName;
+    }
+
     @Override
     public long getId() {
         return id;
@@ -248,4 +263,11 @@
         this.virtualSize = virtualSize;
     }

+    public Long getVmId() {
+        return vmId;
+    }
+
+    public void setVmId(Long vmId) {
+        this.vmId = vmId;
+    }
 }


@@ -45,11 +45,11 @@ public class UsageEventDaoImpl extends GenericDaoBase<UsageEventVO, Long> implem
     private final SearchBuilder<UsageEventVO> latestEventsSearch;
     private final SearchBuilder<UsageEventVO> IpeventsSearch;
     private static final String COPY_EVENTS =
-        "INSERT INTO cloud_usage.usage_event (id, type, account_id, created, zone_id, resource_id, resource_name, offering_id, template_id, size, resource_type, virtual_size) "
-            + "SELECT id, type, account_id, created, zone_id, resource_id, resource_name, offering_id, template_id, size, resource_type, virtual_size FROM cloud.usage_event vmevt WHERE vmevt.id > ? and vmevt.id <= ? ";
+        "INSERT INTO cloud_usage.usage_event (id, type, account_id, created, zone_id, resource_id, resource_name, offering_id, template_id, size, resource_type, virtual_size, vm_id) "
+            + "SELECT id, type, account_id, created, zone_id, resource_id, resource_name, offering_id, template_id, size, resource_type, virtual_size, vm_id FROM cloud.usage_event vmevt WHERE vmevt.id > ? and vmevt.id <= ? ";
     private static final String COPY_ALL_EVENTS =
-        "INSERT INTO cloud_usage.usage_event (id, type, account_id, created, zone_id, resource_id, resource_name, offering_id, template_id, size, resource_type, virtual_size) "
-            + "SELECT id, type, account_id, created, zone_id, resource_id, resource_name, offering_id, template_id, size, resource_type, virtual_size FROM cloud.usage_event vmevt WHERE vmevt.id <= ?";
+        "INSERT INTO cloud_usage.usage_event (id, type, account_id, created, zone_id, resource_id, resource_name, offering_id, template_id, size, resource_type, virtual_size, vm_id) "
+            + "SELECT id, type, account_id, created, zone_id, resource_id, resource_name, offering_id, template_id, size, resource_type, virtual_size, vm_id FROM cloud.usage_event vmevt WHERE vmevt.id <= ?";
     private static final String COPY_EVENT_DETAILS = "INSERT INTO cloud_usage.usage_event_details (id, usage_event_id, name, value) "
         + "SELECT id, usage_event_id, name, value FROM cloud.usage_event_details vmevtDetails WHERE vmevtDetails.usage_event_id > ? and vmevtDetails.usage_event_id <= ? ";
     private static final String COPY_ALL_EVENT_DETAILS = "INSERT INTO cloud_usage.usage_event_details (id, usage_event_id, name, value) "


@@ -76,7 +76,6 @@ public class VmRulesetLogDaoImpl extends GenericDaoBase<VmRulesetLogVO, Long> im

     @Override
     public int createOrUpdate(Set<Long> workItems) {
-        //return createOrUpdateUsingBatch(workItems);
         return createOrUpdateUsingMultiInsert(workItems);
     }


@@ -100,7 +100,6 @@ public class VMTemplateDaoImpl extends GenericDaoBase<VMTemplateVO, Long> implem
     private SearchBuilder<VMTemplateVO> PublicIsoSearch;
     private SearchBuilder<VMTemplateVO> UserIsoSearch;
     private GenericSearchBuilder<VMTemplateVO, Long> CountTemplatesByAccount;
-    // private SearchBuilder<VMTemplateVO> updateStateSearch;
     private SearchBuilder<VMTemplateVO> AllFieldsSearch;
     protected SearchBuilder<VMTemplateVO> ParentTemplateIdSearch;
     private SearchBuilder<VMTemplateVO> InactiveUnremovedTmpltSearch;
@@ -404,12 +403,6 @@
         CountTemplatesByAccount.and("state", CountTemplatesByAccount.entity().getState(), SearchCriteria.Op.EQ);
         CountTemplatesByAccount.done();

-        // updateStateSearch = this.createSearchBuilder();
-        // updateStateSearch.and("id", updateStateSearch.entity().getId(), Op.EQ);
-        // updateStateSearch.and("state", updateStateSearch.entity().getState(), Op.EQ);
-        // updateStateSearch.and("updatedCount", updateStateSearch.entity().getUpdatedCount(), Op.EQ);
-        // updateStateSearch.done();
-
         AllFieldsSearch = createSearchBuilder();
         AllFieldsSearch.and("state", AllFieldsSearch.entity().getState(), SearchCriteria.Op.EQ);
         AllFieldsSearch.and("accountId", AllFieldsSearch.entity().getAccountId(), SearchCriteria.Op.EQ);


@@ -33,11 +33,10 @@ import java.util.List;
 import javax.inject.Inject;

-import com.cloud.utils.FileUtil;
 import org.apache.cloudstack.utils.CloudStackVersion;
 import org.apache.commons.lang3.StringUtils;
-import org.apache.logging.log4j.Logger;
 import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;

 import com.cloud.upgrade.dao.DbUpgrade;
 import com.cloud.upgrade.dao.DbUpgradeSystemVmTemplate;
@@ -91,8 +90,10 @@ import com.cloud.upgrade.dao.Upgrade41910to42000;
 import com.cloud.upgrade.dao.Upgrade42000to42010;
 import com.cloud.upgrade.dao.Upgrade42010to42100;
 import com.cloud.upgrade.dao.Upgrade42100to42200;
+import com.cloud.upgrade.dao.Upgrade42200to42210;
 import com.cloud.upgrade.dao.Upgrade420to421;
 import com.cloud.upgrade.dao.Upgrade421to430;
+import com.cloud.upgrade.dao.Upgrade42210to42300;
 import com.cloud.upgrade.dao.Upgrade430to440;
 import com.cloud.upgrade.dao.Upgrade431to440;
 import com.cloud.upgrade.dao.Upgrade432to440;
@@ -121,6 +122,7 @@ import com.cloud.upgrade.dao.VersionDao;
 import com.cloud.upgrade.dao.VersionDaoImpl;
 import com.cloud.upgrade.dao.VersionVO;
 import com.cloud.upgrade.dao.VersionVO.Step;
+import com.cloud.utils.FileUtil;
 import com.cloud.utils.component.SystemIntegrityChecker;
 import com.cloud.utils.crypt.DBEncryptionUtil;
 import com.cloud.utils.db.GlobalLock;
@@ -236,6 +238,8 @@ public class DatabaseUpgradeChecker implements SystemIntegrityChecker {
                 .next("4.20.0.0", new Upgrade42000to42010())
                 .next("4.20.1.0", new Upgrade42010to42100())
                 .next("4.21.0.0", new Upgrade42100to42200())
+                .next("4.22.0.0", new Upgrade42200to42210())
+                .next("4.22.1.0", new Upgrade42210to42300())
                 .build();
     }
@@ -313,20 +317,20 @@
     }

     protected void executeProcedureScripts() {
-        LOGGER.info(String.format("Executing Stored Procedure scripts that are under resource directory [%s].", PROCEDURES_DIRECTORY));
+        LOGGER.info("Executing Stored Procedure scripts that are under resource directory [{}].", PROCEDURES_DIRECTORY);
         List<String> filesPathUnderViewsDirectory = FileUtil.getFilesPathsUnderResourceDirectory(PROCEDURES_DIRECTORY);

         try (TransactionLegacy txn = TransactionLegacy.open("execute-procedure-scripts")) {
             Connection conn = txn.getConnection();
             for (String filePath : filesPathUnderViewsDirectory) {
-                LOGGER.debug(String.format("Executing PROCEDURE script [%s].", filePath));
+                LOGGER.debug("Executing PROCEDURE script [{}].", filePath);
                 InputStream viewScript = Thread.currentThread().getContextClassLoader().getResourceAsStream(filePath);
                 runScript(conn, viewScript);
             }
-            LOGGER.info(String.format("Finished execution of PROCEDURE scripts that are under resource directory [%s].", PROCEDURES_DIRECTORY));
+            LOGGER.info("Finished execution of PROCEDURE scripts that are under resource directory [{}].", PROCEDURES_DIRECTORY);
         } catch (SQLException e) {
             String message = String.format("Unable to execute PROCEDURE scripts due to [%s].", e.getMessage());
             LOGGER.error(message, e);
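Several hunks in this file swap String.format and string concatenation for Log4j2-style "{}" placeholders. The benefit of the parameterized form is that argument formatting can be deferred until the logger confirms the level is enabled; a dependency-free sketch with a hypothetical mini-logger (not the real Log4j2 API) makes the difference visible:

```java
public class LazyLogSketch {
    static boolean debugEnabled = false; // DEBUG is off
    static int formatCalls = 0;

    // Pretend this is an expensive toString()/format of the argument.
    static String slowFormat(Object arg) {
        formatCalls++;
        return String.valueOf(arg);
    }

    // Eager style: the caller formats the message before the level check.
    static void debugEager(String message) {
        if (debugEnabled) {
            System.out.println(message);
        }
    }

    // Parameterized style: formatting happens only inside the enabled branch.
    static void debugLazy(String template, Object arg) {
        if (debugEnabled) {
            System.out.println(template.replace("{}", slowFormat(arg)));
        }
    }

    public static void main(String[] args) {
        debugEager("DB version = " + slowFormat("4.23.0.0")); // formats even though debug is off
        debugLazy("DB version = {}", "4.23.0.0");             // formatting skipped entirely
        System.out.println("formatCalls=" + formatCalls);
    }
}
```

With DEBUG disabled, only the eager call pays the formatting cost, which is why disabled log statements become nearly free after this change.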
@@ -335,7 +339,7 @@
     }

     private DbUpgrade[] executeUpgrades(CloudStackVersion dbVersion, CloudStackVersion currentVersion) {
-        LOGGER.info("Database upgrade must be performed from " + dbVersion + " to " + currentVersion);
+        LOGGER.info("Database upgrade must be performed from {} to {}", dbVersion, currentVersion);

         final DbUpgrade[] upgrades = calculateUpgradePath(dbVersion, currentVersion);
@@ -348,8 +352,8 @@
     private VersionVO executeUpgrade(DbUpgrade upgrade) {
         VersionVO version;
-        LOGGER.debug("Running upgrade " + upgrade.getClass().getSimpleName() + " to upgrade from " + upgrade.getUpgradableVersionRange()[0] + "-" + upgrade
-                .getUpgradableVersionRange()[1] + " to " + upgrade.getUpgradedVersion());
+        LOGGER.debug("Running upgrade {} to upgrade from {}-{} to {}", upgrade.getClass().getSimpleName(), upgrade.getUpgradableVersionRange()[0], upgrade
+                .getUpgradableVersionRange()[1], upgrade.getUpgradedVersion());
         TransactionLegacy txn = TransactionLegacy.open("Upgrade");
         txn.start();
         try {
@@ -392,8 +396,8 @@
         // Run the corresponding '-cleanup.sql' script
         txn = TransactionLegacy.open("Cleanup");
         try {
-            LOGGER.info("Cleanup upgrade " + upgrade.getClass().getSimpleName() + " to upgrade from " + upgrade.getUpgradableVersionRange()[0] + "-" + upgrade
-                    .getUpgradableVersionRange()[1] + " to " + upgrade.getUpgradedVersion());
+            LOGGER.info("Cleanup upgrade {} to upgrade from {}-{} to {}", upgrade.getClass().getSimpleName(), upgrade.getUpgradableVersionRange()[0], upgrade
+                    .getUpgradableVersionRange()[1], upgrade.getUpgradedVersion());
             txn.start();
             Connection conn;
@@ -408,7 +412,7 @@
             if (scripts != null) {
                 for (InputStream script : scripts) {
                     runScript(conn, script);
-                    LOGGER.debug("Cleanup script " + upgrade.getClass().getSimpleName() + " is executed successfully");
+                    LOGGER.debug("Cleanup script {} is executed successfully", upgrade.getClass().getSimpleName());
                 }
             }
             txn.commit();
@@ -418,27 +422,27 @@
             version.setUpdated(new Date());
             _dao.update(version.getId(), version);
             txn.commit();
-            LOGGER.debug("Upgrade completed for version " + version.getVersion());
+            LOGGER.debug("Upgrade completed for version {}", version.getVersion());
         } finally {
             txn.close();
         }
     }

     protected void executeViewScripts() {
-        LOGGER.info(String.format("Executing VIEW scripts that are under resource directory [%s].", VIEWS_DIRECTORY));
+        LOGGER.info("Executing VIEW scripts that are under resource directory [{}].", VIEWS_DIRECTORY);
         List<String> filesPathUnderViewsDirectory = FileUtil.getFilesPathsUnderResourceDirectory(VIEWS_DIRECTORY);

         try (TransactionLegacy txn = TransactionLegacy.open("execute-view-scripts")) {
             Connection conn = txn.getConnection();
             for (String filePath : filesPathUnderViewsDirectory) {
-                LOGGER.debug(String.format("Executing VIEW script [%s].", filePath));
+                LOGGER.debug("Executing VIEW script [{}].", filePath);
                 InputStream viewScript = Thread.currentThread().getContextClassLoader().getResourceAsStream(filePath);
                 runScript(conn, viewScript);
             }
-            LOGGER.info(String.format("Finished execution of VIEW scripts that are under resource directory [%s].", VIEWS_DIRECTORY));
+            LOGGER.info("Finished execution of VIEW scripts that are under resource directory [{}].", VIEWS_DIRECTORY);
         } catch (SQLException e) {
             String message = String.format("Unable to execute VIEW scripts due to [%s].", e.getMessage());
             LOGGER.error(message, e);
@@ -468,10 +472,10 @@
         String csVersion = SystemVmTemplateRegistration.parseMetadataFile();
         final CloudStackVersion sysVmVersion = CloudStackVersion.parse(csVersion);
         final CloudStackVersion currentVersion = CloudStackVersion.parse(currentVersionValue);
-        SystemVmTemplateRegistration.CS_MAJOR_VERSION = String.valueOf(sysVmVersion.getMajorRelease()) + "." + String.valueOf(sysVmVersion.getMinorRelease());
+        SystemVmTemplateRegistration.CS_MAJOR_VERSION = sysVmVersion.getMajorRelease() + "." + sysVmVersion.getMinorRelease();
         SystemVmTemplateRegistration.CS_TINY_VERSION = String.valueOf(sysVmVersion.getPatchRelease());

-        LOGGER.info("DB version = " + dbVersion + " Code Version = " + currentVersion);
+        LOGGER.info("DB version = {} Code Version = {}", dbVersion, currentVersion);

         if (dbVersion.compareTo(currentVersion) > 0) {
             throw new CloudRuntimeException("Database version " + dbVersion + " is higher than management software version " + currentVersionValue);
@@ -520,7 +524,7 @@
                 ResultSet result = pstmt.executeQuery()) {
             if (result.next()) {
                 String init = result.getString(1);
-                LOGGER.info("init = " + DBEncryptionUtil.decrypt(init));
+                LOGGER.info("init = {}", DBEncryptionUtil.decrypt(init));
             }
         }
     }
@@ -551,21 +555,11 @@
             return upgradedVersion;
         }

-        @Override
-        public boolean supportsRollingUpgrade() {
-            return false;
-        }
-
         @Override
         public InputStream[] getPrepareScripts() {
             return new InputStream[0];
         }

-        @Override
-        public void performDataMigration(Connection conn) {
-        }
-
         @Override
         public InputStream[] getCleanupScripts() {
             return new InputStream[0];

View File

@@ -77,8 +77,6 @@ public class Upgrade2214to30 extends Upgrade30xBase {
     encryptData(conn);
     // drop keys
     dropKeysIfExist(conn);
-    //update template ID for system Vms
-    //updateSystemVms(conn); This is not required as system template update is handled during 4.2 upgrade
     // update domain network ref
     updateDomainNetworkRef(conn);
     // update networks that use redundant routers to the new network offering

@@ -62,7 +62,6 @@ public class Upgrade302to40 extends Upgrade30xBase {
     @Override
     public void performDataMigration(Connection conn) {
-        //updateVmWareSystemVms(conn); This is not required as system template update is handled during 4.2 upgrade
         correctVRProviders(conn);
         correctMultiplePhysicaNetworkSetups(conn);
         addHostDetailsUniqueKey(conn);

@@ -65,7 +65,6 @@ public class Upgrade304to305 extends Upgrade30xBase {
     addVpcProvider(conn);
     updateRouterNetworkRef(conn);
     fixZoneUsingExternalDevices(conn);
-    // updateSystemVms(conn);
     fixForeignKeys(conn);
     encryptClusterDetails(conn);
 }
@@ -81,54 +80,6 @@ public class Upgrade304to305 extends Upgrade30xBase {
     return new InputStream[] {script};
 }
-    private void updateSystemVms(Connection conn) {
-        PreparedStatement pstmt = null;
-        ResultSet rs = null;
-        boolean VMware = false;
-        try {
-            pstmt = conn.prepareStatement("select distinct(hypervisor_type) from `cloud`.`cluster` where removed is null");
-            rs = pstmt.executeQuery();
-            while (rs.next()) {
-                if ("VMware".equals(rs.getString(1))) {
-                    VMware = true;
-                }
-            }
-        } catch (SQLException e) {
-            throw new CloudRuntimeException("Error while iterating through list of hypervisors in use", e);
-        }
-        // Just update the VMware system template. Other hypervisor templates are unchanged from previous 3.0.x versions.
-        logger.debug("Updating VMware System Vms");
-        try {
-            //Get 3.0.5 VMware system Vm template Id
-            pstmt = conn.prepareStatement("select id from `cloud`.`vm_template` where name = 'systemvm-vmware-3.0.5' and removed is null");
-            rs = pstmt.executeQuery();
-            if (rs.next()) {
-                long templateId = rs.getLong(1);
-                rs.close();
-                pstmt.close();
-                // change template type to SYSTEM
-                pstmt = conn.prepareStatement("update `cloud`.`vm_template` set type='SYSTEM' where id = ?");
-                pstmt.setLong(1, templateId);
-                pstmt.executeUpdate();
-                pstmt.close();
-                // update template ID of system Vms
-                pstmt = conn.prepareStatement("update `cloud`.`vm_instance` set vm_template_id = ? where type <> 'User' and hypervisor_type = 'VMware'");
-                pstmt.setLong(1, templateId);
-                pstmt.executeUpdate();
-                pstmt.close();
-            } else {
-                if (VMware) {
-                    throw new CloudRuntimeException("3.0.5 VMware SystemVm template not found. Cannot upgrade system Vms");
-                } else {
-                    logger.warn("3.0.5 VMware SystemVm template not found. VMware hypervisor is not used, so not failing upgrade");
-                }
-            }
-        } catch (SQLException e) {
-            throw new CloudRuntimeException("Error while updating VMware systemVm template", e);
-        }
-        logger.debug("Updating System Vm Template IDs Complete");
-    }
     private void addVpcProvider(Connection conn) {
         //Encrypt config params and change category to Hidden
         logger.debug("Adding vpc provider to all physical networks in the system");

@@ -159,7 +159,7 @@ public class Upgrade41810to41900 extends DbUpgradeAbstractImpl implements DbUpgr
     try (PreparedStatement pstmt = conn.prepareStatement(createNewColumn)) {
         pstmt.execute();
     } catch (SQLException e) {
-        String message = String.format("Unable to crate new backups' column date due to [%s].", e.getMessage());
+        String message = String.format("Unable to create new backups' column date due to [%s].", e.getMessage());
         logger.error(message, e);
         throw new CloudRuntimeException(message, e);
     }

@@ -0,0 +1,30 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+package com.cloud.upgrade.dao;
+public class Upgrade42200to42210 extends DbUpgradeAbstractImpl implements DbUpgrade, DbUpgradeSystemVmTemplate {
+    @Override
+    public String[] getUpgradableVersionRange() {
+        return new String[] {"4.22.0.0", "4.22.1.0"};
+    }
+    @Override
+    public String getUpgradedVersion() {
+        return "4.22.1.0";
+    }
+}
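The two new `DbUpgrade` classes in this diff only declare a source/target version range and the resulting version; the upgrade checker chains such steps until the database schema reaches the code version. A minimal, self-contained sketch of that chaining idea (class and method names here are illustrative, not CloudStack's actual checker):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class UpgradePathSketch {
    // from-version -> to-version, mirroring getUpgradableVersionRange() / getUpgradedVersion()
    static final Map<String, String> STEPS = new LinkedHashMap<>();
    static {
        STEPS.put("4.22.0.0", "4.22.1.0"); // Upgrade42200to42210
        STEPS.put("4.22.1.0", "4.23.0.0"); // Upgrade42210to42300
    }

    // Walk the chain from the DB version to the code version, recording each hop.
    static List<String> path(String dbVersion, String codeVersion) {
        List<String> applied = new ArrayList<>();
        String current = dbVersion;
        while (!current.equals(codeVersion)) {
            String next = STEPS.get(current);
            if (next == null) {
                throw new IllegalStateException("No upgrade path from " + current);
            }
            applied.add(current + " -> " + next);
            current = next;
        }
        return applied;
    }

    public static void main(String[] args) {
        // prints [4.22.0.0 -> 4.22.1.0, 4.22.1.0 -> 4.23.0.0]
        System.out.println(path("4.22.0.0", "4.23.0.0"));
    }
}
```

This is why a release bump needs both a step ending at the new version (4.22.0.0 to 4.22.1.0) and a step from it to the next development version (4.22.1.0 to 4.23.0.0), as the two added classes provide.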

@@ -0,0 +1,30 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+package com.cloud.upgrade.dao;
+public class Upgrade42210to42300 extends DbUpgradeAbstractImpl implements DbUpgrade, DbUpgradeSystemVmTemplate {
+    @Override
+    public String[] getUpgradableVersionRange() {
+        return new String[]{"4.22.1.0", "4.23.0.0"};
+    }
+    @Override
+    public String getUpgradedVersion() {
+        return "4.23.0.0";
+    }
+}

@@ -59,6 +59,9 @@ public class UsageVolumeVO implements InternalIdentity {
     @Column(name = "size")
     private long size;
+    @Column(name = "vm_id")
+    private Long vmId;
     @Column(name = "created")
     @Temporal(value = TemporalType.TIMESTAMP)
     private Date created = null;
@@ -70,13 +73,14 @@ public class UsageVolumeVO implements InternalIdentity {
     protected UsageVolumeVO() {
     }
-    public UsageVolumeVO(long id, long zoneId, long accountId, long domainId, Long diskOfferingId, Long templateId, long size, Date created, Date deleted) {
+    public UsageVolumeVO(long id, long zoneId, long accountId, long domainId, Long diskOfferingId, Long templateId, Long vmId, long size, Date created, Date deleted) {
         this.volumeId = id;
         this.zoneId = zoneId;
         this.accountId = accountId;
         this.domainId = domainId;
         this.diskOfferingId = diskOfferingId;
         this.templateId = templateId;
+        this.vmId = vmId;
         this.size = size;
         this.created = created;
         this.deleted = deleted;
@@ -126,4 +130,12 @@ public class UsageVolumeVO implements InternalIdentity {
     public long getVolumeId() {
         return volumeId;
     }
+    public Long getVmId() {
+        return vmId;
+    }
+    public void setVmId(Long vmId) {
+        this.vmId = vmId;
+    }
 }

@@ -57,6 +57,7 @@ public class UsageStorageDaoImpl extends GenericDaoBase<UsageStorageVO, Long> im
     IdSearch.and("accountId", IdSearch.entity().getAccountId(), SearchCriteria.Op.EQ);
     IdSearch.and("id", IdSearch.entity().getEntityId(), SearchCriteria.Op.EQ);
     IdSearch.and("type", IdSearch.entity().getStorageType(), SearchCriteria.Op.EQ);
+    IdSearch.and("deleted", IdSearch.entity().getDeleted(), SearchCriteria.Op.NULL);
     IdSearch.done();
     IdZoneSearch = createSearchBuilder();
@@ -74,6 +75,7 @@ public class UsageStorageDaoImpl extends GenericDaoBase<UsageStorageVO, Long> im
     sc.setParameters("accountId", accountId);
     sc.setParameters("id", id);
     sc.setParameters("type", type);
+    sc.setParameters("deleted", null);
     return listBy(sc, null);
 }

@@ -23,9 +23,7 @@ import com.cloud.usage.UsageVolumeVO;
 import com.cloud.utils.db.GenericDao;
 public interface UsageVolumeDao extends GenericDao<UsageVolumeVO, Long> {
-    public void removeBy(long userId, long id);
-    public void update(UsageVolumeVO usage);
     public List<UsageVolumeVO> getUsageRecords(Long accountId, Long domainId, Date startDate, Date endDate, boolean limit, int page);
+    List<UsageVolumeVO> listByVolumeId(long volumeId, long accountId);
 }

@@ -18,81 +18,46 @@ package com.cloud.usage.dao;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
-import java.sql.SQLException;
 import java.util.ArrayList;
 import java.util.Date;
 import java.util.List;
 import java.util.TimeZone;
-import com.cloud.exception.CloudException;
+import javax.annotation.PostConstruct;
 import org.springframework.stereotype.Component;
 import com.cloud.usage.UsageVolumeVO;
 import com.cloud.utils.DateUtil;
 import com.cloud.utils.db.GenericDaoBase;
+import com.cloud.utils.db.SearchBuilder;
+import com.cloud.utils.db.SearchCriteria;
 import com.cloud.utils.db.TransactionLegacy;
 @Component
 public class UsageVolumeDaoImpl extends GenericDaoBase<UsageVolumeVO, Long> implements UsageVolumeDao {
-    protected static final String REMOVE_BY_USERID_VOLID = "DELETE FROM usage_volume WHERE account_id = ? AND volume_id = ?";
-    protected static final String UPDATE_DELETED = "UPDATE usage_volume SET deleted = ? WHERE account_id = ? AND volume_id = ? and deleted IS NULL";
-    protected static final String GET_USAGE_RECORDS_BY_ACCOUNT = "SELECT volume_id, zone_id, account_id, domain_id, disk_offering_id, template_id, size, created, deleted "
+    protected static final String GET_USAGE_RECORDS_BY_ACCOUNT = "SELECT volume_id, zone_id, account_id, domain_id, disk_offering_id, template_id, vm_id, size, created, deleted "
         + "FROM usage_volume " + "WHERE account_id = ? AND ((deleted IS NULL) OR (created BETWEEN ? AND ?) OR "
         + " (deleted BETWEEN ? AND ?) OR ((created <= ?) AND (deleted >= ?)))";
-    protected static final String GET_USAGE_RECORDS_BY_DOMAIN = "SELECT volume_id, zone_id, account_id, domain_id, disk_offering_id, template_id, size, created, deleted "
+    protected static final String GET_USAGE_RECORDS_BY_DOMAIN = "SELECT volume_id, zone_id, account_id, domain_id, disk_offering_id, template_id, vm_id, size, created, deleted "
         + "FROM usage_volume " + "WHERE domain_id = ? AND ((deleted IS NULL) OR (created BETWEEN ? AND ?) OR "
         + " (deleted BETWEEN ? AND ?) OR ((created <= ?) AND (deleted >= ?)))";
-    protected static final String GET_ALL_USAGE_RECORDS = "SELECT volume_id, zone_id, account_id, domain_id, disk_offering_id, template_id, size, created, deleted "
+    protected static final String GET_ALL_USAGE_RECORDS = "SELECT volume_id, zone_id, account_id, domain_id, disk_offering_id, template_id, vm_id, size, created, deleted "
         + "FROM usage_volume " + "WHERE (deleted IS NULL) OR (created BETWEEN ? AND ?) OR " + " (deleted BETWEEN ? AND ?) OR ((created <= ?) AND (deleted >= ?))";
+    private SearchBuilder<UsageVolumeVO> volumeSearch;
     public UsageVolumeDaoImpl() {
     }
-    @Override
-    public void removeBy(long accountId, long volId) {
-        TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.USAGE_DB);
-        try {
-            txn.start();
-            try(PreparedStatement pstmt = txn.prepareStatement(REMOVE_BY_USERID_VOLID);) {
-                if (pstmt != null) {
-                    pstmt.setLong(1, accountId);
-                    pstmt.setLong(2, volId);
-                    pstmt.executeUpdate();
-                }
-            }catch (SQLException e) {
-                throw new CloudException("Error removing usageVolumeVO:"+e.getMessage(), e);
-            }
-            txn.commit();
-        } catch (Exception e) {
-            txn.rollback();
-            logger.warn("Error removing usageVolumeVO:"+e.getMessage(), e);
-        } finally {
-            txn.close();
-        }
-    }
-    @Override
-    public void update(UsageVolumeVO usage) {
-        TransactionLegacy txn = TransactionLegacy.open(TransactionLegacy.USAGE_DB);
-        PreparedStatement pstmt = null;
-        try {
-            txn.start();
-            if (usage.getDeleted() != null) {
-                pstmt = txn.prepareAutoCloseStatement(UPDATE_DELETED);
-                pstmt.setString(1, DateUtil.getDateDisplayString(TimeZone.getTimeZone("GMT"), usage.getDeleted()));
-                pstmt.setLong(2, usage.getAccountId());
-                pstmt.setLong(3, usage.getVolumeId());
-                pstmt.executeUpdate();
-            }
-            txn.commit();
-        } catch (Exception e) {
-            txn.rollback();
-            logger.warn("Error updating UsageVolumeVO", e);
-        } finally {
-            txn.close();
-        }
-    }
+    @PostConstruct
+    protected void init() {
+        volumeSearch = createSearchBuilder();
+        volumeSearch.and("accountId", volumeSearch.entity().getAccountId(), SearchCriteria.Op.EQ);
+        volumeSearch.and("volumeId", volumeSearch.entity().getVolumeId(), SearchCriteria.Op.EQ);
+        volumeSearch.and("deleted", volumeSearch.entity().getDeleted(), SearchCriteria.Op.NULL);
+        volumeSearch.done();
+    }
     @Override
@@ -150,11 +115,15 @@ public class UsageVolumeDaoImpl extends GenericDaoBase<UsageVolumeVO, Long> impl
     if (tId == 0) {
         tId = null;
     }
-    long size = Long.valueOf(rs.getLong(7));
+    Long vmId = Long.valueOf(rs.getLong(7));
+    if (vmId == 0) {
+        vmId = null;
+    }
+    long size = Long.valueOf(rs.getLong(8));
     Date createdDate = null;
     Date deletedDate = null;
-    String createdTS = rs.getString(8);
-    String deletedTS = rs.getString(9);
+    String createdTS = rs.getString(9);
+    String deletedTS = rs.getString(10);
     if (createdTS != null) {
         createdDate = DateUtil.parseDateString(s_gmtTimeZone, createdTS);
@@ -163,7 +132,7 @@ public class UsageVolumeDaoImpl extends GenericDaoBase<UsageVolumeVO, Long> impl
         deletedDate = DateUtil.parseDateString(s_gmtTimeZone, deletedTS);
     }
-    usageRecords.add(new UsageVolumeVO(vId, zoneId, acctId, dId, doId, tId, size, createdDate, deletedDate));
+    usageRecords.add(new UsageVolumeVO(vId, zoneId, acctId, dId, doId, tId, vmId, size, createdDate, deletedDate));
     }
 } catch (Exception e) {
     txn.rollback();
@@ -174,4 +143,13 @@ public class UsageVolumeDaoImpl extends GenericDaoBase<UsageVolumeVO, Long> impl
     return usageRecords;
 }
+    @Override
+    public List<UsageVolumeVO> listByVolumeId(long volumeId, long accountId) {
+        SearchCriteria<UsageVolumeVO> sc = volumeSearch.create();
+        sc.setParameters("accountId", accountId);
+        sc.setParameters("volumeId", volumeId);
+        sc.setParameters("deleted", null);
+        return listBy(sc);
+    }
 }
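The DAO change above swaps hand-written `DELETE`/`UPDATE` SQL for a declarative search with an `Op.NULL` condition on `deleted`, so only live rows are matched. The filtering semantics can be sketched in plain Java (the record type and list-based "store" are stand-ins for illustration, not the real `GenericDaoBase` API):

```java
import java.util.Date;
import java.util.List;
import java.util.stream.Collectors;

public class VolumeSearchSketch {
    // Stand-in for UsageVolumeVO: just the three fields the search conditions use.
    record UsageVolume(long volumeId, long accountId, Date deleted) {}

    // Mirrors listByVolumeId(): accountId EQ, volumeId EQ, deleted IS NULL.
    static List<UsageVolume> listByVolumeId(List<UsageVolume> all, long volumeId, long accountId) {
        return all.stream()
                .filter(v -> v.accountId() == accountId)   // "accountId" Op.EQ
                .filter(v -> v.volumeId() == volumeId)     // "volumeId" Op.EQ
                .filter(v -> v.deleted() == null)          // "deleted" Op.NULL
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<UsageVolume> rows = List.of(
                new UsageVolume(1, 10, null),
                new UsageVolume(1, 10, new Date()),  // already deleted: excluded
                new UsageVolume(2, 10, null));       // different volume: excluded
        System.out.println(listByVolumeId(rows, 1, 10).size()); // prints 1
    }
}
```

The design gain is the same as in the real diff: the condition set is declared once, and callers can no longer forget the `deleted IS NULL` guard that the old raw SQL spread across several statements.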

@@ -226,10 +226,6 @@ public class UserAccountVO implements UserAccount, InternalIdentity {
     return created;
 }
-    // public void setCreated(Date created) {
-    //     this.created = created;
-    // }
     @Override
     public Date getRemoved() {
         return removed;

@@ -101,7 +101,7 @@ public class UserVmDaoImpl extends GenericDaoBase<UserVmVO, Long> implements Use
     ReservationDao reservationDao;
     private static final String LIST_PODS_HAVING_VMS_FOR_ACCOUNT =
-        "SELECT pod_id FROM cloud.vm_instance WHERE data_center_id = ? AND account_id = ? AND pod_id IS NOT NULL AND (state = 'Running' OR state = 'Stopped') "
+        "SELECT pod_id FROM cloud.vm_instance WHERE data_center_id = ? AND account_id = ? AND pod_id IS NOT NULL AND state IN ('Starting', 'Running', 'Stopped') "
         + "GROUP BY pod_id HAVING count(id) > 0 ORDER BY count(id) DESC";
     private static final String VM_DETAILS = "select vm_instance.id, "
@@ -782,7 +782,7 @@ public class UserVmDaoImpl extends GenericDaoBase<UserVmVO, Long> implements Use
         result.add(new Ternary<Integer, Integer, Integer>(rs.getInt(1), rs.getInt(2), rs.getInt(3)));
     }
 } catch (Exception e) {
-    logger.warn("Error counting vms by size for dcId= " + dcId, e);
+    logger.warn("Error counting vms by size for Data Center ID = " + dcId, e);
 }
 return result;

@@ -209,10 +209,8 @@ public class VolumeDataStoreVO implements StateObject<ObjectInDataStoreStateMach
     public VolumeDataStoreVO(long hostId, long volumeId, Date lastUpdated, int downloadPercent, Status downloadState, String localDownloadPath, String errorString,
             String jobId, String installPath, String downloadUrl, String checksum) {
-        // super();
         dataStoreId = hostId;
         this.volumeId = volumeId;
-        // this.zoneId = zoneId;
         this.lastUpdated = lastUpdated;
         this.downloadPercent = downloadPercent;
         this.downloadState = downloadState;

@@ -3,7 +3,7 @@
 -- distributed with this work for additional information
 -- regarding copyright ownership. The ASF licenses this file
 -- to you under the Apache License, Version 2.0 (the
--- "License"); you may not use this file except in compliances
+-- "License"); you may not use this file except in compliance
 -- with the License. You may obtain a copy of the License at
 --
 -- http://www.apache.org/licenses/LICENSE-2.0

@@ -0,0 +1,20 @@
+-- Licensed to the Apache Software Foundation (ASF) under one
+-- or more contributor license agreements. See the NOTICE file
+-- distributed with this work for additional information
+-- regarding copyright ownership. The ASF licenses this file
+-- to you under the Apache License, Version 2.0 (the
+-- "License"); you may not use this file except in compliance
+-- with the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing,
+-- software distributed under the License is distributed on an
+-- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+-- KIND, either express or implied. See the License for the
+-- specific language governing permissions and limitations
+-- under the License.
+--;
+-- Schema upgrade cleanup from 4.22.0.0 to 4.22.1.0
+--;

@@ -0,0 +1,27 @@
+-- Licensed to the Apache Software Foundation (ASF) under one
+-- or more contributor license agreements. See the NOTICE file
+-- distributed with this work for additional information
+-- regarding copyright ownership. The ASF licenses this file
+-- to you under the Apache License, Version 2.0 (the
+-- "License"); you may not use this file except in compliance
+-- with the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing,
+-- software distributed under the License is distributed on an
+-- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+-- KIND, either express or implied. See the License for the
+-- specific language governing permissions and limitations
+-- under the License.
+--;
+-- Schema upgrade from 4.22.0.0 to 4.22.1.0
+--;
+-- Add vm_id column to usage_event table for volume usage events
+CALL `cloud`.`IDEMPOTENT_ADD_COLUMN`('cloud.usage_event','vm_id', 'bigint UNSIGNED NULL COMMENT "VM ID associated with volume usage events"');
+CALL `cloud_usage`.`IDEMPOTENT_ADD_COLUMN`('cloud_usage.usage_event','vm_id', 'bigint UNSIGNED NULL COMMENT "VM ID associated with volume usage events"');
+-- Add vm_id column to cloud_usage.usage_volume table
+CALL `cloud_usage`.`IDEMPOTENT_ADD_COLUMN`('cloud_usage.usage_volume','vm_id', 'bigint UNSIGNED NULL COMMENT "VM ID associated with the volume usage"');
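The schema upgrade above adds the `vm_id` columns through `IDEMPOTENT_ADD_COLUMN`, so re-running the script is harmless. The procedure itself is defined elsewhere in CloudStack's schema scripts; a hedged sketch of how such a helper can be written (procedure body and name suffix assumed, not CloudStack's actual definition) is to swallow MySQL error 1060, "Duplicate column name":

```sql
-- Sketch only: an idempotent ADD COLUMN helper. If the column already exists,
-- the CONTINUE HANDLER for error 1060 turns the ALTER into a no-op.
CREATE PROCEDURE `cloud`.`IDEMPOTENT_ADD_COLUMN_SKETCH`(
    IN in_table VARCHAR(200), IN in_column VARCHAR(200), IN in_definition VARCHAR(1000))
BEGIN
    DECLARE CONTINUE HANDLER FOR 1060 BEGIN END;  -- duplicate column: ignore
    SET @ddl = CONCAT('ALTER TABLE ', in_table, ' ADD COLUMN ', in_column, ' ', in_definition);
    PREPARE stmt FROM @ddl;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END;
```

With this pattern, each `CALL` in the upgrade script either adds the column or silently succeeds, which is what lets the same upgrade file run against a partially upgraded database.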

@@ -0,0 +1,20 @@
+-- Licensed to the Apache Software Foundation (ASF) under one
+-- or more contributor license agreements. See the NOTICE file
+-- distributed with this work for additional information
+-- regarding copyright ownership. The ASF licenses this file
+-- to you under the Apache License, Version 2.0 (the
+-- "License"); you may not use this file except in compliance
+-- with the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing,
+-- software distributed under the License is distributed on an
+-- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+-- KIND, either express or implied. See the License for the
+-- specific language governing permissions and limitations
+-- under the License.
+--;
+-- Schema upgrade cleanup from 4.22.1.0 to 4.23.0.0
+--;

@@ -0,0 +1,20 @@
+-- Licensed to the Apache Software Foundation (ASF) under one
+-- or more contributor license agreements. See the NOTICE file
+-- distributed with this work for additional information
+-- regarding copyright ownership. The ASF licenses this file
+-- to you under the Apache License, Version 2.0 (the
+-- "License"); you may not use this file except in compliance
+-- with the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing,
+-- software distributed under the License is distributed on an
+-- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+-- KIND, either express or implied. See the License for the
+-- specific language governing permissions and limitations
+-- under the License.
+--;
+-- Schema upgrade from 4.22.1.0 to 4.23.0.0
+--;

@@ -22,7 +22,7 @@
 <parent>
     <groupId>org.apache.cloudstack</groupId>
     <artifactId>cloud-engine</artifactId>
-    <version>4.22.0.0</version>
+    <version>4.23.0.0-SNAPSHOT</version>
 </parent>
 <artifactId>cloud-engine-service</artifactId>
 <packaging>war</packaging>

@@ -24,7 +24,7 @@
 <parent>
     <groupId>org.apache.cloudstack</groupId>
     <artifactId>cloud-engine</artifactId>
-    <version>4.22.0.0</version>
+    <version>4.23.0.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
 </parent>
 <dependencies>

@@ -62,7 +62,6 @@ public class StorageCacheReplacementAlgorithmLRU implements StorageCacheReplacem
     /* Avoid using configDao at this time, we can't be sure that the database is already upgraded
      * and there might be fatal errors when using a dao.
      */
-    //unusedTimeInterval = NumbersUtil.parseInt(configDao.getValue(Config.StorageCacheReplacementLRUTimeInterval.key()), 30);
 }
 public void setUnusedTimeInterval(Integer interval) {

@@ -24,7 +24,7 @@
 <parent>
     <groupId>org.apache.cloudstack</groupId>
     <artifactId>cloud-engine</artifactId>
-    <version>4.22.0.0</version>
+    <version>4.23.0.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
 </parent>
 <dependencies>

@@ -24,7 +24,7 @@
 <parent>
     <groupId>org.apache.cloudstack</groupId>
     <artifactId>cloud-engine</artifactId>
-    <version>4.22.0.0</version>
+    <version>4.23.0.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
 </parent>
 <dependencies>

@@ -24,7 +24,7 @@
 <parent>
     <groupId>org.apache.cloudstack</groupId>
     <artifactId>cloud-engine</artifactId>
-    <version>4.22.0.0</version>
+    <version>4.23.0.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
 </parent>
 <dependencies>

@@ -24,7 +24,7 @@
 <parent>
     <groupId>org.apache.cloudstack</groupId>
     <artifactId>cloud-engine</artifactId>
-    <version>4.22.0.0</version>
+    <version>4.23.0.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
 </parent>
 <dependencies>

@ -87,8 +87,6 @@ import com.cloud.utils.component.ComponentContext;
@ContextConfiguration(locations = {"classpath:/storageContext.xml"}) @ContextConfiguration(locations = {"classpath:/storageContext.xml"})
public class VolumeServiceTest extends CloudStackTestNGBase { public class VolumeServiceTest extends CloudStackTestNGBase {
// @Inject
// ImageDataStoreProviderManager imageProviderMgr;
@Inject @Inject
TemplateService imageService; TemplateService imageService;
@Inject @Inject
@ -232,23 +230,7 @@ public class VolumeServiceTest extends CloudStackTestNGBase {
DataStore store = createImageStore(); DataStore store = createImageStore();
VMTemplateVO image = createImageData(); VMTemplateVO image = createImageData();
TemplateInfo template = imageDataFactory.getTemplate(image.getId(), store); TemplateInfo template = imageDataFactory.getTemplate(image.getId(), store);
// AsyncCallFuture<TemplateApiResult> future =
// imageService.createTemplateAsync(template, store);
// future.get();
template = imageDataFactory.getTemplate(image.getId(), store); template = imageDataFactory.getTemplate(image.getId(), store);
/*
* imageProviderMgr.configure("image Provider", new HashMap<String,
* Object>()); VMTemplateVO image = createImageData();
* ImageDataStoreProvider defaultProvider =
* imageProviderMgr.getProvider("DefaultProvider");
* ImageDataStoreLifeCycle lifeCycle =
* defaultProvider.getLifeCycle(); ImageDataStore store =
* lifeCycle.registerDataStore("defaultHttpStore", new
* HashMap<String, String>());
* imageService.registerTemplate(image.getId(),
* store.getImageDataStoreId()); TemplateEntity te =
* imageService.getTemplateEntity(image.getId()); return te;
*/
return template; return template;
} catch (Exception e) { } catch (Exception e) {
Assert.fail("failed", e); Assert.fail("failed", e);
@@ -333,30 +315,6 @@ public class VolumeServiceTest extends CloudStackTestNGBase {
              ClusterScope scope = new ClusterScope(clusterId, podId, dcId);
              lifeCycle.attachCluster(store, scope);
-             /*
-              * PrimaryDataStoreProvider provider =
-              * primaryDataStoreProviderMgr.getDataStoreProvider
-              * ("sample primary data store provider");
-              * primaryDataStoreProviderMgr.configure("primary data store mgr",
-              * new HashMap<String, Object>());
-              *
-              * List<PrimaryDataStoreVO> ds =
-              * primaryStoreDao.findPoolByName(this.primaryName); if (ds.size()
-              * >= 1) { PrimaryDataStoreVO store = ds.get(0); if
-              * (store.getRemoved() == null) { return
-              * provider.getDataStore(store.getId()); } }
-              *
-              *
-              * Map<String, String> params = new HashMap<String, String>();
-              * params.put("url", this.getPrimaryStorageUrl());
-              * params.put("dcId", dcId.toString()); params.put("clusterId",
-              * clusterId.toString()); params.put("name", this.primaryName);
-              * PrimaryDataStoreInfo primaryDataStoreInfo =
-              * provider.registerDataStore(params); PrimaryDataStoreLifeCycle lc
-              * = primaryDataStoreInfo.getLifeCycle(); ClusterScope scope = new
-              * ClusterScope(clusterId, podId, dcId); lc.attachCluster(scope);
-              * return primaryDataStoreInfo;
-              */
              return store;
          } catch (Exception e) {
              return null;
@@ -376,7 +334,6 @@ public class VolumeServiceTest extends CloudStackTestNGBase {
          TemplateInfo te = createTemplate();
          VolumeVO volume = createVolume(te.getId(), primaryStore.getId());
          VolumeInfo vol = volumeFactory.getVolume(volume.getId(), primaryStore);
-         // ve.createVolumeFromTemplate(primaryStore.getId(), new VHD(), te);
          AsyncCallFuture<VolumeApiResult> future = volumeService.createVolumeFromTemplateAsync(vol, primaryStore.getId(), te);
          try {
              future.get();


@@ -24,7 +24,7 @@
     <parent>
         <groupId>org.apache.cloudstack</groupId>
         <artifactId>cloud-engine</artifactId>
-        <version>4.22.0.0</version>
+        <version>4.23.0.0-SNAPSHOT</version>
         <relativePath>../../pom.xml</relativePath>
     </parent>
     <dependencies>


@@ -24,7 +24,7 @@
     <parent>
         <groupId>org.apache.cloudstack</groupId>
         <artifactId>cloud-engine</artifactId>
-        <version>4.22.0.0</version>
+        <version>4.23.0.0-SNAPSHOT</version>
         <relativePath>../pom.xml</relativePath>
     </parent>
     <dependencies>


@@ -24,7 +24,7 @@
     <parent>
         <groupId>org.apache.cloudstack</groupId>
         <artifactId>cloud-engine</artifactId>
-        <version>4.22.0.0</version>
+        <version>4.23.0.0-SNAPSHOT</version>
         <relativePath>../../pom.xml</relativePath>
     </parent>
     <dependencies>


@@ -672,6 +672,12 @@ public class DefaultSnapshotStrategy extends SnapshotStrategyBase {
             }
         }
+        if (CollectionUtils.isNotEmpty(vmSnapshotDao.findByVmAndByType(volumeVO.getInstanceId(), VMSnapshot.Type.DiskAndMemory))) {
+            logger.debug("DefaultSnapshotStrategy cannot handle snapshot [{}] for volume [{}] as the volume is attached to a VM with disk-and-memory VM snapshots." +
+                    " Restoring the volume snapshot would corrupt any newer disk-and-memory VM snapshots.", snapshot, volumeVO);
+            return StrategyPriority.CANT_HANDLE;
+        }
         return StrategyPriority.DEFAULT;
     }


@@ -26,14 +26,21 @@ import javax.inject.Inject;
 import javax.naming.ConfigurationException;
 import com.cloud.hypervisor.Hypervisor;
+import com.cloud.storage.Snapshot;
+import com.cloud.storage.dao.SnapshotDao;
 import com.cloud.vm.snapshot.VMSnapshotDetailsVO;
 import com.cloud.vm.snapshot.dao.VMSnapshotDetailsDao;
+import org.apache.cloudstack.backup.BackupManager;
+import org.apache.cloudstack.backup.BackupOfferingVO;
+import org.apache.cloudstack.backup.BackupProvider;
+import org.apache.cloudstack.backup.dao.BackupOfferingDao;
 import org.apache.cloudstack.engine.subsystem.api.storage.StrategyPriority;
 import org.apache.cloudstack.engine.subsystem.api.storage.VMSnapshotOptions;
 import org.apache.cloudstack.engine.subsystem.api.storage.VMSnapshotStrategy;
 import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
 import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
 import org.apache.cloudstack.storage.to.VolumeObjectTO;
+import org.apache.commons.collections4.CollectionUtils;
 import org.apache.commons.lang3.StringUtils;
 import com.cloud.agent.AgentManager;
@@ -104,7 +111,16 @@ public class DefaultVMSnapshotStrategy extends ManagerBase implements VMSnapshot
     PrimaryDataStoreDao primaryDataStoreDao;
     @Inject
-    VMSnapshotDetailsDao vmSnapshotDetailsDao;
+    private VMSnapshotDetailsDao vmSnapshotDetailsDao;
+    @Inject
+    private BackupManager backupManager;
+    @Inject
+    private BackupOfferingDao backupOfferingDao;
+    @Inject
+    private SnapshotDao snapshotDao;
     protected static final String KVM_FILE_BASED_STORAGE_SNAPSHOT = "kvmFileBasedStorageSnapshot";
@@ -480,24 +496,44 @@ public class DefaultVMSnapshotStrategy extends ManagerBase implements VMSnapshot
     @Override
     public StrategyPriority canHandle(Long vmId, Long rootPoolId, boolean snapshotMemory) {
         UserVmVO vm = userVmDao.findById(vmId);
+        String cantHandleLog = String.format("Default VM snapshot strategy cannot handle VM snapshot for [%s]", vm);
         if (State.Running.equals(vm.getState()) && !snapshotMemory) {
-            logger.debug("Default VM snapshot strategy cannot handle VM snapshot for [{}] as it is running and its memory will not be affected.", vm);
+            logger.debug("{} as it is running and its memory will not be affected.", cantHandleLog);
             return StrategyPriority.CANT_HANDLE;
         }
         if (vmHasKvmDiskOnlySnapshot(vm)) {
-            logger.debug("Default VM snapshot strategy cannot handle VM snapshot for [{}] as it has a disk-only VM snapshot using kvmFileBasedStorageSnapshot strategy." +
-                    "These two strategies are not compatible, as reverting a disk-only VM snapshot will erase newer disk-and-memory VM snapshots.", vm);
+            logger.debug("{} as it is not compatible with disk-only VM snapshots on KVM: disk-and-memory snapshots use internal snapshots, while disk-only VM snapshots" +
+                    " use external snapshots, and restoring an external snapshot loses any newer internal snapshots.", cantHandleLog);
             return StrategyPriority.CANT_HANDLE;
         }
         List<VolumeVO> volumes = volumeDao.findByInstance(vmId);
         for (VolumeVO volume : volumes) {
             if (volume.getFormat() != ImageFormat.QCOW2) {
-                logger.debug("Default VM snapshot strategy cannot handle VM snapshot for [{}] as it has a volume [{}] that is not in the QCOW2 format.", vm, volume);
+                logger.debug("{} as it has a volume [{}] that is not in the QCOW2 format.", cantHandleLog, volume);
                 return StrategyPriority.CANT_HANDLE;
             }
+            if (CollectionUtils.isNotEmpty(snapshotDao.listByVolumeIdAndTypeNotInAndStateNotRemoved(volume.getId(), Snapshot.Type.GROUP))) {
+                logger.debug("{} as it has a volume [{}] with volume snapshots: disk-and-memory snapshots use internal snapshots, while volume snapshots use external" +
+                        " snapshots, and restoring an external snapshot loses any newer internal snapshots.", cantHandleLog, volume);
+                return StrategyPriority.CANT_HANDLE;
+            }
         }
+        BackupOfferingVO backupOfferingVO = backupOfferingDao.findById(vm.getBackupOfferingId());
+        if (backupOfferingVO == null) {
+            return StrategyPriority.DEFAULT;
+        }
+        BackupProvider provider = backupManager.getBackupProvider(backupOfferingVO.getProvider());
+        if (!provider.supportsMemoryVmSnapshot()) {
+            logger.debug("{} as the VM has a backup offering whose provider does not support memory VM snapshots.", cantHandleLog);
+            return StrategyPriority.CANT_HANDLE;
+        }
         return StrategyPriority.DEFAULT;
     }
@@ -508,7 +544,7 @@ public class DefaultVMSnapshotStrategy extends ManagerBase implements VMSnapshot
         for (VMSnapshotVO vmSnapshotVO : vmSnapshotDao.findByVmAndByType(vm.getId(), VMSnapshot.Type.Disk)) {
             List<VMSnapshotDetailsVO> vmSnapshotDetails = vmSnapshotDetailsDao.listDetails(vmSnapshotVO.getId());
-            if (vmSnapshotDetails.stream().anyMatch(vmSnapshotDetailsVO -> vmSnapshotDetailsVO.getName().equals(KVM_FILE_BASED_STORAGE_SNAPSHOT))) {
+            if (vmSnapshotDetails.stream().anyMatch(detailsVO -> KVM_FILE_BASED_STORAGE_SNAPSHOT.equals(detailsVO.getName()) || STORAGE_SNAPSHOT.equals(detailsVO.getName()))) {
                 return true;
             }
         }
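For context, the `canHandle` changes in this diff all follow one gating pattern: each incompatibility check short-circuits to `StrategyPriority.CANT_HANDLE`, and only when every check passes does the strategy claim `DEFAULT` priority. The sketch below is a minimal, self-contained illustration of that pattern, not CloudStack's actual classes; the `Vm` record and its boolean fields are hypothetical stand-ins for `UserVmVO` plus the DAO lookups the real code performs.

```java
// Minimal sketch of the canHandle gating pattern used above: every
// incompatibility short-circuits to CANT_HANDLE; only if all checks pass
// does the strategy return DEFAULT. Vm is a hypothetical stand-in type.
public class CanHandleSketch {
    enum StrategyPriority { CANT_HANDLE, DEFAULT }

    // providerSupportsMemorySnapshot == null models "no backup offering".
    record Vm(boolean running, boolean snapshotMemory, boolean hasKvmDiskOnlySnapshot,
              boolean allVolumesQcow2, boolean hasVolumeSnapshots,
              Boolean providerSupportsMemorySnapshot) {}

    static StrategyPriority canHandle(Vm vm) {
        if (vm.running() && !vm.snapshotMemory()) {
            return StrategyPriority.CANT_HANDLE; // running VM, memory not snapshotted
        }
        if (vm.hasKvmDiskOnlySnapshot()) {
            return StrategyPriority.CANT_HANDLE; // internal vs. external snapshot conflict
        }
        if (!vm.allVolumesQcow2() || vm.hasVolumeSnapshots()) {
            return StrategyPriority.CANT_HANDLE; // per-volume checks from the for loop
        }
        Boolean supports = vm.providerSupportsMemorySnapshot();
        if (supports != null && !supports) {
            return StrategyPriority.CANT_HANDLE; // backup provider lacks memory snapshot support
        }
        return StrategyPriority.DEFAULT;
    }

    public static void main(String[] args) {
        // Stopped VM, QCOW2 volumes, no conflicting snapshots, no backup offering.
        System.out.println(canHandle(new Vm(false, true, false, true, false, null))); // prints DEFAULT
    }
}
```

This ordering matters: the cheapest checks (VM state) run first, and the backup-offering lookup, added last in the diff, only runs once the volume loop has passed.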
