- adding support for NetScaler VPX & MPX load balancers
- implemented for virtual networking
- works only with a newly fetched public IP; inline support is not added yet
1) Added new APIs: createFirewallRule, deleteFirewallRule, listFirewallRules
2) Modified existing APIs - added a boolean openFirewall parameter to createPortForwardingRule/createIpForwardingRule/createRemoteAccessVpn. If the parameter is set to true, the firewall is opened on the domR before the actual PF rule is created there (see the sketch after this list).
Modified backend calls appropriately.
3) Schema changes for the firewall_rules table:
* startPort/endPort can be null now
* added icmp_type and icmp_code fields (they may be non-null only when the protocol is ICMP)
4) Added new manager - FirewallManagerImpl
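A minimal sketch of how the openFirewall flag could thread through rule creation, assuming a simplified FirewallManager; the method signatures below are illustrative stand-ins, not the actual CloudStack classes:

    // Illustrative sketch only: the openFirewall flow described in item 2.
    // FirewallManager/createFirewallRule names follow the API list above;
    // the signatures themselves are assumptions, not the real code.
    interface FirewallManager {
        void createFirewallRule(long ipAddressId, String protocol,
                                Integer startPort, Integer endPort);
    }

    class RulesServiceSketch {
        private final FirewallManager firewallManager;

        RulesServiceSketch(FirewallManager firewallManager) {
            this.firewallManager = firewallManager;
        }

        void createPortForwardingRule(long ipAddressId, int publicPort,
                                      String protocol, boolean openFirewall) {
            if (openFirewall) {
                // Open the firewall on the domR before creating the PF rule itself.
                firewallManager.createFirewallRule(ipAddressId, protocol, publicPort, publicPort);
            }
            // ... create the actual port forwarding rule on the domR ...
        }
    }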
Conflicts:
api/src/com/cloud/api/BaseCmd.java
client/tomcatconf/commands.properties.in
server/src/com/cloud/api/ApiResponseHelper.java
server/src/com/cloud/configuration/DefaultComponentLibrary.java
server/src/com/cloud/network/lb/LoadBalancingRulesManagerImpl.java
server/src/com/cloud/network/rules/RulesManagerImpl.java
Two integrity checks run before any components load:
1. Are there multiple hosts connected to the same local storage pool due to a 2.1.x bug?
2. Is there any missed premium upgrade?
If either answer is true, the management server stops and asks the user to contact Cloud.com support (a JDBC sketch of check 1 follows below).
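A hedged sketch of how check 1 could be probed with plain JDBC; the table and column names (storage_pool_host_ref, pool_id, host_id) and the omitted local-pool filter are assumptions drawn from the description above, not the verified 2.1.x schema:

    // Hedged sketch: detect multiple hosts mapped to the same storage pool.
    // Table/column names are assumptions, not the verified schema.
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    class LocalStoragePoolCheck {
        // Returns true if any pool has more than one host attached
        // (restricting this to *local* pools is elided for brevity).
        static boolean hasDuplicatePoolHosts(Connection conn) throws SQLException {
            String sql = "SELECT pool_id FROM storage_pool_host_ref"
                       + " GROUP BY pool_id HAVING COUNT(host_id) > 1";
            Statement stmt = conn.createStatement();
            try {
                ResultSet rs = stmt.executeQuery(sql);
                return rs.next();
            } finally {
                stmt.close();
            }
        }
    }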
Use a new target "system-integrity-checker" in components.xml/components-premium.xml.
All checkers must be explicitly specified in the XML file; they will execute before any components load (a sketch of the implied contract follows below).
status 10860: resolved fixed
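A minimal sketch of the checker contract this wiring implies; the interface name mirrors the "system-integrity-checker" target, while the check() signature and the fail-by-throwing behavior are assumptions:

    // Minimal sketch of the checker contract implied above. Stopping the
    // management server by throwing from check() is an assumption.
    interface SystemIntegrityChecker {
        void check();  // runs before any components load; throw to abort startup
    }

    class Upgrade21xIntegrityChecker implements SystemIntegrityChecker {
        @Override
        public void check() {
            if (hasDuplicateLocalPoolHosts() || hasMissedPremiumUpgrade()) {
                throw new RuntimeException(
                    "Integrity check failed: please contact Cloud.com support");
            }
        }

        private boolean hasDuplicateLocalPoolHosts() { return false; }  // see sketch above
        private boolean hasMissedPremiumUpgrade() { return false; }     // stubbed
    }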
Changes:
- Added new Investigator 'ManagementIPSystemVMInvestigator' that checks whether a VM is alive, but only for system VMs that have a management IP address.
- If no management IP is found, the ping test cannot be done, so this investigator returns null in that case.
- The current implementation, InvestigatorImpl, is renamed to 'UserVmDomRInvestigator' and does the ping test for user VMs only.
- Corrected the ping test code that was checking a hard-coded string. Now if the ping answer is negative, we just return null (see the sketch after this list).
- Added the new investigator to components.xml
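A hedged sketch of the three-valued alive/dead/unknown contract described above; the Investigator interface and the isVmAlive signature are simplified assumptions, not the actual CloudStack API:

    // Hedged sketch of the three-valued "is the VM alive?" contract.
    // The interface and signature are simplified assumptions.
    interface Investigator {
        // true = alive, false = dead, null = cannot determine
        Boolean isVmAlive(long vmId);
    }

    class ManagementIPSystemVMInvestigatorSketch implements Investigator {
        @Override
        public Boolean isVmAlive(long vmId) {
            String mgmtIp = findManagementIp(vmId);
            if (mgmtIp == null) {
                return null;  // no management IP, so the ping test cannot be done
            }
            // A negative ping answer is inconclusive (the path could be cut
            // off), so return null instead of declaring the VM dead.
            return ping(mgmtIp) ? Boolean.TRUE : null;
        }

        private String findManagementIp(long vmId) { return null; }  // stubbed
        private boolean ping(String ip) { return false; }            // stubbed
    }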
- This NPE happened when starting the DomR and its Volume's podId was null.
- A Volume's podId being null is a case that should never happen.
- It looks like this is a side effect of some other bug, most likely another DomR and basic networking issue (9578).
- While reviewing this bug, found out that we no longer need to use the RecreateHostAllocator. Using it is actually harmful, since this allocator ignores what the DeploymentPlanner passed to it in the plan and lists pods once again.
- Changed components.xml to use FirstFitRoutingAllocator instead (see the sketch below).
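A hedged sketch of the design point above: an allocator should honor the pod already chosen in the DeploymentPlan instead of re-listing pods, and should fail fast on a null podId rather than hit the NPE. The DeploymentPlan shape and allocateHost signature here are simplified stand-ins, not the real API:

    // Hedged sketch: respect the planner-chosen pod; fail fast on null podId.
    import java.util.List;

    class FirstFitRoutingAllocatorSketch {
        static class DeploymentPlan {
            final Long podId;  // pod chosen by the DeploymentPlanner; may be null
            DeploymentPlan(Long podId) { this.podId = podId; }
        }

        Long allocateHost(DeploymentPlan plan, List<Long> hostIdsInPod) {
            if (plan.podId == null) {
                // Guard against the NPE scenario above: a null podId signals
                // an upstream bug, so abort instead of dereferencing it.
                throw new IllegalStateException("DeploymentPlan has no podId; refusing to allocate");
            }
            // First fit: pick the first suitable host in the planner-chosen
            // pod, rather than ignoring the plan and scanning all pods again.
            return hostIdsInPod.isEmpty() ? null : hostIdsInPod.get(0);
        }
    }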