Moving build scripts maintained in separate repo to cloudstack repo

This commit is contained in:
Santhosh Edukulla 2013-12-09 19:28:08 +05:30 committed by Girish Shilamkar
parent 71aa2c0881
commit a2e4fdbd5a
21 changed files with 2581 additions and 0 deletions


@ -0,0 +1,160 @@
about
=====
This document describes the *evolving* continuous test infrastructure used to set up, deploy, configure and test Apache CloudStack. The information here is useful for anyone involved in building, testing, and continuous integration, and even for operators of CloudStack.
components
==========
. nightly yum/apt repositories
. cobbler
.. rhel / ubuntu/ debian kickstarts
.. hypervisor kickstarts
.. adding new profiles
. puppet
. dnsmasq
. ntpd
. jenkins jnlp slave
. scheduling
. networking setup
[insert diagram here]
The above illustration shows a high-level view of the test infrastructure setup. In this section we explain the tools and their organization in the infrastructure. The workflow detailed in a later section shows how these pieces work together:
1. At the center of the workflow is the "driver" appliance that manages the infrastructure. This is a CentOS 6.2 VM running on a XenServer. The driver VM is responsible for triggering the process when it is time for a test run.
The driver appliance is composed of the following parts:
a. Cobbler - cobbler is a PXE provisioning server (and much more) useful for the rapid setup of Linux machines. It can do DNS, DHCP, power management and package configuration via puppet. It is capable of managing the network installation of both physical and virtual infrastructure. Cobbler comes with an expressive CLI as well as web-UI frontends for management.
Cobbler manages installations through profiles and systems:
profiles - these are text files called kickstarts defined for a distribution's installation, e.g. RHEL 6.1 or Ubuntu 12.04 LTS. Each of the machines in the test environment - hypervisors and cloudstack management servers - has a profile in the form of a kickstart.
The profile list looks as follows:
[root@infra ~]# cobbler profile list
cloudstack-rhel
cloudstack-ubuntu
rhel63-kvm
rhel63-x86_64
ubuntu1204-x86_64
xen602
xen56
systems - these are virtual/physical machines mapped to cobbler profiles, based on the hostnames of machines that can come alive within the environment.
[root@infra ~]# cobbler system list
acs-qa-h11
acs-qa-h20
acs-qa-h21
acs-qa-h23
cloudstack-rhel
cloudstack-ubuntu
When a new image needs to be added, we create a 'distro' in cobbler and associate it with a profile's kickstart. Any new systems to be serviced by the profile can then be added easily via the command line.
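For illustration, registering a new distro, profile and system might look as follows; the names, kernel/initrd paths and MAC address here are placeholders, not values from this commit:
[root@infra ~]# cobbler distro add --name=rhel63-x86_64 --kernel=/path/to/vmlinuz --initrd=/path/to/initrd.img
[root@infra ~]# cobbler profile add --name=rhel63-kvm --distro=rhel63-x86_64 --kickstart=/var/lib/cobbler/kickstarts/rhel63-kvm.ks
[root@infra ~]# cobbler system add --name=acs-qa-h24 --profile=rhel63-kvm --mac-address=00:16:3e:00:00:01
[root@infra ~]# cobbler sync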
b. Puppet master - Cobbler reimages machines on demand, but it is up to puppet recipes to do configuration management within them. Configuration management is required for kvm hypervisors (e.g. the kvm agent) and for the cloudstack management server, which needs mysql, cloudstack, etc. The puppetmasterd daemon on the driver VM is responsible for 'kicking' nodes to initiate configuration management on themselves when they come alive.
The driver VM is also the repository of all the puppet recipes for the various modules that need to be configured for the test infrastructure to work. The modules are placed in /etc/puppet and bear the same structure as our github repo. When we need to effect a configuration change on any of our systems, we only change the github repo and the change is picked up on the next run.
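To trigger a configuration run on a node from the driver VM, puppet's run trigger can be used; the hostname below is a placeholder:
[root@infra ~]# puppet kick --host acs-qa-h11.fmt.vmops.com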
c. dnsmasq - DNS is controlled by cobbler, but its host configuration is set within dnsmasq.d/hosts. This is a simple one-to-one mapping of hostnames to IPs. For the most part this should be the single place one needs to alter when replicating the test setup; everywhere else only DNS names are (or should be) used. Ports 53 and 67 must be open on the server.
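The file follows the standard hosts-file format of one IP-to-name mapping per line; for example, using two addresses from the environment properties later in this commit:
10.223.75.10 infra
10.223.75.41 cloudstack-rhel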
d. dhcp - DHCP is also done by dnsmasq. All configuration is in /etc/dnsmasq.conf. Static MAC-IP-name mappings are given for the hypervisors, while the virtual instances get dynamic IPs.
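A static mapping uses dnsmasq's dhcp-host directive; the acs-qa-h11 entry from the environment properties in this commit would look like:
dhcp-host=d0:67:e5:ef:e0:1b,acs-qa-h11,10.223.75.20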
e. ipmitool - IPMI for power management is set up on all the test servers, and ipmitool provides a convenient CLI for booting the machines on the network into PXE.
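The two ipmitool calls used later by configure.py set the boot device to PXE and power-cycle the chassis:
[root@infra ~]# ipmitool -Uroot -P<ipmi-password> -H<ipmi-host> chassis bootdev pxe
[root@infra ~]# ipmitool -Uroot -P<ipmi-password> -H<ipmi-host> chassis power cycle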
f. jenkins-slave - the jenkins slave.jar is placed on the driver VM as a service in /etc/init.d so it can react to jenkins schedules and post reports back. The slave runs in headless mode as the driver VM does not run X.
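The init script essentially wraps the standard headless JNLP slave invocation; the jenkins URL and node name below are placeholders:
java -jar slave.jar -jnlpUrl http://<jenkins-server>/computer/<node-name>/slave-agent.jnlp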
g. ntpd - the ntp daemon runs and syncs time for all machines in the system. Puppet depends on clocks being in sync when configuring nodes, as does the management server when deployed in a cluster.
h. puppet - the puppetmaster is set to listen on port 8140
2. NFS storage - the NFS server is a single server serving as both primary and secondary storage. This is a limitation compared to true production deployments but serves a test setup in good stead. Where it becomes a limitation is in testing different storage backends: object stores, local storage, clustered local storage, etc. are not addressed by this setup.
3. Hypervisor hosts - there are currently 4 hosts in this environment. They are arranged at the moment in three pods so as to be capable of being deployed in a two-zone environment: one zone with two pods and a second zone with a single pod. This covers tests that depend on
a. single zone/pod/cluster
b. multiple cluster
c. inter-zone tests
d. multi-pod tests
marvin integration
==================
Once cloudstack has been installed and the hypervisors prepared, we are ready to use marvin to stitch together zones, pods, clusters, compute and storage into a 'cloud'. Once configured, we perform a cursory health check to see that all systemVMs are running in all zones and that the built-in templates have downloaded in all zones. Subsequently we are able to launch tests on this environment.
Only the latest tests from git are run on the setup. This allows us to test in a pseudo-continuous fashion with a nightly build deployed on the environment. Each test run takes a few hours to finish.
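Both the health check and the subsequent test launch are driven through the marvin nose plugin; the health-check invocation mirrors the one in the README included in this commit:
$ nosetests -v --with-marvin --marvin-config=deployment.cfg --result-log=result.log testSetupSuccess.py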
control via github
==================
There are two github repositories controlling the test infrastructure.
a. The puppet recipes at gh:acs-infra-test
b. The gh:cloud-autodeploy repo that has the scripts to orchestrate the overall workflow
workflow
========
When jenkins triggers the job, the following sequence of actions occurs on the test infrastructure:
1. The deployment configuration is chosen based on the hypervisor being used. We currently have xen.cfg and kvm.cfg that are in the gh:cloud-autodeploy repo
2. A virtualenv python environment is created, into which the configuration and the test runs by marvin are isolated. Virtualenv is great for sandboxing test environment runs. Into the virtualenv are copied all the latest tests from the git:incubator-cloudstack repo.
3. We fetch the last successful marvin build from builds.a.o and install it within this virtualenv. Installing a new marvin on each run helps us test with the latest APIs available.
4. We fetch the latest version of the driver script from github:cloud-autodeploy. Fetching the latest allows us to make adjustments to the infra without having to copy scripts into the test infrastructure.
5. Based on the hypervisor chosen, we pick a profile for cobbler to reimage the hosts in the infrastructure. If xen is chosen, we bring up the profile of the latest xen kickstart available in cobbler; currently this is xen 6.0.2. If kvm is chosen, we can pick between ubuntu- and rhel-based host OS kickstarts.
6. With this knowledge we kick off the driver script with the following cmd line arguments:
$ python configure.py -v $hypervisor -d $distro -p $profile -l $LOG_LVL
The $distro argument chooses the host OS of the mgmt server - this can be ubuntu / rhel. LOG_LVL can be set to INFO/DEBUG/WARN for troubleshooting and more verbose log output.
7. The configure script does various operations to prepare the environment:
a. clears up any dirty cobbler systems from previous runs (see the example after this list)
b. cleans up the puppet certificates of these systems; puppet recipes will fail if the puppetmaster finds an invalid certificate
c. starts up a new xenserver VM that will act as the mgmt server. We chose to keep things simple by launching the VM on a xenserver; one could employ jclouds via jenkins to deploy the mgmt server VM on a dogfooded cloudstack.
d. in parallel, the marvin deployment config is parsed to find the hypervisors that need to be cleaned up, PXE booted and prepared for the cloudstack deployment
e. all the hosts in the marvin config are PXE booted via IPMI, and cobbler takes over to reimage them with the profile chosen by the jenkins job run
f. while this is happening, we also seed the secondary storage with the systemvm template required for the hypervisor
g. all the primary stores in the marvin config are then cleaned for the next run
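Steps (a) and (b) reduce to the cobbler and puppet calls that configure.py issues for each host; the hostname is a placeholder:
cobbler system remove --name=<hostname>
puppet cert clean <hostname>.fmt.vmops.com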
8. While cobbler is reimaging the hosts with the right profiles, the configure script waits until all hosts are reachable over ssh. It also checks for essential service ports (http, mysql) to come up. Once done refreshing a machine, cobbler hands over the reins to puppet.
9. Puppet agents within the machines in the environment reach out to the puppetmaster to get their identity. The mgmt server VM fetches its own recipe and starts configuring itself, while the hypervisors do the same in case they need to act as kvm agents.
10. When the essential ports for the mgmt server - 8080 and 8096 - are open and listening, we know that the mgmt server has come up successfully. We then go ahead and deploy the configuration specified by marvin.
11. After marvin finishes configuring the cloud, it performs a health check to see if the system is ready for running tests.
12. Tests are run using the nose test runner with the marvin plugin and reports are recorded by jenkins.
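The test run in step 12 mirrors the nosetests invocation from the jenkins job script included in this commit:
$ nosetests -v --processes=5 --process-timeout=3600 --with-marvin --marvin-config=cloud-autodeploy/$hypervisor.cfg -w integration/smoke --load --with-xunitmp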
limitations
===========
enhancements
============
- packaging tests
- puppetize the cobbler appliance
- dogfooding
- run test fixes on idle environment upon checkin without deploy
- custom zones - using a marvin config file
- logging enhancements = archiving + syslog
- digest emails via jenkins. controlling spam
- external devices (LB, VPX, FW)
- mcollective?
future
======
- not everyone deploys cloudstack the same
- multiple hv environments with multiple hv configurations
- multiple storage configurations
troubleshooting
===============
acknowledgements
================


@ -0,0 +1,49 @@
#Cloud AutoDeploy
Scripts here are used to refresh the builds of the management server with those
made out of our CI system. The CI system is internal at the moment.
###Dependencies
* Python
* [jenkinsapi](http://pypi.python.org/pypi/jenkinsapi)
* marvin
build.cfg - contains build information given to the CI system
- branch, BUILDABLE_TARGET
- distro of mgmt server tarball
You may leave the rest as they are; the defaults should work fine.
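A minimal build.cfg might look like the following; the section name matches what buildGenerator.py reads, but every value here is illustrative since the CI system is internal:

```
[build_params]
branch=master
BUILDABLE_TARGET=cloudstack
PACKAGE_VERSION=4.2
DO_DISTRO_PACKAGES=rhel63
PUSH_TO_REPO=nightly
```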
environment.cfg - typically the VM where you intend to install the above build of
the mgmt server. SSH access must be available; credentials are in the config file.
deployment.cfg - the JSON network model configuration file generated by Marvin so
the mgmt server can be configured. See the Marvin tutorial on how to fetch these.
Other options:
- `--skip-host` - skips the IPMI/PXE refresh of the hosts
- `--install-marvin` - pulls the latest marvin tarball from the CI system and installs it
Once you have the configuration set up in the above .cfg files, simply run the
following.
### 1a. reset the environment with the new build
`$ python configure.py -b build.cfg -e environment.cfg -d deployment.cfg [[--skip-host] --install-marvin]`
OR
### 1b. reset the environment with a specific build number
`$ python configure.py -n <build-number> -e environment.cfg -d deployment.cfg [[--skip-host] --install-marvin]`
### 2. restart mgmt server to have the integration port (8096) open
`$ python restartMgmt.py -e environment.cfg`
### 3. setup cloudstack with your deployment configuration
`$ nosetests -v --with-marvin --marvin-config=deployment.cfg --result-log=result.log -w /tmp`
### 4. restart again for global settings to be applied
`$ python restartMgmt.py -e environment.cfg`
### 5. wait for templates and system VMs to be ready
`$ nosetests -v --with-marvin --marvin-config=deployment.cfg --result-log=result.log testSetupSuccess.py`


@ -0,0 +1,207 @@
#!/usr/bin/env python
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
'''
############################################################
# Experimental state of scripts
# * Need to be reviewed
# * Only a sandbox
############################################################
'''
import random
import marvin
from ConfigParser import SafeConfigParser
from optparse import OptionParser
from marvin.configGenerator import *
def getGlobalSettings(config):
for k, v in dict(config.items('globals')).iteritems():
cfg = configuration()
cfg.name = k
cfg.value = v
yield cfg
def describeResources(config):
zs = cloudstackConfiguration()
z = zone()
z.dns1 = config.get('environment', 'dns1')
z.dns2 = config.get('environment', 'dns2')
z.internaldns1 = config.get('environment', 'internal_dns1')
z.internaldns2 = config.get('environment', 'internal_dns2')
z.name = 'z0'
z.networktype = 'Advanced'
z.guestcidraddress = '10.1.1.0/24'
z.securitygroupenabled = 'false'
vpcprovider = provider()
vpcprovider.name = 'VpcVirtualRouter'
lbprovider = provider()
lbprovider.name = 'InternalLbVm'
pn = physical_network()
pn.name = "z0-pnet"
pn.traffictypes = [traffictype("Guest"), traffictype("Management"), traffictype("Public")]
pn.isolationmethods = ["VLAN"]
pn.vlan = config.get('cloudstack', 'z0.guest.vlan')
pn.providers.append(vpcprovider)
pn.providers.append(lbprovider)
z.physical_networks.append(pn)
p = pod()
p.name = 'z0p0'
p.gateway = config.get('cloudstack', 'z0p0.private.gateway')
p.startip = config.get('cloudstack', 'z0p0.private.pod.startip')
p.endip = config.get('cloudstack', 'z0p0.private.pod.endip')
p.netmask = config.get('cloudstack', 'z0p0.private.netmask')
v = iprange()
v.gateway = config.get('cloudstack', 'z0p0.public.gateway')
v.startip = config.get('cloudstack', 'z0p0.public.vlan.startip')
v.endip = config.get('cloudstack', 'z0p0.public.vlan.endip')
v.netmask = config.get('cloudstack', 'z0p0.public.netmask')
v.vlan = config.get('cloudstack', 'z0p0.public.vlan')
z.ipranges.append(v)
c = cluster()
c.clustername = 'z0p0c0'
c.hypervisor = config.get('cloudstack', 'hypervisor')
c.clustertype = 'CloudManaged'
h = host()
#Host 1
h.username = 'root'
h.password = config.get('cloudstack', 'host.password')
h.url = 'http://%s'%(config.get('cloudstack', 'z0p0c0h0.host'))
c.hosts.append(h)
#Host 2
h1 = host()
h1.username = 'root'
h1.password = config.get('cloudstack', 'host.password')
h1.url = 'http://%s'%(config.get('cloudstack', 'z0p0c0h1.host'))
c.hosts.append(h1)
#Primary 1
ps = primaryStorage()
ps.name = 'z0p0c0ps0'
ps.url = config.get('cloudstack', 'z0p0c0ps0.primary.pool')
c.primaryStorages.append(ps)
#Primary 2
ps1 = primaryStorage()
ps1.name = 'z0p0c0ps1'
ps1.url = config.get('cloudstack', 'z0p0c0ps1.primary.pool')
c.primaryStorages.append(ps1)
p.clusters.append(c)
z.pods.append(p)
#Pod 2
p1 = pod()
p1.name = 'z0p1'
p1.gateway = config.get('cloudstack', 'z0p1.private.gateway')
p1.startip = config.get('cloudstack', 'z0p1.private.pod.startip')
p1.endip = config.get('cloudstack', 'z0p1.private.pod.endip')
p1.netmask = config.get('cloudstack', 'z0p1.private.netmask')
#Second public range
v1 = iprange()
v1.gateway = config.get('cloudstack', 'z0p1.public.gateway')
v1.startip = config.get('cloudstack', 'z0p1.public.vlan.startip')
v1.endip = config.get('cloudstack', 'z0p1.public.vlan.endip')
v1.netmask = config.get('cloudstack', 'z0p1.public.netmask')
v1.vlan = config.get('cloudstack', 'z0p1.public.vlan')
z.ipranges.append(v1)
#cluster in pod 2
c1 = cluster()
c1.clustername = 'z0p1c0'
c1.hypervisor = config.get('cloudstack', 'hypervisor')
c1.clustertype = 'CloudManaged'
#Host 1
h2 = host()
h2.username = 'root'
h2.password = config.get('cloudstack', 'host.password')
h2.url = 'http://%s'%(config.get('cloudstack', 'z0p1c0h0.host'))
c1.hosts.append(h2)
#Primary 1
ps2 = primaryStorage()
ps2.name = 'z0p1c0ps0'
ps2.url = config.get('cloudstack', 'z0p1c0ps0.primary.pool')
c1.primaryStorages.append(ps2)
p1.clusters.append(c1)
z.pods.append(p1)
secondary = secondaryStorage()
secondary.url = config.get('cloudstack', 'z0.secondary.pool')
secondary.provider = "NFS"
z.secondaryStorages.append(secondary)
'''Add zone'''
zs.zones.append(z)
'''Add mgt server'''
mgt = managementServer()
mgt.mgtSvrIp = config.get('environment', 'mshost')
zs.mgtSvr.append(mgt)
'''Add a database'''
db = dbServer()
db.dbSvr = config.get('environment', 'mysql.host')
db.user = config.get('environment', 'mysql.cloud.user')
db.passwd = config.get('environment', 'mysql.cloud.passwd')
zs.dbSvr = db
'''Add some configuration'''
[zs.globalConfig.append(cfg) for cfg in getGlobalSettings(config)]
'''Add loggers'''
testClientLogger = logger()
testClientLogger.name = 'TestClient'
testClientLogger.file = '/var/log/testclient.log'
testCaseLogger = logger()
testCaseLogger.name = 'TestCase'
testCaseLogger.file = '/var/log/testcase.log'
zs.logger.append(testClientLogger)
zs.logger.append(testCaseLogger)
return zs
if __name__ == '__main__':
parser = OptionParser()
parser.add_option('-i', '--input', action='store', default='setup.properties', \
dest='input', help='file containing environment setup information')
parser.add_option('-o', '--output', action='store', default='./sandbox.cfg', \
dest='output', help='path where environment json will be generated')
(opts, args) = parser.parse_args()
cfg_parser = SafeConfigParser()
cfg_parser.read(opts.input)
cfg = describeResources(cfg_parser)
generate_setup_config(cfg, opts.output)


@ -0,0 +1,162 @@
{
"zones": [
{
"name": "Sandbox-XenServer",
"guestcidraddress": "10.1.1.0/24",
"dns1": "10.223.75.10",
"physical_networks": [
{
"broadcastdomainrange": "Zone",
"name": "Sandbox-pnet",
"traffictypes": [
{
"typ": "Guest"
},
{
"typ": "Management"
},
{
"typ": "Public"
}
],
"providers": [
{
"broadcastdomainrange": "ZONE",
"name": "VirtualRouter"
},
{
"broadcastdomainrange": "ZONE",
"name": "VpcVirtualRouter"
}
]
}
],
"ipranges": [
{
"startip": "10.223.158.2",
"endip": "10.223.158.20",
"netmask": "255.255.255.128",
"vlan": "580",
"gateway": "10.223.158.1"
}
],
"networktype": "Advanced",
"pods": [
{
"endip": "10.223.78.150",
"name": "POD0",
"startip": "10.223.78.130",
"netmask": "255.255.255.128",
"clusters": [
{
"clustername": "C0",
"hypervisor": "XenServer",
"hosts": [
{
"username": "root",
"url": "http://acs-qa-h20",
"password": "password"
}
],
"clustertype": "CloudManaged",
"primaryStorages": [
{
"url": "nfs://nfs2.lab.vmops.com/export/home/automation/asf/primary",
"name": "PS0"
}
]
}
],
"gateway": "10.223.78.129"
}
],
"internaldns1": "10.223.75.10",
"secondaryStorages": [
{
"url": "nfs://nfs2.lab.vmops.com/export/home/automation/asf/secondary"
}
]
}
],
"dbSvr": {
"dbSvr": "10.223.75.41",
"passwd": "cloud",
"db": "cloud",
"port": 3306,
"user": "cloud"
},
"logger": [
{
"name": "TestClient",
"file": "/var/log/testclient.log"
},
{
"name": "TestCase",
"file": "/var/log/testcase.log"
}
],
"globalConfig": [
{
"name": "storage.cleanup.interval",
"value": "300"
},
{
"name": "direct.agent.load.size",
"value": "1000"
},
{
"name": "default.page.size",
"value": "10000"
},
{
"name": "instance.name",
"value": "QA"
},
{
"name": "workers",
"value": "10"
},
{
"name": "vm.op.wait.interval",
"value": "5"
},
{
"name": "account.cleanup.interval",
"value": "600"
},
{
"name": "guest.domain.suffix",
"value": "sandbox.xen"
},
{
"name": "expunge.delay",
"value": "60"
},
{
"name": "vm.allocation.algorithm",
"value": "random"
},
{
"name": "expunge.interval",
"value": "60"
},
{
"name": "expunge.workers",
"value": "3"
},
{
"name": "secstorage.allowed.internal.sites",
"value": "10.223.0.0/16"
},
{
"name": "check.pod.cidrs",
"value": "true"
}
],
"mgtSvr": [
{
"mgtSvrIp": "10.223.75.41",
"port": 8096
}
]
}


@ -0,0 +1,130 @@
from optparse import OptionParser
from signal import alarm, signal, SIGALRM, SIGKILL
from subprocess import PIPE, Popen
import logging
import os
import paramiko
import select
class wget(object):
def __init__(self, filename, url, path=None):
pass
def background(self, handler):
pass
class remoteSSHClient(object):
def __init__(self, host, port, user, passwd):
self.host = host
self.port = port
self.user = user
self.passwd = passwd
self.ssh = paramiko.SSHClient()
self.ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
self.ssh.connect(str(host),int(port), user, passwd)
except paramiko.SSHException, sshex:
logging.debug(repr(sshex))
def execute(self, command):
stdin, stdout, stderr = self.ssh.exec_command(command)
output = stdout.readlines()
errors = stderr.readlines()
results = []
if output is not None and len(output) == 0:
if errors is not None and len(errors) > 0:
for error in errors:
results.append(error.rstrip())
else:
for strOut in output:
results.append(strOut.rstrip())
return results
def execute_buffered(self, command, bufsize=512):
transport = self.ssh.get_transport()
channel = transport.open_session()
try:
            channel.exec_command(command)  # Channel.exec_command returns None; output is read via channel.recv below
while True:
rl, wl, xl = select.select([channel],[],[],0.0)
if len(rl) > 0:
logging.debug(channel.recv(bufsize))
except paramiko.SSHException, e:
logging.debug(repr(e))
def scp(self, srcFile, destPath):
transport = paramiko.Transport((self.host, int(self.port)))
transport.connect(username = self.user, password=self.passwd)
sftp = paramiko.SFTPClient.from_transport(transport)
try:
sftp.put(srcFile, destPath)
except IOError, e:
raise e
class bash:
def __init__(self, args, timeout=600):
self.args = args
logging.debug("execute:%s"%args)
self.timeout = timeout
self.process = None
self.success = False
self.run()
def run(self):
class Alarm(Exception):
pass
def alarm_handler(signum, frame):
raise Alarm
try:
self.process = Popen(self.args, shell=True, stdout=PIPE, stderr=PIPE)
if self.timeout != -1:
signal(SIGALRM, alarm_handler)
alarm(self.timeout)
try:
self.stdout, self.stderr = self.process.communicate()
if self.timeout != -1:
alarm(0)
except Alarm:
os.kill(self.process.pid, SIGKILL)
self.success = self.process.returncode == 0
except:
pass
if not self.success:
logging.debug("Failed to execute:" + self.getErrMsg())
def isSuccess(self):
return self.success
def getStdout(self):
try:
return self.stdout.strip("\n")
except AttributeError:
return ""
    def getLines(self):
        try:
            return self.stdout.split("\n")
        except AttributeError:
            return []
def getStderr(self):
try:
return self.stderr.strip("\n")
except AttributeError:
return ""
def getErrMsg(self):
if self.isSuccess():
return ""
if self.getStderr() is None or self.getStderr() == "":
return self.getStdout()
else:
return self.getStderr()


@ -0,0 +1,152 @@
#!/usr/bin/env python
from ConfigParser import ConfigParser
from jenkinsapi import api, jenkins, job
from time import sleep as delay
import jenkinsapi
import logging
import os
class BuildGenerator(object):
"""
1. Create a job on Hudson/Jenkins
2. Poll for job status
3. Fetch latest successful job
4. Resolve Job to Repo URL/fetch artifact
"""
def __init__(self, username=None, passwd=None, url="http://hudson.lab.vmops.com", job='CloudStack-PRIVATE'):
#TODO: Change the username to "vogon" for automation
self.hudsonurl = url
self.tarball = None
self.build_number = 0
#self.jenkinsurl = "http://jenkins.jobclient.org"
if username and passwd:
self.username = username
self.password = passwd
else:
logging.warning("no username given, logging in with default creds")
self.username = "marvin"
self.password = "marvin"
try:
j = jenkins.Jenkins(self.hudsonurl, self.username, self.password)
self.jobclient = j.get_job(job)
except Exception, e:
logging.error("Failed to login to Hudson")
raise e
else:
logging.debug("successfully logged into hudson instance %s \
using username, passwd : %s, %s" \
%(self.hudsonurl, self.username, self.password))
def readBuildConfiguration(self, cfg_file):
cfg = ConfigParser()
cfg.optionxform = str
if cfg.read(cfg_file):
logging.debug("Using config file found at %s"%cfg_file)
self.config = cfg
else:
raise IOError("Cannot find file %s"%cfg_file)
def parseConfigParams(self):
#TODO: passing a config file should be allowed as cmd line args
params = {}
if self.config:
logging.debug("build params found:")
for k,v in dict(self.config.items('build_params')).iteritems():
logging.debug("%s : %s"%(k,v))
return dict(self.config.items('build_params'))
else:
logging.debug("build config not found")
raise ValueError("Build configuration was not initialized")
def build(self, wait=20):
if self.config and self.jobclient:
while self.jobclient.is_queued_or_running():
logging.debug("Waiting %ss for running/queued build to complete"%wait)
delay(wait)
self.jobclient.invoke(params=self.parseConfigParams())
self.build_number = self.jobclient.get_last_buildnumber()
self.paramlist = self.parseConfigParams()
logging.info("Started build : %d"%self.jobclient.get_last_buildnumber())
while self.jobclient.is_running():
logging.debug("Polling build status in %ss"%wait)
delay(wait)
logging.info("Completed build : %d"%self.jobclient.get_last_buildnumber())
logging.debug("Last Good Build : %d, Last Build : %d, Our Build : \
%d"%(self.jobclient.get_last_good_buildnumber(), \
self.jobclient.get_last_buildnumber(), \
self.build_number))
if self.jobclient.get_last_good_buildnumber() == self.build_number:
return self.build_number
else: #lastGoodBuild != ourBuild
our_build = self.getBuildWithNumber(self.build_number)
if our_build is not None and our_build.get_status() == 'SUCCESS':
logging.debug("Our builds' %d status %s"%(self.build_number,
our_build.get_status()))
return self.build_number
else:
logging.debug("Our builds' %d status %s"%(self.build_number,
our_build.get_status()))
return 0
def getLastGoodBuild(self):
return self.jobclient.get_build(self.build_number)
def getBuildWithNumber(self, number):
if number > 0:
bld = self.jobclient.get_build(number)
self.build_number = number
self.paramlist = self.getBuildParamList(bld)
return bld
def getBuildParamValue(self, name):
return self.paramlist[name]
def getTarballName(self):
if self.tarball is not None:
return self.tarball
else:
self.resolveRepoPath()
return self.getTarballName()
def getArtifacts(self):
artifact_dict = self.getLastGoodBuild().get_artifact_dict()
if artifact_dict is not None:
return artifact_dict
def sift(self, dic):
return dic['name'], dic['value']
def getBuildParamList(self, bld):
params = bld.get_actions()['parameters']
return dict(map(self.sift, params))
def resolveRepoPath(self):
tarball_list = ['CloudStack-' ,
self.getBuildParamValue('PACKAGE_VERSION') ,
'-0.', str(self.build_number) , '-' ,
self.getBuildParamValue('DO_DISTRO_PACKAGES') ,
'.tar.gz']
self.tarball = ''.join(tarball_list)
path = os.path.join('yumrepo.lab.vmops.com', 'releases', 'rhel', \
self.getBuildParamValue('DO_DISTRO_PACKAGES').strip('rhel'), \
self.getBuildParamValue('PUSH_TO_REPO'), \
self.tarball)
logging.debug("resolved last good build generated by us to: %s"%path)
return path
if __name__ == '__main__':
# hudson = BuildGenerator(job="marvin")
# hudson.readBuildConfiguration('build.cfg')
# hudson.build()
hudson = BuildGenerator("CloudStack-PRIVATE")
hudson.readBuildConfiguration('build.cfg')
hudson.build(wait=60)
# hudson.getBuildWithNumber(2586)


@ -0,0 +1,394 @@
from ConfigParser import ConfigParser
from bashUtils import bash
from marvin import configGenerator
from marvin import sshClient
from marvin import dbConnection
from argparse import ArgumentParser
from time import sleep as delay
from netaddr import IPNetwork
from netaddr import IPAddress
import contextlib
import telnetlib
import logging
import threading
import Queue
import sys
import random
import string
import urllib2
import urlparse
import socket
WORKSPACE="."
IPMI_PASS="calvin"
DOMAIN = 'fmt.vmops.com'
macinfo = {}
ipmiinfo = {}
cobblerinfo = {}
def generate_system_tables(config):
dhcp = config.items("dhcp")
for entry in dhcp:
macinfo[entry[0]] = {}
mac, passwd, ip = entry[1].split(",")
macinfo[entry[0]]["ethernet"] = mac
macinfo[entry[0]]["password"] = passwd
macinfo[entry[0]]["address"] = ip
ipmi = config.items("ipmi")
for entry in ipmi:
ipmiinfo[entry[0]] = entry[1]
cobbler = config.items("cobbler")
for entry in cobbler:
cobblerinfo[entry[0]] = {}
net, gw, cblrgw = entry[1].split(",")
cobblerinfo[entry[0]]["network"] = net
cobblerinfo[entry[0]]["gateway"] = gw
cobblerinfo[entry[0]]["cblrgw"] = cblrgw
def initLogging(logFile=None, lvl=logging.INFO):
try:
if logFile is None:
logging.basicConfig(level=lvl, \
format="'%(asctime)-6s: %(name)s \
(%(threadName)s) - %(levelname)s - %(message)s'")
else:
logging.basicConfig(filename=logFile, level=lvl, \
format="'%(asctime)-6s: %(name)s \
(%(threadName)s) - %(levelname)s - %(message)s'")
except:
logging.basicConfig(level=lvl)
def mkdirs(path):
dir = bash("mkdir -p %s" % path)
def fetch(filename, url, path):
try:
zipstream = urllib2.urlopen(url)
tarball = open('/tmp/%s' % filename, 'wb')
tarball.write(zipstream.read())
tarball.close()
except urllib2.URLError, u:
raise u
except IOError:
raise
bash("mv /tmp/%s %s" % (filename, path))
def cobblerHomeResolve(ip_address, param="gateway"):
ipAddr = IPAddress(ip_address)
for nic, network in cobblerinfo.items():
subnet = IPNetwork(cobblerinfo[nic]["network"])
if ipAddr in subnet:
return cobblerinfo[nic][param]
def configureManagementServer(mgmt_host):
"""
We currently configure all mgmt servers on a single xen HV. In the future
replace this by launching instances via the API on a IaaS cloud using
desired template
"""
mgmt_vm = macinfo[mgmt_host]
mgmt_ip = macinfo[mgmt_host]["address"]
#Remove and re-add cobbler system
bash("cobbler system remove --name=%s"%mgmt_host)
bash("cobbler system add --name=%s --hostname=%s --mac-address=%s \
--netboot-enabled=yes --enable-gpxe=no \
--profile=%s --server=%s --gateway=%s"%(mgmt_host, mgmt_host,
mgmt_vm["ethernet"], mgmt_host,
cobblerHomeResolve(mgmt_ip, param='cblrgw'),
cobblerHomeResolve(mgmt_ip)));
bash("cobbler sync")
#Revoke all certs from puppetmaster
bash("puppet cert clean %s.%s"%(mgmt_host, DOMAIN))
#Start VM on xenserver
xenssh = \
sshClient.SshClient(macinfo["infraxen"]["address"],
22, "root",
macinfo["infraxen"]["password"])
logging.debug("bash vm-uninstall.sh -n %s"%(mgmt_host))
xenssh.execute("xe vm-uninstall force=true vm=%s"%mgmt_host)
logging.debug("bash vm-start.sh -n %s -m %s"%(mgmt_host, mgmt_vm["ethernet"]))
out = xenssh.execute("bash vm-start.sh -n %s -m %s"%(mgmt_host,
mgmt_vm["ethernet"]))
logging.info("started mgmt server with uuid: %s. Waiting for services .."%out);
return mgmt_host
def mountAndClean(host, path):
"""
Will mount and clear the files on NFS host in the path given. Obviously the
NFS server should be mountable where this script runs
"""
mnt_path = "/tmp/" + ''.join([random.choice(string.ascii_uppercase) for x in xrange(0, 10)])
mkdirs(mnt_path)
logging.info("cleaning up %s:%s" % (host, path))
mnt = bash("mount -t nfs %s:%s %s" % (host, path, mnt_path))
erase = bash("rm -rf %s/*" % mnt_path)
umnt = bash("umount %s" % mnt_path)
def cleanPrimaryStorage(cscfg):
"""
Clean all the NFS primary stores and prepare them for the next run
"""
for zone in cscfg.zones:
for pod in zone.pods:
for cluster in pod.clusters:
for primaryStorage in cluster.primaryStorages:
if urlparse.urlsplit(primaryStorage.url).scheme == "nfs":
mountAndClean(urlparse.urlsplit(primaryStorage.url).hostname, urlparse.urlsplit(primaryStorage.url).path)
logging.info("Cleaned up primary stores")
def seedSecondaryStorage(cscfg, hypervisor):
"""
erase secondary store and seed system VM template via puppet. The
secseeder.sh script is executed on mgmt server bootup which will mount and
place the system VM templates on the NFS
"""
mgmt_server = cscfg.mgtSvr[0].mgtSvrIp
logging.info("Secondary storage seeded via puppet with systemvm templates")
bash("rm -f /etc/puppet/modules/cloudstack/files/secseeder.sh")
for zone in cscfg.zones:
for sstor in zone.secondaryStorages:
shost = urlparse.urlsplit(sstor.url).hostname
spath = urlparse.urlsplit(sstor.url).path
spath = ''.join([shost, ':', spath])
logging.info("seeding %s systemvm template on %s"%(hypervisor, spath))
bash("echo '/bin/bash /root/redeploy.sh -s %s -h %s' >> /etc/puppet/modules/cloudstack/files/secseeder.sh"%(spath, hypervisor))
bash("chmod +x /etc/puppet/modules/cloudstack/files/secseeder.sh")
def refreshHosts(cscfg, hypervisor="xen", profile="xen602"):
"""
Removes cobbler system from previous run.
Creates a new system for current run.
Ipmi boots from PXE - default to Xenserver profile
"""
for zone in cscfg.zones:
for pod in zone.pods:
for cluster in pod.clusters:
for host in cluster.hosts:
hostname = urlparse.urlsplit(host.url).hostname
logging.debug("attempting to refresh host %s"%hostname)
#revoke certs
bash("puppet cert clean %s.%s"%(hostname, DOMAIN))
#setup cobbler profiles and systems
try:
hostmac = macinfo[hostname]['ethernet']
hostip = macinfo[hostname]['address']
bash("cobbler system remove \
--name=%s"%(hostname))
bash("cobbler system add --name=%s --hostname=%s \
--mac-address=%s --netboot-enabled=yes \
--enable-gpxe=no --profile=%s --server=%s \
--gateway=%s"%(hostname, hostname, hostmac,
profile, cobblerHomeResolve(hostip, param='cblrgw'),
cobblerHomeResolve(hostip)))
bash("cobbler sync")
except KeyError:
logging.error("No mac found against host %s. Exiting"%hostname)
sys.exit(2)
#set ipmi to boot from PXE
try:
ipmi_hostname = ipmiinfo[hostname]
logging.debug("found IPMI nic on %s for host %s"%(ipmi_hostname, hostname))
bash("ipmitool -Uroot -P%s -H%s chassis bootdev \
pxe"%(IPMI_PASS, ipmi_hostname))
bash("ipmitool -Uroot -P%s -H%s chassis power cycle"
%(IPMI_PASS, ipmi_hostname))
logging.debug("Sent PXE boot for %s"%ipmi_hostname)
except KeyError:
logging.error("No ipmi host found against %s. Exiting"%hostname)
sys.exit(2)
yield hostname
delay(5) #to begin pxe boot process or wait returns immediately
def _isPortListening(host, port, timeout=120):
"""
Scans 'host' for a listening service on 'port'
"""
tn = None
while timeout != 0:
try:
tn = telnetlib.Telnet(host, port, timeout=timeout)
timeout = 0
except Exception, e:
logging.debug("Failed to telnet connect to %s:%s with %s"%(host, port, e))
delay(5)
timeout = timeout - 5
if tn is None:
logging.error("No service listening on port %s:%d"%(host, port))
return False
else:
logging.info("Unrecognizable service up on %s:%d"%(host, port))
return True
def _isPortOpen(hostQueue, port=22):
"""
Checks if there is an open socket on specified port. Default is SSH
"""
ready = []
host = hostQueue.get()
while True:
channel = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
channel.settimeout(20)
try:
logging.debug("Attempting port=%s connect to host %s"%(port, host))
err = channel.connect_ex((host, port))
except socket.error, e:
logging.debug("encountered %s retrying in 5s"%e)
err = e.errno
delay(5)
finally:
if err == 0:
ready.append(host)
logging.info("host: %s is ready"%host)
break
else:
logging.debug("[%s] host %s is not ready. Retrying"%(err, host))
delay(5)
channel.close()
hostQueue.task_done()
def waitForHostReady(hostlist):
logging.info("Waiting for hosts %s to refresh"%hostlist)
hostQueue = Queue.Queue()
for host in hostlist:
t = threading.Thread(name='HostWait-%s'%hostlist.index(host), target=_isPortOpen,
args=(hostQueue, ))
t.setDaemon(True)
t.start()
[hostQueue.put(host) for host in hostlist]
hostQueue.join()
logging.info("All hosts %s are up"%hostlist)
def isManagementServiceStable(ssh=None, timeout=300, interval=5):
logging.info("Waiting for cloudstack-management service to become stable")
if ssh is None:
return False
while timeout != 0:
cs_status = ''.join(ssh.execute("service cloudstack-management status"))
logging.debug("[-%ds] Cloud Management status: %s"%(timeout, cs_status))
if cs_status.find('running') > 0:
pass
else:
ssh.execute("service cloudstack-management restart")
timeout = timeout - interval
delay(interval)
def testManagementServer(mgmt_host):
"""
Test that the cloudstack service is up
"""
#TODO: Add user registration step
mgmt_ip = macinfo[mgmt_host]["address"]
mgmt_pass = macinfo[mgmt_host]["password"]
with contextlib.closing(sshClient.SshClient(mgmt_ip, 22, "root", mgmt_pass)) as ssh:
isManagementServiceStable(ssh, timeout=60)
def prepareManagementServer(mgmt_host):
"""
Prepare the mgmt server for a marvin test run
"""
if _isPortListening(host=mgmt_host, port=22, timeout=10) \
and _isPortListening(host=mgmt_host, port=3306, timeout=10) \
and _isPortListening(host=mgmt_host, port=8080, timeout=300):
delay(120) #introduce dumb delay
mgmt_ip = macinfo[mgmt_host]["address"]
mgmt_pass = macinfo[mgmt_host]["password"]
with contextlib.closing(sshClient.SshClient(mgmt_ip, 22, "root", mgmt_pass)) as ssh:
# Open up 8096 for Marvin initial signup and register
ssh.execute("mysql -ucloud -pcloud -Dcloud -e\"update configuration set value=8096 where name like 'integr%'\"")
ssh.execute("service cloudstack-management restart")
else:
raise Exception("Reqd services (ssh, mysql) on management server are not up. Aborting")
if _isPortListening(host=mgmt_host, port=8096, timeout=300):
logging.info("All reqd services are up on the management server %s"%mgmt_host)
testManagementServer(mgmt_host)
return
else:
with contextlib.closing(sshClient.SshClient(mgmt_ip, 22, "root", mgmt_pass)) as ssh:
# Force kill java process
ssh.execute("killall -9 java; service cloudstack-management start")
if _isPortListening(host=mgmt_host, port=8096, timeout=300):
logging.info("All reqd services are up on the management server %s"%mgmt_host)
testManagementServer(mgmt_host)
return
else:
raise Exception("Reqd service for integration port on management server %s is not open. Aborting"%mgmt_host)
def init(lvl=logging.INFO):
initLogging(logFile=None, lvl=lvl)
if __name__ == '__main__':
parser = ArgumentParser()
parser.add_argument("-l", "--logging", action="store", default="INFO",
dest="loglvl", help="logging level (INFO|DEBUG|)")
parser.add_argument("-d", "--distro", action="store",
dest="distro", help="management server distro")
parser.add_argument("-v", "--hypervisor", action="store",
dest="hypervisor", help="hypervisor type")
parser.add_argument("-p", "--profile", action="store", default="xen602",
dest="profile", help="cobbler profile for hypervisor")
parser.add_argument("-e","--environment", help="environment properties file",
dest="system", action="store")
options = parser.parse_args()
if options.loglvl == "DEBUG":
init(logging.DEBUG)
elif options.loglvl == "INFO":
init(logging.INFO)
else:
init(logging.INFO)
if options.system is None:
logging.error("no environment properties given. exiting")
sys.exit(-1)
system = ConfigParser()
try:
with open(options.system, 'r') as cfg:
system.readfp(cfg)
except IOError, e:
logging.error("Specify a valid path for the environment properties")
raise e
generate_system_tables(system)
hosts = []
prepare_mgmt = False
if options.distro is not None:
#Management Server configuration - only tests the packaging
mgmt_host = "cloudstack-"+options.distro
prepare_mgmt = True
logging.info("Configuring management server %s"%mgmt_host)
hosts.append(configureManagementServer(mgmt_host))
if options.hypervisor is not None:
#FIXME: query profiles from hypervisor args through cobbler api
auto_config = options.hypervisor + ".cfg"
cscfg = configGenerator.getSetupConfig(auto_config)
logging.info("Reimaging hosts with %s profile for the %s \
hypervisor" % (options.profile, options.hypervisor))
hosts.extend(refreshHosts(cscfg, options.hypervisor, options.profile))
seedSecondaryStorage(cscfg, options.hypervisor)
cleanPrimaryStorage(cscfg)
waitForHostReady(hosts)
delay(30)
# Re-check because ssh connect works soon as post-installation occurs. But
# server is rebooted after post-installation. Assuming the server is up is
# wrong in these cases. To avoid this we will check again before continuing
# to add the hosts to cloudstack
waitForHostReady(hosts)
if prepare_mgmt:
prepareManagementServer(mgmt_host)
logging.info("All systems go!")


@ -0,0 +1,73 @@
hypervisor="kvm"
#Isolate the run into a virtualenv
/usr/local/bin/virtualenv-2.7 -p /usr/local/bin/python2.7 nightly-smoke-kvm-$BUILD_NUMBER
#Copy the tests into the virtual env
rsync -az test nightly-smoke-kvm-$BUILD_NUMBER/
cd nightly-smoke-kvm-$BUILD_NUMBER
## Start
source bin/activate
#Get Marvin and install
tar=$(wget -O - http://jenkins.cloudstack.org:8080/job/build-marvin-4.0/lastSuccessfulBuild/artifact/tools/marvin/dist/ | grep Marvin | sed -e :a -e 's/<[^>]*>//g;/</N;//ba' | sed -e 's/[ \t]*//g' | cut -d"z" -f1)'z'
url='http://jenkins.cloudstack.org:8080/job/build-marvin-4.0/lastSuccessfulBuild/artifact/tools/marvin/dist/'$tar
wget $url
#Latest deployment configs for marvin
git clone https://github.com/vogxn/cloud-autodeploy.git
cd cloud-autodeploy
git checkout acs-infra-test
cd ..
#Install necessary python eggs
pip -q install $tar
pip -q install netaddr
pip -q install /opt/xunitmp ## Plugin is not in nose-mainline yet: https://github.com/nose-devs/nose/issues/2 ##
#Install marvin-nose plugin
pip -q install lib/python2.7/site-packages/marvin/
#Deploy the configuration - yes/no
if [[ $DEPLOY == "yes" ]]; then
cd cloud-autodeploy
if [[ $hypervisor == 'xen' ]];then
profile='xen602'
else
profile='rhel63-kvm'
fi
python configure.py -v $hypervisor -d $distro -p $profile -l $LOG_LVL
cd ../test
nosetests -v --with-marvin --marvin-config=../cloud-autodeploy/$hypervisor.cfg -w /tmp
#Restart to apply global settings
python ../cloud-autodeploy/restartMgmt.py --config ../cloud-autodeploy/$hypervisor.cfg
cd $WORKSPACE/nightly-smoke-kvm-$BUILD_NUMBER
fi
#Health Check
nosetests -v --with-marvin --marvin-config=cloud-autodeploy/$hypervisor.cfg --load cloud-autodeploy/testSetupSuccess.py
#Setup Test Data
cd test
bash setup-test-data.sh -t integration/smoke -m 10.223.75.41 -p password -d 10.223.75.41 -h $hypervisor
for file in `find integration/smoke/ -name '*.py' -type f`
do
sed -i "s/http:\/\/iso.linuxquestions.org\/download\/504\/1819\/http\/gd4.tuwien.ac.at\/dsl-4.4.10.iso/http:\/\/nfs1.lab.vmops.com\/isos_32bit\/dsl-4.4.10.iso/g" $file
done
if [[ $? -ne 0 ]]; then
echo "Problem seeding test data"
exit 2
fi
if [[ $DEBUG == "yes" ]]; then
nosetests -v --with-marvin --marvin-config=../cloud-autodeploy/$hypervisor.cfg -w integration/smoke --load --with-xunitmp --collect-only
else
set +e
nosetests -v --processes=5 --process-timeout=3600 --with-marvin --marvin-config=`pwd`/../cloud-autodeploy/$hypervisor.cfg -w integration/smoke --load --with-xunitmp
set -e
fi
cp -fv integration/smoke/nosetests.xml $WORKSPACE
#deactivate, cleanup and exit
deactivate
rm -rf nightly-smoke-kvm-$BUILD_NUMBER


@ -0,0 +1,225 @@
{
"zones": [
{
"name": "z0",
"guestcidraddress": "10.1.1.0/24",
"dns2": "8.8.8.8",
"dns1": "8.8.8.8",
"physical_networks": [
{
"name": "z0-pnet",
"providers": [
{
"broadcastdomainrange": "ZONE",
"name": "VirtualRouter"
},
{
"broadcastdomainrange": "ZONE",
"name": "VpcVirtualRouter"
},
{
"broadcastdomainrange": "ZONE",
"name": "InternalLbVm"
}
],
"broadcastdomainrange": "Zone",
"vlan": "2001-2050",
"traffictypes": [
{
"typ": "Guest"
},
{
"typ": "Management"
},
{
"typ": "Public"
}
],
"isolationmethods": [
"VLAN"
]
}
],
"securitygroupenabled": "false",
"ipranges": [
{
"startip": "10.208.10.10",
"endip": "10.208.10.62",
"netmask": "255.255.255.192",
"vlan": "100",
"gateway": "10.208.10.1"
},
{
"startip": "10.208.10.66",
"endip": "10.208.10.126",
"netmask": "255.255.255.192",
"vlan": "101",
"gateway": "10.208.10.65"
}
],
"networktype": "Advanced",
"pods": [
{
"endip": "10.208.8.75",
"name": "z0p0",
"startip": "10.208.8.70",
"netmask": "255.255.255.192",
"clusters": [
{
"clustername": "z0p0c0",
"hypervisor": "KVM",
"hosts": [
{
"username": "root",
"url": "http://apache-81-3",
"password": "password"
},
{
"username": "root",
"url": "http://apache-81-2",
"password": "password"
}
],
"clustertype": "CloudManaged",
"primaryStorages": [
{
"url": "nfs://nfs.fmt.vmops.com:/export/automation/acs/primary",
"name": "z0p0c0ps0"
},
{
"url": "nfs://nfs.fmt.vmops.com:/export/automation/acs/primary1",
"name": "z0p0c0ps1"
}
]
}
],
"gateway": "10.208.8.65"
},
{
"endip": "10.208.8.205",
"name": "z0p1",
"startip": "10.208.8.200",
"netmask": "255.255.255.192",
"clusters": [
{
"clustername": "z0p1c0",
"hypervisor": "KVM",
"hosts": [
{
"username": "root",
"url": "http://apache-83-1",
"password": "password"
}
],
"clustertype": "CloudManaged",
"primaryStorages": [
{
"url": "nfs://nfs.fmt.vmops.com:/export/automation/acs/primary2",
"name": "z0p1c0ps0"
}
]
}
],
"gateway": "10.208.8.193"
}
],
"internaldns1": "10.208.8.5",
"internaldns2": "10.208.8.5",
"secondaryStorages": [
{
"url": "nfs://nfs.fmt.vmops.com:/export/automation/acs/secondary",
"provider": "NFS"
}
]
}
],
"dbSvr": {
"dbSvr": "cloudstack-centos63",
"passwd": "cloud",
"db": "cloud",
"port": 3306,
"user": "cloud"
},
"logger": [
{
"name": "TestClient",
"file": "/var/log/testclient.log"
},
{
"name": "TestCase",
"file": "/var/log/testcase.log"
}
],
"globalConfig": [
{
"name": "storage.cleanup.interval",
"value": "120"
},
{
"name": "direct.agent.load.size",
"value": "1000"
},
{
"name": "default.page.size",
"value": "10000"
},
{
"name": "account.cleanup.interval",
"value": "120"
},
{
"name": "workers",
"value": "10"
},
{
"name": "vm.op.wait.interval",
"value": "5"
},
{
"name": "network.gc.interval",
"value": "120"
},
{
"name": "guest.domain.suffix",
"value": "sandbox.kvm"
},
{
"name": "expunge.delay",
"value": "60"
},
{
"name": "vm.allocation.algorithm",
"value": "random"
},
{
"name": "expunge.interval",
"value": "60"
},
{
"name": "enable.dynamic.scale.vm",
"value": "true"
},
{
"name": "instance.name",
"value": "QA"
},
{
"name": "expunge.workers",
"value": "3"
},
{
"name": "secstorage.allowed.internal.sites",
"value": "10.208.8.0/26,10.208.8.65/26,10.208.8.128/26,10.208.8.192/26,10.208.13.194/32"
},
{
"name": "check.pod.cidrs",
"value": "true"
}
],
"mgtSvr": [
{
"mgtSvrIp": "cloudstack-centos63",
"port": 8096
}
]
}


@ -0,0 +1,86 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
[globals]
#global settings in cloudstack
expunge.delay=60
expunge.interval=60
storage.cleanup.interval=120
account.cleanup.interval=120
network.gc.interval=120
expunge.workers=3
workers=10
vm.allocation.algorithm=random
vm.op.wait.interval=5
guest.domain.suffix=sandbox.kvm
instance.name=QA
direct.agent.load.size=1000
default.page.size=10000
check.pod.cidrs=true
secstorage.allowed.internal.sites=10.208.8.0/26,10.208.8.65/26,10.208.8.128/26,10.208.8.192/26,10.208.13.194/32
enable.dynamic.scale.vm=true
[environment]
dns1=8.8.8.8
dns2=8.8.8.8
internal_dns1=10.208.8.5
internal_dns2=10.208.8.5
mshost=cloudstack-centos63
mysql.host=cloudstack-centos63
mysql.cloud.user=cloud
mysql.cloud.passwd=cloud
[cloudstack]
hypervisor=KVM
host.password=password
#Zone 1
z0.guest.vlan=2001-2050
z0p0.private.gateway=10.208.8.65
z0p0.private.pod.startip=10.208.8.70
z0p0.private.pod.endip=10.208.8.75
z0p0.private.netmask=255.255.255.192
z0p0.public.gateway=10.208.10.1
z0p0.public.vlan.startip=10.208.10.10
z0p0.public.vlan.endip=10.208.10.62
z0p0.public.netmask=255.255.255.192
z0p0.public.vlan=100
z0p0c0h0.host=apache-81-3
z0p0c0h1.host=apache-81-2
z0p0c0ps0.primary.pool=nfs://nfs.fmt.vmops.com:/export/automation/acs/primary
z0p0c0ps1.primary.pool=nfs://nfs.fmt.vmops.com:/export/automation/acs/primary1
z0p1.private.gateway=10.208.8.193
z0p1.private.pod.startip=10.208.8.200
z0p1.private.pod.endip=10.208.8.205
z0p1.private.netmask=255.255.255.192
z0p1.public.gateway=10.208.10.65
z0p1.public.vlan.startip=10.208.10.66
z0p1.public.vlan.endip=10.208.10.126
z0p1.public.netmask=255.255.255.192
z0p1.public.vlan=101
z0p1c0h0.host=apache-83-1
z0p1c0ps0.primary.pool=nfs://nfs.fmt.vmops.com:/export/automation/acs/primary2
z0.secondary.pool=nfs://nfs.fmt.vmops.com:/export/automation/acs/secondary


@ -0,0 +1,14 @@
from marvin import dbConnection
def _openIntegrationPort():
dbhost = '10.223.132.200'#csconfig.dbSvr.dbSvr
dbuser = 'cloud'#csconfig.dbSvr.user
dbpasswd = 'cloud'#csconfig.dbSvr.passwd
conn = dbConnection.dbConnection(dbhost, 3306, dbuser, dbpasswd, "cloud")
query = "update configuration set value='8096' where name='integration.api.port'"
print conn.execute(query)
query = "select name,value from configuration where name='integration.api.port'"
print conn.execute(query)
if __name__ == '__main__':
_openIntegrationPort()


@ -0,0 +1,108 @@
#!/bin/bash
#set -x
usage() {
printf "Usage: %s:\n
[-s nfs path to secondary storage <nfs-server:/export/path> ] [-u url to system template] [-h hypervisor type (kvm|xenserver|vmware) ]\n" $(basename $0) >&2
printf "\nThe -s flag will clean the secondary path and install the specified
hypervisor's system template as per -h, if -h is not given then xenserver is
assumed\n"
}
failed() {
exit $1
}
#flags
sflag=
hflag=
uflag=
VERSION="1.0.1"
echo "Redeploy Version: $VERSION"
#some defaults
spath='nfs2.lab.vmops.com:/export/home/bvt/secondary'
hypervisor='xenserver'
sysvmurl='http://download.cloud.com/templates/acton/acton-systemvm-02062012.vhd.bz2'
systemvm_seeder='/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt'
while getopts 'u:s:h:' OPTION
do
case $OPTION in
s) sflag=1
spath="$OPTARG"
;;
h) hflag=1
hypervisor="$OPTARG"
;;
u) uflag=1
sysvmurl="$OPTARG"
;;
?) usage
failed 2
;;
esac
done
if [[ -e /etc/redhat-release ]]
then
cat /etc/redhat-release
else
echo "script works on rpm environments only"
exit 5
fi
#check if process is running
proc=$(ps aux | grep cloud | wc -l)
if [[ $proc -lt 2 ]]
then
echo "Cloud process not running"
if [[ -e /var/run/cloud-management.pid ]]
then
rm -f /var/run/cloud-management.pid
fi
else
#stop service
service cloud-management stop
fi
#TODO: archive old logs
#refresh log state
cat /dev/null > /var/log/cloud/management/management-server.log
cat /dev/null > /var/log/cloud/management/api-server.log
cat /dev/null > /var/log/cloud/management/catalina.out
#replace disk size reqd to 1GB max
sed -i 's/DISKSPACE=5120000/DISKSPACE=20000/g' $systemvm_seeder
if [[ "$uflag" != "1" && "$hypervisor" != "xenserver" ]]
then
echo "URL of systemvm template is reqd."
usage
fi
if [[ "$sflag" == "1" ]]
then
mkdir -p /tmp/secondary
mount -t nfs $spath /tmp/secondary
rm -rf /tmp/secondary/*
if [[ "$hflag" == "1" && "$hypervisor" == "xenserver" ]]
then
bash -x $systemvm_seeder -m /tmp/secondary/ -u $sysvmurl -h xenserver
elif [[ "$hflag" == "1" && "$hypervisor" == "kvm" ]]
then
bash -x $systemvm_seeder -m /tmp/secondary/ -u $sysvmurl -h kvm
elif [[ "$hflag" == "1" && "$hypervisor" == "vmware" ]]
then
bash -x $systemvm_seeder -m /tmp/secondary/ -u $sysvmurl -h vmware
else
bash -x $systemvm_seeder -m /tmp/secondary/ -u $sysvmurl -h xenserver
fi
umount /tmp/secondary
else
echo "please provide the nfs secondary storage path where templates are stored"
usage
fi


@ -0,0 +1,36 @@
from ConfigParser import ConfigParser
from optparse import OptionParser
import marvin
from marvin import configGenerator
from marvin import sshClient
from time import sleep as delay
import telnetlib
import socket
if __name__ == '__main__':
parser = OptionParser()
parser.add_option("-c", "--config", action="store", default="xen.cfg",
dest="config", help="the path where the server configurations is stored")
(options, args) = parser.parse_args()
    if options.config is None:
        raise ValueError("a path to the server configuration file is required")
cscfg = configGenerator.getSetupConfig(options.config)
mgmt_server = cscfg.mgtSvr[0].mgtSvrIp
ssh = sshClient.SshClient(mgmt_server, 22, "root", "password")
ssh.execute("service cloudstack-management restart")
#Telnet wait until api port is open
tn = None
timeout = 120
while timeout > 0:
try:
tn = telnetlib.Telnet(mgmt_server, 8096, timeout=120)
break
except Exception:
delay(1)
timeout = timeout - 1
if tn is None:
raise socket.error("Unable to reach API port")


@ -0,0 +1,25 @@
[cobbler]
#nic=network,gateway,cobbler_gateway
eth0=10.223.75.0/25,10.223.75.1,10.223.75.10
eth1=10.223.78.0/25,10.223.78.1,10.223.78.2
eth2=10.223.78.128/25,10.223.78.129,10.223.78.130
[ipmi]
#hostname=ipmi_ip
infra=10.223.103.86
acs-qa-h11=10.223.103.87
acs-qa-h20=10.223.103.96
acs-qa-h21=10.223.103.97
acs-qa-h23=10.223.103.99
[dhcp]
#hostname=mac,passwd,ipv4
infra=9e:40:7d:09:f2:ef,password,10.223.75.10
cloudstack-rhel=b6:c8:db:33:72:41,password,10.223.75.41
cloudstack-ubuntu=b6:c8:db:33:72:42,password,10.223.75.42
jenkins=b6:c8:db:33:72:43,password,10.223.75.43
acs-qa-h11=d0:67:e5:ef:e0:1b,password,10.223.75.20
acs-qa-h20=d0:67:e5:ef:e0:ff,password,10.223.78.20
acs-qa-h21=d0:67:e5:ef:e0:2d,password,10.223.78.140
acs-qa-h23=d0:67:e5:f1:b1:36,password,10.223.75.21
acs-qa-jenkins-slave=9e:2f:91:31:f4:8d,password,10.223.75.11


@ -0,0 +1,64 @@
import marvin
import unittest
from marvin.cloudstackTestCase import *
from marvin.cloudstackAPI import *
from time import sleep as delay
class TestSetupSuccess(cloudstackTestCase):
"""
Test to verify if the cloudstack is ready to launch tests upon
1. Verify that system VMs are up and running in all zones
2. Verify that built-in templates are Ready in all zones
"""
@classmethod
def setUpClass(cls):
cls.apiClient = super(TestSetupSuccess, cls).getClsTestClient().getApiClient()
zones = listZones.listZonesCmd()
cls.zones_list = cls.apiClient.listZones(zones)
cls.retry = 50
def test_systemVmReady(self):
"""
system VMs need to be ready and Running for each zone in cloudstack
"""
for z in self.zones_list:
retry = self.retry
while retry != 0:
self.debug("looking for system VMs in zone: %s, %s"%(z.id, z.name))
sysvms = listSystemVms.listSystemVmsCmd()
sysvms.zoneid = z.id
sysvms.state = 'Running'
sysvms_list = self.apiClient.listSystemVms(sysvms)
if sysvms_list is not None and len(sysvms_list) == 2:
assert len(sysvms_list) == 2
self.debug("found %d system VMs running {%s}"%(len(sysvms_list), sysvms_list))
break
retry = retry - 1
delay(60) #wait a minute for retry
self.assertNotEqual(retry, 0, "system VMs not Running in zone %s"%z.name)
def test_templateBuiltInReady(self):
"""
built-in templates CentOS to be ready
"""
for z in self.zones_list:
retry = self.retry
while retry != 0:
self.debug("Looking for at least one ready builtin template")
templates = listTemplates.listTemplatesCmd()
templates.templatefilter = 'featured'
templates.listall = 'true'
templates_list = self.apiClient.listTemplates(templates)
if templates_list is not None:
builtins = [tmpl for tmpl in templates_list if tmpl.templatetype == 'BUILTIN' and tmpl.isready == True]
if len(builtins) > 0:
self.debug("Found %d builtins ready for use %s"%(len(builtins), builtins))
break
retry = retry - 1
delay(60) #wait a minute for retry
self.assertNotEqual(retry, 0, "builtIn templates not ready in zone %s"%z.name)
@classmethod
def tearDownClass(cls):
pass


@ -0,0 +1,38 @@
#!/bin/bash
# Starts a vm on the xenserver with a predefined MAC and name-label
usage() {
printf "Usage: %s: -m <mac> -n <vm name>\n" $(basename $0) >&2
exit 2
}
mac=
vmname=
while getopts 'm:n:' OPTION
do
case $OPTION in
m) mac="$OPTARG"
;;
n) vmname="$OPTARG"
;;
?) usage
exit 1
;;
esac
done
if [ -z "$mac" ] || [ -z "$vmname" ]; then
    usage
fi
vmuuid=$(xe vm-install template=Other\ install\ media new-name-label="$vmname")
sruuid=$(xe sr-list type=lvm | grep uuid | awk '{print $5}')
vdiuuid=$(xe vdi-create name-label="$vmname" sharable=0 sr-uuid=$sruuid type=user virtual-size=21474836480)
vbduuid=$(xe vbd-create bootable=true mode=RW type=DISK device=0 unpluggable=true vdi-uuid=$vdiuuid vm-uuid=$vmuuid)
nwuuid=$(xe network-list bridge=xenbr0 | grep uuid | awk '{print $5}')
xe vif-create mac="$mac" network-uuid=$nwuuid device=0 vm-uuid=$vmuuid
# Boot from the network first, then fall back to the root disk
xe vm-param-set HVM-boot-params:order=nc uuid=$vmuuid
# Minimum memory requirements for RHEL/Ubuntu installs
xe vm-memory-limits-set static-min=1GiB static-max=1GiB dynamic-min=1GiB dynamic-max=1GiB uuid=$vmuuid
xe vm-start uuid=$vmuuid
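
A usage sketch (the script filename is assumed); the MAC should come from the [dhcp] table above so the VM PXE-boots into its cobbler profile:

    # hypothetical: start the cloudstack-rhel management server VM
    ./start-vm.sh -m b6:c8:db:33:72:41 -n cloudstack-rhel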


@ -0,0 +1,27 @@
#!/bin/bash
# Uninstalls a given VM
usage() {
printf "Usage: %s: -n <vm name>\n" $(basename $0) >&2
exit 2
}
vmname=
while getopts 'n:' OPTION
do
case $OPTION in
n) vmname="$OPTARG"
;;
?) usage
exit 1
;;
esac
done
[ -z "$vmname" ] && usage
for vdi_uuid in $(xe vdi-list name-label="$vmname" | grep ^uuid | awk '{print $5}')
do
    xe vdi-unlock --force uuid=$vdi_uuid
    xe vdi-destroy uuid=$vdi_uuid
done
xe vm-uninstall force=true vm="$vmname"
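
The complement of the start script above; a usage sketch (script filename assumed):

    # hypothetical: tear the VM down along with its disks
    ./stop-vm.sh -n cloudstack-rhel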


@ -0,0 +1,225 @@
{
"zones": [
{
"name": "z0",
"guestcidraddress": "10.1.1.0/24",
"dns2": "8.8.8.8",
"dns1": "8.8.8.8",
"physical_networks": [
{
"name": "z0-pnet",
"providers": [
{
"broadcastdomainrange": "ZONE",
"name": "VirtualRouter"
},
{
"broadcastdomainrange": "ZONE",
"name": "VpcVirtualRouter"
},
{
"broadcastdomainrange": "ZONE",
"name": "InternalLbVm"
}
],
"broadcastdomainrange": "Zone",
"vlan": "2001-2050",
"traffictypes": [
{
"typ": "Guest"
},
{
"typ": "Management"
},
{
"typ": "Public"
}
],
"isolationmethods": [
"VLAN"
]
}
],
"securitygroupenabled": "false",
"ipranges": [
{
"startip": "10.208.10.10",
"endip": "10.208.10.62",
"netmask": "255.255.255.192",
"vlan": "100",
"gateway": "10.208.10.1"
},
{
"startip": "10.208.10.66",
"endip": "10.208.10.126",
"netmask": "255.255.255.192",
"vlan": "101",
"gateway": "10.208.10.65"
}
],
"networktype": "Advanced",
"pods": [
{
"endip": "10.208.8.75",
"name": "z0p0",
"startip": "10.208.8.70",
"netmask": "255.255.255.192",
"clusters": [
{
"clustername": "z0p0c0",
"hypervisor": "XenServer",
"hosts": [
{
"username": "root",
"url": "http://apache-81-3",
"password": "password"
},
{
"username": "root",
"url": "http://apache-81-2",
"password": "password"
}
],
"clustertype": "CloudManaged",
"primaryStorages": [
{
"url": "nfs://nfs.fmt.vmops.com:/export/automation/acs/primary",
"name": "z0p0c0ps0"
},
{
"url": "nfs://nfs.fmt.vmops.com:/export/automation/acs/primary1",
"name": "z0p0c0ps1"
}
]
}
],
"gateway": "10.208.8.65"
},
{
"endip": "10.208.8.205",
"name": "z0p1",
"startip": "10.208.8.200",
"netmask": "255.255.255.192",
"clusters": [
{
"clustername": "z0p1c0",
"hypervisor": "XenServer",
"hosts": [
{
"username": "root",
"url": "http://apache-83-1",
"password": "password"
}
],
"clustertype": "CloudManaged",
"primaryStorages": [
{
"url": "nfs://nfs.fmt.vmops.com:/export/automation/acs/primary2",
"name": "z0p1c0ps0"
}
]
}
],
"gateway": "10.208.8.193"
}
],
"internaldns1": "10.208.8.5",
"internaldns2": "10.208.8.5",
"secondaryStorages": [
{
"url": "nfs://nfs.fmt.vmops.com:/export/automation/acs/secondary",
"provider": "NFS"
}
]
}
],
"dbSvr": {
"dbSvr": "cloudstack-centos63",
"passwd": "cloud",
"db": "cloud",
"port": 3306,
"user": "cloud"
},
"logger": [
{
"name": "TestClient",
"file": "/var/log/testclient.log"
},
{
"name": "TestCase",
"file": "/var/log/testcase.log"
}
],
"globalConfig": [
{
"name": "storage.cleanup.interval",
"value": "120"
},
{
"name": "direct.agent.load.size",
"value": "1000"
},
{
"name": "default.page.size",
"value": "10000"
},
{
"name": "account.cleanup.interval",
"value": "120"
},
{
"name": "workers",
"value": "10"
},
{
"name": "vm.op.wait.interval",
"value": "5"
},
{
"name": "network.gc.interval",
"value": "120"
},
{
"name": "guest.domain.suffix",
"value": "sandbox.xen"
},
{
"name": "expunge.delay",
"value": "60"
},
{
"name": "vm.allocation.algorithm",
"value": "random"
},
{
"name": "expunge.interval",
"value": "60"
},
{
"name": "enable.dynamic.scale.vm",
"value": "true"
},
{
"name": "instance.name",
"value": "QA"
},
{
"name": "expunge.workers",
"value": "3"
},
{
"name": "secstorage.allowed.internal.sites",
"value": "10.208.8.0/26,10.208.8.65/26,10.208.8.128/26,10.208.8.192/26,10.208.13.194/32"
},
{
"name": "check.pod.cidrs",
"value": "true"
}
],
"mgtSvr": [
{
"mgtSvrIp": "cloudstack-centos63",
"port": 8096
}
]
}
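
marvin consumes a zone description like this one to build out the datacenter before tests run. A deployment sketch, assuming marvin's deployDataCenter script and a local filename:

    # hypothetical: deploy the advanced zone described above
    python marvin/deployDataCenter.py -i zone.cfg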


@ -0,0 +1,86 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
[globals]
#global settings in cloudstack
expunge.delay=60
expunge.interval=60
storage.cleanup.interval=120
account.cleanup.interval=120
network.gc.interval=120
expunge.workers=3
workers=10
vm.allocation.algorithm=random
vm.op.wait.interval=5
guest.domain.suffix=sandbox.xen
instance.name=QA
direct.agent.load.size=1000
default.page.size=10000
check.pod.cidrs=true
secstorage.allowed.internal.sites=10.208.8.0/26,10.208.8.65/26,10.208.8.128/26,10.208.8.192/26,10.208.13.194/32
enable.dynamic.scale.vm=true
[environment]
dns1=8.8.8.8
dns2=8.8.8.8
internal_dns1=10.208.8.5
internal_dns2=10.208.8.5
mshost=cloudstack-centos63
mysql.host=cloudstack-centos63
mysql.cloud.user=cloud
mysql.cloud.passwd=cloud
[cloudstack]
hypervisor=XenServer
host.password=password
#Zone 1
z0.guest.vlan=2001-2050
z0p0.private.gateway=10.208.8.65
z0p0.private.pod.startip=10.208.8.70
z0p0.private.pod.endip=10.208.8.75
z0p0.private.netmask=255.255.255.192
z0p0.public.gateway=10.208.10.1
z0p0.public.vlan.startip=10.208.10.10
z0p0.public.vlan.endip=10.208.10.62
z0p0.public.netmask=255.255.255.192
z0p0.public.vlan=100
z0p0c0h0.host=apache-81-3
z0p0c0h1.host=apache-81-2
z0p0c0ps0.primary.pool=nfs://nfs.fmt.vmops.com:/export/automation/acs/primary
z0p0c0ps1.primary.pool=nfs://nfs.fmt.vmops.com:/export/automation/acs/primary1
z0p1.private.gateway=10.208.8.193
z0p1.private.pod.startip=10.208.8.200
z0p1.private.pod.endip=10.208.8.205
z0p1.private.netmask=255.255.255.192
z0p1.public.gateway=10.208.10.65
z0p1.public.vlan.startip=10.208.10.66
z0p1.public.vlan.endip=10.208.10.126
z0p1.public.netmask=255.255.255.192
z0p1.public.vlan=101
z0p1c0h0.host=apache-83-1
z0p1c0ps0.primary.pool=nfs://nfs.fmt.vmops.com:/export/automation/acs/primary2
z0.secondary.pool=nfs://nfs.fmt.vmops.com:/export/automation/acs/secondary
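
The flat key names encode the same hierarchy as the JSON configuration: zN is a zone, pN a pod, cN a cluster, hN a host and psN a primary storage pool, so z0p1c0h0.host is the first host in the first cluster of zone 0's second pod. A quick extraction sketch (the filename is assumed):

    # hypothetical: list the hosts of zone 0, pod 0, cluster 0
    awk -F= '/^z0p0c0h[0-9]+\.host=/ {print $2}' environment.cfg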


@ -0,0 +1,46 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import os
from setuptools import setup
def read(fname):
return open(os.path.join(os.path.dirname(__file__), fname)).read().strip()
VERSION = '0.1.0'
setup(
name = "xunitmultiprocess",
version = VERSION,
author = "Prasanna Santhanam",
author_email = "Prasanna.Santhanam@citrix.com",
description = "Run tests written using CloudStack's Marvin testclient",
    license = 'ASL 2.0',
classifiers = [
"Intended Audience :: Developers",
"Topic :: Software Development :: Testing",
"Programming Language :: Python",
],
py_modules = ['xunitmultiprocess'],
zip_safe = False,
entry_points = {
'nose.plugins': ['xunitmultiprocess = xunitmultiprocess:Xunitmp']
},
install_requires = ['nose'],
)
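
Installing the package registers the plugin with nose through the nose.plugins entry point, which makes the --with-xunitmp switch available; the report path below is illustrative:

    # hypothetical: install on the jenkins slave and produce one junit XML report
    python setup.py install
    nosetests --with-xunitmp --xml-file=results.xml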


@ -0,0 +1,274 @@
"""This plugin provides test results in the standard XUnit XML format.
It was designed for the `Hudson`_ continuous build system but will
probably work for anything else that understands an XUnit-formatted XML
representation of test results.
Add this shell command to your builder ::
nosetests --with-xunitmp
And by default a file named nosetests.xml will be written to the
working directory.
In a Hudson builder, tick the box named "Publish JUnit test result report"
under the Post-build Actions and enter this value for Test report XMLs::
**/nosetests.xml
If you need to change the name or location of the file, you can set the
``--xml-file`` option.
Here is an abbreviated version of what an XML test report might look like::
<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="nosetests" tests="1" errors="1" failures="0" skip="0">
<testcase classname="path_to_test_suite.TestSomething"
name="test_it" time="0">
<error type="exceptions.TypeError" message="oops, wrong type">
Traceback (most recent call last):
...
TypeError: oops, wrong type
</error>
</testcase>
</testsuite>
.. _Hudson: https://hudson.dev.java.net/
"""
__author__ = "original xunit author, Rosen Diankov (rosen.diankov@gmail.com)"
import doctest
import os
import traceback
import re
import inspect
from nose.plugins.base import Plugin
from nose.exc import SkipTest
from time import time
from xml.sax import saxutils
from nose.pyversion import UNICODE_STRINGS
import sys
import multiprocessing
globalxunitmanager = multiprocessing.Manager()
globalxunitstream = globalxunitmanager.list() # used for gathering statistics
globalxunitstats = multiprocessing.Array('i',[0]*4)
# Invalid XML characters, control characters 0-31 sans \t, \n and \r
CONTROL_CHARACTERS = re.compile(r"[\000-\010\013\014\016-\037]")
def xml_safe(value):
"""Replaces invalid XML characters with '?'."""
return CONTROL_CHARACTERS.sub('?', value)
def escape_cdata(cdata):
"""Escape a string for an XML CDATA section."""
return xml_safe(cdata).replace(']]>', ']]>]]&gt;<![CDATA[')
def nice_classname(obj):
"""Returns a nice name for class object or class instance.
>>> nice_classname(Exception()) # doctest: +ELLIPSIS
'...Exception'
>>> nice_classname(Exception) # doctest: +ELLIPSIS
'...Exception'
"""
if inspect.isclass(obj):
cls_name = obj.__name__
else:
cls_name = obj.__class__.__name__
mod = inspect.getmodule(obj)
if mod:
name = mod.__name__
# jython
if name.startswith('org.python.core.'):
name = name[len('org.python.core.'):]
return "%s.%s" % (name, cls_name)
else:
return cls_name
def exc_message(exc_info):
"""Return the exception's message."""
exc = exc_info[1]
if exc is None:
# str exception
result = exc_info[0]
else:
try:
result = str(exc)
except UnicodeEncodeError:
try:
result = unicode(exc)
except UnicodeError:
# Fallback to args as neither str nor
# unicode(Exception(u'\xe6')) work in Python < 2.6
result = exc.args[0]
return xml_safe(result)
class Xunitmp(Plugin):
"""This plugin provides test results in the standard XUnit XML format."""
name = 'xunitmp'
score = 499 # necessary for it to go after capture
encoding = 'UTF-8'
xunitstream = None
xunitstats = None
xunit_file = None
def _timeTaken(self):
if hasattr(self, '_timer'):
taken = time() - self._timer
else:
# test died before it ran (probably error in setup())
# or success/failure added before test started probably
# due to custom TestResult munging
taken = 0.0
return taken
def _quoteattr(self, attr):
"""Escape an XML attribute. Value can be unicode."""
attr = xml_safe(attr)
if isinstance(attr, unicode) and not UNICODE_STRINGS:
attr = attr.encode(self.encoding)
return saxutils.quoteattr(attr)
def options(self, parser, env):
"""Sets additional command line options."""
Plugin.options(self, parser, env)
parser.add_option(
'--xml-file', action='store',
dest='xunit_file', metavar="FILE",
            default=env.get('NOSE_XUNIT_FILE', 'nosetests.xml'),
help=("Path to xml file to store the xunit report in. "
"Default is nosetests.xml in the working directory "
"[NOSE_XUNIT_FILE]"))
parser.add_option(
'--xunit-header', action='store',
dest='xunit_header', metavar="HEADER",
default=env.get('NOSE_XUNIT_HEADER', ''),
help=("The attributes of the <testsuite> report that will be created, in particular 'package' and 'name' should be filled."
"[NOSE_XUNIT_HEADER]"))
def configure(self, options, config):
"""Configures the xunit plugin."""
Plugin.configure(self, options, config)
self.config = config
if self.enabled:
self.xunitstream = globalxunitstream
self.xunitstats = globalxunitstats
for i in range(4):
self.xunitstats[i] = 0
self.xunit_file = options.xunit_file
self.xunit_header = options.xunit_header
def report(self, stream):
"""Writes an Xunit-formatted XML file
The file includes a report of test errors and failures.
"""
stats = {'errors': self.xunitstats[0], 'failures': self.xunitstats[1], 'passes': self.xunitstats[2], 'skipped': self.xunitstats[3] }
stats['encoding'] = self.encoding
stats['total'] = (stats['errors'] + stats['failures'] + stats['passes'] + stats['skipped'])
stats['header'] = self.xunit_header
if UNICODE_STRINGS:
error_report_file = open(self.xunit_file, 'w', encoding=self.encoding)
else:
error_report_file = open(self.xunit_file, 'w')
error_report_file.write(
'<?xml version="1.0" encoding="%(encoding)s"?>'
'<testsuite %(header)s tests="%(total)d" '
'errors="%(errors)d" failures="%(failures)d" '
'skip="%(skipped)d">' % stats)
while len(self.xunitstream) > 0:
error_report_file.write(self.xunitstream.pop(0))
#error_report_file.write('<properties><property name="myproperty" value="1.5"/></properties>')
error_report_file.write('</testsuite>')
error_report_file.close()
if self.config.verbosity > 1:
stream.writeln("-" * 70)
stream.writeln("XML: %s" % error_report_file.name)
def startTest(self, test):
"""Initializes a timer before starting a test."""
self._timer = time()
def addstream(self,xml):
try:
self.xunitstream.append(xml)
except Exception, e:
print 'xunitmultiprocess add stream len=%d,%s'%(len(xml),str(e))
def addError(self, test, err, capt=None):
"""Add error output to Xunit report.
"""
taken = self._timeTaken()
if issubclass(err[0], SkipTest):
type = 'skipped'
self.xunitstats[3] += 1
else:
type = 'error'
self.xunitstats[0] += 1
tb = ''.join(traceback.format_exception(*err))
try:
id=test.shortDescription()
if id is None:
id = test.id()
except AttributeError:
id=''
id = id.split('.')
name = self._quoteattr(id[-1])
systemout = ''
# if test.capturedOutput is not None:
# systemout = '<system-out><![CDATA['+escape_cdata(str(test.capturedOutput))+']]></system-out>'
xml = """<testcase classname=%(cls)s name=%(name)s time="%(taken)f">
%(systemout)s
<%(type)s type=%(errtype)s message=%(message)s><![CDATA[%(tb)s]]>
</%(type)s></testcase>
""" %{'cls': self._quoteattr('.'.join(id[:-1])), 'name': self._quoteattr(name), 'taken': taken, 'type': type, 'errtype': self._quoteattr(nice_classname(err[0])), 'message': self._quoteattr(exc_message(err)), 'tb': escape_cdata(tb), 'systemout':systemout}
self.addstream(xml)
def addFailure(self, test, err, capt=None, tb_info=None):
"""Add failure output to Xunit report.
"""
taken = self._timeTaken()
tb = ''.join(traceback.format_exception(*err))
self.xunitstats[1] += 1
try:
id=test.shortDescription()
if id is None:
id = test.id()
except AttributeError:
id=''
id = id.split('.')
name = self._quoteattr(id[-1])
systemout = ''
# if test.capturedOutput is not None:
# systemout = '<system-out><![CDATA['+escape_cdata(str(test.capturedOutput))+']]></system-out>'
xml = """<testcase classname=%(cls)s name=%(name)s time="%(taken)f">
%(systemout)s
<failure type=%(errtype)s message=%(message)s><![CDATA[%(tb)s]]>
</failure></testcase>
""" %{'cls': self._quoteattr('.'.join(id[:-1])), 'name': self._quoteattr(name), 'taken': taken, 'errtype': self._quoteattr(nice_classname(err[0])), 'message': self._quoteattr(exc_message(err)), 'tb': escape_cdata(tb), 'systemout':systemout}
self.addstream(xml)
def addSuccess(self, test, capt=None):
"""Add success output to Xunit report.
"""
taken = self._timeTaken()
self.xunitstats[2] += 1
try:
id=test.shortDescription()
if id is None:
id = test.id()
except AttributeError:
id=''
id = id.split('.')
name = self._quoteattr(id[-1])
systemout=''
# if test.capturedOutput is not None:
# systemout = '<system-out><![CDATA['+escape_cdata(str(test.capturedOutput))+']]></system-out>'
xml = """<testcase classname=%(cls)s name=%(name)s time="%(taken)f" >%(systemout)s</testcase>
""" % {'cls': self._quoteattr('.'.join(id[:-1])), 'name': self._quoteattr(name), 'taken': taken, 'systemout':systemout }
self.addstream(xml)
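
Because every worker appends finished-test XML to a multiprocessing.Manager list and bumps shared counters, report() can emit one consistent junit file no matter how many processes nose forks; the stock xunit plugin keeps per-process state and can lose results under --processes. A parallel-run sketch (the test path is assumed):

    # hypothetical: four workers, one merged report for jenkins to publish
    nosetests --processes=4 --process-timeout=3600 --with-xunitmp \
        --xml-file=nosetests.xml test/integration/smoke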