scaffold Documentation, Release 3.2

Mike Scherbakov

October 30, 2013



CHAPTER 1

Development Documentation

1.1 Logical Architecture

The current architecture uses the so-called “Metadata via Facter Extension” approach, inspired by the blog posts on self-classifying puppet nodes, pulling a list of hosts from mcollective for puppet, A Simple Puppet Function to Retrieve Information From the Stored Config DB, and the nodeless-puppet example.

In a nutshell, the Fuel deployment orchestration engine Astute manages OS provisioning via Cobbler, and uses an MCollective plugin to distribute a Facter facts file that defines the node’s role and other deployment variables for Puppet. You can find a detailed breakdown of how this works in the Sequence Diagrams.

The following components are involved in managing this process:

• Astute: deployment orchestrator, manages the Puppet cluster (via MCollective) and the Cobbler provisioning service (over XML-RPC)

• Naily: RPC consumer implementing communication between Nailgun and Astute over the AMQP protocol

• Nailgun 1: Web UI backend based on the web.py framework, includes the following sub-components:

– Nailgun DB: a relational database holding the current state of all OpenStack clusters and provisioning tasks

– Data Model (api/models.py, fixtures/): the definition of the Nailgun DB using the SQLAlchemy ORM

– REST API (api/handlers/): controller layer of the Web UI, receives REST requests from the JavaScript UI and routes them to other Nailgun components

– RPC Receiver (rpc/): handles AMQP messages from Astute

– Task Manager (task/): creates and tracks background tasks

• JavaScript UI (static/js/): Web UI frontend based on the Twitter Bootstrap framework, communicates with Nailgun using the REST API

In the current implementation the deployment business logic is spread between Nailgun (primarily in the Task, Network, and Volume Manager components) and Astute. Going forward, all logic should be moved from Astute to Nailgun, and Astute should become a simple executor of tasks defined by Nailgun.

Communication paths between these components are illustrated in the Logical Architecture Diagram:

1 Not to be confused with Nailgun the Java CLI accelerator


1.2 Sequence Diagrams

1.2.1 OS Provisioning

1.2.2 Networks Verification

1.2.3 Details on Cluster Provisioning & Deployment (via Facter extension)

Once the deploy and provisioning messages are accepted by Naily, the provisioning method is called in Astute. The provisioning part creates a system in Cobbler and calls reboot over Cobbler. Then Astute uses MCollective direct addressing mode to check that all required nodes are available, including the puppet agent on them. If some nodes are not ready yet, Astute waits for a few seconds and repeats the request. When the nodes are booted into the target OS, Astute uses the naily_fact MCollective plugin to post data to a special file /etc/naily.fact on the target system. The data includes the role and all other variables needed for deployment. Then Astute calls the puppetd MCollective plugin to start deployment. Puppet is started on the nodes and requests modules and manifests from the Puppet master. site.pp on the master node defines one common class for every node. Accordingly, the puppet agent starts its run. The modules contain a Facter extension which runs before deployment. The extension reads the facts from /etc/naily.fact placed by MCollective and extends the Facter data with these facts, which can then easily be used in Puppet modules. A case structure in the running class chooses the appropriate class to import, based on the $role variable received from /etc/naily.fact. That class is loaded and starts to execute. All variables from the file are available like ordinary facts from Facter.
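The Facter extension step above can be sketched as follows. This is an illustration only: the real extension lives in the Fuel Puppet modules, and the key=value fact file format and sample contents shown here are assumptions based on the description above.

```python
import os

# Sketch of a facter-extension-style reader for /etc/naily.fact.
# The "key=value" format and the sample contents are assumptions.
def read_facts(path="/etc/naily.fact"):
    facts = {}
    if not os.path.exists(path):
        return facts
    with open(path) as fact_file:
        for line in fact_file:
            line = line.strip()
            if line and "=" in line:
                key, _, value = line.partition("=")
                facts[key] = value
    return facts

# With a fact file containing, e.g.:
#   role=controller
#   deployment_id=1
# read_facts() returns {'role': 'controller', 'deployment_id': '1'},
# and a case statement in Puppet can then switch on $role.
```

Everything the extension reads becomes available to Puppet manifests as if it were an ordinary Facter fact.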

It is possible to use the system without Nailgun and Naily: the user creates a YAML file with all required data and calls the Astute binary script. The script loads the data from the YAML file and instantiates an Astute instance the same way as it is instantiated from Naily.

1.3 Fuel Development Environment

The basic OS for Fuel development is Ubuntu Linux. The setup instructions below assume Ubuntu 13.04; most of them should be applicable to other Ubuntu and Debian versions, too.

Each subsequent section below assumes that you have followed the steps described in all preceding sections. By the end of this document, you should be able to run and test all key components of Fuel, build the Fuel master node installation ISO, and generate documentation.

1.3.1 Getting the Source Code

Clone the Mirantis Fuel repositories from GitHub:

git clone https://github.com/stackforge/fuel-main
git clone https://github.com/stackforge/fuel-web
git clone https://github.com/stackforge/fuel-astute
git clone https://github.com/stackforge/fuel-ostf
git clone https://github.com/Mirantis/fuel

All sections below assume you start in your clone of the fuel-main repository.


1.3.2 Setup for Nailgun Unit Tests

1. Install Python dependencies (fysom has no deb package, and the jsonschema deb is outdated, so these modules have to be installed from PyPI):

sudo apt-get install python-dev python-pip python-psycopg2 python-jinja2
sudo apt-get install python-paste python-yaml python-sqlalchemy python-kombu
sudo apt-get install python-crypto python-simplejson python-webpy
sudo apt-get install python-nose python-mock python-decorator python-flake8
sudo apt-get install python-netaddr python-netifaces
sudo easy_install -U setuptools==1.0
sudo easy_install -U pip==1.2.1
sudo pip install fysom jsonschema hacking==0.7 nose-timer

2. Install and configure PostgreSQL database:

sudo apt-get install postgresql postgresql-server-dev-9.1
sudo -u postgres createuser -SDRP nailgun  (enter password nailgun)
sudo -u postgres createdb nailgun

3. Create required folder for log files:

sudo mkdir /var/log/nailgun
sudo chown -R `whoami`.`whoami` /var/log/nailgun

4. Run the Nailgun backend unit tests:

./run_tests.sh --no-jslint --no-ui-tests

5. Run the Nailgun flake8 test:

./run_tests.sh --flake8

1.3.3 Setup for Web UI Tests

1. Install NodeJS (on Debian, you may need to use ‘apt-get install -t experimental’ to get the latest npm; on Ubuntu 12.04, use the nodejs package instead of nodejs-legacy) and CasperJS:

sudo apt-get install npm nodejs-legacy phantomjs
sudo npm install -g jslint requirejs
cd ~
git clone git://github.com/n1k0/casperjs.git
cd casperjs
git checkout tags/1.0.0-RC4
sudo ln -sf `pwd`/bin/casperjs /usr/local/bin/casperjs

2. Run the full Web UI test suite (this will wipe your Nailgun database in PostgreSQL):

./run_tests.sh --jslint

./run_tests.sh --ui-tests

1.3.4 Running Nailgun in Fake Mode

1. Populate the database from fixtures:


cd nailgun
./manage.py syncdb
./manage.py loaddefault  # It loads all basic fixtures listed in settings.yaml
./manage.py loaddata nailgun/fixtures/sample_environment.json  # Loads fake nodes

2. Start the application in “fake” mode, in which no real calls to the orchestrator are performed:

python manage.py run -p 8000 --fake-tasks | egrep --line-buffered -v '^$|HTTP' >> /var/log/nailgun.log 2>&1 &

3. (optional) You can also use the --fake-tasks-amqp option if you want the fake environment to use a real RabbitMQ instead of a fake one:

python manage.py run -p 8000 --fake-tasks-amqp | egrep --line-buffered -v '^$|HTTP' >> /var/log/nailgun.log 2>&1 &

1.3.5 Astute and Naily

1. Install Ruby dependencies:

sudo apt-get install git curl
curl -L https://get.rvm.io | bash -s stable
rvm install 1.9.3

2. Install or update dependencies and run unit tests:

cd astute
./run_tests.sh

3. (optional) Run the Astute MCollective integration test (you’ll need to have an MCollective server running for this to work):

cd astute
bundle exec rspec spec/integration/mcollective_spec.rb

1.3.6 Building the Fuel ISO

1. The following software is required to build the Fuel ISO images on Ubuntu 12.10 or newer (on Ubuntu 12.04, use the nodejs package instead of nodejs-legacy):

sudo apt-get install build-essential make git ruby ruby-dev rubygems debootstrap
sudo apt-get install python-setuptools yum yum-utils libmysqlclient-dev isomd5sum
sudo apt-get install python-nose libvirt-bin python-ipaddr python-paramiko python-yaml
sudo apt-get install python-pip kpartx extlinux npm nodejs-legacy unzip genisoimage
sudo gem install bundler -v 1.2.1
sudo gem install builder
sudo pip install xmlbuilder jinja2
sudo npm install -g requirejs

2. (alternative) If you have completed the instructions in the previous sections of the Fuel development environment setup guide, the list of additional packages required to build the ISO becomes shorter:

sudo apt-get install ruby-dev ruby-builder bundler libmysqlclient-dev
sudo apt-get install yum-utils kpartx extlinux genisoimage isomd5sum

3. The ISO build process requires sudo permissions; allow yourself to run commands as the root user without a password prompt:


echo "`whoami` ALL=(ALL) NOPASSWD: ALL" | sudo tee -a /etc/sudoers

4. If you haven’t already done so, get the source code:

git clone https://github.com/stackforge/fuel-main

5. Now you can build the Fuel ISO image:

make iso

6. To build an ISO image from custom branches of fuel, astute, nailgun or ostf-tests, edit the “Repos and versions”section of config.mk.

1.3.7 Running the FuelWeb Integration Test

1. Install libvirt and Devops library dependencies:

sudo apt-get install libvirt-bin python-libvirt python-ipaddr python-paramiko
sudo pip install xmlbuilder django==1.4.3

2. Configure permissions for libvirt and re-login or restart your X session for the group changes to take effect (consult /etc/libvirt/libvirtd.conf for the group name):

GROUP=`grep unix_sock_group /etc/libvirt/libvirtd.conf | cut -d'"' -f2`
sudo usermod -a -G kvm `whoami`
sudo usermod -a -G $GROUP `whoami`
chgrp $GROUP /var/lib/libvirt/images
chmod g+w /var/lib/libvirt/images

3. Clone the Mirantis Devops virtual environment manipulation library from GitHub and install it where the FuelWeb Integration Test can find it:

git clone git@github.com:Mirantis/devops.git
cd devops
python setup.py build
sudo python setup.py install

4. Configure and populate the Devops DB:

SETTINGS=/usr/local/lib/python2.7/dist-packages/devops-2.0-py2.7.egg/devops/settings.py
sed -i "s/'postgres'/'devops'/" $SETTINGS
echo "SECRET_KEY = 'secret'" >> $SETTINGS
sudo -u postgres createdb devops
sudo -u postgres createuser -SDR devops
django-admin.py syncdb --settings=devops.settings

5. Run the integration test:

cd fuel-main
make test-integration

6. To save time, you can execute individual test cases from the integration test suite like this (a nice thing about TestAdminNode is that it takes you from nothing to a Fuel master with 9 blank nodes connected to 3 virtual networks):

cd fuel-main
export ENV_NAME=fuelweb
export PUBLIC_FORWARD=nat


export ISO_PATH=`pwd`/build/iso/fuelweb-centos-6.4-x86_64.iso
nosetests -w fuelweb_test -s fuelweb_test.integration.test_admin_node:TestAdminNode.test_cobbler_alive

7. The test harness creates a snapshot of all nodes called ‘empty’ before starting the tests, and creates a new snapshot if a test fails. You can revert to a specific snapshot with this command:

dos.py revert --snapshot-name <snapshot_name> <env_name>

8. To fully reset your test environment, tell the Devops toolkit to erase it:

dos.py list
dos.py erase <env_name>

1.3.8 Running Fuel Puppet Modules Unit Tests

1. Install PuppetLabs RSpec Helper:

cd ~
gem2deb puppetlabs_spec_helper
sudo dpkg -i ruby-puppetlabs-spec-helper_0.4.1-1_all.deb
gem2deb rspec-puppet
sudo dpkg -i ruby-rspec-puppet_0.1.6-1_all.deb

2. Run unit tests for a Puppet module:

cd fuel/deployment/puppet/module
rake spec

1.3.9 Installing Cobbler

Install Cobbler from GitHub (it can’t be installed from PyPI, and the deb package in Ubuntu is outdated):

cd ~
git clone git://github.com/cobbler/cobbler.git
cd cobbler
git checkout release24
sudo make install

1.3.10 Building Documentation

1. You will need the following software to build documentation:

sudo apt-get install librsvg2-bin rst2pdf python-sphinx
sudo pip install sphinxcontrib-plantuml
sudo apt-get install python-sphinxcontrib.blockdiag  # on Ubuntu 12.10 or higher
sudo pip install sphinxcontrib-blockdiag  # on Ubuntu 12.04

2. Look at the list of available formats and generate the one you need:

cd docs
make help
make html

You will also need to install Java and PlantUML to automatically generate UML diagrams from the source. You can also use the PlantUML Server for a quick preview of your diagrams.


1.4 Nailgun Development Instructions

1.4.1 Creating Partitions on Nodes

Fuel generates Anaconda Kickstart scripts for Red Hat based systems and preseed files for Ubuntu to partition block devices on new nodes. Most of the work is done in the pmanager.py Cobbler script using the data from the “ks_spaces” variable generated by the Nailgun VolumeManager class based on the volumes metadata defined in the openstack.json release fixture.

Volumes are created following best practices for OpenStack and other components. The following volume types are supported:

vg an LVM volume group that can contain one or more volumes with type set to “lv”

partition plain non-LVM partition

raid a Linux software RAID-1 array of LVM volumes

A typical slave node will always have an “os” volume group and one or more volumes of other types, depending on the roles assigned to that node and the role-to-volumes mapping defined in the “volumes_roles_mapping” section of openstack.json.

There are a few different ways to add another volume to a slave node:

1. Add a new logical volume definition to one of the existing LVM volume groups.

2. Create a new volume group containing your new logical volumes.

3. Create a new plain partition.

Adding an LV to an Existing Volume Group

If you need to add a new volume to an existing volume group, for example “os”, your volume definition in openstack.json might look like this:

{"id": "os","type": "vg","min_size": {"generator": "calc_min_os_size"},"label": "Base System","volumes": [

{"mount": "/","type": "lv","name": "root","size": {"generator": "calc_total_root_vg"},"file_system": "ext4"

},{

"mount": "swap","type": "lv","name": "swap","size": {"generator": "calc_swap_size"},"file_system": "swap"

},{

"mount": "/mnt/some/path","type": "lv","name": "LOGICAL_VOLUME_NAME",

1.4. Nailgun Development Instructions 7

Page 12: Release 3.2 Mike Scherbakov - Read the Docs

scaffold Documentation, Release 3.2

"size": {"generator": "calc_LOGICAL_VOLUME_size","generator_args": ["arg1", "arg2"]

},"file_system": "ext4"

}]

}

Make sure that your logical volume name (“LOGICAL_VOLUME_NAME” in the example above) is not the same as the volume group name (“os”), and refer to the current version of openstack.json for the up-to-date format.

Adding Generators to Nailgun VolumeManager

The “size” field in a volume definition can be set either directly as an integer number of megabytes, or indirectly via a so-called generator. A generator is a Python lambda that can be called to calculate the logical volume size dynamically. In the JSON example above, size is defined as a dictionary with two keys: “generator” is the name of the generator lambda and “generator_args” is the list of arguments that will be passed to the generator lambda.

Generators are defined in a method of the VolumeManager class. A new volume generator ‘NEW_GENERATOR_TO_CALCULATE_SIZE’ needs to be added to the generators dictionary inside this method.

class VolumeManager(object):
    ...
    def call_generator(self, generator, *args):
        generators = {
            ...
            'NEW_GENERATOR_TO_CALCULATE_SIZE': lambda: 1000,
            ...
        }
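As a runnable sketch of this pattern (the generator names and sizes below are invented examples, not Nailgun’s real generators):

```python
# Minimal sketch of the name-to-lambda generator lookup; the generator
# names and returned sizes are illustrative, not the real VolumeManager's.
class VolumeManager(object):
    def call_generator(self, generator, *args):
        generators = {
            'calc_swap_size': lambda: 2048,                  # example fixed size in MB
            'NEW_GENERATOR_TO_CALCULATE_SIZE': lambda: 1000,
            'calc_sum': lambda *sizes: sum(sizes),           # example using generator_args
        }
        return generators[generator](*args)

manager = VolumeManager()
print(manager.call_generator('NEW_GENERATOR_TO_CALCULATE_SIZE'))  # 1000
print(manager.call_generator('calc_sum', 100, 200))               # 300
```

The “generator_args” list from the volume definition maps directly onto the *args of the lambda, as the calc_sum example shows.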

Creating a New Volume Group

Another way to add a new volume to slave nodes is to create a new volume group and define one or more logical volumes inside the volume group definition:

{"id": "NEW_VOLUME_GROUP_NAME","type": "vg","min_size": {"generator": "calc_NEW_VOLUME_NAME_size"},"label": "Label for NEW VOLUME GROUP as it will be shown on UI","volumes": [

{"mount": "/path/to/mount/point","type": "lv","name": "LOGICAL_VOLUME_NAME","size": {

"generator": "another_generator_to_calc_LOGICAL_VOLUME_size","generator_args": ["arg"]

},"file_system": "xfs"

}]

}


Creating a New Plain Partition

Some node roles may be incompatible with LVM and require plain partitions. If that’s the case, you may have to define a standalone volume with type “partition” instead of “vg”:

{"id": "NEW_PARTITION_NAME","type": "partition","min_size": {"generator": "calc_NEW_PARTITION_NAME_size"},"label": "Label for NEW PARTITION as it will be shown on UI","mount": "none","disk_label": "LABEL","file_system": "xfs"

}

Note how you can set the mount point to “none” and define a disk label to identify the partition instead. It is only possible to set a disk label on a formatted partition, so you have to set the “file_system” parameter to use disk labels.

Updating the Node Role to Volumes Mapping

Unlike a new logical volume added to a pre-existing logical volume group, a new volume group or partition will not be allocated on the node unless it is included in the role-to-volumes mapping corresponding to one of the node’s roles, like this:

"volumes_roles_mapping": {"controller": [

{"allocate_size": "min", "id": "os"},{"allocate_size": "all", "id": "image"}],

"compute": ...}

• controller - the role for which partitioning information is given

• id - the id of a volume group or plain partition

• allocate_size - can be “min” or “all”:

  – min - allocate the volume with minimal size

  – all - allocate all free space for the volume; if several volumes have this key, the free space is divided equally between them
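The “min”/“all” rule can be sketched as follows. This is a simplified illustration of the rule, not the actual Nailgun allocator; the volume structures and sizes are hypothetical.

```python
# Simplified sketch of the "min"/"all" allocation rule described above.
# Not the real Nailgun allocator; volume definitions are hypothetical.
def allocate(disk_size_mb, volumes):
    """'min' volumes get their minimal size; 'all' volumes split
    the remaining free space equally."""
    sizes = {}
    min_total = 0
    all_volumes = []
    for vol in volumes:
        if vol["allocate_size"] == "min":
            sizes[vol["id"]] = vol["min_size"]
            min_total += vol["min_size"]
        else:  # "all"
            all_volumes.append(vol["id"])
    free = disk_size_mb - min_total
    for vol_id in all_volumes:
        sizes[vol_id] = free // len(all_volumes)
    return sizes

print(allocate(10000, [
    {"id": "os", "allocate_size": "min", "min_size": 2000},
    {"id": "image", "allocate_size": "all"},
]))  # {'os': 2000, 'image': 8000}
```

With two “all” volumes on the same disk, each would receive half of the remaining free space.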

Setting Volume Parameters from Nailgun Settings

In addition to VolumeManager generators, it is also possible to define sizes (or any other volume parameters) in the Nailgun configuration file (/etc/nailgun/settings.yaml). All fixture files are templated using the Jinja2 templating engine just before being loaded into the Nailgun database. For example, we can define the mount point for a new volume as follows:

"mount": "{{settings.NEW_LOGICAL_VOLUME_MOUNT_POINT}}"

Of course, NEW_LOGICAL_VOLUME_MOUNT_POINT must be defined in the settings file.
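To illustrate the idea of substituting settings values into a fixture before it is loaded: Nailgun itself uses Jinja2 for this, but as a rough stdlib-only stand-in the same flow can be sketched with string.Template. The setting name and fixture snippet are hypothetical.

```python
import json
from string import Template

# Rough stdlib-only illustration of fixture templating. Nailgun uses Jinja2
# ("{{settings.NAME}}" syntax); string.Template stands in here for brevity.
# The setting name and the fixture snippet are hypothetical.
settings = {"NEW_LOGICAL_VOLUME_MOUNT_POINT": "/mnt/some/path"}

fixture_text = '{"mount": "$NEW_LOGICAL_VOLUME_MOUNT_POINT"}'
rendered = Template(fixture_text).substitute(settings)

fixture = json.loads(rendered)
print(fixture["mount"])  # /mnt/some/path
```

The key point is that templating happens on the fixture text before it is parsed and loaded into the database, so the database only ever sees the resolved values.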

Nailgun is the core of FuelWeb. To allow enterprise features to be easily plugged in, and the open source community to extend it as well, Nailgun must have a simple, very well defined and documented core with great pluggable capabilities.

1.4.2 Reliability

All software contains bugs and may fail, and Nailgun is no exception to this rule. In reality, it is not possible to cover all failure scenarios, or even come close to 100%. The question is how we can design the system so that bugs in one module do not damage the whole system.

An example from Nailgun’s past: the agent collected hardware information, including the current_speed parameter of the interfaces. One of the interfaces had current_speed=0. At the registration attempt, Nailgun’s validator checked that current_speed > 0 and raised an InvalidData exception, which declined node discovery. current_speed is one of the attributes we can easily skip; it is not even used for deployment in any way at the moment, and is only shown to the user for information. But it prevented node discovery, and that made the server unusable.

Another example: due to the coincidence of a bug and wrong metadata on one of the nodes, a GET request on that node would return 500 Internal Server Error. It looks like this should affect only one node, and logically we could remove the failing node from the environment to get it discovered again. However, the UI + API handlers were written in the following way:

• The UI calls /api/nodes to fetch info about all nodes just to show how many nodes are allocated and how many are not

• NodesCollectionHandler would return 500 if any of the nodes raised an exception

It is simple to guess that the whole UI was completely broken by just one failed node. It was impossible to perform any action in the UI.

These two examples give us a starting point for rethinking how to avoid a Nailgun crash just because one of the meta attributes is wrong.

First, we must divide the meta attributes discovered by the agent into two categories:

• absolutely required for node discovery (e.g. MAC address)

• not required for discovery

  – required for deployment (e.g. disks)

  – not required for deployment (e.g. current_speed)

Second, we must refactor the UI to fetch only the information required, not the whole DB just to show two numbers. To be more specific, we have to make sure that issues in one environment do not affect other environments. Such a refactoring will require additional handlers in Nailgun, as well as some additions such as pagination, etc. On the Nailgun side, it is a bad idea to fail the whole CollectionHandler if one of the objects fails to calculate some attribute. My (mihgen) idea is to simply set the attribute to Null if it fails to calculate, and program the UI to handle it properly. Unit tests must help in testing this.

Another idea is to limit the /api/nodes, /api/networks and other calls to work only if the cluster_id param is provided, whether set to None or one of the cluster IDs. In this way we can be sure that one environment will not be able to break the whole UI.
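The degrade-instead-of-fail idea can be sketched like this. It is an illustration of the approach only, not actual Nailgun code; the node fields and the failing calculation are hypothetical.

```python
# Sketch of the failure-isolation idea: if computing one attribute of one
# node fails, render it as None instead of failing the whole /api/nodes
# response. Node data and the calculation here are hypothetical.
def render_node(node):
    rendered = {"mac": node["mac"]}
    try:
        # e.g. deriving a display value that may fail on bad metadata
        rendered["current_speed"] = int(node["meta"]["current_speed"])
    except (KeyError, TypeError, ValueError):
        rendered["current_speed"] = None  # degrade, do not raise
    return rendered

nodes = [
    {"mac": "00:11:22:33:44:55", "meta": {"current_speed": "1000"}},
    {"mac": "66:77:88:99:aa:bb", "meta": {}},  # broken metadata
]
rendered = [render_node(n) for n in nodes]
# The healthy node keeps current_speed=1000; the broken one gets None,
# and the collection as a whole still renders.
```

The UI then only needs one rule: treat None as "unknown" for informational attributes.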

1.5 Alternatives

1.5.1 Metadata via Puppet ENC (part of architecture)

This is an alternative possible architecture. See the corresponding sequence diagram and more information here: Alternative Implementation for deployment via ENC.

1.5.2 Alternative Implementation for deployment via ENC

The alternative deployment schema differs in the following ways:


• Naily stores all data about deployment into YAML file before the deployment, and then calls Orchestrator

• Orchestrator loads nodes information from YAML and calls puppet via MCollective

• Puppet requests data from Puppet master

• Puppet uses the ENC extension to get information about which classes should be applied on a particular node. To explain ENC in a few words: it is a Puppet Master extension that calls an external user-defined script

• ENC script loads all required data from YAML file

• YAML file could be replaced by some NoSQL DB

1.5.3 Comparison of deployment approaches

Data from Facter

Pros:

• Easy. Put a file on a node via MCollective, and we know what will be executed there. It’s easy to check what has been executed last time.

• No additional stateful components, which could otherwise lead to data inconsistency

• Easy to switch to a configuration without a Puppet Master, or even to replace it with Chef Solo

Cons:

• Requires time to place data on nodes before the puppet run, and a synchronous implementation - puppet should not run before the node receives its role

• Doesn’t look like the “Puppet” way, where the desired state of the cluster should be defined beforehand and Puppet converges the existing state to the desired state

Data from ENC

Pros:

• The “Puppet” way: everything that is needed is defined in the YAML file

• All information could be found in one place - YAML file

Cons:

• Naily should know the data structure of the YAML file to do the merge (however, it could just call the Orchestrator with metadata, and the Orchestrator would write the data to the YAML file)

• Requires an additional stateful component - the YAML file - which may lead to data inconsistency

• The Puppet Master must be installed on the same node as the Orchestrator (to access the YAML file). Even if the YAML file is replaced with a NoSQL DB, the ENC script still has to be present on the Puppet Master node.

• As deployment complexity and metadata grow, the YAML file will grow in size. It also has to contain information about all clusters and all nodes, which could become a bottleneck for loading data in the case of hundreds of nodes and thousands of requests. Splitting the YAML structure per cluster would not help, because a cluster identifier would then have to be passed to puppet, and it is unclear how to do that other than via a facter extension.

• More complex code is required in Naily (or the Orchestrator) to merge existing data in the YAML file with new data, plus code to prevent concurrency issues. It would be even more complex with the Updates feature, which would require a sequence of actions performed in a specific order.


• Let’s say we have the attribute { ‘keystone’ => { ‘data_dir’ => ‘/var/lib/keystone’ } }, and we want to update our cluster to a new version of OpenStack, node by node, where the data_dir location is different. With NailyFact it’s easy - just write facts on the target node and run puppet on it; other nodes will not be affected (they still have the settings for the old data_dir location). With data from ENC it’s much more complex, because there is only a single DB - the YAML file - for the whole cluster. That means it would not be possible to run puppet on old nodes if they should not be updated yet.

1.6 REST API Reference

• Releases API
• Clusters API
• Nodes API
• Disks API
• Network Configuration API
• Notifications API
• Tasks API
• Logs API
• Plugin API
• Redhat API
• Version API

1.6.1 Releases API

Handlers dealing with releases

class nailgun.api.handlers.release.ReleaseHandler

URL: /api/releases/%release_id%/

Release single handler

GET(release_id)

Returns JSONized Release object.

Http

• 200 (OK)

• 404 (release not found in db)

PUT(release_id)

Returns JSONized Release object.

Http

• 200 (OK)

• 400 (invalid release data specified)

• 404 (release not found in db)

• 409 (release with such parameters already exists)

DELETE(release_id)


Returns JSONized Release object.

Http

• 204 (release successfully deleted)

• 404 (release not found in db)

class nailgun.api.handlers.release.ReleaseCollectionHandler

URL: /api/releases/

Release collection handler

GET()

Returns Collection of JSONized Release objects.

Http

• 200 (OK)

POST()

Returns JSONized Release object.

Http

• 201 (release successfully created)

• 400 (invalid release data specified)

• 409 (release with such parameters already exists)

1.6.2 Clusters API

Handlers dealing with clusters

class nailgun.api.handlers.cluster.ClusterHandler

URL: /api/clusters/%cluster_id%/

Cluster single handler

GET(cluster_id)

Returns JSONized Cluster object.

Http

• 200 (OK)

• 404 (cluster not found in db)

PUT(cluster_id)

Returns JSONized Cluster object.

Http

• 200 (OK)

• 400 (invalid cluster data specified)

• 404 (cluster not found in db)


DELETE(cluster_id)

Returns {}

Http

• 202 (cluster deletion process launched)

• 400 (failed to execute cluster deletion process)

• 404 (cluster not found in db)

class nailgun.api.handlers.cluster.ClusterCollectionHandler

URL: /api/clusters/

Cluster collection handler

GET()

Returns Collection of JSONized Cluster objects.

Http

• 200 (OK)

POST()

Returns JSONized Cluster object.

Http

• 201 (cluster successfully created)

• 400 (invalid cluster data specified)

• 409 (cluster with such parameters already exists)

class nailgun.api.handlers.cluster.ClusterChangesHandler

URL: /api/clusters/%cluster_id%/changes/

Cluster changes handler

PUT(cluster_id)

Returns JSONized Task object.

Http

• 200 (task successfully executed)

• 404 (cluster not found in db)

• 400 (failed to execute task)

class nailgun.api.handlers.cluster.ClusterAttributesHandler

URL: /api/clusters/%cluster_id%/attributes/

Cluster attributes handler

GET(cluster_id)

Returns JSONized Cluster attributes.

Http

• 200 (OK)


• 404 (cluster not found in db)

• 500 (cluster has no attributes)

PUT(cluster_id)

Returns JSONized Cluster attributes.

Http

• 200 (OK)

• 400 (wrong attributes data specified)

• 404 (cluster not found in db)

• 500 (cluster has no attributes)

class nailgun.api.handlers.cluster.ClusterAttributesDefaultsHandler

URL: /api/clusters/%cluster_id%/attributes/defaults/

Cluster default attributes handler

GET(cluster_id)

Returns JSONized default Cluster attributes.

Http

• 200 (OK)

• 404 (cluster not found in db)

• 500 (cluster has no attributes)

PUT(cluster_id)

Returns JSONized Cluster attributes.

Http

• 200 (OK)

• 400 (wrong attributes data specified)

• 404 (cluster not found in db)

• 500 (cluster has no attributes)

class nailgun.api.handlers.cluster.ClusterGeneratedData

URL: /api/clusters/%cluster_id%/generated/

Cluster generated data

GET(cluster_id)

Returns JSONized cluster generated data

Http

• 200 (OK)

• 404 (cluster not found in db)


1.6.3 Nodes API

Handlers dealing with nodes

class nailgun.api.handlers.node.NodeCollectionHandler

URL: /api/nodes/

Node collection handler

GET()

May receive the cluster_id parameter to filter the list of nodes.

Returns Collection of JSONized Node objects.

Http

• 200 (OK)
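As a sketch of how a client might call this handler with the cluster_id filter (illustrative only; the host and port are assumptions, matching the fake-mode instance started on port 8000 earlier in this guide):

```python
from urllib.parse import urlencode

# Build a request URL for the node collection handler, filtered by cluster.
# The host and port are assumptions (Nailgun in fake mode ran on 8000 above).
base = "http://localhost:8000/api/nodes"
url = base + "?" + urlencode({"cluster_id": 1})
print(url)  # http://localhost:8000/api/nodes?cluster_id=1

# With a running Nailgun, the collection could then be fetched with e.g.:
#   import json, urllib.request
#   nodes = json.load(urllib.request.urlopen(url))
```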

POST()

Returns JSONized Node object.

Http

• 201 (node successfully created)

• 400 (invalid node data specified)

• 403 (node has incorrect status)

• 409 (node with such parameters already exists)

PUT()

Returns Collection of JSONized Node objects.

Http

• 200 (nodes are successfully updated)

• 400 (invalid nodes data specified)

class nailgun.api.handlers.node.NodeNICsHandler

URL: /api/nodes/%node_id%/interfaces/

Node network interfaces handler

GET(node_id)

Returns Collection of JSONized Node interfaces.

Http

• 200 (OK)

• 404 (node not found in db)

class nailgun.api.handlers.node.NodeCollectionNICsHandler

URL: /api/nodes/interfaces/

Node collection network interfaces handler

PUT()

Returns Collection of JSONized Node objects.


Http

• 200 (nodes are successfully updated)

• 400 (invalid nodes data specified)

class nailgun.api.handlers.node.NodeNICsDefaultHandler

URL: /api/nodes/%node_id%/interfaces/default_assignment/

Node default network interfaces handler

GET(node_id)

Returns Collection of default JSONized interfaces for node.

Http

• 200 (OK)

• 404 (node not found in db)

class nailgun.api.handlers.node.NodeCollectionNICsDefaultHandler

URL: /api/nodes/interfaces/default_assignment/

Node collection default network interfaces handler

GET()

May receive the cluster_id parameter to filter the list of nodes.

Returns Collection of JSONized Nodes interfaces.

Http

• 200 (OK)

• 404 (node not found in db)

class nailgun.api.handlers.node.NodeNICsVerifyHandler

URL: /api/nodes/interfaces_verify/

Node NICs verify handler. This class is a proof of concept and is not ready for use.

POST()

Returns Collection of JSONized Nodes interfaces.

Http

• 200 (OK)

class nailgun.api.handlers.node.NodesAllocationStatsHandler

Node allocation stats handler

GET()

Returns Total and unallocated node counts.

Http

• 200 (OK)
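The stats payload carries the total and unallocated counts described above, from which a client can derive the number of allocated nodes. The field names in this sketch ('total', 'unallocated') are assumptions based on the handler description, not a documented schema:

```python
def allocated_count(stats):
    """Derive the allocated node count from the stats payload.

    Assumed shape: {"total": <int>, "unallocated": <int>}.
    """
    return stats["total"] - stats["unallocated"]

# With 10 nodes total and 3 unallocated, 7 are allocated:
# allocated_count({"total": 10, "unallocated": 3}) == 7
```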


1.6.4 Disks API

Handlers dealing with disks

class nailgun.api.handlers.disks.NodeDisksHandler

URL: /api/nodes/%node_id%/disks/

Node disks handler

GET(node_id)

Returns JSONized node disks.

Http

• 200 (OK)

• 404 (node not found in db)

PUT(node_id)

Returns JSONized node disks.

Http

• 200 (OK)

• 400 (invalid disks data specified)

• 404 (node not found in db)
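A PUT to the disks URL replaces the node's disk layout. The sketch below resizes one volume in a disks structure before sending it back; the exact field names ('volumes', 'name', 'size') are assumptions for illustration only:

```python
import copy

def resize_volume(disks, volume_name, new_size):
    """Return a copy of the disks list with one volume resized.

    Assumed shape: [{"id": ..., "volumes": [{"name": ..., "size": ...}]}]
    (an assumption; the reference does not define the schema).
    """
    disks = copy.deepcopy(disks)
    for disk in disks:
        for volume in disk.get("volumes", []):
            if volume.get("name") == volume_name:
                volume["size"] = new_size
    return disks

# GET the current layout, adjust it, then PUT the result back:
sample = [{"id": "sda", "volumes": [{"name": "os", "size": 10000}]}]
updated = resize_volume(sample, "os", 20000)
```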

class nailgun.api.handlers.disks.NodeDefaultsDisksHandler

URL: /api/nodes/%node_id%/disks/defaults/

Node default disks handler

GET(node_id)

Returns JSONized node disks.

Http

• 200 (OK)

• 404 (node or its attributes not found in db)

class nailgun.api.handlers.disks.NodeVolumesInformationHandler

URL: /api/nodes/%node_id%/volumes/

Node volumes information handler

GET(node_id)

Returns JSONized volumes info for node.

Http

• 200 (OK)

• 404 (node not found in db)


1.6.5 Network Configuration API

Handlers dealing with network configurations

class nailgun.api.handlers.network_configuration.NovaNetworkConfigurationVerifyHandler

URL: /api/clusters/%cluster_id%/network_configuration/nova_network/verify/

Network configuration verify handler

PUT(cluster_id)

Important: this method should be rewritten to be more RESTful.

Returns JSONized Task object.

Http

• 202 (network checking task failed)

• 200 (network verification task started)

• 404 (cluster not found in db)

class nailgun.api.handlers.network_configuration.NovaNetworkConfigurationHandler

URL: /api/clusters/%cluster_id%/network_configuration/nova_network/

Network configuration handler

GET(cluster_id)

Returns JSONized network configuration for cluster.

Http

• 200 (OK)

• 404 (cluster not found in db)

PUT(cluster_id)

Returns JSONized Task object.

Http

• 202 (network checking task created)

• 404 (cluster not found in db)

class nailgun.api.handlers.network_configuration.NeutronNetworkConfigurationHandler

URL: /api/clusters/%cluster_id%/network_configuration/neutron/

Neutron Network configuration handler

GET(cluster_id)

Returns JSONized network configuration for cluster.

Http

• 200 (OK)

• 404 (cluster not found in db)
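The two configuration handlers above differ only in the provider segment of the URL (nova_network vs. neutron). A helper that builds either URL; the master address is a hypothetical example:

```python
# Hypothetical Fuel master endpoint; adjust for your deployment.
FUEL_API = "http://10.20.0.2:8000/api"

def network_config_url(cluster_id, provider="nova_network"):
    """Build a cluster's network configuration URL.

    provider is "nova_network" or "neutron", matching the two
    handler URLs in this section.
    """
    return "%s/clusters/%d/network_configuration/%s/" % (
        FUEL_API, cluster_id, provider)
```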


1.6.6 Notifications API

Handlers dealing with notifications

class nailgun.api.handlers.notifications.NotificationHandler

URL: /api/notifications/%notification_id%/

Notification single handler

GET(notification_id)

Returns JSONized Notification object.

Http

• 200 (OK)

• 404 (notification not found in db)

PUT(notification_id)

Returns JSONized Notification object.

Http

• 200 (OK)

• 400 (invalid notification data specified)

• 404 (notification not found in db)
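A typical PUT on this handler updates a notification's state, for example marking it as read. The 'status' field and 'read' value in this sketch are assumptions for illustration; the reference above does not define the payload schema:

```python
import json

def mark_read_payload(notification_id):
    """Build a request body for PUT /api/notifications/%id%/.

    The {"id": ..., "status": "read"} shape is an assumed schema,
    shown only to illustrate the update flow.
    """
    return json.dumps({"id": notification_id, "status": "read"})
```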

1.6.7 Tasks API

class nailgun.api.handlers.tasks.TaskHandler

URL: /api/tasks/%task_id%/

Task single handler

GET(task_id)

Returns JSONized Task object.

Http

• 200 (OK)

• 404 (task not found in db)

DELETE(task_id)

Returns JSONized Cluster object.

Http

• 204 (task successfully deleted)

• 400 (can’t delete running task manually)

• 404 (task not found in db)


class nailgun.api.handlers.tasks.TaskCollectionHandler

URL: /api/tasks/

Task collection handler

GET()

May receive a cluster_id parameter to filter the list of tasks.

Returns Collection of JSONized Task objects.

Http

• 200 (OK)

• 404 (task not found in db)
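Long-running calls in this chapter (network verification, Red Hat setup) return a Task object that clients poll through this API. A minimal completion check; treating "ready" and "error" as the terminal statuses is an assumption based on common Nailgun conventions, since the reference does not enumerate them:

```python
def is_finished(task):
    """True once a task has reached a terminal status.

    Terminal statuses "ready" and "error" are assumed, not
    taken from the reference above.
    """
    return task.get("status") in ("ready", "error")

# Poll loop sketch against a live master:
#   while not is_finished(task):
#       time.sleep(2)
#       task = json.load(urllib.request.urlopen(task_url))
```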

1.6.8 Logs API

Handlers dealing with logs

class nailgun.api.handlers.logs.LogEntryCollectionHandler

URL: /api/logs/

Log entry collection handler

GET()

Receives the following parameters:

• date_before - get logs before this date

• date_after - get logs after this date

• source - source of logs

• node - node id (for getting node logs)

• level - log level (all levels shown by default)

• to - number of entries

• max_entries - max number of entries to load

Returns Collection of log entries, log file size, and whether there are new entries.

Http

• 200 (OK)

• 400 (invalid date_before value)

• 400 (invalid date_after value)

• 400 (invalid source value)

• 400 (invalid node value)

• 400 (invalid level value)

• 400 (invalid to value)

• 400 (invalid max_entries value)

• 404 (log file not found)

• 404 (log files dir not found)

• 404 (node not found)

• 500 (node has no assigned ip)


• 500 (invalid regular expression in config)
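All of the parameters listed above travel as an ordinary query string; omitted parameters fall back to server defaults. A sketch of building that query string with the standard library:

```python
from urllib.parse import urlencode

def logs_query(source, node=None, level=None, max_entries=None):
    """Build the query string for GET /api/logs/ from a subset of
    the parameters listed above; only non-None values are sent."""
    params = {"source": source, "node": node,
              "level": level, "max_entries": max_entries}
    return urlencode({k: v for k, v in params.items() if v is not None})

# The full URL would then be, e.g., FUEL_API + "/logs/?" + logs_query(...)
```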

class nailgun.api.handlers.logs.LogPackageHandler

Log package handler

PUT()

Returns JSONized Task object.

Http

• 200 (task successfully executed)

• 400 (failed to execute task)

class nailgun.api.handlers.logs.LogSourceCollectionHandler

URL: /api/logs/sources/

Log source collection handler

GET()

Returns Collection of log sources (from settings)

Http

• 200 (OK)

class nailgun.api.handlers.logs.LogSourceByNodeCollectionHandler

URL: /api/logs/sources/nodes/%node_id%/

Log source by node collection handler

GET(node_id)

Returns Collection of log sources by node (from settings)

Http

• 200 (OK)

• 404 (node not found in db)

1.6.9 Plugin API

Handlers dealing with plugins

1.6.10 Redhat API

Handlers dealing with exclusive Red Hat tasks

class nailgun.api.handlers.redhat.RedHatAccountHandler

URL: /api/redhat/account/

Red Hat account handler

GET()

Returns JSONized RedHatAccount object.

Http

• 200 (OK)


• 404 (account not found in db)

POST()

Returns JSONized RedHatAccount object.

Http

• 200 (OK)

• 400 (invalid account data specified)

• 404 (account not found in db)

class nailgun.api.handlers.redhat.RedHatSetupHandler

URL: /api/redhat/setup/

Red Hat setup handler

POST()

Starts the Red Hat setup and download process.

Returns JSONized Task object.

Http

• 202 (setup task created and started)

• 400 (invalid account data specified)

• 404 (release not found in db)

1.6.11 Version API

Product info handlers

class nailgun.api.handlers.version.VersionHandler

URL: /api/version/

Version info handler

GET()

Returns FUEL/FUELWeb commit SHA, release version.

Http

• 200 (OK)
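A GET on this URL returns the commit SHA and release version described above. The JSON key names in this sketch ('release', 'sha') are assumptions for illustration; the reference only states that both values are returned:

```python
import json

def parse_version(body):
    """Extract (release, sha) from a version payload.

    The "release" and "sha" keys are an assumed schema, not
    taken from the reference above.
    """
    data = json.loads(body)
    return data.get("release"), data.get("sha")

# e.g. parse_version('{"release": "3.2", "sha": "abc123"}')
```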

The user guide has been moved to docs.mirantis.com. If you want to contribute, check out the sources from GitHub.


Python Module Index

nailgun.api.handlers.cluster
nailgun.api.handlers.disks
nailgun.api.handlers.logs
nailgun.api.handlers.network_configuration
nailgun.api.handlers.node
nailgun.api.handlers.notifications
nailgun.api.handlers.plugin
nailgun.api.handlers.redhat
nailgun.api.handlers.release
nailgun.api.handlers.tasks
nailgun.api.handlers.version