Tools for creating stemcells
Note: Use the US East (Northern Virginia) region when using AWS in the following steps, since the AMI (Amazon Machine Image) used for the stemcell-building VM lives in that region.

- Upload a keypair called "bosh" to AWS; you'll use it to connect to the remote VM later.
- Create a "bosh-stemcell" security group on AWS to allow SSH access to the stemcell (once per AWS account).
- Set the `BOSH_AWS_...` environment variables (see below).
- Install the vagrant plugins we use:

  ```
  vagrant plugin install vagrant-berkshelf
  vagrant plugin install vagrant-omnibus
  vagrant plugin install vagrant-aws --plugin-version 0.5.0
  ```
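If you want to confirm the plugins installed cleanly, vagrant can list what it has; all three plugins above should appear:

```
vagrant plugin list
```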
From a fresh copy of the bosh repo:

If you use an AWS EC2-Classic environment, run:

```
export BOSH_AWS_ACCESS_KEY_ID=YOUR-AWS-ACCESS-KEY
export BOSH_AWS_SECRET_ACCESS_KEY=YOUR-AWS-SECRET-KEY
cd bosh-stemcell
vagrant up remote --provider=aws
```

If you use an AWS VPC environment, run:

```
export BOSH_AWS_ACCESS_KEY_ID=YOUR-AWS-ACCESS-KEY
export BOSH_AWS_SECRET_ACCESS_KEY=YOUR-AWS-SECRET-KEY
export BOSH_AWS_SECURITY_GROUP=YOUR-AWS-SECURITY-GROUP-ID
export BOSH_AWS_SUBNET_ID=YOUR-AWS-SUBNET-ID
cd bosh-stemcell
vagrant up remote --provider=aws
```

(Note: `BOSH_AWS_SECURITY_GROUP` must be the security group's ID, not its name "bosh-stemcell".)
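If you need to look up that ID, one way is with the AWS CLI; this is a sketch that assumes the AWS CLI is installed and configured for the same account and region:

```
# Print the ID of the security group named "bosh-stemcell"
aws ec2 describe-security-groups \
  --filters Name=group-name,Values=bosh-stemcell \
  --query 'SecurityGroups[0].GroupId' --output text
```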
With an existing stemcell-building VM, run:

```
export BOSH_AWS_ACCESS_KEY_ID=YOUR-AWS-ACCESS-KEY
export BOSH_AWS_SECRET_ACCESS_KEY=YOUR-AWS-SECRET-KEY
cd bosh-stemcell
vagrant provision remote
```
Once the stemcell-building machine is up, run:

```
vagrant ssh-config remote
```

Then copy the resulting output into your `~/.ssh/config` file.

Once this has been done, you can SSH into the stemcell-building machine with `ssh remote`, and you can copy files to and from it using `scp localfile remote:/path/to/destination`.
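For reference, the entry you paste generally has this shape; the address, user, and key path here are placeholders, so use whatever `vagrant ssh-config` actually prints:

```
Host remote
  HostName <ec2-public-address>
  User <ssh-user>
  Port 22
  IdentityFile <path-to-generated-private-key>
```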
An OS image is a tarball containing a snapshot of an entire OS filesystem, including all the libraries and system utilities that the BOSH agent depends on. It does not contain the BOSH agent or the virtualization tools: a separate Rake task adds the BOSH agent and a chosen set of virtualization tools to any base OS image, producing a stemcell.

If you have changes that require a new OS image, you need to build one. A stemcell with a custom OS image can be built using the stemcell-building VM described above.
```
vagrant ssh -c '
cd /bosh
bundle exec rake stemcell:build_os_image[ubuntu,trusty,/tmp/ubuntu_base_image.tgz]
' remote
```
The arguments to `stemcell:build_os_image` are:

- `operating_system_name`: identifies which type of OS to fetch. Determines which package repository and packaging tool will be used to download and assemble the files. Must match a value recognized by the `OperatingSystem` module. Currently `ubuntu`, `centos`, and `rhel` are recognized.
- `operating_system_version`: an identifier that the system may use to decide which release of the OS to download. Acceptable values depend on the operating system. For `ubuntu`, use `trusty`. For `centos` or `rhel`, use `7`.
- `os_image_path`: the path to write the finished OS image tarball to. If a file already exists at this path, it will be overwritten without warning.
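Putting those arguments together, a CentOS build under the same conventions would look like this (the output path is an arbitrary example):

```
vagrant ssh -c '
cd /bosh
bundle exec rake stemcell:build_os_image[centos,7,/tmp/centos_7_base_image.tgz]
' remote
```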
There are a few extra steps you need to do before building a RHEL OS image:

- Start up or re-provision the stemcell-building machine (run `vagrant up` or `vagrant provision` from this directory).
- Download the RHEL 7.0 Binary DVD image and use `scp` to copy it to the stemcell-building machine. Note that RHEL 7.1 does not yet build correctly.
- On the stemcell-building machine, mount the RHEL 7 DVD at `/mnt/rhel`:

  ```
  # mkdir -p /mnt/rhel
  # mount rhel-server-7.0-x86_64-dvd.iso /mnt/rhel
  ```

- On the stemcell-building machine, put your Red Hat Account username and password into environment variables:

  ```
  $ export RHN_USERNAME=user@example.com
  $ export RHN_PASSWORD=my-password
  ```

- On the stemcell-building machine, run the stemcell-building rake task:

  ```
  $ cd /bosh
  $ bundle exec rake stemcell:build_os_image[rhel,7,/tmp/rhel_7_base_image.tgz]
  ```
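After the task completes, a quick sanity check (run on the stemcell-building machine; the path matches the rake argument above) is to confirm the tarball exists and lists like a filesystem snapshot:

```
$ ls -lh /tmp/rhel_7_base_image.tgz
$ tar -tzf /tmp/rhel_7_base_image.tgz | head
```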
See Building the stemcell with a local OS image below for how to build a stemcell with the new OS image.
To build a stemcell with a published OS image, substitute `<current_build>` with the current build number, which can be found by looking at bosh artifacts. The final two arguments are the S3 bucket and key for the OS image to use, which can be found by reading the OS_IMAGES document in this project.
```
vagrant ssh -c '
cd /bosh
CANDIDATE_BUILD_NUMBER=<current_build> http_proxy=http://localhost:3142/ bundle exec rake stemcell:build[vsphere,esxi,centos,nil,go,bosh-os-images,bosh-centos-7-os-image.tgz]
' remote
```
To build the stemcell with a local OS image (such as one built above), run:

```
vagrant ssh -c '
cd /bosh
bundle exec rake stemcell:build_with_local_os_image[aws,xen,ubuntu,trusty,go,/tmp/ubuntu_base_image.tgz]
' remote
```
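Once the build finishes, you can pull the stemcell back to your workstation with the `scp` setup from earlier; the remote path below is a placeholder, so use the location the rake task prints:

```
# Copy the finished stemcell tarball to the current directory
scp remote:/path/to/generated-stemcell.tgz .
```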
Public OS images can be obtained here:

- latest Ubuntu: https://s3.amazonaws.com/bosh-os-images/bosh-ubuntu-trusty-os-image.tgz
- latest CentOS: https://s3.amazonaws.com/bosh-os-images/bosh-centos-7-os-image.tgz
AWS stemcells can be shipped in light format, which includes a reference to a public AMI. This speeds up the process of uploading the stemcell to AWS. To build a light stemcell:
```
vagrant ssh -c '
cd /bosh
export BOSH_AWS_ACCESS_KEY_ID=YOUR-AWS-ACCESS-KEY
export BOSH_AWS_SECRET_ACCESS_KEY=YOUR-AWS-SECRET-KEY
bundle exec rake stemcell:build_light[/tmp/bosh-stemcell.tgz,hvm]
' remote
```
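To spot-check that the result really is a light stemcell, you can inspect its manifest for an AMI reference. This sketch assumes the tarball contains the usual `stemcell.MF` manifest that BOSH stemcells include, and the light stemcell path shown is a placeholder:

```
# Extract the manifest to stdout and look for the AMI mapping
tar -Oxzf light-bosh-stemcell.tgz stemcell.MF | grep -i ami
```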
If you find yourself debugging any of the above processes, here is what you need to know:
- Most of the action happens in Bash scripts, which are referred to as stages, and can be found in `stemcell_builder/stages/<stage_name>/apply.sh`.
- You should make all changes on your local machine and sync them over to the AWS stemcell-building machine using `vagrant provision remote`, as explained earlier on this page.
- While debugging a particular stage that is failing, you can resume the process from that stage by adding `resume_from=<stage_name>` to the end of your `bundle exec rake` command. When a stage's `apply.sh` fails, you should see a message of the form `Can't find stage '<stage>' to resume from. Aborting.`, so you know which stage failed and where you can resume from after fixing the problem. For example:

  ```
  bundle exec rake stemcell:build_os_image[ubuntu,trusty,/tmp/ubuntu_base_image.tgz] resume_from=rsyslog_config
  ```
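Because each stage is a directory under `stemcell_builder/stages` (see the first bullet above), listing that directory shows every valid name you can pass to `resume_from`:

```
vagrant ssh -c 'ls /bosh/stemcell_builder/stages' remote
```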