Commit c5f1da7
Merge branch 'release-v2.0.0'
Mark Breedlove committed Oct 16, 2015
2 parents ebf1ed2 + 8d31d8e
Showing 163 changed files with 622 additions and 1,380 deletions.
121 changes: 121 additions & 0 deletions README-upgrade-2.0.txt
@@ -0,0 +1,121 @@

UPGRADING TO RELEASE 2.0


Release 2.0 contains breaking changes that move variable definitions around. If
you are upgrading from version 1.x, you will encounter errors related to
undefined variables if you don't follow the steps below. Fortunately, the
upgrade process is pretty straightforward if you have a development environment
without a lot of customization.

The new version 2 way of doing things

The main change in version 2 is that variables are handled in a more
Ansible-orthodox way, taking advantage of Ansible's variable precedence
rules. We're defining variables that override role defaults in `group_vars`
files.
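
For example (with a hypothetical role name, `example', that is not part of
this repository), a role default:

    # roles/example/defaults/main.yml -- the role's baseline value
    example_app_port: 8000

can be overridden for one deployment environment in a group_vars file:

    # group_vars/development -- applies to hosts in the `development' group
    example_app_port: 8080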

The inventory file has been changed to define inventory groups that go from
most general (the deployment environment, like "development") to most
specific (for example, the group name corresponding to a role, like
"postgresql_dbs", or even more specific, like "webapps"). See the
`development' and `ingestion' inventory files, noting, for example, how the
`development_ingestion2` group is more specific than `development`. This
fixes some problems with overriding variables that were present in
version 1.
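
For example, the version 2 `development' inventory collects all of the
development VMs into one group (excerpted from the inventory file shipped
with this release):

    [development]
    loadbal
    dbnode1
    dbnode2
    webapp1

Variables in `group_vars/development' then apply to all four hosts, while
`group_vars/development_ingestion2' applies only to the hosts in that
narrower group and takes precedence where both define the same variable.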

There should be fewer files and variables to maintain overall, after the
initial changes necessitated by upgrading.

Step 1. Back up your `automation' directory.

Back up your `automation' directory before proceeding. You will probably
want it for reference in case there's any question about variables that
you've copied in the following steps, or in case there's any problem.

Suggestions:
    *nix in general, with rsync: `rsync -a automation/ automation.backup'
    OS X: `ditto automation automation.backup'

Step 2. Update your `automation' directory.

Use `git pull' or whatever other means (e.g. zipfile download from GitHub)
to update your `automation' directory with version 2. If you're tracking
`development' or `master', this just means doing a `git pull'.
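
For a typical git checkout, that is just:

    cd automation
    git pull    # assuming you track `master' or `development'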

Step 3. Replace `group_vars/all`

Back up your `group_vars/all` file and copy the new `group_vars/all.dist`
into place (for a typical development environment).
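
A minimal sketch of that swap (paths assumed; run it from `ansible/group_vars'
in your checkout):

    cp all ~/group_vars_all.v1     # keep the old copy outside the tree
    cp all.dist all
    diff ~/group_vars_all.v1 all   # see what you'll need to re-apply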

The way to get the cleanest cascade of variable definitions is to spend a
minute or two looking at each variable in your new `group_vars/all` file and
redefining any that you had set differently in your backed-up version
(especially `adminusers' and anything that says "CHANGEME"). Also consult
your backed-up `vars/development.yml' and `roles/*/vars/development.yml'.

Step 4. Remove old files in roles' `vars' directories.

You have, of course, backed up your `automation' directory so that this is
not going to cause any permanent trouble. :-)

Remove the deployment-environment ("level") variable files in
`roles/*/vars`. The version 2 way of defining these is to override them in
`group_vars' files named after the environment, such as
`group_vars/development` or `group_vars/development_ingestion2`.

If you have any values in a `development.yml` file that differ from the
role's defaults (see `<role>/defaults/main.yml`), make sure they're copied
to `group_vars/development` or `group_vars/development_ingestion2`.
Otherwise, don't worry about it; just delete the file.

Be sure not to remove any `main.yml' files.
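
A sketch of the removal, assuming `development.yml' is the only level file
you created (check with `ls' first; your file names may differ):

    cd automation/ansible
    ls roles/*/vars/                     # review before deleting
    rm -f roles/*/vars/development.yml   # never touch roles/*/vars/main.yml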

Step 5. Change variable declarations for variables that have been renamed.

If you had any of the following variables defined in any `development.yml'
files, note that they have been renamed in version 2, so you'll have to make
sure they're correct in your `group_vars' files. They all have defaults
that seem to be good for the development VMs, so you can probably just
remove them or avoid copying them to your `group_vars' files.

admin_pwhash -> marmotta_admin_pwhash
apt_cache_valid_time -> elasticsearch_apt_cache_valid_time
backups_basedir -> *_backups_basedir for postgresql and mysql
heidrun_allowed_ip -> marmotta_heidrun_allowed_ip
log_rotation_count -> pg_log_rotation_count
log_rotation_interval -> pg_log_rotation_interval
nginx_bookshelf_conn_zone_size -> siteproxy_bookshelf_conn_zone_size
nginx_bookshelf_max_conn -> siteproxy_bookshelf_max_conn
nginx_bookshelf_req_rate -> siteproxy_bookshelf_req_rate
nginx_bookshelf_req_zone_size -> siteproxy_bookshelf_req_zone_size
nginx_conn_zone_size -> siteproxy_nginx_conn_zone_size
nginx_default_max_conn -> siteproxy_nginx_default_max_conn
nginx_default_req_burst_size -> siteproxy_nginx_default_req_burst
nginx_default_req_rate -> siteproxy_nginx_default_req_rate
nginx_limit_conn_log_level -> siteproxy_nginx_limit_conn_log_level
nginx_limit_req_log_level -> siteproxy_nginx_limit_req_log_level
nginx_req_zone_size -> siteproxy_nginx_req_zone_size
rails_env -> various *_rails_env per role
ruby_rbenv_version -> various *_rbenv_versions per role
unicorn_worker_processes -> various *_unicorn_worker_processes per role

Please note that the variable `ruby_rbenv_version' has been removed and is
superseded by various role-specific variables.

These variables were renamed to allow them to exist at the `group_vars'
level.
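
One way to spot stragglers is to grep for the old names; a non-exhaustive
check (extend the pattern using the table above):

    cd automation/ansible
    grep -rnE '^(ruby_rbenv_version|rails_env|admin_pwhash|heidrun_allowed_ip):' \
        group_vars roles/*/vars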

Step 6. Test

Run `git status' to see if any files are reported as untracked. These may
be files that you want to remove.

Run `ansible-playbook' with `-C -D' against a VM that you know is up-to-date
and that you've successfully used with version 1. `-C' performs a dry run
and `-D' displays a diff of any changes that Ansible would make to files.
This is a great way to spot variables that have been misdefined, or that are
undefined, in which case they'll probably trigger errors.
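
For example, against the development VMs, using the `development' inventory
file and the `dev_all.yml' playbook from this release:

    cd automation/ansible
    ansible-playbook -i development dev_all.yml -C -D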

23 changes: 9 additions & 14 deletions README.md
@@ -7,6 +7,10 @@ The intention of this project is to provide automated configuration management
 for production, development, and staging environments with the same set of
 files.
 
+## Version 2
+
+For upgrade notes, see [README-upgrade-2.0.txt](README-upgrade-2.0.txt).
+
 [Release Notes](https://github.com/dpla/automation/releases)
 
 ## Installation, VM setup:
@@ -43,21 +47,12 @@ If you want to work with our new Ingestion2 system, please see
   and that they require SSH public keys in their ssh_authorized_keys fields.
   The `adminusers` variable is for administrative users who will run
   ansible-playbook.
-* `ansible/vars/development.yml.dist`
-* `ansible/roles/api/vars/development.yml.dist`
-    * Optional, if you want to use a local source directory for the API app (see
-      `Vagrantfile`).
-* `ansible/roles/elasticsearch/vars/development.yml.dist`
-* `ansible/roles/postgresql/vars/development.yml.dist`
-* `ansible/roles/frontend/vars/development.yml.dist`
-    * Optional. For the frontend app, as above. See `Vagrantfile`.
-* Optionally, copy and update any other `ansible/roles/*/development.yml.dist`
-  files in a similar fashion. There are defaults that will take effect if you don't
-  make any changes.
+* `ansible/group_vars/development.dist`
+    * If you're going to be developing DPLA applications, you might want to
+      override the `*_use_local_source` variables in some of the roles'
+      `defaults` directories, as well as the variables related to the source
+      directories.
 * Copy `Vagrantfile.dist` to `Vagrantfile`.
   In the future, there will be more hosts in our configuration than you'll want
   to have running simultaneously as VMs, and you'll want to edit the default
   Vagrantfile to suit your needs, commenting out VMs that you don't want running.
 * Make sure that Vagrant has downloaded the base server image that we'll need
   for our VMs:
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
-1.17.3
+2.0.0
10 changes: 2 additions & 8 deletions ansible/central_all.yml
@@ -24,8 +24,6 @@
   vars:
     production_munin_master: true
     level: production
-  vars_files:
-    - ["vars/production.yml"]
 
 - name: Install rbenv/bundler/extra gems to central
   hosts: central
@@ -34,16 +32,14 @@
   tasks:
     - name: Make sure rbenv and bundler are current
      script: >
-        files/install_ruby_tools.sh {{ ruby_rbenv_version }}
+        files/install_ruby_tools.sh {{ central_rbenv_version }}
       tags:
         - scripts
     - name: Ensure gems are present for BigCouch restore script
       script: >
-        files/install_couchdb_restore_gems.sh {{ ruby_rbenv_version }}
+        files/install_couchdb_restore_gems.sh {{ central_rbenv_version }}
       tags:
         - scripts
-  vars_files:
-    - ["vars/production.yml"]
 
 - name: Install scripts to central
   hosts: central
@@ -75,5 +71,3 @@
         owner=root group=root mode=755
       tags:
         - scripts
-  vars_files:
-    - ["vars/production.yml"]
6 changes: 0 additions & 6 deletions ansible/contentqa_all.yml
@@ -7,16 +7,12 @@
   sudo: yes
   roles:
     - contentqa_proxy
-  vars_files:
-    - ["vars/contentqa.yml", "vars/defaults.yml"]
 
 - name: Mail
   hosts: all
   sudo: yes
   roles:
     - aws_postfix
-  vars_files:
-    - ["vars/contentqa.yml", "vars/defaults.yml"]
 
 - include: playbooks/dbnodes.yml level=contentqa

@@ -37,5 +33,3 @@
       tags:
         - api
         - api_auth
-  vars_files:
-    - ["vars/contentqa.yml", "vars/defaults.yml"]
2 changes: 0 additions & 2 deletions ansible/dev_all.yml
@@ -17,8 +17,6 @@
     - monitoring_web
   vars:
     level: development
-  vars_files:
-    - [ "./vars/development.yml" ]
   sudo: yes
 
 - include: playbooks/dev_loadbalancer.yml
13 changes: 3 additions & 10 deletions ansible/dev_ingestion_all.yml
@@ -18,15 +18,11 @@
   vars:
     level: development
     ingestion2: true
-  vars_files:
-    - [ "./vars/development.yml" ]
   sudo: yes
 
 - include: playbooks/dev_loadbalancer.yml ingestion2=true
 
-- include: >-
-    playbooks/elasticsearch.yml level=development ingestion2=true
-    es_cluster_loadbal=192.168.50.7
+- include: playbooks/elasticsearch.yml level=development ingestion2=true
 
 - include: playbooks/postgresql.yml level=development

@@ -43,8 +39,8 @@
   vars:
     level: development
     ingestion2: true
-  vars_files:
-    - ["vars/development.yml", "vars/defaults.yml"]
+    skip_configuration: false
+    do_deployment: true
 
 - name: Web Configuration (ingestion app and marmotta)
   # Technically, it's not necessary to specify marmotta because ingestion_app and
@@ -58,12 +54,9 @@
   vars:
     level: development
     ingestion2: true
-  vars_files:
-    - ["vars/development.yml", "vars/defaults.yml"]
 
 - include: >-
     playbooks/api.yml level=development ingestion2=true
-    es_cluster_loadbal=192.168.50.7
 - include: playbooks/redis.yml level=development

5 changes: 5 additions & 0 deletions ansible/development
@@ -41,3 +41,8 @@ webapps
 [elasticsearch:children]
 dbnodes
 
+[development]
+loadbal
+dbnode1
+dbnode2
+webapp1
7 changes: 6 additions & 1 deletion ansible/group_vars/.gitignore
@@ -1 +1,6 @@
-all
+*
+!.gitignore
+!*.dist
+!geocoder
+!ingestion_app
+!worker
50 changes: 45 additions & 5 deletions ansible/group_vars/all.dist
@@ -23,18 +23,58 @@ iana_timezone: US/Eastern
 # github_private_key_path: ~/.ssh/id_rsa
 
 ## The following variables are only relevant if you're using Amazon SES for mail
-# ses_user: CHANGEME
-# ses_password: CHANGEME
+ses_user: CHANGEME
+ses_password: CHANGEME
 
 ## Likewise, this is only relevant if you're using the aws_postfix role:
-# smtp_relayhost_and_port: smtp.example.com:25
+smtp_relayhost_and_port: smtp.example.com:25
 
 # For production and staging:
 aws_region: changeme
 aws_access_key: changeme
 aws_secret_key: changeme
 
-# Not necessary in development:
-munin_master_ipaddr: CHANGEME
+# We would normally get an IP address by querying the inventory, but
+# this will work easiest between development and production
+bigcouch_cluster_loadbal: 192.168.50.2
+# same with Elasticsearch ...
+es_cluster_loadbal: 192.168.50.2
+
+munin_master_ipaddr: 192.168.50.6
+
+dpla_locale: en_US.UTF-8
+
+# Whether to configure applications for debugging output
+webapp_debug: 0
+
+# The network interface that is on the internal network that all of the
+# servers are on:
+internal_network_interface: ansible_eth1
+
+frontend_hostname: local.dp.la
+api_hostname: local.dp.la
+sitemap_host: sitemaps.example.tld
+marmotta_domain: ldp.local.dp.la
+
+# HTTP ports for various applications
+# api_port is the port the *loadbalancer* answers on for API requests
+api_port: 8080
+# api_app_port is the port the *backend app server* runs on
+api_app_port: 8003
+exhibitions_app_port: 8000
+wordpress_app_port: 8001
+frontend_app_port: 8002
+ingestion_app_port: 8004
+
+nginx_real_ip_from_addrs:
+  - 192.168.50.0/24
+
+##
+# Role-specific vars
+
+# The following variable appears here to cause it to resolve in
+# `playbooks/init_index_and_repos.yml`. It should either be removed when we
+# get rid of the legacy system, or the playbook's tasks could be incorporated
+# into the api role's tasks files so that the value defined in the role's
+# variables is inherited.
+api_rbenv_version: 1.9.3-p547
