This repository holds an Ansible role
that is installable using ansible-galaxy. This role contains
tasks used to install and set up a Django web app. It exists
primarily to support the Caktus Django project template.
More complete documentation can be found in caktus/tequila.
This Ansible role is released under the BSD License. See the LICENSE file for more details.
If you think you've found a bug or are interested in contributing to this project, check out tequila-django on GitHub.
Development sponsored by Caktus Consulting Group, LLC.
Create an ansible.cfg file in your project directory to tell
Ansible where to install your roles (optionally, set the
ANSIBLE_ROLES_PATH environment variable to do the same thing, or
allow the roles to be installed into /etc/ansible/roles).
You should also enable ssh pipelining for performance (but see
the warning below under _Optimizations_ first), and may
optionally enable ssh agent forwarding:
[defaults]
roles_path = deployment/roles/

[ssh_connection]
pipelining = True
ssh_args = -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r
Create a requirements.yml file in your project's deployment
directory. It is recommended to include tequila-common, which sets up the
project directory structure and users, as well as tequila-nodejs and
geerlingguy.nodejs to install Node.js and any front-end packages that your
project requires:
---
# file: deployment/requirements.yml
- src: https://github.com/caktus/tequila-common
  version: v0.8.0
- src: https://github.com/caktus/tequila-django
  version: v0.9.11
- src: geerlingguy.nodejs
  version: 4.1.2
  name: nodejs
- src: https://github.com/caktus/tequila-nodejs
  version: v0.8.0
Run ansible-galaxy with your requirements file:
$ ansible-galaxy install -r deployment/requirements.yml
or, alternatively, run it directly against the URL:
$ ansible-galaxy install git+https://github.com/caktus/tequila-django
The project should then have access to the tequila-django role in
its playbooks.
The following variables are used by the tequila-django role:
- project_name (required)
- env_name (required)
- domain (required)
- additional_domains (default: empty list)
- is_web (default: false; at least one of is_web, is_worker, or is_celery_beat must be set to true)
- is_worker (default: false)
- is_celery_beat (default: false)
- python_version (default: "2.7")
- root_dir (default: "/var/www/{{ project_name }}")
- source_dir (default: "{{ root_dir }}/src")
- venv_dir (default: "{{ root_dir }}/env")
- force_recreate_venv (default: false)
- ssh_dir (default: "/home/{{ project_user }}/.ssh")
- requirements_file (default: "{{ source_dir }}/requirements/{{ env_name }}.txt")
- requirements_extra_args (default: "")
- use_newrelic (default: false)
- new_relic_license_key (required if use_newrelic is true)
- new_relic_version (default: ""; pin to a specific version of New Relic APM, e.g. "4.14.0.115")
- supervisor_version (default: "3.0")
- cloud_staticfiles (default: false)
- gunicorn_num_workers (required)
- gunicorn_num_threads (optional; note that gunicorn sets this to 1 if --threads=... is not given)
- project_user (default: "{{ project_name }}")
- project_settings (default: "{{ project_name }}.settings.deploy")
- secret_key (required)
- db_name (default: "{{ project_name }}_{{ env_name }}")
- db_user (default: "{{ project_name }}_{{ env_name }}")
- db_host (default: 'localhost')
- db_port (default: 5432)
- db_password (required)
- cache_host (optional)
- broker_host (optional)
- broker_password (optional)
- celery_app (default: "{{ project_name }}"; e.g. the app name passed to celery -A APP_NAME worker)
- celery_worker_extra_args (default: "--loglevel=INFO")
- celery_events (default: false)
- celery_camera_class (default: "django_celery_monitor.camera.Camera")
- static_dir (default: "{{ root_dir }}/public/static")
- media_dir (default: "{{ root_dir }}/public/media")
- log_dir (default: "{{ root_dir }}/log")
- repo (required; a dict containing url and branch)
- source_is_local (default: false)
- github_deploy_key (required if source_is_local is false)
- local_project_dir (required if source_is_local is true)
- extra_env (default: empty dict)
- project_subdir (default: ""; if the project's main source directory is a subdirectory of the git checkout, e.g. manage.py is not in the top directory and you have to cd into a subdirectory before running it, set this to the relative path of that subdirectory)
- wsgi_module (default: {{ project_name }}.wsgi; allows configuring an alternate path to the project's wsgi module)
- use_uwsgi (default: false; use uWSGI instead of gunicorn to run the web app)
- uwsgi_ini_path (default: "{{ root_dir }}/uwsgi.ini"; path to the uWSGI configuration file for this app)
- uwsgi_processes (default: 10; number of uWSGI worker processes to run)
- uwsgi_extra_ini_settings (default: ""; a string of extra options to set in the uWSGI configuration file, with each line normally containing a key = value pair)
- project_port (default: 8000; the port Django listens on)
- app_packages (default: []; additional system packages to install in addition to the default_app_packages; refer to defaults/main.yml for the default package list)
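For reference, a minimal sketch of setting a few of these variables in a group_vars file might look like the following; the file path, project name, domain, repository URL, and vaulted values are placeholders, not part of this role:

---
# file: deployment/playbooks/group_vars/all.yml (hypothetical location)
project_name: myproject
env_name: staging
domain: staging.example.com
gunicorn_num_workers: 4
secret_key: "{{ vault_secret_key }}"
db_password: "{{ vault_db_password }}"
repo:
  url: git@github.com:example/myproject.git
  branch: main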
The extra_env variable is a dict of keys and values to be
injected into the environment as variables, via the
envfile.j2 template, which is uploaded as a .env file for use
with the django-dotenv library. Values are written into this
file wrapped in single quotes, so no additional escaping needs to be
done to make them safe.
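For illustration, an extra_env dict such as the following (the keys shown are placeholders, not variables the role expects):

extra_env:
  DJANGO_ALLOWED_HOSTS: staging.example.com
  AWS_STORAGE_BUCKET_NAME: myproject-staging

would be rendered into the .env file as single-quoted lines such as DJANGO_ALLOWED_HOSTS='staging.example.com'.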
Note that if source_is_local is set to false, a GitHub deploy
key needs to be provided in the environment secrets file, and that key
needs to be added to the repo's settings on GitHub.
Alternatively, if source_is_local is set to true, the user's local
checkout of the repo is rsynced into the environment, with a few
exclusions (.pyc files, the .git directory, the .env file, and the
node_modules directory).
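As a sketch, the two modes might be configured like this; the repository URL, vaulted key variable, and local path are placeholders:

# Deploying from GitHub (source_is_local defaults to false):
repo:
  url: git@github.com:example/myproject.git
  branch: main
github_deploy_key: "{{ vault_github_deploy_key }}"

# Or, deploying the user's local checkout instead:
source_is_local: true
local_project_dir: "{{ playbook_dir }}/../.."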
The cloud_staticfiles variable allows for the case where the
Django static files are collected to an external service, such
as S3. In that case, we don't want to run collectstatic on
every web instance, since they would get in each other's way.
Setting this variable to true causes the collectstatic task to be
run only once.
The is_celery_beat variable is used to specify which server
instance will run celery beat, the scheduler dedicated to kicking off
tasks that are configured to execute at specific times. Generally, you only
want one instance running celery beat at a time, to prevent scheduled
tasks from being executed more than once. It is
recommended to set aside an inventory group, e.g. [beat], to
distinguish this instance from your ordinary celery workers in their
own group, e.g. [worker]. Your playbook(s) may then set
is_celery_beat, is_worker, and is_web based on the
instances' inventory group membership.
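For example, an inventory along these lines (the hostnames are placeholders) keeps the beat instance in its own group:

[web]
web1.example.com
web2.example.com

[worker]
worker1.example.com
worker2.example.com

[beat]
worker1.example.com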
The invocation of tequila-django can then be folded into a single playbook that uses group membership to set these parameters, like so:
---
- hosts: web:worker:beat
  become: yes
  roles:
    - role: tequila-django
      is_web: "{{ 'web' in group_names }}"
      is_worker: "{{ 'worker' in group_names }}"
      is_celery_beat: "{{ 'beat' in group_names }}"
The celery_events and celery_camera_class variables are used
to enable and configure Celery event monitoring using the "snapshots"
system, which allows worker activity to be tracked in a less expensive
way than storing all event history on disk. Setting celery_events
to true will set up the celery events command to be run alongside
the other Celery commands. By default this will use the
django-celery-monitor
app as its snapshot "camera", so either ensure that this app is installed
in your project or change celery_camera_class to a string naming
the alternative camera class to use (e.g. myapp.Camera). For
more on Celery event monitoring, see
the docs.
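As a sketch, enabling snapshots with a custom camera class would look like the following; myapp.monitoring.Camera is a placeholder for your own class:

celery_events: true
celery_camera_class: "myapp.monitoring.Camera"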
You can turn on SSH pipelining (http://docs.ansible.com/ansible/latest/intro_configuration.html#pipelining) to speed up Ansible commands by minimizing SSH operations. Add the following to your project's ansible.cfg file:
[ssh_connection]
pipelining = True
Warning: this will cause deployments to break if requiretty is enabled in your server's
/etc/sudoers file.