We talk a great deal about best practices and how important they are. Read our 4 best practice tips for effective ops that we've implemented here at Divio.
Thomas Bailey
Marketing
We talk a lot about best practices - what they are and how our platform can implement them in a way that minimises disruption. Best practices often come from real-world experience, learned by doing (and from the failures that come from not learning!). You don't have to look very far to see the fallout from not adhering to best practices. Unfortunately, catastrophic data leaks are still relatively commonplace and can often be traced back to a simple misconfiguration made without following basic principles.
Here are a few examples of the things we implement and enforce.
When new projects are created on the Divio platform, a database is typically required and is automatically configured, provisioned and connected to the project. The database itself is accessible only to the application, and only through the assigned credentials. It is a simple rule, but one that immediately rules out a whole class of misconfigurations.
At the same time, being able to connect to a database to examine data quickly with a friendly tool is something we have probably all done at some point. How do you do this with no direct database access?
In short, you can't. You have to trade some convenience for security. And the truth is, you don't have to give up much convenience to gain a lot of security. An extra step, such as opening an SSH session to the cloud application or pulling the database down into a local development environment, is a very small investment of effort for a very significant return in security. By enforcing this trade-off, we help ensure our users work in secure-by-default environments.
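As a rough illustration, inspecting data from inside an SSH session on the cloud environment might look like the sketch below. It assumes a PostgreSQL database whose connection string is exposed to the application as a DATABASE_URL environment variable; the variable name and the psycopg2 driver are illustrative assumptions, not a prescribed setup.

    # Minimal sketch: inspect data from inside the environment itself,
    # so the database never needs to be reachable from the outside world.
    # Assumes PostgreSQL and a DATABASE_URL environment variable (both
    # illustrative assumptions).
    import os

    import psycopg2

    conn = psycopg2.connect(os.environ["DATABASE_URL"])  # credentials never leave the environment
    with conn, conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM auth_user;")    # e.g. a quick sanity check on one table
        print(cur.fetchone()[0])
    conn.close()

The same session could just as easily open a database shell; the point is simply that the inspection happens where the credentials already live, rather than by exposing the database to the internet.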
It's an old idea but worth repeating: credentials should never be stored in source code and are best left to machines to generate. Even if a private company repository is assumed to be secure, credentials committed to it can be readily copied out, and they tend to be forgotten and therefore rarely changed.
We intentionally don't make credentials available and don't allow credentials to be changed by users. Instead, we provide credentials through environment variables specific to each environment.
Rather than keeping configuration files with hardcoded credentials, pointing to environment variables means the credentials are managed in memory and at runtime. This means credentials can be regenerated dynamically and applied silently. Each environment, including your local development environment, has its own set of credentials that you can then depend upon.
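A minimal sketch of what this looks like in practice, assuming a Django-style settings file, the dj-database-url helper and environment variable names such as DATABASE_URL and DEFAULT_STORAGE_DSN (all illustrative assumptions rather than a prescribed configuration):

    # Minimal sketch: credentials are resolved from environment variables at
    # runtime instead of being hardcoded in source control. The variable
    # names and the dj-database-url helper are illustrative assumptions.
    import os

    import dj_database_url

    DATABASES = {
        # Parsed at start-up, so rotated credentials are picked up on the
        # next restart without any change to the code. A missing value
        # raises KeyError rather than silently falling back to a shared secret.
        "default": dj_database_url.parse(os.environ["DATABASE_URL"]),
    }

    # Other services follow the same pattern: read from the environment,
    # never hardcode. Optional values can fall back to a harmless default
    # for local development.
    STORAGE_DSN = os.environ.get("DEFAULT_STORAGE_DSN", "")

Because each environment injects its own values, the same code runs unchanged locally, on test and on live.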
The age-old problem of backups probably brings back some stressful late-night memories, and when things do go wrong, it is always at the most inconvenient time!
It goes without saying that an extensive backup process is a must, but as a tedious task with no immediate feature win, it is all too easy to relegate to the backlog and postpone.
We launch an automated backup process during initial project creation. Depending on the chosen plan and preferences, automated backups run silently in the background and specifically cover static media and databases. There is also a manual backup feature that can often be useful immediately before deploying changes.
Source code is, by design, intended to be stored in a Git-based repository where all changes are tracked and controlled through branch strategies and releases.
In a scenario where, for example, a change is implemented and deployed that has unintended effects on stored data (we have all been there!), the process is simply to deploy the previous release and then restore the database and other media from the most recent backup. The process often takes only a few minutes, and we designed the backup feature to be as clutter-free and intuitive as possible.
Following on from backups, we intentionally prohibit ad-hoc (i.e. file-based) deployment and direct access to the project's runtime code. Instead, test and live environments are connected to a Git-based repository - either the private Git repository we include with new projects or an external provider such as GitLab or GitHub.
It is often tempting to deploy directly from a local development environment to the cloud to preview a quick change. However, this is a short cut to chaos when multiple team members are working together. Using Git effectively instead, with a branching strategy and descriptive commit messages, is a more robust way of working once adopted and well worth the few extra seconds.