It was a Wednesday night at 11 PM, and I was about to pack up and leave when suddenly the product manager rushed over and said: “Bro, we need to deploy a new feature on 20 servers tomorrow, and configure nginx, redis, mysql…” At that moment, I felt overwhelmed. If I had to manually SSH into each server one by one, I would probably be working until dawn.
Then I remembered a saying: “Any repetitive task should be automated.” So I spent two hours writing an automated deployment script with Ansible, and what would have taken all night was done in just 20 minutes. Since then, I have never manually SSH’d into a server to do repetitive configuration again.
The “Bloody History” of Manual Operations
I bet you have encountered similar scenarios: receiving an alert at 2 AM requiring urgent scaling; setting up a development environment for a new hire; redeploying a test environment that has crashed… Each time it involves a flurry of frantic operations:
# Traditional "manual" deployment method
ssh user@server1
sudo apt update && sudo apt install nginx
sudo systemctl start nginx
# Repeat 19 times... Oh my god
This approach is not only inefficient but error-prone. I have seen a single missing configuration parameter trigger a frantic middle-of-the-night rollback. Even worse, configuration drift between servers plants countless pitfalls for later.
Ansible: The “Transformers” of Operations
Ansible has completely changed this situation. It works over plain SSH, requires no agent on the target servers, and uses YAML-formatted playbooks to define infrastructure as code.
The key point is that Ansible has a natural affinity with Python: it is written in Python and integrates Python scripts seamlessly, letting us build complex logic into the automation process.
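An everyday example of that affinity is a dynamic inventory script: Ansible can run any executable with `--list` and use the JSON it prints as the inventory. Here is a minimal sketch — the hostnames, groups, and variables below are invented for illustration:

```python
#!/usr/bin/env python3
"""Minimal dynamic-inventory sketch; in practice the data would come
from a CMDB or cloud API instead of being hardcoded."""
import json
import sys


def build_inventory():
    # Groups map to host lists; "_meta" carries per-host variables so
    # Ansible does not have to call us once per host with --host.
    return {
        "webservers": {
            "hosts": ["web01.example.com", "web02.example.com"],
            "vars": {"app_port": 8000},
        },
        "databases": {
            "hosts": ["db01.example.com"],
        },
        "_meta": {
            "hostvars": {
                "web01.example.com": {"git_branch": "main"},
            }
        },
    }


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory()))
    else:
        # --host lookups are already answered by _meta above
        print(json.dumps({}))
```

Point Ansible at it with `ansible-playbook -i ./inventory.py site.yml` and the script replaces a static hosts file.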
Here’s a real example: this is the playbook I use to deploy a Django application:
---
- hosts: webservers
  become: yes
  vars:
    app_name: "my_django_app"
    python_version: "3.11"

  tasks:
    - name: Install Python and dependencies
      apt:
        name:
          - "python{{ python_version }}"
          - python3-pip
          - nginx
          - supervisor
        state: present
        update_cache: yes

    - name: Create application directory
      file:
        path: "/opt/{{ app_name }}"
        state: directory
        owner: www-data
        group: www-data

    - name: Deploy application code
      git:
        repo: "https://github.com/mycompany/{{ app_name }}.git"
        dest: "/opt/{{ app_name }}"
        version: "{{ git_branch | default('main') }}"
      notify: restart application

  handlers:
    - name: restart application
      supervisorctl:
        name: "{{ app_name }}"
        state: restarted
The Perfect Combination of Python and Ansible
What truly unleashes the power of Ansible is its deep integration with Python. We can write custom Ansible modules in pure Python to handle complex deployment logic:
# deploy_helper.py
import requests

from ansible.module_utils.basic import AnsibleModule


def check_service_health(url, timeout=30):
    """Check service health status."""
    try:
        response = requests.get(f"{url}/health", timeout=timeout)
        return response.status_code == 200
    except requests.RequestException:
        return False


def main():
    module = AnsibleModule(
        argument_spec=dict(
            service_url=dict(required=True),
            timeout=dict(type='int', default=30),
        )
    )
    url = module.params['service_url']
    timeout = module.params['timeout']
    if check_service_health(url, timeout):
        module.exit_json(changed=False, msg="Service is healthy")
    else:
        module.fail_json(msg="Service health check failed")


if __name__ == '__main__':
    main()
Drop the module into the playbook’s library/ directory, then call it like any other module:
- name: Health Check
  deploy_helper:
    service_url: "http://{{ ansible_host }}:8000"
    timeout: 60
  register: health_result
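One caveat: right after a restart, a service may need a few seconds before its /health endpoint responds, so a single probe can fail spuriously. A small retry wrapper helps — this is a sketch, and `wait_until_healthy` and the flaky test callable are illustrative names, not Ansible APIs:

```python
import time


def wait_until_healthy(check, retries=5, delay=2.0):
    """Retry a zero-argument health-check callable (e.g. a wrapper
    around check_service_health) until it passes or retries run out."""
    for attempt in range(1, retries + 1):
        if check():
            return True
        if attempt < retries:
            time.sleep(delay)
    return False


# Demonstrate with a fake check that only succeeds on the third call
calls = {"n": 0}

def flaky_check():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_until_healthy(flaky_check, retries=5, delay=0))  # True
```

Inside a custom module, you would call `module.fail_json(...)` only after the final retry fails, which smooths over slow application start-up.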
Practical Experience: Pitfalls and Best Practices
After several years of use, I have summarized some hard-earned lessons:
1. Idempotency is Key
The essence of Ansible lies in its idempotency. The same playbook should yield consistent results no matter how many times it is executed. I once wrote a “dumb” script that would repeatedly add configuration items, resulting in a mess of nginx configurations.
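To make that concrete, here is the trap in miniature (the nginx directive is just a stand-in): appending with shell adds a duplicate line on every run, while lineinfile converges to the same state no matter how often it executes.

```yaml
# NOT idempotent: appends a duplicate line on every playbook run
- name: Add worker setting (bad)
  shell: echo "worker_connections 1024;" >> /etc/nginx/nginx.conf

# Idempotent: ensures the line exists exactly once, however often it runs
- name: Add worker setting (good)
  lineinfile:
    path: /etc/nginx/nginx.conf
    line: "worker_connections 1024;"
```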
2. Variable Management Must Be Standardized
In large projects, variable management is a significant topic. I recommend using group_vars and host_vars directories to organize variables:
inventory/
├── group_vars/
│   ├── all.yml           # Global variables
│   ├── webservers.yml    # Web server group variables
│   └── databases.yml     # Database server group variables
└── host_vars/
    ├── web01.yml         # Individual machine variables
    └── web02.yml
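These layers merge with a fixed precedence: host_vars beat group_vars, which beat all.yml. In plain-Python terms (the variable values here are invented for illustration):

```python
# Sketch of group_vars/host_vars layering: the more specific layer wins.
all_vars = {"python_version": "3.11", "app_port": 8000}   # group_vars/all.yml
group_vars = {"app_port": 9000}                           # group_vars/webservers.yml
host_vars = {"app_name": "my_django_app"}                 # host_vars/web01.yml

# Later dicts override earlier ones, just as more specific vars files do
effective = {**all_vars, **group_vars, **host_vars}
print(effective)
# {'python_version': '3.11', 'app_port': 9000, 'app_name': 'my_django_app'}
```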
3. Vault for Encrypting Sensitive Information
Never write sensitive information like passwords or API keys in plain text in playbooks. Ansible Vault is designed for this purpose:
# Create an encrypted file
ansible-vault create secrets.yml
# Reference it in a playbook
vars_files:
  - secrets.yml
Performance Optimization: Speeding Up Deployments
Out of the box, Ansible runs each task on only five hosts at a time (the default forks value) and finishes a task everywhere before moving to the next. A few settings speed things up considerably — and serial turns a deployment into a controlled rolling update:
# Roll out in batches of at most 10 servers at a time
- hosts: all
  serial: 10

# Or by percentage
- hosts: all
  serial: "30%"
We can also optimize by enabling pipelining and adjusting the fork count:
# ansible.cfg
[defaults]
host_key_checking = False   # skips SSH host-key prompts; a security trade-off
pipelining = True           # fewer SSH operations per task
forks = 20                  # run each task on up to 20 hosts in parallel
In my tests, these optimizations reduced the deployment time for 100 servers from 40 minutes to 12 minutes.
Final Thoughts
Automation is not the goal; liberating productivity is. The combination of Ansible and Python allows us to focus more on architecture design and business logic rather than wasting time on repetitive configuration tasks.
Now my team has achieved full automation from code submission to production deployment. After developers submit code, the CI/CD pipeline automatically triggers Ansible for deployment, with no manual intervention required throughout the process.
Remember this saying: “Great programmers are lazy; they find ways to make machines do the work for them”. If you are still manually SSH’ing into servers to perform repetitive tasks, it’s time to embrace automation.
After all, life is short; why waste time on tasks that can be automated?