One of the features I needed to implement as part of this project was to be able to run multiple websites from my server. None of our websites are particularly large in volume, so running them all from my home server is not expected to overload it. If that changes, then I will look at adding more computing power to assist in the load, possibly in the form of a Raspberry Pi or two.
Previously, I was self-hosting my flyingflux.net website using Apache, and I had configured it entirely by bashing away at the default web site configurations.
The other goal I had was to switch to NGINX as the server that handles all the SSL/TLS side of things, as I am led to believe it generates less server load. (I’ve yet to learn exactly why that is the case; it may be that it protects server resources by slowing down web site performance instead?)
The final goal was to move to an entirely Ansible-controlled configuration. This has some major advantages over hacking config files directly: you can easily set up multiple configuration options and run a script (or possibly several) to push one configuration to your server.
Taking control
I couldn’t find an Apache role I was happy with, so I simply wrote my own playbooks. I started with a playbook to grab the existing Apache configuration and store it for future roll-back use. That playbook simply imports these tasks:
---
# Tasks file to backup apache config

- name: Get Apache ports.conf
  fetch:
    dest: files/fetched
    src: /etc/apache2/ports.conf

- name: Find sites-available
  find:
    paths: /etc/apache2/sites-available
    patterns: '*.conf'
  register: sites_available_to_copy

- name: Copy sites-available
  fetch:
    src: "{{ item.path }}"
    dest: files/fetched
  loop: "{{ sites_available_to_copy.files }}"

- name: Find sites-enabled
  find:
    paths: "/etc/apache2/sites-enabled/"
    file_type: "link"
    follow: no
  register: sites_enabled

- name: Dump sites_enabled
  debug:
    var: sites_enabled

- name: Delete old sites-enabled yaml file
  local_action:
    module: file
    path: "files/fetched/{{ inventory_hostname }}/sites-enabled-list"
    state: absent
  become: no

- name: Create Host Directory under files
  local_action:
    module: file
    path: "files/fetched/{{ inventory_hostname }}"
    state: directory
  become: no

- name: Create new sites-enabled yaml file
  local_action:
    module: file
    path: "files/fetched/{{ inventory_hostname }}/sites-enabled-list"
    state: touch
  become: no

- name: Add sites to sites-enabled yaml file
  local_action:
    module: lineinfile
    path: "files/fetched/{{ inventory_hostname }}/sites-enabled-list"
    line: "{{ item.path.split('/')[4] }}"
    state: present
  become: no
  loop: "{{ sites_enabled.files }}"
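For reference, the wrapper playbook that pulls in those tasks is tiny; something like this does the job (the host group name and task-file path here are illustrative assumptions, not my actual layout):

---
# backup-apache.yml - wrapper playbook for the task file above
# (the "webservers" group and the tasks path are assumed for illustration)
- hosts: webservers
  become: yes
  tasks:
    - import_tasks: tasks/backup-apache-config.yml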
And then I wrote the reverse of that playbook, to push the fetched config back to the server, overwriting anything currently there. (I won’t bother including those tasks; given the above, it’s fairly obvious how to write them if you are using Ansible.)
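If you want a starting point anyway, the restore side looks roughly like this. It’s an untested sketch; the source paths simply follow the directory layout that fetch creates under files/fetched.

---
# Rough sketch of the restore tasks - untested, paths assume the fetch layout above
- name: Restore Apache ports.conf
  copy:
    src: "files/fetched/{{ inventory_hostname }}/etc/apache2/ports.conf"
    dest: /etc/apache2/ports.conf

- name: Restore sites-available
  copy:
    src: "files/fetched/{{ inventory_hostname }}/etc/apache2/sites-available/"
    dest: /etc/apache2/sites-available/

- name: Re-create the sites-enabled symlinks from the saved list
  file:
    src: "/etc/apache2/sites-available/{{ item }}"
    dest: "/etc/apache2/sites-enabled/{{ item }}"
    state: link
  loop: "{{ lookup('file', 'files/fetched/' + inventory_hostname + '/sites-enabled-list').splitlines() }}"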
So, that gave me the ability to revert to my working configuration (always a good strategy), and it brought my config under Ansible’s control.
Hiding Apache
The next step was to take flyingflux.net offline, moving it to an internal port number not accessible from the Internet via my router, and setting up reverse proxying via NGINX. I’m not going to go into details, as I ended up not getting my WordPress install working properly in that configuration. I kept getting stuck in a redirect loop for anything PHP, and eventually decided to give up and go with a pure NGINX + PHP-FPM install instead. Here are some of the pages I found with advice that ultimately didn’t work out for me (though it certainly would have worked fine were it not for WordPress insisting on redirecting the base URL back to https://flyingflux.net/ instead of http://flyingflux.net:8100/):
- How To Configure Nginx as a Web Server and Reverse Proxy for Apache on One Ubuntu 18.04 Server
- Apache Module mod_proxy_fcgi (which has replaced mod_fastcgi since the article above was written)
- How to Configure Nginx and Apache Together on the same Ubuntu VPS or Dedicated Server
- Mixed Content with SSL, wordpress behind a reverse proxy
- Redirect loop with WordPress on Apache with nginx reverse proxy and HTTPS on Ubuntu 16
- WordPress redirect loop on site root. Nginx proxy apache
As you can see, there’s a lot of conflicting advice, and in the end I believe the trouble was in using a WordPress site that had been set up to enforce https.
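For the record, the proxy configuration I was attempting looked roughly like this. It’s a reconstruction rather than my actual config; the backend port is the internal 8100 mentioned above, and the certificate paths are placeholders.

server {
    listen 443 ssl;
    server_name flyingflux.net;

    ssl_certificate     /etc/nginx/certs/flyingflux.net/fullchain.pem;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/flyingflux.net/privkey.pem;

    location / {
        # Hand everything to the Apache instance hiding on the internal port
        proxy_pass http://127.0.0.1:8100;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Tell the backend the original request was https; WordPress also has to
        # be told to trust this header, which is roughly where my loop lived
        proxy_set_header X-Forwarded-Proto https;
    }
}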
One thing I came across while attempting this was an excellent page that helped me finally fix hairpin routing on my EdgeRouter X. If you’re not familiar with the concept, hairpin routing fixes an issue with internal servers that use port-forwarding to present services outside your network. The problem is that hosts inside your network trying to reach those ports hit your firewall from the wrong side, so they don’t get correctly routed to the local server. A simplistic way to avoid this is to create a fake DNS entry on your internal forwarding DNS server that maps the public name to the internal address. For most purposes that works well, but it can cause issues. Hairpin routing is the process of adding firewall rules on the inside of your firewall that treat internal requests on the forwarded ports as though they were external requests, translating the addresses so the traffic is reflected back where it belongs. For large sites, that leaves you with the problem of your server logs showing every request as coming from the router, but that’s not an issue for a small home network. You can just ask the people on your network if they are hitting the server.
For larger sites, the solution is to put your server in a proper DMZ. I may end up doing that, but not yet.
Giving up on proxying my WordPress site
(for now, at least)
My eventual solution to getting the proxying working was to give up and not do any reverse proxying at all. The primary argument for handing the main site’s heavy lifting over to Apache comes down to the complex details of how requests get handled internally between the two servers. This article by Tom Whitbread explains it, and also explains my decision to just toss out Apache and let nginx hand the dynamic PHP work directly to PHP-FPM. PHP-FPM does everything needed to spread the load across processors, and we get a much cleaner configuration.
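The heart of that setup is a location block that hands anything ending in .php across to the PHP-FPM socket. On Ubuntu it ends up looking something like this (the PHP version in the socket path will vary with your install):

    # Inside the site's server block:
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;          # ships with Ubuntu's nginx package
        fastcgi_pass unix:/run/php/php7.4-fpm.sock; # adjust to match your PHP-FPM version
    }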
Implementing nginx via Ansible was relatively straightforward. I took Jeff Geerling’s nginx role from Ansible Galaxy, created a host var file with the nginx configuration and virtual server details I wanted, and then made a few adjustments to the role to get it working a little better with the Ubuntu setup. (Geerlingguy’s setup pushes virtual server configurations straight into /etc/nginx/sites-enabled rather than placing them in /etc/nginx/sites-available with a symlink, for instance.) I might at some stage look at making my changes conditional on the detected host and offering a PR to geerlingguy.
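The host var file is mostly just a list of virtual servers handed to the role. A trimmed example might look like this; the variable names follow my reading of the role’s README, and the values are illustrative rather than my actual config:

# host_vars for the web server - trimmed, illustrative example
nginx_remove_default_vhost: true
nginx_vhosts:
  - listen: "443 ssl"
    server_name: "flyingflux.net"
    root: "/var/www/flyingflux.net"
    index: "index.php index.html"
    filename: "flyingflux.net.conf"
    extra_parameters: |
      ssl_certificate     /etc/nginx/certs/flyingflux.net/fullchain.pem;
      ssl_certificate_key /etc/nginx/certs/flyingflux.net/privkey.pem;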
The simplest part of my nginx setup is the default server file for port 80 (the standard HTTP port). Once upon a time, the whole Internet ran on unsecured port 80 hypertext transfer protocol, but these days every browser complains if the site you are visiting isn’t HTTPS or HTTPD*. So here’s my /etc/nginx/sites-enabled/default-80.conf file (although it’s actually a symlink to /etc/nginx/sites-available/default-80.conf):
server {
    listen 80 default_server;
    server_name _;
    index index.html index.htm;
    return 301 https://$host$request_uri;
}
So any incoming traffic on port 80 gets a 301 HTTP code asking the browser to send the same request to https: instead. Note that a 301 is a “permanent redirect”, which most modern browsers will remember, so future requests get translated without even sending a single packet to port 80 first. That’s important to remember if you’re mucking around in this space, changing things on the server, and nothing seems to happen: your browser may just be ignoring the new setting on the once-redirected port.
That page by Tom Whitbread mentioned above contains all the info you need to set up NGINX passing the PHP pages to PHP-FPM for processing. About the only thing I had to change was how I handle my Let’s Encrypt certificates. Since I already had my certs created and they were not due for renewal, I updated the crontab script I wrote to automate cert renewals: after attempting the renewals, it copies the certs I receive into directories under /etc/<service>/certs and locks down access to the privkey to just the user the service runs under. I also changed the cert-renewal settings to use nginx instead of Apache for processing the authentication.
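If you wanted to do that cert-copying step from Ansible rather than a cron script, it would look roughly like this (certbot’s standard /etc/letsencrypt/live layout is assumed, and the owner, group and modes are illustrative):

# Rough Ansible equivalent of the cert-copy step in my cron script (illustrative only)
- name: Copy renewed Let's Encrypt certs into the nginx cert directory
  copy:
    remote_src: yes
    src: "/etc/letsencrypt/live/flyingflux.net/{{ item.file }}"
    dest: "/etc/nginx/certs/flyingflux.net/{{ item.file }}"
    owner: root
    group: www-data
    mode: "{{ item.mode }}"
  loop:
    - { file: fullchain.pem, mode: "0644" }
    - { file: privkey.pem, mode: "0640" }   # key readable only by root and the web-server group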
I’ve also set up this site (bangdash.space) similarly to flyingflux.net. Now I just need to make sure I’ve got all the required configuration included in the Ansible playbook and merge that branch into master. The plan is for Ann’s Tango Capital web site to be brought over from wordpress.com at some stage.
*- Okay, what the heck is a httpd:// URL?