Documented by Kyle Bruder on Nov 07, 2020
Last updated on Jun 11, 2021
With the targeted launch date for the Lookaway site approaching, I am putting a lot of thought into how to design the technical infrastructure that will eventually run the Lookaway web application. As the site and the Membership grow, so do the costs of keeping the site online.
My number one priority is keeping our data and media from disappearing. After that, the focus is on scalability, meaning that the site can handle more members, more visitors, and more storage. Beyond that, I want to keep improving the Member experience by adding quality-of-life features as we figure out what we all need to make publishing our content as frictionless and satisfying as possible.
Update: We are no longer using EBS for media storage; the site media files are now stored on Amazon Elastic File System. We decided against using Amazon Aurora until we have some donations, due to the cost.
[Figure: A proposed architecture of AWS services to save on operating costs. Graphic: Kyle Bruder]
Members and visitors of the site are considered a crucial part of the architecture. Without Members, the site is an empty shell with no content. Without visitors, we will be unable to reach an audience outside of the membership.
There are currently no plans to create standalone mobile or desktop applications that interact with the Member media files or database.
Route 53 is Amazon Web Services' DNS service. When someone types "lookaway.info" into a web browser, the browser uses that name to look up the IP (Internet Protocol) address of our web server (or load balancer) so it can begin making requests, such as logging in or uploading a video. Route 53 is the service that answers those lookups and tells your browser which IP address to use for "lookaway.info".
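The lookup a browser performs before connecting can be sketched with Python's standard socket module. This is only an illustration of the DNS step, not part of the Lookaway code; the example resolves "localhost" so it works without reaching the public internet:

```python
import socket

def resolve(hostname):
    """Return the set of IP addresses that a DNS lookup yields for
    hostname, much as a browser does before opening a connection."""
    return {info[4][0] for info in socket.getaddrinfo(hostname, None)}

# The local resolver always knows this name; a browser would ask the
# same question about "lookaway.info" and Route 53 would answer.
print(resolve("localhost"))
```

Only after this resolution step succeeds can the browser open a TCP connection to the web server and start issuing HTTP requests.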
HTTP and HTTPS requests are handled by Nginx running on the same EC2 instance (or a pool of them in the future) as the Lookaway software. All HTTP requests are redirected to HTTPS. Requests are then forwarded to a Lookaway listening socket (handled by Gunicorn), where the Lookaway code will process them accordingly and provide a response.
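The hand-off from Gunicorn into application code happens through the WSGI interface. The toy callable below is a stand-in for the real Lookaway application, shown only to illustrate the contract Gunicorn expects when it pulls a request off the listening socket:

```python
def application(environ, start_response):
    """Minimal WSGI callable. Gunicorn invokes this once per request,
    passing request metadata in `environ` and a callback used to set
    the status line and headers before the body is returned."""
    body = b"Hello from Lookaway\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Gunicorn would serve a module containing this callable with something like `gunicorn module:application`, while Nginx terminates TLS and proxies the decrypted traffic to Gunicorn's socket.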
Currently, the media and static files are stored on a 500 GB Elastic Block Store (EBS) volume on AWS. This filesystem is mounted onto the web server and read directly. At the time of writing we are using 2% of this capacity.
The new plan is to serve media files and static files using Elastic File System (EFS) on AWS. EFS is a fully managed network file share service with multiple replicated drives in various regions around the globe, which provides durability and availability for our media. The EFS filesystem automatically grows and shrinks as media is created and deleted. Although the price per GB-month on EFS is more than double the price of EBS, we will only pay for what we use, and we will save the labor of managing an NFS server. The performance should also improve by an order of magnitude.
The database holds vital data and text content for the site. Currently, the database is running on the same EC2 instance as the web server and Lookaway code. This will work well enough for now, but once the site starts getting significant traffic, contention on that single instance could cause the site to become unavailable.
After launch, the plan is to migrate the database to Aurora Serverless on AWS. Like EFS, Aurora Serverless is fully managed and automatically scales, and we only pay for what we use. Also like EFS, it is replicated in multiple regions.
Both the database and the media filesystem are backed up to Amazon Simple Storage Service (S3) every night. At least a week of backups is retained. This backup system will stay the same after launch, except that the data will be copied from their respective services instead of from the web server itself. This way, using the Lookaway code repository and the backup files, the entire site can be quickly reconstructed anywhere, anytime.
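The nightly rotation can be sketched as a small pure function. The date-stamped key naming (`backups/db-YYYY-MM-DD.sql.gz`) is an assumption for illustration, not the real job's scheme; the actual uploads and deletions would go through boto3's S3 `upload_file` and `delete_object` calls:

```python
from datetime import date, timedelta

def stale_backup_keys(keys, today, retain_days=7):
    """Given S3 object keys of the assumed form
    'backups/db-YYYY-MM-DD.sql.gz', return the keys whose date stamp
    falls outside the retention window and can be deleted."""
    cutoff = today - timedelta(days=retain_days)
    stale = []
    for key in keys:
        # Pull 'YYYY-MM-DD' out of the key and parse it as a date.
        stamp = date.fromisoformat(key.split("db-")[1][:10])
        if stamp < cutoff:
            stale.append(key)
    return stale

# With nine nightly dumps, only those older than a week are flagged:
keys = ["backups/db-%s.sql.gz" % (date(2021, 6, 11) - timedelta(days=d))
        for d in range(9)]
print(stale_backup_keys(keys, today=date(2021, 6, 11)))
```

Keeping the retention decision separate from the S3 calls makes the pruning logic testable without AWS credentials, while the nightly cron job wires it to the real bucket.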
I maintain complete faith that no matter how big the site gets, we will be covered, either by receiving donations, integrating a sustainable business model or lowering costs. No matter what degree of material success is enjoyed by our Membership we will work out a way for each Member to share in the spoils once the operating costs are covered.