🔗 Static website with Hugo

Generate a static website with Hugo

Generate a new site and select a theme

brew install hugo

hugo new site s3website

cd s3website

git clone https://github.com/theNewDynamic/gohugo-theme-ananke.git themes/ananke

echo 'theme = "ananke"' >> config.toml

Change the title

sed -i.bak 's/My New Hugo Site/S3 Website/g' config.toml
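.
After these edits, config.toml should look something like this (the exact defaults vary by Hugo version)

baseURL = "http://example.org/"
languageCode = "en-us"
title = "S3 Website"
theme = "ananke"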

Create a (draft) post

hugo new posts/my-first-post.md

echo 'It Works!' >> content/posts/my-first-post.md
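.
The generated post uses YAML front matter and is marked as a draft; it should look something like this (your date will differ)

---
title: "My First Post"
date: 2021-09-03T12:00:00+02:00
draft: true
---
It Works!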

To preview on localhost, first start the Hugo server with drafts enabled

hugo server -D
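.
By default the preview is served at

http://localhost:1313/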

Publish the draft (by changing “draft: true” to “draft: false”)

sed -i.bak 's/draft: true/draft: false/g' content/posts/my-first-post.md

Build static pages

hugo

List site map

tree public
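.
The output should include index.html and 404.html at the root, something like this (abridged, the exact files depend on the theme)

public
├── 404.html
├── index.html
└── posts
    └── my-first-post
        └── index.html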

🔗 Configure S3 bucket

Configure the S3 bucket as a static website

First, install and configure awscli. The commands below assume you’ve created a named profile called your-profile
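.
If you haven’t created the profile yet, you can do so with

aws configure --profile your-profile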

Create an S3 bucket. The bucket name must be globally unique

aws --profile your-profile s3 mb s3://s3website.mozey.co

Configure the bucket we just created as a static website. Note that the index and error documents must match the paths as per the site map above

aws --profile your-profile s3 website s3://s3website.mozey.co/ --index-document index.html --error-document 404.html

aws --profile your-profile s3api get-bucket-website --bucket s3website.mozey.co
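.
The get-bucket-website command should return the configuration we just set, something like

{
    "IndexDocument": {
        "Suffix": "index.html"
    },
    "ErrorDocument": {
        "Key": "404.html"
    }
}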

Edit the public access settings using put-public-access-block. “By default, Amazon S3 blocks public access to your account and buckets.”

# Note: the `aws s3 website` command only sets the website
# configuration, it does not change the public access block.
# Depending on your account settings, new buckets may block
# public access by default, in which case this must be relaxed
# before the public bucket policy below can be applied
aws --profile your-profile s3api put-public-access-block \
--bucket s3website.mozey.co \
--public-access-block-configuration "BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false"

Create a bucket policy to make the content public. Note that the “Resource” contains the bucket name s3website.mozey.co; the rest is standard policy

echo '{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::s3website.mozey.co/*"
            ]
        }
    ]
}' > bucket-policy-s3website.json

Apply the policy to your bucket

aws --profile your-profile s3api put-bucket-policy --bucket s3website.mozey.co --policy file://bucket-policy-s3website.json

aws --profile your-profile s3api get-bucket-policy --bucket s3website.mozey.co
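.
Note that get-bucket-policy returns the policy document as an escaped JSON string; to print just the policy, something like this works

aws --profile your-profile s3api get-bucket-policy --bucket s3website.mozey.co --query Policy --output text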

Your bucket is now publicly accessible! Note

  • the bucket URL contains a region and has the format below; replace with your bucket name and region. See troubleshooting
http://BUCKET.s3-website.REGION.amazonaws.com
  • EDIT 2021-09-03 the example bucket is only accessible via the Cloudflare CDN; to see it, go to s3website.mozey.co

If you haven’t deployed the site yet, you might see something like the following

404 Not Found
Message: The specified key does not exist.
Key: index.html

🔗 Deploy to S3

Using aws s3 sync

aws --profile your-profile s3 sync public s3://s3website.mozey.co --delete
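.
To preview what sync would upload or delete without changing anything, add the --dryrun flag

aws --profile your-profile s3 sync public s3://s3website.mozey.co --delete --dryrun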

Alternative deployment tools (not AWS specific)

  • rclone, “a command line program to manage files on cloud storage”, supports many providers “as well as standard transfer protocols”
  • minio/mc “provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff, find etc. It supports filesystems and Amazon S3 compatible cloud storage services”

🔗 DNS

Create a CNAME for s3website.mozey.co pointing to the bucket website endpoint (the target must end with a dot)
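.
For example, assuming the bucket lives in eu-west-2, the record would look something like this in zone-file notation

s3website.mozey.co. 300 IN CNAME s3website.mozey.co.s3-website.eu-west-2.amazonaws.com.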

Note, if you want the root domain URL to redirect to the subdomain, e.g. you’d like your site to be available via both:

  • A subdomain URL, such as https://www.mozey.co, and
  • the root domain URL, such as https://mozey.co

Configure the root domain bucket to redirect to the subdomain. The command to do that will look something like this

echo '{
    "RedirectAllRequestsTo": {
        "HostName": "www.mozey.co",
        "Protocol": "https"
    }
}' > bucket-redirect.json

aws --profile your-profile s3api put-bucket-website --bucket mozey.co --website-configuration file://bucket-redirect.json
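.
Note that the root domain bucket (mozey.co in this example) must already exist; create it the same way as before

aws --profile your-profile s3 mb s3://mozey.co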

Question: why not use one bucket for both the root and the subdomain? The answer is

  • S3 wasn’t designed for hosting websites
  • Bucket names must be globally unique; if someone’s already taken a bucket with the name you want, then you’re out of luck
  • A records must point to an IP address. So if someone’s taken the bucket with the same name as your root domain, one workaround would be to use EC2 to host a webserver, then redirect to the subdomain from there

Remember to set baseURL in your Hugo site config, e.g. in config.toml

baseURL = "https://www.mozey.co/"

🔗 CloudFront

“Amazon S3 website endpoints do not support HTTPS or access points. If you want to use HTTPS, you can use Amazon CloudFront to serve a static website hosted on Amazon S3.”

See Using a website endpoint as the origin, with access restricted by a Referrer header: “When you use the Amazon S3 static website endpoint, connections between CloudFront and Amazon S3 are available only over HTTP. To use HTTPS for connections between CloudFront and Amazon S3, configure an S3 REST API endpoint for your origin”

TL;DR: this requires a bunch of commands to set up a CloudFront distribution, request a cert, etc.

🔗 Cloudflare

“You can use Cloudflare to proxy sites that rely on Amazon Web Services (AWS) to store static content using Amazon’s Simple Storage Service (S3)”

Using the aws s3api put-bucket-policy command, replace the public policy with the one from the link. This “…ensures that your site only responds to requests coming from the Cloudflare proxy. This is the current list of IP address ranges used by the Cloudflare proxy”
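.
A sketch of what that policy might look like. The IP ranges below are an abridged sample and will go stale; always use the current list from https://www.cloudflare.com/ips/

echo '{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudflareReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::s3website.mozey.co/*"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "173.245.48.0/20",
                        "103.21.244.0/22",
                        "141.101.64.0/18",
                        "108.162.192.0/18"
                    ]
                }
            }
        }
    ]
}' > bucket-policy-cloudflare.json

aws --profile your-profile s3api put-bucket-policy --bucket s3website.mozey.co --policy file://bucket-policy-cloudflare.json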

For existing domains, do it like this

  • First “add the site” in Cloudflare so it automatically picks up existing DNS
  • Change the name servers or glue records for the domain. NOTE in Route 53 the NS record must be changed under Registered Domains, not the Hosted Zone
  • Wait for the NS record update to propagate
  • The redirect on the root domain bucket as described above is necessary because A-records must point to an IP address
  • Then change the bucket policy as above for redirect on the root, and limit access to Cloudflare on the subdomain

For this blog the Cloudflare settings are

Automatic HTTPS Rewrites: ON
Always use HTTPS: ON
Auto Minify: NONE
Brotli Compression: ON

🔗 Backup Strategy

Assuming the site above is hosted in the primary AWS account, create a backup AWS account with a different email address (and billing?). The backup AWS account pulls data from the primary account, i.e. the primary does not have any permissions in the backup account. Other than the provision just mentioned, and the globally unique S3 bucket name requirement, the primary and backup accounts must be identical. In case of emergency, change the Cloudflare DNS to use resources in the backup account until the primary has been restored

🔗 Troubleshooting

The bucket URL depends on the region: some regions use a dash (-), while others use a dot (.), e.g. eu-west-2 uses a dot and us-west-2 uses a dash
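.
For example

http://BUCKET.s3-website.eu-west-2.amazonaws.com
http://BUCKET.s3-website-us-west-2.amazonaws.com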

From the link above for the aws s3 website command: “All files in the bucket that appear on the static site must be configured to allow visitors to open them. File permissions are configured separately from the bucket website configuration”. See Setting permissions for website access; however, the aws s3api commands listed above should have taken care of this