Blog CD Pipeline with AWS CodePipeline

Jumped out of order from my earlier checklist and set up some automagic build and deploy. I’d wanted an excuse to try out CodePipeline, so this was it!

So, how does this blog work? It is deployed to an S3 bucket with CloudFront in front of it. CloudFront is set up to use the free certificates (served via SNI) to provide TLS. Previously, I pushed manually via s3cmd, which worked well after some incantation fiddling.
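For reference, the manual push was a one-liner along these lines (the bucket name and exact flags are illustrative, not my precise incantation):

```
s3cmd sync --acl-public --delete-removed --guess-mime-type public/ s3://<bucket>/
```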

I won’t write a full CodeBuild and CodeDeploy tutorial; Amazon has that well covered. But a couple of bits were tricky to work out, so I’ll talk about those.

First, CodePipeline needs to be the thing that triggers everything. This is important because CodeBuild has no mechanism (that I could find) to only care about particular branches. CodeBuild really just does builds (kind of). Conceptually, do everything through CodePipeline, and the other services are just steps that react to the pipeline.
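Concretely, the branch filter lives on the pipeline’s source action, not on the build. In the pipeline definition it looks roughly like this (repo, owner, and token here are placeholders, not my real values):

```json
{
  "name": "Source",
  "actionTypeId": {
    "category": "Source",
    "owner": "ThirdParty",
    "provider": "GitHub",
    "version": "1"
  },
  "configuration": {
    "Owner": "example-user",
    "Repo": "blog",
    "Branch": "master",
    "OAuthToken": "****"
  },
  "outputArtifacts": [{ "name": "SourceOutput" }]
}
```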

Given this is a static site, the build step just builds a tarball:

version: 0.2

phases:
  build:
    commands:
      - wget https://github.com/gohugoio/hugo/releases/download/v0.31/hugo_0.31_Linux-64bit.deb
      - dpkg -i ./hugo_0.31_Linux-64bit.deb
      - hugo
      - tar -C public -cvzf site.tgz .  # archive name assumed; the original was lost
artifacts:
  files:
    - site.tgz

Nothing fancy here, but having the artifact for CodePipeline to pass around is important.

My first pass just had the deploy at the end of the build, but I want to be able to insert some basic tests before I deploy new versions: things like link verification, HTML5 validation, and maybe running Lighthouse against a test instance before letting it out. The test part :-) Because of this, I wanted to separate build from deploy. It turns out “deploy by copying into an S3 bucket” is not a thing CodeDeploy has any concept of.
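The eventual test stage would just be a third CodeBuild step wedged between build and deploy. A sketch of what that buildspec might look like (the tool choice and archive name are my assumptions, not something I’m running today):

```yaml
version: 0.2

phases:
  build:
    commands:
      - tar -xf site.tgz
      - htmlproofer . --check-html --disable-external
```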

So my “deploy” is just another CodeBuild build:

version: 0.2

phases:
  build:
    commands:
      - tar -xf site.tgz  # archive name assumed, matching the build step
      - rm site.tgz
      - aws s3 sync . s3://<bucket> --acl public-read --cache-control public,max-age=600

You can feed one build’s output to another; CodeBuild doesn’t mind. When I configured the second build, I had to set it up with an output artifact or CodePipeline wouldn’t let me add it. After I saved the CodePipeline changes, I could go back and remove that output. The other decent path is probably a Lambda function that takes apart the tarball and copies things over… but this build approach seems simpler.

I tried to put a CloudFront invalidation into the last step as well, but the version of the AWS CLI on the build image is old and doesn’t support it. I’ll sort that out later. Once I do, I’ll change max-age to max-age=31536000 or so and add something like:

- aws cloudfront create-invalidation --distribution-id E1DTTO3T6ZPN9M --paths / /index.html /404.html /archive.html /index.xml

to the build commands, and voila!

Knock, knock. This thing still on?

I’ve attempted to wake this blog up a couple times, but between Jekyll changing, Pygments changing, and whatnot, it has been more pain than any given post seemed worth. I’ve recently had three folks independently chastise me for no longer writing, however, and three is a magic number.

So, this is basically just a test post as I try converting over to Hugo. Jekyll resisted hard enough that I declared bankruptcy. We’ll try this one. If I get annoyed at it, Gutenberg is next on the list. Torsten says nice things about it :-)

All my old posts should be exactly where I left them, no sense breaking URLs.
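Hugo can be told to keep the old Jekyll-style paths with a permalinks rule in config.toml. Something like this (the exact pattern depends on what the old URLs looked like, so treat it as a sketch):

```toml
[permalinks]
  post = "/:year/:month/:day/:title/"
```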

For a test post I need to include some code, that being most of the point of this thing, so here’s some Rust from wsf.

impl From<hyper::error::Error> for CliError {
    fn from(err: hyper::error::Error) -> CliError {
        CliError::Hyper(err) // body was truncated; the variant name is assumed
    }
}

So far, so good.

Off to deploy and see if this thing works!