~ friquelme.dev
< cd ../blog

deploying an astro site to aws — the full pipeline

| 9 min read | florian riquelme
aws astro devops github-actions
>

tldr: this entire site — astro frontend, aws cdk infrastructure, github actions pipeline — is open source. grab the code at github.com/FlorianRiquelme/friquelme.dev and use it as a starting point for your own setup.

The Goal

Every push to main should result in the site being live — no manual steps, no stored AWS credentials, and a cache strategy that keeps the site fast without serving stale content.

This post walks through exactly how friquelme.dev is deployed: from the GitHub Actions workflow to the AWS infrastructure defined with CDK.

Architecture Overview

The pipeline has four moving parts:

  1. GitHub Actions — builds the Astro site and syncs files to S3
  2. OIDC federation — short-lived AWS credentials with no stored secrets
  3. S3 — bucket with static website hosting, serving as the origin for CloudFront
  4. CloudFront — CDN with HTTPS, HTTP/2+3, and security headers

Everything is defined as code. The GitHub Actions workflow handles CI/CD, and AWS CDK manages the infrastructure.

The GitHub Actions Workflow

The entire deployment lives in a single workflow file. Here’s the full thing:

name: Deploy to AWS

on:
  push:
    branches: [main]
  workflow_dispatch:

permissions:
  id-token: write
  contents: read

concurrency:
  group: deploy-production
  cancel-in-progress: false

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4

      - uses: pnpm/action-setup@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: pnpm

      - run: pnpm install --frozen-lockfile

      - run: pnpm run build

      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_DEPLOY_ROLE_ARN }}
          aws-region: us-east-1

      # Sync hashed assets with immutable cache headers
      - name: Sync _astro/ assets
        run: |
          aws s3 sync dist/_astro/ s3://${{ secrets.S3_BUCKET_NAME }}/_astro/ \
            --cache-control "public,max-age=31536000,immutable" \
            --delete

      # Sync root files with must-revalidate cache headers
      - name: Sync root files
        run: |
          aws s3 sync dist/ s3://${{ secrets.S3_BUCKET_NAME }}/ \
            --exclude "_astro/*" \
            --cache-control "public,max-age=0,must-revalidate" \
            --delete

      # Invalidate only critical non-hashed paths
      - name: Invalidate CloudFront
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} \
            --paths "/" "/index.html" "/favicon.ico" "/favicon.svg" "/blog/*"

A few things worth noting.

OIDC Authentication

The permissions.id-token: write line is what enables OpenID Connect federation. Instead of storing long-lived AWS access keys as GitHub secrets, the workflow exchanges a short-lived GitHub token for temporary AWS credentials via aws-actions/configure-aws-credentials@v4.

The trust relationship is locked down to:

  • This specific repository
  • The production GitHub environment only
  • A 1-hour maximum session duration (the minimum AWS allows)

This means even if someone forks the repo, they can’t assume the deploy role.
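As an illustration (a hypothetical helper, not part of the stack), the condition AWS evaluates before issuing credentials amounts to a string match on the OIDC token's `sub` claim:

```typescript
// Hypothetical sketch of the check AWS performs on the GitHub OIDC
// token's `sub` claim (the real trust policy uses a StringLike condition).
function canAssumeDeployRole(sub: string): boolean {
  const allowed = 'repo:FlorianRiquelme/friquelme.dev:environment:production';
  return sub === allowed;
}

// A deploy from this repo's production environment passes:
canAssumeDeployRole('repo:FlorianRiquelme/friquelme.dev:environment:production'); // true
// A fork presents its own repo in the `sub` claim and is rejected:
canAssumeDeployRole('repo:someone-else/friquelme.dev:environment:production');    // false
```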

Two-Phase S3 Sync

Astro generates hashed filenames for all processed assets (JS, CSS, images) under _astro/. These files are content-addressable — the filename changes when the content changes. This makes them safe to cache forever:

dist/_astro/Layout.DxF4k2.css
dist/_astro/index.B7mK3p.js

The sync strategy exploits this:

  1. _astro/* → max-age=31536000,immutable — cached for 1 year, browsers never revalidate
  2. Everything else (HTML, favicons) → max-age=0,must-revalidate — always checks for fresh content

This gives us the best of both worlds: instant loads for returning visitors on unchanged assets, and immediate updates for new content.
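The decision the two sync commands encode can be expressed as a tiny function (a sketch for clarity, not code the pipeline actually runs):

```typescript
// Mirrors the two-phase sync: hashed assets under _astro/ are immutable,
// everything else must be revalidated on every request.
function cacheControlFor(path: string): string {
  return path.startsWith('_astro/')
    ? 'public,max-age=31536000,immutable'
    : 'public,max-age=0,must-revalidate';
}

cacheControlFor('_astro/index.B7mK3p.js'); // "public,max-age=31536000,immutable"
cacheControlFor('blog/index.html');        // "public,max-age=0,must-revalidate"
```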

Selective CloudFront Invalidation

Instead of invalidating /* (which costs money at scale and is slow), we only invalidate the paths that actually matter for freshness: the homepage, favicons, and blog content. HTML pages already have must-revalidate cache headers, so CloudFront will check the origin on every request anyway.

>

why not invalidate /*? CloudFront charges $0.005 per path after the first 1,000 paths invalidated each month. For a static site with few critical paths, targeted invalidation is both faster and cheaper.
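To put numbers on it, a back-of-envelope sketch using the pricing above:

```typescript
// Monthly invalidation cost: the first 1,000 paths are free,
// then $0.005 per path.
function invalidationCost(pathsPerMonth: number): number {
  const freePaths = 1000;
  const perPath = 0.005;
  return Math.max(0, pathsPerMonth - freePaths) * perPath;
}

invalidationCost(150);    // 0 (5 targeted paths per daily deploy stays in the free tier)
invalidationCost(10_000); // 45
```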

Concurrency Control

concurrency:
  group: deploy-production
  cancel-in-progress: false

The cancel-in-progress: false setting is intentional. If two pushes happen in quick succession, we don’t want the second deploy to cancel the first mid-sync — that could leave S3 in an inconsistent state. Instead, the second deploy queues and runs after the first completes.

AWS Infrastructure with CDK

The infrastructure is defined in two CDK stacks. These are deployed manually (not through CI) since infrastructure changes are infrequent and warrant human review.

The Static Site Stack

const DOMAIN_NAME = 'friquelme.dev';

// S3 bucket — static website hosting for subdirectory index resolution
this.bucket = new s3.Bucket(this, 'SiteBucket', {
  websiteIndexDocument: 'index.html',
  publicReadAccess: true,
  blockPublicAccess: s3.BlockPublicAccess.BLOCK_ACLS,
  objectOwnership: s3.ObjectOwnership.BUCKET_OWNER_PREFERRED,
  removalPolicy: cdk.RemovalPolicy.DESTROY,
  autoDeleteObjects: true,
});

The S3 bucket uses static website hosting with websiteIndexDocument set. This is critical for multi-page static sites — more on why below.

CloudFront Configuration

this.distribution = new cloudfront.Distribution(this, 'SiteDistribution', {
  defaultBehavior: {
    origin: new origins.S3StaticWebsiteOrigin(this.bucket),
    viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
    responseHeadersPolicy,
  },
  domainNames: [DOMAIN_NAME, `www.${DOMAIN_NAME}`],
  certificate,
  httpVersion: cloudfront.HttpVersion.HTTP2_AND_3,
  priceClass: cloudfront.PriceClass.PRICE_CLASS_ALL,
});

Key decisions:

  • S3StaticWebsiteOrigin — connects CloudFront to S3’s website hosting endpoint, which handles index document resolution natively
  • No defaultRootObject — S3 website hosting handles this; setting it on CloudFront would only apply to the root path anyway
  • HTTP/2 and HTTP/3 — HTTP/3 uses QUIC (UDP-based), which eliminates head-of-line blocking and reduces connection setup latency
  • Price Class All — uses all CloudFront edge locations globally for the lowest latency everywhere, since the free tier covers most personal site traffic anyway
  • REDIRECT_TO_HTTPS — all HTTP requests are upgraded, no mixed content possible

The OAC Trap — Why Your Subpages Break

This deserves its own section because it’s a subtle issue that will bite anyone deploying a multi-page static site to S3 + CloudFront.

The “modern” approach you’ll find in most tutorials and AWS docs is to use Origin Access Control (OAC) — keep the S3 bucket private, and let CloudFront authenticate requests to it. This is what CDK’s S3BucketOrigin.withOriginAccessControl() sets up. It sounds clean and secure.

The problem: OAC uses the S3 REST API, which does not resolve subdirectory index documents.

When a user visits /blog, Astro has generated blog/index.html in the bucket. With S3 static website hosting, the request resolves correctly: /blog → blog/index.html. But the S3 REST API (used by OAC) treats /blog as a key lookup — there’s no object with that key, so S3 returns 403 Forbidden.

Here’s where it gets worse. A common CDK pattern is to add custom error responses to “fix” SPAs:

// DON'T do this for multi-page static sites
errorResponses: [
  {
    httpStatus: 403,
    responseHttpStatus: 200,
    responsePagePath: '/index.html',
  },
  {
    httpStatus: 404,
    responseHttpStatus: 200,
    responsePagePath: '/index.html',
  },
],

This catches the 403 and silently serves the homepage. For a single-page app with client-side routing, that’s fine — the JS router picks up the URL and renders the right view. But for a statically generated site like Astro in static mode, there’s no client-side router. The user sees the homepage, the URL says /blog, and nothing looks broken at first glance. It’s a silent failure.

>

the symptom: navigating to any subpage (like /blog) shows the homepage instead of the actual page content. the url changes but the wrong page is served. no errors in the console, no 404 — just the wrong content.

The fix is to use S3 static website hosting as the origin instead of OAC:

  1. Enable websiteIndexDocument on the bucket
  2. Allow public read access (CloudFront still handles HTTPS and caching)
  3. Use S3StaticWebsiteOrigin instead of S3BucketOrigin.withOriginAccessControl
  4. Remove defaultRootObject from the distribution (S3 handles it)
  5. Remove errorResponses (no more masking real errors)

Yes, the bucket is technically public. But the content is a static website — it’s meant to be public. The security headers, HTTPS enforcement, and cache policies are all handled at the CloudFront layer regardless of origin type. If your content is meant to be served to the internet, the “private bucket + OAC” approach adds complexity without meaningful security benefit, and breaks multi-page routing in the process.

>

alternatives if you really want a private bucket: you can keep OAC and add a CloudFront Function to rewrite URIs (appending /index.html to directory paths). this works but adds another moving part. for most static sites, website hosting is simpler and more reliable.
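For reference, the rewrite such a CloudFront Function would perform is simple. This sketch shows the logic in TypeScript (actual CloudFront Functions run in a restricted JavaScript runtime and receive an event object, not a bare string):

```typescript
// Append index.html to directory-style URIs so the S3 REST API
// (used by OAC) finds the object Astro actually generated.
function rewriteUri(uri: string): string {
  if (uri.endsWith('/')) return uri + 'index.html';   // /blog/ -> /blog/index.html
  if (!uri.includes('.')) return uri + '/index.html'; // /blog  -> /blog/index.html
  return uri;                                         // /main.css passes through
}

rewriteUri('/blog');                    // "/blog/index.html"
rewriteUri('/_astro/index.B7mK3p.js'); // unchanged
```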

Security Headers

Every response gets hardened headers injected at the CDN level:

const responseHeadersPolicy = new cloudfront.ResponseHeadersPolicy(
  this, 'SecurityHeaders', {
    securityHeadersBehavior: {
      strictTransportSecurity: {
        accessControlMaxAge: cdk.Duration.seconds(63072000), // 2 years
        includeSubdomains: true,
        preload: true,
        override: true,
      },
      contentTypeOptions: { override: true },
      frameOptions: {
        frameOption: cloudfront.HeadersFrameOption.DENY,
        override: true,
      },
      referrerPolicy: {
        referrerPolicy:
          cloudfront.HeadersReferrerPolicy.STRICT_ORIGIN_WHEN_CROSS_ORIGIN,
        override: true,
      },
    },
  },
);

This gives us:

  • HSTS with 2-year max-age, subdomains, and preload — the browser will never make an insecure connection
  • X-Content-Type-Options: nosniff — prevents MIME-type sniffing attacks
  • X-Frame-Options: DENY — blocks clickjacking
  • Referrer-Policy — only sends origin on cross-origin requests
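On the wire, those four policies come out as the following response headers (values mirror the CDK config above):

```typescript
// What the ResponseHeadersPolicy injects into every response.
const securityHeaders: Record<string, string> = {
  'strict-transport-security': 'max-age=63072000; includeSubDomains; preload',
  'x-content-type-options': 'nosniff',
  'x-frame-options': 'DENY',
  'referrer-policy': 'strict-origin-when-cross-origin',
};
```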

DNS and TLS

const certificate = new acm.Certificate(this, 'SiteCertificate', {
  domainName: DOMAIN_NAME,
  subjectAlternativeNames: [`www.${DOMAIN_NAME}`],
  validation: acm.CertificateValidation.fromDns(hostedZone),
});

new route53.ARecord(this, 'SiteAliasRecord', {
  zone: hostedZone,
  target: route53.RecordTarget.fromAlias(
    new targets.CloudFrontTarget(this.distribution),
  ),
});

The ACM certificate covers both the apex domain and www. DNS validation via Route53 means certificate renewal is fully automatic — no manual intervention, no expiry surprises.

The OIDC Stack

The second stack sets up the trust relationship between GitHub Actions and AWS:

const oidcProvider = new iam.OpenIdConnectProvider(
  this, 'GitHubOidcProvider', {
    url: 'https://token.actions.githubusercontent.com',
    clientIds: ['sts.amazonaws.com'],
  },
);

const deployRole = new iam.Role(this, 'GitHubDeployRole', {
  assumedBy: new iam.OpenIdConnectPrincipal(oidcProvider, {
    StringEquals: {
      'token.actions.githubusercontent.com:aud': 'sts.amazonaws.com',
    },
    StringLike: {
      'token.actions.githubusercontent.com:sub':
        `repo:${GITHUB_REPO}:environment:production`,
    },
  }),
  maxSessionDuration: cdk.Duration.hours(1),
});

The role’s permissions follow least privilege — it can only:

  • Read, write, and delete objects in the site bucket
  • List the bucket contents
  • Create CloudFront invalidations

Nothing else. No s3:*, no cloudfront:*. If the credentials were somehow compromised, the blast radius is limited to this specific bucket and distribution.
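Concretely, the role's policy looks roughly like this. A sketch of the CDK statements, assuming the deployRole and site bucket from the stacks above; distributionArn is a placeholder for the actual distribution ARN:

```typescript
// Least-privilege grants: scoped to the one bucket and one
// distribution the pipeline touches.
deployRole.addToPolicy(new iam.PolicyStatement({
  actions: ['s3:GetObject', 's3:PutObject', 's3:DeleteObject'],
  resources: [siteBucket.arnForObjects('*')],
}));
deployRole.addToPolicy(new iam.PolicyStatement({
  actions: ['s3:ListBucket'],
  resources: [siteBucket.bucketArn],
}));
deployRole.addToPolicy(new iam.PolicyStatement({
  actions: ['cloudfront:CreateInvalidation'],
  resources: [distributionArn], // placeholder: the distribution's ARN
}));
```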

The Full Flow

Putting it all together, a deploy looks like this:

  1. git push origin main — triggers the workflow
  2. Build — pnpm installs dependencies, Astro builds static files to dist/
  3. OIDC exchange — GitHub token → temporary AWS credentials (STS)
  4. S3 sync phase 1 — hashed assets with immutable cache headers
  5. S3 sync phase 2 — HTML and root files with must-revalidate headers
  6. CloudFront invalidation — clears cached versions of /, /index.html, favicons
  7. Live — the site is updated, typically under 2 minutes end-to-end

The entire pipeline is reproducible, auditable, and runs without any stored secrets. The CDK stacks can be torn down and recreated at any time, and the GitHub Actions workflow is self-contained.

Cost

For a personal portfolio with modest traffic, the monthly AWS bill is under $1:

  • S3 — pennies for storage and requests
  • CloudFront — free tier covers 1TB/month of data transfer
  • Route53 — $0.50/month for the hosted zone
  • ACM — free for public certificates

The GitHub Actions minutes are free for public repos.


If you’re deploying a static site to AWS and want to avoid the common pitfalls — broken subpage routing with OAC, long-lived credentials, stale caches — this setup is a solid starting point. The full source code for both the site and infrastructure is on GitHub.