Designing My Personal Site as a Production System
Summary
This post walks through how I designed and built this site as a small, intentional system: one that I fully own, is easy to publish to, and requires almost no ongoing operational effort. Starting from clear goals and constraints, I chose a static architecture on AWS that prioritizes simplicity, low cost, and long-term maintainability over unnecessary flexibility.
What follows is a high-level breakdown of the architecture, the trade-offs I considered, and why this approach works well for a personal site that’s meant to last.
Context & Problem Statement
Why this site exists and what problem it solves
I built this website because I wanted a place on the internet that I fully own. I wanted it to serve two purposes: a professional portfolio and a space to write about ideas I care about, without relying on a publishing platform to do that for me.
I approached this project the same way I approach production systems. Before writing any code, I defined what I wanted the system to do, the constraints it needed to operate within, and how I would measure success. The goal was not simply to publish content, but to do so in a way that balances ownership, simplicity, cost, and long-term maintainability, while reflecting how I think about architecture in real systems.
The problem itself is familiar. Many blogging platforms make publishing easy, but trade away control and flexibility. Fully custom sites restore that control, but often introduce operational overhead that is hard to justify for a personal project. I wanted a system that feels lightweight to use, while still being thoughtfully designed.
Goals
- Make writing and publishing new posts simple and low effort.
- Establish an online presence without platform lock-in.
- Use the site to showcase my work and how I approach problems.
- Gain additional hands-on experience designing and operating within AWS.
Constraints
- The site needed to fit into a busy life alongside work, family, and other commitments.
- Monthly costs needed to stay low at current traffic levels. The original static placeholder cost $1.83 per month.
- The architecture needed to remain viable for years or be easy to change as needs evolve.
Success Criteria
- Pages load quickly and look good on mobile and desktop.
- Publishing a new post requires minimal setup.
- Ongoing operational effort is close to zero.
- The system is secure and able to grow over time.
These goals and constraints shaped every architectural decision that followed.
Architecture Overview
How the system is structured and why these choices work
I chose a static site architecture because it aligns with my goals: speed, low operational effort, and low cost. The site is built using Zola, a static site generator (SSG) that lets me write in Markdown, version everything in GitHub, and produce predictable, cache-friendly output. There is no runtime server, no database, and no application logic running in production.
Zola has an active theme community, which helped me move quickly. I chose the Linkita theme because it provided a clean, minimalist layout with SEO defaults out of the box. It is easy to customize when needed, but it allowed me to launch without spending time on design work that was not central to the problem I was solving.
I purchased my domain through Porkbun and connected it to Amazon Route 53 for DNS, keeping domain ownership simple while managing routing alongside the rest of the infrastructure.
From there, the system is fully automated.
Build and deployment flow
All source code and content live in a GitHub repository. A push to the main branch triggers the deployment pipeline.
AWS CodePipeline orchestrates the workflow, pulling source code from GitHub and coordinating build and deployment stages. Artifacts move between stages using Amazon S3.
AWS CodeBuild runs the build using a buildspec.yml file in the repository. This is where Zola generates the static site. Keeping build logic in version control makes the process easy to understand and change.
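The buildspec itself can stay small: install Zola, run the build, and hand the generated output to the pipeline as artifacts. A minimal sketch of what such a file can look like (the Zola version, download URL, and output directory here are illustrative, not necessarily what this site uses):

```yaml
version: 0.2

phases:
  install:
    commands:
      # Install a pinned Zola release (version and URL are illustrative)
      - wget -q https://github.com/getzola/zola/releases/download/v0.19.2/zola-v0.19.2-x86_64-unknown-linux-gnu.tar.gz
      - tar -xzf zola-v0.19.2-x86_64-unknown-linux-gnu.tar.gz -C /usr/local/bin
  build:
    commands:
      # Zola writes the generated static site to public/ by default
      - zola build

artifacts:
  # Everything under public/ becomes the deployable artifact
  base-directory: public
  files:
    - '**/*'
```

Pinning the generator version in the buildspec keeps builds reproducible: a rebuild next year produces the same output as today.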
Once the build completes, CodePipeline’s S3 Deployment action deploys the generated files into a private S3 bucket. This bucket is never publicly accessible and exists only as an origin for CloudFront.
Content delivery and routing
Amazon CloudFront serves the site globally and handles caching at the edge. I configured CloudFront Origin Access Control (OAC) so CloudFront can read from the private S3 bucket without exposing it to the public internet.
A Web Application Firewall (WAF) is attached at the edge to provide baseline protection. Even for a static site, security is treated as a default requirement.
I use a CloudFront Function to map clean URLs like /about to static file paths such as /about/index.html, and a Lambda function to invalidate the CloudFront cache automatically after each deployment. This keeps URLs readable and deployments hands-off without adding backend complexity.
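The URL rewrite is a common CloudFront Functions pattern. A sketch of the kind of viewer-request handler described above (the exact logic deployed for this site may differ):

```javascript
// CloudFront Function (viewer-request event) that maps clean URLs to
// the index.html files Zola generates. A sketch, not the deployed code.
function handler(event) {
    var request = event.request;
    var uri = request.uri;

    if (uri.endsWith('/')) {
        // /about/ -> /about/index.html
        request.uri = uri + 'index.html';
    } else if (!uri.includes('.')) {
        // /about -> /about/index.html (no extension means a "clean" URL)
        request.uri = uri + '/index.html';
    }

    // Paths with an extension (/style.css, /img/logo.png) pass through unchanged
    return request;
}
```

Because the function runs at the edge on every request, it keeps the S3 origin simple: the bucket only ever serves real object keys.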
Why this architecture works
Publishing a new post is just a Git commit. There are no servers to manage, no manual deployment steps, and no runtime infrastructure to maintain. Costs stay low, performance stays high, and the system can evolve without a rewrite.
This reflects how I build products: define the problem first, automate where possible, and design for long-term maintainability.
Alternatives Considered and Trade-offs
I considered a more dynamic, application-style architecture before settling on a static site. These options offered flexibility, but they also introduced complexity that did not align with my goals.
Dynamic web application stack
One option was to build the site using Next.js with Tailwind CSS, storing static assets and images in S3. To support dynamic features, I considered adding:
- Amazon API Gateway with AWS Lambda
- Amazon Aurora Serverless (MySQL)
- A headless CMS such as Sanity or Strapi
This stack is powerful and common in production systems. It supports dynamic content, forms, and customization. It is also more expensive to operate, more complex to secure, and harder to maintain over time.
For a personal site focused on writing and presentation, it solved problems I did not actually have.
Why I chose not to build it this way
The trade-off was flexibility versus operational simplicity.
A dynamic stack would require managing APIs, databases, authentication, security updates, and runtime behavior. None of that improved the writing or reading experience. It increased surface area and long-term cost.
Known trade-offs of the static approach
A static architecture does have limitations:
- Limited functionality: There is no native backend. Any future form support, for example, would require a small serverless addition.
- Limited customization: The site is customizable, but not as flexible as a full React application.
- Build and redeployment dependency: The entire site must be rebuilt whenever a new post is published. If the project grew much larger, I could expect slower builds and deployments.
For my current needs and traffic, I chose not to over-engineer.
Security and Trust Boundaries
I treated security as a default requirement, even for a personal site. The most effective way to reduce risk was to keep the attack surface small.
There are no public servers, databases, or backend APIs. The only publicly reachable service is CloudFront.
The S3 bucket that stores site content is private. Access is restricted using CloudFront Origin Access Control (OAC), which allows CloudFront to fetch objects while preventing direct access to the bucket.
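With OAC, the bucket policy grants read access to the CloudFront service principal, scoped to a single distribution. A sketch of that policy shape (the bucket name, account ID, and distribution ID below are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOACRead",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::EXAMPLE-CONTENT-BUCKET/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
        }
      }
    }
  ]
}
```

The SourceArn condition is what makes the boundary tight: even another CloudFront distribution in the same account cannot read from the bucket.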
A Web Application Firewall (WAF) is attached at the edge to provide basic protection against common request patterns.
Trust boundaries are simple:
- GitHub is the source of truth for code and content.
- AWS handles build, deployment, and delivery.
- End users interact only with CloudFront.
This keeps the system easy to reason about and reduces the risk of accidental exposure.
Cost and Operational Footprint
Keeping costs low was a core constraint.
There is no always-on compute. Every service is event-driven or usage-based. If I do not publish new content, the system is effectively idle.
At current traffic levels, monthly costs are comparable to the original static placeholder site. CloudFront, S3, and Route 53 account for most of the cost. CodePipeline, CodeBuild, and Lambda only incur charges during deployments.
Operational effort is minimal:
- No servers to patch.
- No databases to maintain.
- No manual deployments.
Publishing is a Git push. Everything else is automated.
Architecture Diagram
Below is a high-level view of the system.
```mermaid
flowchart TD
    A[GitHub Repository] -->|Push to main| B[AWS CodePipeline]
    B --> C[AWS CodeBuild<br/>Zola Build]
    C --> D[S3 Build Artifacts]
    D --> E[S3 Private Content Bucket]
    E -->|Deploy Complete| B
    B -->|Post-Deploy Action| K[AWS Lambda<br/>CloudFront Invalidation]
    K -->|Create Invalidation| F[Amazon CloudFront]
    E -->|OAC| F
    F -.-> L[CloudFront Function<br/>URL Rewrite]
    F -.-> G[AWS WAF]
    F --> H[End Users]
    I[Porkbun Domain] --> J[Route 53]
    J --> F
```
Closing Thoughts
This site is intentionally simple.
Every decision ties back to a small set of goals: ownership, ease of publishing, low cost, and long-term maintainability. The result is a system that stays out of the way when I want to write, while still reflecting how I think about building products.
I can evolve this architecture if my needs change. Until then, it does exactly what I need it to do.