
Automating My Personal Portfolio

Justin Ho
September 4th, 2020 · 6 min read

In the age of web-oriented technologies, and especially as a former web developer, I see website portfolios as a replacement for PDF resumes: they allow for more creativity and interactivity, building a stronger connection between the visitor and the site's owner.

How DevOps Came to Be

Development Operations (DevOps) is a software development philosophy that addresses the communication issues between software development teams and infrastructure teams, issues that are especially common in enterprise environments using the waterfall methodology.

Some common issues in enterprise corporations using the waterfall methodology include:

  • a prototype built on a developer's computer may not behave the same way in a production environment
  • the requirements for the production environment may not be clearly defined until the final product has been created
  • errors at any stage require long turnaround times due to stringent release cycles

The DevOps philosophy aims to solve some of these issues by encouraging proper communication and developing a closer relationship between the development teams and infrastructure teams.

Why I Want to Work in DevOps

After experiencing the miscommunications between developers and admins firsthand, I was left frustrated yet unsure about how I could tackle this issue. Fortunately, my interactions with other developers and system administrators led me to learn about DevOps and start my own research: I read various articles online as well as the critically acclaimed book The Phoenix Project, which helped me develop a better understanding of both the origins and applications of DevOps.

Furthermore, DevOps appeals to me because I studied human-computer interaction (HCI) in my undergraduate degree, and the two are similar in spirit: both strive to bridge the gap between people and the technology they use.

Now you might be thinking: if DevOps is a philosophy, how can you work in it? While I can't answer that question without getting into corporate and hiring culture, the current role of DevOps engineer entails implementing and maintaining the philosophy through a set of practices and software components: continuous integration (testing frequently), continuous delivery (deploying to production frequently), monitoring and logging, and many other practices that minimize the friction between a developer's local build on their own laptop and the final product hosted on a production server.

Implementing DevOps in My Portfolio

Although my portfolio is created, deployed and maintained by one person (myself), I wanted to make sure I follow best practices, as I believe in the DevOps philosophy's ability to ensure uniformity and immutability throughout the software development life cycle.

The Website

Portfolio Landing Page
Landing Page v1 after replacing the Novela theme with my own assets

I wasn’t overly picky with the website component of this portfolio, as the focus was simply to have a working site to display my thoughts and work. I settled on GatsbyJS because I had worked with React at my last workplace and Gatsby has a plethora of ready-to-use themes available. Browsing through the list of themes, I picked the Novela theme by Narative for its clean aesthetic and great out-of-the-box presets for typography, images, and even sourcing data from a headless CMS like Contentful.

The Development Pipeline

The heart of this portfolio lives in the development pipeline, with an aim to automate every step of the process after code is committed to the master branch. Below is an overview of what the process looks like after I commit any code changes to my remote git repository.

Portfolio Development Pipeline (Color)
Project pipeline that integrates CI/CD

Breaking it down in detail, here are some of the thoughts and considerations that went into the project pipeline:

Code Quality

As this site is written mostly in JavaScript, tools such as ESLint (a code linter) and Prettier (a code formatter) help standardize the code base and prevent merge conflicts between developers. Again, this may not be all that useful for this project since I'm the only developer, but it keeps me in the habit of using these widely adopted tools, as they will most likely be used at my next workplace. In addition, the linter and formatter can be enforced at commit time using pre-commit hooks (or a package like husky).

// package.json
{
  ...
  "scripts": {
    ...
    "lint:fix": "eslint --ignore-path .gitignore --fix ."
  },
  ...
  "husky": {
    "hooks": {
      "pre-commit": "npm run lint:fix"
    }
  }
}
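As a side note on Prettier: it reads its settings from a .prettierrc file at the project root. The options below are illustrative defaults, not necessarily the ones this site uses.

```json
// .prettierrc (illustrative settings)
{
  "semi": false,
  "singleQuote": true,
  "trailingComma": "es5",
  "printWidth": 80
}
```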

Commit Messages

Coming up with a good commit message is hard, almost as hard as coming up with a good variable name. However, the conventional-changelog team maintains a CLI tool that helps standardize commit messages by linting them against rules from projects such as Angular. I decided to use a more interactive CLI solution called commitizen, so I don't have to navigate to the Angular project's contributing guidelines to remember what the valid types are.

Commitizen CLI Options
The Commitizen CLI offers an interactive prompt that reminds you of the available options
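To make the convention concrete, a conventional commit header has the shape type(scope): subject. Below is a tiny sketch of the kind of check a commit-message linter performs; the type list follows the Angular convention, and the function name is my own invention, not part of commitizen or conventional-changelog.

```javascript
// Valid types under the Angular convention
const TYPES = [
  "feat", "fix", "docs", "style", "refactor",
  "perf", "test", "build", "ci", "chore", "revert",
];

// Matches e.g. "feat(blog): add pinned articles" or "fix: broken link"
const pattern = new RegExp(`^(${TYPES.join("|")})(\\([\\w-]+\\))?: .+`);

// Check only the header (first line) of a commit message
function isConventionalCommit(message) {
  return pattern.test(message.split("\n")[0]);
}

console.log(isConventionalCommit("feat(blog): add pinned articles")); // true
console.log(isConventionalCommit("fixed some stuff"));                // false
```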

Continuous Integration & Continuous Delivery

Commonly referred to as CI/CD, this part of the pipeline can be configured in many ways depending on project scope and team size. As you can imagine, my one-man project doesn't need a complicated approval process since I am involved at every step of the operation. However, my goals are automated, continuous testing as well as automatic deployment on every successful build of the master branch.

Git

A quick side note and one-line overview for anyone who has not used git: it is a version control system that stores references to the code base at points in time, called commits, so that developers have the luxury of reverting changes or working on multiple variations at once on copies known as branches. All of this is stored in a git repository (the project folder), either locally or on a hosted service such as GitHub.
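For the uninitiated, that whole overview fits in a handful of commands; the repository name, identity, and commit message below are arbitrary examples.

```shell
# Create a repository, record a commit, and branch off (illustrative names)
git init demo
git -C demo config user.email "you@example.com"
git -C demo config user.name "Your Name"
echo "hello" > demo/README.md
git -C demo add README.md
git -C demo commit -m "chore: initial commit"
# A branch is an independent line of work you can merge or discard later
git -C demo checkout -b feature/new-idea
git -C demo log --oneline
```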

Trunk Based Development

I chose trunk-based development for my version control process for its development velocity: short-lived branches keep me from committing everything directly onto master, as if I were working in a small team. Combined with my continuous integration server of choice, CircleCI, any branch pushed to the central git repository on GitHub triggers a webhook that tells CircleCI to start a job.

Continuous Integration

# .circleci/config.yml
...
jobs:
  test:
    docker:
      - image: cimg/node:lts
    steps:
      - checkout
      - node/install-npm
      - node/install-packages
      - run:
          name: Lint Files
          command: npm run lint:fix
      - persist_to_workspace:
          root: .
          paths:
            - .
  build:
    docker:
      - image: cimg/node:lts
    steps:
      - attach_workspace:
          at: .
      - restore_cache:
          keys:
            - gatsby-cache-{{ checksum "package-lock.json" }}
            - gatsby-cache-
      - run:
          name: Gatsby Build
          command: NODE_ENV=production npm run build
      - save_cache:
          key: gatsby-cache-{{ checksum "package-lock.json" }}
          paths:
            - public
            - .cache
      - run:
          name: Deploy to Netlify (Preview)
          command: ./node_modules/.bin/netlify deploy --dir=public
...
workflows:
  test_and_build:
    jobs:
      - test
      - build:
          context: Deploy Keys
          requires:
            - test
...

The snippets above are from the integration part of my CircleCI config. As mentioned earlier, every time a change is pushed to the remote, or a pull request is made, the webhook triggers the steps outlined in the config file. At the time of writing, no tests have been written, so the only thing that happens in the test job is linting (which should have been done on every commit anyway). The build job is where things get interesting.

For instance, the workspace is persisted from the previous job (test) in order to reuse the same npm packages, ensuring immutability and saving on download time and bandwidth. Next, a cache is shared between Gatsby builds to speed up build times (it will also serve Gatsby's new incremental build feature once it is fully released). Finally, once both the test and build jobs succeed, a preview is deployed to Netlify, my web host of choice.

If any step fails, I am notified via push notification and the pipeline stops, preventing failed builds from reaching preview or production.

Circle CI Workflow Page
A successful CI/CD workflow on Circle CI

Continuous Delivery

# .circleci/config.yml
...
jobs:
  ...
  deploy:
    docker:
      - image: cimg/node:lts
    steps:
      - attach_workspace:
          at: .
      - restore_cache:
          keys:
            - gatsby-cache-{{ checksum "package-lock.json" }}
            - gatsby-cache-
      - run:
          name: Deploy to Netlify (Production)
          command: ./node_modules/.bin/netlify deploy --prod --dir=public
      - run:
          name: Update Versioning
          command: npx semantic-release
workflows:
  test_and_build:
    jobs:
      ...
      - deploy:
          context: Deploy Keys
          requires:
            - test
            - build
          filters:
            branches:
              only:
                - master
Tightly coupled with the last step, each successful build is deployed to a preview site on Netlify, allowing continuous delivery of the production state on every change. The deploy job reuses the cache from the build job to ensure the exact same artifacts are deployed, saving on build time, and is triggered only for builds on the master branch.

Semantic Versioning

Finally, I decided to add an extra step to my deploy job to make use of the commit linting from earlier. Semantic versioning is a strategy that standardizes a project's version number so that the type of change between one version and the next is mutually understood. Without going into too much detail, semantic-release reads the standardized commit messages to determine the version bump prescribed by semantic versioning. So why do I need to version my portfolio? Because this is an open source project, I encourage others to fork (copy) my project, and this versioning practice may encourage better habits, or at least let them easily tell which version of the project they copied.
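To illustrate the rule semantic-release applies (this is a sketch of the rule, not its implementation), a fix commit bumps the patch number, a feat commit bumps the minor, and a breaking change bumps the major. The nextVersion helper below is a hypothetical name of my own.

```javascript
// Map a conventional commit to a semantic version bump (illustrative only)
function nextVersion(current, commitHeader, breaking = false) {
  const [major, minor, patch] = current.split(".").map(Number);
  if (breaking) return `${major + 1}.0.0`;                       // breaking change -> major
  if (commitHeader.startsWith("feat")) return `${major}.${minor + 1}.0`; // feature -> minor
  if (commitHeader.startsWith("fix")) return `${major}.${minor}.${patch + 1}`; // fix -> patch
  return current; // docs, chore, etc. trigger no release
}

console.log(nextVersion("1.2.3", "fix: broken link"));         // 1.2.4
console.log(nextVersion("1.2.3", "feat(blog): pinned posts")); // 1.3.0
console.log(nextVersion("1.2.3", "feat!: new theme", true));   // 2.0.0
```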

Learning Outcomes

So what did I learn from doing all of this? I learned that modern tools enable us to build pipelines quickly and efficiently, and that there really is no excuse not to integrate automated testing and deployments past the initial prototype stage; the sooner they are implemented, the more headaches and technical debt they save down the road. I learned how I can bring value to my future workplace by connecting the individual components of the integration and deployment pipeline through automation. Finally, I learned that innovation comes from breaking things, and that I should not be afraid to push new changes, but instead lean on tools such as a CI/CD pipeline that support me in pushing frequent changes.

Future Work

There are still a lot of improvements to be made to this portfolio. As I am only using the base Novela theme, there are quite a few tweaks I want to make in the future, such as changing the navigation bar and adding pinned articles to the landing page. In addition, a testing suite should be added to complete the continuous integration aspect of this project; for example, browser testing with Cypress can be a good way to find broken links or missing HTML elements.
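While a proper Cypress suite remains future work, the heart of a broken-link audit is simply collecting every href on a page. The sketch below is a dependency-free illustration of that first step (in practice, Cypress would handle visiting pages and requesting each link); extractLinks is a name I made up for this example, and the regex is deliberately naive.

```javascript
// Naively pull every href out of an HTML string; a real audit would then
// request each URL and flag non-200 responses.
function extractLinks(html) {
  const links = [];
  const re = /<a\s+[^>]*href="([^"]+)"/g;
  let match;
  while ((match = re.exec(html)) !== null) {
    links.push(match[1]);
  }
  return links;
}

const page = '<a href="/articles">Articles</a> <a class="nav" href="/about">About</a>';
console.log(extractLinks(page)); // [ '/articles', '/about' ]
```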
