+
+
\ No newline at end of file
diff --git a/blog/atom.xml b/blog/atom.xml
new file mode 100644
index 00000000..c00a36db
--- /dev/null
+++ b/blog/atom.xml
@@ -0,0 +1,458 @@
+
+
+ https://evantay.com/blog
+ Evan Tay Blog
+ 2021-09-10T00:00:00.000Z
+ https://github.com/jpmonette/feed
+
+ Evan Tay Blog
+ https://evantay.com/img/logo.png
+
+
+ https://evantay.com/blog/why-you-should-read-ddia
+
+ 2021-09-10T00:00:00.000Z
+
+ Picking up this book was one of the best decisions I made for my fledgling software engineering career. Its insights enabled me to make well-reasoned software design decisions, and confidently communicate them, in spite of my relative professional inexperience. Given how helpful it has been, I’m here today to share more about the impression it has left on me, and convince you that it is a must-read if you are a software engineer.
+
+
I kickstarted my engineering career back in January 2021, as a full stack engineer at Padlet. During the onboarding process, my (amazing) mentor, Brian, imparted a great deal of guidance to me. One of his tips was that I should take a look at Kleppmann's Designing Data-Intensive Applications. Thankfully, we had two copies of the book in the office, purchased by my (also amazing) boss, Shu Yang, who recommended that I read it too. I'm thankful I ended up taking their advice, because I gleaned so many insights from Kleppmann that happened to be highly applicable to the infrastructure and full stack projects I was developing.
+
+
"This book should be required reading for software engineers." - Kevin Scott, Chief Technology Officer at Microsoft
+
+
Like Brian, Shu Yang and Kevin, I now also believe all software engineers working on a distributed, cloud or data-intensive system will greatly benefit from reading the book. It provides the fundamental framework for thinking about these systems, and also the vocabulary to communicate such thoughts. Coupled together, these insights will empower you to make better design decisions and effectively convey them, even if you lack prior experience in the problem domain.
+
Kleppmann also compares the key fundamental ideas behind the broad range of popular data systems out there today, discussing their advantages, limitations and trade-offs rather than diving deep into the intricacies of each tool. This is ideal given that the book's objective is to help us choose the right tool for the right occasion, a decision for which these characteristics are most relevant.
+
If you lack the time (or will) to pore over the entire book, you should at least check out the opening chapter. In it, Kleppmann gives a comprehensive yet succinct overview of what I mentioned above, and provides a clear, detailed explanation of the three key principles in designing data-intensive system architecture: Reliability, Scalability and Maintainability. Just reading this first chapter alone was beneficial to me, as I was able to better understand and discuss architectural concerns with my team.
+
If you're still not convinced whether to invest your time into this book, you can check out a summary I've written for the first chapter, where I've condensed Kleppmann's opening discourse on Reliability, Scalability and Maintainability. I'm certain it'll provide a glimpse into the many lessons that Designing Data-Intensive Applications has to share, and if you do read the book, definitely let me know what you think!
]]>
+
+ Evan Tay
+ https://github.com/DigiPie
+
+
+
+
+
+
+
+ https://evantay.com/blog/docusaurus-posthog
+
+ 2021-06-26T00:00:00.000Z
+
+ I integrated PostHog analytics into this website today. I decided to do so after reading Gergely Orosz's Stats page on his blog The Pragmatic Engineer. He had installed Plausible analytics and made his analytics dashboard public. I thought that was cool and wanted to do the same.
+
I managed to find a Plausible plugin for Docusaurus v2, which is what this website runs on. But I decided to use PostHog instead. It also has a Docusaurus plugin and a public dashboard feature. I decided so because it is free and Plausible isn't.
+
Plausible only offers a free 30-day trial, and would cost at least $6 monthly subsequently. In comparison, PostHog has a free non-expiring tier which provides up to 1 million events every month. It also requires no credit card and is completely self-serve. There was no need to request and wait for a free license. It took me less than 10 minutes from signing up to getting the analytics running.
+
PostHog is open-source
If you are looking for an open-source solution you can self-host, PostHog may be it.
+
Despite what I have shared so far, I would still choose Plausible over PostHog if the former was free too. That's because Plausible's dashboard interface looks much better and has more visualization options. For comparison, you can view the PostHog dashboard for this site, and the Plausible dashboard for The Pragmatic Engineer.
+
If you want to integrate PostHog with your Docusaurus site, you can follow the guide below. The later steps are for deployment to GitHub Pages via GitHub Actions. But you can easily modify them for other platforms and deployment workflows. I will give some tips for doing that.
The example in the official PostHog guide for Docusaurus v2 Integration inserts the API key directly into the code (e.g. apiKey: "phc_fakekeyhHBZOuIq"). It is a bad idea to do so, especially if you host your code publicly (i.e. on a GitHub public repo). It is good practice to keep API keys secret and outside of application code instead. We will be using GitHub Encrypted Secrets to achieve that.
+
+
+
Add a repository secret to the GitHub repo hosting your site's code.
+
+
Settings > Secrets > New repository secret > Name: POSTHOG_API_KEY
+
If you are using another deployment platform
There should be a settings panel which allows you to specify environment variables or secrets to insert into your application deployments securely. Put your POSTHOG_API_KEY there. Skip the next step.
+
+
+
Open the GitHub Action workflow file responsible for deploying your site, and add the environment variable POSTHOG_API_KEY to the Docusaurus build step:
# Install and build Docusaurus website
- name: Build Docusaurus website
  run: |
    cd website
    npm install
    npm run build
  env:
    POSTHOG_API_KEY: ${{ secrets.POSTHOG_API_KEY }}
+
+
+
That's it! Once you deploy your new changes, the plugin will automatically start tracking pageviews, clicks and more. For more customisation options, you can check out the posthog-docusaurus plugin repo and the PostHog guide for Docusaurus v2 Integration. You can also check out the commit I made to integrate PostHog into this website.
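For reference, here is a minimal sketch of what the plugin entry in docusaurus.config.js could look like once it reads the key from the environment instead of hard-coding it. Treat the exact option names (apiKey, appUrl, enableInDevelopment) as assumptions to verify against the posthog-docusaurus README:

```javascript
// docusaurus.config.js (sketch; check option names against the plugin README)
module.exports = {
  plugins: [
    [
      "posthog-docusaurus",
      {
        // Read the key from the environment instead of hard-coding it
        apiKey: process.env.POSTHOG_API_KEY,
        appUrl: "https://app.posthog.com", // assumed default PostHog Cloud URL
        enableInDevelopment: false, // avoid tracking local dev sessions
      },
    ],
  ],
};
```

During the GitHub Actions build, POSTHOG_API_KEY is supplied by the env block shown in the step above, so the key never appears in the repo.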
This article was posted sometime back in early 2021 and may be outdated. Refer to SingSaver instead for updated information.
+
Disclaimer
This post is not sponsored, and does not constitute financial advice of any form (I bet you know better than me). Read at your own discretion.
+
Signing up on StashAway (a robo-advisor platform) back in the early 2020s was the first step in taking control of my financial health. Before that, 100% of my cash was just sitting in a POSB Savings account, growing at an incredibly pointless rate of 0.05% per annum. I might as well have kept the money under my mattress.
I was hesitant to invest for years because I used to think any form of investment was risky. Like most Singaporeans, I was kiasi. I was afraid of the unknown. What eventually changed my mind was chancing upon this article: The big problem (of) playing it too safe with money in our 20s by thewokesalaryman, and the following quote:
+
Quote of the Day
Ironically, by not taking any risks and letting all your money get eroded by certain inflation, you are actually doing the riskiest thing.
+
This quote was the wake-up call for me to start investing, and could be yours too if you are someone who is privileged enough to start doing the same (i.e. you have the financial ability to buy bubble tea at least once a week).
+
+
Photo by my bubble-tea buddy Elsie Lee
+Pictured (from left): Milksha's Fresh Milk, and Izumo Matcha Milk with Honey Pearls
+Is Milksha a buy? Milksha isn't cheap by any stretch but you can't put a price tag on happiness ⭐
+
Pro Tip from fellow bubble-tea enthusiast Freda
Buy Milksha coupons from Shopee beforehand to get massive discounts!
+
Anyhow, I reasoned the best way to kick-start my investment journey would be with a robo-advisor platform, given how beginner-friendly it is, how low the commission fees are, and how lazy I am (a robo-advisor offers passive, minimal-effort investing).
+
You can check out this page by dollarsandsense giving an introduction to robo-advisors for more reasons why you should or should not invest with a robo-advisor platform.
+
I ended up choosing StashAway as my platform of choice, and here are the top 3 reasons why!
StashAway is led by an "Expert Investment Team" (their words not mine) made up of the following co-founders:
+
+
Chief Executive Officer Michele Ferrario, a former CEO of Zalora Group and the co-founder of Rocket Internet
+
Chief Investment Officer Freddy Lim, a former Managing Director and Global Head of Derivatives Strategy at Nomura
+
Chief Technology Officer Nino Ulsamer, the co-founder and former CTO of a now-defunct (oops) software solution company for e-commerce analytics
+
+
These guys have real, solid credentials. They are far more experienced in investing than an amateur like me (surprise surprise), and this assured me that my investments in StashAway would be in well-informed, capable hands.
+
Credentials aside, they also have a proven track record of high returns for most of their portfolios. Despite how volatile and uncertain 2020 was, I achieved an impressive 19.59% time-weighted return for my portfolio of StashAway Risk Index 22%, between 3 February 2020 and 10 March 2021.
+
+
To find out how well StashAway's portfolios performed at other risk-levels, check out their article: Our Returns in 2020.
Another robo-advisor platform I was considering at the time was EndowUs. However, I ultimately went with StashAway because while EndowUs had a minimum investment amount of $10,000.00 (added cents for emphasis), StashAway had no minimum amount at all. This is still the case as of 11 March 2021.
+
As someone new to investing, the decision to invest $10,000 all at once was too intimidating for me to make. More importantly, I was still a student back then, one who did not have that many digits in his bank account. Therefore, StashAway was a natural choice for its low entry barrier!
Naturally, I was concerned about how safe using StashAway would be. More specifically, I was worried about losing my initial investment of $100 in StashAway, in the event of StashAway filing for bankruptcy. As it turned out, it was an unfounded concern given that:
+
Quote
Your money is kept entirely separate from StashAway's finances. To ensure that we never touch your money, we use custodian banks that hold your money, whether it's in cash or in securities.
In these custodian institutions, your assets are always in a segregated account, one that is separate from StashAway's operations and assets. This means that you will always have full access and claim to your assets no matter what happens to StashAway.
+
You can read more about StashAway's Frequently Asked Questions here. Do your own research hor!
I hope you found this post/rambling/thing (*gestures wildly at everything) insightful in any way. If you are still interested in StashAway, but not entirely convinced by me (I’ll try not to take it personally), you can check out this video by Kevin Learns Investing. All the best with getting financially fit!
+
Special thanks to Vanessa Tay for editing this!
]]>
+
+ Evan Tay
+ https://github.com/DigiPie
+
+
+
+
+
+
+ https://evantay.com/blog/docusaurus-gh-action
+
+ 2021-01-17T00:00:00.000Z
+
+ I got tired of deploying my Docusaurus website to GitHub Pages manually, and decided to do something about it using GitHub Action.
+
Initially, I was planning to follow the official guide on doing so. However, it was actually much more complicated than I liked. I did not really want to generate and store an SSH key on GitHub. Too much effort man.
+
I decided it was better off for me to write my own script. Here it is:
The script below assumes that your Docusaurus website resides at /website of your repo. If that is not the case for you, you will need to:
+
Change cd website to cd <docu_site_root>, or delete the entire line if your Docusaurus website is at the root of your repo /
+
Change build_dir's value from website/build to <docu_site_root>/build, or build if your Docusaurus website is at the root of your repo /
+
+
name: deploy-docusaurus

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - name: Check out repo
        uses: actions/checkout@v2

      # Node is required for npm
      - name: Set up Node
        uses: actions/setup-node@v2
        with:
          node-version: "12"

      # Install and build Docusaurus website
      - name: Build Docusaurus website
        run: |
          cd website
          npm install
          npm run build

      - name: Deploy to GitHub Pages
        if: success()
        uses: crazy-max/ghaction-github-pages@v2
        with:
          target_branch: gh-pages
          build_dir: website/build
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+
note
GitHub will automatically add GITHUB_TOKEN to Secrets. You need not do so. See this for more information.
I first bought my personal domain www.evantay.com way back in 2013. Younger me thought it would be cool and fun to run my own website, and it still is. 😎
+
To be honest, I can't even remember what my very first website looked like, but I am quite sure it was built using vanilla HTML and CSS, something unthinkable in this day and age.
+
Move to Grav CMS in 2017
+
Given that I was building everything from the ground up, maintainability became problematic. In 2017, older younger me decided it wasn't worth the effort to do it myself, and I started shopping for a framework to adopt. I eventually settled on using Grav, a flat-file Content Management System (CMS) powered by PHP. I chose it because I was familiar with, and interested in, PHP at the time.
+
To get started on building my second site quickly, I used a one-page Grav theme Ceevee.
+
+
However, I still wanted to add my personal touch to my portfolio website. This led me to heavily modifying the theme numerous times over the years, from 2017 up until 2020. It was a really fun journey while it lasted.
+
+
Move to Cloud in 2017
+
About the same time, I also moved from using a website host to DigitalOcean. I hosted my website on the cheapest droplet (VM) I could find (the 5 bucks one). I ran CentOS on the VM, and used an Nginx web server to serve the Grav website. Pretty old-school right?
+
Start of DigiDocs in 2019
+
In January 2019, I also started working on a separate pet project called DigiDocs, my personal documentation website. I was motivated to do so because I wanted to consolidate useful knowledge gained from attending university classes, and from my own self-directed learning. I kept the site updated up until October 2020.
+
Back then, DigiDocs lived at www.evantay.com/docs, and was served by the same Nginx server which served my main portfolio website.
+
Move to Docusaurus v2 in 2020
+
More recently, in October 2020, I decided to replace both documentation and portfolio site with a single Docusaurus v2 website (which you're looking at right now).
+
I did so because I wanted to:
+
+
Keep up with the latest technologies: I wanted to learn more about Docusaurus v2 and also ReactJS, which is what Docusaurus is built upon. It was about time to move on from PHP.
+
Reduce operating costs by moving from a dynamic to a static website 💸: Given that all of my content is static, it did not make sense that I was using a dynamic PHP site generator such as Grav. By moving to a static website, I will be able to host my site at a cheaper cost or even for free! I am planning to achieve the latter by using GitHub Pages.
+
+
So far, the experience of using Docusaurus v2 has been great! I am glad I started on this migration journey, despite feeling sentimental about my old portfolio and documentation websites.
]]>
+
+ Evan Tay
+ https://github.com/DigiPie
+
+
+
+
+
+ https://evantay.com/blog/stack-2020
+
+ 2020-12-03T00:00:00.000Z
+
+ STACK 2020 Developers Conference is GovTech’s flagship conference and the largest government-led developers event in Singapore that connects government, industry and the tech community.
+
In this post, you will find my key takeaways for some of the sessions I attended during the conference. I chose to attend these sessions based on my interests in Cloud and Microservices.
+
Day 1:
+
+
Opening note by Dr. Vivian Balakrishnan: Minister-in-charge of Smart Nation Programme Office and Chan Cheow Hoe: Government Chief Digital Technology Officer
Hunter shared how GovTech uses Infrastructure-as-Code (IaC) and Policy-as-Code (PaC) to accelerate cloud operations. Read #devops-cloud-and-back-again by HashiCorp CTO Mitchell Hashimoto for more information.
+
+
Establishing a Landing Zone
+
+
A repeatable configuration across multiple Cloud Service Providers (CSP)
+
The foundation of any cloud environment at scale
+
Serves as a consistent platform for governance, onboarding, networking and security
+
+
+
Accelerating deployment with Infrastructure-as-Code (IaC)
+
+
Code and configuration to provision cloud resources and Landing Zones
+
Develop common templates for consistency and familiarity
+
Get feedback on pain points faced by "customers" while using said templates
+
+
+
Consistent audit and reporting with Policy-as-Code (PaC)
+
+
Code and configuration to test and validate cloud infrastructure deployed by IaC
+
Reduce need for manual audits and speed up auditing process
+
+
+
+
Six Principles for Refactoring a Monolith to Microservices
"A Microservice application is structured as a set of loosely coupled services that can be independently deployed." - Chris Richardson
+
Chris shared 6 principles to follow when moving from a Monolith to a Microservice architecture.
+
Here are 5 of them 😉 :
+
+
Adopt Microservice architecture for the right reasons
+
+
Monolith architecture is not necessarily bad (an anti-pattern); it has valid use-cases
+
Adopt Microservice architecture because it allows you to deliver software rapidly, frequently and sustainably, with small, autonomous teams working on the different microservices
+
+
+
Migrate to Microservice architecture incrementally rather than via Big Bang rewrite
+
+
Strangler Application: Incrementally migrate functionality from existing Monolith application to new Strangler Microservice application
+
+
Extract existing code modules and database tables out into microservices
+
Implement new features as microservices
+
+
+
+
+
Consider Return-on-Investment when deciding which modules to convert into microservices
+
+
Consider benefit of decoupling said module
+
Consider cost of decoupling
+
+
How difficult it is to do so
+
How many inbound dependencies it has
+
+
If module A is depended upon by module B, consider extracting module B first
+
+
+
+
+
+
+
Define the service boundaries correctly
+
+
Avoid Runtime Coupling
+
+
Reduced availability due to reliance of a given microservice on another for serving a given request
+
+
For example, microservice A receives an API request which it can only respond to when its own API request to microservice B is answered first
+
+
+
Make microservices as self-contained as possible
+
+
Able to respond to a request without making follow-up requests to other microservices
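To see why runtime coupling hurts availability, here is a back-of-the-envelope sketch (my own illustration, not from the talk): if service A must call service B synchronously on every request, their availabilities multiply, so the coupled system is strictly less available than either service alone.

```javascript
// Toy illustration (my own, not from the talk): runtime coupling caps
// service A's effective availability by service B's availability.
const availabilityA = 0.99; // A alone
const availabilityB = 0.99; // B alone

// If A must wait on B for every request, both must be up at once,
// so the combined availability is the product of the two.
const coupled = availabilityA * availabilityB;
console.log(`A alone: ${availabilityA}, A coupled to B: ${coupled.toFixed(4)}`);
```

Two services at "two nines" each yield roughly 0.98 combined, which is why self-contained services fare better.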
When introducing non-technical folks to coding, keep in mind that the most important objective is to convey understanding of core programming concepts.
+
On that note, here are some tips Nikhil shared:
+
+
Forgo coding best practices if doing so makes it easier for the audience to understand
+
+
Reduce surface area to increase accessibility, even if it is not good coding practice
+
+
Collapse all HTML, JS and CSS into a single file so students can view everything in one place
+
Collapse all functions into one for the same reason as the previous line
+
Combine program lines if it makes things clearer
+
+
+
+
+
Use real-world examples to explain technical concepts
+
+
For example, use a restaurant scenario to explain async
+
+
+
Enable students to share their joy/achievement
+
+
Make deploying their application easy
+
+
Can use Netlify Drop: Simply drag and drop your HTML, JS, CSS files and Netlify Drop will do the rest for deployments
+
+
+
+
+
Practise empathy consciously throughout the teaching process to identify ways you can make learning easier for your students
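The restaurant analogy for async mentioned above could be sketched like this (my own toy example, not Nikhil's code): the waiter keeps taking orders while the kitchen cooks in the background.

```javascript
// Toy sketch of the restaurant analogy: the kitchen (an async task) cooks
// in the background while the waiter (the main flow) keeps working.
function cook(dish) {
  // Simulate the kitchen taking some time to prepare the dish
  return new Promise((resolve) => setTimeout(() => resolve(`${dish} ready`), 100));
}

async function service() {
  const order = cook("pasta"); // hand the order to the kitchen (don't wait yet)
  console.log("waiter takes the next order"); // keep serving other tables
  console.log(await order); // collect the dish once the kitchen is done
}

service();
```

Printing "waiter takes the next order" before "pasta ready" shows that the main flow carried on while the async task ran.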
"The Cloud Operating Model is a new approach for IT operations that organizations need to use to be successful with cloud adoption and thrive in an era of multi-cloud architecture." - Hashicorp
+
Run
- Static: Dedicated infrastructure to run the app on
- Dynamic: Scheduled automatically across a fleet (e.g. AWS Auto-Scaling Group, Hashicorp Nomad)

Connect
- Static: Host-based, static IP addressing
- Dynamic: Service-based, dynamic IP addresses due to dynamic provisioning

Secure
- Static: High-trust environment which is IP-address-based, with a clear network perimeter
- Dynamic: Low-trust environment with no clear perimeter, given multi-tenancy and the nature of the Cloud

Provision
- Static: Dedicated resources - physical servers, routers and switches
- Dynamic: Capacity on-demand - provision VMs, Containers and other managed services, or simply use Serverless services
+
+
Infrastructure operations on the Cloud must follow the Dynamic Cloud Operating Model fully
+
+
Pointless to provision infrastructure in minutes using Infrastructure-as-Code (IaC) if you still handle connectivity and security reviews manually via tickets
+
Have to adopt dynamic cloud operating tools for all 4 layers
+
+
+
Why use IaC for provisioning
+
+
Split Execution from Definition
+
Execution can be automated and carried out via API, UI or automatically through Continuous-Integration (CI) tools
+
+
+
How can teams use IaC
+
+
Operations team still necessary, focus on creating and improving blueprints and handling edge-cases when doing so
+
Development team creates infrastructure in a self-service manner, using the blueprints created by the Operations team
+Separating buzzwords from crucial tech
I got tired of deploying my Docusaurus website to GitHub Pages manually, and decided to do something about it using GitHub Action.
+
Initially, I was planning to follow the official guide on doing so. However, it was actually much more complicated than I liked. I did not really want to generate and store an SSH key on GitHub. Too much effort man.
+
I decided it was better off for me to write my own script. Here it is:
The script below assumes that your Docusaurus website resides at /website of your repo. If that is not the case for you, you will need to:
+
Change cd website to cd <docu_site_root>, or delete the entire line if your Docusaurus website is at the root of your repo /
+
Change build_dir's value from website/build to <docu_site_root>/build, or build if your Docusaurus website is at the root of your repo /
+
+
name: deploy-docusaurus

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - name: Check out repo
        uses: actions/checkout@v2

      # Node is required for npm
      - name: Set up Node
        uses: actions/setup-node@v2
        with:
          node-version: "12"

      # Install and build Docusaurus website
      - name: Build Docusaurus website
        run: |
          cd website
          npm install
          npm run build

      - name: Deploy to GitHub Pages
        if: success()
        uses: crazy-max/ghaction-github-pages@v2
        with:
          target_branch: gh-pages
          build_dir: website/build
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+
note
GitHub will automatically add GITHUB_TOKEN to Secrets. You need not do so. See this for more information.
I integrated PostHog analytics into this website today. I decided to do so after reading Gergely Orosz's Stats page on his blog The Pragmatic Engineer. He had installed Plausible analytics and made his analytics dashboard public. I thought that was cool and wanted to do the same.
+
I managed to find a Plausible plugin for Docusaurus v2, which is what this website runs on. But I decided to use PostHog instead. It also has a Docusaurus plugin and a public dashboard feature. I decided so because it is free and Plausible isn't.
+
Plausible only offers a free 30-day trial, and would cost at least $6 monthly subsequently. In comparison, PostHog has a free non-expiring tier which provides up to 1 million events every month. It also requires no credit card and is completely self-serve. There was no need to request and wait for a free license. It took me less than 10 minutes from signing up to getting the analytics running.
+
PostHog is open-source
If you are looking for an open-source solution you can self-host, PostHog may be it.
+
Despite what I have shared so far, I would still choose Plausible over PostHog if the former was free too. That's because Plausible's dashboard interface looks much better and has more visualization options. For comparison, you can view the PostHog dashboard for this site, and the Plausible dashboard for The Pragmatic Engineer.
+
If you want to integrate PostHog with your Docusaurus site, you can follow the guide below. The later steps are for deployment to GitHub Pages via GitHub Actions. But you can easily modify them for other platforms and deployment workflows. I will give some tips for doing that.
The example in the official PostHog guide for Docusaurus v2 Integration inserts the API key directly into the code (e.g. apiKey: "phc_fakekeyhHBZOuIq"). It is a bad idea to do so, especially if you host your code publicly (i.e. on a GitHub public repo). It is good practice to keep API keys secret and outside of application code instead. We will be using GitHub Encrypted Secrets to achieve that.
+
+
+
Add a repository secret to the GitHub repo hosting your site's code.
+
+
Settings > Secrets > New repository secret > Name: POSTHOG_API_KEY
+
If you are using another deployment platform
There should be a settings panel which allows you to specify environment variables or secrets to insert into your application deployments securely. Put your POSTHOG_API_KEY there. Skip the next step.
+
+
+
Open the GitHub Action workflow file responsible for deploying your site, and add the environment variable POSTHOG_API_KEY to the Docusaurus build step:
# Install and build Docusaurus website
- name: Build Docusaurus website
  run: |
    cd website
    npm install
    npm run build
  env:
    POSTHOG_API_KEY: ${{ secrets.POSTHOG_API_KEY }}
+
+
+
That's it! Once you deploy your new changes, the plugin will automatically start tracking pageviews, clicks and more. For more customisation options, you can check out the posthog-docusaurus plugin repo and the PostHog guide for Docusaurus v2 Integration. You can also check out the commit I made to integrate PostHog into this website.
I first bought my personal domain www.evantay.com way back in 2013. Younger me thought it would be cool and fun to run my own website, and it still is. 😎
+
To be honest, I can't even remember what my very first website looked like, but I am quite sure it was built using vanilla HTML and CSS, something unthinkable in this day and age.
+
Move to Grav CMS in 2017
+
Given that I was building everything from the ground up, maintainability became problematic. In 2017, older younger me decided it wasn't worth the effort to do it myself, and I started shopping for a framework to adopt. I eventually settled on using Grav, a flat-file Content Management System (CMS) powered by PHP. I chose it because I was familiar with, and interested in, PHP at the time.
+
To get started on building my second site quickly, I used a one-page Grav theme Ceevee.
+
+
However, I still wanted to add my personal touch to my portfolio website. This led me to heavily modifying the theme numerous times over the years, from 2017 up until 2020. It was a really fun journey while it lasted.
+
+
Move to Cloud in 2017
+
About the same time, I also moved from using a website host to DigitalOcean. I hosted my website on the cheapest droplet (VM) I could find (the 5 bucks one). I ran CentOS on the VM, and used an Nginx web server to serve the Grav website. Pretty old-school right?
+
Start of DigiDocs in 2019
+
In January 2019, I also started working on a separate pet project called DigiDocs, my personal documentation website. I was motivated to do so because I wanted to consolidate useful knowledge gained from attending university classes, and from my own self-directed learning. I kept the site updated up until October 2020.
+
Back then, DigiDocs lived at www.evantay.com/docs, and was served by the same Nginx server which served my main portfolio website.
+
Move to Docusaurus v2 in 2020
+
More recently, in October 2020, I decided to replace both documentation and portfolio site with a single Docusaurus v2 website (which you're looking at right now).
+
I did so because I wanted to:
+
+
Keep up with the latest technologies: I wanted to learn more about Docusaurus v2 and also ReactJS, which is what Docusaurus is built upon. It was about time to move on from PHP.
+
Reduce operating costs by moving from a dynamic to a static website 💸: Given that all of my content is static, it did not make sense that I was using a dynamic PHP site generator such as Grav. By moving to a static website, I will be able to host my site at a cheaper cost or even for free! I am planning to achieve the latter by using GitHub Pages.
+
+
So far, the experience of using Docusaurus v2 has been great! I am glad I started on this migration journey, despite feeling sentimental about my old portfolio and documentation websites.
+
+
\ No newline at end of file
diff --git a/blog/index.html b/blog/index.html
new file mode 100644
index 00000000..e949f551
--- /dev/null
+++ b/blog/index.html
@@ -0,0 +1,27 @@
+
+
+
+
+
+Blog | Evan Tay
+
+
+
+
+
+
+
+
+
+
+
Picking up this book was one of the best decisions I made for my fledgling software engineering career. Its insights enabled me to make well-reasoned software design decisions, and confidently communicate them, in spite of my relative professional inexperience. Given how helpful it has been, I’m here today to share more about the impression it has left on me, and convince you that it is a must-read if you are a software engineer.
I integrated PostHog analytics into this website today. I decided to do so after reading Gergely Orosz's Stats page on his blog The Pragmatic Engineer. He had installed Plausible analytics and made his analytics dashboard public. I thought that was cool and wanted to do the same.
+
I managed to find a Plausible plugin for Docusaurus v2, which is what this website runs on. But I decided to use PostHog instead. It also has a Docusaurus plugin and a public dashboard feature. I decided so because it is free and Plausible isn't.
This article was posted sometime back in early 2021 and may be outdated. Refer to SingSaver instead for updated information.
+
Disclaimer
This post is not sponsored, and does not constitute financial advice of any form (I bet you know better than me). Read at your own discretion.
+
Signing up on StashAway (a robo-advisor platform) back in the early 2020s was the first step in taking control of my financial health. Before that, 100% of my cash was just sitting in a POSB Savings account, growing at an incredibly pointless rate of 0.05% per annum. I might as well have kept the money under my mattress.
I got tired of deploying my Docusaurus website to GitHub Pages manually, and decided to do something about it using GitHub Action.
+
Initially, I was planning to follow the official guide on doing so. However, it was much more complicated than I liked. I did not really want to generate and store an SSH key on GitHub. Too much effort man.
+
I decided I was better off writing my own script. Here it is:
I first bought my personal domain www.evantay.com way back in 2013. Younger me thought it would be cool and fun to run my own website, and it still is. 😎
+
To be honest, I can't even remember what my very first website looked like, but I am quite sure it was built using vanilla HTML and CSS, something unthinkable in this day and age.
STACK 2020 Developers Conference is GovTech’s flagship conference and the largest government-led developers event in Singapore that connects government, industry and the tech community.
+
In this post, you will find my key takeaways for some of the sessions I attended during the conference. I chose to attend these sessions based on my interests in Cloud and Microservices.
+
+
\ No newline at end of file
diff --git a/blog/rss.xml b/blog/rss.xml
new file mode 100644
index 00000000..91c3db88
--- /dev/null
+++ b/blog/rss.xml
@@ -0,0 +1,436 @@
+
+
+
+ Evan Tay Blog
+ https://evantay.com/blog
+ Evan Tay Blog
+ Fri, 10 Sep 2021 00:00:00 GMT
+ https://validator.w3.org/feed/docs/rss2.html
+ https://github.com/jpmonette/feed
+ en
+
+
+ https://evantay.com/blog/why-you-should-read-ddia
+ https://evantay.com/blog/why-you-should-read-ddia
+ Fri, 10 Sep 2021 00:00:00 GMT
+
+ Picking up this book was one of the best decisions I made for my fledgling software engineering career. Its insights enabled me to make well-reasoned software design decisions, and confidently communicate them, in spite of my relative professional inexperience. Given how helpful it has been, I’m here today to share more about the impression it has left on me, and convince you that it is a must-read if you are a software engineer.
+
+
I kickstarted my engineering career back in January 2021, as a full stack engineer at Padlet. During the onboarding process, my (amazing) mentor, Brian, imparted a great deal of guidance to me. One of his tips was that I should take a look at Kleppmann’s Designing Data-Intensive Applications. Thankfully, we had two copies of the book in the office, purchased by my (also amazing) boss, Shu Yang, who recommended that I read it too. I’m thankful I ended up taking their advice, because I was able to glean so many insights from Kleppmann that happened to be highly applicable to the infrastructure and full stack projects I was developing.
+
+
"This book should be required reading for software engineers." - Kevin Scott, Chief Technology Officer at Microsoft
+
+
Like Brian, Shu Yang and Kevin, I now also believe all software engineers working on a distributed, cloud or data-intensive system will greatly benefit from reading the book. It provides the fundamental framework for thinking about these systems, and also the vocabulary to communicate such thoughts. Coupled together, these insights will empower you to make better design decisions and effectively convey them, even if you lack prior experience in the problem domain.
+
Kleppmann also compared the key fundamental ideas behind the broad range of popular data systems out there today by discussing their advantages, limitations and trade-offs, rather than diving deep into the intricacies of each tool. This was ideal given that the book's objective is to help us choose the right tool for the right occasion, a choice for which these characteristics matter most.
+
If you lack the time (or will) to pore over the entire book, you should at least check out the opening chapter. In it, Kleppmann gives a comprehensive yet succinct overview of what I mentioned above, and provides a clear, detailed explanation of the three key principles in designing data-intensive system architecture: Reliability, Scalability and Maintainability. Just reading this first chapter was beneficial to me, as it enabled me to better understand and discuss architectural concerns with my team.
+
If you're still not convinced whether to invest your time in this book, you can check out a summary I've written for the first chapter, where I’ve condensed Kleppmann’s opening discourse on Reliability, Scalability and Maintainability. I’m certain it’ll provide a glimpse into the many lessons that Designing Data-Intensive Applications has to share, and if you do read the book, definitely let me know what you think!
]]>
+ book-review
+ software-engineering
+ software-architecture
+
+
+
+ https://evantay.com/blog/docusaurus-posthog
+ https://evantay.com/blog/docusaurus-posthog
+ Sat, 26 Jun 2021 00:00:00 GMT
+
+ I integrated PostHog analytics into this website today. I decided to do so after reading Gergely Orosz's Stats page on his blog The Pragmatic Engineer. He had installed Plausible analytics and made his analytics dashboard public. I thought that was cool and wanted to do the same.
+
I managed to find a Plausible plugin for Docusaurus v2, which is what this website runs on. But I decided to use PostHog instead. It also has a Docusaurus plugin and a public dashboard feature, and I chose it because it is free while Plausible isn't.
+
Plausible only offers a free 30-day trial and costs at least $6 a month after that. In comparison, PostHog has a free non-expiring tier which provides up to 1 million events every month. It also requires no credit card and is completely self-serve. There was no need to request and wait for a free license. It took me less than 10 minutes from signing up to getting the analytics running.
+
PostHog is open-source
If you are looking for an open-source solution you can self-host, PostHog may be it.
+
Despite what I have shared so far, I would still choose Plausible over PostHog if the former were free too. That's because Plausible's dashboard interface looks much better and has more visualization options. For comparison, you can view the PostHog dashboard for this site, and the Plausible dashboard for The Pragmatic Engineer.
+
If you want to integrate PostHog with your Docusaurus site, you can follow the guide below. The later steps are for deployment to GitHub Pages via GitHub Actions. But you can easily modify them for other platforms and deployment workflows. I will give some tips for doing that.
The example in the official PostHog guide for Docusaurus v2 Integration inserts the API key directly into the code (e.g. apiKey: "phc_fakekeyhHBZOuIq"). It is a bad idea to do so, especially if you host your code publicly (e.g. in a public GitHub repo). It is good practice to keep API keys secret and outside of application code instead. We will be using GitHub Encrypted Secrets to achieve that.
+
+
+
Add a repository secret to the GitHub repo hosting your site's code.
+
+
Settings > Secrets > New repository secret > Name: POSTHOG_API_KEY
+
If you are using another deployment platform
There should be a settings panel which allows you to specify environment variables or secrets to insert into your application deployments securely. Put your POSTHOG_API_KEY there. Skip the next step.
+
+
+
Open the GitHub Action workflow file responsible for deploying your site, and add the environment variable POSTHOG_API_KEY to the Docusaurus build step:
# Install and build Docusaurus website
- name: Build Docusaurus website
  run: |
    cd website
    npm install
    npm run build
  env:
    POSTHOG_API_KEY: ${{ secrets.POSTHOG_API_KEY }}
+
+
+
That's it! Once you deploy your new changes, the plugin will automatically start tracking pageviews, clicks and more. For more customisation options, you can check out the posthog-docusaurus plugin repo and the PostHog guide for Docusaurus v2 Integration. You can also check out the commit I made to integrate PostHog into this website.
This article was posted sometime back in early 2021 and may be outdated. Refer to SingSaver instead for updated information.
+
Disclaimer
This post is not sponsored, and does not constitute financial advice of any form (I bet you know better than me). Read at your own discretion.
+
Signing up on StashAway (a robo-advisor platform) back in the early 2020s was the first step in taking control of my financial health. Before that, 100% of my cash was just sitting in a POSB Savings account, growing at an incredibly pointless rate of 0.05% per annum. I might as well have kept the money under my mattress.
I was hesitant to invest for years because I used to think any form of investment was risky. Like some Singaporeans, I was kiasi. I was afraid of the unknown. What eventually changed my mind was chancing upon this article: The big problem (of) playing it too safe with money in our 20s by thewokesalaryman, and the following quote:
+
Quote of the Day
Ironically, by not taking any risks and letting all your money get eroded by certain inflation, you are actually doing the riskiest thing.
+
This quote was the wake-up call for me to start investing, and could be yours too if you are someone who is privileged enough to start doing the same (i.e. you have the financial ability to buy bubble tea at least once a week).
+
+
Photo by my bubble-tea buddy Elsie Lee
+Pictured (from left): Milksha's Fresh Milk, and Izumo Matcha Milk with Honey Pearls
+Is Milksha a buy? Milksha isn't cheap by any stretch but you can't put a price tag on happiness ⭐
+
Pro Tip from fellow bubble-tea enthusiast Freda
Buy Milksha coupons from Shopee beforehand to get massive discounts!
+
Anyhow, I reasoned the best way to kick-start my investment journey would be with a robo-advisor platform, given how beginner-friendly it is, how low the commission fees are, and how lazy I am, considering the passive, minimal-effort investing a robo-advisor offers.
+
You can check out this page by dollarsandsense giving an introduction to robo-advisors for more reasons why you should or should not invest with a robo-advisor platform.
+
I ended up choosing StashAway as my platform of choice, and here are the top 3 reasons why!
StashAway is led by an "Expert Investment Team" (their words not mine) made up of the following co-founders:
+
+
Chief Executive Officer Michele Ferrario, a former CEO of Zalora Group and the co-founder of Rocket Internet
+
Chief Investment Officer Freddy Lim, a former Managing Director and Global Head of Derivatives Strategy at Nomura
+
Chief Technology Officer Nino Ulsamer, the co-founder and former CTO of a now-defunct (oops) software solution company for e-commerce analytics
+
+
These guys have real, solid credentials. They are far more experienced in investing than an amateur like me (surprise surprise), and this assured me that my investments in StashAway would be in well-informed, capable hands.
+
Credentials aside, they also have a proven track record of high returns for most of their portfolios. Despite how volatile and uncertain 2020 was, I achieved an impressive 19.59% time-weighted return for my portfolio of StashAway Risk Index 22% between 3 February 2020 and 10 March 2021.
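For the curious, a time-weighted return strips out the effect of deposits and withdrawals by chaining the growth factor of each sub-period between cash flows. A toy sketch of the arithmetic (figures invented):

```javascript
// Toy time-weighted return (TWR): multiply each sub-period's growth
// factor, then subtract 1. Deposit/withdrawal timing doesn't distort it.
function timeWeightedReturn(subPeriodReturns) {
  const growth = subPeriodReturns.reduce((acc, r) => acc * (1 + r), 1);
  return growth - 1;
}

// Three sub-periods of +10%, -5% and +8% chain to roughly +12.9%:
// (1.10 * 0.95 * 1.08) - 1 ≈ 0.1286
```

This is why TWR is the fairer yardstick for a platform's performance: it measures the portfolio's growth, not the timing of your own top-ups.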
+
+
To find out how well StashAway's portfolios performed at other risk-levels, check out their article: Our Returns in 2020.
Another robo-advisor platform I was considering at the time was EndowUs. However, I ultimately went with StashAway because while EndowUs had a minimum investment amount of $10,000.00 (added cents for emphasis), StashAway had no minimum amount at all. This is still the case as of 11 March 2021.
+
As someone new to investing, the decision to invest $10,000 all at once was too intimidating for me to make. More importantly, I was still a student back then, one who did not have that many digits in his bank account. Therefore, StashAway was a natural choice for its low entry barrier!
Naturally, I was concerned about how safe using StashAway would be. More specifically, I was worried about losing my initial investment of $100 in StashAway in the event of StashAway filing for bankruptcy. As it turned out, it was an unfounded concern, given that:
+
Quote
Your money is kept entirely separate from StashAway's finances. To ensure that we never touch your money, we use custodian banks that hold your money, whether it's in cash or in securities.
In these custodian institutions, your assets are always in a segregated account, one that is separate from StashAway's operations and assets. This means that you will always have full access and claim to your assets no matter what happens to StashAway.
+
You can read more about StashAway's Frequently Asked Questions here. Do your own research hor!
I hope you found this post/rambling/thing (*gestures wildly at everything) insightful in any way. If you are still interested in StashAway, but not entirely convinced by me (I’ll try not to take it personally), you can check out this video by Kevin Learns Investing. All the best with getting financially fit!
+
Special thanks to Vanessa Tay for editing this!
]]>
+ investing
+ sharing
+
+
+
+ https://evantay.com/blog/docusaurus-gh-action
+ https://evantay.com/blog/docusaurus-gh-action
+ Sun, 17 Jan 2021 00:00:00 GMT
+
+ I got tired of deploying my Docusaurus website to GitHub Pages manually, and decided to do something about it using GitHub Action.
+
Initially, I was planning to follow the official guide on doing so. However, it was much more complicated than I liked. I did not really want to generate and store an SSH key on GitHub. Too much effort man.
+
I decided I was better off writing my own script. Here it is:
The script below assumes that your Docusaurus website resides at /website of your repo. If that is not the case for you, you will need to:
+
Change cd website to cd <docu_site_root>, or delete the entire line if your Docusaurus website is at the root of your repo /
+
Change build_dir's value from website/build to <docu_site_root>/build, or build if your Docusaurus website is at the root of your repo /
+
+
name: deploy-docusaurus
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - name: Check out repo
        uses: actions/checkout@v2
      # Node is required for npm
      - name: Set up Node
        uses: actions/setup-node@v2
        with:
          node-version: "12"
      # Install and build Docusaurus website
      - name: Build Docusaurus website
        run: |
          cd website
          npm install
          npm run build
      - name: Deploy to GitHub Pages
        if: success()
        uses: crazy-max/ghaction-github-pages@v2
        with:
          target_branch: gh-pages
          build_dir: website/build
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+
note
GitHub will automatically add GITHUB_TOKEN to Secrets. You need not do so. See this for more information.
]]>
+ docusaurus
+ github-action
+ ci
+
+
+
+ https://evantay.com/blog/history
+ https://evantay.com/blog/history
+ Mon, 21 Dec 2020 00:00:00 GMT
+
+ Established in 2013
+
I first bought my personal domain www.evantay.com way back in 2013. Younger me thought it would be cool and fun to run my own website, and it still is. 😎
+
To be honest, I can't even remember what my very first website looked like, but I am quite sure it was built using vanilla HTML and CSS, something unthinkable in this day and age.
+
Move to Grav CMS in 2017
+
Given that I was building everything from the ground up, maintainability became problematic. In 2017, older younger me decided it wasn't worth the effort to do it myself, and I started shopping for a framework to adopt. I eventually settled on Grav, a flat-file Content Management System (CMS) powered by PHP. I chose it because I was familiar with, and interested in, PHP at the time.
+
To get started on building my second site quickly, I used Ceevee, a one-page Grav theme.
+
+
However, I still wanted to add my personal touch to my portfolio website. This led me to heavily modifying the theme numerous times over the years, from 2017 up until 2020. It was a really fun journey while it lasted.
+
+
Move to Cloud in 2017
+
Around the same time, I also moved from a traditional web host to DigitalOcean. I hosted my website on the cheapest droplet (VM) I could find (the 5 bucks one). I ran CentOS on the VM, and used an Nginx web server to serve the Grav website. Pretty old-school, right?
+
Start of DigiDocs in 2019
+
Last year, during January 2019, I also started working on a separate pet project called DigiDocs, my personal documentation website. I was motivated to do so because I wanted to consolidate useful knowledge gained from attending university classes, and my own self-directed learning. I kept the site updated up until October 2020.
+
Back then, DigiDocs lived at www.evantay.com/docs, and was served by the same Nginx server which served my main portfolio website.
+
Move to Docusaurus v2 in 2020
+
More recently, in October 2020, I decided to replace both my documentation and portfolio sites with a single Docusaurus v2 website (which you're looking at right now).
+
I did so because I wanted to:
+
+
Keep up with the latest technologies: I wanted to learn more about Docusaurus v2 and also ReactJS, which is what Docusaurus is built upon. It was about time to move on from PHP.
+
Reduce operating costs by moving from a dynamic to a static website 💸: Given that all of my content is static, it did not make sense that I was using a dynamic PHP site generator such as Grav. By moving to a static website, I will be able to host my site more cheaply or even for free! I am planning to achieve the latter by using GitHub Pages.
+
+
So far, the experience of using Docusaurus v2 has been great! I am glad I started on this migration journey, despite feeling sentimental about my old portfolio and documentation websites.
]]>
+ sharing
+
+
+
+ https://evantay.com/blog/stack-2020
+ https://evantay.com/blog/stack-2020
+ Thu, 03 Dec 2020 00:00:00 GMT
+
+ STACK 2020 Developers Conference is GovTech’s flagship conference and the largest government-led developers event in Singapore that connects government, industry and the tech community.
+
In this post, you will find my key takeaways for some of the sessions I attended during the conference. I chose to attend these sessions based on my interests in Cloud and Microservices.
+
Day 1:
+
+
Opening note by Dr. Vivian Balakrishnan: Minister-in-charge of Smart Nation Programme Office and Chan Cheow Hoe: Government Chief Digital Technology Officer
Hunter shared how GovTech uses Infrastructure-as-Code (IaC) and Policy-as-Code (PaC) to accelerate cloud operations. Read #devops-cloud-and-back-again by HashiCorp CTO Mitchell for more information.
+
+
Establishing a Landing Zone
+
+
A repeatable configuration across multiple Cloud Service Providers (CSP)
+
The foundation of any cloud environment at scale
+
Serves as a consistent platform for governance, onboarding, networking and security
+
+
+
Accelerating deployment with Infrastructure-as-Code (IaC)
+
+
Code and configuration to provision cloud resources and Landing Zones
+
Develop common templates for consistency and familiarity
+
Get feedback on pain points faced by "customers" while using said templates
+
+
+
Consistent audit and reporting with Policy-as-Code (PaC)
+
+
Code and configuration to test and validate cloud infrastructure deployed by IaC
+
Reduce need for manual audits and speed up auditing process
+
+
+
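The PaC idea above can be made concrete with a toy audit, where the policy check itself is just code run against the declared resources. Everything here (the resource shape and the rule names) is invented for illustration:

```javascript
// Toy Policy-as-Code audit: codified rules are applied to declared cloud
// resources automatically, replacing a manual review. The resource shape
// and policies below are invented for illustration.
const policies = [
  {
    name: "buckets-must-not-be-public",
    violates: (r) => r.type === "bucket" && r.public === true,
  },
  {
    name: "instances-must-have-an-owner-tag",
    violates: (r) => r.type === "instance" && !(r.tags && r.tags.owner),
  },
];

function audit(resources) {
  return resources.flatMap((r) =>
    policies.filter((p) => p.violates(r)).map((p) => `${r.id}: ${p.name}`)
  );
}

const findings = audit([
  { id: "logs", type: "bucket", public: false }, // compliant
  { id: "web-1", type: "instance", tags: {} },   // missing owner tag
]);
// findings → ["web-1: instances-must-have-an-owner-tag"]
```

Because the rules are code, they run on every deployment rather than on whatever cadence a manual audit allows.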
+
Six Principles for Refactoring a Monolith to Microservices
"A Microservice application is structured as a set of loosely coupled services that can be independently deployed." - Chris Richardson
+
Chris shared 6 principles to follow when moving from a Monolith to a Microservice architecture.
+
Here are 5 of them 😉 :
+
+
Adopt Microservice architecture for the right reasons
+
+
Monolith architecture is not necessarily bad (an anti-pattern); it has its valid use-cases
+
Adopt Microservice architecture because it allows you to deliver software rapidly, frequently and sustainably, with small, autonomous teams working on the different microservices
+
+
+
Migrate to Microservice architecture incrementally rather than via Big Bang rewrite
+
+
Strangler Application: Incrementally migrate functionality from existing Monolith application to new Strangler Microservice application
+
+
Extract existing code modules and database tables out into microservices
+
Implement new features as microservices
+
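The Strangler approach above can be sketched as a thin routing layer: requests for already-migrated paths go to the new service, while everything else still hits the monolith. The path prefixes here are invented for illustration:

```javascript
// Toy Strangler routing: as functionality migrates out of the monolith,
// its path prefixes are added here and traffic follows. Prefixes are
// invented for illustration.
const migratedPrefixes = ["/billing", "/notifications"];

function routeFor(path) {
  return migratedPrefixes.some((prefix) => path.startsWith(prefix))
    ? "microservice"
    : "monolith";
}

// routeFor("/billing/invoices") → "microservice"
// routeFor("/profile")          → "monolith"
```

The prefix list grows one module at a time, which is exactly what makes the migration incremental rather than a Big Bang rewrite.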
+
+
+
+
Consider Return-on-Investment when deciding which modules to convert into microservices
+
+
Consider benefit of decoupling said module
+
Consider cost of decoupling
+
+
How difficult it is to do so
+
How many inbound dependencies it has
+
+
If module A is depended upon by module B, consider extracting module B first
+
+
+
+
+
+
+
Define the service boundaries correctly
+
+
Avoid Runtime Coupling
+
+
Reduced availability due to reliance of a given microservice on another for serving a given request
+
+
For example, microservice A receives an API request which it can only respond to when its own API request to microservice B is answered first
+
+
+
Make microservices as self-contained as possible
+
+
Able to respond to a request without making follow-up requests to other microservices
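A quick way to see the cost of runtime coupling: if service A can only answer after a synchronous call to service B, both must be up for a request to succeed, so their availabilities roughly multiply (figures invented):

```javascript
// If A's requests synchronously depend on B, their availabilities compound.
const availabilityA = 0.999;
const availabilityB = 0.999;

// Coupled: A and B must both be up to serve a request.
const coupled = availabilityA * availabilityB; // ≈ 0.998

// Self-contained: A answers from its own data; only A must be up.
const selfContained = availabilityA; // 0.999
```

Each additional synchronous dependency in the request path multiplies in another factor, which is why self-contained services are more available.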
When introducing non-technical folks to coding, keep in mind that the most important objective is to convey understanding of core programming concepts.
+
On that note, here are some tips Nikhil shared:
+
+
Forgo coding best practices if doing so makes it easier for the audience to understand
+
+
Reduce surface area to increase accessibility, even if it is not good coding practice
+
+
Collapse all HTML, JS and CSS into a single file so students can view everything in one place
+
Collapse all functions into one for the same reason as the previous line
+
Combine program lines if it makes things clearer
+
+
+
+
+
Use real-world examples to explain technical concepts
+
+
For example, use a restaurant scenario to explain async
+
+
+
Enable students to share their joy/achievement
+
+
Make deploying their application easy
+
+
Can use Netlify Drop: Simply drag and drop your HTML, JS, CSS files and Netlify Drop will do the rest for deployments
+
+
+
+
+
Practise empathy consciously throughout the teaching process to identify ways you can make learning easier for your students
"The Cloud Operating Model is a new approach for IT operations that organizations need to use to be successful with cloud adoption and thrive in an era of multi-cloud architecture." - Hashicorp
+
| | Static | Dynamic |
|---|---|---|
| Run | Dedicated infrastructure to run app on | Scheduled automatically across a fleet (e.g. AWS Auto-Scaling Group, HashiCorp Nomad) |
| Connect | Host-based, static IP-addressing | Service-based, dynamic IP addresses due to dynamic provisioning |
| Secure | High-trust environment which is IP-address-based, with clear network perimeter | Low-trust environment with no clear perimeter, given multi-tenancy and the nature of the Cloud |
| Provision | Dedicated resources: physical servers, routers and switches | Capacity on-demand: provision VMs, containers and other managed services, or simply use serverless services |
+
+
Infrastructure operations on the Cloud must follow the Dynamic Cloud Operating Model fully
+
+
It is pointless if you provision infrastructure in minutes using Infrastructure-as-Code (IaC) but still handle connectivity and security reviews manually using tickets
+
Have to adopt dynamic cloud operating tools for all 4 layers
+
+
+
Why use IaC for provisioning
+
+
Split Execution from Definition
+
Execution can be automated and carried out via API, UI or automatically through Continuous-Integration (CI) tools
+
+
+
How can teams use IaC
+
+
Operations team is still necessary; it focuses on creating and improving blueprints, and handling edge cases when doing so
+
Development team creates infrastructure in a self-service manner, using the blueprints created by the Operations team
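The "split execution from definition" point can be sketched like this: infrastructure is declared as plain data, and a separate engine diffs that definition against what already exists and provisions the rest, so the same definition can be applied from CI, a CLI or a UI. The resource shapes are invented for illustration:

```javascript
// Toy IaC split: the definition is plain data; the engine computes what
// still needs creating. Resource shapes are invented for illustration.
const definition = [
  { type: "vm", name: "web", size: "small" },
  { type: "bucket", name: "assets" },
];

// Diff the desired definition against currently existing resources.
function plan(desired, existing) {
  const have = new Set(existing.map((r) => r.name));
  return desired.filter((r) => !have.has(r.name));
}

// First run: nothing exists yet, so both resources are planned.
// After "web" exists, only "assets" remains to create.
```

Because execution is mechanical, it can be automated by CI tools while humans only review changes to the definition.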
STACK 2020 Developers Conference is GovTech’s flagship conference and the largest government-led developers event in Singapore that connects government, industry and the tech community.
+
In this post, you will find my key takeaways for some of the sessions I attended during the conference. I chose to attend these sessions based on my interests in Cloud and Microservices.
+
Day 1:
+
+
Opening note by Dr. Vivian Balakrishnan: Minister-in-charge of Smart Nation Programme Office and Chan Cheow Hoe: Government Chief Digital Technology Officer
Hunter shared about how GovTech uses Infrastructure-as-Code (IaC) and Policy-as-Code (PaC) to accelerate cloud operations. Read #devops-cloud-and-back-again by Hashicorp CTO Mitchell for more information.
+
+
Establishing a Landing Zone
+
+
A repeatable configuration across multiple Cloud Service Providers (CSP)
+
The foundation of any cloud environment at scale
+
Serves as a consistent platform for governance, onboarding, networking and security
+
+
+
Accelerating deployment with Infrastructure-as-Code (IaC)
+
+
Code and configuration to provision cloud resources and Landing Zones
+
Develop common templates for consistency and familiarity
+
Get feedback on pain points faced by "customers" while using said templates
+
+
+
Consistent audit and reporting with Policy-as-Code (PaC)
+
+
Code and configuration to test and validate cloud infrastructure deployed by IaC
+
Reduce need for manual audits and speed up auditing process
+
+
+
+
Six Principles for Refactoring a Monolith to Microservices
"A Microservice application is structured as a set of loosely coupled services that can be independently deployed." - Chris Richardson
+
Chris shared 6 principles to follow when moving from a Monolith to a Microservice architecture.
+
Here are 5 of them 😉 :
+
+
Adopt Microservice architecture for the right reasons
+
+
Monolith architecture is not necessary bad (an anti-pattern), it has its valid use-cases
+
Adopt Microservice architecture because it allows you to deliver software rapidly, frequently and sustainability, with small, autonomous teams working on the different microservices
+
+
+
Migrate to Microservice architecture incrementally rather than via Big Bang rewrite
+
+
Strangler Application: Incrementally migrate functionality from existing Monolith application to new Strangler Microservice application
+
+
Extract existing code modules and database tables out into microservices
+
Implement new features as microservices
+
+
+
+
+
Consider Return-on-Investment when deciding which modules to convert into microservices
+
+
Consider benefit of decoupling said module
+
Consider cost of decoupling
+
+
How difficult it is to do so
+
How many inbound dependencies it has
+
+
If module A is depended upon by module B, consider extracting module B first
+
+
+
+
+
+
+
Define the service boundaries correctly
+
+
Avoid Runtime Coupling
+
+
Reduced availability due to reliance of a given microservice on another for serving a given request
+
+
For example, microservice A receives an API request which it can only respond to when its own API request to microservice B is answered first
+
+
+
Make microservices as self-contained as possible
+
+
Able to respond to a request without making follow-up requests to other microservices
When introducing non-technical folks to coding, keep in mind that the most important objective is to convey understanding of core programming concepts.
+
On that note, here are some tips Nikhil shared:
+
+
Forgo coding best practices if doing so make it easier for the audience to understand
+
+
Reduce surface area to increase accessibility, even if it is not good coding practice
+
+
Collapse all HTML, JS and CSS into a single file so students can view everything in one place
+
Collapse all functions into one for the same reason as the previous line
+
Combine program lines if it makes things clearer
+
+
+
+
+
Use real-world examples to explain technical concepts
+
+
For example, use a restaurant scenario to explain async
+
+
+
Enable students in sharing their joy/achievement
+
+
Make deploying their application easy
+
+
Can use Netlify Drop: Simply drag and drop your HTML, JS, CSS files and Netlify Drop will do the rest for deployments
+
+
+
+
+
Practise empathy consciously throughout teaching process to identify ways you can make learning easier for your students
"The Cloud Operating Model is a new approach for IT operations that organizations need to use to be successful with cloud adoption and thrive in an era of multi-cloud architecture." - Hashicorp
+
Static
Dynamic
Run
Dedicated infrastructure to run app on
Scheduled automatically across a fleet (e.g. AWS Auto-Scaling Group, Hashicorp Nomad)
Connect
Host-based, static IP-addressing
Service-based, dynamic IP addresses due to dynamic provisioning
Secure
High-trust environment which is IP-address-based, with clear network perimeter
Low-trust environment with no clear perimeter given multi-tenancy and nature of the Cloud
Provision
Dedicated resources - Physical servers, routers and switches
Capacity on-demand, provision VMs, Containers and other managed services, or simply use Serverless services
+
+
Infrastructure operations on the Cloud must follow the Dynamic Cloud Operating Model fully
+
+
Pointless if you provision infrastructure in minutes using Infrastructure-as-Code (IaC), but still manually handle connectivity and security review using tickets
+
Have to adopt dynamic cloud operating tools for all 4 layers
+
+
+
Why use IaC for provisioning
+
+
Split Execution from Definition
+
Execution can be automated and carried out via API, UI or automatically through Continuous-Integration (CI) tools
+
+
+
How can teams use IaC
+
+
Operations team still necessary, focus on creating and improving blueprints and handling edge-cases when doing so
+
Development team creates infrastructure in a self-service manner, using the blueprints created by the Operations team
+separating buzz words from crucial tech
This article was posted sometime back in early 2021 and may be outdated. Refer to SingSaver instead for updated information.
+
Disclaimer
This post is not sponsored, and does not constitute financial advice of any form (I bet you know better than me). Read at your own discretion.
+
Signing up on StashAway (a robo-advisor platform) back in the early 2020s was the first step in taking control of my financial health. Before that, 100% of my cash was just sitting in a POSB Savings account, growing at an incredibly pointless rate of 0.05% per annum. I might as well have kept the money under my mattress.
I was hesitant to invest for years because I used to think any form of investment was risky. Like most some Singaporeans, I am was kiasi. I was afraid of the unknown. What eventually changed my mind was chancing upon this article: The big problem (of) playing it too safe with money in our 20s by thewokesalaryman, and the following quote:
+
Quote of the Day
Ironically, by not taking any risks and letting all your money get eroded by certain inflation, you are actually doing the riskiest thing.
+
This quote was the wake-up call for me to start investing, and could be yours too if you are someone who is privileged enough to start doing the same (i.e. you have the financial ability to buy bubble tea at least once a week).
+
+
Photo by my bubble-tea buddy Elsie Lee
+Pictured (from left): Milksha's Fresh Milk, and Izumo Matcha Milk with Honey Pearls
+Is Milksha a buy? Milksha isn't cheap by any stretch but you can't put a price tag on happiness ⭐
+
Pro Tip from fellow bubble-tea enthusiast Freda
Buy Milksha coupons from Shopee beforehand to get massive discounts!
+
Anyhow, I reasoned the best way to kick-start my investment journey would be with a robo-advisor platform, given how beginner-friendly it is, how low the commission fees are, and how lazy I am given the passive and minimal-effort investment a robo-advisor offers.
+
You can check out this page by dollarsandsense giving an introduction to robo-advisors for more reasons why you should or should not invest with a robo-advisor platform.
+
I ended up choosing StashAway as my platform of choice, and here are the top 3 reasons why!
StashAway is led by an "Expert Investment Team" (their words not mine) made up of the following co-founders:
+
+
Chief Executive Officer Michele Ferrario, a former CEO of Zalora Group and the co-founder of Rocket Internet
+
Chief Investment Officer Freddy Lim, a former Managing Director and Global Head of Derivatives Strategy at Nomura
+
Chief Technology Officer Nino Ulsamer, the co-founder and former CTO of a now-defunct (oops) software solution company for e-commerce analytics
+
+
These guys have real, solid credentials. They are far more experienced in investing than an amateur like me (surprise surprise), and this assured me that my investments in StashAway would be handled by well-informed and secure hands.
+
Credentials aside, they also have a proven track record of high returns for most of their portfolios. Despite how volatile and uncertain 2020 was, I achieved an impressive 19.59% time-weighted return on my StashAway Risk Index 22% portfolio between 3 February 2020 and 10 March 2021.
+
+
To find out how well StashAway's portfolios performed at other risk-levels, check out their article: Our Returns in 2020.
Another robo-advisor platform I was considering at the time was EndowUs. However, I ultimately went with StashAway because while EndowUs had a minimum investment amount of $10,000.00 (added cents for emphasis), StashAway had no minimum amount at all. This is still the case as of 11 March 2021.
+
As someone new to investing, the decision to invest $10,000 all at once was too intimidating for me to make. More importantly, I was still a student back then, one who did not have that many digits in his bank account. Therefore, StashAway was a natural choice for its low entry barrier!
Naturally, I was concerned about how safe using StashAway would be. More specifically, I was worried about losing my initial investment of $100 in the event of StashAway filing for bankruptcy. As it turned out, this concern was unfounded, given that:
+
Quote
Your money is kept entirely separate from StashAway's finances. To ensure that we never touch your money, we use custodian banks that hold your money, whether it's in cash or in securities.
In these custodian institutions, your assets are always in a segregated account, one that is separate from StashAway's operations and assets. This means that you will always have full access and claim to your assets no matter what happens to StashAway.
+
You can read more about StashAway's Frequently Asked Questions here. Do your own research hor!
I hope you found this post/rambling/thing (*gestures wildly at everything) insightful in any way. If you are still interested in StashAway, but not entirely convinced by me (I’ll try not to take it personally), you can check out this video by Kevin Learns Investing. All the best with getting financially fit!
I integrated PostHog analytics into this website today. I decided to do so after reading Gergely Orosz's Stats page on his blog The Pragmatic Engineer. He had installed Plausible analytics and made his analytics dashboard public. I thought that was cool and wanted to do the same.
+
I managed to find a Plausible plugin for Docusaurus v2, which is what this website runs on. But I decided to use PostHog instead. It also has a Docusaurus plugin and a public dashboard feature. I chose PostHog because it is free and Plausible isn't.
+
+
\ No newline at end of file
diff --git a/blog/tags/book-review/index.html b/blog/tags/book-review/index.html
new file mode 100644
index 00000000..747e0040
--- /dev/null
+++ b/blog/tags/book-review/index.html
@@ -0,0 +1,19 @@
+
+
+
+
+
+One post tagged with "book-review" | Evan Tay
+
+
+
+
+
+
+
+
+
+
+
Picking up this book was one of the best decisions I made for my fledgling software engineering career. Its insights enabled me to make well-reasoned software design decisions, and confidently communicate them, in spite of my relative professional inexperience. Given how helpful it has been, I’m here today to share more about the impression it has left on me, and convince you that it is a must-read if you are a software engineer.
+
+
\ No newline at end of file
diff --git a/blog/tags/ci/index.html b/blog/tags/ci/index.html
new file mode 100644
index 00000000..d3495477
--- /dev/null
+++ b/blog/tags/ci/index.html
@@ -0,0 +1,21 @@
+
+
+
+
+
+One post tagged with "ci" | Evan Tay
+
+
+
+
+
+
+
+
+
+
+
I got tired of deploying my Docusaurus website to GitHub Pages manually, and decided to do something about it using GitHub Actions.
+
Initially, I was planning to follow the official guide on doing so. However, it was actually much more complicated than I liked. I did not really want to generate and store an SSH key on GitHub. Too much effort man.
+
I decided it was better off for me to write my own script. Here it is:
+
+
\ No newline at end of file
diff --git a/blog/tags/cloud/index.html b/blog/tags/cloud/index.html
new file mode 100644
index 00000000..6248cad9
--- /dev/null
+++ b/blog/tags/cloud/index.html
@@ -0,0 +1,20 @@
+
+
+
+
+
+One post tagged with "cloud" | Evan Tay
+
+
+
+
+
+
+
+
+
+
+
STACK 2020 Developers Conference is GovTech’s flagship conference and the largest government-led developers event in Singapore that connects government, industry and the tech community.
+
In this post, you will find my key takeaways for some of the sessions I attended during the conference. I chose to attend these sessions based on my interests in Cloud and Microservices.
+
+
\ No newline at end of file
diff --git a/blog/tags/conference/index.html b/blog/tags/conference/index.html
new file mode 100644
index 00000000..5239468a
--- /dev/null
+++ b/blog/tags/conference/index.html
@@ -0,0 +1,20 @@
+
+
+
+
+
+One post tagged with "conference" | Evan Tay
+
+
+
+
+
+
+
+
+
+
+
STACK 2020 Developers Conference is GovTech’s flagship conference and the largest government-led developers event in Singapore that connects government, industry and the tech community.
+
In this post, you will find my key takeaways for some of the sessions I attended during the conference. I chose to attend these sessions based on my interests in Cloud and Microservices.
+
+
\ No newline at end of file
diff --git a/blog/tags/docusaurus/index.html b/blog/tags/docusaurus/index.html
new file mode 100644
index 00000000..9a7c9931
--- /dev/null
+++ b/blog/tags/docusaurus/index.html
@@ -0,0 +1,22 @@
+
+
+
+
+
+2 posts tagged with "docusaurus" | Evan Tay
+
+
+
+
+
+
+
+
+
+
+
I integrated PostHog analytics into this website today. I decided to do so after reading Gergely Orosz's Stats page on his blog The Pragmatic Engineer. He had installed Plausible analytics and made his analytics dashboard public. I thought that was cool and wanted to do the same.
+
I managed to find a Plausible plugin for Docusaurus v2, which is what this website runs on. But I decided to use PostHog instead. It also has a Docusaurus plugin and a public dashboard feature. I chose PostHog because it is free and Plausible isn't.
I got tired of deploying my Docusaurus website to GitHub Pages manually, and decided to do something about it using GitHub Actions.
+
Initially, I was planning to follow the official guide on doing so. However, it was actually much more complicated than I liked. I did not really want to generate and store an SSH key on GitHub. Too much effort man.
+
I decided it was better off for me to write my own script. Here it is:
+
+
\ No newline at end of file
diff --git a/blog/tags/github-action/index.html b/blog/tags/github-action/index.html
new file mode 100644
index 00000000..2050272b
--- /dev/null
+++ b/blog/tags/github-action/index.html
@@ -0,0 +1,21 @@
+
+
+
+
+
+One post tagged with "github-action" | Evan Tay
+
+
+
+
+
+
+
+
+
+
+
I got tired of deploying my Docusaurus website to GitHub Pages manually, and decided to do something about it using GitHub Actions.
+
Initially, I was planning to follow the official guide on doing so. However, it was actually much more complicated than I liked. I did not really want to generate and store an SSH key on GitHub. Too much effort man.
+
I decided it was better off for me to write my own script. Here it is:
+
+
\ No newline at end of file
diff --git a/blog/tags/gov-tech-stack/index.html b/blog/tags/gov-tech-stack/index.html
new file mode 100644
index 00000000..6a905158
--- /dev/null
+++ b/blog/tags/gov-tech-stack/index.html
@@ -0,0 +1,20 @@
+
+
+
+
+
+One post tagged with "GovTechSTACK" | Evan Tay
+
+
+
+
+
+
+
+
+
+
+
STACK 2020 Developers Conference is GovTech’s flagship conference and the largest government-led developers event in Singapore that connects government, industry and the tech community.
+
In this post, you will find my key takeaways for some of the sessions I attended during the conference. I chose to attend these sessions based on my interests in Cloud and Microservices.
+
+
\ No newline at end of file
diff --git a/blog/tags/index.html b/blog/tags/index.html
new file mode 100644
index 00000000..4dbfebd0
--- /dev/null
+++ b/blog/tags/index.html
@@ -0,0 +1,19 @@
+
+
+
+
+
+Tags | Evan Tay
+
+
+
+
+
+
+
+
+
+
+
This article was posted sometime back in early 2021 and may be outdated. Refer to SingSaver instead for updated information.
+
Disclaimer
This post is not sponsored, and does not constitute financial advice of any form (I bet you know better than me). Read at your own discretion.
+
Signing up on StashAway (a robo-advisor platform) back in the early 2020s was the first step in taking control of my financial health. Before that, 100% of my cash was just sitting in a POSB Savings account, growing at an incredibly pointless rate of 0.05% per annum. I might as well have kept the money under my mattress.
+
+
\ No newline at end of file
diff --git a/blog/tags/microservices/index.html b/blog/tags/microservices/index.html
new file mode 100644
index 00000000..5637350f
--- /dev/null
+++ b/blog/tags/microservices/index.html
@@ -0,0 +1,20 @@
+
+
+
+
+
+One post tagged with "microservices" | Evan Tay
+
+
+
+
+
+
+
+
+
+
+
STACK 2020 Developers Conference is GovTech’s flagship conference and the largest government-led developers event in Singapore that connects government, industry and the tech community.
+
In this post, you will find my key takeaways for some of the sessions I attended during the conference. I chose to attend these sessions based on my interests in Cloud and Microservices.
+
+
\ No newline at end of file
diff --git a/blog/tags/posthog/index.html b/blog/tags/posthog/index.html
new file mode 100644
index 00000000..57a53d1d
--- /dev/null
+++ b/blog/tags/posthog/index.html
@@ -0,0 +1,20 @@
+
+
+
+
+
+One post tagged with "posthog" | Evan Tay
+
+
+
+
+
+
+
+
+
+
+
I integrated PostHog analytics into this website today. I decided to do so after reading Gergely Orosz's Stats page on his blog The Pragmatic Engineer. He had installed Plausible analytics and made his analytics dashboard public. I thought that was cool and wanted to do the same.
+
I managed to find a Plausible plugin for Docusaurus v2, which is what this website runs on. But I decided to use PostHog instead. It also has a Docusaurus plugin and a public dashboard feature. I chose PostHog because it is free and Plausible isn't.
+
+
\ No newline at end of file
diff --git a/blog/tags/sharing/index.html b/blog/tags/sharing/index.html
new file mode 100644
index 00000000..1f71d4e2
--- /dev/null
+++ b/blog/tags/sharing/index.html
@@ -0,0 +1,23 @@
+
+
+
+
+
+2 posts tagged with "sharing" | Evan Tay
+
+
+
+
+
+
+
+
+
+
+
This article was posted sometime back in early 2021 and may be outdated. Refer to SingSaver instead for updated information.
+
Disclaimer
This post is not sponsored, and does not constitute financial advice of any form (I bet you know better than me). Read at your own discretion.
+
Signing up on StashAway (a robo-advisor platform) back in the early 2020s was the first step in taking control of my financial health. Before that, 100% of my cash was just sitting in a POSB Savings account, growing at an incredibly pointless rate of 0.05% per annum. I might as well have kept the money under my mattress.
I first bought my personal domain www.evantay.com way back in 2013. Younger me thought it would be cool and fun to run my own website, and it still is. 😎
+
To be honest, I can't even remember what my very first website looked like, but I am quite sure it was built using vanilla HTML and CSS, something unthinkable in this day and age.
+
+
\ No newline at end of file
diff --git a/blog/tags/software-architecture/index.html b/blog/tags/software-architecture/index.html
new file mode 100644
index 00000000..7d9d1509
--- /dev/null
+++ b/blog/tags/software-architecture/index.html
@@ -0,0 +1,19 @@
+
+
+
+
+
+One post tagged with "software-architecture" | Evan Tay
+
+
+
+
+
+
+
+
+
+
+
Picking up this book was one of the best decisions I made for my fledgling software engineering career. Its insights enabled me to make well-reasoned software design decisions, and confidently communicate them, in spite of my relative professional inexperience. Given how helpful it has been, I’m here today to share more about the impression it has left on me, and convince you that it is a must-read if you are a software engineer.
+
+
\ No newline at end of file
diff --git a/blog/tags/software-engineering/index.html b/blog/tags/software-engineering/index.html
new file mode 100644
index 00000000..d42f98c8
--- /dev/null
+++ b/blog/tags/software-engineering/index.html
@@ -0,0 +1,19 @@
+
+
+
+
+
+One post tagged with "software-engineering" | Evan Tay
+
+
+
+
+
+
+
+
+
+
+
Picking up this book was one of the best decisions I made for my fledgling software engineering career. Its insights enabled me to make well-reasoned software design decisions, and confidently communicate them, in spite of my relative professional inexperience. Given how helpful it has been, I’m here today to share more about the impression it has left on me, and convince you that it is a must-read if you are a software engineer.
+
+
\ No newline at end of file
diff --git a/blog/why-you-should-read-ddia/index.html b/blog/why-you-should-read-ddia/index.html
new file mode 100644
index 00000000..845b7466
--- /dev/null
+++ b/blog/why-you-should-read-ddia/index.html
@@ -0,0 +1,29 @@
+
+
+
+
+
+Why every Software Engineer should read Designing Data-Intensive Applications | Evan Tay
+
+
+
+
+
+
+
+
+
+
+
Picking up this book was one of the best decisions I made for my fledgling software engineering career. Its insights enabled me to make well-reasoned software design decisions, and confidently communicate them, in spite of my relative professional inexperience. Given how helpful it has been, I’m here today to share more about the impression it has left on me, and convince you that it is a must-read if you are a software engineer.
+
+
I kickstarted my engineering career back in January 2021, as a full stack engineer at Padlet. During the onboarding process, my (amazing) mentor, Brian, imparted a great deal of guidance to me. One of his tips was that I should take a look at Kleppmann’s Designing Data-Intensive Applications. Thankfully, we had two copies of the book in the office, purchased by my (also amazing) boss, Shu Yang, who recommended me to read it too. I’m thankful I ended up taking their advice, because I was able to glean so many insights from Kleppmann, insights that happened to be highly applicable to the infrastructure and full stack projects I was developing.
+
+
"This book should be required reading for software engineers." - Kevin Scott, Chief Technology Officer at Microsoft
+
+
Like Brian, Shu Yang and Kevin, I now also believe all software engineers working on a distributed, cloud or data-intensive system will greatly benefit from reading the book. It provides the fundamental framework for thinking about these systems, and also the vocabulary to communicate such thoughts. Coupled together, these insights will empower you to make better design decisions and effectively convey them, even if you lack prior experience in the problem domain.
+
Kleppmann also compares the fundamental ideas behind the broad range of popular data systems out there today by discussing their advantages, limitations and trade-offs, rather than diving deep into the intricacies of each tool. This is ideal given that the book's objective is to help us choose the right tool for the right occasion, a decision for which these characteristics are far more relevant.
+
If you lack the time (or will) to pore over the entire book, you should at least check out the opening chapter. In it, Kleppmann gives a comprehensive yet succinct overview of what I mentioned above, and provides a clear, detailed explanation of the three key principles in designing data-intensive system architecture: Reliability, Scalability and Maintainability. Just reading this first chapter alone was beneficial to me, as I was now able to better understand and discuss architectural concerns with my team.
+
If you're still not convinced whether to invest your time into this book, you can check out a summary I've written for the first chapter, where I’ve condensed Kleppman’s opening discourse on Reliability, Scalability and Maintainability. I’m certain it’ll provide a glimpse into the many lessons that Designing Data-Intensive Applications has to share, and if you do read the book, definitely let me know what you think!
You do not have to write out the CONTAINER_ID in full; the partial CONTAINER_ID you specify need only uniquely identify the container. Given the containers 'aa151b912448' and 'aa153cd14238', docker logs aa15 will not work (the prefix is ambiguous), but docker logs aa151 will.
This script assumes that the working directory contains a .git directory, Dockerfile and package.json. A .gitignore and a .dockerignore file may be useful too. It is also assumed that the project dependencies have been installed using npm install.
+
deploy.sh
+
IMAGE_NAME=""       # The name of your image
VERSION=""          # Can be left blank (defaults to "latest" below)
CONTAINER_NAME=""
CONTAINER_PORT=""   # To be mapped to HOST_PORT
HOST_PORT=""        # The port clients will access

docker stop $CONTAINER_NAME   # Stop the running container
docker system prune -af       # Removes the container and all unused images
git pull origin master        # Pulls the latest source files
docker build -t $IMAGE_NAME:${VERSION:-latest} .
docker run --name $CONTAINER_NAME -p $HOST_PORT:$CONTAINER_PORT -d $IMAGE_NAME:${VERSION:-latest}
+
Before the script can be used, replace the following placeholders:
+
+
IMAGE_NAME : The name of your image.
+
VERSION : The version of your image. Can be left blank.
+
CONTAINER_NAME : The desired name of your container.
+
CONTAINER_PORT : The port of your container which your application is running on.
+
HOST_PORT : The host port your container port will be mapped to. This is the port that is exposed on the host system.
In this section, you will find my notes on using Kubernetes on Google Cloud Platform's Kubernetes Engine (GKE). It covers a typical workflow for starting a Kubernetes cluster and deploying an application on it.
This command enables switching to a specific cluster, when working with multiple clusters. It can also be used to access a previously created cluster from a new workstation.
To create a deployment, you need to have your Docker image prepared beforehand. This image must be built and uploaded to the Container Registry before you can deploy it on your GKE cluster.
+
tip
Before you proceed, you need to configure Docker to authenticate to the Container Registry: gcloud auth configure-docker
+
+
Build your image:
+docker build -t gcr.io/$PROJECT_ID/$NAME:$VER .
+
Verify it was built:
+docker images
+
Upload your image to the registry:
+docker push gcr.io/$PROJECT_ID/$NAME:$VER
+
Verify the image runs:
+docker run --rm -p $HOST_PORT:$CONT_PORT gcr.io/$PROJECT_ID/$NAME:$VER
+
Create your deployment:
+kubectl create deployment $D_NAME --image=gcr.io/$PROJECT_ID/$NAME:$VER
+
Verify it was deployed:
+kubectl get pods
+
Expose the deployment to the Internet via a Service resource:
+kubectl expose deployment $D_NAME --type=LoadBalancer --port $EXPOSED_PORT --target-port $CONT_PORT
+
Verify the service is running:
+kubectl get service
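As an aside, the same create-and-expose flow can be written declaratively and applied with kubectl apply -f. Below is a minimal manifest sketch; the names, image path and ports are illustrative stand-ins for $D_NAME, gcr.io/$PROJECT_ID/$NAME:$VER, $CONT_PORT and $EXPOSED_PORT from the commands above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                              # $D_NAME
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: gcr.io/my-project/my-app:v1   # gcr.io/$PROJECT_ID/$NAME:$VER
          ports:
            - containerPort: 8080              # $CONT_PORT
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80                              # $EXPOSED_PORT
      targetPort: 8080                      # $CONT_PORT
```

Keeping the manifest in version control makes the deployment reproducible, which the imperative kubectl create/expose commands are not.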
To update your deployment with a new Docker image, you have to upload it to the Container Registry. Next, you can apply a rolling update of your deployment's Docker image.
+
+
Build your new image (remember to update $VER):
+docker build -t gcr.io/$PROJECT_ID/$I_NAME:$VER .
+
Verify it was built:
+docker images
+
Upload your image to the registry:
+docker push gcr.io/$PROJECT_ID/$I_NAME:$VER
+
Verify the image runs:
+docker run --rm -p $HOST_PORT:$CONT_PORT gcr.io/$PROJECT_ID/$I_NAME:$VER
+
Apply a rolling image update:
+kubectl set image deployment/$D_NAME $I_NAME=gcr.io/$PROJECT_ID/$I_NAME:$VER
Example scenario:
+I have merged my development branch feat/new-feature into develop and wish to delete the local and remote branch of feat/new-feature now.
+
The solution:
+
+
Delete the local branch:
+
+
git branch -d feat/new-feature
+
or with --force: git branch -D feat/new-feature
+
+
+
Delete the remote branch: git push <remote> -d <branch>
+
+
The general command format it is based on:
+
+
Delete a local branch:
+
+
git branch -d|--delete <branch>
+
or with --force: git branch -D <branch>
+
+
+
Delete a remote branch: git push <remote> -d|--delete <branch>
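The local half of the flow above can be tried end-to-end in a throwaway repository (a sketch; the directory, branch name and identity values are illustrative, and deleting the remote branch is omitted since it needs a real remote):

```shell
# Scratch repository to try the branch-deletion flow safely.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"
git branch feat/new-feature            # create the branch at HEAD
git branch -d feat/new-feature         # -d succeeds: the branch is fully merged
git branch --list feat/new-feature     # prints nothing: the branch is gone
```

If the branch had unmerged commits, the -d step would refuse and you would need -D to force it.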
Example scenario:
+I messed up my local master branch. I want to reset it such that it will revert to being the same as origin/master.
+
The one-line solution:
+git checkout -B master origin/master
+
The general command format it is based on:
+git checkout -b|-B <new_branch> [<start point>]
+
How does it work?
+
git checkout -b|-B <new_branch> is typically used to create and switch to a new branch. However, if <new_branch> already exists, it will be reset instead.
+
Cite
If -B is given, <new_branch> is created if it doesn’t exist; otherwise, it is reset.
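The reset behaviour can be sketched in a scratch repository, using a local branch as a stand-in for origin/master (names are illustrative; assumes Git 2.28+ for init -b):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b master
git config user.email demo@example.com
git config user.name demo
echo good > file.txt
git add file.txt
git commit -qm "good state"
git branch pristine                  # stands in for origin/master here
echo broken > file.txt
git commit -qam "commit to discard"  # "mess up" master
git checkout -qB master pristine     # master already exists, so -B resets it
cat file.txt                         # back to "good"
```

After the checkout, master points at the same commit as pristine and the working tree matches it.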
Example scenario:
+I want to stop tracking changes for a given file temporarily, as I am still editing it and do not want an incomplete copy of it to be added to staging when I do git add * (to add all changes in the current path).
If you want a certain file or directory to be ignored for the long-term, you should choose to do so using .gitignore files. Otherwise, if you only want Git to ignore a certain file temporarily, before committing it at a later time, you should do so using the git update-index commands.
Git uses .gitignore files to decide which files and directories to ignore before you commit. Files and directories specified in .gitignore will not be tracked nor staged when git add * or git commit -a commands are used.
+
To ignore a file or directory using .gitignore:
+
+
Create a .gitignore file in the desired location (e.g. the project's root directory).
+
+
The .gitignore file affects all files and sub-directories in its directory.
+
+
It recursively affects all files and directories in its sub-directories.
+
Sub-directories with their own .gitignore file use their own .gitignore instead.
+
+
+
+
+
Open the file and enter the filename or directory to be ignored (e.g. secret.txt). See example below.
+
Save the file and commit it.
+
+
Commit it to share the ignore rules with other users using the same repository and also to version-control it.
+
+
+
+
Example .gitignore file:
+
# Ignore specific file
plaintext_password.txt
# Ignore specific file type using wildcards
*.html
# Ignore specific directory
site/
+
caution
If a file is already being tracked, adding it to .gitignore will not stop Git from tracking it. You will need to remove the file from the Git cache using the git rm --cached <file> command. After doing this, the file will no longer be tracked provided it is specified in .gitignore.
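The caution above can be reproduced in a throwaway repository (a sketch; file names and identity values are illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo hunter2 > plaintext_password.txt
git add plaintext_password.txt
git commit -qm "oops: committed the secret"
echo plaintext_password.txt > .gitignore   # too late: the file is already tracked
git rm -q --cached plaintext_password.txt  # stop tracking it, but keep it on disk
git add .gitignore
git commit -qm "untrack the secret"
git status --porcelain                     # prints nothing: the file is now ignored
```

Note that the file still exists in the working tree, and in the repository's history; only future changes to it are ignored.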
If you want Git to temporarily ignore a file which you will commit at a later time, you can do so easily by using the git update-index commands to tell Git to stop and start tracking the file.
+
To temporarily ignore (stop tracking) a file:
+
git update-index --assume-unchanged <file>
+
To start tracking a file again:
+
git update-index --no-assume-unchanged <file>
+
tip
Read the man page to find out more about git update-index rules.
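A quick sketch of the full round trip in a scratch repository (the file name and identity values are illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo v1 > local-config.txt
git add local-config.txt
git commit -qm "init"
git update-index --assume-unchanged local-config.txt
echo v2 > local-config.txt                 # edit the tracked file
git status --porcelain                     # prints nothing: the edit is hidden
git update-index --no-assume-unchanged local-config.txt
git status --porcelain                     # now shows the file as modified
```

Unlike .gitignore, nothing here is shared with other users of the repository; the flag lives only in your local index.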
I found the book's cover illustration captivating. Sadly, I can't say I felt the same for the rest of the book. While the authors did introduce numerous interesting concepts, they did not explore them in meaningful depth. Instead, they jumped far too quickly from one concept to another, covering each superficially. As such, the book felt like a list of fun facts. It was a pity given how many of the concepts introduced could have been starters for insightful discourse.
+
That being said, I still penned a personal reflection on the book, because parts of it were still thought-provoking for me. In this document, you'll find some of these thoughts: the ones I am comfortable sharing, and my biggest takeaways from the book.
In Okinawa, a blue zone, there is no word for retirement.
+
+
+
"Blue Zones are regions of the world where a higher than usual number of people live much longer than average." - Wikipedia
+
+
+
Instead, they have Ikigai which loosely translates to the reason for which you wake up in the morning.
+
Ikigai is at the intersection of what you love, what the world needs, what you can be paid for, and what you are good at.
+
I am grateful I found my Ikigai - writing software for computers, early in my life, back when I was 14.
+
I am also thankful I get to develop my other passion - which is to write for humans, on this website.
+
+Having an Ikigai, a reason for being, a why for living, is paramount to living a meaningful life:
+
+
+
"He who has a why to live for can bear with almost any how." - Friedrich Nietzsche, a German philosopher who had a profound influence on modern intellectual history
+
+
+
If we ever feel overwhelmed, chances are we might have lost sight of our why.
+
+
+
"Why do you not commit suicide?" - Viktor Frankl, an Austrian neurologist, psychiatrist, philosopher, author, and Holocaust survivor who founded logotherapy
+
+
+
It is also important to always be conscious about why we choose to live day after day.
+
Reading this reminded me of the following quote:
+
+
+
"Memento Mori" - Latin for "remember you must die"
+
+
+
Knowing that we must die one day, why do we still do what we do?
We should live like the locals in Ogimi, a village in Okinawa known as the Village of Longevity: they are always busy with meaningful tasks, but do everything with a sense of calm and never in a rush.
+
We all have to face difficult times in life, we don't get a say in that, but we can choose what attitude we have and what we do in those moments.
+
We also need to remember that this too shall pass, and sometimes all we have to do is persevere a little longer.
+
+
+
"This too shall pass." - A Persian adage which reflects on the temporary nature of all things human
+
"頑張る/Ganbaru." - Japanese for "to persevere" and "to stand firm"
+
+
+
Resilience isn't just about the ability to persevere, but also about staying focused on the important things in life.
+
It is important not to get swept up with negative emotions and lose sight of what truly matters.
Shoma Morita, a psychotherapist who founded Morita therapy - a branch of clinical psychology strongly influenced by Zen Buddhism, believes that feelings will change as a result of your actions.
The concluding paragraph of the book resonated deeply with me. It embodies what I want to live by moving forward:
+
+
+
"Life is not a problem to be solved. Just remember to have something that keeps you busy doing what you love while being surrounded by the people who love you."
+
+
+
Being able to pursue my Ikigai while surrounded by cherished ones is really all I need in this life, besides the lowest tiers of Maslow's hierarchy of needs.
Here, you will find my thoughts on articles, books, videos and other forms of media. I also maintain a Reading List of articles, books, videos, and more which I think are meaningful and insightful.
Here, you will find a collection of concise notes on full-stack software engineering and cloud operations. These notes are filed under their respective topic, with related topics categorised under the same chapter.
Published on January 16, 2021
+Updated on January 23, 2021
+
In this section, you will find my notes on setting up iTerm2, an awesome terminal emulator for macOS, and ohmyzsh, an amazing open-source framework for managing your zsh configuration.
JavaScript ES6 introduced two new keywords for declaring variables: let and const. Previously, the keyword var was the only way to do so. let and const were introduced because there were issues with var which made it error-prone and hard to debug.
+
                                     var                  let        const
Scope                                Global or function   Block {}   Block {}
Must be initialized when declared    No                   No         Yes
Can be used before initialization    Yes                  No         No
Can be redeclared                    Yes                  No         No
Can be updated                       Yes                  Yes        No
Hoisted to start of scope            Yes                  Yes        Yes
+
Avoid using var because it is global or function scoped, and a var can easily be redeclared or updated unknowingly in another function. Furthermore, a var is initialized with undefined and can be used even before being assigned a value.
+
On the other hand, let and const are block scoped, and cannot be redeclared. Additionally, const cannot be updated. Both let and const also cannot be used before initialization, and will throw a ReferenceError rather than return undefined like var.
+
Quote
"Hoisting is a JavaScript mechanism where variables and function declarations are moved to the top of their scope before code execution." - Sarah Chima Atuonwu, Var, Let, and Const – What's the Difference?
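The differences summarised above can be exercised directly in a small sketch (runnable with Node.js; the variable names are arbitrary):

```javascript
// var leaks out of a block; let does not.
{
  var a = 1;
  let b = 2;
}
console.log(typeof a); // "number": var is hoisted out of the block
console.log(typeof b); // "undefined": b no longer exists out here
// (typeof on an undeclared name returns "undefined" instead of throwing)

var a = 10; // redeclaring a var is silently allowed
// let b = 20; let b = 30; // would be a SyntaxError: already declared

const c = 1;
try {
  c = 2; // updating a const throws
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```

Note the last point: reassigning a const fails at runtime with a TypeError, while redeclaring a let fails earlier, at parse time.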
Setting up and running Mininet VM on a virtualization program.
+
Setting up and running NOX/POX controller on your local machine.
+
Connecting your POX controller to your Mininet application (and troubleshooting).
+
+
note
I wrote this setup guide using macOS, POX on Python 3, and VMware Fusion. However, the setup flow and troubleshooting should be similar for Linux/Windows and other virtualization programs.
Download, setup and launch the latest Mininet VM here.
+
+
For your virtualization program, try VirtualBox first, given that it is free and open-source.
+
If you do use VirtualBox, remember to add a Host-only network adapter to your VM under
+Select VM > Settings > Network
+
If you are on MacOS, and the Mininet VM aborts when you launch it (as it does for me), you may have to consider using VMware Fusion instead.
+
If you do use VMware Fusion, change your network adapter:
+Virtual Machine > Network Adapter > Bridged (Autodetect)
+
+
+
Log into the VM using mininet as both username and password.
+
+
Before setting up NOX/POX on your local machine, it would be best to familiarise yourself with Mininet and verify that your setup is working fine. Try out a few of the commands in #mininet-cheat-sheet such as pingall.
You will be setting up the NOX/POX controller on your local machine, and linking it up with the Mininet application in your VM.
+
Verify that your POX installation is working:
+
git clone https://github.com/noxrepo/pox
cd pox
./pox.py log.level --DEBUG
+
+
Verify that you see:
+INFO:core:POX 0.7.0 (gar) is up.
+
You can terminate POX after step 1.
+
If it does not work, it is likely because you do not have Python3 installed. You can git checkout master to change to the Python2 version for POX. See this for more information.
Next, we will set up a Mininet network (on your Mininet VM) with the remote controller set to the POX controller (on your local machine). To do so, you will need to open two terminals, one on your local machine where POX is at, and another terminal in your Mininet VM.
Verify that you see something similar to:
DEBUG:openflow.of_01:Listening on 0.0.0.0:6633
+
+
On your Mininet VM, check if your VM can reach the POX controller at the port it is listening on:
+
nc -zvw10 0.0.0.0 6633
+
+
Replace the IP address and port with what you see after Listening on in the previous step.
+
Verify that you see something similar to:
Connection to 127.0.0.1 port 6633 [tcp/*] succeeded!
+
If you see something like:
nc: connectx to 127.0.0.1 port 6633 (tcp) failed: Connection refused,
it is probably because your VM cannot access your host machine. See #troubleshoot-connectivity.
Next, start Mininet with the controller set to your POX controller on your local machine:
+
sudo mn --controller=remote,ip=0.0.0.0,port=6633
+
+
Replace the IP address and port accordingly.
+
+
Lastly, check if your POX remote controller is connected:
+
h1 ping h2
+
+
Verify that h1 is able to ping h2. If not, your remote controller is not connected.
+
You should also see output in the POX window similar to:
DEBUG:forwarding.l2_learning:installing flow for 52:1e:48:64:23:43.2 -> 02:07:aa:33:88:e5.1
+
+
If you are able to make it to this point, your setup for Mininet VM and remote POX is completed. See #resources for more information on what you can do next with Mininet!
If you are unable to connect to the POX controller from your VM, it could be one of the following problems:
+
+
Firewall rules are blocking it.
+
Incorrect IP address or port.
+
+
If you are certain you are specifying the correct IP address and port, and that your firewall is off or allows traffic in on the port POX is listening on, attempt the fix in #vm-host-connectivity.
The instructions here are for VMware Fusion and macOS, but you can use them as a guide for solving connectivity issues between Windows/Linux and VirtualBox/other virtualization programs too. See this for more information.
+
On your local machine:
+
+
Turn off your Mininet VM:
Virtual Machine > Shutdown (for VMware Fusion)
+
Change your network adapter to Bridged (Autodetect) if you have not done so yet:
Virtual Machine > Network Adapter > Bridged (Autodetect)
+
Turn on your Mininet VM.
+
Find out your local machine's IP address:
System Preferences > Network > Wi-Fi
+
Look out for something similar to:
Wi-Fi is connected to YourWifi and has the IP address 192.168.0.152
+
+
On your Mininet VM:
+
Use your local machine's IP address for the nc command:
+
nc -zvw10 192.168.0.152 6633
+
If this works, use this address instead of 127.0.0.1 or 0.0.0.0 whenever you are specifying the remote controller's IP address for Mininet.
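Putting the bridged setup together, the Mininet invocation on the VM would then use your host machine's address (the 192.168.0.152 address here is the example from above; replace it with yours):

```shell
# On the Mininet VM: point the remote controller at your host machine.
sudo mn --controller=remote,ip=192.168.0.152,port=6633
```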
# Select the database
show dbs
use <db-name>
# Show all collections in the database
show collections
# Print out all documents in the database
db.<collection-name>.find()
# Print out in an easy-to-read but less compact format
db.<collection-name>.find().pretty()
show dbs
use <db-name>
# Show all collections in the database
show collections
# Remove all documents in the collection
db.<collection-name>.remove( { } )
# See reference for more information
The purpose of this guide is to quickly set up a local copy of MongoDB on Windows for local development purposes. Authentication will not be enabled or covered in this tutorial.
Next, to create a new database, use the use command:
+
+
# To display the database you are using
db
# To switch databases, use `use <database>`
# To create a new database, switch to a non-existing database
use dev
# Template
use <database>
Next, create a user with readWrite and dbAdmin roles, using the db.createUser() command:
+
+
# Switch to the database you want to add the user to
use dev
# Create the user with `readWrite` and `dbAdmin` rights
db.createUser( { user: "devadmin", pwd: passwordPrompt(), roles: [ "readWrite", "dbAdmin" ] } )
# Template
db.createUser( { user: <username>, pwd: <password>, roles: [ "readWrite", "dbAdmin" ] } )
The connection string is used to access the MongoDB instance from your applications (e.g. via MongooseJS). The format of your connection string is as follows:
+
mongodb://[username:password@]host1[:port1][,...hostN[:portN]][/[database][?options]]
# Parts in [ and ] are optional
# Example, without authentication
mongodb://localhost:27017/dev
# Example, with authentication
mongodb://devadmin:<password>@localhost:27017/dev
# Replace the <password> with your actual password
To verify your connection string, simply run mongo <mongoURI>:
+
# Example, without authentication
mongo mongodb://localhost:27017/dev
# Example, with authentication
mongo mongodb://devadmin:<password>@localhost:27017/dev
In this document, you will find my summary for the Network Performance Model and Queueing Model content covered under CS4226: Internet Architecture course taught by Dr. Richard Ma. I compiled this document with the help of notes written by my good friend Matthew over here.
The long-term average number L of customers in a stationary system is equal to the long-term average effective arrival rate λ multiplied by the average time W that a customer spends in the system.
+
In the context of Internet Architecture:
+
$L = \lambda W$
+
+
L: Average number of packets in the system
+
+
$L = \lim_{t \to \infty} \frac{1}{t} \int_0^t L(s)\,ds$
+
+
+
λ: Average packet arrival rate for the system
+
+
$\lambda = \lim_{t \to \infty} \frac{N(t)}{t}$, where $N(t)$ is the number of packets which arrived up to time $t$
+
+
+
W: Average sojourn time
+
+
$W = \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} W_i$, where $n$ is the number of packets and $W_i$ is the waiting/sojourn time for the $i$-th packet
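As a quick worked example of Little's law (the numbers are hypothetical):

```latex
% Suppose packets arrive at rate \lambda = 100 packets/s
% and the average sojourn time is W = 0.05 s. Then:
L = \lambda W = 100 \times 0.05 = 5 \text{ packets in the system on average}
```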
Model the interarrival times $T_i$ as independent and identically distributed (i.i.d.) random variables distributed as $T$, which is exponentially distributed with rate $\lambda \geq 0$. The exponential distribution was chosen because of its memoryless property.
The $T_i$s are i.i.d. random variables distributed as $T$ with rate $\lambda$. This arrival pattern is called a Poisson process, in which the starting time does not matter (memoryless property). Therefore, two Poisson processes can be merged to create a new Poisson process:
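Concretely, merging (superposing) two independent Poisson processes yields a Poisson process whose rate is the sum of the two:

```latex
% Superposition of independent Poisson processes with rates \lambda_1, \lambda_2
\lambda_{\text{merged}} = \lambda_1 + \lambda_2
```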
You are a Windows user who just started working on a new NodeJS project. While following through the "Getting Started" guide, you were instructed to run one or more npm run scripts such as npm run dev. These commands fail with errors.
+
Upon checking, you realise these commands fail because they contain Bash (shell) commands not available to your Windows shell. You proceed to install Git Bash for Windows, and then re-run the command using Git Bash instead. It still fails. It seems npm run scripts still use your Windows shell for execution.
Tell npm config what shell you want your npm run scripts to be executed with (in this case, the Git Bash shell):
+
npm config set script-shell "C:\\Program Files\\git\\bin\\bash.exe"
+
note
It is assumed that your Git Bash executable file is located at "C:\Program Files\git\bin\bash.exe", which is the default installation location. If it is not, amend the path accordingly.
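You can verify that the setting took effect, or revert it later, with the standard npm config subcommands:

```shell
# Print the shell npm run scripts will use
npm config get script-shell
# Revert to the default shell if needed
npm config delete script-shell
```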
Using the root user account on a regular basis is risky and compromises security, so it is best avoided. Instead, create a new user account and add it to the sudo group.
As mentioned, usage of the root user account should be avoided. Hence, it is advisable to add your public key to the user account you created earlier. It is assumed that you logged into your root account using an SSH key.
+
+
Create an authorized_keys file:
+
+
su evan
cd ~
mkdir .ssh
vim .ssh/authorized_keys
+
+
Insert your public key and save the file with :wq!. You can copy it from the authorized_keys file under the root account's home directory, which you can open with the following commands:
+
+
su root
cd ~
vim .ssh/authorized_keys
+
tip
Toggle visual mode by pressing v at the start of the line containing the public key you wish to copy. Press $ to move the cursor to the end of the line, which highlights the entire line. Press y to yank (copy) it. Then exit the file with :q!.
Open the destination authorized_keys file and press p to paste what you yanked.
As it is assumed that you logged into your root account using an SSH key, this step may be unnecessary. However, do still check that PasswordAuthentication no is in place.
+
+
Open sshd_config with Vim:
+
+
sudo vim /etc/ssh/sshd_config
+
+
Add PasswordAuthentication no. It might be commented out as #PasswordAuthentication no or written as PasswordAuthentication yes. If you find either, replace it with PasswordAuthentication no. Otherwise, just add it in.
+
+
tip
Use :/PasswordAuthentication to find #PasswordAuthentication no.
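After saving, you can confirm the effective setting and apply the change. sshd -T prints the effective server configuration; the restart command assumes a systemd-based distribution:

```shell
# Print the effective value; should output: passwordauthentication no
sudo sshd -T | grep -i passwordauthentication
# Apply the change (the service may be named ssh or sshd depending on your distro)
sudo systemctl restart sshd
```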
Figma: How Figma’s multiplayer technology works: How Figma implemented multiplayer in-house, without having to use operational transformation, the standard multiplayer technique used by apps like Google Docs. I really like how the article explores synchronization issues clearly and succinctly, often with the aid of easy-to-understand animations.
Martin Kleppmann: Designing Data-Intensive Applications: Reading this book was one of the best decisions I ever made as a fledgling software engineer. It enlightens readers on the fundamental ideas behind the broad range of popular data systems out there today. It also discusses the key trade-offs between these systems, so readers can make better-informed decisions about which to use given the constraints and context.
Github: Why Write ADRs: Architecture Decision Records (ADRs) are a great way to document how and why a decision was reached within a codebase. ADRs discuss the problem context, concerns, outcomes, alternative options and accepted trade-offs.
Hashicorp: Unlocking the Cloud Operating Model: Hashicorp's white paper discusses how enterprises can adopt the Cloud Operating Model to maximise the value of their digital transformation efforts. The paper succinctly but clearly explains how one can capitalise on the dynamic nature of the cloud to achieve much more than possible with a static on-premise infrastructure setup.
+
The Akamai Network: A Platform for High-Performance Internet Applications: The creators of Akamai's CDN (Content Delivery Network) share how Akamai overcame numerous network challenges, such as the middle-mile bottleneck, to enable the delivery of high-performance Internet applications.
Published on July 31, 2021
Updated on January 3, 2022
Edited by Vanessa Tay
+
The opening chapter of Kleppmann’s Designing Data-Intensive Applications book, Reliable, Scalable, and Maintainable Applications, addresses key concerns you should consider when designing distributed and data-intensive systems, in an insightful way. I believe anyone working on a distributed system will benefit from reading it. However, as not all of us may have the time (or will) to pore over the book, I’ve decided to share a quick summary of the key points Kleppmann raises, as well as to offer some of my personal inputs with references to other literature and experts.
Reliability, Scalability, and Maintainability, the three characteristics that Kleppmann opens with, are terms you might come across often. If you're not familiar with them, you may wonder: what are they and why are they important?
When building an application, we want it to work correctly, even when things go wrong.
+
+
"Anything that can go wrong will go wrong." - Murphy's law
+
+
The adage above can be applied to just about anything in life and applications are no exception. If we want our applications to be resilient during adverse conditions, we will have to design them with the expectation that things will go wrong. We can't blindly hope they won't.
Reliability: A reliable system continues to work correctly, even when things go wrong.
+
Fault: A fault is typically defined as an individual component of the system deviating from its spec, i.e. performing in an unexpected way.
+
Failure: The entire system failing as a whole, and being unable to deliver the required service to its users.
+
Fault-tolerant/Resilient: If a system anticipates and prevents faults from causing failures, it is fault-tolerant.
+
+
note
As it is impossible to design a zero-fault system, we should focus on preventing faults from causing failures instead. We do so by implementing fault-tolerance mechanisms.
When we design a fault-tolerant system, we will naturally consider how we can build it to tolerate hardware and software errors. We introduce hardware redundancy, such that if a hard disk fails, there's a backup which will take its place. We also write fault-tolerant code, such that a software fault would not cause the server to fail.
+
These errors aren't the only ones we should consider. We should also consider how we can prevent human errors. After all, we humans design, create and operate these systems.
+
+
"Even when they have the best intentions, humans are known to be unreliable." - Found on Page 9 of the book
+
+
We humans write the code, and decide what hardware to run our code on. We are responsible for all bugs and mistakes within our systems. Given that, it is important we ask ourselves often:
+
+
"How do we make our systems reliable, in spite of unreliable humans?" - Found on Page 9 of the book
+
+
To achieve that, we will need to design the system such that it:
+
+
Minimizes opportunities for introducing errors: We should design abstractions, APIs and administrator interfaces which make it easy to do the right thing, and hard to do the wrong thing.
+
Mitigates impact of failures by allowing quick and easy recovery: We should provide a fast and easy way for developers to roll back a failure-inducing deployment, and for operators to undo accidental changes in the administrator interface.
+
Reduces delay in diagnosing errors through detailed monitoring: We should set up clear and detailed monitoring which could provide early warning signals, and also insights into what went wrong so we can better triage errors.
As the load on our system increases, we want it to continue working correctly. To achieve that, we will have to design it such that it is scalable. Scalability describes a system's ability to deliver its expected functionality in spite of increased load. Given that there are many different types of load a system can have, it is meaningless to discuss whether the system is scalable or not. It is more productive to talk about whether it is scalable in a specific manner:
+
+
"If the system grows in a particular way, what are our options for coping with the growth?" - Found on Page 11 of the book
+
+
Before we can describe scalability, we will first need to define load. We can do so numerically by using load parameters.
A load parameter is a metric you can use to describe a particular load for a given system. Examples include requests per second for a web application, and the ratio of cache hits to misses. The load parameters you should focus on depends on the architecture of your system and your user requirements.
+
+
"An architecture that scales well for a particular application is built around assumptions of which operations will be common and which will be rare - the load parameters." - Page 18 of the book
After defining the load parameters of your system, you can now describe how increases in load affect the system:
+
+
When a load parameter increases, how is the system performance affected if you keep the system resources constant?
+
When a load parameter increases, how much do you need to increase the resources by to keep the system performance constant?
+
+
To answer these questions, you would need to define performance metrics. Examples of such metrics include the throughput of a network protocol, and the response time of a web service.
Response time is a common and important performance metric for online, distributed systems. There are many different definitions for it out there. In the context of online systems, it is typically defined as the time between a client sending a request to the system and receiving a response from it.
+
When you consider a system's response time, it is important to consider it not as a single value, the average (mean), but as a distribution of values, the percentiles. That's because the response time for requests varies a lot, and there are many outliers which are much slower. There are many reasons why this is so; here are some:
+
+
Different types of requests have different processing time: An online system handles many different types of requests which take varying amounts of time to process.
+
Caching reduces response time for common requests: Common requests are often cached for high-traffic systems, and responded to much faster than those which are not.
+
+
Given these reasons, the distribution of response time is asymmetric and significant outliers are common. This makes the mean much less representative of the response time than the median, also known as the 50th percentile.
+
The median also provides information about the distribution which the mean does not - if the median response time is 80ms, you can infer that half of the requests have a response time faster than 80ms, and also that the other half would be slower than that. You can't infer the same from an average response time, as it is not a middle value like the median.
You should also consider how slow the outliers are, by looking at higher percentiles such as the 95th and 99th percentile. These are the thresholds at which 95% or 99% of the requests are faster than that particular threshold. They're also commonly called tail latencies. It is important you consider these, because the users with the slowest response time are often those who have used the system most extensively.
+
Amazon uses the 99.9th percentile for internal service response time requirements. They do so even though only 0.1% of requests are slower, because the customers with these requests are often the most valuable customers. They experience longer response time because they have more data. They have more data because they made many more purchases than typical customers, thus making them more valuable.
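To make the mean-versus-percentile point concrete, here is a small sketch with ten hypothetical response times; note how a single slow outlier drags the mean far above the median:

```shell
# Ten hypothetical response times in ms; one outlier (2000 ms).
times="120 80 95 300 60 85 90 110 75 2000"
echo "$times" | tr ' ' '\n' | sort -n | awk '
  { a[NR] = $1; sum += $1 }
  END {
    print "mean:", sum / NR                 # 301.5 - skewed by the outlier
    print "p50:",  a[int(NR * 0.50 + 0.5)]  # 90 - the typical request
    print "p95:",  a[int(NR * 0.95 + 0.5)]  # 2000 - the tail
  }'
```

The mean (301.5 ms) suggests requests are three times slower than what the typical user actually experiences (90 ms at the median), which is exactly why percentiles are the more representative summary.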
"An architecture that is appropriate for one level of load is unlikely to cope with 10 times that load." - Found on Page 17 of the book
+
+
If you want to maintain good system performance, when the load parameters increase, you would need to increase the resources. There are two ways of doing so:
+
+
Vertical scaling: Scaling up by adding more power - adding more CPU or RAM to your virtual machine instance.
+
Horizontal scaling: Scaling out by adding more machines - adding more instances to your instance group.
+
+
There are tradeoffs between both approaches. A system running on a single, powerful machine is much simpler to develop and maintain than one on multiple machines. However, as you scale up a machine, it gets increasingly costly to do so, and scaling out becomes inevitable. You would need to find the right balance between both approaches if you want to achieve the most cost-effective and efficient outcome.
When building a system, we want to build it such that it is as easy to maintain as possible.
+
+
"It is well known that the majority of the cost of software is not in its initial development, but in its ongoing maintenance - fixing bugs, investigating failures, modifying it for new use cases, and adding new features." - Found on Page 18 of the book
+
+
We should design systems which are easy to operate, understand and evolve. To achieve that, we should follow these three principles when designing a system:
+
+
Operability: We should make it easy for operators to keep the system running smoothly.
+
Simplicity: We should make it easy for engineers to understand the system by reducing as much system complexity as possible.
+
Evolvability: We should make it easy for engineers to change the system in future, adapting it for unanticipated use cases to match requirement changes.
A system with good operability makes routine maintenance tasks easy, allowing the operations team to focus on higher-value contributions. We can achieve that by designing a system with:
+
+
Good telemetry: Set up informative and usable monitoring and logging of the system's runtime behaviour and health.
+
Good documentation: Document in an easy-to-understand manner such that operators are clear on what they can do and what is the outcome - e.g. "If I do X, Y will happen".
+
Good default behaviour: Supply default values and settings for operational/internal tools, but allow operators to override defaults when needed for edge cases.
As a system grows larger, so does its complexity. This makes the system harder to understand by those working on it, which is problematic in many ways, such as:
+
+
Lower productivity: Engineers will take longer to complete tasks because they will have to spend more time understanding what they are working on.
+
Higher risk of introducing bugs: Engineers are more likely to overlook hidden assumptions and unintended side effects which will cause faults.
+
+
Moseley and Marks define two types of complexity in their paper Out of the Tar Pit:
+
+
Essential Complexity: inherent in the essence of the problem
+
Accidental Complexity: anything else which the development team would not have to deal with ideally (e.g. complexity arising from suboptimal language and infrastructure)
+
+
While it is inevitable that a system becomes more complex as it grows, we can mitigate it by reducing accidental complexity. We can do so by keeping simplicity in mind when working on the system. One of the best and most common approaches to doing so is by implementing abstractions, which can hide a ton of implementation detail behind a simple-to-understand facade.
It is likely your system's requirements will change due to reasons such as:
+
+
An unanticipated use case emerging
+
Business priorities changing
+
Users requesting new features
+
+
The ease at which you evolve your system to meet the new requirements depends heavily on its simplicity. The easier it is to understand your system, the easier it would be to modify it.
Why do we use bundle install rather than gem install?
+
Bundler installs the exact gems and versions that are needed. It resolves all dependency conflicts for you automatically, which you would have to manually resolve if you had used gem install instead.
+
For example, if you have two gems requiring different versions of the same gem nokogiri:
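A minimal sketch of such a Gemfile, assuming (for illustration) that sunspot_rails requires nokogiri >= 1.2.0 while webrat requires nokogiri >= 1.3.0:

```ruby
# Gemfile - hypothetical version constraints for illustration only
source "https://rubygems.org"

gem "sunspot_rails" # depends on nokogiri >= 1.2.0
gem "webrat"        # depends on nokogiri >= 1.3.0
# `bundle install` resolves both constraints to a single nokogiri version.
```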
If you use gem install to install sunspot_rails and webrat, it might install both the 1.2.0 and 1.3.0 nokogiri versions, or even complain about version conflicts. If you use bundle install instead, Bundler will resolve this dependency conflict by installing the right nokogiri version (1.3.0 in this example).
+
+