Posts

System design for Sharding a Database

Database sharding is the process of splitting data across multiple servers so that an application can scale. The diagram above makes this concrete: in the upper half, a single database holds all the customers from 1 to 8; in the lower half, the same database has been split into shards of 4. Sharding has many advantages: it makes the database easier to maintain and far easier to scale.

Let us design this database in detail. Our system has some requirements, so let's start with those and design the system with the resulting assumptions in mind.

Requirements

Data size: Let's assume we have a few hundred TBs of data.

Data partition: Data can be partitioned in many ways; the right choice depends on the problem. We can partition based on customer_id, location, items, inventory, and others (a minimal routing sketch follows this excerpt).

Estimations

In this part, we will discuss the rough estimations of our system.

Data part: In the…
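The excerpt above names customer_id as one possible partition key but includes no code, so here is a minimal sketch of hash-based shard routing. The hostnames, the shard count, and the md5-modulo scheme are illustrative assumptions, not the post's actual design.

    import hashlib

    # Hypothetical shard hosts; a real deployment would load these from config.
    SHARDS = [
        "shard-0.db.internal",
        "shard-1.db.internal",
        "shard-2.db.internal",
        "shard-3.db.internal",
    ]

    def shard_for(customer_id: int) -> str:
        """Route a customer to a shard by hashing the partition key.

        A stable hash (hashlib, not Python's built-in hash()) keeps the
        mapping consistent across processes and restarts.
        """
        digest = hashlib.md5(str(customer_id).encode()).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    # Customers 1 to 8 from the diagram spread across the shards:
    for cid in range(1, 9):
        print(cid, "->", shard_for(cid))

One caveat worth noting: plain modulo hashing forces a large re-shuffle of data whenever the shard count changes, which is why production systems often prefer consistent hashing.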

System design for an LRU cache

Systems design is the process of defining the architecture, modules, interfaces, and data for a system to satisfy specified requirements. Systems design could be seen as the application of systems theory to product development. In this article, we will learn how we can design a cache.

Let's begin to design our system. Any system design has requirements, and enlisting them gives us a better idea of the overall limitations of our MVP (Minimum Viable Product).

Requirements

Data size: Let's say we will store data at Facebook scale, going up to a few TBs.

Cache eviction strategy: Over time we might not have space to store all the cache entries, so we need a strategy to evict old entries, such as LRU (Least Recently Used); you can read more here (a minimal sketch follows this excerpt).

Access patterns for cache:

Write-through cache: In this pattern, a write goes through the cache and is marked as successful only after writes to both the cache and the database succeed…
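The excerpt mentions LRU eviction without showing code, so here is a minimal LRU cache sketch; the OrderedDict-based design and the capacity of 2 in the usage example are illustrative choices, not the post's implementation.

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.store = OrderedDict()  # insertion order doubles as recency order

        def get(self, key):
            if key not in self.store:
                return None
            self.store.move_to_end(key)  # mark as most recently used
            return self.store[key]

        def put(self, key, value):
            if key in self.store:
                self.store.move_to_end(key)
            self.store[key] = value
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict the least recently used entry

    cache = LRUCache(capacity=2)
    cache.put("a", 1)
    cache.put("b", 2)
    cache.get("a")         # "a" becomes most recently used
    cache.put("c", 3)      # evicts "b", the least recently used entry
    print(cache.get("b"))  # None

Both get and put run in O(1), which is the property an LRU cache is usually chosen for; a write-through variant would simply persist to the database inside put before updating the cache.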

Kubernetes Nginx-ingress controller with HAProxy on Bare Metal

Kubernetes works like a wonder in cloud environments such as AWS, Azure, or GCE. Many times, though, we do not have the luxury of such cloud environments. This article helps you set up an ingress controller with Nginx as a load balancer on a bare-metal machine.

Pre-requisites
- 3 virtual machines
- Git installed
- CentOS 7 or any Linux distro

Setting up HAProxy on CentOS 7

HAProxy is free, open-source software that provides a high-availability load balancer and proxy server for TCP- and HTTP-based applications, spreading requests across multiple servers.

Create a virtual machine named HAProxy. Install HAProxy using the yum package manager:

    yum install haproxy -y

Edit the HAProxy config file:

    vi /etc/haproxy/haproxy.cfg

Delete everything below the defaults section and append the following (note that the directive is roundrobin; HAProxy does not accept round-robin):

    frontend http_front
        bind *:80
        bind :8080
        stats uri /haproxy?stats
        default_backend http_back

    backend http_back
        balance roundrobin
        server kube 10.10.10.10:80 s…
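As a quick sanity check (these commands are standard HAProxy/CentOS usage and an assumption on my part, since the excerpt cuts off before them), you can validate the config file and then enable and restart the service:

    haproxy -c -f /etc/haproxy/haproxy.cfg   # -c validates the config without starting
    systemctl enable haproxy
    systemctl restart haproxy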