System Design
Load balancer
- Does only routing and no computation, so it can handle much higher traffic than a server that also does application work.
- To avoid a single point of failure, keep a standby routing server; the standby takes over only when the main server goes down.
- Assign the same static IP to the passive standby server so failover is transparent to clients.
- Modern load balancers can route on the order of 1M requests per second.
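A minimal Python sketch of the two ideas above (the backend addresses, class names, and health-check flag are all hypothetical): the router only picks a backend and forwards, and the passive standby handles traffic only when the active router is down.

```python
import itertools

BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]  # assumed addresses

class Router:
    """Routing only, no computation: round-robins requests across backends."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def route(self, request):
        backend = next(self._cycle)   # pick the next backend; a real LB would forward the bytes
        return backend, request

def serve(request, active, standby, active_healthy):
    # Active and standby answer on the same static IP; the standby is used
    # only when the active router fails its health check.
    router = active if active_healthy else standby
    return router.route(request)

active, standby = Router(BACKENDS), Router(BACKENDS)
print(serve({"path": "/home"}, active, standby, active_healthy=True))
print(serve({"path": "/home"}, active, standby, active_healthy=False))  # failover path
```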
Managing large datasets
- Vertical scaling - increasing the capacity of a single server
- Functional scaling
- Split data logically by business function, e.g. users, purchasing
- Microservices expose their data to other services through APIs
- Implemented at the application layer
- Horizontal scaling
- Also called sharding
- Split a single logical database across multiple servers called data nodes
- Can be implemented at the application layer or the database layer
- Pick a distribution scheme that does not depend directly on the number of servers, so that the number of keys to be redistributed when servers are added or removed is minimised
- Consistent Hashing ring
- Uses two hash functions, Hs and Hc
- Hs marks servers on the consistent hashing ring
- Hc marks the partition key or client id on the same ring (see the sketch below)
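A minimal sketch of such a ring, assuming an MD5-based hash for both Hs and Hc and a few hypothetical data nodes. A key belongs to the first server clockwise from Hc(key), so adding or removing a server only remaps the keys in that server's arc instead of rehashing everything.

```python
import bisect
import hashlib

RING_SIZE = 2 ** 32

def _hash(value):
    # Illustrative hash; any uniform hash onto the ring works.
    return int(hashlib.md5(value.encode()).hexdigest(), 16) % RING_SIZE

Hs = _hash  # marks servers on the consistent hashing ring
Hc = _hash  # marks partition keys / client ids on the same ring

class ConsistentHashRing:
    def __init__(self, servers):
        # Place each server on the ring at position Hs(server).
        self._points = sorted((Hs(s), s) for s in servers)

    def server_for(self, key):
        # Walk clockwise from Hc(key) to the first server, wrapping around.
        positions = [p for p, _ in self._points]
        idx = bisect.bisect_right(positions, Hc(key)) % len(self._points)
        return self._points[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.server_for("user-42"))  # data node that owns this partition key
```

In practice each physical server is placed at many virtual points on the ring to even out the load, which this sketch omits.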
Partitioning
- Functional split
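A minimal sketch of a functional split done at the application layer, along the lines of the users/purchasing example above; the domain names and connection strings are made up, and in a microservice setup each domain's data would be exposed to other services through an API rather than by sharing the database.

```python
FUNCTIONAL_DATABASES = {  # assumed DSNs, one database per business function
    "users":     "postgres://users-db:5432/users",
    "purchases": "postgres://purchases-db:5432/purchases",
}

def database_for(domain):
    """The application layer decides which database owns a functional domain."""
    return FUNCTIONAL_DATABASES[domain]

def save_user(user):
    print(f"writing {user['id']} to {database_for('users')}")       # a real app would open a connection

def save_purchase(purchase):
    print(f"writing {purchase['id']} to {database_for('purchases')}")

save_user({"id": "u1", "name": "Asha"})
save_purchase({"id": "p9", "user_id": "u1", "amount": 499})
```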
Sharding
- Removing storage bottleneck
- Data loss prevention
- To address throughput issues for hot reads
- Data loss prevention through replication
- Replication Factor (RF) - number of servers that hold a copy of the data
- R - number of servers that are read for each read request
- Read from the R servers, compare, and take the latest data
- For read-heavy systems R is 1, so a read touches only one server
- W - number of servers that must acknowledge a write
- The write goes to those W servers synchronously
- For write-heavy systems W is 1, so replication does not slow down writes
- For high consistency, R + W > RF (replication factor), i.e. the read and write quorums overlap; see the sketch below
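A minimal sketch of quorum reads and writes under these definitions (in-memory replicas, timestamp-based conflict resolution, random replica selection are all simplifications). Because R + W > RF forces every read set to overlap every write set, a read always sees the latest acknowledged write.

```python
import random
from datetime import datetime, timezone

class Replica:
    def __init__(self):
        self.store = {}                      # key -> (value, timestamp)

    def write(self, key, value, ts):
        self.store[key] = (value, ts)

    def read(self, key):
        return self.store.get(key)

def quorum_write(replicas, W, key, value):
    ts = datetime.now(timezone.utc)
    for replica in random.sample(replicas, W):   # W replicas acknowledge synchronously
        replica.write(key, value, ts)
    # the remaining RF - W replicas would catch up asynchronously

def quorum_read(replicas, R, key):
    results = [r.read(key) for r in random.sample(replicas, R)]
    results = [r for r in results if r is not None]
    return max(results, key=lambda vt: vt[1])[0] if results else None

RF = 3
replicas = [Replica() for _ in range(RF)]
quorum_write(replicas, W=2, key="user-42", value={"plan": "pro"})
print(quorum_read(replicas, R=2, key="user-42"))   # R + W = 4 > RF = 3, so this sees the write
```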