Designing Data-Intensive Applications Chapter 1 Notes/Overview… Like for real, they are active recall notes for me.

Anthony Johnson
3 min read · Dec 15, 2020

--

Message me what you see.

Alright, so this is article one in a series of articles I will be writing to consolidate my understanding of “Designing Data-Intensive Applications” by Martin Kleppmann, as well as to have something to look back on to measure improvement.

I highly recommend reading this book, and I hope these notes bring someone value. If not, that's OK; these are for me, not you. They are in no way a substitute for reading the book.

This chapter gives an overview of the three pillars of a well-architected data system. These three pillars are:

  • Reliability — the difference between faults and failures, the different factors that can lead to bugs and errors, and how to prevent or tolerate those errors, faults, and failures.
  • Scalability — the ability of a system to cope with growing load in a way that makes sense for its particular workload.
  • Maintainability — how easily a system can be operated, understood, and changed over time.

In the olden days, the traditional school of thought was to grow your servers vertically until that was no longer feasible compared to adding more machines. This has since changed: in today’s world we are often less worried about raw computation and more worried about the sheer amount of data, i.e. data-intensive vs compute-intensive workloads.

Note: Three things to think about when thinking about a data system:
  • What is the “Rate of Change”? How much is changing at once?
  • What is the “Speed of Change”? How quickly do these changes need to occur?
  • What is the complexity of the data?

What does a data system do, and what are some components of one?

  • Stores data — this is your conventional database, e.g. relational (SQL) databases, MongoDB, Aurora, DynamoDB
  • Remembers the results of expensive operations — caches, e.g. Redis (there's a small sketch of this right after this list)
  • Searches indexes — full-text search, e.g. Elasticsearch (also available as a managed service on AWS)
  • Stream processing — message queues, e.g. Apache Kafka
  • Batch processing — e.g. Azure Batch
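Quick aside from me (none of this code is from the book): here's a minimal sketch of the cache-aside pattern I mentioned for Redis above. It assumes a local Redis instance via the redis-py client, and query_database is a hypothetical stand-in for an expensive query.

```python
import json

import redis  # assumes the redis-py client is installed and Redis is running locally

cache = redis.Redis(host="localhost", port=6379, db=0)

def query_database(user_id):
    # Hypothetical stand-in for an expensive SQL query or downstream service call.
    return {"id": user_id, "name": "example user"}

def get_user(user_id, ttl_seconds=300):
    """Cache-aside: check Redis first, fall back to the database on a miss."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                     # cache hit: skip the expensive work
    user = query_database(user_id)                    # cache miss: do the expensive work
    cache.set(key, json.dumps(user), ex=ttl_seconds)  # remember the result with a TTL
    return user
```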

Many of these systems have similar features, but it all comes down to access patterns: each one is specialized for different jobs, use cases, and implementations.

Checking reliability can be done by deliberately inducing faults. A fault is when a single component deviates from its spec; a failure is when the system as a whole stops providing the service the user expects. Many faults trace back to human error and bugs.
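To make the "inducing faults" idea concrete for myself, here's a toy sketch (mine, not the book's): a wrapper that randomly raises a fault so you can check that the calling code tolerates it instead of letting it turn into a failure. The flaky and fetch_with_retry names are made up for illustration.

```python
import random

class InjectedFault(Exception):
    """A deliberately induced fault, used only to test fault tolerance."""

def flaky(func, fault_rate=0.1):
    """Wrap a function so it fails randomly some fraction of the time."""
    def wrapper(*args, **kwargs):
        if random.random() < fault_rate:
            raise InjectedFault(f"injected fault while calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

def fetch_profile(user_id):
    return {"id": user_id, "name": "example"}

flaky_fetch = flaky(fetch_profile, fault_rate=0.2)

def fetch_with_retry(user_id, attempts=3):
    """Tolerate the fault by retrying, so it doesn't become a user-visible failure."""
    for attempt in range(attempts):
        try:
            return flaky_fetch(user_id)
        except InjectedFault:
            if attempt == attempts - 1:
                raise  # out of retries: the fault escalates into a failure
```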

Reliability is sometimes traded for flexibility and elasticity (EC2, i.e. Elastic Compute Cloud, anybody?), where virtual machine instances can become unavailable without warning.

Scalability is the ability of a system to cope with growing load in a way that meets its unique needs. Tough questions need to be asked for this, like: how are our users actually using our product? Something to think about…

(It's my daughter's 2nd birthday and I have like 5 min to finish this before we go to Defy, the jump park.)

Cool notes:
Amazon defines response time requirements on the 99.9th percentile of latency. They have found that an extra 100 ms of latency reduces sales by about 1%, and that optimizing the 99.99th percentile isn't cost-effective.
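For my own recall, here's roughly how a tail-latency percentile like p99.9 gets computed from a window of measured response times (a sketch using the nearest-rank method, not Amazon's actual pipeline):

```python
import math

def percentile(latencies_ms, p):
    """Nearest-rank percentile: p percent of requests were at least this fast."""
    ordered = sorted(latencies_ms)
    rank = max(1, math.ceil(p / 100 * len(ordered)))  # 1-based rank into the sorted list
    return ordered[rank - 1]

latencies = [12, 15, 18, 20, 22, 25, 30, 45, 90, 1500]  # made-up sample, in milliseconds

print(percentile(latencies, 50))    # median: what a typical request experiences
print(percentile(latencies, 99.9))  # tail latency: what the slowest requests experience
```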

Twitter deals with a fan-out problem: when one user tweets, that tweet gets inserted into the cached home timeline of each of their followers. This leads to some interesting caching mechanisms, and they actually treat celebrity tweets differently, merging them in at read time instead of fanning them out at write time.
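Again just for my own understanding, here's a toy sketch of fan-out on write with a celebrity special case (my simplification, definitely not Twitter's real code; the threshold and data structures are made up):

```python
from collections import defaultdict

CELEBRITY_THRESHOLD = 1_000_000       # arbitrary cutoff for the special case

followers = defaultdict(set)          # author -> set of follower ids
home_timelines = defaultdict(list)    # user -> cached list of tweet ids
celebrity_tweets = defaultdict(list)  # celebrity author -> their recent tweet ids

def post_tweet(author, tweet_id):
    if len(followers[author]) >= CELEBRITY_THRESHOLD:
        # Too many followers to fan out cheaply: store once, merge at read time.
        celebrity_tweets[author].append(tweet_id)
    else:
        # Fan-out on write: push the tweet into every follower's cached home timeline.
        for follower in followers[author]:
            home_timelines[follower].append(tweet_id)

def read_home_timeline(user, followed_celebrities=()):
    timeline = list(home_timelines[user])
    for celeb in followed_celebrities:
        timeline.extend(celebrity_tweets[celeb])  # merge celebrity tweets on read
    return timeline
```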

Maintainability can be broken down into three categories: operability (making it easy for operations to keep the system running), simplicity (can someone like me understand it?), and evolvability (how coupled is this thing… can we switch out parts?).
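On the "can we switch out parts?" question, here's a tiny sketch of what loose coupling looks like to me in practice: callers depend on a small Cache interface rather than a concrete backend, so an in-memory cache could later be swapped for Redis without touching them. This is my illustration, not something from the chapter.

```python
from typing import Optional, Protocol

class Cache(Protocol):
    """The narrow interface the rest of the code depends on."""
    def get(self, key: str) -> Optional[str]: ...
    def set(self, key: str, value: str) -> None: ...

class InMemoryCache:
    """One concrete backend; a Redis-backed class could replace it later."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value):
        self._data[key] = value

def greeting(cache: Cache, user: str) -> str:
    # This function only knows about the Cache interface, not the backend,
    # so swapping out the implementation doesn't require changing it.
    cached = cache.get(user)
    if cached is None:
        cached = f"Hello, {user}!"
        cache.set(user, cached)
    return cached
```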

Welp, that is it. I really just wanted to get this one out: spaghetti notes on Designing Data-Intensive Applications.

If anything above is way wrong, let me know.
