8 System Design Principles I Learned After Doing It Wrong More Than 50 Times!



At Squad, we strive to build awesome products to solve customer (internal and external) needs. As a product engineer, a paramount part of your job is to design and build products: dig deep into the root cause of the problems, design solutions and implement them as the end product.

Over the course of my journey so far, here are the 8 system and product design principles that I've learned from other awesome people at Squad, from feedback, and from simply not doing it right enough multiple times.

1. What is the underlying problem that led to the feature request?

At Squad, you don’t just code the requirements into the software. As a product engineer, it’s your responsibility to remove the layers and expose the root problem that led to the feature requirement.

Get to know the root cause of the problem that you are trying to solve. Or even better, as the lean principle “genchi genbutsu” says: go and see it for yourself.

2. How can you make the feature more robust, reliable and usable?

Once the essential feature requirements are finalized, we must press on: how can we make the feature more robust, reliable and usable?

Things to ponder and take into consideration:

  1. The persona of the users who are going to use the feature.
  2. The scenarios in which the feature will be used. For example, in the case of fires, show more data than strictly needed, for faster resolution.
  3. Building quality into the product itself, or “jidoka” as it is called in lean.

3. What is the first iteration going to be?

Given the time and resources you have, what is the best possible first iteration of the product going to be? If it's a large system or something you are building from scratch, there are always going to be iterations.

The main idea here should be to move fast and get things shipped. Good enough and shipped on time is always better than perfect and in-development forever.

4. How easy will it be to make iterations on the current feature?

The design should incorporate all the non-functional requirements to make future iterations easy.

Scale the feature? Change a component? Use a different 3rd party service? Your implementation should be flexible enough to incorporate and encourage these enhancements.

Design patterns are your best friend here.

5. What are the potential bottlenecks with scale?

Scale-land is where everyone wants to be, but it is scary. It breaks what was not supposed to break and has witnessed more horror stories than a haunted castle.

What are the potential bottlenecks that are not a problem now, but will break at 5X, 10X or 100X scale?

List them on the feature ticket, or better, document them in the code itself.

6. What’s the data that has to be captured and how will it be consumed?

Every feature in the product will need some data to be captured to track it. This can include, but is not limited to:

  1. Action logs.
  2. Event logs.
  3. Metrics.
  4. Failures.
  5. Anomalies.

What majorly affects this is how that data will be consumed. Store it in a structure that makes the consumption of the data easy and efficient. After all, the only motive for storing data is to use it.

7. How good will the developer experience be when interacting with the code base of that feature?

There can be many developers who’ll use or modify the code that you are going to write.

How will their experience be when doing that? For example, will the test cases you wrote make them feel confident enough to make changes fast?

A few points to consider:

  1. Is the code well documented?
  2. Are test cases strong enough?
  3. Is the code reusable where it makes sense?
  4. Are the functions small and the code simple to read?

8. What metrics will determine that the feature has been implemented successfully?

Finally, after all the fun-time you had creating the feature, what will determine that the feature has been implemented successfully?

The data you tracked will be of paramount importance here.

It may be the case that tracking this quantitatively is not possible, but can you track it qualitatively in that case?

The idea here is that you can't improve what you can't measure.

Processing 100,000 requests? Fewer errors by users? 95% of the work done by the new system instead of the old one?

This can and will involve more stakeholders from the team, not just the developer.


Obviously, this is not an exhaustive list of things to take into consideration while designing a system or a product as an engineer. It just covers what I have learned so far by doing things wrong, or not right enough, multiple times.

It’s fun to build stuff! Continuously improve (“Kaizen” in lean)! Keep iterating! Keep shipping!



That’s all, folks!



Deploying an nginx application using Kubernetes for Self-Healing and Scaling

Kubernetes is an open source system for automating the deployment, scaling and management of containerized applications. A more technical term for it is container orchestrator, which is used to manage large fleets of containers.

Minikube is an all-in-one, single-node Kubernetes installation for trying out Kubernetes on a local machine. The following post covers deploying an nginx application container using Kubernetes on minikube.

If you don't have them already, this link has everything you need to install minikube and kubectl (the command line tool to access minikube): Download and install minikube and kubectl

Step 1 : Getting minikube up and running

Ensure that minikube is running.
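If it isn't running yet, the following commands should start it and show its status:

minikube start
minikube status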


Step 2 : Open the minikube dashboard

Minikube comes with a GUI tool that opens in the web browser. Open the minikube dashboard with the following command:
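minikube dashboard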


It should open the dashboard in a browser window and it’ll look something like this:


Looks cool! No?

Step 3 : Deploy a webserver using the nginx:alpine image

Alpine Linux is preferred for containers because of its small size. We'll be using the nginx:alpine Docker image to deploy an nginx-powered webserver.

Now, go to the Deployments section and click the Create button, which will open an interface like the one below.


Fill in the details as shown in the image.

We can either provide the application details here, or we can upload a YAML file with our Deployment details.

As shown, we are asking Kubernetes to create a deployment with the nginx:alpine image as the container, and we want 3 pods (or simply, instances) of it.
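For reference, an equivalent Deployment manifest would look roughly like the one below (the name and labels here are assumptions based on the service definition later in this post; your dashboard form may use different ones):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx-webserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-nginx-webserver
  template:
    metadata:
      labels:
        app: my-nginx-webserver
    spec:
      containers:
      - name: my-nginx-webserver
        image: nginx:alpine
        ports:
        - containerPort: 80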

A pod in kubernetes is a scheduling unit, a logical collection of one or more containers that are always scheduled together.

Go on and click that awesome deploy button!

Step 4 : Analyzing the deployment

Once we click the Deploy button, Kubernetes will trigger the deployment. The Deployment will create a ReplicaSet. A ReplicaSet is a replication controller that ensures that the specified number of replicas of a pod are running at any given point in time.

Flow is something like this:

Deployments create ReplicaSets, ReplicaSets create Pods. Pods are where the real application resides.


As expected, we have our deployment, replica set and pods in place.

We can also check our deployment via the command line using kubectl.
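Something like the following should list what we just created (the exact pod names will differ):

kubectl get deployments
kubectl get replicasets
kubectl get pods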


Step 5 : Create a Service and expose it to the external world with NodePort

So far, we have our pods up and running. But how do we access them?

This is where a Service comes into play. K8s provides a higher-level abstraction called a Service, which logically groups pods along with a policy to access them. This grouping is done via labels and selectors.

We then expose the Service to the external world by defining its service type; the Service redirects our requests to one of the pods and load-balances across them.

Create a my-nginx-webserver.yaml file with the following content:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-web-service
  labels:
    run: my-nginx-web-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: my-nginx-webserver

Enter the following command to create a service named my-nginx-web-service:
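kubectl create -f my-nginx-webserver.yaml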


We can now verify that our service is running :
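kubectl get services

The PORT(S) column of the output shows the node port (something like 80:3xxxx/TCP) that we will use in the next step.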


Step 6 : Accessing the application

Our application is running inside the minikube VM. To access the application from our workstation, let’s first get the IP address of the minikube VM:
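minikube ip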


Now head to that IP address, at the port number of the service we got in the step above.


And our app is running! Amazing, give yourself a pat on the back!

A taste of the self-healing feature of Kubernetes :

One of the most powerful features of Kubernetes is its self-healing capability (just like Piccolo. DBZ, anyone?). While defining our app we created a ReplicaSet with 3 pods. Let's go ahead and kill one pod, and Kubernetes will create another one to keep the running pod count at 3.
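From the command line this would look something like the following, using one of the pod names shown by kubectl get pods:

kubectl get pods
kubectl delete pod <pod-name>
kubectl get pods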


As we can see in the image, we deleted the bottom-most pod and K8s created a new one instantly.

Such kubernetes! Much HA (High Availability)!

A taste of scaling with Kubernetes:

Now, our app is receiving a crazy amount of traffic and three nginx pods are not enough to handle the load. Kubernetes allows us to scale our deployments with almost zero effort.

Let’s go ahead and spin up a new pod.



Click OK. Now let’s go and check our pods.
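The same scaling can also be done from the command line, assuming the Deployment is named my-nginx-webserver as above:

kubectl scale deployment my-nginx-webserver --replicas=4
kubectl get pods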


As we can see in the image, we now have 4 pods running to handle the increased traffic.

Isn't it amazing? We just horizontally scaled our application with the power of Kubernetes.

This was just the tip of the iceberg of what Kubernetes can do. I am also exploring Kubernetes and containerized architecture just like you; hopefully I'll be back soon with another post with more Kubernetes stuff!

That’s all, folks!

Estimation Peril: How To Estimate Software Projects Effectively (or How Not To Lie)


Consider: you are a rockstar engineer, and you are given a task by your favorite person, your project manager, to show some new fields on the dashboard.

As usual, you are asked to estimate it as soon as possible. You think, well, this seems like a quickie, and you are tempted to estimate it at a day. But having been burnt before, you decide to look carefully at the fields that are to be added. These fields are for analytics. You think, OK, let's make it 2 days then. But being more cautious, you dig deeper and find that those analytics are not even being tracked in the app.

Now, to complete the story, you'll have to track the analytics, send them to the server, make the backend accept and store them, show them on the dashboard, write tests, etc.

What seemed like a simple task is now a 1-2 week thing. Very hard to estimate. And your manager was expecting a response like, “it would be done by end of day”.

What is the problem with estimates?

The main problem with an estimate is that the “estimate” gets translated into a commitment. And when you miss a commitment, you breed distrust.

Most estimates are poor because we don't know what they are for. They are uncertain. A problem that seemed simple to you on the whiteboard turns out not to be so simple: there are non-functional requirements, codebase friction, some unfortunate bugs, etc. We deal with uncertainty.

There is a rule in software engineering that everything takes 3X more time than you think it should, and this holds true even when you know this and take it into account!

Estimates can go the other way too, that is when you overestimate. This is as dangerous as underestimating.

What should an estimate look like?

An estimate should have 3 characteristics :

  1. Honest (Hardest)
  2. Accurate
  3. Precise

1. Honest : 

You have to be able to communicate bad news when the news is bad. And when the continuous outrage of your managers and stakeholders is in your face, you need to be able to stand firm and assert that the news is bad.

Honesty is important because it breeds trust. You are not eliminating disappointment, rage and people getting mad, but you will eliminate distrust.

2. Accurate :

Say you are given a task and you estimate it to take somewhere between now and the end of the universe. That's definitely accurate: it'll be done within that time.

We won’t breed distrust, but we definitely will breed something else.

Which brings us to the 3rd characteristic.

3. Precise : 

An estimate should have just the right amount of precision.

What is the most honest estimate that you can make? “I don't know!”

This is as honest as it can get. You really don't know. But this estimate is neither accurate nor precise.

But when we try to make precise estimates, we must note that we are assuming that everything goes right: we get the right breakfast, traffic doesn't suck, our co-workers are having a good day, no meetings, no hidden requirements, no non-functional complexities, etc.

Estimating by work breakdown

The most common way to estimate a complex task is to break it down into smaller sub-tasks, then break those sub-tasks into sub-sub-tasks, and so on, until each task at hand is manageable and ideally not more than 4 hours of work.

Imagine this forming a tree, with executable tasks at the bottom as leaves. You just estimate the leaves and it all adds up.

This approach works, but there are 2 problems :

  1. We missed the integration cost
  2. We missed some tasks

There is a fundamental truth to work breakdown structure estimates:

The only way to estimate accurately using a work breakdown structure, i.e. to know the exact sub-tasks, is to implement the feature!

What to expect from an estimate?

Estimates are uncertain. There is no guarantee that your estimate will work itself out. And that’s OK. It’s your manager’s job to manage that risk. We are not asking them to do something outside of their job.

The problem arises when you make a commitment. If you make a commitment, you must meet it. Be ready to move heaven and earth to meet it. But if you are not in a position to make a commitment, then don't make one.

Because your manager is going to set up a whole bunch of dominoes based on that commitment, and if you fail to deliver, everything fails.

Some interesting links :


Uncle Bob on Estimates: https://www.youtube.com/watch?v=eisuQefYw_o

Happy Estimating!

That’s all, folks!


The Blue Ocean Strategy : How To Create Uncontested Market Space and Make the Competition Irrelevant

When Henry Ford made cheap, reliable cars people said, ‘Nah, what’s wrong with a horse?’ That was a huge bet he made, and it worked.
The whole idea of The Blue Ocean Strategy is to create uncontested market spaces that create new demand and make the competition irrelevant.

The book describes Red Oceans as known marketplaces with bloody competition among businesses trying to win customers. Here there is a fixed existing demand of which every company wants a share.

A Blue Ocean, on the other hand, is an uncontested marketplace that creates demand for itself and is not yet known to others. This makes competition irrelevant. The focus is on creating, not competing.

Value Innovation :

Value innovation occurs when a company aligns innovation with utility, price and cost positions. Instead of using the competition as the benchmark, companies focus on taking leaps in value for customers.

The idea behind value innovation is to break out of the value-cost trade-off.

Reducing Costs :

Reduced costs for the products are achieved by eliminating and reducing the factors that the conventional industry competes on.

The best example to illustrate this is the case study of the Ford Model T.

Ford eliminated all factors like multiple colors and design variants and focused only on creating better cars for the masses.

Identifying Blue Oceans :

Identifying blue oceans requires the managers and strategists of the company to brainstorm on the strategy canvas, where each manager holds his/her department accountable.

The strategy canvas’ focus must be shifted from competition to alternatives and from customers to non-customers.

Reconstruct Market Boundaries :

The author proposes a framework of six paths for identifying blue oceans by reconstructing market boundaries:

  1. Look across alternative industries
  2. Look across strategic groups within industries
  3. Look across complementary product and service offerings
  4. Look across the chain of buyers
  5. Look across functional and emotional appeal to buyers
  6. Look across time

Reaching Beyond Existing Demand

To reach customers in new markets, think of non-customers before customer differentiation.

There are 3 tiers of non-customers :

  1. Jump Ship : These can switch to competitors at any moment.
  2. Refusing : These are using competitors’ products.
  3. Distant : The product doesn’t appeal to these customers.

Examples of Blue Ocean Strategies Implemented by Famous Companies :

  1. Ford :

Ford standardized the car and limited the options. This increased the quality of the car and brought the price point down.

2. GM :

General Motors found their blue ocean in making the cars fun, fashionable and comfortable.

3. Watson :

Watson introduced tabulators for businesses for the first time. They also introduced a leasing pricing model, which made it easy for businesses to own a tabulator.

4. Apple :

Apple created the Apple II and tapped the new market for ready-made, easy-to-use personal computers.

5. Dell :

Dell, on the other hand, found its blue ocean by changing the purchasing and delivery experience of the buyer. It allowed customization of the machines according to the needs of the buyer.

It is evident from the above examples that blue oceans are not unleashed by technology innovation per se but by linking technology to elements valued by buyers.

Strategy for Blue Ocean Implementation :

Two views of industry structure are related to strategic action.

  1. Structuralist View :

This view is based on market structure shaping conduct and performance. It deals with making sure that the company is making money in the red oceans.

2. Reconstructionist View :

This view is based on endogenous growth. It focuses on creativity, not on systematic approaches.

This view is responsible for finding blue oceans for the company.

Both views of strategy are necessary to ensure that the company is making money now and is also exploring new markets to remain competitive in the future.


Learning How To Learn : Course Experience

The human brain has 100 billion neurons, each neuron connected to 10 thousand other neurons. Sitting on your shoulders is the most complicated object in the known universe.

What is learning? Well, basically, forming and consolidating neural patterns.

Recently I took the course “Learning How to Learn” on Coursera. I really wanted to figure out the best way to enhance my learning, and this course has been really helpful. It provides you with the right tools and tips to construct your own learning schedule. The methods shown in the course are scientifically backed and help you understand your brain better.

Here is a quick summary of what I learned in the course :

WEEK 1 :

There are two modes of thinking :

  1. Focused Mode
  2. Diffuse Mode

Focused Mode is where our mind is concentrating on neural patterns it is already familiar with, and Diffuse Mode is where our mind is relaxed and ready to find new neural patterns.

Why do we procrastinate?

Studies have shown that we procrastinate because, when we are about to start a task we are uncomfortable with, our brain activates the parts that correspond to pain and thus wants to stay away from the task.

Solution? Well, just get started. With practice this feeling will go away.

Don’t think too much and just “eat that frog”.

WEEK 2 :

Chunking : A chunk is a small interconnectable piece of information that you can learn at a time.

The basic idea behind chunking is to get a bigger picture of the topic you are going to study and divide it into meaningful chunks. These chunks then get interconnected to help the brain learn effectively.

Mastery is just the art of increasing the number of chunks that you can interconnect.

Personally, I feel chunking is a great way to tackle procrastination too.

Illusion of competence :

When you are done learning a topic, force yourself to recall it. Recall tests whether you have actually learned it, whereas simply re-reading can leave you with an illusion of competence.

What motivates you? Having a feeling of motivation and excitement towards learning helps the brain learn more effectively, as feeling motivated releases dopamine, which causes happiness.

Overlearning and Einstellung :

One should beware of overlearning, i.e. repeating topics you already know several times. This causes the brain to go into Einstellung, which means the brain refuses to explore new neural patterns and becomes rigid around the ones it already knows very well.

WEEK 3 :

Habits are the energy-saver mode of our brains. When a habit has been formed, our brain doesn't overload itself with information and zombie mode kicks in.

A habit can be described with the following parts :

  1. The Cue : The trigger that launches the zombie mode.
  2. The Routine : The habitual response in reaction to the cue.
  3. The Reward : Habits exist because we get a reward.
  4. The Belief : To change habits we need to believe that we can change them.

Understanding how habits are formed and how they work can help us develop good new habits and get rid of the bad ones. Ten years from now everyone is going to know about your bad habits; your success is going to represent you. Now is the time to get rid of them.

Want to avoid procrastination? Focus on the process, not the product.

Also, make your to-do list for the next day the night before. This will allow your diffuse mode to work on it while you sleep.

WEEK 4 :

There is a difference in the smartness of people. Smartness roughly equals having a larger working memory.

But people with a smaller working memory have been shown to be more creative.

Deliberate practice, practicing the hard stuff again and again, can lift the normal brain into the realm of the naturally gifted. Practicing certain neural patterns deepens them in the mind.

How to become a better learner?

Exercise : Exercising creates new neurons in the brain.

Life experiences : Gaining varied life experiences also enhances the brain.

Analogies and Metaphors : Can be used to learn and memorize effectively.

And finally,

The virtue of the less brilliant is perseverance and grit.

This course has been great for understanding, in a nutshell, how our brain works. At least now you know how it works, so you can make it work for you.

I am trying to become good at things that are way more complex than what I have worked on till now. That was the reason I took the course.

My takeaways were to practice, persevere and have patience.

I have started the next course, MIT's “Introduction to Algorithms”. This is a course that I have always wanted to finish completely. Hopefully this time I can do that.

See you next time!


5 notes on MVP architecture pattern for Android

Image credits: Macoscope

MVP (Model-View-Presenter) is an architectural pattern inspired by the popular MVC pattern.

MVP addresses two main points :

  1. Make views as dumb as possible. The dumber the better.
  2. Make each layer loosely coupled and easily testable in isolation.

I am using MVP in one of my production projects and have used it in some demo apps. Here are my 5 notes on using MVP for Android.

  1. Package Structure :

An Android project contains lots of code and files, even for an application of medium complexity. Even when not following MVP, I have found that arranging the project so that files which are accessed together are put in the same package is more efficient and intuitive than any other approach.

What I prefer doing is to create a separate package for each vertical of the app and put all related files, like activities, fragments, views, presenters, adapters, etc., in that package.

For example, packages like add task, view task and list task for a To-Do app.

2. Libraries that are useful for MVP :

In MVP you want your model and presenter to be independent of the lifecycle of the view. For this, you can use a dependency injection library like Dagger 2.

Other than that, using RxJava and reactive programming principles for creating presenters is also becoming increasingly popular.

Libraries you can use for this purpose are RxAndroid and EventBus.

3. Managing Remote and local data sources in the Model :

Android apps have to fetch data from a server. At the same time, the fetched data must be cached to make the app usable offline and to increase speed.

What I prefer doing is to create three model classes :

1. Remote Data Source

2. Local Data Source

3. Data Repository

All presenters talk to the Data Repository class. The Data Repository holds references to the Local and Remote data sources and fetches data from either, according to the situation.

As the names suggest, the Local Data Source deals with cached data and disk storage, whereas the Remote Data Source deals with API calls and responses.
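To make this concrete, here is a rough sketch of the idea in plain Java; the class and method names are made up for illustration and are not from any particular project:

import java.util.Collections;
import java.util.List;

// Both data sources implement the same interface, so presenters (via the
// repository) never need to know where the data actually came from.
interface TaskDataSource {
    List<String> getTasks();
}

class RemoteTaskDataSource implements TaskDataSource {
    @Override
    public List<String> getTasks() {
        // In a real app this would be an API call (Retrofit, Volley, etc.).
        return Collections.singletonList("task fetched from the server");
    }
}

class LocalTaskDataSource implements TaskDataSource {
    @Override
    public List<String> getTasks() {
        // In a real app this would read from a local database or disk cache.
        return Collections.singletonList("task read from the cache");
    }
}

class TaskRepository implements TaskDataSource {
    private final TaskDataSource remote;
    private final TaskDataSource local;
    private boolean cacheIsFresh = false; // flipped after a successful fetch

    TaskRepository(TaskDataSource remote, TaskDataSource local) {
        this.remote = remote;
        this.local = local;
    }

    @Override
    public List<String> getTasks() {
        // Serve cached data when it is fresh; otherwise go to the network.
        if (cacheIsFresh) {
            return local.getTasks();
        }
        List<String> tasks = remote.getTasks();
        cacheIsFresh = true; // a real implementation would also refresh the cache here
        return tasks;
    }
}

Since presenters only ever see the TaskDataSource interface, it becomes easy to swap sources or mock them in tests.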

4. User Experience is the top priority :

One thing that we all have to keep in mind is that the real test of an application is whether it provides the user a nice experience.

At the end of the day, the user only notices the user experience of the application, not the architecture used. So if you have to make some design sacrifices to make the UX better, do it.

The real test of the machine is the satisfaction it provides to the mind. There is no other bigger test.

5. Testing Advantages :

The main motive behind the MVP pattern is to make testing the layers easy.

The basic idea is to keep the presenter and model Android-free, so that they can be tested on the JVM itself, without Android instrumentation.

Views can then be tested by Android Instrumentation tests.

Mockito and Espresso can come in handy for testing purposes.

Conclusion :

MVP, in my opinion, is so far the best way to architect your Android application project. It simplifies many issues, like testing and keeping views lighter. Combine it with RxJava and dependency injection and you've got a nice recipe for Android projects.

I am learning more about RxJava and testing frameworks and will share my views on those soon.



Business In Boxers 3 : The Psychological Roller Coaster

One of my favorite things about instrumental music is that the listener is encouraged to use his or her imagination. I have been a huge Owl City and Adam Young fan since forever. Lately he has been releasing sets of instrumental music, called Adam Young Scores, inspired by incidents that made a lasting impression on the world. Reading about all those incidents has made a lasting impression on my mind for sure: our small failures and successes don't even matter to the world; we have got to make it large.

This month has been a wacky psychological roller coaster. A slow-motion wave on the ocean stirring my emotions up like a rain cloud. When you are trying to start something new and you know the odds are against you, I guess this happens: you become very paranoid in some sense. Each blow shakes your confidence and you have to build it up again. It's exhausting sometimes. Here is where a nice snack helps 😀


At the start of this month we were all very keen on getting the MVP (minimum viable product) ready, but my mentor, the CEO of the startup where I am working, suggested doing market research and looking for idea validation. So one afternoon my friend and I visited a few shops and tried to convey the idea to them. Very few got the idea and showed interest. I guess demographics play a vital role here. We all made peace with the fact that we will need a real, tangible product to make people excited about it.

But soon things got hard. The workload at my internship got high, and juggling both my startup and the internship got really difficult. This made me think how hard it would be to manage this with a full-time job. That was the first blow. And soon other members of the team took off for exams, or campus placement preparation, or god knows what excuse.

Lately I realized that to become a good entrepreneur you should know the shit you are dealing with. Though I know that we learn stuff along the way, still, we should first invest in ourselves. That's why I am learning Rails and the other things needed to run the company. Attention to detail will cause a momentary pain in the ass. But it will be worth all the while.

As an overview, all startups working in a similar domain look the same. It is the ones who really dig deeper and strive for a great brand experience that make all the difference. This is what I believe in and want to do.

That's why I took a sales course before going to sellers for the local survey. This is why I was studying the business plan of Vinod Khosla, to know and set my goals. This is why I am honing my technical skills. It all comes down to this: you should believe that whatever you are doing can be hard, but in the end it will be worth all the while.

I guess we are on track now. Now that I have stopped counting on the members who were just pretending to be part of the team, I can be sure of what we can do in a given amount of time. We are now clear to drop every other operation and just build the damn product. Period. In 8 months I'll be graduating, and I want to be ready with a usable product before that.

Meanwhile I am also trying to make the company goals concrete so that we can start moving: breaking them down into achievable goals with deadlines, and metrics to measure how we are moving.

One more habit I am trying to develop is to write down my very specific goals for the day twice: in the morning and in the evening. It kinda helps you keep track. I have also started to work out and exercise more regularly than I used to; it's good to do more of what makes you feel good about yourself, because it reflects in all the other things that you do.

I am still not properly over thinking about whether I am making the right bets or not; I guess you can never tell. It's like I am in the emotional state of PMS.

But I know one thing for sure: whatever you do, your job is to tell the story.

Thanks. See you next time.