Strive to learn: 8 Ways to optimize for learning at work as a software engineer


A large open space with amazing ergonomic chairs where people discuss and execute upon disruptive ideas, right next to the company’s game room where you unwind after a hard day’s work. This is where we, as engineers, get to work on products that our customers love, and we love delivering that delight through continuous delivery (or something else :P).

Yet the most prominent thing that excites, and should excite, an effective engineer is the opportunity to learn at work. Optimizing for learning is a high-leverage activity and should be a top priority for every engineer.

Here are 8 ways to optimize our work for learning, deeply inspired by the book The Effective Engineer.

1. Study code for core abstractions written by the best engineers at your company


I have been lucky enough to be the dumbest engineer at Squad. But that has allowed me to learn very aggressively during working hours just by reading through libraries and modules written by other awesome engineers.

So tomorrow morning, open that black box you’ve been importing in your code for so long, and dig through it.

2. Do more of what you want to improve

In more relatable terms: if you want to improve at writing SQL queries, do more of that. If you want to improve at doing code reviews, do more of that.

Practice, and deliberately work on your weak points instead of cutting corners. You’ll be amazed how helpful your fellow engineers and friends will be in helping you do so.

3. Go through as much technical material as you can

We at Squad have a dedicated Slack channel where engineers share articles, blog posts, and podcasts worth reading.

I’ve made a pact to go through each and every article shared on that channel, irrespective of the domain or the tech it covers. So far this has been a catalyst for learning about things I didn’t even know were there to learn.

4. Master the programming language that you use

Read a good book or two on it. Get into the internals of the language you primarily use at work. We at Squad use Python heavily for the back end, machine learning, data analytics, and everything in between.

Personally, I’ve added two great books to my reading list that I’ll be picking up next:

  1. Fluent Python
  2. Mastering Python Design Patterns

5. Send your code reviews to the hardest critics


At Squad, code reviews are part of the DNA of our engineering process. I’ve been very fortunate to be on-boarded to the Squad codebase by one of the best and hardest code critics at the company. It really helped me develop high code-quality standards, and also the art of reviewing code.

Not only did that teach me how to write better code, but also how to deliver code reviews in a respectful manner, so that the other person doesn’t feel discouraged. That’s something I always keep in mind while doing code reviews myself.

6. Enroll in classes on areas where you want to improve

Sites like edX, Coursera, and Udacity have amazing courses that we can take in our spare time. Be it compilers, databases, machine learning, or infrastructure, these platforms have courses on all of them.

Personally, I try to keep exactly one online course in progress at all times.

7. Participate in design discussions of projects you are interested in


Don’t wait for an invitation. Ask the engineers if they’d mind you being a silent observer, or even a participant, in the design discussion.

8. Make sure you are on a team with at least a few senior engineers whom you can learn from

This will help increase your learning rate at least 80% of the time.

At Squad, I get to work with some of the most awesome engineers I’ve ever had the opportunity to work with. That has helped me learn and polish things like estimation, product thinking, design, and communication.

Conclusion

Our work fills a large part of our lives. Making sure that our work drives our learning and improvement helps big time in staying content and progressing on the path to becoming a better, more effective engineer.

Resources

  1. http://www.effectiveengineer.com/
  2. https://blog.fogcreek.com/the-effective-engineer-interview-with-edmond-lau/

That’s all, folks!

 


Practical Problem Solving Framework: Inspired By The Toyota Way

[Diagram: Toyota’s 7-step practical problem solving process]

We can all agree, at least to a point, that having a system or process for anything reduces the chances of error.

As an engineer, or anyone people look to for proposing solutions to problems, it’s beneficial to have a framework in place to solve problems effectively.

Recently I was reading The Toyota Way, and it suggested a framework for practical problem solving. It immediately struck me that this sort of framework would be invaluable to software engineers too (in fact, to everyone).

When confronted with a problem, first we want to make it crystal clear and get a grasp of the real point of cause. That’s followed by a series of 5 WHYs to investigate the root cause. And finally: countermeasures, evaluation, and standardization.

1. Initial Problem Perception

A large, vague, and complicated problem is presented. The first step is to perceive all the information available at this point in time.

Ex. “Hey! Metric X is showing incorrect value”

This doesn’t show the actual problem, just one internal user’s perception of it.

2. Clarify The Problem

The next step is to clarify the problem and scope it down. Go and see the problem yourself. Analyse it and get a clear understanding.

While seeing the problem first hand, gather as much information as possible.

Ex. It turns out the entire analytics data was inconsistent.

3. Locate Point Of Cause

The next step is to dig a little deeper and find the point of cause.

Where is the problem observed? Where is the likely cause? This will lead us to the vicinity of the root cause, which we find in step 4.

Ex. The analytics system is working correctly; it just sometimes doesn’t get updated every 5 minutes like it’s supposed to.

Here we rule out other possible causes, like a bug in the code or the wrong data being tracked in the first place.

4. Ask 5 WHYs: Investigation of the root cause

Here, starting from the direct cause, we go deep and expose the root cause of the problem by asking WHY five times.

Ex.

1. Why was the data inconsistent? Because analytics didn’t get updated on time.

2. Why didn’t analytics get updated on time? Because the scheduled ETL jobs didn’t run on time.

3. Why didn’t the scheduled jobs run on time? Because CPU usage was at 100%.

4. Why did CPU usage reach 100%? Because the server instance size was not enough to handle the increased number of jobs.

5. Why was the server size not enough to handle the spike in usage? Because our auto-scaling is slow.

By asking a series of five whys, we can generally get to the root cause of the problem and fix it there, instead of just duct-taping it and waiting for it to rise again.

5. Countermeasures

This step is about fixing the root cause of the problem so that it doesn’t come up again.

Ex. Moved to a more sophisticated auto-scaler to manage spikes in usage, and set up alerts to monitor performance.

6. Evaluate

After the countermeasures have been executed, it’s important to evaluate their effect. Was the problem solved?

Ex. “Now analytics are always in sync and even if they miss getting updated, we get an alert to know it beforehand and take action.”

7. Standardize

This resonates with another Toyota principle, jidoka, meaning building in quality.

How can we standardize the countermeasures such that similar problems are not faced again? How can we propagate our learnings across the organization?

Ex. “Document and standardize the process: for all our instances and jobs, proper alerts must be in place so that we know when they are malfunctioning.”

Conclusion

This was my take on what we can learn from a cross-discipline organization like Toyota about having a process and framework in place to solve problems effectively.

After all, problem-solving is supposed to be fun, and having a proper framework in place helps us keep it that way!

That’s all, folks!

 

8 System Design Principles I learned After Doing It Wrong More than 50 Times!


 

At Squad, we strive to build awesome products to solve customer (internal and external) needs. As a product engineer, a paramount part of your job is to design and build products: dig deep into the root cause of the problems, design solutions, and implement them as the end product.

Over the course of my journey so far, here are the 8 system and product design principles that I’ve learned from other awesome people at Squad, from feedback, and from simply not doing it right enough, multiple times.

1. What is the underlying problem that led to the feature request?

At Squad, you don’t just code the requirements into software. As a product engineer, it’s your responsibility to peel away the layers and expose the root problem that led to the feature request.

Get to know the root cause of the problem you are trying to solve. Or even better, as the lean principles say, “genchi genbutsu”: go and see it yourself.

2. How can you make the feature more robust, reliable and usable?

Once the essential feature requirements are finalized, we must press on the question: how can we make the feature more robust, reliable, and usable?

Things to ponder and take into consideration:

  1. The persona of the users who are going to use it.
  2. The scenarios in which the feature will be used. Ex. in the case of fires, show more data than needed for faster resolution.
  3. Building quality into the product itself, or “jidoka” as it’s called in lean.

3. What is the first iteration going to be?

Given the time and resources you have, what is the best possible first iteration of the product going to be? If it’s a large system, or something you are building from scratch, there are always going to be iterations.

The main idea here should be to move fast and get things shipped. Good enough and shipped on time is always better than perfect and in-development forever.

4. How easy will it be to make iterations on the current feature?

The design should incorporate all the non-functional requirements to make future iterations easy.

Scaling the feature? Changing a component? Using a different 3rd-party service? Your implementation should be flexible enough to incorporate and encourage these enhancements.

Design patterns are your best friend here.

5. What are the potential bottlenecks with scale?

Scale-land is where everyone wants to be, but it is scary. It breaks what was not supposed to break and has witnessed more horror stories than a haunted castle.

What are the potential bottlenecks that are not a problem now, but will break at 5X, 10X or 100X scale?

List them down on the feature ticket, or better, document them in the code itself.

6. What’s the data that has to be captured and how will it be consumed?

Every feature in the product will need some data to be captured to track it. This can include, but is not limited to:

  1. Action logs.
  2. Event logs.
  3. Metrics.
  4. Failures.
  5. Anomalies.

What affects this most is how that data will be consumed. Store it in a structure that makes the consumption of data easy and efficient. After all, the only motive to store data is to use it.

7. How good will the developer experience be when interacting with the code base of that feature?

There will be many developers who’ll use or modify the code you are going to write.

What will their experience be like when doing that? Ex. will the test cases you wrote make them feel confident enough to make changes fast?

A few points to consider:

  1. Is the code well documented?
  2. Are the test cases strong enough?
  3. Is the code re-usable where it makes sense?
  4. Are the functions small and the code simple to read?

8. What metrics will determine that the feature has been implemented successfully?

Finally, after all the fun-time you had creating the feature, what will determine that the feature has been implemented successfully?

The data you tracked will be of paramount importance here.

It may be the case that tracking this quantitatively is not possible, but can you track it qualitatively in that case?

The idea here is that you can’t improve what you can’t measure.

Processing 100,000 requests? Fewer errors by the users? 95% of the work done by the new system instead of the old one?

This can and will involve more stakeholders of the team, not just the developer.

Conclusion

Obviously, this is not an exhaustive list of things to take into consideration while designing a system or a product as an engineer. It just covers what I have learned so far by doing things wrong, or not right enough, multiple times.

It’s fun to build stuff! Continuously improve (“Kaizen” in lean)! Keep iterating! Keep shipping!

 

 

That’s all, folks!

 

Introduction to Ingressing With Kubernetes

 

Single responsibility is a magical notion. Whatever it touches, it makes it more manageable and efficient.

With Kubernetes, we have the power to spawn many services, as many of them as we would like. But how are inbound requests routed among these services?

Ingressing is a powerful way to decouple routing rules from core application logic.

According to Kubernetes,

Ingress is a collection of rules that allow inbound connections to reach cluster services.

Overview

In this post, we’ll deploy a couple of services in the kubernetes cluster and then define an ingress to route the requests to one of them according to the rules.

By the end of this post, we’ll have a basic understanding of ingressing and a working demo to showcase its power.

More On Ingress

To allow inbound connections to reach cluster services, ingress configures a layer 7 load balancer and provides the following:

  1. TLS.
  2. Path-based routing.
  3. Name-based virtual hosting.
  4. Custom rules.

With ingress, connections can’t reach our services directly. Instead, they reach the ingress endpoint and then are routed to a service based on rules.

With this in mind, let’s move forward to a working example.

Step 1: Spawn the first service and deployment

We’ll be creating two services and deployments, named cats and dogs.

In this step, we’ll be spawning our first service.
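The embedded manifest didn’t survive the page import, so below is a minimal sketch of what cats-deployment.yaml could look like. The image, labels, and replica count are my assumptions; only the hostPath volume is dictated by the text that follows.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cats-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cats
  template:
    metadata:
      labels:
        app: cats
    spec:
      containers:
      - name: cats
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: cat-volume
          # where nginx serves html from by default
          mountPath: /usr/share/nginx/html
      volumes:
      - name: cat-volume
        hostPath:
          # the directory we create on the minikube VM below
          path: /home/docker/cat_volume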

Above is the .yaml file for our cats-deployment. Run the following command to create the cats-deployment.

kubectl create -f cats-deployment.yaml --validate=false

Now, we’ll create our cats-service.
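Again, the embedded file is missing; here is a minimal sketch of cats-service.yaml, assuming a NodePort service selecting the pods labelled above.

apiVersion: v1
kind: Service
metadata:
  name: cats-service
spec:
  # NodePort exposes the service on a port of the minikube VM
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    # must match the pod labels in cats-deployment.yaml
    app: cats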

Run the following command to create our cats-service.

kubectl create -f cats-service.yaml --validate=false

As you can see in the deployment file, we also specify a volume for the container, backed by /home/docker/cat_volume on the host.

Run the following commands after starting your minikube VM to host a file at that volume’s path.

minikube ssh
mkdir cat_volume
echo "cat service content" > cat_volume/index.html

Tada! We have our first service and deployment up and running.

 

Step 2: Create the second service and deployment

We are going to name this one dogs.

Following the steps given above, create the deployment and service for our faithful friends, the dogs.

Here are the YAML files.
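The embedded files are missing here too, but they would simply mirror the cats manifests with the names swapped; the deployment would likewise use its own hostPath directory (e.g. /home/docker/dog_volume). For example, a sketch of the service:

apiVersion: v1
kind: Service
metadata:
  name: dogs-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: dogs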

 

Step 3: Hit the endpoints of our services to see the content we just hosted on them.

Run the following command to get port numbers for the services.

kubectl get services

This will list all the services running in the kubernetes cluster along with their port numbers.

We should see something like this.

[Screenshot: kubectl get services output]

Get the port numbers and hit the browser to reach the pages of the two services we just hosted.

Use the following command to get the base IP of the minikube VM:

minikube ip
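A service is then reachable at http://<minikube-ip>:<node-port>. For example, if minikube ip prints 192.168.99.100 and kubectl get services showed the cats service on node port 31000 (your port will differ), the page would be at http://192.168.99.100:31000.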

Here is how our two services cats and dogs are looking.

 

 

Step 4: Create the ingress for our services.

Following is the YAML file that we’ll use to create the ingress.
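The embedded pets-ingress.yaml is also missing; here is a minimal sketch, assuming the services are named cats-service and dogs-service, and using the extensions/v1beta1 API that was current at the time (newer clusters use networking.k8s.io/v1).

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pets-ingress
spec:
  rules:
  # name-based virtual hosting: route by the Host header
  - host: cats.myweb.com
    http:
      paths:
      - backend:
          serviceName: cats-service
          servicePort: 80
  - host: dogs.myweb.com
    http:
      paths:
      - backend:
          serviceName: dogs-service
          servicePort: 80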

First, we need to start the ingress controller.

minikube addons enable ingress

With the following command, create the ingress.

kubectl create -f pets-ingress.yaml --validate=false

As we can see in the YAML file, we are doing name-based virtual hosting between cats.myweb.com and dogs.myweb.com, routing them to our cats and dogs services respectively.

For the demo to work, we’ll have to add these hosts to our /etc/hosts file.

Add the following line in your /etc/hosts file.

192.168.99.100   cats.myweb.com dogs.myweb.com

 

Step 5: Hit the paths to see the ingress controller in action!

[Screenshots: the cats and dogs pages served via the ingress]

Congrats! Our ingress is working as expected and routing the names to their services like a routing ninja!

 

Conclusion

In this post, we got to know the basics of ingressing and created a working demo to get a feel of its power.

There is a lot that ingress can do; let’s all keep exploring until we fully learn how to harness its power.

 

 

That’s all, folks!

2018, New Year: Let’s Set The Rhythm For What Lies Ahead

Here are the 9 things I would love to incorporate into my life as I set myself to see the sun of 2018.

1. Stop chasing the long dream, start conquering the micro-goals:
Start cultivating a passionate dedication to the pursuit of short-term goals, being micro-ambitious.
Put my head down and work with pride, whatever the job at hand.
If we look too far ahead of us, we won’t see the shiny thing right in front of us.
Develop the habit of doing small things in a great way.

2. Stop waiting to be ready:
Stop waiting for the moments of the future to be ready or to define oneself.
People, places, our choices: all have already shaped us into who we are today.
These moments have already happened, and they will happen again.
You won’t be ready until you start.

3. Exercise:
Run, play a sport, do yoga, or whatever. Take care of my body. I am certainly going to need it.

4. Be a teacher, share what I know:
Don’t take what you know for granted. Rejoice in what you learn and spread it.

5. Read, read and read more:
I am always going to be on the dumber end of the spectrum. Read as much as you can to cross the chasm and get to the less dumb side.

6. Constantly work on the weaknesses:
Each day, every day, keep hitting the weaknesses by yourself.
Hopefully, they won’t suck that much when the year ends.

7. Remember it’s all luck, have gratitude:
Understand truly that you can’t take all the credit for your successes, nor can you blame others for their failures.
Be more humble and more compassionate.
Empathy is intuitive, but it’s also something that we can work on intellectually.

8. Define myself from what I love:
Not by what I hate. If someone asks us what music we like, we go “I hate EDM”; what food we like, “I hate Chinese”. From this year, I’ll try to define my choices by why I love them, not by why I hate the alternatives.
Be demonstrative and generous in the praise for those you admire.
Be pro-stuff, not just anti-stuff.

9. Don’t rush:
You don’t have to know what you are going to do for the rest of your life. Don’t panic. Let the river inside you flow at its own pace.

 

PS: Note to self

Life is long, tough, and tiring. You’ll sometimes be happy and sometimes sad, and then we die and get submerged into nothingness. There is only one way to make the best of our empty existence: fill it.
Learn as much as you can. Take pride in what you do. Have compassion. Share ideas. Demonstrate what you love.

Happy new year!

 

Basics Of Kubernetes Volume Management: Mounting a simple hostPath directory


Kubernetes is a system for automating deployment, scaling, and management of containerized applications.

As we know, the containers that make up Pods are ephemeral in nature. All data stored inside a container is deleted if the container crashes. The kubelet will restart it with a clean state, which means it will not have any of the old data.

To overcome this problem, Kubernetes uses Volumes. A Volume is essentially a directory backed by a storage medium.

Volume Types:

A volume that is mounted to a pod can be seen as a directory. Each directory is backed by a volume type.

Kubernetes provides many volume types, like emptyDir, hostPath, secret, nfs, etc.

You can read more about Kubernetes volumes in the official documentation.

In this blog we are going to use the volume type hostPath.

hostPath Volume Type

With the hostPath volume type, we can share a directory from the host with a pod. So even if the pod dies, the data persists, as the directory is present on the host machine.

Demo Time:

Fun time! The best way to get our heads around volumes is to quickly try a working example.

Step 1: Make sure the minikube VM is running with the kubernetes cluster

As we are using minikube to run our kubernetes cluster, let’s first ensure that our minikube VM is up and running.

[Screenshot: minikube start output]

Step 2: Create the directory we want to mount on the host

In this case, minikube is the VM acting as the host for our kubernetes cluster.

Let’s SSH into the minikube VM and create the directory and file that we need.
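The screenshot’s commands would look roughly like this (the file’s content is made up):

minikube ssh
mkdir my-vol
echo "Hello from the hostPath volume!" > my-vol/index.html
pwd   # prints /home/docker, so the directory's full path is /home/docker/my-vol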


As we can see, we did the following things in this step:

  1. SSH’ed into the minikube VM.
  2. Created the my-vol directory that we’ll share with the pod.
  3. Created the file index.html in the my-vol directory.
  4. Got the path of the directory.

Step 3: Create the deployment specifying the volume that we want to mount
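The screenshot of the deployment file is missing; below is a minimal sketch of what it could contain, assuming an nginx:alpine image and the my-nginx-webserver labels that the my-nginx-web-service selector (shown in the next post) matches on.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nginx-webserver
  template:
    metadata:
      labels:
        app: my-nginx-webserver
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: my-vol
          # nginx's default html directory inside the container
          mountPath: /usr/share/nginx/html
      volumes:
      - name: my-vol
        hostPath:
          # the directory we created on the minikube VM in step 2
          path: /home/docker/my-vol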


Given above is what our deployment file looks like.

Let’s type that content by hand so that we pay heed to each and every property of the file.

Note that the mount path in the container is the place where nginx looks for the HTML page by default.

So, basically what’s happening is, we are telling the container that its “/usr/share/nginx/html” path is mapped to the “/home/docker/my-vol” path on the host machine.

Once that file has been saved, use the following command to create the deployment.
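The screenshot with the command is missing; it would be of the form (filename assumed):

kubectl create -f my-nginx-deployment.yaml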


Step 4: Create the service YAML file and start the service

Once our deployment is up, we need a service to tie it all together.
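The service file in the screenshot is gone, but the same service appears in the next post; here is a minimal sketch consistent with it:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-web-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    # matches the pod labels in the deployment above
    app: my-nginx-webserver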


Once the service file is created, create the service with the command given below.
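The exact command is in the missing screenshot; it would be of the form (filename assumed):

kubectl create -f my-nginx-web-service.yaml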


Step 5: Head over to the IP of the minikube VM and the port number of the service to view the web page

We can use the command

kubectl get service my-nginx-web-service

to get the port number.

[Screenshot: our custom index.html served by nginx]

And there we go! We can see our own page read from our mounted volume, running inside a container in a kubernetes cluster!

Deploying an nginx application using Kubernetes for Self-Healing and Scaling

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. A more technical term for it is container orchestrator, used to manage large fleets of containers.

Minikube is an all-in-one, single-node installation for trying out kubernetes on local machines. The following post covers deploying an nginx application container using kubernetes on minikube.

If you don’t have them installed, this link has it all to install minikube and kubectl (the command-line tool to access minikube): Download and install minikube and kubectl

Step 1: Getting minikube up and running

Ensure that minikube is running.

[Screenshot: minikube start output]

Step 2: Open the minikube dashboard

Minikube comes with a GUI tool that opens in the web browser. Open the minikube dashboard with the following command:

minikube dashboard

It should open the dashboard in a browser window and it’ll look something like this:

[Screenshot: the minikube dashboard]

Looks cool! No?

Step 3: Deploy a web server using the nginx:alpine image

Alpine Linux is preferred for containers because of its small size. We’ll be using the nginx:alpine docker image to deploy an nginx-powered web server.

Now, go to the Deployments section and click the create button, which will open an interface like the one below.

[Screenshot: the dashboard’s create-app form, filled in]

Fill in the details as shown in the image.

We can either provide the application details here, or we can upload a YAML file with our Deployment details.

As shown, we are asking kubernetes to create a deployment with the nginx:alpine image as the container, and we want 3 pods (or simply, instances) of it.

A pod in kubernetes is a scheduling unit: a logical collection of one or more containers that are always scheduled together.

Go on and click that awesome deploy button!

Step 4: Analyzing the deployment

Once we click the deploy button, Kubernetes will trigger the deployment. The deployment will create a ReplicaSet. A ReplicaSet is a replication controller that ensures the specified number of replicas for a pod are running at any given point in time.

The flow is something like this:

Deployments create ReplicaSets, and ReplicaSets create Pods. Pods are where the real application resides.

[Screenshot: the dashboard showing the deployment, replica set, and pods]

As expected, we have our deployment, replica set and pods in place.

We can also check our deployment via the command line using kubectl.

[Screenshot: kubectl output listing the deployment and pods]

Step 5: Create a Service and expose it to the external world with NodePort

So far, we have our pods up and running. But how do we access them?

This is where a service comes into play. K8S provides a higher-level abstraction called a service, which logically groups pods along with a policy to access them. This grouping is done via labels and selectors.

We then expose the service to the world by defining its service type. The service redirects our requests to one of the pods and load-balances across them.

Create a my-nginx-webserver.yaml file with the following content:

https://gist.github.com/priyankvex/3b34ec02c82934b84c8dfb68272ed4f1

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-web-service
  labels:
    run: my-nginx-web-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: my-nginx-webserver

Enter the following command to create a service named my-nginx-web-service:

kubectl create -f my-nginx-webserver.yaml

We can now verify that our service is running:

[Screenshot: kubectl get services listing my-nginx-web-service]

Step 6: Accessing the application

Our application is running inside the minikube VM. To access the application from our workstation, let’s first get the IP address of the minikube VM:

minikube ip

Now head to that address, at the port number of the service we got in the step above.

[Screenshot: the nginx welcome page]

And our app is running! Amazing, give yourself a pat now!

A taste of the self-healing feature of the kubernetes system:

One of the most powerful features of kubernetes is its self-healing capability (just like Piccolo. DBZ, anyone?). While defining our app, we created a replica set with 3 pods. Let’s go ahead and kill one pod; kubernetes will create another one to maintain the running pod count at 3.
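If you’d rather run the experiment from the CLI, it looks like this (the pod name is a placeholder; use one from your own output):

kubectl get pods
kubectl delete pod <one-of-the-listed-pod-names>
kubectl get pods   # a fresh pod shows up to keep the count at 3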

[Screenshot: a pod being deleted and a new one created instantly]

As we can see in the image, we deleted the bottom-most pod and K8S created a new one instantly.

Such kubernetes! Much HA (High Availability)!

A taste of scaling with Kubernetes:

Now suppose our app is receiving a crazy amount of traffic, and three nginx pods are not enough to handle the load. Kubernetes allows us to scale our deployments with almost zero effort.

Let’s go ahead and spin up a new pod.

[Screenshots: the dashboard’s Scale option, setting the desired number of pods to 4]

Click OK. Now let’s go and check our pods.

[Screenshot: four pods running after scaling]

As we can see in the image, we have now 4 pods running to handle the increased traffic.

Isn’t it amazing? We just horizontally scaled our application with the power of kubernetes.
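The same scaling can also be done from the CLI, assuming the deployment is named my-nginx-webserver as in the service selector above:

kubectl scale deployment my-nginx-webserver --replicas=4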

This was just the tip of the iceberg of what Kubernetes can do. I am also exploring kubernetes and containerized architecture, just like you; hopefully we’ll be back soon with another post with more kubernetes stuff!

That’s all, folks!