Tracking Metrics to Surface and Solve Problems: Metric Tracking Practices I’ve Learned So Far

It's a pleasant evening. You're sipping coffee and reviewing your code one final time, just to gather enough confidence to hit the deploy button.

But a fact of life as a software engineer is that things can go wrong. Small changes may result in unexpected outcomes, including outages, errors, or negative impacts on customers.

And when problems occur, we can either do random checks and validations that may or may not solve the problem, or we can follow a disciplined problem-solving approach that relies on data rather than intuition.

Metrics and Telemetry

To enable a disciplined problem-solving method, we need our software to track the right metrics at the right places. We need to design our systems so that they are continually creating telemetry.

What is Telemetry?

The DevOps handbook defines telemetry as,

An automated communication process by which metrics are collected at remote points and are sent over to receiving equipment.

When designing systems, it is a high-leverage activity to treat telemetry as a first-class citizen, so that metrics can be tracked at every level needed, from business-level metrics down to the deployment pipeline.

Levels Of Metrics Tracking

As engineers, the software we write impacts the organization at multiple levels, from infrastructure to the product to the business. Thus, to resolve problems quickly, we need to track metrics at all these levels.

The following levels of metrics have served me well as a checklist for adding the right metrics to the software I write.

1. Business Level Metrics:

These metrics directly affect the business and are thus really important to keep an eye on.

Examples include sales transactions, the number of items clients sent, total items processed successfully, hourly processing rates, etc.

2. Application Level Metrics:

These metrics track the functioning of the application.

Examples include latency of the APIs, response time of queries, number of errors etc.

3. Infrastructure Level Metrics:

These metrics track the infrastructure that runs our application.

Examples include CPU usage, available memory, IOPS spikes etc.

4. Product Level Metrics:

These metrics track the product progress and results. As product engineers, it’s a high leverage task to track product-related metrics too.

Examples include A/B test results, feature-toggle results, product progress, extensibility, and configurability.

By having telemetry coverage in all these areas, we will be able to see the health of everything our software relies on, as well as everything that relies on our software.
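To make this concrete, here is a minimal, hypothetical sketch of what emitting business- and application-level metrics from application code can look like. The MetricsClient interface and LoggingMetricsClient below are illustrative stand-ins, not any particular library; a real system would ship these values to a collector such as StatsD, Prometheus, or CloudWatch.

interface MetricsClient {
    void increment(String name);           // counters, e.g. items processed
    void timing(String name, long millis); // durations, e.g. request latency
}

// Toy implementation that just logs; swap in a real client in production.
class LoggingMetricsClient implements MetricsClient {
    public void increment(String name) {
        System.out.println("metric " + name + " +1");
    }
    public void timing(String name, long millis) {
        System.out.println("metric " + name + " = " + millis + "ms");
    }
}

public class OrderProcessor {
    private final MetricsClient metrics = new LoggingMetricsClient();

    public void processOrder(String orderId) {
        long start = System.currentTimeMillis();
        try {
            // ... real business logic would live here ...
            metrics.increment("orders.processed");       // business-level metric
        } catch (RuntimeException e) {
            metrics.increment("orders.errors");          // application-level metric
            throw e;
        } finally {
            metrics.timing("orders.latency_ms",
                    System.currentTimeMillis() - start); // application-level metric
        }
    }

    public static void main(String[] args) {
        new OrderProcessor().processOrder("order-42");
    }
}

Infrastructure-level metrics (CPU, memory, IOPS) usually come from agents running on the host rather than from application code.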

Conclusion

In the limited time that I've spent in the software industry, I've come to realize the importance of metrics. Even the parts that don't involve any software must be tracked and measured. The key idea here is that you cannot improve what you don't measure.

With the right telemetry built into the software we write, we'll be able not only to solve problems as they arise but also to surface latent ones before they catch fire.

That’s all, folks!

 


DevOps and The Principle Of Flow

lean-software-development-1-728

In the technology value stream, work typically flows from Development to Operations, the functional areas that sit between our business and our customers.

As stated in the lean principles developed by Toyota, we should optimize for a fast and smooth single-piece flow for our releases.

We increase flow by:

  1. Making work visible,
  2. Reducing batch sizes and intervals of work,
  3. Building quality in, preventing defects from being passed to downstream work centers.

Why is a fast flow needed?

By speeding up the flow through the technology value stream, we reduce the lead time required to fulfill internal and external customer requests, further increasing the quality of the work while making us more agile.

Our goal is to decrease the amount of time required to deploy the changes into production and increase the reliability of those services.

Make our work visible

agile-pm-kanban-board

A significant difference between manufacturing and technology value streams is that our work is invisible.

It's easy for work to keep bouncing between teams while we have no visual control over it.

To prevent this and make our work more visible, we can use something like a Kanban board. (I prefer Trello for this.)

Ideally, our Kanban board will span the entire value stream, defining work as completed only when it reaches the right side of the board.

Work is not done when development completes, but only when our application is running successfully in production.

Limit Work In Progress (WIP)

In technology, our work is far more dynamic than in manufacturing. Teams have to satisfy the demands of multiple stakeholders. As a result, daily work gets dominated by urgent requests coming through every communication channel possible.

We can limit multi-tasking by using a Kanban board, such as by codifying and enforcing WIP limits for each column on the board.

For example, we may set a WIP limit of three cards for testing. When there are already three cards in the testing column, no new cards can be added.

Using Kanban ensures that work stays visible and that WIP doesn't pile up.

Reduce Batch Sizes

one-piece-flow

Another key component of creating smooth and fast flow is performing work in small batch sizes. Prior to the lean manufacturing revolution, it was common practice to manufacture work in large batches.

However, large batch sizes result in skyrocketing levels of WIP. According to lean principles, the ideal is single-piece flow, where each batch has a size of just one.

Let’s take an example:

Suppose we have ten brochures to mail and mailing each one of them requires 4 steps:

  1. fold the paper
  2. insert the paper into the envelope
  3. seal the envelope
  4. stamp the envelope

Now in the traditional batch processing flow, we will perform each step sequentially for all ten envelopes.

In the lean one-piece flow, only one envelope can be at any given step. In other words, we fold the paper, insert it into the envelope, seal the envelope and stamp it before starting the next one.

How is one-piece flow dramatically better?

In the above example, suppose each step takes 10 seconds. In batch processing, we get our first complete envelope after 310 seconds, but with the one-piece flow we get it just after 40 seconds.
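If you want to sanity-check those numbers, here is a tiny calculation of the time until the first finished envelope under both flows (10 envelopes, 4 steps, 10 seconds per step):

public class FlowTiming {
    public static void main(String[] args) {
        int items = 10, steps = 4, secondsPerStep = 10;

        // Batch flow: the first envelope waits while the first three steps are
        // done for all ten items, then gets its own final step.
        int batchFirst = (steps - 1) * items * secondsPerStep + secondsPerStep;

        // One-piece flow: the first envelope just goes through its own four steps.
        int onePieceFirst = steps * secondsPerStep;

        System.out.println("Batch flow, first envelope after: " + batchFirst + "s");        // 310s
        System.out.println("One-piece flow, first envelope after: " + onePieceFirst + "s"); // 40s
    }
}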

Worse, what if we find that the way we folded the paper doesn't allow the envelope to be sealed? In the batch flow we only discover this after folding all ten sheets, which leaves us in much bigger trouble.

Eliminating hardships and wastes in the value stream

According to the Toyota Production System pioneer Shigeo Shingo, waste is:

The use of any material or resource beyond what the customer required or is willing to pay for

In the software development value stream, waste is anything that causes a delay for the customer, such as activities that can be bypassed without affecting the result.

The following are some common categories of waste that we encounter when implementing lean in the software value stream.

  1. Partially done work
  2. Extra processes
  3. Extra features
  4. Task switching
  5. Waiting on QA or testing or acceptance testing
  6. Defects and bugs
  7. Non-standard or manual work

Explaining each of the above points deserves a post of its own. I will do that soon.

Conclusion

Improving flow through the technology value stream is essential to achieving DevOps outcomes. We do this by making work visible, limiting WIP, reducing batch sizes, and eliminating waste from our processes.

All of this will allow us to become more agile, dramatically reducing lead times while increasing the quality of our releases.

That’s all, folks!

 

Deploying an nginx application using Kubernetes for Self-Healing and Scaling

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. A more technical term for it is container orchestrator: a system used to manage large fleets of containers.

Minikube is an all-in-one, single-node Kubernetes installation for trying out Kubernetes on a local machine. The following post covers deploying an nginx application container on Kubernetes using minikube.

If you don't have them yet, this link has everything you need to install minikube and kubectl (the command-line tool used to access minikube) : Download and install minikube and kubectl

Step 1 : Getting minikube up and running

Ensure that minikube is running.
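If you are working from a terminal, that typically looks something like this (exact output varies by setup):

$ minikube start     # spins up the single-node cluster in a local VM
$ minikube status    # confirms the cluster components are running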

starting_minikube

Step 2 : Open the minikube dashboard

Minikube comes with a GUI tool that opens in the web browser. Open the minikube dashboard with following command :

opening_dashboard

It should open the dashboard in a browser window and it’ll look something like this:

first_look_dashboard

Looks cool! No?

Step 3 : Deploy a webserver using the nginx:alpine image

Alpine Linux is preferred for containers because of its small size. We'll be using the nginx:alpine Docker image to deploy an nginx-powered webserver.

Now, go to the Deployments section and click the Create button, which will open an interface like the one below.

create_app_filled

Fill in the details as shown in the image.

We can either provide the application details here, or we can upload a YAML file with our Deployment details.

As shown, we are asking Kubernetes to create a deployment with the nginx:alpine image as the container, and we want 3 pods (or simply, instances) of it.

A pod in kubernetes is a scheduling unit, a logical collection of one or more containers that are always scheduled together.

Go on and click that awesome deploy button!
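If you would rather skip the dashboard form, a roughly equivalent deployment can be created from the command line. The name my-nginx-webserver is an assumption on my part, chosen so the pod label matches the service selector used later in this post:

$ kubectl create deployment my-nginx-webserver --image=nginx:alpine   # pods are labeled app=my-nginx-webserver
$ kubectl scale deployment my-nginx-webserver --replicas=3            # ask for 3 pods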

Step 4 : Analyzing the deployment

Once we click the deploy button, Kubernetes will trigger the deployment. The deployment will create a ReplicaSet. A ReplicaSet is a replication controller that ensures the specified number of replicas of a pod are running at any given point in time.

Flow is something like this:

Deployments create ReplicaSets, ReplicaSets create Pods, and Pods are where the real application resides.

deployment_overview

As expected, we have our deployment, replica set and pods in place.

We can also check our deployment from the command line using kubectl.
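For example, something along these lines:

$ kubectl get deployments   # the deployment we just created
$ kubectl get replicasets   # the ReplicaSet it spawned
$ kubectl get pods          # the 3 pods backing it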

deployment_overview_cli

Step 5 : Create a Service and expose it to the external world with NodePort

So far, we have our pods up and running. But how do we access them?

This is where a service comes into play. Kubernetes provides a higher-level abstraction called a service, which logically groups pods along with a policy to access them. This grouping is done via labels and selectors.

We then expose the service to the world by defining its service type; the service redirects our requests to one of the pods and load-balances across them.

Create a my-nginx-webserver.yaml file with the following content:

https://gist.github.com/priyankvex/3b34ec02c82934b84c8dfb68272ed4f1

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-web-service
  labels:
    run: my-nginx-web-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: my-nginx-webserver

Enter the following command to create a service named my-nginx-web-service :
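Assuming the YAML above was saved as my-nginx-webserver.yaml:

$ kubectl create -f my-nginx-webserver.yaml   # creates the my-nginx-web-service Service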

creating_service_cli

We can now verify that our service is running :
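From the command line, that check is:

$ kubectl get services my-nginx-web-service   # the PORT(S) column shows the assigned NodePort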

service_running

Step 6 : Accessing the application

Our application is running inside the minikube VM. To access the application from our workstation, let’s first get the IP address of the minikube VM:
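The command for that, plus an optional shortcut that prints the full URL in one go, looks like this:

$ minikube ip                                   # IP address of the minikube VM
$ minikube service my-nginx-web-service --url   # optional: prints http://<minikube-ip>:<node-port> directly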

minikube_ip

Now head to that address, using the port number of the service we got in the step above.

application_running

And our app is running! Amazing, give yourself a pat now!

A taste of the self-healing feature of Kubernetes :

One of the most powerful features of Kubernetes is its self-healing capability (just like Piccolo. DBZ, anyone?). While defining our app, we created a ReplicaSet with 3 pods. Let's go ahead and kill one pod, and Kubernetes will create another one to keep the running pod count at 3.

self-healing.png

As we can see in the image, we deleted the bottom-most pod and Kubernetes created a new one instantly.
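From the command line, the same experiment looks roughly like this (pod names are auto-generated, so yours will differ):

$ kubectl get pods                        # list the 3 running pods
$ kubectl delete pod <one-of-the-pod-names>
$ kubectl get pods                        # a replacement pod shows up almost immediately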

Such kubernetes! Much HA (High Availability)!

A taste of scaling with Kubernetes:

Now suppose our app is receiving a crazy amount of traffic and three nginx pods are not enough to handle the load. Kubernetes allows us to scale our deployments with almost zero effort.

Let’s go ahead and spin up a new pod.

scaling_menu_option.png

scaling_to_4.png

Click OK. Now let’s go and check our pods.

scaled_deployment.png

As we can see in the image, we now have 4 pods running to handle the increased traffic.
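The command-line equivalent of that dashboard action (again assuming the deployment is named my-nginx-webserver):

$ kubectl scale deployment my-nginx-webserver --replicas=4
$ kubectl get pods   # now shows 4 pods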

Isn’t it amazing? We just horizontally scaled our application with the power of kubernetes.

This was just the tip of the iceberg of what Kubernetes can do. I am also exploring Kubernetes and containerized architectures, just like you; hopefully we'll be back soon with another post with more Kubernetes stuff!

That’s all, folks!

5 notes on MVP architecture pattern for Android

Image credits: Macoscope

MVP (Model, View, and Presenter) is an architectural pattern inspired by the popular MVC pattern.

MVP addresses two main points :

  1. Make views as dumb as possible. The dumber the better.
  2. Make each layer loosely coupled and easily testable in isolation.

I am using MVP in one of my production projects and have used it in some demo apps. Here are my 5 notes on using MVP for Android.

1. Package Structure :

An Android project contains lots of code and files, even for an application of medium complexity. Even when not following MVP, I have found that arranging project files so that files accessed together live in the same package is more efficient and intuitive than any other approach.

What I prefer doing is creating a separate package for each vertical of the app and putting all related files (activities, fragments, views, presenters, adapters, etc.) in that package.

For example, packages like add task, view task, and list task for a To-Do app.

2. Libraries that are useful for MVP :

In MVP you want your model and presenter to be independent of the life cycle of the view. For this, you can use a dependency injection library like Dagger 2.

Other than that, using RxJava and reactive programming principles for creating presenters is also becoming increasingly popular.

Libraries you can use for this purpose are : RxAndroid and EventBus.

3. Managing Remote and local data sources in the Model :

Android apps have to fetch data from a server. At the same time, fetched data must be cached to make the app usable offline and to improve speed.

What I prefer doing is to create three model classes :

1. Remote Data Source

2. Local Data Source

3. Data Repository

All presenters talk to the Data Repository class. The Data Repository holds references to the Local and Remote Data Sources and fetches data from either one depending on the situation.

As the name suggests Local Data Source deals with cached data and disk storage whereas Remote Data Source deals with API calls and responses.
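Here is a minimal, illustrative sketch of this three-class split. All names are hypothetical, and a real app would expose models asynchronously (callbacks or RxJava) instead of blocking String calls:

import java.util.HashMap;
import java.util.Map;

interface TasksDataSource {
    String getTask(String taskId);
}

// Deals with API calls and responses.
class RemoteTasksDataSource implements TasksDataSource {
    public String getTask(String taskId) {
        // A real implementation would make the network call here.
        return "task-from-server:" + taskId;
    }
}

// Deals with cached data and disk storage.
class LocalTasksDataSource implements TasksDataSource {
    private final Map<String, String> cache = new HashMap<>();

    public String getTask(String taskId) {
        return cache.get(taskId); // null means not cached yet
    }

    public void saveTask(String taskId, String task) {
        cache.put(taskId, task);  // a real app would persist to SQLite/Room
    }
}

// The only class presenters ever talk to.
public class TasksRepository implements TasksDataSource {
    private final LocalTasksDataSource local = new LocalTasksDataSource();
    private final RemoteTasksDataSource remote = new RemoteTasksDataSource();

    public String getTask(String taskId) {
        String cached = local.getTask(taskId);
        if (cached != null) {
            return cached;                     // serve from the cache when we can
        }
        String fresh = remote.getTask(taskId); // otherwise hit the network...
        local.saveTask(taskId, fresh);         // ...and cache the result
        return fresh;
    }
}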

4. User Experience is the top priority :

One thing we all have to keep in mind is that the real test of an application is whether it provides the user a nice experience.

At the end of the day, the user only notices the user experience of the application, not the architecture behind it. So if you have to make some design sacrifices to improve the UX, do it.

The real test of the machine is the satisfaction it provides to the mind. There is no other bigger test.

5. Testing Advantages :

The main motive behind the MVP pattern was to make testing the layers easy.

The basic idea is to keep the presenter and model Android-free, so that they can be tested on the JVM without Android instrumentation.

Views can then be tested by Android Instrumentation tests.

Mockito and Espresso can come in handy for testing purposes.

Conclusion :

MVP, in my opinion, is so far the best way to architect your Android application project. It simplifies many issues, like testing and keeping views light. Combine it with RxJava and dependency injection, and you've got a nice recipe for Android projects.

I am learning more about RxJava and testing frameworks and will share my views on them soon.

Thanks.

Business in Boxers : 1 Month Into Running a StartUp

Why the Series? 

(Inspired by the book : Business In Blue Jeans)

Hi! This is Priyank. This series is me penning down my start-up ride. I don't know if it's fast or slow; all I really know is I'm gonna enjoy the ride.

Why the name?

Boxers are what you'll find me in almost all the time, and business is what's always on my mind, and thus the name “Business In Boxers”.

The Beginning

Last month, on 15th Feb, a friend and I sat down to seriously decide which business to start. The interesting thing was that I ended up doing this with 3 friends, with 3 awesome business ideas that were practical and scalable. Next, the 4 of us formed a team. A team with a vision to launch multiple businesses in a very short period of time.

Our first startup, which we call Pebbles Media, deals with creating engaging applications for local business owners to help them discover new customers in their locality. A cool idea that we all thought would get us many clients.

We approached 5 clients in different domains, from food to education to mechanical work. Two of them showed interest, and we started working on their products.

15 days later, one of our clients refused to pay the advance, so we dropped him. The other client wanted a custom product built at a very unreasonable price, so we dropped him too.

The Pivot

Witnessing all this, after an hour-long discussion we all decided it was time for an early pivot. We never wanted to enter the service-based B2B sector.

The pivot will require us to make products that will be used by many small businesses and their customers, making us a product-based company.

A pivot at this early stage raised doubts in our minds, but I guess it was a red flag we all managed to see, and we decided to pivot rather than run after quick money.

In Love With Business Books

Lately I have been reading a lot of business books, getting ideas and learning how various aspects of a startup work. My favorite is The Lean Startup. A must-read for everyone.

The Speed Breakers

No surprises here: the first month has been filled with speed breakers, and I wasn't expecting a smooth ride either. This month we have seen everything from clients saying no to all of them saying yes at the same time, and a client giving us an advance only for us to turn him down because we wanted to pivot.

Wrap Up!

This month we dealt with real clients and realized that we don't want to go into the service-based sector.

Next month we are planning to get our MVP out in the market and test our leap-of-faith assumptions. A lot of action coming up. Let's see what's in store next.

See you next time.

 

PS : It is fun to write a blog when you are pretty certain that no one will read it 😀

Practice Shapes : Android App To Learn Shapes

The only way to learn something is to learn it by yourself.

Recently I have been playing with Canvas in Android, and it has been fun! 😀

As they say, knowledge is knowing a tomato is a fruit; wisdom is not putting it in the vegetable basket. And the best way to persist newly gained “wisdom” is to put it to use.

Practice Shapes is a fun app for both kids and grown-up kids (yes, you!) to test their drawing skills. I know drawing on a mobile screen with your fingers is not your favorite thing, but you will still enjoy it! 😀

Download the app from play store :

 

This app is open source : You can fork it at https://github.com/priyankvex/Practice-Shapes/

 

TECH STACK : 

  • Android SDK
  • Bitmap Images of Shapes

The power lies not in reinventing the wheel, but in using it to move forward.

OPEN SOURCE LIBRARIES USED : 

  1. Universal Image Loader
  2. Sugar ORM
  3. Fancy Buttons

To Create Images : 

To create the images on which the user draws, I used GIMP. These are just simple images with a transparent background.

Right now, there are a total of 18 images in the app across 3 difficulty levels.

If you can add more images, feel free to do so.

Screenshots of app : 

drawing_ss

hard_shapes_ssscore

Download the app from play store :


How Can Mobile Devices Change Educational Technology and Education?

Educational technology is growing rapidly and is catching everyone's attention. At the same time, mobile devices have been part of our lives for a long time now. Students and teachers are increasingly embracing tablets and e-book readers in their personal lives.

Accessibility of educational technology is one of the biggest barriers we are facing. Mobile devices are the best way to make this problem less serious.

Advantages of mobile devices as a platform for ed-tech :

  1. Easy accessibility and availability.
  2. Less expensive than other tools like desktops and laptops.
  3. Smaller in size and thus easy to carry.
  4. Rich application environment to distribute content.
  5. Widely accepted by all kinds of people.

“It’s about better engaging students in class, seeing how instructional support areas can use the devices in terms of working with students, and finding out if faculty and staff can use them to be more efficient and effective,” says Lisa Young, an instructional design and educational technology faculty member at SCC.

With the growing popularity of mobile devices, we have enormous possibilities to enhance the current education system. We can deliver content through e-books, applications, etc. The current generation is so at home with mobile devices that using mobile as a platform for ed-tech will only make learning more intuitive and fun for them.

 

“Like any new technology, it takes a while to see where it fits,” says Ron Bonig,

Of course, this will take some time. We are still in a phase where educational technology is not something that crosses everyone's mind as a necessity. Some people still think smartphones are a hindrance to education. But I firmly believe that strong development programs in this field will cause them to go with the flow.

Despite its potential, mobile computing is yet to reach the mass of educators and students as a platform. But who knows, maybe we are at the start of a revolution that will change the education scene forever!