The Illustrated Guide To Webhooks

Imagine that you’re a software developer for a company that monitors nuclear reactors.

On average, a warning is triggered at each individual nuclear reactor once every hour. Since these warnings could lead to something catastrophic, they must be monitored in real-time, as every second counts.

Assuming you have one million clients, all of whom need to be notified, what’s an efficient way to do so, while also minimizing server load?

The Naive (And Terrible) Solution

The most straightforward way to solve this problem is to create an API that allows any client to ping your servers to figure out if any warnings have triggered.

At first glance, this looks like a good idea, since writing a RESTful endpoint like this isn’t particularly hard.

However, there’s a problem here. You know that, on average, a warning only triggers once every hour.

You also know that the client cares deeply about these warnings, and since the client needs them in “real-time”, let’s conservatively assume that the client checks for a warning every second.
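
To get a feel for what this costs, here is a minimal sketch of that naive polling client in Python. The GET /warnings endpoint and its URL are hypothetical, made up purely for illustration, and the sketch assumes the endpoint returns a JSON list of warnings.

import time

import requests

# Hypothetical endpoint for illustration only; not a real API.
WARNINGS_URL = "https://api.example.com/warnings"

def handle_warnings(warnings):
    for warning in warnings:
        print("Reactor warning received:", warning)

def poll_forever():
    while True:
        # Ask the server whether any warnings have been triggered.
        response = requests.get(WARNINGS_URL, timeout=5)
        warnings = response.json()
        if warnings:
            handle_warnings(warnings)
        # The vast majority of these requests come back empty.
        time.sleep(1)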

Average number of warnings per hour | Number of times a client checked per hour | Average number of checks with no result (no warning found)
1 | 3600 | 3599
60 | 3600 | 3540
1200 | 3600 | 2400
3600 | 3600 | 0

In the table above, you can see that there’s clearly a problem here. Regardless of the number of average warnings per hour, the client is still checking 3600 times an hour. In other words, as the average number of warnings goes down, the number of pointless checks goes up.

At 1 warning per hour, across 1 million clients, you would have over 3.5 billion unnecessary checks per hour. Even if we disregard how horribly inefficient this is, the real question is whether your servers can even comfortably handle that load without crashing.

Clearly, we can’t allow the client to poll for the data on their own every second, because this would cause a horrifically large amount of traffic (over 3.5 billion requests per hour). Remember, at an average of 1 warning per hour, across 1 million clients, we should only have to send, on average, 1 million notifications per hour.

Sounds like a tough problem, but that’s where webhooks come in!

Introducing: Webhooks!

Webhooks are like reverse APIs. They’re like interviewers saying “Don’t call us, we’ll call you”.

Instead of having your clients ping your API every second, you simply ping each client whenever a trigger occurs. Since a client is only contacted when a warning actually exists, there is no unnecessary traffic. You send out exactly 1 million requests per hour, one per warning, and it’s still done in real-time.

But how does the client “receive” the request?

Remember how I said a webhook is like a reverse API? The client is the one who writes the RESTful endpoint. All we have to do is record the URL to the endpoint, and each time the warning triggers, we send an HTTP request to the client (not expecting a response, of course).

What the client does with your POST’d request doesn’t matter. The client, in fact, can do anything they want, as long as their endpoint is registered with you.

This process can be generically applied to an infinite number of clients, by linking each client to the endpoint that they provided.
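
On the provider side, this can be as simple as a mapping from client IDs to registered URLs, plus a function that POSTs each warning to the right URL. The sketch below is a minimal illustration under those assumptions; the in-memory dictionary, client ID, and URL are all hypothetical, and a real system would persist registrations and retry failed deliveries.

import requests

# Hypothetical in-memory registry; a real system would use a database.
registered_endpoints = {
    "client-42": "https://client42.example.com/reactor-warnings",
}

def register_endpoint(client_id, url):
    # Step the client performs once, ahead of time.
    registered_endpoints[client_id] = url

def notify_client(client_id, warning):
    url = registered_endpoints[client_id]
    # Fire the webhook: POST the warning to the client's registered endpoint.
    requests.post(url, json=warning, timeout=5)

Whenever a warning triggers for a given client, a single notify_client(client_id, warning) call pushes it out immediately; no polling involved.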

To summarize,

  1. The client creates a RESTful endpoint.
  2. The client gives you the URL to that endpoint.
  3. You save the endpoint somewhere.
  4. Any time a warning triggers for that particular client, you send a POST request, with the warning enclosed, to the endpoint they gave you.
  5. The client receives the POST request automatically, in real-time, and handles the warning in whatever fashion they want (a minimal sketch of such an endpoint follows below).
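
To make step 5 concrete, here is a minimal sketch of the kind of endpoint a client might expose, written with Flask. The route name, port, and the shape of the warning payload are assumptions made for illustration.

from flask import Flask, request

app = Flask(__name__)

# The client registers this URL (e.g. https://client.example.com/reactor-warnings)
# with the monitoring company ahead of time.
@app.route("/reactor-warnings", methods=["POST"])
def receive_warning():
    warning = request.get_json()
    # Handle the warning however the client wants: page an operator, log it, etc.
    print("Warning received:", warning)
    # Returning 200 simply acknowledges receipt.
    return "", 200

if __name__ == "__main__":
    app.run(port=8000)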

Conclusion

A webhook is an incredibly useful and simple tool that allows you to send data to clients in real-time based on some event trigger. Since the data is sent immediately after the event trigger, webhooks are one of many effective solutions for real-time event notification.

Additionally, since webhooks work in a “don’t call us, we’ll call you” fashion, you will never have to send a request unless an event trigger happens, which results in much lower server traffic.

In the example given above, at 1 warning per hour, across 1 million clients, using webhooks reduces the number of API calls from over 3.5 billion per hour to exactly 1 million per hour.

So the next time you have a situation where you need to notify clients based on some sort of event trigger, just remember this simple motto — “Don’t call us, we’ll call you”.

Learn OpenAPI in 15 Minutes

An OpenAPI specification (OAS) is essentially a JSON (or YAML) file that describes your RESTful API. This can be incredibly useful for documenting and testing your APIs. Once you create your OAS, you can use Swagger UI to turn it into a living, interactive piece of documentation that other programmers can use.

Typically, there are libraries that can analyze all of your routes and automatically generate an OAS for you, or at least a majority of it. Sometimes it’s not feasible to generate it automatically, or you might want to understand why your generated OAS doesn’t produce the interface you expect.
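
For example, FastAPI (a Python web framework) builds an OpenAPI 3 document from your route definitions automatically and serves it at /openapi.json, with a Swagger UI page at /docs. The tiny sketch below is only meant to show that idea; the route and model are placeholders, not part of the spec we examine in this tutorial.

from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Henry's Product Store", version="1.0.0")

class Product(BaseModel):
    id: int
    name: str

# FastAPI inspects this route's path, query parameter, and response model,
# and folds them into the OpenAPI document it serves at /openapi.json.
@app.get("/products", response_model=List[Product])
def list_products(count: int = 10):
    return [Product(id=1, name="Sample product")][:count]

Run it with uvicorn and open /docs to see the generated Swagger UI.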

So in this tutorial, we’ll learn OpenAPI in 15 minutes, starting right now.


First, here is the Swagger UI generated by the OAS we will be examining.

This is what an example “products” GET route with query parameters will look like in Swagger UI when expanded:

And lastly, here is what a parameterized route will look like:

The routes can be tested inside of Swagger UI, and you can see that they are documented and simple to use. A Swagger UI page can easily be automatically generated if you have the OpenAPI spec (OAS) completed, so the OAS is really the most important part. Below are all of the major pieces of an OAS.


{
  # Set your OpenAPI version, which has to be at least 3.0.0
  "openapi": "3.0.0",
  # Set meta-info for OAS
  "info": {
    "version": "1.0.0",
    "title": "Henry's Product Store",
    "license": {
      "name": "MIT"
    }
  },
  # Set the base URL for your API server
  "servers": [
    {
      # Paths will be appended to this URL
      "url": "http://henrysproductstore/v1"
    }
  ],
  # Add your API paths, which extend from your base path
  "paths": {
    # This path is http://henrysproductstore/v1/products
    "/products": {
      # Specify one of get, post, or put
      "get": {
        # Add summary for documentation of this path 
        "summary": "Get all products",
        # operationId is used for code generators to attach a method name to a route
        # So operation IDs are optional, and can be used to generate client code
        "operationId": "listProducts",
        # Tags are for categorizing routes together
        "tags": [
          "products"
        ],
        # This is how you specify query parameters for your route
        "parameters": [
          {
            "name": "count",
            "in": "query",
            "description": "Number of products you want to return",
            "required": false,
            # Schemas are like data types
            # You can define custom schemas, which we will see later
            "schema": {
              "type": "integer",
              "format": "int32"
            }
          }
        ],
        # Document all possible responses to your routes
        "responses": {
          "200": {
            "description": "An array of products",
            "content": {
              "application/json": {
                "schema": {
                  # This "Products" schema is a custom type 
                  # We will look at the schema definitions near the bottom
                  "$ref": "#/components/schemas/Products"
                }
              }
            }
          }
        }
      },
      # This is a POST route on /products
      # If a route has two or more of a POST/PUT/GET, specify it as one route
      # with multiple HTTP methods, rather than as multiple discrete routes
      "post": {
        "summary": "Create a product",
        "operationId": "createProduct",
        "tags": [
          "products"
        ],
        "responses": {
          "201": {
            "description": "Product created successfully"
          }
        }
      }
    },
    # This is how you create a parameterized route
    "/products/{productId}": {
      "get": {
        "summary": "Info for a specific product",
        "operationId": "getProductById",
        "tags": [
          "products"
        ],
         # Parameterized route section is added here
        "parameters": [
          {
            "name": "productId",
            # For query parameters, this is set to "query" 
            # But for parameterized routes, this is set to "path"
            "in": "path",
            "required": true,
            "description": "The id of the product to retrieve",
            "schema": {
              "type": "string"
            }
          }
        ],
        "responses": {
          "200": {
            "description": "Successfully added product",
            "content": {
              "application/json": {
                "schema": {
                  # Custom schema called "Product"
                  # We will next examine schema definitions
                  "$ref": "#/components/schemas/Product"
                }
              }
            }
          }
        }
      }
    }
  },
  # Custom schema definitions are added here
  # Note that ints, floats, strings, and booleans are built-in
  # so you don't need to add custom schemas for those
  "components": {
    "schemas": {
      # define your schema name
      # custom schemas are referenced by "#/components/schemas/SCHEMA_NAME_HERE"
      "Product": {
        "type": "object",
        # define which of the properties below are required
        "required": [
          "id",
          "name"
        ],
        # define all of your custom schema's properties
        "properties": {
          "id": {
            "type": "integer",
            "format": "int64"
          },
          "name": {
            "type": "string"
          }
        }
      },
      # Sometimes you will want to return an array of a custom schema
      # In this case, this will return an array of Product items
      "Products": {
        "type": "array",
        "items": {
          "$ref": "#/components/schemas/Product"
        }
      }
    }
  }
}

If you want to try out the OAS above, here is a version of it with no comments that can be passed into Swagger UI or any Swagger editor, like https://editor.swagger.io/.


{
  "openapi": "3.0.0",
  "info": {
    "version": "1.0.0",
    "title": "Henry's Product Store",
    "license": {
      "name": "MIT"
    }
  },
  "servers": [
    {
      "url": "http://henrysproductstore/v1"
    }
  ],
  "paths": {
    "/products": {
      "get": {
        "summary": "Get all products",
        "operationId": "listProducts",
        "tags": [
          "products"
        ],
        "parameters": [
          {
            "name": "count",
            "in": "query",
            "description": "Number of products you want to return",
            "required": false,
            "schema": {
              "type": "integer",
              "format": "int32"
            }
          }
        ],
        "responses": {
          "200": {
            "description": "An array of products",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/Products"
                }
              }
            }
          }
        }
      },
      "post": {
        "summary": "Create a product",
        "operationId": "createProduct",
        "tags": [
          "products"
        ],
        "responses": {
          "201": {
            "description": "Product created successfully"
          }
        }
      }
    },
    "/products/{productId}": {
      "get": {
        "summary": "Info for a specific product",
        "operationId": "getProductById",
        "tags": [
          "products"
        ],
        "parameters": [
          {
            "name": "productId",
            "in": "path",
            "required": true,
            "description": "The id of the product to retrieve",
            "schema": {
              "type": "string"
            }
          }
        ],
        "responses": {
          "200": {
            "description": "Successfully added product",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/Product"
                }
              }
            }
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "Product": {
        "type": "object",
        "required": [
          "id",
          "name"
        ],
        "properties": {
          "id": {
            "type": "integer",
            "format": "int64"
          },
          "name": {
            "type": "string"
          }
        }
      },
      "Products": {
        "type": "array",
        "items": {
          "$ref": "#/components/schemas/Product"
        }
      }
    }
  }
}
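
If you want to poke at the finished spec programmatically, the short sketch below loads it and follows a "$ref" by hand, just to show that "#/components/schemas/Product" is nothing more than a path into the same JSON document. It assumes you saved the spec above as openapi.json; the filename is arbitrary.

import json

# Assumes the spec above was saved locally as openapi.json.
with open("openapi.json") as f:
    spec = json.load(f)

def resolve_ref(spec, ref):
    # "#/components/schemas/Products" -> ["components", "schemas", "Products"]
    parts = ref.lstrip("#/").split("/")
    node = spec
    for part in parts:
        node = node[part]
    return node

response_schema = (
    spec["paths"]["/products"]["get"]["responses"]["200"]
    ["content"]["application/json"]["schema"]
)
# Prints the "Products" schema: an array whose items follow the "Product" schema.
print(resolve_ref(spec, response_schema["$ref"]))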

The Simplest Guide To Microservices

Almost every tutorial and article you’ll see on microservices begins with a fancy graph or drawing that looks something like this:

Except, unless you already understand microservices, this drawing tells you nothing. It looks like the UI calls a “microservice”, which calls a database. Not particularly enlightening.

So let’s get down to the real question — why? Why would anyone do it like this? What’s the rationale? And since this is The Simplest Guide To Microservices, we’ll start by looking at this issue from the very basics.

What Happens If I Don’t Use Microservices?

Imagine you have an app, and this app has two main components to it: an online store, and a blog. This is without microservices, so they’ll be combined inside of one app. This is called a monolithic app. The store can’t exist without the app.

Suppose the store and the blog have nothing in common — that is, the store will never request any information from the blog, and the blog will never request any information from the store. They are completely separate, decoupled pieces.

Now, for simplicity’s sake, let’s assume the store and blog can each only support one customer at a time. What would happen if the blog now has three customers, while the store still only has one? We wouldn’t be able to support the three customers on our blog, so we’d have to scale our monolithic app. We can do this by creating two more instances of our app.

Now we’ve got three instances of our app! But take a moment to see if you notice the issue with scaling it this way.

The issue is the store! The only reason we needed to scale our app was because we couldn’t support enough customers for our blog, but what about our store? Clearly, we didn’t need two extra stores too.

In other words, we didn’t actually want to scale our entire app. We just wanted to scale the blog, but since the blog and store are inside of one app, the only way to scale the blog is to scale the entire app.

This is the first issue with monolithic apps, but there are other issues too.

Since you now know the basic idea behind a monolithic app, let’s add some coupling into the mix to make this more realistic. Suppose, instead, the store actually makes calls to the blog in order to query for the blog’s merchandise. In other words, the store needs the blog to be working for it to display whatever merchandise we have up on our blog.

The key thing to note here is what happens when the blog component crashes or dies. Without the blog component, the store doesn’t work, because it can’t fetch the list of merchandise without the blog.

Notice that this happens even when we scale the app, meaning entire app instances become useless.

This is another issue with monolithic apps, which is that they are not especially fault tolerant. If a major component crashes, all components that rely on it will also crash (unless you restart the component, or have some other way to restore functionality). Even in the case of restarting components, we really do not want to restart the store component if the blog component fails, because the blog is what caused the crash, not the store.

Wouldn’t it be nice if we could just point the store component to a different blog component if the one that it’s currently using fails?

As you’re probably noticing, you can’t have two blog instances in one app as per our design constraints, so this isn’t possible with our monolithic app approach (note that you could still technically do this in a monolithic app, but it’d be very cumbersome and would still have scaling issues). Microservices to the rescue!

Enter Microservices

A microservice architecture is essentially the opposite of a monolithic architecture. Instead of one app, containing all of the components, you simply separate out all of the components into their own apps.

Monolithic apps are basically just one giant app with many components. A microservice is just a tiny app that usually only contains one component.

We’ve added the UI and the DB (database) into this diagram as well, to slowly increase the complexity of the architecture. Notice that because each microservice is an actual app, it needs to be able to exist on its own. This means that we can’t share one giant database connection pool like we could in a monolithic app. Each microservice should be able to establish its own connections to the database.

However, microservices still depend on each other. In this case, our store microservice needs to contact the blog microservice to get a list of merchandise. But since these are technically two separate apps, how do they communicate? The answer is HTTP requests!

If the store wants to fetch data from the blog, the only way to do it is through a RESTful API. Since the store and blog are separate in a microservice architecture, you cannot directly call the blog anymore as you could in a monolithic architecture.
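
In practice, that call is just an ordinary HTTP request from one small app to another. Here is a minimal sketch of what the store side might look like in Python; the blog service’s address and its /merchandise route are hypothetical, chosen only to mirror the example.

import requests

# Hypothetical address of the blog microservice
# (or of a load balancer sitting in front of it).
BLOG_BASE_URL = "http://blog.internal:5000"

def fetch_merchandise():
    # The store no longer calls blog code directly; it goes over REST.
    response = requests.get(f"{BLOG_BASE_URL}/merchandise", timeout=2)
    response.raise_for_status()
    return response.json()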

Another crucial thing to note is that the store doesn’t really care who is on the other side of that GET request, so long as it gets a response. So this means we can actually add in a middleman on that GET request. This “middleman”, which is basically just a load balancer, will forward that GET request to a live blog, and then pass back the response.

So notice that it no longer matters if any individual blog instance dies. No store instance will die just because a blog died, because the two are completely decoupled by a RESTful API. If one blog dies, the load balancer will just give you a different, live instance!
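
As a bare-bones illustration of that idea, the sketch below tries a list of blog instances in order and returns the first healthy response. The instance addresses are made up, and a real load balancer (nginx, HAProxy, a cloud load balancer) does this far more robustly; treat this purely as a sketch of the failover concept.

import requests

# Hypothetical pool of blog instances sitting behind our "middleman".
BLOG_INSTANCES = [
    "http://blog-1.internal:5000",
    "http://blog-2.internal:5000",
    "http://blog-3.internal:5000",
]

def fetch_merchandise_with_failover():
    for base_url in BLOG_INSTANCES:
        try:
            response = requests.get(f"{base_url}/merchandise", timeout=2)
            response.raise_for_status()
            return response.json()
        except requests.RequestException:
            # This instance is dead or unhealthy; try the next one.
            continue
    raise RuntimeError("No live blog instance available")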

And notice, too, that you don’t need to have an equal number of store and blog components like you do in a monolithic architecture. If you need 5000 blog instances, and 300 store instances, that’s completely okay. The two are separate apps, so you can scale them independently of each other!

Conclusion

We’ve taken a look at both monolithic and microservice architectures. In monolithic architectures, the only way to scale your app is to take the entire thing, with all of its components, and duplicate it. This is inefficient because you often only want to scale a specific component, and not the entire app.

Additionally, when you use a monolithic architecture, a single failure or crash can propagate throughout the entire app, causing massive failures. Since it is harder to implement redundancy (multiple copies of the same component) in a monolithic app, this is not a particularly easy problem to solve.

Microservices can have single points of failure as well, but because microservices are smaller, quicker to start up, and easier to scale, you can often create enough redundancy, or restore dead instances in time, to prevent catastrophic failures.

To make a microservice architecture work, it’s crucial that each microservice represents a single component. For example, the authentication for an app should be a microservice, the online store UI should be a microservice, and the financial transaction mechanism should be a separate microservice. The three of these together can make up an online store, but all three of them must have the ability to exist on their own.

Remember that the microservice architecture is not a silver bullet. It has its own disadvantages as well. For example, if your app has no need to scale (like if it only has a handful of users), then a microservice architecture is way overkill.

With that being said, I hope you’ve gained some insight into how microservices work, and how they compare to a monolithic approach.

Happy coding!