Diagram of a C++ gRPC service serving requests from Ruby and Android Java clients
Justin Capalbo Staff Engineer
Posted on Feb 16, 2022

gRPC Services with a Type Generation Example for NodeJS and Typescript

#typescript#microservices#grpc

In this post, I'll cover the "why" of gRPC, the structure of protobuf files, and some of the Typescript interop offered by two NodeJS packages for gRPC: @grpc/grpc-js and @grpc/proto-loader.

This post assumes some familiarity with microservice architecture and RESTful API calls. With all APIs, including the well-known REST and SOAP flavors, there are two players:

  1. The client. A device, browser, or back-end service sends data over the internet using one of many standard protocols; SOAP and REST, for example, both communicate over HTTP. The client might, for instance, request user information from a back-end service.

  2. The server. Another application's API receives the request. A request usually consists of headers and body data in JSON, XML, or some other known format. The application produces a response based on the request it receives. The server might have exclusive access to a database of user info, for example. In this case, the API would be responsible for querying the database and returning user information to the requestor.

A deep discussion on the "how" and "why" of RESTful microservices (statelessness, etc.) is best left for another post. Instead, we'll dive into gRPC, an alternative to REST and SOAP that's been gaining some traction recently. gRPC is another protocol that two back-end services can use to communicate, and it confers some unique advantages.

What is gRPC?

gRPC is an efficient means of service-to-service communication and an alternative to REST and SOAP. It sends messages over HTTP/2 using a binary serialization format called protocol buffers, often referred to by the shorthand "protobufs". Google released gRPC 1.0 in 2016.

RPC is a ubiquitous term that stands for "Remote Procedure Call", a fancy way of saying that we want to run some code on another machine. The "g" doesn't actually stand for "Google", even though they created and continue to maintain gRPC. They use the letter G to name releases, such as version 1.12, Glorious.

Like any HTTP request made with a library like Axios/Fetch in Node or HttpClient in .NET, gRPC is language-agnostic. As such, clients needn't concern themselves with the implementation details of the API on the server. Rather, they need only be aware that the gRPC service exists to serve the request, as well as the name and shape of the request and response.

gRPC's Niche

gRPC comes with a couple of advantages, or rather, use cases in which it might be preferable to REST.

  1. Performance over flexibility. Many RESTful APIs use JSON for data transfer, while SOAP uses XML. These formats are designed to be human-readable, but use more bandwidth as a consequence. The performance gains from gRPC's binary serialization are one of the leading benefits cited as a reason to use it; benchmarks commonly show JSON over REST running roughly 7 to 10 times slower than gRPC for equivalent payloads. Protocol buffers and the HTTP/2 protocol are the means by which gRPC achieves these speeds. Thus, use cases involving high request counts may see more benefit than ones with larger, more infrequent requests.

  2. Protobuf files make contracts between services a first-class citizen. With REST, we need to use tools like OpenAPI/Swagger to produce contracts for server calls that we can provide to a calling service. While these tools are great, they're a code-first approach to a problem that's really about specifying an API's capability within a domain, and the types of messages that two services might exchange. In gRPC, the contract (in the form of a .proto file) is the first step in developing a new service. It's then some physical service's job to implement the contract, and another service's job to act as the client. As long as the caller and server reference the same contract, they can never disagree about the messages they should be exchanging. In this way, gRPC protobufs are closer to the WSDL specification for a SOAP service.

Now that we know a little more about what gRPC is and when we might consider it for a project, let's dive into an example using Typescript.

Hello, .proto!

"Hello, World" for the tiny gRPC service we'll build starts with our service definition. Whatever language our caller or server are using isn't really important just yet. Google maintains reference documentation for protocol buffer version 3, the newest production-ready version of the syntax. In our contract, we want a single function that accepts a name and location, and returns a greeting to the caller.

// greeter_service.proto

syntax = "proto3";

package greeter_service;

service GreeterService {
  rpc Greet (GreetRequest) returns (GreetResponse) {}
}

message GreetRequest {
  string name = 1;
  string location = 2;
}

message GreetResponse {
  string greeting = 1;
}

Pretty straightforward, but let's break this down.

syntax = "proto3";

package greeter_service;

You'll see something like this at the top of most protobufs.

  • The syntax keyword indicates which version of the .proto syntax we're using - in this case, version 3.

  • The package keyword is an optional specifier used to prevent name clashes between multiple message types. It's useful to think of the package as something like a namespace. Thus, it's important for package names to be unique. At EF Go Ahead Tours, our microservices all have unique names, which makes this requirement pretty trivial.

service GreeterService {
  rpc Greet (GreetRequest) returns (GreetResponse) {}
}

The service keyword specifies the various rpc methods/functions that some service must provide, as well as the types of the request and response messages. A single .proto file can contain multiple service definitions, and a single service can declare as many rpc methods as required.

message GreetRequest {
  string name = 1;
  string location = 2;
}

message GreetResponse {
  string greeting = 1;
}

Lastly, the message keyword specifies the shape of the messages that will be passed back and forth. It's possible to nest messages, just like nesting objects in JSON, and repeated fields play the role of arrays. This allows for some pretty robust messages with structures that anyone who's used JSON should find familiar.
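
For example, a hypothetical Traveler message that nests an Address message and repeats a string field would come through, once we generate Typescript types later in this post, as interfaces roughly like the following (the message and field names here are made up purely for illustration):

// Hypothetical shapes for illustration only (not part of greeter_service.proto).
// In the .proto file, Address would be its own message, Traveler would declare
// an Address field, and previousLocations would be declared with "repeated".
export interface Address {
  street?: string;
  city?: string;
}

export interface Traveler {
  name?: string;
  homeAddress?: Address;        // nested message -> nested object
  previousLocations?: string[]; // repeated field -> array
}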

The only things that should stand out as a little unusual here are what look like assignment operators (=) with a number after them. These numbers are field numbers: they identify each field in the binary data that gets transmitted over the wire. Having mismatched field numbers between client and server might cause errors, or might not. Imagine two string fields whose numbers are swapped: the server would still see two strings, but we'd end up with data from the name field on the client landing in the location field on the server! Quite tricky to debug.

It's crucial that there are no discrepancies in field numbering between clients and servers.

Building the Greeter Service

Now that we've got a protobuf definition, let's build a Typescript service that can handle the request. I ran a few commands to set up my dependencies in a brand new Node project.

1. Project setup

For this project, we start with an empty npm package that I've called greeter-service. The project consists solely of the following:

  • package.json

  • a .gitignore that's set to ignore the proto directory and node_modules

  • greeter_service.proto, our protobuf from the previous section

  • ./src/main.ts, which will serve as the entry point for our application

2. Dependencies and scripts

npm i @grpc/grpc-js @grpc/proto-loader ts-node

npm i --save-dev rimraf typescript ts-node-dev

Of these, the two gRPC packages and rimraf will be part of our build workflow. We'll also need a couple of scripts:

1"scripts": {
2 "dev": "tsnd --transpile-only --ignore-watch node_modules ./src/main.ts",
3 "generate": "rimraf ./proto && proto-loader-gen-types --longs=String --enums=String --defaults --oneofs --grpcLib=@grpc/grpc-js --outDir=proto greeter_service.proto"
4}

package.json

dev will run our development server, while generate will generate the types necessary for us to write code for our handler and server setup.

The generate script looks a little daunting, but that's largely due to the number of options that proto-loader-gen-types accepts.

  • rimraf will remove any existing generated code from the ./proto directory in the project, paving the way for fresh code generation if we modify our .proto file. We'll also want to create a .gitignore that ignores the contents of this directory. If building for production, you can simply run the type generation as a build step prior to compiling the server.

  • We specify @grpc/grpc-js as the gRPC library via the grpcLib option; the generated code will import its types from this package.

  • outDir matches the directory we just cleaned up using rimraf. This is where the generated types will go.

  • greeter_service.proto specifies the proto file we want to generate code for. Wildcards (*) can be used here in order to specify multiple protobufs for a single run of the code generation.

Running generate now should produce a proto folder at the project root containing several helper types. Since we added proto to .gitignore, we won't source control this folder. Furthermore, any time our protobuf definition changes, we can simply run the script again.
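
To give a rough idea of what lands in that folder, here's a simplified sketch of the generated shapes for our protobuf. The real output is spread across one file per message and service, and also includes "__Output" variants of each message plus a client interface; the names below mirror the imports we'll use shortly.

// A simplified sketch of the generated types (collapsed into one listing here
// for readability; the real generator writes one file per message/service).
import type { handleUnaryCall } from "@grpc/grpc-js";

// ./proto/greeter_service/GreetRequest.ts
export interface GreetRequest {
  name?: string;
  location?: string;
}

// ./proto/greeter_service/GreetResponse.ts
export interface GreetResponse {
  greeting?: string;
}

// ./proto/greeter_service/GreeterService.ts (one property per rpc in the service)
export interface GreeterServiceHandlers {
  Greet: handleUnaryCall<GreetRequest, GreetResponse>;
}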

3. Building greeterHandler

A handler refers to a function on the server that will be invoked whenever a client calls a given RPC. The handler's job is to accept the request message specified in the protobuf and provide the required response message. For our simple case, we'll need a Greet function that accepts an object with name and location properties and returns an object consisting of a single greeting property.

With Typescript, we can use the generated type GreeterServiceHandlers to specify what the shape of our handler (a Javascript object) should look like:

// ./src/greeter.handler.ts

import { GreeterServiceHandlers } from "../proto/greeter_service/GreeterService";

export const greeterHandler: GreeterServiceHandlers = {
  Greet: (call, callback) => {
    const { name, location } = call.request;
    callback(null, {
      greeting: `Hello, ${name}, how is the weather in ${location}?`,
    });
  },
};

When the handler is invoked, the request data is available on the call object, as seen above. The callback function is used to "return" the response to the client, in lieu of the return statement you'd typically see in an Express REST API. The first parameter in callback is reserved for an error, if the server happens to catch an exception and wants to report it back to the caller. If we had more than one RPC in our protobuf, we'd just implement them as siblings of Greet.
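
To illustrate that error parameter, here's a hedged sketch of the same handler with a bit of input validation added. An Error carrying a gRPC status code (from the status enum exported by @grpc/grpc-js) is passed to callback when the name is missing:

// ./src/greeter.handler.ts (sketch with basic validation added)

import { status } from "@grpc/grpc-js";
import { GreeterServiceHandlers } from "../proto/greeter_service/GreeterService";

export const greeterHandler: GreeterServiceHandlers = {
  Greet: (call, callback) => {
    const { name, location } = call.request;
    if (!name) {
      // Report INVALID_ARGUMENT back to the caller instead of a response message.
      callback(Object.assign(new Error("name is required"), { code: status.INVALID_ARGUMENT }));
      return;
    }
    callback(null, {
      greeting: `Hello, ${name}, how is the weather in ${location}?`,
    });
  },
};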

4. Starting the server

Finally, we're ready to start the server by writing the implementation for main.ts.

// ./src/main.ts (Server project)

import { Server, ServerCredentials, loadPackageDefinition } from "@grpc/grpc-js";
import { loadSync } from "@grpc/proto-loader";
import { ProtoGrpcType } from "../proto/greeter_service";
import { greeterHandler } from "./greeter.handler";

const proto = loadPackageDefinition(loadSync("./greeter_service.proto")) as unknown as ProtoGrpcType;

const grpcServer = new Server();
grpcServer.addService(proto.greeter_service.GreeterService.service, greeterHandler);
grpcServer.bindAsync("0.0.0.0:50051", ServerCredentials.createInsecure(), (err, port) => {
  if (err) {
    console.error(err);
  } else {
    console.log(`gRPC Server bound on port: ${port}`);
    grpcServer.start();
  }
});

There's a lot to unpack here, but the actual code is only around 6-8 lines long.

// Loading the service definition from the protobuf
const proto = loadPackageDefinition(loadSync("./greeter_service.proto")) as unknown as ProtoGrpcType;

@grpc/proto-loader's loadSync function parses the specified .proto file, and @grpc/grpc-js provides the useful loadPackageDefinition function, which turns that parsed definition into a utility object containing our service. We'll use this in a bit.
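
One detail worth noting: loadSync also accepts an options object, and mirroring the flags from our generate script (longs and enums as strings, defaults, oneofs) keeps the runtime conversion consistent with the generated types. A minimal sketch, assuming the same flags we used earlier:

// Same load as in main.ts above, but with options mirroring the flags we
// passed to proto-loader-gen-types in the "generate" script.
import { loadPackageDefinition } from "@grpc/grpc-js";
import { loadSync } from "@grpc/proto-loader";
import { ProtoGrpcType } from "../proto/greeter_service";

const packageDefinition = loadSync("./greeter_service.proto", {
  longs: String,
  enums: String,
  defaults: true,
  oneofs: true,
});
const proto = loadPackageDefinition(packageDefinition) as unknown as ProtoGrpcType;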

// Construct the server and add the Greeter Service, mapping greeterHandler
const grpcServer = new Server();
grpcServer.addService(proto.greeter_service.GreeterService.service, greeterHandler);

Here, we make use of the proto helper object by adding the GreeterService to our gRPC server. Importantly, this step is where we set up the mapping between the gRPC service and the handler that we created earlier to handle requests to this service.

grpcServer.bindAsync("0.0.0.0:50051", ServerCredentials.createInsecure(), (err, port) => {
  if (err) {
    console.error(err);
  } else {
    console.log(`gRPC Server bound on port: ${port}`);
    grpcServer.start();
  }
});

Lastly, we bind the server to 0.0.0.0:50051, which listens on all network interfaces (including localhost) on port 50051. When using SSL on a production server, createSsl() should be used in lieu of createInsecure(), but this will suffice for dev work. In the callback, we log a helpful message and finally start the server.
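
For reference, here's a hedged sketch of what that production credential setup could look like. The certificate file paths are hypothetical; createSsl takes a root certificate plus one or more key/cert pairs:

import { readFileSync } from "fs";
import { ServerCredentials } from "@grpc/grpc-js";

// Hypothetical certificate paths, for illustration only.
const sslCredentials = ServerCredentials.createSsl(
  readFileSync("./certs/ca.crt"), // root certificate used to verify client certs (may be null)
  [
    {
      private_key: readFileSync("./certs/server.key"),
      cert_chain: readFileSync("./certs/server.crt"),
    },
  ],
  false // set to true to require and verify client certificates (mutual TLS)
);

// Then pass sslCredentials to bindAsync in place of ServerCredentials.createInsecure().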

Server file structure including generated proto folder, src with main and handler, proto file and npm project files at root
Server file structure for the project

Building a Greeter Client

At this point, if we execute npm run dev on our server, we'll be able to make Greet requests to the handler using any gRPC client like BloomRPC. To finish the example, though, let's create a new project and make our own client using Typescript.

1. Project Setup and Dependencies

For the initial setup, I created an npm package called greeter-client and performed the following initial setup steps:

  • Copy the same .gitignore and package.json scripts we created for the server.

  • Install the same dependencies and dev dependencies that we installed on the server.

  • Create an empty file ./src/main.ts as a place to put our client logic.

  • Lastly, let's also copy greeter_service.proto from the server project into the client project, taking care that any changes made to the .proto file are made on both the client and the server! This is where a strategy like a monorepo, or a shared package to manage .proto files can be extremely beneficial - we employ the latter solution at EF Go Ahead Tours to manage our myriad protobufs.

As a final setup step, don't forget to npm run generate to generate types, just like on the server!

2. Write the client code

Your client might end up becoming an Express service, a GraphQL service, a serverless function deployed in Azure/AWS, or a cron job. As such, I'll show the bare minimum code necessary to get a response from the Greeter Service; the rest is left as an exercise for the reader. We'll write our client implementation in ./src/main.ts.

// ./src/main.ts (Client project)

import { credentials, loadPackageDefinition } from "@grpc/grpc-js";
import { loadSync } from "@grpc/proto-loader";
import { ProtoGrpcType } from "../proto/greeter_service";
import { GreetRequest } from "../proto/greeter_service/GreetRequest";

const proto = loadPackageDefinition(loadSync("./greeter_service.proto")) as unknown as ProtoGrpcType;
const greeterClient = new proto.greeter_service.GreeterService("0.0.0.0:50051", credentials.createInsecure());

const greetRequest: GreetRequest = {
  name: "Justin",
  location: "MA",
};

greeterClient.Greet(greetRequest, (err, response) => {
  if (err) {
    console.error(err);
  } else {
    console.log(response.greeting);
  }
});

We can break this into a couple of distinct parts.

// Create the client
const proto = loadPackageDefinition(loadSync("./greeter_service.proto")) as unknown as ProtoGrpcType;
const greeterClient = new proto.greeter_service.GreeterService("0.0.0.0:50051", credentials.createInsecure());

Much like server creation, we load the package definition for the Greeter Service from the protobuf. This time though, we use this definition to construct a client object instead. After specifying our server's address and our credentials, we're now ready to make requests.

// Create the request object
const greetRequest: GreetRequest = {
  name: "Justin",
  location: "MA",
};

// Invoke the RPC "Greet"
greeterClient.Greet(greetRequest, (err, response) => {
  if (err) {
    console.error(err);
  } else {
    console.log(response.greeting);
  }
});

Using the generated GreetRequest type, it's easy to make sure that our request is structured properly. From there, it's as simple as calling Greet on the client, and handling the response in the callback. As long as the server is running, the output of this code is:

'Hello, Justin, how is the weather in MA?'

It is very cold, thank you.
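
One last client-side note: the generated Greet method also accepts an optional CallOptions object between the request and the callback. A common use is attaching a deadline so a slow or unreachable server fails the call rather than hanging indefinitely. A sketch, using an arbitrary five-second deadline:

// Same call as above, but with a five-second deadline. If the server doesn't
// answer in time, the callback receives a DEADLINE_EXCEEDED error.
greeterClient.Greet(
  greetRequest,
  { deadline: new Date(Date.now() + 5000) },
  (err, response) => {
    if (err) {
      console.error(err);
    } else {
      console.log(response?.greeting);
    }
  }
);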

2a. Addendum regarding Promises

If you're used to working with Promises and async functions, you can promisify this code with a wrapper function, like so:

// ./src/main.ts

import { credentials, loadPackageDefinition } from "@grpc/grpc-js";
import { loadSync } from "@grpc/proto-loader";
import { ProtoGrpcType } from "../proto/greeter_service";
import { GreetRequest } from "../proto/greeter_service/GreetRequest";
import { GreetResponse } from "../proto/greeter_service/GreetResponse";

const proto = loadPackageDefinition(loadSync("./greeter_service.proto")) as unknown as ProtoGrpcType;
const greeterClient = new proto.greeter_service.GreeterService("0.0.0.0:50051", credentials.createInsecure());

const start = async () => {
  const greetRequest: GreetRequest = {
    name: "Justin",
    location: "MA",
  };
  const response = await greet(greetRequest);
  console.log(response);
};

const greet = (request: GreetRequest): Promise<GreetResponse> => {
  return new Promise((resolve, reject) => {
    greeterClient.Greet(request, (err, response) => {
      if (err) {
        reject(err);
      } else {
        resolve(response);
      }
    });
  });
};

start();

The amended version is a little simpler to work with when you're already deep inside a stack of async functions, possibly making other API calls with await.

Folder structure for client project including generated proto directory, src directory with main.ts, greeter proto and npm project files at root
Folder structure for client project

gRPC Gotchas

As with any technology, keep a few things in mind when considering gRPC as an addition to your architecture.

  1. Since contracts are so strict, we're limited in how we can modify them in production. Change management is an important part of any schema-driven technology. Much like in GraphQL or event-driven architecture, we should prefer deprecation or versioning of fields, functions, and messages. If a client has an out-of-date version of a .proto service definition, breaking changes are a real risk when deploying. The strictness of the data contract is a double-edged sword: since serialization and deserialization of protobufs depend on field numbers, renumbering or reusing fields can silently corrupt data. This is why field numbering is a first-class concern in .proto files, vs. the laissez-faire data arrangement within a nesting level in JSON or XML.

  2. Keeping .proto files in sync across multiple servers and clients is necessary work. This is a natural consequence of point 1. As I mentioned earlier, we at EF Go Ahead Tours use a shared package to manage .proto file versioning across services and clients. As long as both the server and the client are using a compatible version of the package, there's no risk of a schema mismatch. Additionally, by following an additive strategy for new fields and functions, and only deprecating (never deleting) unused functionality, it's not possible for a client on an old version of a .proto file to cause a runtime error based on protocol issues alone.

  3. Browsers don't make good gRPC clients. gRPC is fairly limited to server-to-server communication in the backend. Browsers don't give Javascript the low-level control over HTTP/2 that much of gRPC's functionality requires. Workarounds such as gRPC-Web exist, but it's easier to think of gRPC as a backend-to-backend communication strategy.

  4. Nullability and optional data can be tricky to handle. Strangely, protocol buffers do NOT support null values. There are a number of ways to achieve optionality in protobufs that are a little out of scope for this article, such as declaring a field optional in your .proto file and using generated helpers like hasName() to check for a missing name value, rather than checking for null or undefined directly. This can be a bit tricky to get used to, but it's well covered in documentation, articles, and Stack Overflow answers.

  5. Tight coupling and overuse can still lead to accidental complexity or a distributed monolith. Just like REST, utilizing gRPC between services can be a quick and convenient way to get access to some data your service is normally not privy to. Introducing too many of these calls into your architecture can lead to a spider web of interdependencies. In an effort to make things more performant, you might end up accidentally designing a monolith that other services rely too heavily on. This introduces a single point of failure to your system and can make scaling or retiring old functionality very difficult. In the worst case, circular dependencies between services can arise. If your calling services don't need real-time consistency when fetching data, strategies like replication or event-driven architecture can reduce or eliminate the need for direct calls altogether. For the GraphQL enthusiasts out there, tools like Apollo Federation can provide a similar benefit.

That last point kind of snuck in there, since the same caveats apply to RESTful API design. But it's important to keep in mind that the benefits of gRPC won't outweigh risky design decisions made at the outset. "If you've got a hammer, everything looks like a nail" is true more often than we'd sometimes like to admit.

Spider web with beads of water

If you map out your service-to-service calls and they resemble this picture, you probably have some tech debt to pay down.

Impact on our Organization

At Go Ahead Tours Engineering, gRPC has been our go-to protocol whenever we need service-to-service communication.

  • The speed of gRPC makes it an attractive alternative to REST, at the small cost of being a little more deliberate about how we manage our service contracts. This ensures that when we do need inter-service communication, it's fast and reliable.

  • The strictness of protobuf contracts means that we have to be more thoughtful in how we choose to expose functionality. For user-facing clients, we're already extremely comfortable with another schema/contract-driven approach to service design in Federated GraphQL. Developer error around service contracts and leaky boundaries has been kept to a minimum as a result.

  • With all of our protobuf definitions managed by a single shared package, the capabilities of our various gRPC services are easy to discover. This makes onboarding new developers faster, as we have a single source of truth for what our service mesh is capable of.

Conclusion

gRPC can be a powerful REST or SOAP replacement if your use case calls for it. The strictness prescribed by protobufs can be a bit of a paradigm shift, but can also encourage good habits and a cleaner architecture. Like any technology, it's important to make sure that you aren't using gRPC as a solution in search of a problem. Take a step back from your use case and observe carefully. The speed and type safety offered by gRPC may be just what the doctor ordered!

View the service and client source code on GitHub. Not production-ready!
