gRPC is a high-performance, open-source framework for implementing Remote Procedure Call (RPC) APIs over HTTP/2. It’s designed to make it easier for developers to build distributed applications, especially when the calling code and the code it calls run on different machines.
gRPC was initially developed at Google as a technology for implementing Remote Procedure Calls (RPCs). Today, gRPC is an incubating project of the Cloud Native Computing Foundation (CNCF), which means it is used in production and supported by a healthy pool of contributors.
To understand why Google developed gRPC, let’s briefly look at the timeline of API design.
RPC is one of the oldest ways to design and build an API. RPCs allow you to write code as though it will run on a local computer, even though you might actually call a service running on a different machine (usually on your local network).
In practice, this enables developers to call direct actions (like SendUserMessages or addEntry) without having to account for network details. RPC messages are lightweight and efficient, but they are also tightly coupled with the underlying system, which makes them difficult to integrate and change, and more likely to leak details about that system.
When the REST API architecture was introduced, it solved some of these challenges by providing a uniform way to access data and resources using generic HTTP methods like GET, POST, PUT, and DELETE. Although REST simplifies data access, the API often returns more metadata than is needed. REST APIs also require more information about the network (as in, where to send a request), so they are not as lightweight and efficient as RPCs.
By adopting newer technologies, gRPC updates the older RPC method to make it interoperable and more efficient. Today, this is an appealing choice when developing APIs for microservices architectures.
Some of the advantages of gRPC include lightweight binary messages, efficient multiplexed connections, built-in support for streaming, and native code generation across languages. Overall, gRPC offers a high-performance, flexible framework that is ideal for inter-service communication in highly distributed microservices architectures.
The advantages of gRPC largely stem from its adoption of two technologies: Protocol Buffers and HTTP/2.
gRPC uses Protocol Buffers (or Protobufs) to define services and messages instead of XML or JSON. It’s a language-neutral mechanism for serializing structured messages that the services will send to each other.
Similar in concept to the OpenAPI Specification for REST APIs, the API contract in gRPC is defined in a .proto text file, where a developer specifies how the data should be structured and which services are available. The protoc compiler then generates code from the .proto file in any supported language. At runtime, messages are serialized into a compact binary format.
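To make this concrete, here is a minimal sketch of what such a contract might look like. The package, service, and message names (inventory, InventoryService, GetItemRequest, Item) are hypothetical and used only for illustration.

```proto
// inventory.proto -- a hypothetical contract for a small inventory service.
syntax = "proto3";

package inventory;

// The service definition: one unary RPC that looks up an item by ID.
service InventoryService {
  rpc GetItem (GetItemRequest) returns (Item);
}

// Request and response messages with numbered fields; the field numbers
// identify each field in the binary wire format.
message GetItemRequest {
  string id = 1;
}

message Item {
  string id = 1;
  string name = 2;
  int32 quantity = 3;
}
```

Running this file through protoc with the plugin for your target language produces the message types, client stubs, and server interfaces, so every implementation works from the same contract.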
This provides two key advantages: messages are smaller and faster to process than text-based formats like JSON, and the shared .proto contract can be used to generate client and server code in any supported language.
Traditionally, REST APIs used HTTP/1.1 as the transport layer. While REST APIs can also be delivered over HTTP/2, gRPC’s exclusive use of HTTP/2 introduces some key advantages. One is binary framing, which is compact and efficient to parse. Another is multiplexing: multiple requests can be processed in parallel over a single connection instead of one at a time. Communication is also bidirectional, so a single connection can carry both requests and responses at the same time.
Overall, this improves performance and reduces network utilization, which can be especially valuable in a busy microservices architecture. There are some limitations, however. Web browsers cannot consume gRPC services directly because they do not expose the low-level HTTP/2 control (such as trailers) that gRPC depends on, so you may need a reverse proxy or gateway like NGINX between front-end clients and the application.
Today, REST is the dominant API design style, so it provides a useful reference point for comparison with gRPC. Both REST and gRPC are valid approaches to building APIs for web applications and microservices, and one isn’t necessarily better than the other. That said, it is useful to understand their key differences to pick the best tool for the job.
Some of the key differences between gRPC and REST fall under the following categories: transport protocol, data format, communication patterns, API contract, performance, error handling, and code generation.
While REST APIs can take advantage of HTTP/2, RESTful services traditionally use text-based HTTP/1.1 as the transport layer. gRPC exclusively uses HTTP/2, a binary protocol that is more efficient and enables features like header compression and multiplexing over a single TCP connection.
REST APIs typically use JSON as the data format for sending and receiving data. JSON is text-based, easy to read and write, and widely supported. gRPC APIs use Protobufs, a binary format that yields smaller payloads and faster serialization. However, Protobuf payloads are not human-readable without the corresponding .proto definition.
REST APIs follow a request-response model with limited support for streaming. In contrast, gRPC APIs are delivered over HTTP/2 and support several communication patterns: unary (request-response), server streaming, client streaming, and bidirectional streaming.
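As an illustration, these four patterns map directly onto the way RPC methods are declared in a .proto file. The service and message names below are hypothetical, and the message definitions are omitted for brevity.

```proto
// A hypothetical service showing the four gRPC communication patterns.
// (ReadingRequest, Reading, and UploadSummary would be defined as messages
// elsewhere in the same .proto file.)
service TelemetryService {
  // Unary: one request, one response.
  rpc GetReading (ReadingRequest) returns (Reading);

  // Server streaming: one request, a stream of responses.
  rpc WatchReadings (ReadingRequest) returns (stream Reading);

  // Client streaming: a stream of requests, one summary response.
  rpc UploadReadings (stream Reading) returns (UploadSummary);

  // Bidirectional streaming: both sides send streams independently.
  rpc Exchange (stream Reading) returns (stream Reading);
}
```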
REST is a resource-centric model that uses standard HTTP methods like GET, POST, PUT, and DELETE, and every request must contain all the information needed to process it. The API contract is typically written in the OpenAPI Specification, while implementing the client and server is treated as a separate step. In contrast, gRPC is a service-centric model where messages and services are defined in the .proto file, which can then be used to generate code for both the API client and the server.
REST can be slower due to its text-based data transmission over HTTP/1.1, where each connection requires a TCP (and often TLS) handshake and only one request can be in flight at a time, which adds latency. gRPC multiplexes several streams over a single HTTP/2 connection, so a client can issue many requests concurrently without establishing new TCP connections. It also takes advantage of HTTP/2 features like header compression.
REST uses standard HTTP status codes for error handling. In contrast, gRPC uses its own set of canonical status codes, which offer more granularity and consistency. The default gRPC error model is quite limited, but it is commonly extended with the richer error model developed by Google.
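For example, in Go the canonical status codes come from the google.golang.org/grpc package. The snippet below is a minimal sketch (the item name and message text are placeholders) showing how an error is built on the server side and inspected on the client side.

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// A server handler would return an error like this instead of an HTTP status code.
	err := status.Errorf(codes.NotFound, "item %q not found", "sku-42")

	// The client (or an interceptor) can recover the canonical code and message.
	st, _ := status.FromError(err)
	fmt.Println(st.Code(), "-", st.Message())
}
```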
REST is supported in virtually every language but provides no built-in code generation; developers must rely on third-party tools such as OpenAPI code generators. With its protoc compiler, gRPC provides native code generation for multiple programming languages.
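For instance, generating Go code from the hypothetical inventory.proto contract above might look like this, assuming the protoc-gen-go and protoc-gen-go-grpc plugins are installed:

```sh
# Generate message types (protoc-gen-go) and client/server stubs (protoc-gen-go-grpc).
protoc --go_out=. --go_opt=paths=source_relative \
       --go-grpc_out=. --go-grpc_opt=paths=source_relative \
       inventory.proto
```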
In summary, the choice between gRPC and REST depends on what you need to accomplish. gRPC provides an efficient, high-performance way for services to communicate in a distributed application. That said, it cannot be consumed directly by web browsers and similar clients, so it requires an API gateway or reverse proxy like NGINX to interact with front-end clients. It’s an excellent option for internal APIs that are part of an event-driven microservices architecture.
REST, on the other hand, is widely adopted and supported in virtually any language. It is human- and machine-readable since data is exchanged as JSON or XML. It also has a much lower learning curve and is natively consumable by web browsers, which makes it ideal for publicly exposed APIs.
gRPC is one of the best options for communication in a microservices architecture. This is partly due to performance, but also because of its flexibility in language support: developers can build and generate gRPC clients and servers in their preferred language. Because the API contract is defined once in the .proto file and messages are exchanged in a language-neutral binary format, microservices can communicate independent of the languages used to build them.
One of the most common gRPC-based microservices architectures is to put an API gateway in front of the microservices and handle all internal communication over gRPC. The API gateway accepts incoming client requests over HTTP/1.1 and proxies them to the microservices as gRPC requests over HTTP/2.
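As a rough sketch of the gRPC-routing side of that pattern, NGINX can proxy calls for a given service to an internal backend with the grpc_pass directive. The service path, backend address, and certificate paths below are placeholders, and translating plain HTTP/1.1 JSON requests into gRPC would additionally require a transcoding layer that this sketch omits.

```nginx
server {
    # Terminate TLS and speak HTTP/2 toward clients.
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/certs/gateway.crt;
    ssl_certificate_key /etc/nginx/certs/gateway.key;

    # Route calls for the inventory.InventoryService service (the package and
    # service names from the .proto file) to an internal gRPC backend.
    location /inventory.InventoryService/ {
        grpc_pass grpc://10.0.0.10:50051;
    }
}
```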
As the adoption of gRPC continues to grow, developers and security operations teams need to ensure that effective security solutions are in place. Because gRPC messages are in binary format, devices and tools that expect to inspect text-based (ASCII) traffic may not be able to parse them.
gRPC APIs are also vulnerable to many of the most common API security threats. Standard API security practices like access control, encryption, and runtime protection are equally important in gRPC-based architectures.
gRPC applications and APIs require a holistic approach to security. Best practices include encrypting traffic with TLS or mutual TLS, enforcing authentication and access control, validating messages against the .proto contract, and applying runtime protection.
Ultimately, you should verify that your API gateway, web application firewall (WAF), and other API management and security tools are up to the task of protecting your gRPC applications and APIs in production. They should be able to import the .proto file for each service and use it to apply security protections for the gRPC application and APIs.
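As one example of the encryption practice, the sketch below starts a gRPC server in Go that only accepts TLS connections. The certificate paths and port are placeholders, and registration of the services generated from the .proto file is omitted.

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func main() {
	// Load the server certificate and private key (placeholder paths).
	creds, err := credentials.NewServerTLSFromFile("server.crt", "server.key")
	if err != nil {
		log.Fatalf("failed to load TLS credentials: %v", err)
	}

	// Create a gRPC server that requires TLS for every connection.
	srv := grpc.NewServer(grpc.Creds(creds))

	// Generated service implementations would be registered here, e.g.
	// pb.RegisterInventoryServiceServer(srv, &inventoryServer{}).

	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	log.Fatal(srv.Serve(lis))
}
```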
gRPC is gaining traction as a popular choice for microservices architectures, adopted by developers and large companies like Netflix and Lyft. That said, gRPC isn’t a replacement for REST APIs, nor is it an inherently better way to build APIs. It is simply an alternative to consider if you are primarily building APIs for an internal microservices environment and need efficient, real-time communication.
Looking ahead, gRPC will likely continue to gain traction for cloud-native applications due to its performance benefits and ease of development. Meanwhile, developers who need to publicly expose APIs will continue to use REST. REST will also persist in cloud-native environments due to its backward compatibility and deep integration with existing API infrastructure and operations.
NGINX offers a variety of free resources to meet you at any point of your gRPC journey.