By Jeff Dwyer • June 2, 2018

Why prefab.cloud switched to gRPC

Prefab.cloud switched from a RESTful JSON & protobuf API to gRPC, and we couldn’t be happier with the result. Let’s walk through what that meant for us and how we made the decision.

RESTful JSON & Protobufs

Prefab.cloud launched as ratelim.it, a solution for super-scalable shared rate limits as a service. As you can imagine, rate limiting implies rigorous latency and scalability requirements, so those have always been top of mind for us. In order to be efficient on the wire (and because, frankly, Dropwizard makes it really easy) we launched with both a protobuf and a JSON version of our rate limit APIs.
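To give a concrete sense of what that dual-format setup can look like, here is a rough sketch of a Dropwizard (JAX-RS) resource. The resource, path, and LimitResponse message are illustrative rather than the real ratelim.it API, and it assumes protobuf and JSON message-body writers (for example, HubSpot’s dropwizard-protobuf and jackson-datatype-protobuf) are registered with Jersey.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// Illustrative resource: content negotiation picks JSON or protobuf based on
// the client's Accept header, with no extra handler code per format.
@Path("/limitcheck/{key}")
public class LimitCheckResource {

  @GET
  @Produces({"application/json", "application/x-protobuf"})
  public LimitResponse check(@PathParam("key") String key) {
    // LimitResponse stands in for a generated protobuf message; the registered
    // message-body writers serialize it to whichever format was negotiated.
    return LimitResponse.newBuilder()
        .setKey(key)
        .setPassed(true)
        .build();
  }
}
```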

Soon after releasing the rate limit APIs we exposed our internal Feature Flag APIs and it made sense to do those as both protobuf and JSON as well.

Why gRPC? Language Support

Once you have protobuf APIs and you start thinking about supporting more than one client language (we support Java and Ruby today, but would like to support everything), it becomes really compelling to consider gRPC. Why? Well, the promise of unbelievably fast HTTP/2 RPC clients, written in basically every language by engineers at Google, is pretty hard to pass up. The alternative was trying to figure out which was the coolest, most current HTTP library in a host of languages, and that didn’t sound like fun.

Why gRPC? Bi-Directional Streaming

Beyond simply getting a robust client in many languages, however, the HTTP/2 streaming support was very interesting. One of the core reasons people choose prefab.cloud over a homegrown solution is to get lower latency than a simple polling approach can deliver. gRPC means we can leave a TCP connection open, avoiding connection overhead on each request, and gRPC streaming means we get non-hacky server push for no more code than a traditional synchronous call.

What does gRPC look like?

Basically, gRPC is just an extension of the protobuf format: you define protos as you normally would, but now you can also add services and methods that describe your API. It’s delightfully self-documenting. Here’s an example of the prefab distributed config service.

 
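A simplified sketch of that kind of service definition (the message fields and the request type are abbreviated here, not the exact production schema):

```protobuf
syntax = "proto3";

package prefab;

// A single configuration key/value change.
message ConfigDelta {
  string key   = 1;
  string value = 2;
}

// Identifies which account's config stream a client wants.
message ConfigServicePointer {
  string account_id = 1;
}

service ConfigService {
  // Unary call: write a config value and get the stored delta back.
  rpc Upsert (ConfigDelta) returns (ConfigDelta);

  // Server streaming: the "stream" keyword on the response turns this call
  // into server push -- every new value is written onto the open stream.
  rpc GetConfig (ConfigServicePointer) returns (stream ConfigDelta);
}
```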

Note the two different types of service calls here. Upserting a new config value is a simple unary call. But just by adding stream to GetConfig, the generated code gains a robust server-push capability, i.e. whenever the server receives a new value it can pop it onto the stream and the client will immediately get a new ConfigDelta.

 
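On the client side, the generated Java stub turns that stream into a simple callback. Here’s a rough sketch using the names from the proto sketch above; the endpoint, port, key, and account id are placeholders, not real connection details.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;

public class ConfigStreamExample {
  public static void main(String[] args) throws InterruptedException {
    // One long-lived HTTP/2 connection; no per-request connection overhead.
    // Host and port are placeholders.
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("api.example.com", 443)
        .useTransportSecurity()
        .build();

    // Unary call: upsert a value and block for the response.
    ConfigServiceGrpc.ConfigServiceBlockingStub blocking =
        ConfigServiceGrpc.newBlockingStub(channel);
    blocking.upsert(ConfigDelta.newBuilder()
        .setKey("feature.new-checkout")
        .setValue("true")
        .build());

    // Server-streaming call: onNext fires every time the server pushes a new
    // ConfigDelta down the open stream.
    ConfigServiceGrpc.ConfigServiceStub async = ConfigServiceGrpc.newStub(channel);
    async.getConfig(
        ConfigServicePointer.newBuilder().setAccountId("acct-123").build(),
        new StreamObserver<ConfigDelta>() {
          @Override public void onNext(ConfigDelta delta) {
            System.out.println("config changed: " + delta.getKey() + "=" + delta.getValue());
          }
          @Override public void onError(Throwable t) {
            t.printStackTrace();
          }
          @Override public void onCompleted() {
            System.out.println("stream closed");
          }
        });

    // Keep the process alive so pushed deltas can arrive; a real client would
    // hold the channel open for the lifetime of the application.
    Thread.sleep(60_000);
    channel.shutdownNow();
  }
}
```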

Takeaway

gRPC has been a big win for our ability to develop robust, ultra-performant clients. The easy streaming functionality has been fabulous and has really changed how we think about some problems. That said, gRPC is fairly cutting edge, and it definitely made us learn some new things to get it working in AWS: specifically, why traditional AWS ELB & ALB won’t work for gRPC, and SSL and gRPC in AWS (aka you don’t miss AWS cert management until it’s gone).

Overall if you are:

  1. Seriously considering multiple client languages
  2. Already using protobufs
  3. In search of ultra-low latency
  4. Looking for streaming

you should definitely evaluate gRPC.

Try Feature Flags as a Service