Video Summary and Transcription
This talk explores the challenges of API communication in a multi-protocol environment and the limitations of REST. It discusses how contract testing can address these challenges by focusing on API communications and reducing reliance on end-to-end tests. The talk also examines the limitations of specifications like OpenAPI and JSON schema and the challenges of endpoint evolution and versioning. It highlights the benefits of consumer-driven contract testing in ensuring API compatibility and provides an overview of the PACT framework as a standardized solution.
1. Introduction to Beyond REST and Contract Testing
Thank you for joining my talk on Beyond REST, contract testing in the age of gRPC, Kafka, and GraphQL. Today, we will examine whether REST is the problem and if there are superior technologies for API communication. We'll explore different classes of technologies, such as OpenAPI (OAS), AsyncAPI, GraphQL, and IDLs like Protobufs and Avro. Using PACT as a concrete implementation, we'll understand how contract testing fits into these technologies. Let's understand why contract testing exists and the context in which it operates. Research shows that internal microservices are a major focus for teams.
OK, well, thank you, everybody, for coming to my talk on Beyond REST, contract testing in the age of gRPC, Kafka, and GraphQL. My name is Matt Fellows. I'm a Principal Product Manager at SmartBear. I was a co-founder of Pactflow, which joined the SmartBear family back in April this year. And I'm a core maintainer of PACT, which is an open source, consumer-driven contract testing framework. And that's the subject, of course, of much of today's talk.
Prior to working at Pactflow, I was a consultant. I'm a recovering consultant, and I was lucky enough to work at some of Australia's biggest and best-known organizations on their distributed systems, and I've really seen them evolve over my career. There has, of course, been a huge amount of technological change since the days of SOAP, which was the predominant technology when I joined the industry. In my relatively short career, I've worked from that SOA and SOAP starting point to REST and microservices, the rise of public cloud and IoT, event sourcing, event streaming, modern data pipelines, and of course serverless architectures. And I found in many of these situations and contexts that contract testing was still really relevant, and I would often look to introduce it where there were benefits in shifting left, moving faster and solving those problems. But of course, during those rollouts, I would often be on the receiving end of some kind of snarky comment, usually from a competing consultancy of course, that had the following shape of argument: if we just used insert-blank technology, then we wouldn't need contract testing. But is it true? Well, today we're going to examine that statement. We're going to ask the question, is REST really the problem? And could we save ourselves the trouble of having to think about contract testing and running those tests by using a superior technology for API communication? We're going to learn briefly what contract testing is, why it exists and the problem that it solves. And we're going to look to see if history is repeating itself, or if these new technologies and architectural trends really do solve the problem. Specifically, we're going to look at a few classes of technologies: specifications such as OpenAPI and, to a degree, AsyncAPI; GraphQL; and also IDLs like Protobufs, Avro and Thrift. There are of course others, but these are by far the most common alternatives suggested to me by my consultant interlocutors. We'll look at these through a general contract testing lens, but of course we'll use PACT, which is obviously a tool that I work on, as a concrete implementation to help us understand how it works in practice. And hopefully you'll see that, while PACT has evolved to meet some of these needs, the problem and solution are much more general than any specific technology or language that we'll discuss today.
Let's quickly talk about, or understand, why contract testing exists and the context in which it operates. I think starting with a customer quote is a good way to set the scene, just to help us feel the problem. This is a quote from a Pactflow prospect reaching out for some help, and for argument's sake, and to protect the innocent, let's call him Bill. Bill is a leader of a testing organization for over 40 teams in a very large banking institution. And you can see here, he's basically describing working in this big, highly volatile testing environment, which makes things challenging, and he's trying to work out how he can use contract testing to test all the things he's got. He's got RESTful services, GraphQL, Kafka, third party systems, all these things, right? And what you can take away from this is that he works in a chaotic, complex environment, and he's looking for ways to bring some process and control to that situation. Now, if that sounds anything at all like your company or architecture, you're not alone. Research from SmartBear's State of Quality report, as well as Postman's API report, really backs this up. And we can see that, for one, internal microservices are becoming a massive focus for teams.
2. Challenges and Solutions for Microservices Testing
You can see that most companies operate in a multi-protocol environment, and many face challenges with the speed of delivery, complexity of systems, and the scalability of their ecosystem. Mature organizations are particularly affected by these issues. The traditional approach of end-to-end integrated testing for microservices can be slow, flaky, and difficult to debug. However, contract testing offers a solution by reducing the reliance on end-to-end tests and focusing on API communications. By mocking out dependencies and validating mocks, contract testing ensures that systems can communicate effectively.
You can see here that 61% said that microservices are where they're going to see the most growth. But actually, behind the scenes, you'll read that it's really about internal services, about making data available internally to other teams to create more value, which is really interesting. But you can see that also most companies operate in a multi-protocol environment, 81% or so, and almost 60% have three or more protocols.
Now, of course, while microservices aren't new, and many lessons have been learned, there are issues we're still seeing emerge, really a decade on into this new way of doing things. In these reports, we can see that 50% of people stated that experience or skills were a barrier to getting microservices going, and 35% stated that complexity of systems, the second issue, is becoming the problem. And the obstacles are around the expected speed of delivery versus the time to actually build and test things, which are really at odds with one another. And what I found most interesting of all was that mature organizations are the ones feeling the pain. And so you think, why? It seems counterintuitive: if they're mature, they've probably got all these practices and technologies and whatnot to deal with it. But I actually think the reasons are well understood.
The first reason, or one of those reasons, is how we test microservices today. Most companies test their distributed systems using a form of integration testing called end-to-end integrated tests. This involves basically taking all your applications and deploying them into one big shared environment, and then running a battery of tests against the whole system, so all the layers of your system, right? And then, if that works, you can deploy. Now, these tests might give you a high degree of confidence if they pass, but they tend to be quite slow, very flaky, and very hard to debug. And because of all this, they give you feedback much further down the lifecycle, because you've had to deploy everything before you can get that feedback. It also means your services are very difficult to deploy: you probably can't deploy things independently, and you probably have a distributed monolith rather than a nice coherent set of cooperating components working towards a single end.
This creates a problem when you start to scale the size of your ecosystem, both people and software. As you add new components and people into the system, you see a nonlinear response in things like cost, complexity and time: the number of environments, build time, the cost associated with change, and developer idle time. But if you look really carefully, you'll notice that you only really start to feel the pain a bit further down. That inflection point isn't there at the start, so you come into the system thinking it's easy to use, but as you scale, you eventually hit this tipping point where it becomes really painful. And so no wonder Bill's having a bad time. This goes some way to explaining what is going on there.
Okay, so what's the solution? Well, one of the solutions is contract testing. Contract testing can help by replacing a lot of the end-to-end tests with a way of testing your API communications, which is often what end-to-end testing aims to do anyway. Contract testing is a technique to ensure that two systems are able to communicate. We look at those integration points, and we do it by mocking out the dependencies. So, if you're an API consumer, you mock out the provider, and those mocks are later replayed against the real provider. And if those mocks have been validated, we can be confident that these two systems are able to communicate. And the benefit of this way of working is that it's much simpler to understand: you're just testing a single integration point at a time.
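To make that concrete, here is a minimal sketch of what a consumer-side contract test can look like with Pact JS. It assumes a recent @pact-foundation/pact, a Jest-style test runner and axios, and the service names, endpoint and fields (order-report-web, order-api, /orders/123) are hypothetical, purely for illustration.

```typescript
// Minimal consumer-side contract test sketch using Pact JS.
// Names, endpoint and payload are hypothetical examples.
import { PactV3, MatchersV3 } from "@pact-foundation/pact";
import axios from "axios";

const { like } = MatchersV3;

const provider = new PactV3({
  consumer: "order-report-web",
  provider: "order-api",
});

describe("Order API contract", () => {
  it("returns an order by id", () => {
    // 1. Describe the interaction this consumer relies on; this becomes the contract.
    provider
      .given("an order with id 123 exists")
      .uponReceiving("a request for order 123")
      .withRequest({ method: "GET", path: "/orders/123" })
      .willRespondWith({
        status: 200,
        headers: { "Content-Type": "application/json" },
        body: { id: like(123), total: like(4999) },
      });

    // 2. Run the real client code against the Pact mock server.
    return provider.executeTest(async (mockServer) => {
      const res = await axios.get(`${mockServer.url}/orders/123`);
      expect(res.data.total).toBe(4999);
    });
    // The recorded interaction is written to a pact file and later replayed
    // against the real provider during provider verification.
  });
});
```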
3. Introduction to Rest and Other Specifications
Contract tests don't need dedicated test environments: you can run them on your dev machine, they scale linearly, and they let you deploy services independently. Returning to the original hand-waving argument from my consultant interlocutor: is REST really the problem, and could we save ourselves all the trouble by using something else? We'll start with OpenAPI and its counterpart, JSON Schema, although the same thinking applies to AsyncAPI, SOAP, and other specifications.
You don't need dedicated test environments; you can run these tests on your dev machine. Because of this, you get fast and reliable feedback that's easy to debug. These tests scale linearly, and we can deploy the services independently. And of course we can track the contracts over time, which gives us the ability to evolve them. We talked about this at the previous TestJS Summit; go back and watch that video if you want the rest of the detail of that talk.
Okay, so hopefully we've got a bit of a grasp of the problem we're trying to deal with, and how contract testing might be able to help with that. Let's now return to the original hand-waving argument from my consultant interlocutor, and ask the question: is REST really the problem? And could we save ourselves all the trouble by using something else? Let's start with OpenAPI and its counterpart, JSON Schema. But you can also think about this as applying to things like AsyncAPI and SOAP and other specifications. To an extent, GraphQL comes into this mix as well, but we don't have time to talk about GraphQL specifically today.
4. Solving Problems with Specifications
Specifications contain all the bits needed to communicate what an API can do. JSON Schema helps define the shape of data, but it has limitations: a schema is abstract, so it is hard to say definitively that an API is, or isn't, compatible with a spec. Optional and nullable fields in OpenAPI documents can make it difficult to understand what data will actually be available. Polymorphic endpoints further complicate the picture.
So, how does it aim to solve the problem? Well, the first thing is that specifications contain all the bits needed to communicate, to humans and machines, what an API can do. And it uses things like JSON Schema to tell us what the shape of the incoming data is, and what the response shape should look like, or can be. And then we can generate API clients and servers from that OAS.
So we know we're not going to make any breaking changes, right? If we can generate client code from the OAS, are we not always guaranteed to have a working system? Well, of course the answer is no, otherwise we wouldn't be here. And if you're old enough to remember SOAP, you'll remember that we had all these features too: a clear specification, a clear schema, and client and service generation. But it didn't solve the problem. And REST obviously has some better robustness principles built into its design, you know, Postel's law and extensibility. But it really isn't enough, actually.
And if you look at this quote from the JSON Schema website, it basically says that if you're going to use JSON Schema to validate your requests, you're going to need two types of validation. One is structural, at the schema level, which is what JSON Schema can do. The other is at the semantic level, and that needs to be done in code. So JSON Schema still can't do that on its own; it even tells you so.
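As an illustration of that split, here's a hedged sketch using Ajv (not a tool mentioned in the talk, just a common JSON Schema validator) with a hypothetical booking payload: the structural check passes, while the semantic rule still has to live in application code.

```typescript
// Structural validation with JSON Schema (via Ajv) vs. a semantic rule in code.
// The booking payload and the "check-out after check-in" rule are hypothetical.
import Ajv from "ajv";

const ajv = new Ajv();

const bookingSchema = {
  type: "object",
  required: ["checkIn", "checkOut"],
  properties: {
    checkIn: { type: "string" },
    checkOut: { type: "string" },
  },
};

const validate = ajv.compile(bookingSchema);

const payload = { checkIn: "2023-06-10", checkOut: "2023-06-01" };

// Structural check: passes, because the shape matches the schema.
console.log(validate(payload)); // true

// Semantic check: the schema cannot express "checkOut must be after checkIn",
// so that rule has to be enforced in application code.
const semanticallyValid =
  new Date(payload.checkOut) > new Date(payload.checkIn);
console.log(semanticallyValid); // false
```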
So the aphorism that an API is not incompatible with a spec starts to sum up how I think about, and how we think about, OpenAPI. What I mean by that is that a schema is actually abstract, and so it's very difficult to say that an API actually does implement a spec, because the spec is abstract and not exactly clear. For example: optional and nullable fields. In sufficiently advanced OpenAPI documents, you'll see the use of optional and nullable fields, but you won't know in which situations those fields will or will not be present. In practice, a lot of fields turn out to be optional, and so it becomes really difficult to understand what data will be available at what point in time. But certain consumers may need that information. Combine this with polymorphic endpoints.
5. Challenges with Endpoints and Evolution
Endpoints that accept different inputs and outputs make it hard to determine which input corresponds to which output. SOAP had Schematron as an extra layer of protection, but it required XSLT and custom functions. Generated client SDKs make it difficult to know how consumers use the interface, necessitating a different evolution mechanism, usually versioning.
So these are endpoints that can accept different shaped inputs and return different shaped outputs. It now becomes very difficult to say, for every single resource in your document, which input is going to correspond to which output and under what conditions. In the case of SOAP, we actually had Schematron to help us with this; it was an extra layer of protection you could put over the top, but it needed XSLT, and custom functions, to do it. Of course, we lose sight of the API surface area as well. If we're using generated client SDKs, we don't know what our consumers are doing, so we have to assume they're using the entire interface. Which means we need a different mechanism for evolution, and the standard one here is versioning, which we'll touch on in a second.
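To illustrate the ambiguity, here is a small, hypothetical sketch of the kind of TypeScript types a generator might emit for a polymorphic endpoint with mostly optional fields; the endpoint and field names are invented for illustration.

```typescript
// Hypothetical types, roughly what a generator might emit for a polymorphic
// /payments/{id} response where the spec uses oneOf plus optional fields.
interface CardPayment {
  kind?: "card";
  maskedPan?: string;
  total?: number;
}

interface BankTransferPayment {
  kind?: "bank_transfer";
  reference?: string;
  total?: number;
}

type PaymentResponse = CardPayment | BankTransferPayment;

// The spec says the response is "one of" these shapes, but not under which
// conditions, and nearly every field is optional. A consumer that genuinely
// needs maskedPan cannot tell from the spec alone whether it will be present
// for the responses it actually receives.
function receiptLine(payment: PaymentResponse): string {
  if (payment.kind === "card") {
    // Still string | undefined here: the schema permits it to be absent.
    return `Card ending ${payment.maskedPan ?? "????"}: ${payment.total ?? 0}`;
  }
  return `Payment: ${payment.total ?? 0}`;
}

console.log(receiptLine({ kind: "card", total: 4999 })); // "Card ending ????: 4999"
```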
6. Challenges with Client SDKs and Versioning
Client SDKs can be modified and used in unexpected ways, leading to drift. Versioning is a common but painful practice that requires building, maintaining, testing, and releasing multiple versions of software. The cost of maintaining code is usually greater than building it. Avoiding this overhead is desirable.
Last, but not least, client SDKs can be modified and used in unexpected ways. I was talking to one of our solution engineers in Europe who works with some of the world's largest Swagger and SwaggerHub customers. He mentioned that basically 90% of all the customers who use codegen will actually modify the generated code after that initial generation, so when the OAS changes, that opportunity for drift is definitely still present.
Of course, let's briefly touch on versioning. Versioning is the most common practice here, but it is painful, and we don't want to do it if we can avoid it. Teams need to build another version of the software, maintain it, test it, and release it. Now we've got multiple versions, and therefore more code, to maintain, and the cost of maintaining code over its lifecycle is usually much greater than the cost of building it. Because we don't know what our consumers are using, functionality always has to carry forward through those versions, since we have to assume they need it and want it. And then we need to get consumers onto those new versions eventually, which requires us again to monitor, coordinate and communicate to move people onto those versions. Really, this is the cost, and if we can avoid this overhead, we should.
7. Interface Definition Languages and Understanding
Let's talk about interface definition languages like Protobufs, Avro, and Thrift. Protobufs are often suggested as a solution to avoid contract testing because they have built-in schema evolution. However, syntax and semantics are different, and just because we can communicate doesn't mean we understand each other.
So let's talk about the second class of technologies here: interface definition languages. We're talking about things like Protobufs, Avro and Thrift, and I should say out of the gate that Protobufs is by far the most common suggestion that comes up with me: that if we just used Protobufs, we wouldn't need contract testing, because it's got schema evolution built into it from the start. In fact, it's so good you can go forwards and backwards in time; you can use older clients with newer servers and newer clients with older servers and they can all communicate, magically. And of course, it supports code generation as well, so we can create servers and clients from those definitions. So if that's the case, why are Protobufs and gRPC the number one requested feature on our open source roadmap? Well, the answer here is that colorless green ideas sleep furiously. What am I saying? Obviously, this is an absurd statement. It actually comes from Noam Chomsky, who wrote it in the 1950s in his work on language. The point he was making is about syntax and semantics: syntactic well-formedness does not mean something is semantically understandable. Grammar and syntax are one thing; meaning is another. So what we're trying to say is that just because we can communicate with one another, we still need to understand what it is we're saying.
8. Challenges with Protobufs and Optional Fields
Let's consider an example with a Kafka topic and a consumer reading orders. If we change the order structure, the consumer still needs the total value. Optional fields can create bugs, as seen in a real case with gRPC and Protobufs. Version three of Protobufs removes required fields altogether, which risks the same incomprehensible-API problem SOAP had. Breaking changes, transport safety, and the relatively limited type systems of these IDLs all still need to be managed.
So let's take a bit of an example here. Let's say we've got a Kafka topic where we're posting orders as they're completed. And we have a consumer on that topic that's reading all the orders coming in and tallying up the totals, just to produce a report or something, right? So every time an order comes through, it looks at the total, reads the total value, and prints it out or saves it somewhere.
Now, let's say we need to change that order structure. Maybe we need to split the total into different values, say GST or tax or what have you, or we need to move it somewhere else in the payload. Either way, it's changed, and now we've pushed out this new message. Well, just because the consumer can still sort of read the message doesn't mean it no longer needs that total value, right? It still needs the value; otherwise, it can't do its job.
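Here's a hedged sketch of that reporting consumer in TypeScript (the topic, field names and JSON encoding are assumptions for illustration): nothing fails when the producer changes, the report is just silently wrong.

```typescript
// Sketch of the Kafka reporting consumer from the example (hypothetical names).
// It still "reads" the new message fine, but its job depends on `total` existing.
interface OrderCompleted {
  orderId: string;
  total?: number; // effectively optional from the consumer's point of view
}

let runningTotal = 0;

function onOrderCompleted(raw: string): void {
  const order: OrderCompleted = JSON.parse(raw);
  // With a schema-evolution-friendly format the consumer still deserialises the
  // new message shape, but the value it needs is simply no longer there.
  runningTotal += order.total ?? 0; // silently under-reports once total moves or splits
  console.log(`running total: ${runningTotal}`);
}

// Old producer message: works as intended.
onOrderCompleted(JSON.stringify({ orderId: "o-1", total: 4999 }));
// New producer message: total split into subtotal + tax.
// Nothing throws; the report is just wrong.
onOrderCompleted(JSON.stringify({ orderId: "o-2", subtotal: 4500, tax: 499 }));
```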
This problem manifests itself even further, and gets much more difficult, when you combine it with optional fields. In researching this talk, I spoke with a team that worked with gRPC and Protobufs in a global payments processing company. There was a strange bug where merchants were occasionally not receiving payments from the provider. After careful analysis and debugging, they discovered that the merchant service had been changed: its configuration API had a new Boolean field on it for enabling saved payments for a merchant. And they found there had been one single client that didn't know about this update, and every time it sent a request to the merchant service, it would inadvertently disable merchant payments, because the default value of that field is of course going to be false. So you can see how this kind of bug can be particularly nefarious to find and resolve. And it's actually a case where you probably don't want the forwards and backwards compatibility; maybe a loud failure would be better. And in fact, to make things worse, version three of Protobufs gets rid of required fields altogether, which feels very much like what happened with SOAP in the old days, where everything was an optional field and it was just a really incomprehensible API. So coming back to the challenges with Protobufs: we talked about semantics and optional fields. We still need to manage breaking changes. Field descriptors and things like this should, theoretically, be easy to manage, but in practice it's really easy to accidentally refactor your code and change them. We may need to consider how we manage transport safety, because Protobufs can go over different types of transports. And we may want to look at narrowing the types. A lot of these IDL technologies don't have really rich type systems (GraphQL is a good counterexample; it does). Let's say you want to store a semantic date, or a particular format of a date, or a semver, all these kinds of things. Well, the language won't have those primitives built into it; you need to put that on top of it. That's actually a challenge.
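Here's a hedged sketch of how that "default false" bug can play out, approximated in TypeScript rather than real generated gRPC code; the field and service names are hypothetical. In proto3, an unset bool decodes to its zero value (false), so the server can't distinguish "not sent" from "explicitly disabled".

```typescript
// Approximation of the "default false" bug in TypeScript. Names are hypothetical.
interface MerchantConfigUpdate {
  merchantId: string;
  displayName?: string;
  savePaymentsEnabled?: boolean; // new field the old client knows nothing about
}

interface MerchantConfig {
  merchantId: string;
  displayName: string;
  savePaymentsEnabled: boolean;
}

function applyUpdate(update: MerchantConfigUpdate, current: MerchantConfig): MerchantConfig {
  return {
    merchantId: update.merchantId,
    displayName: update.displayName ?? current.displayName,
    // Mirrors proto3 decoding: an absent bool arrives as its zero value (false),
    // so every update from the old client inadvertently disables payments.
    savePaymentsEnabled: update.savePaymentsEnabled ?? false,
  };
}

const current: MerchantConfig = {
  merchantId: "m-1",
  displayName: "Cafe",
  savePaymentsEnabled: true,
};
// Old client sends an update without the new field.
const fromOldClient: MerchantConfigUpdate = { merchantId: "m-1", displayName: "Cafe Roma" };
console.log(applyUpdate(fromOldClient, current).savePaymentsEnabled); // false
```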
9. Consumer-Driven Contract Testing and API Evolution
Your provider contract is not your API; it is just one representation of it. Consumer-driven contract testing uses record and replay to test representative examples against the real provider, which gives confidence it will actually work and provides specification by example, reducing ambiguity. Because we know which pairs of application versions have valid contracts between them at any point in time, we can effectively time travel through service evolution, checking whether a given consumer version can read messages from a given producer version and vice versa. Transport concerns are encoded in the contract, types can be matched narrowly, and because we have all the contracts we know the real surface area in use, which gives us a mechanism to evolve without versioning. Contract testing provides a generalized data exchange test that works across transports, protocols, and content types.
It actually could be a problem. And, of course, if we use SDKs and whatnot, we're going to lose visibility, just like before, into real-world client usage. And if we accept these problems, we now need a new way of coordinating changes.
All right. So, hopefully, you've started to see a bit of a theme here. Obviously, these technologies do give us some benefits, but they don't actually solve this one problem we've been talking about.
So, let's return to what lessons we can learn from all this, and see if we can understand why consumer-driven contract testing is so effective in helping us. The first point I want to make is that your provider contract is not your API; it is just one representation of your API. And, in fact, any observable change in the behavior of an API will be deemed breaking by some consumer. This observation is known as Hyrum's Law. But while we can't make this law disappear, we can find ways to reduce the ambiguity. And one of those ways is to bring the consumer's perspective into the picture, which is, of course, how contract testing and PACT can help us.
We use record and replay to test those representative examples against the real provider. This gives us confidence it's actually going to work, because we're testing it. We also get specification by example, because that's exactly what we're doing, and that reduces ambiguity: we can now see how it's supposed to work, which improves API comprehension. We can actually do time travel in that service evolution, because we now know pairs of application versions and which contracts are valid between them at any point in time. So we can go forwards and backwards in time and ask, can this version of the consumer read messages from this producer, and vice versa? Very cool. We encode those transport concerns in the contract. We can do narrow type matching in the contract. And of course, because we've got all the contracts, we know what all of our consumers are doing. We have the surface area, and that gives us a mechanism to evolve without having to do versioning.
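For the asynchronous case, here's a hedged sketch of what a message (for example, Kafka) consumer contract test can look like with Pact JS. It assumes a recent @pact-foundation/pact and a Jest-style runner, and the consumer/provider names, event shape and handler are hypothetical.

```typescript
// Message pact sketch: the consumer states what it needs from an async event.
// Names and payload are hypothetical, for illustration only.
import path from "path";
import {
  MessageConsumerPact,
  MatchersV3,
  asynchronousBodyHandler,
} from "@pact-foundation/pact";

const { like } = MatchersV3;

const messagePact = new MessageConsumerPact({
  consumer: "order-report-consumer",
  provider: "order-events-producer",
  dir: path.resolve(process.cwd(), "pacts"),
});

// Our real message handler: the code that would normally be wired to the topic.
async function orderCompletedHandler(message: { orderId: string; total: number }): Promise<void> {
  if (typeof message.total !== "number") {
    throw new Error("order completed event is missing a total");
  }
}

describe("order completed events", () => {
  it("can handle an order completed event", () => {
    return messagePact
      .given("an order has been completed")
      .expectsToReceive("an order completed event")
      .withContent({
        orderId: like("o-1"),
        total: like(4999),
      })
      .withMetadata({ "content-type": "application/json" })
      // The handler runs against the example content; the expectation is
      // recorded as a contract and later verified against the real producer.
      .verify(asynchronousBodyHandler(orderCompletedHandler));
  });
});
```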
So I hope you can see that contract testing does provide this generalized data exchange that works across transports, protocols, and content types. Whether you're providing RESTful APIs over HTTP, Protobufs over gRPC or Kafka, or writing Avro to a file system as part of an ETL data pipeline, the problem we're trying to solve is really the same: can we communicate some data between multiple versions of evolving applications? So while PACT doesn't have native support for all of these technologies out of the box, it can be extended by plugins.
10. Conclusion and Call to Action
You can write a plugin in your language of choice to support transports, protocols, and matching rules. PACT now supports plugins in multiple languages and has just released beta support for JS. The lack of standardization for design and testing contributes to microservices challenges. Contract testing reduces testing complexity and ambiguity in API specifications. PACT provides a standardized workflow for testing API communications across languages, transports, and protocols. Thank you for attending the talk, and feel free to reach out with any questions.
So you can write a plugin in your language of choice to support things like transports, protocols or matching rules, and then distribute it to all the languages that support plugins. We've added that support to PACT JVM (so Java, Kotlin and whatnot) and PACT Go, and we've just released beta support for PACT JS, so you're among the first to know about that. And the first official plugin we created was, of course, Protobufs.
So here's an example of a gRPC/Protobuf verification failing, where the provider has not matched the exact content type expected by the consumer. So, what are our takeaways here? Multi-protocol internal microservice adoption is accelerating, and the lack of standardization for design and test is contributing to the challenges of microservices for all. We learned about Hyrum's Law and the need to reduce ambiguity, and how, really, a specification is not the API itself; it's just one view of the API. Contract testing is an approach that can help reduce both the complexity of our testing and the ambiguity inherent in those API specifications. And lastly, we saw how PACT can wrap all this up into a standardized workflow for testing those API communications across languages, transports and protocols.
So, thank you so much for coming to my talk. I really hope that was interesting and you learned something. If you have any questions, do reach out to me on any of the channels below and enjoy the rest of the conference.