Apache Kafka Simply Explained With TypeScript Examples


You’re curious about what Apache Kafka does and how it works, but between the terminology and explanations that seem to start at a complex level, it's been difficult to get started. This session is different. We'll talk about what Kafka is, what it does and how it works in simple terms, with easy-to-understand and funny examples that you can share later at the dinner table with your family.


This session is for curious minds who may have never worked with distributed streaming systems before, or who are beginners to event streaming applications.


But don't let the simplicity deceive you - by the end of the session you’ll be equipped to create your own Apache Kafka event stream!

This talk was presented at JSNation 2023. Check out the latest edition of this JavaScript conference.

FAQ

Apache Kafka is an event streaming platform that is distributed, scalable, high-throughput, low-latency, and has an amazing ecosystem and community. It handles the transportation of messages across multiple systems, including microservices, IoT devices, and more.

Apache Kafka is known for being distributed, scalable, high-throughput, low-latency, and having a strong ecosystem and community. It can handle trillions of messages per day and store data persistently across multiple servers.

Apache Kafka simplifies the handling of real-time data by untangling data flows and supporting real-time monitoring, processing, and reporting. It uses a push-pull model where producers push data into the cluster, and consumers pull data from the cluster.

In Apache Kafka, producers are applications that create and push data into the cluster, while consumers are applications that pull and read data from the cluster. Producers and consumers can be written in different programming languages and run independently.

Apache Kafka ensures data persistence and reliability by storing data on multiple servers (brokers) with replication. This means if any server goes down, the data is still available. Data is stored persistently on disks and can be read multiple times by different applications.

A topic in Apache Kafka is an abstract term for a set of events that come from one or more sources. It can be seen as a table in a database, with messages ordered by offset numbers. Topics are divided into partitions for distributed storage.

Apache Kafka splits topics into partitions, each with its own offset numbering. Ordering is guaranteed within a partition, and using a message key, such as a customer ID, ensures that related messages land in the same partition. Data is replicated across brokers to ensure reliability.

Common use cases for Apache Kafka include real-time data streaming for e-commerce platforms, monitoring and reporting systems, IoT device data handling, and any application requiring high-throughput, low-latency message transportation.

Apache Kafka supports different programming languages by allowing producers and consumers to be written in various languages. This flexibility helps in integrating Kafka with diverse systems and applications.

Event-driven architecture in Apache Kafka involves handling data as a continuous flow of events rather than static objects. This allows for real-time data processing, replaying events, and answering complex queries based on the event stream.

Olena Kutsenko
27 min
01 Jun, 2023

Video Summary and Transcription
Apache Kafka is a distributed, scalable, and high-throughput event streaming platform that plays a key role in event-driven architecture. It allows for the division of monolithic applications into independent microservices for scalability and maintainability. Producers and consumers are the key components in Kafka, allowing for a decoupled system. Kafka's replication and persistent storage capabilities set it apart from alternatives like Redis and RabbitMQ. Kafka provides easy access to real-time data and simplifies real-time data handling.

1. Introduction to Apache Kafka and Shoputopia

Short description:

Hello everyone. Today I wanted to talk to you about Apache Kafka, an amazing project that has become the default standard for data streaming. Let me give you an example of how Apache Kafka can make a significant difference in a project. Imagine building an e-commerce product based on the movie Zootopia, called Shoputopia. As the project grows, it's important to avoid putting everything into a single monolith. Instead, we should consider dividing the monolith into independent microservices to ensure scalability and maintainability.

Hello everyone. My name is Olena. I work at Aiven, where we support and contribute a lot to open source projects. Today I wanted to talk to you about one of those amazing projects, which has existed for over a decade and has become the de facto standard for data streaming.

This is obviously Apache Kafka. But before we give a definition for Apache Kafka, I wanted to give you an example of a project where Apache Kafka makes a significant difference, both to the users of the system and to the developers. And my ingenious project idea is based on an animated movie which you might have seen, Zootopia. If you haven't seen it, no worries. However, if you have, you will recognize some of our characters, because today you and me, we are going to build the first e-commerce product of Zootopia, and we'll call it Shoputopia. And like in any e-commerce project, we want to have an inventory of products we're going to sell, and a simple user interface to start with, where our lovely customers will be able to search for products, select what they need, place an order and wait for delivery.

And at the start, maybe during the MVP stage, you might be tempted to put everything into a single monolith where your frontend and your backend sit next to each other. You will have some data source there as well, and there is nothing bad about monoliths per se. However, once you have more customers and your shop becomes more popular and you start adding more and more modules into this monolith, very soon the architecture and the information flow of the system risk becoming a mess. A mess that is difficult to support and difficult to expand. And assuming our development team is growing, no single individual will be able to keep up with the information flow of the system. You might have been in those shoes: you join a project, they show you the architecture, and you think, oh my God, how do I navigate this? Whom should I talk to to understand this whole system? At this point we'll have to have a tough conversation about how we can divide our monolith into a set of independent microservices with clear communication interfaces.

2. Importance of Real-Time Data and Apache Kafka

Short description:

Our architecture needs to rely on real-time events for meaningful recommendations. We also want easy access to real-time data without over-complicating our lives. That's where Apache Kafka comes in, untangling data flows and simplifying real-time data handling.

What's even more crucial, our architecture must be as close to real-time communication as possible, relying on real-time events, so that our users don't have to wait until tomorrow to get meaningful recommendations based on purchases they made today or yesterday. It would also be really cool to have support for real-time monitoring, processing and reporting coming as part of the package.

Also, as engineers, we want to work with real-time data in an easy fashion that doesn't over-complicate our lives. And this is a lot to ask; however, that's why we actually have Apache Kafka, and Apache Kafka is great at untangling data flows and simplifying the way we handle real-time data.

3. Introduction to Apache Kafka

Short description:

Apache Kafka is an event streaming platform that is distributed, scalable, high-throughput, low-latency, and has an amazing ecosystem and community. It can handle transportation of messages across multiple systems, including microservices, IoT devices, and more. Apache Kafka deals with entities described by continuously coming events, allowing for a flow of events and the ability to approach data from different angles. It plays a key role in event-driven architecture, coordinating data movement and using a push-pull model to handle incoming messages.

So with this I wanted to move to a definition of Apache Kafka, and I know definitions are really boring; however, I want us to be on the same page so that we can understand each other. So Apache Kafka is an event streaming platform that is distributed, scalable, high-throughput, low-latency, and has an amazing ecosystem and community. Or, simply put, it is a platform to handle the transportation of messages across your multiple systems. It can be microservices, it can be IoT devices, it can be a teapot in your kitchen sending information about the water to your mobile phone - anything.

The Apache Kafka platform is distributed, meaning that it relies on multiple servers with data replicated over multiple locations, making sure that if any of those servers go down, we are still fine. Our users can still use the system. It's also scalable, so you can have as many of those servers as you need, and they can handle trillions of messages per day, ending up in petabytes of data persistently - and that's the word that's important, persistently - stored on the disks. And what is also awesome about Apache Kafka is its community and its wide ecosystem, including the libraries - you'll see JavaScript later in action - and the connectors, so you don't really have to reinvent the wheel. Kafka has existed for over a decade, so there are a lot of connectors which are already built, making it easy to connect Apache Kafka with your systems as well.

So, to understand how Apache Kafka works and, more importantly, how we can work with it effectively, we need to talk about Kafka's way of thinking about data. The approach Kafka takes is simple but also quite clever. Instead of working with data in terms of static objects or final facts - a final set of data stored in a table in a database - Apache Kafka deals with entities described by continuously arriving events.

So in our example, for our online shop, we have some products which we are selling. And the information about the products and their states can be stored in a table in a database. This gives us some valuable information, some final compressed results. However, if after you store the data you come up with more questions - about, I don't know, the search trends, or the peak times for some products - you can't truly extract that information from the data you stored unless you planned for it in advance. So we can see the data in the table as a compressed snapshot, a one-dimensional view, a single dot on an infinite timeline of the data.

What if instead you could see this data as a flow of events? For example, a customer ordered a tie. Another customer searched for a donut. Then we dispatched the tie to the first customer, and the second one decided to buy the donut. And so on - more events keep coming into the system. So instead of seeing a single data point, we see the whole life cycle of a product purchase. What is more, we can replay those events. We can't change past events, they already happened, but we can replay them again and again, approach the data from different angles, and answer questions that come to mind even later (see the sketch at the end of this section). And this is called event-driven architecture, and I'm quite sure many of you are familiar with it. But let's see how Apache Kafka plays with event-driven architecture. So here in the center I put the cluster, and on the left and on the right we see applications which interact with the cluster. Apache Kafka coordinates data movement and takes care of the incoming messages. It uses a push-pull model to work with the data, which means that on one side we have some structures which create and push the data into the cluster.
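To make that flow of events concrete, here is a minimal TypeScript sketch of the Shoputopia lifecycle described above, modeled as a discriminated union. The type and field names are my own and purely illustrative, not taken from the talk's repository:

```typescript
// Hypothetical event types for the Shoputopia example - names are illustrative.
type ShopEvent =
  | { type: 'product_ordered'; customerId: string; product: string; at: Date }
  | { type: 'product_searched'; customerId: string; query: string; at: Date }
  | { type: 'order_dispatched'; customerId: string; product: string; at: Date };

// The life cycle from the talk, expressed as an ordered stream of events
// rather than a single "current state" row in a table.
const events: ShopEvent[] = [
  { type: 'product_ordered', customerId: 'judy', product: 'tie', at: new Date() },
  { type: 'product_searched', customerId: 'nick', query: 'donut', at: new Date() },
  { type: 'order_dispatched', customerId: 'judy', product: 'tie', at: new Date() },
  { type: 'product_ordered', customerId: 'nick', product: 'donut', at: new Date() },
];

// Because the events are kept, we can replay them later and answer new
// questions, e.g. "what was searched for?" - without having planned for it.
const searches = events.filter((e) => e.type === 'product_searched');
console.log(searches);
```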

4. Producers, Consumers, and Topics

Short description:

Producers and consumers are the key components in Apache Kafka. Producers are the applications that engineers write and control to push data, while consumers pull and read the data. They can be written in different languages and platforms, allowing for a decoupled system. In the cluster, events from various sources are organized into topics, which can be seen as tables in a database. The messages within a topic are ordered and have offset numbers. Unlike traditional queue systems, consumed messages in Apache Kafka are not removed or destroyed, allowing for multiple applications to read the data repeatedly. The data is also immutable, ensuring the integrity of past data.

And those are applications that we engineers write and control, and they are called producers. On the other side we have other structures which pull the data, read the data, and do whatever they need to do with it. They are called consumers. And you can have as many producers and as many consumers as you need.

Also, those consumers will be reading data from the cluster in parallel - it's a distributed system. And what is amazing is that in this picture producers and consumers can be written in different languages. I mean, not everyone is a fan of JavaScript. So you can actually mix different applications in different languages and on different platforms. And this is how Apache Kafka helps to decouple the system.

Also, when you send data with your producers and something happens to your producers, consumers don't depend on the producers directly. There is no synchronization expected between them. You can technically pause producers, or, for example, your consumers can go down - it's fine, the consumer will restart and continue from the moment where it left off. Because we store the data persistently on the disks, we can have these interactions without direct communication between producers and consumers.

So now we know a bit about producers and consumers; let's look at what happens inside the cluster. Let's look at the data structure we have there. A set of events that comes from one or more sources is called a topic. A topic is actually an abstract term - we'll come to this later - but let's say it's how we talk about the data, not exactly how it's stored on disk. And you can see a topic as a table in a database, so you can have multiple different topics inside your system. The messages in the topic are ordered - this is actually a bit more complex, we'll touch on it later - and they all have their offset numbers. You can see a topic as a queue, but here is a twist. In Apache Kafka, unlike in many other queue systems, the consumed messages are not removed from the queue and not destroyed. You can actually read the data again and again, by multiple different applications or by the same application if you need to process the data one more time. Also, the data is immutable. Whatever comes in, you can't really change the past data.

5. Demo of Producers and Consumers in Apache Kafka

Short description:

I wanted to show a quick demo using Apache Kafka. I will demonstrate producers and consumers and provide more experiments in the repository. We can create a producer that communicates securely with the Kafka cluster using SSL. Once the producer is ready, we can generate and send data to the cluster. To verify the data, we can create a consumer using node-rdkafka and start reading the data.

And it's kind of obvious: if someone bought a donut, you can't really go into the past and change that fact - unless of course you're Michael J. Fox and you have a DeLorean - but otherwise, if you don't like the donut, you'll have to throw it away. Cool.

With this, I wanted to show a quick demo. Actually, I prepared a GitHub repository where you can check out more stuff later. I will show producers and consumers, but there are more experiments in the repository which you can reproduce. You will need an Apache Kafka cluster.

Apache Kafka is an open source project. You can set up the server locally on your machine, or using Docker, or using one of the available managed versions of Apache Kafka. Since I work at Aiven, I need to mention that we have Aiven for Apache Kafka, which you can actually try with a free trial from Aiven.

Let's create a producer. A producer can be a lambda function or something else. It needs to know where the cluster is located, and also how to communicate with that cluster in a secure way, so that no one can eavesdrop on the information we are exchanging. That's why we're using SSL. There are different ways to authenticate; I think the most common is using TLS or SSL.
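As a rough sketch (not the exact code from the repository), the producer setup with node-rdkafka, the library used in this demo, could look like this; the broker address and certificate file names are placeholders you'd replace with your own cluster's values:

```typescript
import * as Kafka from 'node-rdkafka';

// Hypothetical connection details; in the real demo these come from the cluster
// you set up (locally, via Docker, or a managed service).
const producer = new Kafka.Producer({
  'metadata.broker.list': 'my-kafka-host:12345', // where the cluster is located
  'security.protocol': 'ssl',                    // encrypt traffic so nobody can eavesdrop
  'ssl.key.location': 'service.key',
  'ssl.certificate.location': 'service.cert',
  'ssl.ca.location': 'ca.pem',
});

producer.connect();
```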

Once we've created the producer, we can start it, and once it's running there are different events you can subscribe to. Probably the most useful one is 'ready': once the producer is ready, we can generate data and start sending it to the cluster. So we specify the topic name, the data itself, and some extra parameters which are less important. I also tried to make it a continuous flow of events, so I hope the JavaScript gods will not be offended that I'm using a while-true loop here. And if you're wondering what I have in the data, it's just generated data for the customers. In the repository you will also find a lot of different scripts which you can run.
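And a hedged sketch of the 'ready' handler and the produce call itself; the topic name and the generated payload are illustrative, and this uses setInterval instead of the talk's while-true loop to keep the example short:

```typescript
producer.on('ready', () => {
  // Continuously generate events and push them into the cluster.
  setInterval(() => {
    const order = { customer: 'nick', product: 'donut', at: Date.now() };
    producer.produce(
      'customer-orders',                    // topic name (illustrative)
      null,                                 // partition: let Kafka pick one
      Buffer.from(JSON.stringify(order)),   // the message value must be a Buffer
      undefined,                            // key: none for now, we come back to keys later
      Date.now()                            // timestamp
    );
  }, 1000);
});

producer.on('event.error', (err) => console.error('producer error', err));
```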

So here, if I run npm run produce - and you can actually clone the repository and see it - we start sending the data. To verify that the data reaches the cluster, we can create the consumer, which looks quite similar. By the way, here I'm using node-rdkafka, which is a wrapper around the librdkafka library - probably my favorite JavaScript library for this. And here we do it in a similar way: we connect to the stream and start reading the data.
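The consumer side, again as a sketch under the same assumptions (placeholder broker and certificate paths, illustrative group and topic names), could look like this:

```typescript
import * as Kafka from 'node-rdkafka';

const consumer = new Kafka.KafkaConsumer(
  {
    'group.id': 'shoputopia-readers',              // illustrative consumer group name
    'metadata.broker.list': 'my-kafka-host:12345',
    'security.protocol': 'ssl',
    'ssl.key.location': 'service.key',
    'ssl.certificate.location': 'service.cert',
    'ssl.ca.location': 'ca.pem',
  },
  {}
);

consumer.connect();

consumer
  .on('ready', () => {
    consumer.subscribe(['customer-orders']); // the topic the producer writes to
    consumer.consume();                      // flowing mode: emit a 'data' event per message
  })
  .on('data', (message) => {
    // message.value is a Buffer (or null); partition and offset say where it came from
    console.log(`${message.partition}/${message.offset}`, message.value?.toString());
  });
```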

6. Brokers, Partitions, Replication, and Conclusion

Short description:

Let's add a couple of other concepts to the story: brokers and partitions. Each partition has its own offset numbering, which makes it harder to maintain a global order. Keys can be used to ensure ordering for related messages. Replication is another important concept, with each broker containing replicas. Feel free to try Aiven for Apache Kafka with our free trial.

So this is pretty straightforward for a minimal setup; technically that's all you need, not much more. But let's add a couple of other concepts to the story: brokers and partitions. I already mentioned that Kafka clusters consist of multiple servers. Those servers, in the Kafka world, we call brokers. When we store the data on multiple servers - it's a distributed system - we need to somehow cut our topic into chunks. So we split it, and we call those chunks partitions.

And this is the tricky part: the enumeration on the slide right now looks super nice, but it's actually a lie, because all of the partitions are independent entities. So technically you can't have continuous offset numbers across the whole topic; each of the partitions has its own enumeration for its records. And this makes it difficult when you store data on different servers and then read it back: how do you maintain the order of the records and make sure that the order in which they arrived is the same as the order in which they are read? For this, we use keys. We can, for example, use a customer ID as the key, and this ensures that we can guarantee the ordering of the messages for that customer.
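In code, the key is simply another argument to produce. A small sketch reusing the hypothetical producer from the earlier example, with the customer ID as the key so that all of one customer's events land in the same partition and keep their order:

```typescript
// All events produced with the same key are hashed to the same partition,
// so their relative order is preserved for that customer.
function sendCustomerEvent(customerId: string, product: string): void {
  const event = { customerId, product, at: Date.now() };
  producer.produce(
    'customer-orders',                  // illustrative topic name
    null,                               // partition: derived from the key
    Buffer.from(JSON.stringify(event)),
    customerId,                         // the key, e.g. a customer ID
    Date.now()
  );
}

sendCustomerEvent('judy', 'tie');
sendCustomerEvent('judy', 'carrot pen'); // guaranteed to be read after the tie for 'judy'
```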

Also to mention another important concept: replication. It's a distributed system, so each of the brokers will actually contain not only its partition data, but also some replicas. This is a replication factor of two; usually we actually prefer three, so that you can also handle maintenance windows. But in general, yes, you have replicated data. I believe I'm already running out of time. Here is the link, again, to the repository. There are more examples there - you can play with keys - and you can just clone it and it will work. You can also connect with me later if you have any questions. And feel free to try Aiven for Apache Kafka - we have a free trial, and you don't need credit card details or anything else. With this, thank you so much for listening to me. First of all, great talk. I love the animations.

7. Introduction to Kafka and GDPR

Short description:

I've heard of Kafka before, but that really drove home a lot of the concepts. Our first question has to do with GDPR: can you explain how data sticking around works with GDPR? Technically, you can keep the data in Apache Kafka for as long as you need, but it's more common to consume the data and store it in other data stores. GDPR doesn't really become a problem here, as you can set a TTL to remove the data later, or compact the topic by key.

That was great - I've heard of Kafka before, but that really drove home a lot of the concepts. So yeah, that was awesome.

Our first question has to do with GDPR. So you talked about how the data is immutable, or how the data sticks around for a long time. What's the story on data sticking around with GDPR? So technically speaking, and this is probably not a big secret, you can keep the data in Apache Kafka for as long as you need. However, usually you wouldn't really keep it there for too long, because you'll probably consume the data and store it in some data stores - data lakes, for example, if you have a lot of data. So GDPR doesn't really become a problem. You can also put a TTL on it, so you can remove the data later. You can also compact the data by key, so that you only keep the item with the freshest key. So, yeah. Awesome.

8. Event Removal in Kafka Queue

Short description:

Events in a Kafka topic are persistently stored and can be removed based on time, size, or compaction by key. The default option is to store the data for a specified period, such as two weeks. Alternatively, a maximum size for the topic can be set, and when it is exceeded, older messages are removed. Another option is to compact the data by a specific key, such as a customer ID, so that older messages for that key are removed. However, it is not possible to selectively indicate which events to remove.

The next question is: how and when are events destroyed in the Kafka queue, since they are not removed after consumption? I don't know if you had an example of that? Could you repeat? How are the events removed? How do they get out of the queue? It's persistently stored. They are removed either when the time comes - so you can say, I want to have the data stored for two weeks; there is actually a default value there. Or you can say, I want to keep a maximum size for the topic, and once that size is exceeded, the older messages start to be removed. Or, I want to compact by a key, for example the customer ID, so then the older messages for that key are removed. You can't really go and indicate which specific event to remove; that would be inefficient.
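Those retention and compaction rules are topic-level settings. A rough sketch of setting them when creating a topic, assuming node-rdkafka's AdminClient and the standard Kafka topic config keys (check the library docs for the exact API before relying on this):

```typescript
import * as Kafka from 'node-rdkafka';

const admin = Kafka.AdminClient.create({
  'client.id': 'shoputopia-admin',               // illustrative client id
  'metadata.broker.list': 'my-kafka-host:12345', // placeholder broker address
});

admin.createTopic(
  {
    topic: 'customer-orders',
    num_partitions: 3,
    replication_factor: 2,
    config: {
      // Time-based retention: keep data for two weeks (in milliseconds).
      'retention.ms': String(14 * 24 * 60 * 60 * 1000),
      // Size-based retention would use 'retention.bytes' instead,
      // and 'cleanup.policy': 'compact' keeps only the latest record per key.
    },
  },
  (err) => {
    if (err) console.error('topic creation failed', err);
    admin.disconnect();
  }
);
```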

9. Consumer Offset and Data Schema in Kafka

Short description:

For individual consumers, the offset keeps track of the consumed data. Kafka allows storing any type of data, but it is recommended to restrict it with a schema. Good practices include versioning the schema and avoiding plain text or raw JSON formats.

Okay. I guess for an individual consumer, how do they keep track of what they've already consumed? Yes, the offset - this is probably the more complex scenario. We have an offset per partition, and consumers know how to work with multiple partitions, so they keep track of which data was consumed. If, for example, a consumer goes down, stops, and then needs to restart, it remembers the last consumed items. Okay. Yes, so that's how it works. Great!
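A sketch of what that bookkeeping looks like from the consumer's side with node-rdkafka, using the hypothetical broker, group, and topic names from the earlier examples (the config keys are standard librdkafka settings): committed offsets are stored per partition for the consumer group, so a restarted consumer resumes right after the last committed offset.

```typescript
import * as Kafka from 'node-rdkafka';

const consumer = new Kafka.KafkaConsumer(
  {
    'group.id': 'shoputopia-readers',              // offsets are committed per group and per partition
    'metadata.broker.list': 'my-kafka-host:12345',
    'enable.auto.commit': false,                   // commit explicitly, only after processing succeeds
  },
  {
    'auto.offset.reset': 'earliest',               // where to start if the group has no committed offset yet
  }
);

consumer.connect();
consumer
  .on('ready', () => {
    consumer.subscribe(['customer-orders']);
    consumer.consume();
  })
  .on('data', (message) => {
    console.log('processing', message.partition, message.offset);
    consumer.commitMessage(message);               // record progress so a restart resumes after this offset
  });
```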

And there's a question about schema. So the data that you're storing in the events - does Kafka require, or are you able to, restrict the data that you store with a schema, or is it freeform? So Kafka actually doesn't care what data you store - I mean, you shouldn't stream movies through it, but when it comes to normal data objects you can store whatever you want. I was using JSON just for the sake of my love of JSON. But you actually can and should restrict it. And there are different ways you can do it, because technically the schema evolves, so you want to have versioning on that; you shouldn't really rely on plain text or even raw JSON. Yeah, okay.
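On the TypeScript side, one lightweight way to restrict and version what goes into a topic is a hand-rolled type guard, sketched below; in a production system a schema registry with Avro, Protobuf, or JSON Schema would enforce this for every producer and consumer. The type and field names here are illustrative:

```typescript
// A versioned event shape: an explicit version field makes later schema evolution visible.
interface OrderEventV1 {
  version: 1;
  customerId: string;
  product: string;
  at: number;
}

// Hand-rolled runtime check; a schema registry would normally take this role.
function isOrderEventV1(value: unknown): value is OrderEventV1 {
  const v = value as Partial<OrderEventV1> | null;
  return (
    !!v &&
    v.version === 1 &&
    typeof v.customerId === 'string' &&
    typeof v.product === 'string' &&
    typeof v.at === 'number'
  );
}

// Validate a consumed message before trusting its shape.
const incoming: unknown = JSON.parse('{"version":1,"customerId":"judy","product":"tie","at":1685577600000}');
if (isOrderEventV1(incoming)) {
  console.log(`valid v1 order event for ${incoming.customerId}`);
}
```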

10. RD Kafka Library and TypeScript Support

Short description:

The node-rdkafka library is a powerful choice for JavaScript thanks to its wide range of features and high performance. Although it may not be the most user-friendly, it provides usable TypeScript support and lets you ensure that events adhere to specific properties.

This question is: what is better about the node-rdkafka library? I guess, are there other ones? There are three or even more; it depends on your use case. This is probably the most important decision you will have to make if you use Kafka from JavaScript. I like this library because it wraps the complete librdkafka library, so technically it supports the widest range of features, and it's also the most performant one, at least to my knowledge. However, it might not be the most user-friendly, to be honest. And I'm guessing the TypeScript support varies between them, but the one that you showed - does it have good TypeScript support? Can you pass a type in to make sure that the events adhere to those properties? I noticed some things which I didn't really like - I was like, no, you don't support it - but it's good. I would say it's better than nothing.

11. Alternatives to Kafka and Consistency

Short description:

When considering alternatives to Kafka, it depends on your data storage and reliability needs. If you don't require data storage or don't mind losing data, a queuing system may suffice. Redis and RabbitMQ are often compared to Kafka, but the key difference lies in Kafka's replication and persistent storage capabilities. Producers and consumers in Kafka are separate entities, ensuring consistency by allowing data to be sent quickly and stored. The speed and efficiency of data processing can vary between producers and consumers, but there are techniques to optimize performance. In comparison, RabbitMQ requires additional development to ensure stable connections and data retention.

Awesome. So someone is asking: are there any alternatives to Kafka, or something that you would use or recommend aside from Kafka? I think it depends, because if you don't really need to store the data, or you don't care about losing data, then you can just use a queuing system. Kafka is amazing - it can do so many different things - but with it comes the responsibility of maintaining the cluster and taking care of it. If you don't really need all those replicas and a distributed system, you can just choose a queue. Okay.

And this kind of leads into a similar question: how is it different from something like Redis? Because I know with Redis you can do a queuing or messaging system. I think with Redis it's completely different - Redis is a data store and Kafka is a streaming solution. The comparison that usually comes up is with RabbitMQ; the question I often hear is, how is it different from RabbitMQ, for example? And the difference is the replication of data, the persistent storage of the data. You don't have to maintain the data inside by yourself - Kafka does it - and your producers and consumers can stop working at any moment, servers can go down, and that's a totally normal scenario for Apache Kafka. That's the biggest difference, because it provides a whole ecosystem for making sure you are not losing data at all.

And that leads into the next question, which is: how do you guarantee consistency between producers and consumers? Okay. So producers and consumers are separate entities; we deliberately separate those, so you don't really have that problem. You're on one side, you send the data - your producer is only responsible for sending data quickly. Then the data arrives and is stored in the middle; you have it there. And the consumers don't really know about the producers at all - they don't need to. They know about the topic, and then they read the data one by one. Maybe the consistency question is really about how far behind the producers your consumers are, because processing takes time, and sometimes some of your consumers can be slow. So it's about how long it takes, how efficient the system is. You can measure that, and there are different tricks to make it faster. Okay.

And one last question for you: you can do reliable message queues in RabbitMQ - is that different from Kafka? It's quite different. To be honest, I might not go too deep into detail with RabbitMQ, but with RabbitMQ you will have to build a lot on top to make sure that you have a stable connection, so that if anything goes down you're not losing data. Whereas with Kafka, that is the primary part: replicated data and not losing it. Okay, great. Well, can we get one more round of applause for Olena? Thank you so much, Olena. Thank you.
