Unlocking the Potential of Real-Time Event-Driven Applications With JavaScript

In today's fast-paced digital world, real-time event-driven applications are at the heart of delivering dynamic and responsive user experiences. This session will delve into the technical intricacies and immense potential of building such applications using JavaScript.

Together, we will explore the core concepts of event-driven architecture (EDA) and its implementation in JavaScript. Key topics will include a technical overview of the event loop and non-blocking I/O, WebSockets, and Message Brokers.

We will further our journey with a look at how to apply these technologies into different use cases such as live data feeds, and collaborative applications while ensuring low latency and fault tolerance.

This talk was presented at JSNation US 2024. Check out the latest edition of this JavaScript conference.

FAQ

Jarred Utt is a technical lead at ADB Safegate, working on apron control software to enhance airport performance, safety, and sustainability.

Event-driven architecture is a software design pattern focused on producing, detecting, consuming, and reacting to events, which are state changes in an application.

JavaScript handles asynchronous operations using the event loop, which manages the execution of synchronous and asynchronous code by processing callback functions in various queues.

Key components of the Node.js runtime include the V8 engine, libuv, and a JavaScript library that exposes C++ features, such as file system access and networking, to JavaScript code.

Event emitters in JavaScript are used to emit events, listen for changes, and stop listening to events, allowing for decoupled parts of an application to communicate.

WebSockets provide full duplex communication over a single connection, allowing for real-time communication with lower overhead compared to traditional HTTP request-response cycles.

RabbitMQ is an open-source message broker used for reliable, flexible, and scalable message routing between different services, supporting protocols like AMQP.

MQTT is a lightweight messaging protocol used in IoT applications where there is low bandwidth or unreliable networks. It operates on a publish-subscribe model.

Challenges of event-driven architecture include ensuring data consistency, managing complexity as the system grows, and handling event ordering and debugging due to its asynchronous nature.

Performance can be improved by using high-performing message brokers, deploying components close together, and processing events in batches. Fault tolerance can be enhanced by running brokers in clusters, ensuring data durability, and implementing retries for transient failures.

Jarred Utt
20 min
21 Nov, 2024

Video Summary and Transcription
Hi, I'm Jarred, a technical lead at ADB Safegate. Today, I'll discuss event-driven architecture and the potential of real-time event-driven systems in JavaScript. We'll explore JavaScript runtimes, the event loop, and the queues involved. Event-driven architecture involves producing, detecting, consuming, and reacting to events. It is used in microservices, IoT systems, and real-time data processing. Tools like event emitters and WebSockets simplify building event-driven applications. Connection management is crucial, and RabbitMQ and MQTT are popular choices for brokered messaging. Performance optimization can be achieved by using high-performing message brokers, deploying producers and brokers close together, and considering fault tolerance. Message processing should include storing messages until they are successfully processed, handling the same events multiple times without unintended side effects, and using automatic retries and dead letter queues for transient failures.

1. Introduction to Event-Driven Architecture

Short description:

Hi, I'm Jarred, a technical lead at ADB Safegate. Today, I'll discuss event-driven architecture and the potential of real-time event-driven systems in JavaScript. I have experience in embedded systems and web development. Let's dive in!

Hi, my name's Jarred, and I'm a technical lead at ADB Safegate working on apron control software that increases airport performance, safety, and sustainability. Today I'm going to talk about event-driven architecture and some of the potential of having real-time event-driven systems in JavaScript.

Outside of my work at ADB Safegate: I'm American, but I moved to Sweden four years ago. My background's mainly in embedded systems and web development. And outside of working with computers, I like music production, playing computer games, and golfing.

We have a packed agenda for today, and we'll start from the ground up with some JavaScript runtime components, a little introduction to the JavaScript event loop, a brief overview of event-driven architecture, event emitters, WebSockets, RabbitMQ, MQTT, and then what it means to take all of these things into production. Let's get started.

2. Exploring JavaScript Runtimes and the Event Loop

Short description:

JavaScript is actually synchronous, blocking, and single-threaded, but different runtimes allow asynchronous operations. Today, we'll focus on Node.js, which has runtime components like the V8 engine and libuv. Node.js has a simple setup with a memory heap and a call stack. We'll explore examples of synchronous and asynchronous code execution and understand the role of the event loop.

So JavaScript. A lot of people think a lot of different things: that it's non-blocking, and that it's async I/O, but that's not the nature of actual JavaScript. As a language, it's actually synchronous, blocking, and single-threaded. But there are different runtimes that allow us to do asynchronous things, with varied performance, both in the browser and also on the server.

Today I'm primarily going to focus on Node.js, but most runtimes have the same sort of tools with different names and slightly different behaviors. Some of the runtime components in Node are the V8 engine, libuv, and crypto. There are a lot of C++-built features to help you access file systems and networking, and there's also a JavaScript library that allows you to use those C++ features directly from JavaScript.

So when I look at Node.js, it kind of looks like this. And during runtime, we have quite a simple setup. We have a memory heap, where we have all of our declared variables and functions, everything happening there, and we have our call stack. Functions are pushed onto the call stack, and when a function returns, it's popped off the call stack. This is quite simple, a last-in, first-out stack. We'll look at some short examples. We have some really, really complicated code here on the top left. On the right, we have our runtime, and on the bottom left, we have a console output. If we start to step through this code, we'll see that the first function enters the call stack. This is executed, and we get the results in the console. The same with the second statement. We get the output into the console, and the third statement into the console. And nothing there seems weird, right? We executed it in order. This is a synchronous script.

What happens if we try to do something asynchronous? We get our first console log statement, and we get the output. Then we have this readFile statement. And this is where it gets a bit interesting: this will actually be shoved off into libuv, and we'll continue on with our synchronous code. So we'll log first, and then third. And then after some time, when we're done reading our file, we will process the callback function from that readFile call. So we end up with an output that was first, third, and second. And we might wonder, you know, why is that? This is kind of what the event loop is doing.
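
A minimal sketch of that ordering, assuming Node.js and simply reading the script's own file:

```js
const fs = require('fs');

console.log('first');

// readFile is handed off to libuv; its callback runs later, once the
// file has been read and the event loop picks the callback up
fs.readFile(__filename, 'utf8', (err, data) => {
  if (err) throw err;
  console.log('second');
});

console.log('third');

// Console output: first, third, second
```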

3. Understanding the Event Loop

Short description:

The event loop is a design pattern that orchestrates the execution of synchronous and asynchronous code. It consists of six queues that hold callback functions. We start with the microtask queue, then move to the timer queue, and finally process the nextTick and promise callbacks. The loop continues in a clockwise direction, processing callbacks until the microtask queue is empty.

And the event loop is essentially just a design pattern that orchestrates the execution of synchronous and asynchronous code. It's always running in the background of your Node.js process. It's important to know that all of your synchronous code runs first, and isn't really a part of the event loop.

We'll take a little deeper look into the event loop. It's a very, very nice drawing here. It can be a little bit complicated if you haven't seen it before. But there's essentially six different queues at each iteration of the event loop. And they hold callback functions that are eventually executed on the normal call stack when they're ready.

We always start in the middle with the microtask queue. This contains two sub-queues, the nextTick queue and the promise callback queue. In the first of these sub-queues, we execute all of the process.nextTick callbacks until it's empty, and then we continue on with all of the native promise callbacks until that queue is empty. Once we've processed both of these sub-queues, we move on to the timer queue, typically known as the timer phase. This executes all callbacks associated with setTimeout and setInterval. And once that queue is completely drained, we move back to the middle, and we again process all of the nextTick and promise callbacks in order. You might start to get the idea. We move clockwise around the outer loop, and after each iteration, we move back to the middle and process nextTick and promise callbacks.
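
A tiny sketch of that ordering in Node.js, assuming nothing else is scheduled: synchronous code runs first, then the nextTick queue, then the promise queue, and only then the timer queue.

```js
setTimeout(() => console.log('timer queue: setTimeout'), 0);

Promise.resolve().then(() => console.log('microtask queue: promise'));

process.nextTick(() => console.log('microtask queue: nextTick'));

console.log('synchronous code');

// Output:
// synchronous code
// microtask queue: nextTick
// microtask queue: promise
// timer queue: setTimeout
```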

4. The Event Loop - IO Queue and Microtask Queue

Short description:

The IO queue handles async methods from modules like FS and HTTP. We process this and move back to the microtask queue. Then we continue to the check queue for setImmediate callbacks. After processing nextTick and promises, we enter the close phase. On the last step, we check for more nextTick and promises. If the microtask queue is empty, the event loop exits.

The IO queue handles all of our async methods from modules like fs and http. We process all of this, and we move back to the microtask queue, and then continue to the check queue where we handle all callbacks associated with setImmediate. We once again come back into the microtask queue, process all of our nextTick and promise callbacks, and then we go into the close phase. This handles all of the callbacks associated with the close event on async tasks. On the last step of the loop, we go back and we once again check for all of our nextTick and promise callbacks. And if there's anything in here, we will actually do a whole other cycle of the event loop. But if the microtask queue is completely empty, the event loop will technically exit.
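
A small sketch of the IO and check queues, assuming Node.js: inside an I/O callback, setImmediate (check queue) always runs before a zero-delay setTimeout (timer queue), and process.nextTick runs before both.

```js
const fs = require('fs');

// The readFile callback is processed from the IO queue
fs.readFile(__filename, () => {
  setTimeout(() => console.log('timer queue: setTimeout'), 0);
  setImmediate(() => console.log('check queue: setImmediate'));
  process.nextTick(() => console.log('microtask queue: nextTick'));
});

// Output (after the file has been read):
// microtask queue: nextTick
// check queue: setImmediate
// timer queue: setTimeout
```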

5. Understanding Event-Driven Architecture

Short description:

Hopefully this gave you a better understanding of how these things are implemented and executed. Event-driven architecture is about producing, detecting, consuming, and reacting to events. Producers generate events, consumers listen and react, channels transmit events, and processors handle logic in response to events. Events are raised when something changes in the application, and producers send events after they've happened. Event-driven architecture is used in microservices, IoT systems, and real-time data processing.

Hopefully this gave you a little bit better understanding around how these things are being implemented and executed, and why promises are actually so powerful inside of asynchronous JavaScript.

Now that we have a basic understanding of how asynchronous JavaScript works, we can talk a little bit more around event-driven architecture. This is just a definition, and it's all about producing, detecting, consuming, and reacting to events. It's kind of in the name, event-driven. One key term to think about: what is an event? This is essentially just a state change inside of your application; it's any occurrence or action that might trigger a response. There isn't always a response to an event, but it's still an event.

Producers are the things that generate the events. Consumers listen for and react to events. Channels are pathways through which events are transmitted. And processors are components that handle the logic in response to events. Some core concepts to understand about event-driven architecture: if something in your application changes, you should always raise an event, whether or not it's going to be consumed. Producers send events after they've happened, not before, and the producer does not care about what happens after the event is sent. The producer's only job is to propagate the change to the system, and it doesn't care what actually happens after that. You shouldn't design events for particular consumers, nor should they hold specific instructions, because events are immutable. And so when you start building the system out, you end up with something high-level like this, where you have producers that go into some sort of event middleware, and then you have consumers on the other side that are handling events and maybe triggering new events and becoming producers themselves, and you have this kind of cycle.

Some of the use cases for event-driven architecture are microservices. You might already be doing some event-driven things without even realizing it. EDA patterns are perfect for enabling communication between different services. Instead of having point-to-point connections, you can kind of spread everything out. IoT systems heavily use EDA patterns to handle data from numerous sensors and devices, and a lot of systems that require real-time data processing, like financial trading, also use EDA patterns. For us, we provide a solution that has a real-time view of what's happening at the airport, with features like visualization of all the aircraft and vehicles moving on the ground, and controlling, monitoring, and visualizing thousands of sensors. So our application is this one big event-driven mess sometimes. Through my experience working in this, there are a lot of pros and cons.

6. Tools and Techniques for Event-Driven Architecture

Short description:

Event-driven architecture provides the pros of decoupling components, enabling flexibility and scalability. However, it also comes with challenges such as data consistency, complexity, and event ordering. To address these, tools like event emitters and WebSockets are used to simplify building production applications.

Some of the pros are decoupling of components. Your producers and consumers don't need to know about each other. This gives you a lot of flexibility: you can add new consumers and services without necessarily changing the producers, and this allows you to scale. You can add in new components or remove them without affecting others, and it enables some real-time processing.

Some of the cons. Data consistency can be quite difficult. When sending events to multiple consumers, it can be really challenging to ensure everything's processed, especially if there's some slowdown in a service or transient failures. Complexity is also quite difficult, especially as the number of events and consumers grows. This makes debugging more challenging because of the asynchronous nature. And event ordering can also be quite complex if you need specific ordering of your events as they happen.

So now that we've talked about this, we'll talk about a couple of tools and things around EDA that make building production applications a lot easier. The first is event emitters, which are a native module in Node.js, and we'll focus on three main functions: emit, which allows you to emit an event; on, which allows you to listen for a certain event; and off, which allows you to stop listening to a certain event.


7. Event Emission and WebSocket Communication

Short description:

You can emit events within your application, handle errors, and manage async tasks. WebSockets provide efficient communication, allowing real-time updates from servers to clients. Socket.io is a popular package for WebSocket functionality. Connection management is crucial for handling dropped connections or service restarts.

You can emit events from one part of your application and listen to them in other parts. You can implement robust error handling by listening for error events specifically. And you can also leverage these to handle asynchronous operations by emitting events when async tasks are complete. This is really nice for internal process communication, but doesn't really expand outside of a single process.
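
A minimal sketch of those three functions, using Node's events module; the event names and the simulated async task are made up for illustration:

```js
const { EventEmitter } = require('events');

const emitter = new EventEmitter();

// Listen for a domain event in one part of the application
function onStatusChanged(status) {
  console.log('runway status changed to', status);
}
emitter.on('runway:status-changed', onStatusChanged);

// Robust error handling: without an 'error' listener, an emitted
// 'error' event would crash the process
emitter.on('error', (err) => console.error('handled error:', err.message));

// Emit events from another part of the application once async work finishes
setTimeout(() => {
  emitter.emit('runway:status-changed', 'closed');
  emitter.emit('error', new Error('sensor offline'));

  // Stop listening once the subscriber is no longer interested
  emitter.off('runway:status-changed', onStatusChanged);
}, 100);
```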

For things like that, you might start to think about WebSockets, which provide full duplex communication over a single long-lived connection. This is a lot better compared to traditional HTTP request-response cycles; it has a lot lower overhead on the network and also in timing. And this also allows servers to push updates directly to clients, so they have real-time communication.

In JavaScript, they're natively available via the WebSocket API, but there's also a popular npm package called Socket.io. These provide handlers such as onOpen, onMessage, onError, and onClose, which follows closely the event-driven patterns in other packages.

One thing to note is that connection management is super important to ensure your application behaves well if the connection drops or a service restarts. This is something that you have to do on your own.
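
A small sketch of that kind of connection management, assuming a runtime with a global WebSocket (browsers, or recent Node.js versions) and a hypothetical ws://localhost:8080 server:

```js
function connect(url, { retryMs = 2000 } = {}) {
  const ws = new WebSocket(url);

  ws.onopen = () => console.log('connected');
  ws.onmessage = (event) => console.log('update from server:', event.data);
  ws.onerror = (err) => console.error('socket error', err);

  // Connection management: when the connection drops, schedule a reconnect
  ws.onclose = () => {
    console.log(`disconnected, retrying in ${retryMs} ms`);
    setTimeout(() => connect(url, { retryMs }), retryMs);
  };

  return ws;
}

connect('ws://localhost:8080');
```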

8. Connection Management and Message Brokers

Short description:

Connection management is crucial. RabbitMQ is an open-source message broker with support for different protocols. It uses TCP connections, channels, queues, and exchanges. RabbitMQ offers publish and subscribe patterns and remote procedure calls. MQTT is a lightweight messaging protocol for IoT applications.


Another really good option for propagating events inside your system is RabbitMQ. RabbitMQ is an open-source message broker, so it takes care of sending messages around between different services, producers, and consumers. It has support for a lot of different protocols, but the main one that is usually used is AMQP. And this allows for reliable, flexible, scalable message routing.

Some key terms to know about RabbitMQ, if you start looking into it or somebody's talking to you about it: a connection is a TCP connection using the AMQP protocol. A channel is a multiplexed virtual connection inside of AMQP; this is how you perform different actions, and you can have multiplexed communication across a single TCP socket. Queues are the basic components for storing and managing messages, and exchanges are like a routing agent that allows you to route messages to one or more queues.
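
A minimal publisher sketch using the amqplib npm package, assuming a RabbitMQ broker on localhost and a hypothetical radar.updates fanout exchange:

```js
const amqp = require('amqplib');

async function publishRadarUpdate(update) {
  const connection = await amqp.connect('amqp://localhost'); // TCP connection over AMQP
  const channel = await connection.createChannel();          // multiplexed virtual connection

  await channel.assertExchange('radar.updates', 'fanout', { durable: false });

  // The producer only propagates the change; it doesn't know who consumes it
  channel.publish('radar.updates', '', Buffer.from(JSON.stringify(update)));

  await channel.close();
  await connection.close();
}

publishRadarUpdate({ aircraft: 'SK1422', position: [55.59, 13.0] }).catch(console.error);
```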

There are a couple of well-known patterns in RabbitMQ. The first would be publish and subscribe. This fits really well into event-driven architecture. We can have a publisher on the left-hand side that publishes into an exchange, which is just a routing agent that allows you to route messages to different queues. And consumers are able to bind their own queues to this exchange. So each consumer that we see on the right-hand side, consumer A, consumer B, and consumer C, can actually consume updates from this publisher at their own pace and deal with them in their own way. So this is really, really powerful inside of RabbitMQ for propagating change.
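
A companion consumer sketch for that publish/subscribe pattern, again with amqplib and the same assumed radar.updates exchange; each consumer binds its own server-named queue and processes updates at its own pace:

```js
const amqp = require('amqplib');

async function subscribeToRadarUpdates(consumerName) {
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();

  await channel.assertExchange('radar.updates', 'fanout', { durable: false });

  // An exclusive, server-named queue bound to the exchange just for this consumer
  const { queue } = await channel.assertQueue('', { exclusive: true });
  await channel.bindQueue(queue, 'radar.updates', '');

  channel.consume(queue, (msg) => {
    console.log(`${consumerName} received:`, msg.content.toString());
    channel.ack(msg);
  });
}

subscribeToRadarUpdates('consumer-A').catch(console.error);
```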

Another thing that you might look at is remote procedure calls, where service A needs to send an instruction to service B. In this case, service B instantiates a command queue. Service A interacts with this command queue by sending a correlation ID for the message and a reply queue, which is a queue that service A is subscribed to. So when service A sends the command to service B on this command queue, service B processes it and sends the response, with the correlation ID, on the provided reply queue back to service A.
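
A sketch of that request/reply flow from service A's side, using amqplib; the service-b.commands queue name and the command payload are illustrative assumptions:

```js
const amqp = require('amqplib');
const crypto = require('crypto');

async function sendCommand(command) {
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();

  // Service A's private reply queue and a correlation ID for this request
  const { queue: replyQueue } = await channel.assertQueue('', { exclusive: true });
  const correlationId = crypto.randomUUID();

  return new Promise((resolve) => {
    channel.consume(replyQueue, (msg) => {
      // Only accept the response that matches our correlation ID
      if (msg.properties.correlationId === correlationId) {
        resolve(JSON.parse(msg.content.toString()));
      }
    }, { noAck: true });

    // Send the command to service B's command queue, telling it where to reply
    channel.sendToQueue('service-b.commands', Buffer.from(JSON.stringify(command)), {
      correlationId,
      replyTo: replyQueue,
    });
  });
}

sendCommand({ type: 'get-history', hours: 24 }).then(console.log).catch(console.error);
```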

When we look at combining these things inside of your microservices, you will see a lot of things like this, where you have exchanges going in either direction. You have this exchange that's being exposed by service A, where client A is subscribed to it, and you also have command queues where client A can say, hey, give me all of the data for the past 24 hours; it's not enough that I just get the real-time updates.

Another really good option, which is more lightweight, would be MQTT. This is basically just a messaging protocol, so don't confuse it with the brokers themselves. It is commonly used in IoT applications, where you have low bandwidth or unreliable networks, or maybe there's power saving with your batteries and you don't want to have a consistent connection. And it operates on a basic publish-subscribe model, which allows you to decouple your producers and consumers. Some of the things you'll hear about when discussing MQTT are topics, which are just hierarchical strings that act as a routing mechanism.

9. MQTT Brokers and Performance Optimization

Short description:

MQTT has quality of service configuration with three levels. MQTT brokers automatically send updates to subscribed clients. Combining RabbitMQ, MQTT, event emitters, and WebSockets allows for decoupled services and clean code. Performance can be improved by using high-performing message brokers, deploying producers and brokers close together, and considering fault tolerance.

So this is kind of like your routing key. You have publishers, which are the clients that send messages to the broker, and subscribers, which read messages from the broker; or rather, the broker sends messages to the subscribers. It's also important to know that MQTT has a really nice quality of service configuration. There are three different levels: at level zero, the message is delivered at most once, and loss is acceptable. At quality of service level one, the message is delivered at least once, but duplicates may occur, and at level two, it's delivered exactly once and should have no duplicates.

If we look at an example of how MQTT brokers actually behave, we can see on the left-hand side we have two publishers, and we can see the data that they're publishing and the topic that they're publishing to. When they publish this, any client that is subscribed to that topic will automatically receive an update. So in this case, the subscriber on the top right is subscribed to the topic runway status. And as soon as the publisher on the top left publishes a new value, it will automatically be received by the client on the top right. So this is really, really nice. Now, if we kind of plug all those things together that we talked about, you know, RabbitMQ, MQTT, event emitters, WebSockets, you'll get something like this, which is just a hypothetical thing. It's not a drawing of one of our services, but on the left-hand side, we have a radar that feeds a radar service via TCP. And that can be really, really hard to spread across a system. So we shove that into RabbitMQ via an exchange, where we can have multiple services like service A and service B that are subscribed to this data. Service A might send this data somewhere else, and service B is taking other data in; service B creates its own kind of exchange that goes into another service, and that service is taking data from MQTT, and then we republish that, and then we can get it into a WebSocket to send it elsewhere. It seems really complicated, but in this way, we're able to decouple our services and put a lot of the logic inside of the middleware. This makes your code extremely clean. But as we talked about before, one of the cons with EDA is the complexity.
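
A small sketch of that runway-status example with the mqtt npm package, assuming a broker at mqtt://localhost:1883; the exact topic string and payload are made up:

```js
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://localhost:1883');

client.on('connect', () => {
  // QoS 1: delivered at least once, duplicates are possible
  client.subscribe('runway/status', { qos: 1 }, (err) => {
    if (err) console.error('subscribe failed', err);
  });

  // A publisher on the same topic; every subscribed client receives the update
  client.publish('runway/status', JSON.stringify({ runway: '17L', open: false }), { qos: 1 });
});

client.on('message', (topic, payload) => {
  console.log(`update on ${topic}:`, payload.toString());
});
```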

Moving on, there are some really nice things to know about performance and doing the right things with systems that are set up like this. The first thing I'd recommend is using well-known and high-performing message brokers like RabbitMQ or Apache Kafka. You might want to look into different serialization formats like Protobuf. Deploying producers, brokers, and all of your applications close to each other, in the same data center or the same availability zone, will also help a lot. Use asynchronous handlers; as we saw before, JavaScript handles async things really well, and even process events in batches if it's appropriate. But performance goes only so far if your application is unstable. You also need to think about fault tolerance. The more data that you have being thrown around inside of your application, the harder it is to make it high quality. Always run your broker in a cluster and ensure your data is configured to be replicated, so you can safeguard against data loss. Ensure that your events are durably stored.
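
A minimal sketch of the durability side with amqplib: a durable queue plus persistent messages, so events survive a broker restart. The queue name is a made-up example, and a production setup would also involve broker clustering and replication policies configured on the RabbitMQ side:

```js
const amqp = require('amqplib');

async function publishDurably(event) {
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();

  // durable: true means the queue definition survives a broker restart
  await channel.assertQueue('events.durable', { durable: true });

  // persistent: true asks the broker to write the message to disk
  channel.sendToQueue('events.durable', Buffer.from(JSON.stringify(event)), {
    persistent: true,
  });

  await channel.close();
  await connection.close();
}

publishDurably({ type: 'sensor-reading', value: 42 }).catch(console.error);
```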

10. Message Processing and System Building

Short description:

Configure brokers to store messages until they are successfully processed. Handle the same events multiple times without unintended side effects. Use automatic retries and dead letter queues for transient failures. Practice chaos engineering and document producers and consumers well. Focus on building simple complexity. EDA enables scalability, flexibility, and real-time processing. RabbitMQ supports complex routing. MQTT allows reliable message delivery in constrained environments.

Most brokers can be configured so that the message is stored until the consumer has successfully processed it. From the consumer side, you should always be able to handle the same events multiple times without unintended side effects; this is known as idempotency. And have automatic retries for transient failures with exponential backoff: retry five times at different intervals, and if that doesn't work, you should send the message to what is called a dead letter queue so that you can analyze it later.
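
A consumer sketch that ties those ideas together with amqplib: an idempotency check, acknowledgements only after successful processing, and rejected messages routed to a dead letter queue. Queue names, the in-memory set of processed IDs, and handleEvent are illustrative assumptions:

```js
const amqp = require('amqplib');

const processedIds = new Set(); // idempotency: remember event IDs we've already handled

async function startConsumer() {
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();

  await channel.assertQueue('events.dead-letter', { durable: true });
  await channel.assertQueue('events.main', {
    durable: true,
    // Rejected messages are routed to the dead letter queue for later analysis
    deadLetterExchange: '',
    deadLetterRoutingKey: 'events.dead-letter',
  });

  channel.consume('events.main', async (msg) => {
    const event = JSON.parse(msg.content.toString());

    // Handling the same event twice must not cause side effects
    if (processedIds.has(event.id)) return channel.ack(msg);

    try {
      await handleEvent(event); // application logic (assumed)
      processedIds.add(event.id);
      channel.ack(msg); // the broker keeps the message until it's acknowledged
    } catch (err) {
      // requeue = false sends the message to the dead letter queue
      channel.nack(msg, false, false);
    }
  });
}

async function handleEvent(event) { /* ... */ }

startConsumer().catch(console.error);
```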

Basically, any message you can't process, send it there, and then you can read the logs and figure out what happened. And some general tips for building systems like this: have good monitoring and alerting to keep track of your system's health and performance, and practice chaos engineering by regularly testing how your system behaves by injecting faulty messages intentionally. Another thing that is really nice is documenting your producers and consumers really well. If you think back on my small example from before, it can be a bit confusing, and there's a difference between complex and complicated.

Complex means that you can trace through some documentation and figure out how it's working even if there are a lot of things. But complicated means that it's really hard to find the right path through the system. So focus on building simple complexity instead of making things complicated. To wrap things up, EDA is all about scalability, flexibility, and enabling real-time processing. RabbitMQ is a really nice broker, open source, free to use, that supports various messaging patterns and complex routing. MQTT is a super lightweight protocol with a publish and subscribe model that allows you to have reliable message delivery in constrained environments. Thank you.
