When JavaScript Meets OpenTelemetry: It's Observability O'Clock


Ever used an app that crashed and thought, "Why can't this just work?" For users, it's frustrating. For businesses, it's expensive. As JavaScript developers, we're stuck in the middle - juggling frontend tweaks, backend performance, and monitoring tools to figure out what went wrong. That's where observability comes in, because understanding your code is like going to the gym: no shortcuts, no pills. You've got to put in the work to put on the muscle. It's not just some SRE buzzword anymore but a must-have for developers. Observability helps you spot issues, understand system behavior, and fix problems faster. And with OpenTelemetry, you get an open-source framework to manage logs, traces, and metrics without vendor lock-in.

In this session, you'll learn all things observability: how to optimize both frontend and backend, and how to integrate OpenTelemetry to gain insight into your system. You'll also learn how to practically integrate OpenTelemetry into your JavaScript apps - creating spans, tracking activities, and naming them effectively. The result? Fewer headaches, better performance, and apps that not only work but that you completely understand.

This talk was presented at JSNation 2025. Check out the latest edition of this JavaScript conference.

FAQ

Yash Verma is a software engineer working in the observability space and a contributor to the OpenTelemetry Project. He enjoys cooking and biking.

Observability is the ability to understand the internal state of a system from its external outputs or signals, allowing engineers to ask questions and get answers to ensure systems behave as expected.

Monitoring tells you when something is wrong, while observability goes deeper, explaining why and where the issue occurred and helping with debugging.

The OpenTelemetry Project is a framework providing APIs, SDKs, and tools for creating telemetry data and sending it to a backend of choice, supporting distributed tracing, metrics, and logging.

OpenTelemetry is important because it is a vendor-agnostic framework that simplifies telemetry data handling, supporting multiple languages and allowing integration with various backends.

In OpenTelemetry, a trace consists of spans, which are its building blocks. Each span has a name, ID, parent ID, start and end time, and metadata, forming a complete transaction flow.

The OpenTelemetry Collector acts as a proxy between applications and backend observability frameworks, enabling data filtering, batching, compression, and routing to reduce costs and manage telemetry data.

In OpenTelemetry for JavaScript, tracing and metrics are stable, while logging is still being developed. The community actively works on client-side instrumentation.

To start using OpenTelemetry in Node.js, install OpenTelemetry SDKs and APIs, and implement auto or manual instrumentation to begin tracing and collecting metrics and logs.

Semantic convention in OpenTelemetry provides a standardized naming system for traces, metrics, logs, and resources to ensure consistent data interpretation across systems.

Yash Verma
24 min
16 Jun, 2025

Video Summary and Transcription
Yash Verma, a software engineer working in observability, discusses OpenTelemetry, covering the history and importance of monitoring in modern observability; the evolution of tools such as Prometheus and Facebook's Project Scuba for real-time queries; the challenges of distributed systems and the shift to tracing for system understanding; OpenTelemetry's role as a vendor-agnostic framework for efficient telemetry data handling; the significance of semantic conventions; and community engagement in OpenTelemetry, with practical integration insights and the industry's movement towards value-driven observability.

1. Introduction to Observability and OpenTelemetry

Short description:

Yash Verma, software engineer in observability. Talk on JavaScript and OpenTelemetry. Definition of observability. History of modern observability and the significance of monitoring.

Hi, everyone. My name is Yash Verma. I am a software engineer working in the observability space. I am one of the contributors to the OpenTelemetry project, and I love cooking, and biking is one of my side pursuits. This is my pretty dog. His name is Ty. We have been together for about 12 years now, and we are still going strong.

So, moving on. Today's talk is titled When JavaScript Meets OpenTelemetry: It's Observability O'Clock. The first question that pops into our minds is: why are we even talking about observability in the first place? Because it has never been so important, and never so complex, at the same time. Let me quickly walk you through today's agenda and what we are going to see in this session. First, we will talk about what observability is, then the different telemetry signals, and then we'll take a deep dive into the OpenTelemetry project and a bit of its history.

Then we'll look at an OpenTelemetry JavaScript code demonstration. And finally, we'll talk about the state of observability on the server side and the client side with respect to OpenTelemetry. I wanted to start this talk with OpenTelemetry, but first we need a clear understanding of what observability actually is, because, to be honest, if observability is the why, then OpenTelemetry is the how. Control theory defines observability as the ability to understand the internal state of a system from its external outputs, or signals. It's the process where we ask questions and get answers until we are satisfied that our systems are behaving the way they should. So, a bit of history on where we started and what we now call modern observability. It all started back in the early 2000s. In those days, if you asked someone how they kept a monitor on their system, the answer was: we keep track of a bunch of metrics. By that, they meant the colorful lines and graphs you see on the screen right now. These graphs told us when something was wrong. Metrics were everything back then, mostly because they were cheap to store at a time when storage itself was really expensive. There's also one thing I want to address in this talk: I see many people framing it as observability versus monitoring. Honestly, the way I look at it, monitoring is just a subset of observability. Monitoring tells you that something is wrong.


2. Evolution of Modern Observability Tools

Short description:

Modern observability roots with Facebook's Project Scuba. Real-time queries and visualization for system behavior. Development of login and time series tools like Prometheus.

Observability tells you why it's wrong and where it's wrong, and from that you can usually work out how to debug it. Moving on with the history: by 2010, we were starting to see the roots of modern observability. Around this time, Facebook was becoming a massive company, with millions of users joining, and scaling became a problem for them. The team at Facebook came up with a solution they called Project Scuba. It worked on large, distributed, arbitrary datasets, which they called wide events.

The goal was pretty simple: it let engineers run real-time queries and visualize the results to understand the behavior of a system. Say, for example, someone from the APAC region logs into Facebook, but later that day, when they travel to Europe, some features don't work or take longer to load. Using Scuba, engineers could figure out what changed, what broke, and why it broke in a specific location or at a specific time. These are the things engineers used to dig into, and wide events gave them a complete picture of everything running on their systems.

Here's the link to the Scuba project if you really want to read about it. I suggest you go and read it; it's a really interesting project, and it will give you an idea of how things worked back then. Moving forward: by around 2012, we saw a boom in logging and time-series tools - tools like Prometheus, the ELK stack, and many others. Developers started moving beyond console.log because systems were getting more complex and user bases were becoming insanely huge.

3. Challenges in Distributed Systems and Tracing

Short description:

Developers adapting to complex systems. Shift to distributed applications. Introduction of distributed tracing for system understanding.

This pushed us toward storing logs so that we could use them later and run analytics on top of them.

By 2016 and 2017, things got even crazier, because now we were not building simple applications anymore. Back in the day, we had a basic front end, a back end, and some data. But now a lot of the application lives inside the browser itself, and everything is part of a distributed system. Welcome to the new era of software engineering.

Everything was distributed. The databases live somewhere else, the front end lives somewhere else, the back end somewhere else again. And that means more complexity, and more fear: what are the chances that my small code change to a simple front end won't have a ripple effect somewhere else down the line? Maybe something breaks, and we don't know about it. This is where distributed tracing comes in. It shows you the entire journey of a request flowing across systems and services, like a transaction flow: if this happened, this was probably the reason behind it, and this is the potential failure associated with this particular step. It also tells you how much time each operation took to execute. Every detail is out there.

4. Understanding System Tracing and Context

Short description:

Tracing benefits in system understanding. Definition of trace and spans. Context propagation for system visibility.

That's what tracing gives you. It's all about causality. It helps you figure out why a system is slowing down, and why it sometimes fails. It's the whole picture.

Now the question is: what is a trace? Is it an independent entity, and what exactly does it look like? A trace is made up of one or more spans; spans are the building blocks of a trace. Each span has a span name, a span ID, a parent span ID if it's part of a longer transaction, a start time, an end time, and a bunch of other metadata. That's what a simple trace looks like. Interesting, right?
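As a rough illustration of the fields listed above, the data a span carries might look like the following. This is a simplified sketch for intuition only, not the SDK's actual internal representation; the field and attribute names are illustrative.

```javascript
// Simplified sketch of the data a single span carries.
// Field names are illustrative; the real SDK uses its own types.
const checkoutSpan = {
  name: 'POST /checkout',
  traceId: '0af7651916cd43dd8448eb211c80319c', // shared by every span in the trace
  spanId: 'b7ad6b7169203331',
  parentSpanId: '00f067aa0ba902b7', // absent on the root span
  startTime: 1718535600000, // epoch millis
  endTime: 1718535600250,
  attributes: { 'http.request.method': 'POST' }, // extra metadata
};

// Because each span records start and end time, the duration of
// every operation in the transaction falls out for free:
const durationMs = checkoutSpan.endTime - checkoutSpan.startTime;
console.log(durationMs); // 250
```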

All these spans, spread across different systems - across microservices or even within the same system - are connected through a mechanism called context propagation. That's how we get full visibility into our system, at a service-to-service level or as a bigger picture. The next question is: are logs enough? When I first started learning about tracing, I asked myself: wait, we have logs and we have metrics - why don't they give us the level of visibility that tracing does? The answer lies in context. Tracing is all about context; it's full of rich contextual data.
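In practice, context propagation usually means passing a header between services. OpenTelemetry's default propagator follows the W3C Trace Context standard, which encodes the trace ID and the parent span ID in a `traceparent` header. A minimal sketch of what a service on the receiving end conceptually does (the SDK's propagators handle this for you; the parsing function here is hypothetical):

```javascript
// Sketch: reading a W3C `traceparent` header.
// Format: version-traceId-parentSpanId-flags
function parseTraceparent(header) {
  const [version, traceId, parentSpanId, flags] = header.split('-');
  return { version, traceId, parentSpanId, sampled: flags === '01' };
}

const ctx = parseTraceparent(
  '00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01'
);
console.log(ctx.traceId);  // '0af7651916cd43dd8448eb211c80319c'
console.log(ctx.sampled);  // true
```

A downstream service uses the `traceId` to attach its own spans to the same trace, which is how one request's spans stay stitched together across service boundaries.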

5. Log Challenges and Rise of OpenTelemetry

Short description:

Logs in production can be challenging due to volume and cost. Tracing provides comprehensive transaction insights. The emergence of open telemetry and the consolidation of tracing standards.

Metrics are fine and logs are great, but they usually lack the deeper context that tells you why something is happening. Logs are great for the dev environment, and structured logs are even better if you're using them in production. But here's the problem: logs in production can be messy because of their huge volume, and sometimes they can be a pain. They are hard to search, hard to scale, and they cost a lot precisely because of that volume. One tiny error can bring down your entire production system, and you never know where it came from.

It's also very common for us developers to push basic console.log messages to the production environment, which becomes a huge concern when it comes to cost. That's a different story, but it's true. What I particularly love about tracing is that you can derive metrics and logs from a trace, but you cannot do the reverse. Tracing gives you an end-to-end picture of your entire transaction.

Now let's talk about the rise of OpenTelemetry. By 2017, the observability space had exploded, with a lot of vendors shipping proprietary SDKs and telling users they were the observability tool that's great at all sorts of things. At the same time, in the world of tracing, two competing standards were gaining traction: OpenCensus and OpenTracing. Having two standards made things a little messy, because people were confused about which one to use, or whether to use both. In 2019, the communities from these two projects came together and decided to combine them into one, and that's how OpenTelemetry was born. And yes, some people joked about it, because it feels like the classic XKCD comic: trying to fix two competing standards, we created a third one, so now there are three.

6. OpenTelemetry Framework and Instrumentation

Short description:

OpenTelemetry becoming the standard for distributed tracing, metrics, and logging. OpenTelemetry is a framework for handling telemetry data through instrumentation. Two types of instrumentation: auto and manual for control over data tracing.

But this time, something actually worked out, because in 2022 OpenCensus and OpenTracing were deprecated by the community itself. The goal was clear: OpenTelemetry should be the de facto standard in the world of distributed tracing. OpenTelemetry started with the vision of distributed tracing, then moved on to metrics, and now it's doing well in logging too.

Now let's understand what OpenTelemetry actually is. Some people believe it's just another observability tool, but it's not. At its core, OpenTelemetry is a framework - a protocol for how we handle telemetry data. It gives you a set of APIs, SDKs, and tools to create telemetry data and send it to the backend of your choice.

But how do you get your application to emit this data? Through a process called instrumentation. There are two types of instrumentation. One is auto instrumentation, where you add one or two lines, import some packages, and boom - you're good to go: you get your traces, your metrics, your logs. The other is manual instrumentation, which requires a bit more effort on your side but gives you full control over what data you collect, what you trace, and how deep you go into a trace in order to debug a performance issue in your application.

7. Importance of OpenTelemetry and Collector

Short description:

OpenTelemetry's importance lies in being a vendor-agnostic framework, handling telemetry data efficiently. The OpenTelemetry Collector facilitates data filtering, batching, and routing, reducing costs. To start using OpenTelemetry in Node.js, install scripts, SDKs, and APIs, beginning with auto instrumentation for beginners.

It's especially helpful when you need to observe sensitive data or use a third party. It's all about the context, which I talked about. The more you engineer around this thing, the more data you will get. This leads us to the question of why OpenTelemetry matters. People are moving towards OpenTelemetry because it's a vendor-agnostic framework. Your application only needs to emit telemetry data, and OpenTelemetry handles the rest, including routing it using the OTLP protocol.

OpenTelemetry supports 12 different languages, including JavaScript. The OpenTelemetry Collector, considered the heart of the project, sits between your application and observability framework. It allows filtering, batching, compressing, and data routing to different destinations. By using techniques like sampling, you can reduce egress costs. There are two versions of the OpenTelemetry Collector: core and contrib, the latter offering more features used by the community.

To start using OpenTelemetry in your Node.js application, install the necessary scripts, SDKs, and APIs. For beginners, auto instrumentation is recommended to understand the system workings. If you wish to delve deeper, manual instrumentation provides more control. These steps are simple and effective for integrating OpenTelemetry into your application.
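The auto-instrumentation route described above can be sketched as a single bootstrap file. This is a minimal sketch, assuming the `@opentelemetry/sdk-node`, `@opentelemetry/auto-instrumentations-node`, and `@opentelemetry/exporter-trace-otlp-http` packages are installed; package and option names follow the OpenTelemetry JS docs at the time of writing and may evolve.

```javascript
// tracing.js - load before your app, e.g.:
//   node --require ./tracing.js app.js
const { NodeSDK } = require('@opentelemetry/sdk-node');
const {
  getNodeAutoInstrumentations,
} = require('@opentelemetry/auto-instrumentations-node');
const {
  OTLPTraceExporter,
} = require('@opentelemetry/exporter-trace-otlp-http');

const sdk = new NodeSDK({
  // Ship spans to a local OpenTelemetry Collector over OTLP/HTTP
  // (4318 is the default OTLP/HTTP port).
  traceExporter: new OTLPTraceExporter({
    url: 'http://localhost:4318/v1/traces',
  }),
  // Auto-instrument common libraries (http, express, pg, ...).
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();
```

With this in place, incoming HTTP requests and calls to instrumented libraries start emitting spans without touching your application code.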

8. Utilizing OpenTelemetry in Node.js Application

Short description:

OpenTelemetry Collector plays a central role in managing data efficiently. Start with auto instrumentation for beginners in OpenTelemetry. Be cautious about sending console data to the production system. Create spans by tagging them with the parent span for longer transactions.

The OpenTelemetry Collector is a central proxy sitting between your application and your backend observability framework. You can filter the data, batch it, sometimes compress it, drop it, and even write it to different files or to Kafka - whatever your requirement is. You can save a ton of data cost by using sampling techniques to reduce the data: tail sampling, head-based sampling, or a combination of both, which keeps the most useful context while helping you reduce egress costs. There are two versions of the OpenTelemetry Collector. One is core, which has the basic features; the other is contrib. The Collector contrib distribution includes more processors and exporters, actively used and contributed by community members, though not all of them are officially supported.

Now let's get started with how you can use OpenTelemetry in your Node.js application. It's a pretty simple set of steps: you just have to install a few packages - the OpenTelemetry SDKs and APIs. If you're a first-time OpenTelemetry user, please start with auto instrumentation. It's very helpful: the amount of data is huge, but you'll be able to see how things work. Later, if you're really interested in going deeper, move to manual instrumentation. That's the recommended path, I would say.

Here's how you can set up a trace exporter. It's a basic step: you just have to copy and paste a small amount of code and you're good to go - it's nothing like a big code base, it's very small. Also, please be aware that you should not ship console logging to the production system, because it really costs you a lot. For development purposes, you can log everything to the console and check whether it's working; in production, please try to avoid it. Now let's see how you can create spans. It's a simple step: you create a span and, if it's part of a longer transaction, tag it with the parent span. If it's the first span, it becomes the root span. That's the basic idea.
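A manual-instrumentation sketch of the parent/child span idea just described, assuming the `@opentelemetry/api` package and an already-configured SDK (service and span names here are hypothetical). `startActiveSpan` makes the new span a child of whatever span is currently active, or a root span if none is:

```javascript
const { trace } = require('@opentelemetry/api');

const tracer = trace.getTracer('checkout-service');

async function handleCheckout(cart) {
  // Root span if nothing is active; otherwise a child of the active span.
  return tracer.startActiveSpan('handleCheckout', async (span) => {
    try {
      span.setAttribute('cart.items', cart.length);
      // This inner span is automatically parented to `handleCheckout`,
      // because `handleCheckout` is the active span when it starts.
      await tracer.startActiveSpan('chargeCard', async (child) => {
        // ... call the payment provider here ...
        child.end();
      });
      return 'ok';
    } finally {
      span.end(); // always end spans, even on error
    }
  });
}
```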

9. Handling Span Exceptions and Semantic Conventions

Short description:

Learn about recording span exceptions and the significance of semantic conventions in OpenTelemetry. Understand the importance of naming conventions within software engineering.

There's one more thing: how you can record a span exception. When some feature does not work, the span records the error. We can also add logic beforehand: if a certain thing happens in a particular span, we can pull that information out and record it into the longer trace, so that when we come back later we can understand how things played out. So we can record exceptions and perform operations based on them, too.
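Recording an exception might look like the following sketch, assuming the `@opentelemetry/api` package and a configured SDK. `recordException` attaches the error to the span as an event, and `setStatus` marks the span itself as failed, so the trace shows both what broke and where:

```javascript
const { trace, SpanStatusCode } = require('@opentelemetry/api');

const tracer = trace.getTracer('checkout-service');

function riskyOperation() {
  tracer.startActiveSpan('riskyOperation', (span) => {
    try {
      throw new Error('payment provider timed out'); // simulated failure
    } catch (err) {
      // Attach the exception as a span event so it appears in the trace...
      span.recordException(err);
      // ...and mark the span itself as errored.
      span.setStatus({ code: SpanStatusCode.ERROR, message: err.message });
    } finally {
      span.end();
    }
  });
}
```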

What is a semantic convention? Semantic conventions are a very important thing in the OpenTelemetry world because, trust me, in software engineering naming is a big challenge. Naming a variable, a function, naming anything is a big challenge. So OpenTelemetry came up with the idea of semantic conventions, where it is defined what certain things are called: things from traces, metrics, logs, and resources all have particular, agreed names. For example, if I have to record which method or protocol an HTTP request came in with, there is a defined attribute name for that, used consistently across the entire ecosystem; you generally cannot change it. Everything has its own conventions - traces have their own, metrics have their own, logs have their own.
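Concretely: instead of every team inventing its own attribute keys (`method`, `httpMethod`, `verb`, ...), you use the names the conventions define. The `@opentelemetry/semantic-conventions` package exports them as constants; the sketch below just writes the standardized string names directly (names as in the HTTP semantic conventions at the time of writing):

```javascript
// Span attributes using standardized semantic-convention names,
// so every backend interprets them the same way.
const attributes = {
  'http.request.method': 'GET',      // not `method` or `verb`
  'http.response.status_code': 200,  // not `status` or `code`
  'url.path': '/api/orders',
  'server.port': 8080,
};

// Any tool can now answer "which HTTP methods are slow?"
// without guessing how each team named the field.
console.log(attributes['http.request.method']); // 'GET'
```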

10. OpenTelemetry JavaScript and Client-Side Progress

Short description:

Explore the status of signals in OpenTelemetry, emphasizing stable tracing and metrics but noting ongoing work on logs. Learn about specification compliance and the evolving OpenTelemetry.js repository for client-side instrumentation.

Now let's check how much has been done in the OpenTelemetry world with respect to JavaScript. With respect to the signals, tracing is pretty much stable and metrics are stable; however, logs are not fully stable yet - some parts are still left. A great amount of work is being done by the community itself, and you are always welcome to check out our GitHub repository and get involved.

There's also one thing I want to share with you: specification compliance. If you want to check whether a particular feature - say, getting a trace ID with a schema URL - is available in the JS implementation yet, you can definitely go and check. There's a whole matrix of features; the link is here. You can see which features have been delivered by the community and which are still in progress.

Now let's talk about the client side of things - the front end. A lot of things are still in the experimental phase. The core opentelemetry-js repository does not yet have a fully grown standard for client-side instrumentation, but there's a whole lot going on in the opentelemetry-js-contrib repository. People from across the world are working out how to understand, analyze, and standardize the flow of data from the front end: capture it, store it, and correlate it with the back-end side of things. You are definitely welcome to go and see how things look. This is a snapshot of the latest work happening in the OpenTelemetry world. Things that have been done include storing session data on resources and the implementation of a page view event instrumentation, and the Events and Logs APIs have been stabilized. One important work in progress is the core web vitals plugin; a few pieces are still underway, and it's huge. But the vision is clear: OpenTelemetry will be capable of collecting and analyzing this data, giving you end-to-end visibility from your front end to your back end. There's also something called OpenTelemetry real user monitoring - go and check it out, it's pretty good.

11. Community Engagement and Observability Evolution

Short description:

Explore community involvement and resources in OpenTelemetry. Start with the demo project for practical application insights. Highlight the shift towards value-driven observability in the industry.

A very good amount of work is going on in that particular field, so I would recommend you check it out. Now, another thing on the front-end side: a few companies run a lot of OpenTelemetry-inspired observability in their front-end applications, but it's not a fully accepted community standard yet. So if you have some ideas and want to contribute, or just want to learn what's happening in the community, you are always welcome - we have office hours designed for that. That's the best place.

Now, if you are not aware of what OpenTelemetry is and you don't know where to start, the OpenTelemetry demo project is a good place. It's an e-commerce-inspired application with a bunch of microservices in multiple languages. There's a checkout, and an add-to-cart service that triggers calls across different services in different languages - a great way to see how things work in action and how you can start using OpenTelemetry in your own work.

So, yeah, I think that's it for my talk. If there's one key takeaway, it's this: OpenTelemetry has definitely changed the way we look at the observability space. We are no longer in a race for a stable SDK; now we need more value out of what we have. The vendors who used to compete against each other on "we have this stable SDK" are now competing to provide us with efficient and cost-effective observability, which is very helpful for us. That was all for my talk - I hope you enjoyed it. If you want to check out how you can get involved in OpenTelemetry, here are a few links. Thank you for tuning in; I hope you have a great conference. Bye bye.

Check out more articles and videos

We constantly think of articles and videos that might spark Git people interest / skill us up or help building a stellar career

A Guide to React Rendering Behavior
React Advanced 2022React Advanced 2022
25 min
A Guide to React Rendering Behavior
Top Content
This transcription provides a brief guide to React rendering behavior. It explains the process of rendering, comparing new and old elements, and the importance of pure rendering without side effects. It also covers topics such as batching and double rendering, optimizing rendering and using context and Redux in React. Overall, it offers valuable insights for developers looking to understand and optimize React rendering.
Speeding Up Your React App With Less JavaScript
React Summit 2023React Summit 2023
32 min
Speeding Up Your React App With Less JavaScript
Top Content
Watch video: Speeding Up Your React App With Less JavaScript
Mishko, the creator of Angular and AngularJS, discusses the challenges of website performance and JavaScript hydration. He explains the differences between client-side and server-side rendering and introduces Quik as a solution for efficient component hydration. Mishko demonstrates examples of state management and intercommunication using Quik. He highlights the performance benefits of using Quik with React and emphasizes the importance of reducing JavaScript size for better performance. Finally, he mentions the use of QUIC in both MPA and SPA applications for improved startup performance.
React Concurrency, Explained
React Summit 2023React Summit 2023
23 min
React Concurrency, Explained
Top Content
Watch video: React Concurrency, Explained
React 18's concurrent rendering, specifically the useTransition hook, optimizes app performance by allowing non-urgent updates to be processed without freezing the UI. However, there are drawbacks such as longer processing time for non-urgent updates and increased CPU usage. The useTransition hook works similarly to throttling or bouncing, making it useful for addressing performance issues caused by multiple small components. Libraries like React Query may require the use of alternative APIs to handle urgent and non-urgent updates effectively.
How React Compiler Performs on Real Code
React Advanced 2024React Advanced 2024
31 min
How React Compiler Performs on Real Code
Top Content
I'm Nadia, a developer experienced in performance, re-renders, and React. The React team released the React compiler, which aims to eliminate the need for manual memoization. The compiler optimizes code by automatically memoizing components, props, and hook dependencies. It shows promise in managing changing references and improving performance. Real app testing and synthetic examples have been used to evaluate its effectiveness. The impact on initial load performance is minimal, but further investigation is needed for interaction performance. The React Query library simplifies data fetching and caching. The compiler has limitations and may not catch every re-render, especially with external libraries. Enabling the compiler can improve performance, but manual memoization is still necessary for optimal results. There are risks of overreliance and messy code, but the compiler can be adopted file by file or folder by folder with thorough testing.
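The memoization the compiler automates boils down to one idea: cache the last result and reuse it while the inputs are referentially unchanged. A minimal sketch in plain JavaScript — the `memoizeOne` name and shape are illustrative, not the compiler's output; React's `memo()` and `useMemo()` apply the same comparison to props and dependency arrays.

```javascript
// Cache the most recent result; recompute only when an argument's
// identity changes (Object.is, the same check React uses for deps).
function memoizeOne(fn) {
  let lastArgs = null;
  let lastResult;
  return (...args) => {
    const same =
      lastArgs !== null &&
      lastArgs.length === args.length &&
      lastArgs.every((a, i) => Object.is(a, args[i]));
    if (!same) {
      lastArgs = args;
      lastResult = fn(...args);
    }
    return lastResult;
  };
}
```

This is also why "changing references" matter: a new object or array literal on every render defeats the identity check, whether the memoization is manual or compiler-generated.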
The Future of Performance Tooling
JSNation 2022
21 min
The Future of Performance Tooling
Top Content
Today's Talk discusses the future of performance tooling, focusing on user-centric, actionable, and contextual approaches. The introduction highlights Addy Osmani's expertise in performance tools and his passion for DevTools features. The Talk explores the integration of user flows into DevTools and Lighthouse, enabling performance measurement and optimization. It also showcases the import/export feature for user flows and the collaboration potential with Lighthouse. The Talk further delves into the use of flows with other tools like WebPageTest and Cypress, offering cross-browser testing capabilities. The actionable aspect emphasizes the importance of metrics like Interaction to Next Paint and Total Blocking Time, as well as the improvements in Lighthouse and performance debugging tools. Lastly, the Talk emphasizes the iterative nature of performance improvement and the user-centric, actionable, and contextual future of performance tooling.
Optimizing HTML5 Games: 10 Years of Learnings
JS GameDev Summit 2022
33 min
Optimizing HTML5 Games: 10 Years of Learnings
Top Content
Watch video: Optimizing HTML5 Games: 10 Years of Learnings
PlayCanvas is an open-source game engine used by game developers worldwide. Optimization is crucial for HTML5 games, focusing on load times and frame rate. Texture and mesh optimization can significantly reduce download sizes. GLTF and GLB formats offer smaller file sizes and faster parsing times. Compressing game resources and using efficient file formats can improve load times. Framerate optimization and resolution scaling are important for better performance. Managing draw calls and using batching techniques can optimize performance. Browser DevTools, such as Chrome and Firefox, are useful for debugging and profiling. Detecting device performance and optimizing based on specific devices can improve game performance. Apple is making progress with WebGPU implementation. HTML5 games can be shipped to the App Store using Cordova.

Workshops on related topics

React Performance Debugging Masterclass
React Summit 2023
170 min
React Performance Debugging Masterclass
Top Content
Featured Workshop
Ivan Akulov
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
Next.js 13: Data Fetching Strategies
React Day Berlin 2022
53 min
Next.js 13: Data Fetching Strategies
Top Content
Workshop
Alice De Mauro
- Introduction
- Prerequisites for the workshop
- Fetching strategies: fundamentals
- Fetching strategies – hands-on: fetch API, cache (static VS dynamic), revalidate, suspense (parallel data fetching)
- Test your build and serve it on Vercel
- Future: Server components VS Client components
- Workshop easter egg (unrelated to the topic, calling out accessibility)
- Wrapping up
React Performance Debugging
React Advanced 2023
148 min
React Performance Debugging
Workshop
Ivan Akulov
Ivan’s first attempts at performance debugging were chaotic. He would see a slow interaction, try a random optimization, see that it didn't help, and keep trying other optimizations until he found the right one (or gave up).
Back then, Ivan didn’t know how to use performance devtools well. He would do a recording in Chrome DevTools or React Profiler, poke around it, try clicking random things, and then close it in frustration a few minutes later. Now, Ivan knows exactly where and what to look for. And in this workshop, Ivan will teach you that too.
Here’s how this is going to work. We’ll take a slow app → debug it (using tools like Chrome DevTools, React Profiler, and why-did-you-render) → pinpoint the bottleneck → and then repeat, several times more. We won’t talk about the solutions (in 90% of the cases, it’s just the ol’ regular useMemo() or memo()). But we’ll talk about everything that comes before – and learn how to analyze any React performance problem, step by step.
(Note: This workshop is best suited for engineers who are already familiar with how useMemo() and memo() work – but want to get better at using the performance tools around React. Also, we’ll be covering interaction performance, not load speed, so you won’t hear a word about Lighthouse 🤐)
Building WebApps That Light Up the Internet with QwikCity
JSNation 2023
170 min
Building WebApps That Light Up the Internet with QwikCity
Workshop (Free)
Miško Hevery
Building instant-on web applications at scale has been elusive. Real-world sites need tracking, analytics, and complex user interfaces and interactions. We always start with the best intentions but end up with a less-than-ideal site.
QwikCity is a new meta-framework that allows you to build large-scale applications with constant startup performance. We will look at how to build a QwikCity application and what makes it unique. The workshop will show you how to set up a QwikCity project and how routing works with layouts. The demo application will fetch data and present it to the user in an editable form. And finally, we will look at how one can use authentication. These are the basic parts of any large-scale application.
Along the way, we will also look at what makes Qwik unique, and how resumability enables constant startup performance no matter the application complexity.
High-performance Next.js
React Summit 2022
50 min
High-performance Next.js
Workshop
Michele Riva
Next.js is a compelling framework that makes many tasks effortless by providing many out-of-the-box solutions. But as soon as our app needs to scale, it is essential to maintain high performance without compromising maintenance and server costs. In this workshop, we will see how to analyze Next.js performance and resource usage, how to scale the app, and how to make the right decisions while designing the application architecture.
Maximize App Performance by Optimizing Web Fonts
Vue.js London 2023
49 min
Maximize App Performance by Optimizing Web Fonts
Workshop (Free)
Lazar Nikolov
You've just landed on a web page and you try to click a certain element, but just before you do, an ad loads on top of it and you end up clicking that thing instead.
That…that’s a layout shift. Everyone, developers and users alike, knows that layout shifts are bad. And the later they happen, the more disruptive they are to users. In this workshop we're going to look into how web fonts cause layout shifts and explore a few strategies for loading web fonts without causing big layout shifts.
Table of Contents:
- What’s CLS and how is it calculated?
- How can fonts cause CLS?
- Font loading strategies for minimizing CLS
- Recap and conclusion
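As background for how CLS is calculated: per the Layout Instability spec, each layout shift scores as impact fraction × distance fraction, and CLS sums those scores over a session window. The sketch below is a deliberately simplified JavaScript version, assuming a single full-width element shifting vertically; the `layoutShiftScore` function and its argument shapes are illustrative, not a browser API.

```javascript
// Simplified layout-shift score for one full-width element moving
// vertically by shiftY pixels inside the viewport.
function layoutShiftScore(viewport, el, shiftY) {
  // Impact region: union of the element's old and new positions,
  // clamped to the viewport. Impact fraction = its share of the viewport.
  const impactHeight = Math.min(el.height + Math.abs(shiftY), viewport.height);
  const impactFraction = impactHeight / viewport.height;
  // Distance fraction: move distance over the largest viewport dimension.
  const distanceFraction =
    Math.abs(shiftY) / Math.max(viewport.width, viewport.height);
  return impactFraction * distanceFraction;
}
```

A late-loading web font that reflows a 200px-tall block by 100px in an 800px-tall viewport already contributes roughly 0.047 — nearly half of the 0.1 "good" CLS threshold — which is why font loading strategy matters.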