Why Your Performance Work Is Not Seen


Between Core Web Vitals and profiling tools there is always one big obstacle: company buy-in. There are many ways your performance work might not be seen or prioritised, so let's talk about them.

This talk was presented at JSNation 2024. Check out the latest edition of this JavaScript conference.

FAQ

Vinicius is a professional originally from Brazil, currently living in Sweden, and working for Volvo Cars. His talk reflects his extensive experience in performance optimization in software engineering.

The speaker discusses the difficulty of getting performance work prioritized from the backlog and suggests focusing on demonstrating the value and impact of performance improvements to get such tasks prioritized.

Data plays a crucial role in performance governance by providing evidence-based insights that help in setting realistic goals, framing the data effectively, and making informed decisions to improve product performance.

The speaker emphasizes the importance of integrating engineering and product perspectives, using real user data for better insights, and segmenting performance metrics to align improvements with actual user experiences and product goals.

Vinicius recommends starting small by gaining the support of immediate managers and gradually building evidence through data to demonstrate the impact of performance improvements on product metrics, thus securing broader executive support.

Effective performance monitoring involves choosing the right tools that fit the team's workflow, setting meaningful performance budgets, and continuously iterating and improving based on real user data and feedback.

The talk mentions lab tooling and real user monitoring (RUM) tooling as the two facets of the tooling ecosystem essential for performance analysis and improvement.

The talk focuses on the challenges of making performance work visible and prioritized within organizations. It discusses not just the technical aspects, but also governance, driving performance governance, and managing priorities in product development environments.

Vinicius Dallacqua
25 min
13 Jun, 2024

Video Summary and Transcription
This Talk discusses the importance of performance work and how to drive performance governance. It emphasizes the need to frame data, set meaningful budgets, and understand both engineering and product perspectives. The Talk also highlights the significance of delivering value, establishing solid performance governance, and gaining buy-in from management. Additionally, it emphasizes the importance of measuring performance, optimizing through data, and collecting performance data using tools like Speedcurve and Bugbear Sentry. The speaker also mentions their training program and expresses gratitude towards the audience.

1. Performance Work and Governance

Short description:

My talk is about why performance work is not seen. We're going to be talking about governance and how to drive performance governance. We care about performance because applications can be used in unpredictable conditions. Lab tooling and real user monitoring provide important data. Getting performance work prioritized is a challenge when dealing with backlogs and time constraints. Proving value and setting up a governance process require data.

My talk is about why performance work is not seen. This is just to set expectations a little: it's not going to be a super deep technical talk. We're going to be talking about governance, and about how to drive performance governance.

My name is Vinicius, and, yes, I'm still originally from Brazil, and still living in Sweden. I work for Volvo Cars. One thing that we have in this room now, I would like to think, is that we all care about performance. So, hopefully, we all care about performance. It is actually very easy to understand why we, as engineers, care about performance. Our applications can be used under many different sets of conditions that are not very predictable, and performance as a subject is also not very predictable. Our users can put our applications under conditions in which it is just very hard for them to perform.

When it comes to the performance we are trying to gauge, we hopefully all know about lab tooling, and if we know about lab tooling, we most likely know about Lighthouse. When it comes to tooling and monitoring and metrics, we have the lab, and we also have RUM tooling, real user monitoring tooling. It is important to have both facets of the tooling ecosystem: your lab data, where early regressions get caught, and your real user metrics, which give you an actual representation of how your application runs in the real world.
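As a rough sketch of what the RUM side can look like in practice (not something from the talk itself), this is how you might collect Core Web Vitals in the field with Google's web-vitals library; the /analytics endpoint and payload shape are placeholders:

```js
// Minimal field-data collection sketch using the web-vitals library
// (npm install web-vitals). The /analytics endpoint is a placeholder.
import { onLCP, onINP, onCLS } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,     // 'LCP', 'INP' or 'CLS'
    value: metric.value,   // milliseconds (unitless for CLS)
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
    id: metric.id,         // unique per page load, handy for deduplication
  });
  // sendBeacon survives page unloads; fall back to fetch with keepalive.
  if (!(navigator.sendBeacon && navigator.sendBeacon('/analytics', body))) {
    fetch('/analytics', { method: 'POST', body, keepalive: true });
  }
}

onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onCLS(sendToAnalytics);
```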

But I don't know how many of you work within product teams, and we all know the backlog. The backlog is that entity, that presence that is always looming over our progress, and it is mostly where all good intentions go to die. The question for most of us who are trying to put some performance work out there is: how do you get your work prioritised out of the backlog? If you're like me, you have caught yourself, just like Jake here, sitting down and wondering how to get that nice work out there. (This, by the way, is the main lounge of our quite nice Gothenburg office in Sweden.) How do you get the performance work done, improve your metrics, and shout to the world that you improved your users' performance? Just like everyone else out there, you also work within a product. More often than not, things around you are on fire. And you're left wondering, how do I manage to get this work done? I want to work on performance, but things are always on fire, and you have to deal with that. And let's not forget, you have the backlog, that looming entity just beside you, reminding you of its existence, and sometimes, somehow, the backlog is also on fire! And now you catch yourself trying to figure out how you even do this kind of work. So how do we do it? How do you deal with a backlog that is on fire? Just as a little addendum, I tried to get this kind of scene out of our favourite image-generating models, and this is what I came up with. So it's pretty true to life, if you ask me, even the fish eyes and stuff like that. But, you know, as a product team, we are always busy shipping features. At least that's the lie we tell ourselves, right? We are always forced into thinking about what the next thing we can ship is and how quickly we can ship it. If you work for a public company, you have your stakeholders on top of you, or you're trying to take your company public and you have your stakeholders on top of you. There is always this kind of time constraint, this focus on shipping features. And you're always fighting the clock, so it is very hard to get performance work prioritised, because you're always chasing the next thing to ship.

The real question in trying to drive performance is: how do you prove value? To prove value in this kind of setting means proving that the overhead of working on performance will actually benefit both your engineers and your users. And how do you set up this governance process? How do you justify the overhead? How do you manage flow versus friction, both for your engineers and for getting work out in time? How do you get better deliverables out of the work you're doing? Because if you're working on performance, something that is not strictly feature-related, how can you make sure that whatever you ship becomes a better version of your product, and not just from the engineering perspective? The answer is always data. And although not this guy, the answer is always data. The data we're talking about brings us back, again, to lab and RUM data.

2. Framing Data and Performance Monitoring

Short description:

You need to frame your data, set up meaningful budgets and realistic goals for lab tooling. Understand both the engineering and product perspectives to deliver an impactful product. Aggregate performance metrics and product metrics together for better insights. Getting better attributions is crucial for good performance monitoring.

So you're trying to bring perspective from your automated tooling, your CI/CD, but also from your users. In a product setting, though, what matters most is how you frame that data. We can all set up some instrumentation around our product, grab a lot of data, and put it in a dashboard, but how do you make sure that dashboard is valuable, and actually used? And how can you get the most out of the data you collect?

So let's take a quick journey into lab tooling, and let's take a look at Lighthouse. Within your lab tooling, you have to make sure that you've set up your data with good guardrails. For lab tooling, that means meaningful budgets and realistic, achievable goals. Achievable goals mean you ship progressive improvements: you have quick iterations of your product, and from those quicker iterations you ship value faster. So frame your data within a good set of constraints, within good, realistic budgets that you can actually reach. Once you have tamed your lab tooling, once you have that part done, then comes the time to look at your real user data.
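To make "meaningful budgets" concrete: one common way to enforce them in CI is Lighthouse CI assertions. This is a sketch, and every threshold below is illustrative, not a recommendation from the talk:

```js
// lighthouserc.js: a possible Lighthouse CI setup with budget-style assertions.
// Thresholds are illustrative; pick numbers your team can realistically reach,
// then tighten them as you ship progressive improvements.
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3000/'], // placeholder URL
      numberOfRuns: 3,                 // several runs smooth out lab noise
    },
    assert: {
      assertions: {
        'categories:performance': ['error', { minScore: 0.8 }],
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
        'total-blocking-time': ['warn', { maxNumericValue: 300 }],
        'resource-summary:script:size': ['error', { maxNumericValue: 300_000 }],
      },
    },
  },
};
```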

But for real users, you have to make sure you understand not only the engineering perspective but also the product perspective. Your application is used in different settings in different ways, and you have to understand that you are trying to solve things from two perspectives: you have your engineering team and your product team. (You can guess which one is which from their photo, by the way.) You have to represent both sides of the story, engineering and product, in order to deliver the most impactful version of your product. So how do you get more out of your RUM data? To get your RUM data into better shape, you need to aggregate your performance metrics and your product metrics together, because if you only look from one perspective, you will blindside the other. We are all engineers and we always care about the engineering part of the metrics, but we have to make sure that whatever we ship is moving the needle on the product side as well, because if you're trying to get buy-in to work on performance, that is the objective you're trying to meet. And getting better attributions is one of the best things you can do if you're trying to set up good performance monitoring. Having the metrics on their own is only one part of the story: you have to understand, from those metrics, from that monitoring, what kind of problems are surfacing. If you just have numbers for the metrics and you don't have good attributions, you don't even know where to start. You just have numbers.
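As an illustration of pairing attribution with product context (the product dimensions here are hypothetical, not from the talk), the web-vitals attribution build reports where a slow interaction happened, and your app can add its own product dimensions to the same beacon:

```js
// Sketch: RUM metric + attribution + product context in one beacon.
// Uses the web-vitals attribution build (v4 field names); `market` and
// `pageType` are hypothetical product dimensions supplied by your app.
import { onINP } from 'web-vitals/attribution';

onINP((metric) => {
  navigator.sendBeacon('/analytics', JSON.stringify({
    name: metric.name,
    value: metric.value,
    // Attribution tells you *where* to look, not just that a number moved:
    target: metric.attribution.interactionTarget, // selector of the slow element
    type: metric.attribution.interactionType,     // 'pointer' or 'keyboard'
    // Product dimensions let you slice the same metric per market or journey:
    market: globalThis.__APP_CONTEXT__?.market,     // hypothetical app global
    pageType: globalThis.__APP_CONTEXT__?.pageType, // hypothetical app global
  }));
});
```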

3. Delivering Value and Establishing Governance

Short description:

To improve performance, focus on delivering value for key markets and percentiles. Avoid blind spots by capturing all percentiles and understanding different segmentations. Tim Varicky's talk on percentile segmentation is highly recommended. To establish solid performance governance, assess which metrics represent your product and build solid KPIs and SLOs.

So it's also about setting up your data in a good way so you can achieve that deliverable. How do you frame your data from your users' perspective, and most importantly, how do you slice and group your RUM data? Slicing and grouping your RUM data through a product lens means your metrics are observed from the product perspective. You get a segmentation based on the key markets and user base for your product, and you focus on delivering value for those key markets and percentiles. You have a better impact, and you make sure you're actually improving your users' experience.

Improving performance for all of your users is a very nice overarching goal, but not always a feasible priority from a product perspective, because that's a very, very big scope, and working on a bigger scope makes it a lot harder to ship valuable improvements: the thresholds are usually way too high. Sometimes, when you start setting up performance governance, you have a long way to go until you reach the green part of the metrics, so you have to make sure you iterate fast and ship improvements as soon as possible. When it comes to data, you also have to avoid blind spots. Avoiding blind spots in your metrics means not segmenting your data along a single dimension. Don't observe your data only from the P75 perspective, because that can blind you to improvements and regressions that happen before it. By the time a regression hits the P75 percentile, that's pretty much an incident, right? Most of your users are being affected. So for both regressions and improvements, you want to capture all of your percentiles and understand how the different segmentations behave, because all improvements should be celebrated and all regressions should be visible across the spectrum. The reason you do this is that the bigger your context, the harder it is to build any sort of correlation, and that makes it very hard to attribute improvements or regressions. There is a very good talk by Tim Vereecke from performance.now() last year, so if you're interested in this kind of percentile segmentation, that talk is a really good one. I definitely recommend it.
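To make the "don't only watch P75" point concrete, here is a small, self-contained sketch that summarizes one segment's samples at several percentiles; the sample values are made up:

```js
// Sketch: summarizing RUM samples at several percentiles instead of one.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.min(index, sorted.length - 1)];
}

function summarize(samples) {
  return {
    p50: percentile(samples, 50), // the typical experience
    p75: percentile(samples, 75), // the usual Web Vitals threshold
    p95: percentile(samples, 95), // the tail, where regressions surface first
  };
}

// Made-up LCP samples (ms): the tail is already bad while P75 still looks fine.
console.log(summarize([900, 1100, 1200, 1300, 1400, 1500, 4800, 5200]));
// → { p50: 1300, p75: 1500, p95: 5200 }
```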

So right now, you've already started thinking about percentiles and data segmentation, and you're really getting a grip on that backlog. You are strengthening your performance governance. So how can you make sure that your performance governance stays solid over time? You need to understand and assess which metrics best represent your product. When it comes to Web Vitals, you have a set of metrics that is great because it is very well standardized and very well documented, but not all pages have the same importance, and not all features the same usage. Products are not unidimensional. You have to understand which subset of those metrics best represents your product, so you can build KPIs around it and really guard your metrics, and your product's perspective, across the board. Building really solid KPIs and SLOs means understanding which metrics best represent your product and how to track them over time. And establishing your own performance metrics is also a really good idea if you want to establish governance.
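Establishing your own metrics can start with the browser's built-in User Timing API. A minimal sketch, where the checkout journey marks are hypothetical names, not anything from the talk:

```js
// Sketch: a custom product metric via the User Timing API.
// 'checkout:start' / 'checkout:interactive' are hypothetical journey points.
performance.mark('checkout:start');

// ...later, once the checkout form is actually usable:
performance.mark('checkout:interactive');
performance.measure(
  'checkout:time-to-usable',
  'checkout:start',
  'checkout:interactive'
);

// Report the measure like any other RUM data point.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    navigator.sendBeacon('/analytics', JSON.stringify({
      name: entry.name,         // 'checkout:time-to-usable'
      duration: entry.duration, // milliseconds between the two marks
    }));
  }
}).observe({ type: 'measure', buffered: true });
```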

4. Understanding and Establishing Governance

Short description:

Understanding your product, getting buy-in from management, documenting improvements, and establishing a good governance model are key to success. Measure, monitor, report, and repeat.

Because your story is told differently from any other product's, you should understand which parts of your product and the user journey you need to track, and how. Getting buy-in from management then becomes a much, much easier story if you manage to get that part right.

Because you have your segmentation, you have your metrics, you know how to slice the data, you know what kind of markets you're observing, how you observe your users, and you understand how to perceive any kind of regression or improvement early on. And stronger buy-in means that you have to get a good correlation of performance improvements and conversion rates. But conversion rates also means different things for different products.

So make sure to document well, have all of your improvements well documented, and understand how to report them, because you're sometimes reporting to two different sets of skills: sometimes you report to product and sometimes to engineering, and those are two different types of reports. Some metrics matter more to one than to the other. In the end, this needs to matter not only to engineering but also to product if you are trying to establish a sustainable model for governance. That is really key to establishing a good governance model.

This talk is part of a bigger write-up; the QR code links to the article. The key takeaways for this talk, really, are: you measure, you monitor, you report, and you repeat. And that was about it.

5. Establishing Performance and Gaining Buy-In

Short description:

Establishing performance work, understanding different team dynamics, and building a sales pitch to different organizations are key to success. Start from your immediate manager, collect data, and gradually gain buy-in from upper management. Focus on building better attributions through data from day zero.

[APPLAUSE] When did this sort of story start for you? When did you start optimizing and seeing where you got stuck and moved on a little bit?

So, I have been establishing performance work for a long time now. It has been part of my work at Klarna, it has been part of my work at Spotify as well, and it's been a great part of my work at Volvo. Each product has a different nature, and you have to understand how to best sell an initiative. That comes down to understanding the people you're trying to talk to, understanding how the different team dynamics work, and knowing which levers you need to pull.

Because sometimes you need to do a lot more engineering work, and sometimes a lot more product work. Sometimes product will understand very easily that you need to work on performance, and sometimes you just need to work with engineering on leveling up knowledge, tooling, this kind of stuff. There's a lot of tooling work you might need to do. So it changes a lot.

So the key takeaway really becomes: what kind of product are you trying to sell? It becomes kind of a salesman position, understanding the sales pitch and how to build that pitch for different organizations. Different audiences. Exactly.

So you'd rather have more people in your senior leadership team who care about performance. Buy-in can come in different kinds of scopes. Normally, if you're in an organization that has no background in performance, it might well start with getting buy-in from your immediate manager and setting up work so you have the allowance to build tooling and start scoping metrics. From that moment onwards, you work on buy-in from upper management, if need be. So it depends on what kind of work you're trying to establish, and in what kind of sphere.

Different engineers have different backgrounds, of course, and different realms of influence. Sometimes you work in a very big company where you have technical steering groups to go through, and a platform to go through, to set up a wider scope for tooling and instrumentation. So it really depends on the organization you're working in. But I would say, if you're trying to start working on performance, start with your immediate manager. Just get something out, get that MVP, start collecting data immediately, because data is king.

Yeah, it becomes salesman work, where you really want to work towards getting that allowance, getting yourself more time to work on performance.

6. Measuring Performance and Optimization

Short description:

Building better attributions through data from day zero is crucial for performance optimization. Starting with data allows you to identify bottlenecks and pain points to improve conversion rates. When assessing performance for third-party scripts, prioritize not impacting page load times and be mindful of task execution times. Initiating the conversation about measuring and optimizing should start with data and focus on short iterations and incremental improvements. Popular tools for collecting performance data include Speedcurve and Bugbear Sentry.

And that means building better attributions through data from day zero. If you're trying to work on performance, it's very tempting to jump into performance improvements right away, to get that nice improvement out of the gate immediately, but you don't even understand where the actual bottlenecks of your application are, or which parts of your application users struggle with the most. How can you make sure that your conversion rates are going to be affected by certain performance work? In the end, if you're trying to get buy-in from product, the work really needs to move the needle towards better conversion rates, whatever that means for your product team. And you have to understand where the pain points are. Without data, you can't understand the pain points. That's why the suggestion is always to start with data first.

Another question here is: do you have any recommendations for ways to assess performance for third-party scripts? Yes. And the disclaimer is that the person asking is developing a third-party script. I have been into third-party scripts too: I worked for Klarna for almost three years, and within Klarna I worked on the embedded merchant portal they had within the checkout. So if you're familiar with the PayPal checkout or the Stripe checkout, it's like that, but embedded within the merchant's site, so in a way a third party. As a third party, the first and foremost thing you want to make sure of is that you're not impacting the load times of the page. If you're setting up data around yourself as a third party, you have to understand that you can't affect that load time for your client. You also have to understand that you, as a third party, should have less priority when it comes to run-time execution. If you're executing tasks, you have to be really friendly with your task execution times as well. So INP is a very important metric for any third party out there, and if you're setting up guardrails and budgets, those are two places where you want to be really, really strict.
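On the "be friendly with task execution times" point, a common pattern (sketched here under the assumption of a script that processes work in chunks) is to break long tasks up and yield back to the main thread so the host page's INP doesn't suffer; scheduler.yield() is not available in every browser yet, hence the fallback:

```js
// Sketch: a third-party script keeping its main-thread work INP-friendly.
// Prefer scheduler.yield() where it exists (newer Chromium); otherwise fall
// back to a macrotask so pending input events can run between chunks.
const yieldToMain = () =>
  globalThis.scheduler?.yield
    ? globalThis.scheduler.yield()
    : new Promise((resolve) => setTimeout(resolve, 0));

async function processItems(items, handleItem) {
  for (const item of items) {
    handleItem(item);    // keep each unit of work small
    await yieldToMain(); // give the host page's input events a chance to run
  }
}
```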

What if they're still reluctant? Oh, I like this sort of doomsday question. How do you initiate the conversation about measuring and optimizing before it's too late? Well, I can tell you that it will very rarely be too late, because it's never too late to start measuring. But when you're in a scenario where your performance is way out of budget, that's where progressive improvement, making sure you have short iterations and fast deliverables, and moving the needle towards the good end of the metrics as soon as possible, is key. It always starts with data: you want to understand where you currently are, and then you start projecting deliverables over time so you can track progress, but always shipping in small iterations. If you're trying to go from the very far end of bad to the very far end of good, that's a big, big scope to cover, and most of the time you don't have that time allowance right away. So you have to ship incremental improvements, constantly moving the needle towards the good threshold.


QnA

Collecting Performance Data

Short description:

Favorite tools for collecting performance data include Speedcurve and Bugbear Sentry. The choice of tool depends on your team's willingness to use it effectively. Intentionally slowing the product can prove the impact on product metrics, but it's an unconventional approach. The training program mentioned is weightlifting.

As a DevOps person, I appreciate that answer very much.

All right, let's go for this one. Which are your favorite tools for collecting performance data? Collecting performance data. On the RUM side, there are tons of really good performance services out there: you have Speedcurve, you have Bugbear, Sentry, they also do performance. But you have to understand which kind of tool will best fit your usage, because the tool you use is only as useful as your team's willingness to use it. Setting up a good tool means you have a good flow-versus-friction balance, so which tool you pick also depends on your development team's skills.

Is intentionally slowing the product a good way to prove the impact on product metrics? I guess if you're trying to understand the impact of how slow you can go, yes. I mean, you definitely will prove a point. So if you're going to try that, definitely let me know how it goes. Who's this question from? And is there a manager in the room? If that happens, definitely let me know how it goes, honestly. I'm fascinated by this question. It is a good question. It's quite a rebel type of thing to do.

Training Program and Gratitude

Short description:

The speaker talks about their training program, which involves weightlifting. They express some confusion and gratitude towards the audience.

Let's see. There is one question about a favorite. What is your training program? Is that for this room? I don't think so. Training program? What's my training program? I lift weights. Oh.

You know what? I was thinking education. But yeah, that's the thing. But you... Where is my head at? I don't know.

All right. I want to thank you so much. Let's give it up, one more round of applause, please. Thank you.
