Video Summary and Transcription
This talk discusses the importance of performance work and how to drive performance governance. It emphasizes the need to frame data, set meaningful budgets, and understand both the engineering and product perspectives. The talk also highlights the significance of delivering value, establishing solid performance governance, and gaining buy-in from management. Additionally, it covers measuring performance, optimizing through data, and collecting performance data using tools like SpeedCurve and Sentry. The speaker also mentions their training program and expresses gratitude towards the audience.
1. Performance Work and Governance
My talk is about why performance work goes unseen. We're going to be talking about governance and how to drive performance governance. We care about performance because applications can be used in unpredictable conditions. Lab tooling and real user monitoring both provide important data. Getting performance work prioritized is a challenge when dealing with backlogs and time constraints. Proving value and setting up a governance process both require data.
My talk is about why performance work is not seen, so let me set some expectations: it's not going to be a super deep technical talk. We're going to be talking about governance, and we're going to be talking about how to drive performance governance.
My name is Vinicius. Yes, I'm originally from Brazil, and I'm living in Sweden. I work for Volvo Cars. One thing we have in common in this room, I would like to think, is that we all care about performance. So, hopefully, we all care about performance. It is actually very easy to understand why we, as engineers, care about performance. Our applications can be used in many different sets of conditions that are not very predictable, and performance as a subject is also not very predictable. Our users can put our applications in conditions under which it is just very hard for them to perform.
When it comes to gauging performance, we hopefully all know about lab tooling, and if we know about lab tooling, we most likely know about Lighthouse. When it comes to tooling and monitoring and metrics, we have the lab, and we also have RUM tooling, real user monitoring tooling. It is important to have both facets of the tooling ecosystem: with your lab data you catch those early regressions, and with your real user metrics you get an actual representation of how your application runs in the real world.
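As a minimal sketch of the RUM side, assuming the popular web-vitals library and a hypothetical /rum-beacon endpoint, field metrics can be collected and shipped like this:

```js
// Minimal RUM collection sketch using the web-vitals library.
// The /rum-beacon endpoint is hypothetical; point it at your own collector.
import { onLCP, onINP, onCLS } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,     // e.g. 'LCP', 'INP', 'CLS'
    value: metric.value,   // ms for LCP/INP, unitless for CLS
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
    id: metric.id,         // unique per page load, for deduplication
    url: location.pathname,
  });
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  if (!navigator.sendBeacon?.('/rum-beacon', body)) {
    fetch('/rum-beacon', { body, method: 'POST', keepalive: true });
  }
}

onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onCLS(sendToAnalytics);
```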
But I don't know how many of you work within product teams, and we all know the backlog. The backlog is that entity, that presence that is always looming over our progress, and it is mostly where all good intentions go to die. The question for most of us trying to put some performance work out into the world is: how do you get your work prioritised out of the backlog? If you're like me, you have caught yourself, just like Jake here, sitting down and wondering how you can make sure to get that nice work out there. This, by the way, is the main lounge of our quite nice Gothenburg office in Sweden. How do you get the performance work done, improve your metrics, and shout to the world that you improved your users' performance?

Just like everyone else out there working in a product, more often than not, things around you are on fire. And you're wondering: how do I manage to get this work done? I want to work on performance, but things are always on fire, and you have to deal with that. And let's not forget, you have the backlog, that looming entity just beside you, reminding you of its existence, and sometimes, somehow, the backlog is also on fire! Now you catch yourself trying to figure out how you even do this kind of work. So how do we do it? How do you deal with a backlog that is on fire? Just as a little addendum, I tried to get this kind of scene out of our favourite image-generating models, and this is what I came up with. So it's pretty true to life, if you ask me. Even the fish eyes and stuff like that.

But, you know, as a product team, we are always busy shipping features. At least that's the lie we tell ourselves, right? We are always forced into thinking about what the next thing we can ship is and how quickly we can ship it. If you work for a public company, you have your stakeholders on top of you, or you're trying to take your company public, and you have your stakeholders on top of you. There is always this kind of time constraint pushing you to focus on shipping features. You're always fighting the clock, so it is very hard to get performance work prioritised, because you're always chasing the next thing to ship.
The real question in trying to drive performance is: how do you prove value? To prove value in this kind of setting means proving that the overhead of working on performance will actually benefit both your engineers and your users. And how do you set up this governance process? How do you justify the overhead? How do you manage flow versus friction, both for your engineers and for getting work out in time? How do you get better deliverables out of the work you're doing? Because if you're working on performance, something that is not strictly feature-related, how can you make sure that whatever you ship becomes a better version of your product, and not just from the engineering perspective? And the answer is always data. Although not this guy, the answer is always data. The data we're talking about, again, brings us back to lab and RUM data.
2. Framing Data and Performance Monitoring
You need to frame your data, set up meaningful budgets and realistic goals for lab tooling. Understand both the engineering and product perspectives to deliver an impactful product. Aggregate performance metrics and product metrics together for better insights. Getting better attributions is crucial for good performance monitoring.
So you're trying to bring in perspective from your automated tooling, your CI/CD, but also from your users. But in a product setting, what matters most is how you frame that data. Because we can all set up some instrumentation around our product, grab a lot of data, and put it in a dashboard, but how do you make sure that that dashboard is valuable, right? That it is used? And how can you get the most out of the data you collect?
So let's take a little quick journey into the lab tooling, let's take a look at Lighthouse. Within your lab tooling, you have to make sure that you set up your data with good guardrails. For the lab tooling, that means meaningful budgets and realistic, achievable goals. Achievable goals make sure that you ship progressive improvements: you have quick iterations of your product, and from those quicker iterations you ship value faster. So frame your data within a good set of constraints, within good, realistic budgets that you can actually reach.
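As a rough sketch of what such guardrails can look like in CI, here is a minimal Lighthouse CI configuration; the URL and every threshold below are illustrative assumptions and should come from your own baseline:

```js
// lighthouserc.js: a minimal Lighthouse CI sketch. The URL and all
// thresholds are illustrative; derive real budgets from your baseline.
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3000/'],
      numberOfRuns: 3, // run several times to smooth out lab variance
    },
    assert: {
      assertions: {
        // Fail the build on hard regressions, warn on softer ones.
        'categories:performance': ['warn', { minScore: 0.85 }],
        'largest-contentful-paint': ['error', { maxNumericValue: 3000 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
        'total-byte-weight': ['warn', { maxNumericValue: 1_500_000 }],
      },
    },
  },
};
```

Once you have tamed your lab tooling, once you have that part done, then comes the time to look at your real user data.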
But for the real users, you have to make sure that you understand not only the engineering perspective but also the product perspective. Your application is used in different settings, in different ways, and you have to understand that you are trying to solve things from two perspectives: you have your engineering team and your product team. And you can guess which one is which from their photo, by the way. You have to make sure to represent both sides of the story, engineering and product, in order to deliver the most impactful version of your product. So how do you get more out of your RUM data? To get your RUM data into better shape, you need to make sure that you aggregate your performance metrics and your product metrics together, because if you only look from one perspective, you will be blindsided on the other. We are all engineers, and we always care about the engineering part of the metrics, but we have to make sure that whatever we are trying to ship is moving the needle on the product side as well. Because if you're trying to get buy-in to work on performance, you need to make sure that that is the objective you're working towards. And getting better attributions is one of the best things you can do if you're trying to set up good performance monitoring. Having the metrics on their own is only one part of the story. You have to understand, from those metrics, from that monitoring, what kind of problems are surfacing. Because if you just have numbers for the metrics and you don't have good attributions, you don't even know where to start. You just have numbers.
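As a small sketch of what better attribution can look like in practice, assuming the attribution build of the web-vitals library (field names follow its v4 API), each metric arrives with a pointer to its likely cause:

```js
// Attribution sketch: a bad number arrives together with a pointer to
// what caused it, instead of being just a number.
import { onLCP, onINP } from 'web-vitals/attribution';

onLCP((metric) => {
  // LCP attribution: which element was the LCP candidate, and where the
  // time went (resource load vs. render delay).
  const { element, resourceLoadDuration, elementRenderDelay } = metric.attribution;
  report('LCP', metric.value, { element, resourceLoadDuration, elementRenderDelay });
});

onINP((metric) => {
  // INP attribution: which target the user interacted with, and what
  // kind of interaction it was.
  const { interactionTarget, interactionType } = metric.attribution;
  report('INP', metric.value, { interactionTarget, interactionType });
});

// Hypothetical reporter; wire this to your own RUM backend.
function report(name, value, attribution) {
  navigator.sendBeacon('/rum-beacon', JSON.stringify({ name, value, attribution }));
}
```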
3. Delivering Value and Establishing Governance
To improve performance, focus on delivering value for key markets and percentiles. Avoid blind spots by capturing all percentiles and understanding the different segmentations. Tim Vereecke's talk on percentile segmentation is highly recommended. To establish solid performance governance, assess which metrics represent your product and build solid KPIs and SLOs.
So you also set up your data in a good way so that you can achieve that deliverable. How do you frame your data from your users' perspective? Most importantly, how do you slice and group your RUM data? Slicing and grouping your RUM data through a product lens means that your metrics are observed from the product perspective. You get a segmentation based on the key markets and user base for your product, and you focus on delivering value for those key markets and percentiles. You have a better impact, and you make sure that you're improving your users' experience.
Improving performance for all of your users is a very nice overarching goal, but not always a very feasible priority from a product's perspective, because that's a very, very big scope, and working on a bigger scope makes it a lot harder to ship valuable improvements, because normally the thresholds are way, way too high. Sometimes, when you start setting up performance governance, you have a long way to go until you get to the green part of the metrics. So you have to make sure that you iterate fast and ship improvements as soon as possible. When it comes to data, you also have to make sure to avoid blind spots. Avoiding blind spots in your metrics means not segmenting your data along a single dimension. Don't only observe your data from the P75 perspective, because that can blindside you to improvements and regressions that happen before that. By the time a regression hits the P75 percentile, that's pretty much an incident, right? Most of your users are being affected. So you want to make sure that, both for regressions and for improvements, you are capturing all of your percentiles and understanding how the different segmentations behave. All improvements should be celebrated, and regressions should be visualized across the spectrum. The reason you do that is that the bigger your context, the harder it is to build any sort of correlation, and thus the harder it becomes to attribute improvements or regressions. There is a very good talk by Tim Vereecke from performance.now() last year, so if you're interested in this kind of percentile segmentation, that talk is a really, really good one. I definitely recommend it.
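To illustrate looking beyond a single percentile, here is a small sketch (plain JavaScript, with hypothetical sample data from your RUM pipeline) that summarizes a metric at several percentiles, so a regression at P50 or P90 stays visible even when P75 looks stable:

```js
// Summarize a RUM metric distribution at several percentiles.
// `samples` would come from your RUM pipeline; values here are in ms.
function percentile(sortedValues, p) {
  // Nearest-rank percentile over an ascending-sorted array.
  const index = Math.min(
    sortedValues.length - 1,
    Math.ceil((p / 100) * sortedValues.length) - 1
  );
  return sortedValues[Math.max(0, index)];
}

function summarize(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  return Object.fromEntries(
    [50, 75, 90, 95, 99].map((p) => [`p${p}`, percentile(sorted, p)])
  );
}

// e.g. summarize(lcpSamples) -> { p50: 1800, p75: 2400, p90: 3900, ... }
```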
So by now you've already started thinking about percentiles and data segmentation, and you're making sure that you really constrain that backlog, that you have a good grip on it. You are strengthening your performance governance. So how can you make sure that your performance governance stays solid over time? You need to understand and assess which metrics best represent your product. When it comes to Web Vitals, you have a set of different metrics, and that's great, because they are very well standardized and very well documented, but not all pages have the same importance and not all features have the same usage. Products are not unidimensional. You have to understand which subset of metrics best represents your product, so you can build KPIs around it and really guard your metrics, and really guard your product's perspective across the board. Building really solid KPIs and SLOs means you understand which metrics best represent your product and how to track them over time. And establishing your own performance metrics is also a really good idea if you want to establish governance.
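Establishing your own product-specific metrics can be as simple as marking key moments of the user journey with the User Timing API; a minimal sketch, where the 'checkout' journey names are hypothetical examples:

```js
// Custom product metric via the User Timing API. The 'checkout' journey
// names are hypothetical; use your own key user journeys.
performance.mark('checkout:start');

// ... the user goes through the flow ...

performance.mark('checkout:complete');
performance.measure('checkout:duration', 'checkout:start', 'checkout:complete');

// Measures can be observed and beaconed like any other performance entry.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    navigator.sendBeacon('/rum-beacon', JSON.stringify({
      name: entry.name,         // 'checkout:duration'
      duration: entry.duration, // ms between the two marks
    }));
  }
}).observe({ type: 'measure', buffered: true });
```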
4. Understanding and Establishing Governance
Understanding your product, getting buy-in from management, documenting improvements, and establishing a good governance model are key to success. Measure, monitor, report, and repeat.
Because your story is told differently from any other product's. You should understand what parts of your product and the user journey you need to track, and how. Getting buy-in from management then becomes a much, much easier story if you manage to get that part of the story right.
Because you have your segmentation, you have your metrics, you know how to slice the data, you know what kind of markets you're observing and how you observe your users, and you understand how to perceive any kind of regression or improvement early on. Stronger buy-in means establishing a good correlation between performance improvements and conversion rates. But conversion rates also mean different things for different products.
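One way to make that correlation possible is to ship product dimensions alongside the performance metric in the same beacon, so the two can be joined later. A sketch, where the context fields are hypothetical and should map to whatever your product analytics already defines:

```js
// Enrich the RUM beacon with product dimensions so performance can be
// sliced by market and correlated with conversion later. Field names
// are hypothetical; reuse your product analytics' definitions.
import { onLCP, onINP, onCLS } from 'web-vitals';

const context = {
  market: document.documentElement.lang || 'unknown', // e.g. 'sv-SE'
  journey: 'checkout', // hypothetical product funnel step
  converted: false,    // flipped when the purchase completes
};

function report(metric) {
  navigator.sendBeacon('/rum-beacon', JSON.stringify({
    metric: metric.name,
    value: metric.value,
    rating: metric.rating,
    ...context,
  }));
}

onLCP(report);
onINP(report);
onCLS(report);
```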
So make sure to document well, have all of your improvements well documented, and understand how to report on them. Because you're sometimes reporting to two different sets of skills: sometimes you have to report for product and sometimes for engineering, and those are two different types of reports. Some metrics matter more to one than to the other. In the end, this needs to matter not only to engineering but also to product if you are trying to establish a sustainable model for governance. And this is really, really key to establishing a good governance model.
And this talk is part of a bigger write-up; this QR code is the link to the article. The key takeaways, really, for this talk: you measure, you monitor, you report, and you repeat. And that was about it.
5. Establishing Performance and Gaining Buy-In
Establishing performance work, understanding different team dynamics, and building a sales pitch for different organizations are key to success. Start with your immediate manager, collect data, and gradually gain buy-in from upper management. Focus on building better attributions through data from day zero.
APPLAUSE When did this sort of story start for you? When did you start optimizing and seeing where you got stuck and moved on a little bit?
So, I have been establishing performance work for a long time now. It has been part of my work at Klarna, it has been part of my work at Spotify as well, and it's been a great part of my work at Volvo. Each product has a different nature, and you have to understand how to best sell an initiative. That comes down to understanding the people you're trying to talk to, understanding how the different team dynamics work, and at what levels you need to work.
Because sometimes you need to establish a lot more engineering work, and sometimes you need to establish a lot more product work. Sometimes the product side will understand very easily that you need to work on performance, and sometimes you just need to work with engineering on levelling up knowledge, tooling, this kind of stuff. There's a lot of tooling work that you might need to do. So it changes a lot.
So the key takeaway really becomes: what kind of product are you trying to sell? It becomes kind of a salesman position, understanding the sales pitch and how to build a sales pitch for different organizations. Different audiences. Exactly.
So, rather than more people in your senior leadership team caring about performance? Buy-in can come in different kinds of scope. Normally, if you are working in an organization that has no background in performance, it might well start with getting buy-in from your immediate manager and setting up work so you have the allowance to build tooling and start scoping metrics. From that moment onwards, you work on buy-in from upper management, if need be. So it depends on what kind of work you are trying to establish, and in what kind of sphere.
Different engineers have different backgrounds, of course, and different realms of influence. Sometimes you work in a very big company where you have technical steering groups to go through, and a platform to go through, to set up a wider scope for tooling and instrumentation. So it really will depend on the organization you're trying to work in. But I would say, if you're trying to start working on performance, start with your immediate manager. Just get something out, get that MVP, start collecting data immediately, because data is king.
Yeah, it becomes salesman work, where you really, really want to work towards getting that allowance, getting yourself more time to work on performance.
6. Measuring Performance and Optimization
Building better attributions through data from day zero is crucial for performance optimization. Starting with data allows you to identify bottlenecks and pain points in order to improve conversion rates. When assessing performance for third-party scripts, prioritize not impacting page load times and be mindful of task execution times. The conversation about measuring and optimizing should start with data and focus on short iterations and incremental improvements. Popular tools for collecting performance data include SpeedCurve and Sentry.
And that means building better attributions through data from day zero. If you're trying to work on performance, it's very tempting to jump into performance improvements right away, to get that nice improvement out of the gate immediately, but you don't yet understand where the actual bottlenecks of your application are, or what parts of your application users struggle with the most. So how can you make sure that your conversion rates are going to be affected by certain performance work? In the end, if you're trying to get buy-in from product, the work really needs to move the needle towards better conversion rates, whatever that means for your product team. And you have to understand, then, where the pain points are. Without data, you can't understand the pain points. That's why the suggestion is always to start with data first.
Another question here is: do you have any recommendations for ways to assess performance for third-party scripts? Yes, so third-party scripts... And the disclaimer is that the person asking is developing a third-party script. I have been into third-party scripts, too. I worked for Klarna for almost three years, and within Klarna I worked on the embedded merchant portal they had, within the checkout. So think of the PayPal checkout or Stripe checkout, but embedded within the merchant site; in a way, they are a third party. As a third party, the first and foremost thing you want to make sure of is that you're not impacting the load times of the page. If you are setting up data around yourself as a third party, you have to understand that you can't affect that load time for your client. But you also have to understand that you, as a third party, should have less priority when it comes to runtime. If you're executing tasks, you have to be really friendly with your task execution times as well. So INP is a very important metric for any third party out there, and if you're trying to set up guardrails around budgets, those are two places where you want to be really, really strict.
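For the task-execution side, a third-party script can watch its own main-thread impact with the Long Tasks API and yield between chunks of work; a rough sketch, assuming the work is already split into a queue of small units:

```js
// Keep third-party work INP-friendly: observe long tasks, and yield
// between chunks so the host page's interactions are not blocked.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Anything over 50 ms blocks the main thread long enough to hurt INP.
    console.warn('Long task:', Math.round(entry.duration), 'ms');
  }
}).observe({ type: 'longtask', buffered: true });

// Yield to the event loop between units of work. scheduler.yield() is
// not available everywhere yet, so fall back to a macrotask.
const yieldToMain = () =>
  globalThis.scheduler?.yield?.() ??
  new Promise((resolve) => setTimeout(resolve, 0));

async function processQueue(tasks) {
  for (const task of tasks) {
    task(); // each unit of work should stay well under 50 ms
    await yieldToMain();
  }
}
```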
What if they're still reluctant? Oh, I like this sort of doomsday question. How do you initiate the conversation about measuring and optimizing before it's too late? Well, I can tell you that it will very rarely be too late, because it's never too late to start measuring. But when you are in a scenario where your performance is way out of budget, that's where progressive improvement comes in: making sure that you have short iterations and fast deliverables, and that you start moving the needle towards the good end of the spectrum on the metrics as soon as possible. It always starts with data. You want to understand where you are currently, and then you start projecting deliverables over time so you can track that progress, but always shipping in small iterations. Because if you're trying to go from the very far end of bad to the very far end of good, that's a big, big scope to cover. Most of the time you don't have that time allowance right away, so you have to make sure that you ship incremental improvements, always constantly moving the needle towards the good threshold.
Collecting Performance Data
Favorite tools for collecting performance data include SpeedCurve and Sentry. The choice of tool depends on your team's willingness to use it effectively. Intentionally slowing the product can prove the impact on product metrics, but it's an unconventional approach.
As a DevOps person, I appreciate that answer very much.
All right, let's go for this one. Which are your favorite tools for collecting performance data? Collecting performance data. So on the RUM part, there are tons of really, really good performance services out there. You have SpeedCurve, and you have tools like Sentry that also do performance. But you have to understand which kind of tool will best represent your usage, because the tool you use is only as useful as your team's willingness to use it. Setting up a good tool means that you have a good balance of flow versus friction. So which tool you're going to use also depends on your development team's skills.
Is intentionally slowing the product a good way to prove the impact on product metrics? I guess if you're trying to understand the impact of how slow you can go, yes. I mean, you definitely will prove a point. So if you're going to try that, definitely let me know how that goes. Who's this question from? And is there a manager in the room? If that happens, definitely let me know how it goes, honestly. I'm fascinated by this question. It is a good question. It's quite a, you know, rebel type of thing to do. Let me see. One quick last one.
Training Program and Gratitude
The speaker talks about their training program, which involves weightlifting. They express some confusion and gratitude towards the audience.
Let's see. There is one question about a favorite. What is your training program? Is that for this room? I don't think so. Training program? What's my training program? I lift weights. Oh.
You know what? I was thinking education. But yeah, that's the thing. But you... Where is my head at? I don't know.
All right. I want to thank you so much. Let's give it up one more round of applause for... Please. Thank you. Thank you.