These are also packages built by the OpenTelemetry community, which use the same fundamentals to automatically collect data for you. As we can see in the red box in the code snippet, registering an instrumentation to automatically collect data is really straightforward: you just list the ones that you want, import them, and then it's off to the races.
Some examples that are particularly interesting for the front end are document load and user interaction. These tie into browser APIs to automatically collect data about how users are interacting with your application and how your app is performing, and to collect things like web vitals. It's definitely a balance between manual and auto instrumentation, depending on the level of granularity of data you need to collect. Ultimately, you as a developer know your applications and use cases best, and that can inform the type of telemetry data you want to collect, as well as what attributes you want to attach to the events you're collecting. But here are some guidelines about what to collect.
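As a rough illustration, here is a minimal sketch of what that registration can look like with the OpenTelemetry JS web SDK. The package names follow the community contrib packages for document load and user interaction, but the exact setup details can vary between SDK versions.

```ts
// Minimal sketch: registering auto-instrumentations in the browser.
import { WebTracerProvider } from '@opentelemetry/sdk-trace-web';
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { DocumentLoadInstrumentation } from '@opentelemetry/instrumentation-document-load';
import { UserInteractionInstrumentation } from '@opentelemetry/instrumentation-user-interaction';

// Set up a tracer provider for the browser.
const provider = new WebTracerProvider();
provider.register();

// List the instrumentations you want; each one hooks into browser APIs
// (navigation timing, click handlers, etc.) and emits spans automatically.
registerInstrumentations({
  instrumentations: [
    new DocumentLoadInstrumentation(),
    new UserInteractionInstrumentation(),
  ],
});
```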
For events, a couple of key ones that you'll want to be looking at are page load and navigation events. Navigation is especially important if you have a SPA versus a multi-page app, since the browser events will behave a little differently when it comes to navigation. Then there are significant user actions, such as important button clicks or form submissions; errors, which are especially important to track; and page unload, which you can connect to page load to get an overview of a user's interaction with a specific page. With each of those events, you can collect additional metadata, which can be used later on to query your data.
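To make that concrete, here is a sketch of manually instrumenting one of those significant user actions, a form submission, with the standard OpenTelemetry API. The tracer name, span name, and attribute keys here are made up for illustration, not a required convention.

```ts
// Hypothetical example: a manually instrumented form submission event.
import { trace } from '@opentelemetry/api';

const tracer = trace.getTracer('checkout-page');

function onCheckoutSubmit(formData: { items: number }) {
  const span = tracer.startSpan('checkout.form_submit', {
    attributes: {
      'app.page': 'checkout',
      'checkout.item_count': formData.items,
    },
  });
  try {
    // ... submit the form ...
  } catch (err) {
    // Record errors so they can be queried later alongside the event.
    span.recordException(err as Error);
    throw err;
  } finally {
    span.end();
  }
}
```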
So some attributes that you might want to collect are things around app data: what user is currently logged in, what team are they on, are they a paid user, even things like what feature flags they have enabled. Later, when you see an interesting data point, that will let you slice and dice it in different ways. Maybe a bug is occurring, but only for users who have a certain feature flag enabled; by collecting that at this point in your instrumentation, you'll be able to view your data in different ways when you query it later, which can lead to interesting insights.

The next category of attributes you might want to collect is capabilities. These would be things like what browser the user is on, their screen size and resolution, and what connection they're on: are they on really slow 3G, are they offline, are they on wi-fi? As developers, it's really common to want to know, hey, can I use this new browser API, can I use this CSS feature? We have generic data from sites like Can I Use, but depending on your user base and your specific application, that might skew very differently from the generic statistics. So if you're collecting this data and you say, I want to use X feature really badly, that would be super important for us, you can go look at your telemetry data and see, oh, actually our user base skews one way or another as far as what browsers they're using. For example, maybe you can say, oh, wait, actually, we can adopt this technology a lot earlier than we thought we could have.

Next are things like app metadata: what app version or build number is the user running?
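Pulling those categories together, here is one way such attributes could be attached as resource attributes so they ride along with every span. The attribute keys and values are illustrative rather than an official schema, some browser APIs such as navigator.connection are not available everywhere, and the Resource API details can differ between SDK versions.

```ts
// Sketch: attaching app data, capabilities, and app metadata as resource attributes.
import { Resource } from '@opentelemetry/resources';
import { WebTracerProvider } from '@opentelemetry/sdk-trace-web';

const resource = new Resource({
  // App data: who the user is and what flags they have enabled (example values).
  'app.user.plan': 'paid',
  'app.feature_flags': ['new-checkout'].join(','),
  // Capabilities: browser, screen, and network conditions.
  'browser.user_agent': navigator.userAgent,
  'screen.width': window.screen.width,
  'screen.height': window.screen.height,
  'network.effective_type': (navigator as any).connection?.effectiveType ?? 'unknown',
  // App metadata: version or build number the user is running.
  'app.version': '1.42.0',
});

const provider = new WebTracerProvider({ resource });
provider.register();
```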
With all of these attributes and data being collected, we then need to send it off somewhere to store and query from later. This is the role of processors and exporters in OpenTelemetry. A processor, in this case a batch span processor, will collect the data as you add it in your application and then send it off in batches. There are other customizations you can do with OpenTelemetry, but sending data off in batches is a good standard. From there, you have an exporter. This does the work of any data conversion, if the back end you're sending to doesn't format metrics, logs, and traces in exactly the same way OpenTelemetry does, and then it handles the transportation. In this case, we could send it off to Tempo directly, or to Loki or Mimir. One optional component is the collector. Up to this point we've been talking about our front end sending data straight to our back-end data stores, but there are definitely a lot of good use cases for having a middle component that acts as a man in the middle between your front end and your back end. One of the primary reasons would be security.
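As a sketch of that wiring, here is what a batch processor plus OTLP exporter setup might look like. The endpoint URL is a placeholder, and the exact provider API (for example, addSpanProcessor versus passing processors to the constructor) differs between SDK versions.

```ts
// Sketch: batch processor + OTLP exporter, pointed at a collector endpoint.
import { WebTracerProvider } from '@opentelemetry/sdk-trace-web';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

const exporter = new OTLPTraceExporter({
  url: 'https://collector.example.com/v1/traces', // placeholder endpoint
});

const provider = new WebTracerProvider();

// The batch processor buffers spans as your app creates them and flushes
// them to the exporter in batches instead of one network call per span.
provider.addSpanProcessor(new BatchSpanProcessor(exporter));
provider.register();
```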