Video Summary and Transcription
Carly Richmond discusses the challenges of using different tools for synthetic monitoring and end-to-end testing, emphasizing the need for a unified approach with Playwright and Elastic Synthetics. The Playwright API and Elastic Synthetics can be used together to create scripted journeys that serve both as tests and as monitors, ensuring a consistent user experience and documenting application features. By bringing development and SRE teams together around a common tool, collaboration and defect identification can be improved.
1. Challenges of Tooling and the Need for Unification
In this part, Carly Richmond discusses the challenges of using different tools for synthetic monitoring and end-to-end testing. She explains how the lack of collaboration between development and operations teams, differing priorities, and the use of different tooling can hinder automation efforts. Carly emphasizes the need for a unified approach and introduces Playwright and Elastic Synthetics as a solution. By using these tools, teams can create scripted journeys that serve both as tests and as monitoring tools, ensuring a consistent user experience and documenting application features. Carly also provides an overview of the implementation process and highlights the importance of maintaining the monitors within a project structure.
Hi, everyone. It's great to see you at React Summit. My name is Carly Richmond, and I'm a developer advocate at Elastic. I want to talk about synthetic monitoring and end-to-end testing. You might wonder why, as these might seem like totally different constructs. But despite the emergence of DevOps and SRE practices, and all the talk of uniting around common workflows, I'm finding that, for various reasons, we're still using different tools to try to achieve rather similar levels of automation, albeit at different points in the development cycle.
I'm going to talk about my own experiences of why I think that happens. Then I'm going to show how we can unite around common tooling, by embedding end-to-end testing within a synthetic monitoring tool, allowing us to do both of these things using the exact same scripted journeys.
So I used to work in a bank before I joined Elastic, and for the majority of that time, I found that dev and ops were not as joined up as we'd like to say. They were basically warring factions with competing interests, and the emerging best practices from SRE and DevOps coming through the observability space were often picked up by production management, but never really permeated back very easily to the developer side of the fence. It took a lot of convincing for that to happen. And I honestly think that stemmed from three key reasons that sadly still exist in some way today.

The first is that we often have Dev and SRE as separate departments or separate teams, rather than as part of one multidisciplinary team. With separate departments and competing agendas, a lack of communication and empathy tends to build up, which can be a big problem. It causes each group to feel that, when someone comes with an idea of how to collaborate, things are being thrown over the fence for the other to pick up, breaking the flow and calling into question the direction we knew we were meant to build in, whether as a developer or as an SRE.

The second is that there often isn't a common prioritisation between these two groups. SRE can often find minor defects, or have ideas for automation or enhancements to the application ecosystem, but quite often those are deprioritised. Instead you'll have a product owner pushing for new features for users, which developers will happily go away and build, and we never address this balance.

And the final challenge, one that I've seen first-hand as well, is that development and production teams are often using different tooling to achieve similar objectives.
So in my last role at the bank, we were actively using Cypress for developing end-to-end tests, while colleagues in SRE were writing similar automations in Selenium, because it integrated into their observability platform and they could use it for periodic monitoring and checks against the production application. It meant that we couldn't really use the end-to-end tests in production without one of us switching to a new tool.

And the reality is that end-to-end testing, where we use automation frameworks to validate the user journey through automated tests, and synthetic monitoring, where we run periodic scripts to check the availability of endpoints or the viability of user behaviour to make sure everything is still operating correctly, are effectively two sides of the same coin. If we unite around common tooling, we can end up using the same scripts not only as testing and monitoring tools for the same application suites across different environments, but also as a common way of documenting the user journey and how we expect users to use the application's features.
So using Playwright and Elastic Synthetics, we can do that. The way this works, as you'll see going through each of the boxes in grey, is that we'll have JavaScript or TypeScript journey files that use Playwright to automate those interactions and run locally as end-to-end tests against a locally running web app. Then, through peer review, we can run the same monitors as end-to-end tests within a GitHub Actions CI pipeline, or any other CI provider you're using. Once we get to the deployment stage of our app, we can push these same monitors, using our own API key, to the appropriate location, either an Elastic-managed location or our own private location, to run as monitors against the production web application. The results are stored in Elasticsearch and surfaced in dashboards in Kibana, so that we can actually see what's going on.
So how do we get started with this? Well, we need to create a project to house these monitors so that they can be maintained within source control by both parties. The easiest way to do that is using the init wizard after installing the Elastic Synthetics Playwright wrapper via NPM. It'll give you a sample project structure like this, which includes a synthetics config for the project-specific configuration and some example monitors, including the TypeScript journeys you can see there under the journeys folder. We're then able to write our own behavioural tests, building on the examples, using Elastic Synthetics. You'll see here at the top that the page object, which is the Playwright page object, is exposed within our journey and also within each step.
2. Using Playwright API and Elastic Synthetics
We can use the Playwright API to locate elements, perform actions, and make assertions to ensure the expected results. These tests can be run locally and in CI pipelines, and the same definitions can be pushed to production using Elastic Synthetics. SRE audiences can monitor the state and track performance over time. By bringing development and SRE teams together and using a common tool, we can collaborate, identify defects, and build better applications. Thank you for attending React Summit!
And we can make use of the Playwright API to do things like locate elements via CSS selector, which you can see on line 16, or via the various helper attributes introduced in versions 1.27 and later, such as getByTestId. We can perform clicks and other actions that a user would perform against the appropriate HTML elements, and then we can make various assertions to check that the expected result is showing.
And then we can run these locally and make sure that all of the changes associated with our new feature pass. We can then run the same things within our CI pipelines as well, and then we can push those same definitions to production, to the Elastic Synthetics location, which will allow you to run those same tests as monitors against your production application.
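The local run and the push both go through the `@elastic/synthetics` CLI. The commands below are a sketch; the Kibana URL and API key are placeholders for your own deployment:

```shell
# Run the journeys locally (or in CI) as end-to-end tests
npx @elastic/synthetics journeys/

# Push the same journeys to Elastic as scheduled monitors;
# the API key and URL below are placeholders
SYNTHETICS_API_KEY="<your-api-key>" npx @elastic/synthetics push \
  --url "https://your-deployment.kb.elastic-cloud.com"
```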
Then SRE audiences can monitor the state. They can see the basic pings, and, as highlighted in the background of these cards, what the duration has been over time using the chart in the background. When things go wrong, we can see the individual steps and which particular expectations failed or errors occurred, and we can then do smart things like set up alerts or anomaly detection to pick up degradations in performance, for instance if these particular tests are taking longer than our trends suggest.
So for me, DevOps has always been about bringing the dev and SRE factions together. We need to leave behind the cultural baggage that's been sitting there for so long and work together. If we use a common tool, there are some great benefits to be had in terms of documenting the user journey, collaborating to pick up defects in testing and in production, and just generally building better applications for users.
So thank you so much, it's been great speaking here at React Summit. Hopefully I get to catch you around the conference, and if not, feel free to check out the code example via the QR code on the screen, or you can contact me with any questions at @carlyrichmond.