We also have state and UI code that is mostly shared between targets. We even have cases where code is shared between the web and TV targets, even though UI-wise they look completely different: state-wise they can be very similar, and you want to leverage that. And in order to share this code, we use several different techniques.
One of these is React Context, which is basically very powerful and straightforward. The idea here is that the common code base uses components that are available in its context. On the target level, you define which components to pass in; the common code stays agnostic to whatever you pass in, and it can enforce the interface of those components with TypeScript.
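A minimal sketch of this idea in TypeScript. The component names and the context shape are hypothetical, and a tiny stand-in replaces React's `createContext`/`useContext` so the sketch runs without a DOM; in a real app these would be actual React components provided through a `Context.Provider` at the target's entry point.

```typescript
// Hypothetical interface the common code expects from each target.
// Plain functions returning strings stand in for React components here.
interface TargetComponents {
  Button: (label: string) => string;
  VideoPlayer: (src: string) => string;
}

// Simplified stand-in for React's createContext: just holds a value.
function createContext<T>(defaultValue: T) {
  let value = defaultValue;
  return {
    provide(v: T) { value = v; },
    use(): T { return value; },
  };
}

const ComponentsContext = createContext<TargetComponents>({
  Button: (label) => `[web button: ${label}]`,
  VideoPlayer: (src) => `[web player: ${src}]`,
});

// Common code: agnostic to the target, uses whatever is in context.
// TypeScript enforces that whatever was provided matches the interface.
function renderWatchScreen(): string {
  const { Button, VideoPlayer } = ComponentsContext.use();
  return `${VideoPlayer("movie.mp4")} ${Button("Play")}`;
}

// Target entry point (e.g. the TV bundle) swaps in its own components.
ComponentsContext.provide({
  Button: (label) => `[tv button: ${label}]`,
  VideoPlayer: (src) => `[tv player: ${src}]`,
});

console.log(renderWatchScreen());
```

The key property is that the shared screen code never imports a target-specific component directly, so adding a new target means only providing a new implementation of the interface.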
Next is feature flags with assigned audiences, another popular and very powerful technique. With a feature flag you can simply toggle your component on or off, and with audiences, complementary to this, you can gradually define under which conditions it should be toggled. Say you have an incident — for example, something you haven't tested turns out to have a memory leak in your fancy new feature on certain targets — you can specify even the versions and the targets for which you want to toggle it off. For sure, we also have target-specific modules. And as a legacy, we also have module swapping. I'm not a big fan of this approach, but module swapping has been with us for a while and it's still there. The main downside is that you need to provide a module with exactly the same interface, without any help from TypeScript or anything else.
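A sketch of how a flag plus an audience might combine, assuming a made-up config shape (the flag name, target names, and the naive version comparison are all illustrative, not the talk's actual system):

```typescript
// Hypothetical audience: a set of conditions under which a flag is forced off.
interface Audience {
  targets?: string[];   // e.g. ["samsung-tizen"]; absent means "all targets"
  maxVersion?: string;  // disable up to and including this app version
}

interface FeatureFlag {
  enabled: boolean;
  disabledFor?: Audience[]; // audiences where the flag is toggled off
}

interface Device { target: string; version: string }

function isEnabled(flag: FeatureFlag, device: Device): boolean {
  if (!flag.enabled) return false;
  for (const audience of flag.disabledFor ?? []) {
    const targetMatch =
      !audience.targets || audience.targets.includes(device.target);
    const versionMatch =
      // naive string comparison, fine for this sketch's version format
      !audience.maxVersion || device.version <= audience.maxVersion;
    if (targetMatch && versionMatch) return false;
  }
  return true;
}

// Incident response: a memory leak found only on one target up to a version,
// so the feature is toggled off just there while everyone else keeps it.
const fancyFeature: FeatureFlag = {
  enabled: true,
  disabledFor: [{ targets: ["samsung-tizen"], maxVersion: "4.2.0" }],
};

console.log(isEnabled(fancyFeature, { target: "samsung-tizen", version: "4.1.0" })); // false
console.log(isEnabled(fancyFeature, { target: "lg-webos", version: "4.1.0" }));      // true
```

This is also why audiences pair well with flags: the on/off switch stays simple, while the targeting logic lives in data you can change without a release.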
And I just want to remind you: on TVs, all the targets are different. We have lots of shared code, but that doesn't guarantee this shared code is going to work out of the box, so we need to test it. Utilizing a cinema room like the one on this slide is kind of cool, but it requires a lot of space at your home, and at the very least you'll need good air con over the summertime. And to be fair with you, we started with this approach: we had a sort of cinema room. We still have those in our office, but we came up with something better for remote working.

On this slide you see a very simplified, high-level version of the architecture of our remote virtual lab. There are two entry points: one is a web app, and the other one is end-to-end tests. The web app is really useful for exploratory and manual testing, while the end-to-end tests can access our remote devices through the API. The API layer is responsible for authorization, queuing, and proxying requests to the Raspberry Pi service. The Raspberry Pi has a shared responsibility: it controls the camera in front of the TV to record it, and it controls the TV itself — toggling it on and off, restarting it, and driving the remote.

Let me show you how the interface looks. We start on the page and occupy a device. Now the device is booked for us, so while I'm testing, no one else can use it. And as you can see, I can use just the web interface, and I can use this fancy remote to control the device. For some of the targets we even implemented a debugger. For those who worked with Cordova many years ago, before Chrome introduced its remote debugger, this plugin may be familiar — it's been deprecated for a while, and we are maintaining our own version of it. I also want to bring your attention to the fact — it's not loaded... here we go — that these devices are located in different places.
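The booking behaviour described above can be sketched in a few lines. Everything here is hypothetical — class name, device IDs, and in-memory storage — the real lab does this behind the API layer along with authorization and queuing:

```typescript
// Minimal sketch of device booking: one user occupies a device so that
// no one else can drive it during their session.
class DeviceLab {
  private owners = new Map<string, string>(); // deviceId -> userId

  // Returns true if the booking succeeded (free, or already ours).
  occupy(deviceId: string, userId: string): boolean {
    const current = this.owners.get(deviceId);
    if (current && current !== userId) return false; // busy for someone else
    this.owners.set(deviceId, userId);
    return true;
  }

  release(deviceId: string, userId: string): void {
    // Only the current owner may release the device.
    if (this.owners.get(deviceId) === userId) this.owners.delete(deviceId);
  }
}

const lab = new DeviceLab();
console.log(lab.occupy("lg-c9-poland", "alice")); // true: alice books it
console.log(lab.occupy("lg-c9-poland", "bob"));   // false: busy for bob
lab.release("lg-c9-poland", "alice");
console.log(lab.occupy("lg-c9-poland", "bob"));   // true: free again
```

In the real system, requests that find a device busy would land in the queue the API layer maintains rather than simply failing.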
So some of them are located in Poland, others in the UK, and definitely not in your home.