While the feature is still under active development, we want to show the progress of the AI system, for example on a development, UAT, or QA environment. However, we don't want to release this half-baked feature to production yet. So we need a way to use either the math randomizer or the AI randomizer, depending on the environment.
This is the initial code that uses a math randomizer to serve a random comic. Now, let's see how a feature flag can make the web app serve a comic from either the math randomizer or the AI randomizer. On line six, we check a feature flag called "feature flag AI randomizer". When it's on, we use the AI randomizer to serve the comic. When it's off, we use the math randomizer instead. The feature flag is stored as an environment variable. On the development environment, the value is set to on, while in production, the value is set to off.

This way we can safely build and deploy the AI randomizer code little by little, and regularly, to both dev and production. It can be continuously tested on dev without affecting any production users. Once the AI randomizer feature is complete, we can turn the flag on on prod. That is when we release the feature, even though the code itself has been lying dormant on prod for a little while.

So that's one way to do feature flagging. This example is backend feature flagging. One nice thing about this technique is that it's very versatile: it can be used in other parts of the software as well, for example in web app front-end code or mobile app front-end code.
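As a rough sketch of what that could look like, assuming an Express-style backend (the route, the `FEATURE_FLAG_AI_RANDOMIZER` variable name, and the `pickComic` helpers are illustrative stand-ins, not the talk's actual code):

```typescript
import express from "express";
// Hypothetical modules standing in for the two randomizer implementations.
import { mathRandomizer } from "./mathRandomizer";
import { aiRandomizer } from "./aiRandomizer";

const app = express();

app.get("/api/random", async (_req, res) => {
  // The flag is stored as an environment variable:
  // set to "on" in DEV, "off" in PROD.
  const aiRandomizerEnabled =
    process.env.FEATURE_FLAG_AI_RANDOMIZER === "on";

  // Flag on: serve a comic picked by the AI randomizer.
  // Flag off: fall back to the original math randomizer.
  const comic = aiRandomizerEnabled
    ? await aiRandomizer.pickComic()
    : mathRandomizer.pickComic();

  res.json(comic);
});

app.listen(3000);
```

Reading the environment variable inside the handler, rather than once at startup, also keeps the flag easy to flip from within a test, which comes in handy next.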
Right, so how about automated testing? How are we going to test features with flags? Let's check the test for this code. This is the initial test, when the web app is only using the math randomizer: when the random API is called, it should return a fake comic, which is the only comic stored in the test data storage. When the flag is introduced, we explicitly set the flag to off for the existing test. Then we add a test for when the flag is on. This way we have automated tests for both when the flag is on and when it's off. For this particular example, the tests are similar: they return the fake comic whether the flag is on or off, because we only have one fake comic in our test data storage. In many cases, a new feature behaves differently, and therefore has different tests for when the flag is on and when it's off.
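A minimal sketch of those two tests, under the same assumptions as the code above (Jest-style tests; `callRandomApi` and `fakeComic` are hypothetical helpers standing in for the talk's actual test setup):

```typescript
import { describe, it, expect } from "@jest/globals";
// Hypothetical helpers: callRandomApi hits GET /api/random,
// fakeComic is the only comic in the test data storage.
import { callRandomApi, fakeComic } from "./testHelpers";

describe("GET /api/random", () => {
  it("returns the fake comic when the AI randomizer flag is off", async () => {
    // Explicitly pin the flag off for the pre-existing test.
    process.env.FEATURE_FLAG_AI_RANDOMIZER = "off";
    expect(await callRandomApi()).toEqual(fakeComic);
  });

  it("returns the fake comic when the AI randomizer flag is on", async () => {
    // New test added alongside the flag, covering the on-path.
    process.env.FEATURE_FLAG_AI_RANDOMIZER = "on";
    expect(await callRandomApi()).toEqual(fakeComic);
  });
});
```

Here the two expectations happen to be identical because the test data storage holds a single comic; with a feature whose on-path behaves differently, the on-test would assert the new behavior instead.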
So, we just went through the secret art of feature flagging. To sum up: we start by introducing the flag, we test for both when the flag is on and when it's off, and then we write the implementation code for both states. We make sure that the flag is on on DEV and off on PROD. Then we deploy the code to DEV, and to production as well, where it's going to be dormant because the flag is off. Once the feature is complete and we want to release it, we turn the flag on on PROD. And once we're happy that the feature has been behaving in the expected way on PROD for a little while, we can remove the implementation code for when the flag is off and remove the test for when the flag is off. Then, once we're sure that the flag is not used anywhere, we remove the flag. So this is the secret art of feature flagging.
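Continuing the same illustrative example, a sketch of the end state after that cleanup: the flag, the off-path, and the math randomizer are gone, and the AI randomizer is simply what the endpoint does.

```typescript
import express from "express";
import { aiRandomizer } from "./aiRandomizer";

const app = express();

// No flag left to check: the AI randomizer is now the only behavior.
app.get("/api/random", async (_req, res) => {
  res.json(await aiRandomizer.pickComic());
});

app.listen(3000);
```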