That's it. Thank you. All right, cool. So we have loads of questions for you. We'll get started with this one: how do you tackle the sheer amount of code generated by AI? Doesn't it get overwhelming quite quickly?

That's definitely one of the reasons we're only going three times faster at this stage: AI accelerates a lot of the boilerplate you have to write, but since we're still validating everything it produces, we're not getting a 10x or 100x kind of acceleration. That's one answer. The other answer is that, in our architecture standards and in the guidelines we're writing, we try to bring in all our knowledge of how to build better systems, typically by creating the right levels of abstraction so you don't end up with thousands of useless lines of code.

Exactly. Thank you.
And that links nicely onto the next question: how do you know that the QA agent is testing things properly, rather than writing tests that just assert true equals true?

We do measure code coverage, clearly. And we designed the QA agent to test the features, which is why we also separate the understanding of what the feature does from the tech migration plan; that's part of the answer. But to give a more transparent answer, we've also tried generating tests on top of existing code just to increase code coverage. That's a very interesting exercise where you're clearly completely biased, because you're generating the tests from the code itself. One interesting learning from doing that is that we actually identified bugs in the code, because the AI has this strange intelligence where it didn't test what the code did but what the intention of the function was, and so it identified an edge case that hadn't been caught before.

Thank you for that.
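To make that distinction concrete, here is a minimal, hypothetical Python sketch (names and the bug are invented for illustration, not taken from the talk) of how a test derived from a function's stated intention can catch an edge case that a test derived from its implementation would simply have locked in:

```python
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount; a discount should never exceed 100%."""
    # Bug: percent > 100 produces a negative price instead of clamping to 0.
    return price * (1 - percent / 100)

# A coverage-driven test generated from the code would assert the buggy
# behaviour (apply_discount(50.0, 150.0) == -25.0). A test generated from
# the docstring's intention exposes the missing clamp instead:
def test_discount_is_capped_at_100_percent():
    assert apply_discount(50.0, 150.0) == 0.0  # fails on the buggy code above
```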
And someone has a question about whether your multi-agent approach and somewhat spec-driven development would also be suitable for building new features in an enterprise setting.

That's very interesting. I'm the co-founder of Theodore; we're a tech consultancy, so our job is to build tech products for a wide range of clients. The reason I'm speaking today about modernizations is that as soon as AI became this kind of existential question for us (not quite a threat, but an existential question), we realized it was unclear how we would leverage it in product building. That's really the important point: we knew that if we invested in modernizations, we would learn a lot while creating value very quickly, and some of those learnings could then be carried over into product building. So yes, on some projects we're now able to go almost 2-3x faster on new builds, but still with a lot of constraints. The tech leads harnessing the AI need a lot of autonomy on the product, because if they depend on a product owner, and that product owner depends on a lot of stakeholders to make decisions, then the AI is basically going too fast. The bottleneck is no longer the AI or the coding; it's really the product decisions.