There's one question, and I had the same one. As you switched from Node to Python and reskilled the team with AI assistance, did the code quality suffer, and in the end, did that matter? Tough question. I think, at first, it would have done. However, we were quite fortunate in that there were other engineers and principal engineers who knew both Node and Python, so we were able to call on them to review it. At CDL, we do have lots of tracking metrics, so we use the tooling in GitLab for monitoring code quality. We use Orca as well for scanning for vulnerabilities, and we take a very hard stance in that we don't want any vulnerabilities from that point of view, and that's already been rolled out to the teams to check. So I would say, no, it didn't suffer, but I can only go off the dashboards for those.
How big was the team that actually worked on it and executed in the end? So, the team at the start was, I think, eight people, including the tech lead. We purposely kept it small because they had to move fast and make decisions, and sometimes when you have multiple teams, you can spend more time discussing things. We knew we had a forced deadline on us, we would agree the requirements they wanted, and we wanted to move fast, so we kept that team pretty nimble. It hasn't stayed that way, though. It's since scaled to four teams off the back of it, but starting off small, growing it, and taking those learnings, rather than just trying to roll it out to the whole business, really worked. Especially now that you've got the AIDLC concepts, being able to trial that in one team first and see how it goes really works.
Nice, very nice. Thanks for the talk, Tim. It was great. That's always nice. You mentioned Amazon Q. Did the team use any other AI tools? Are you prescriptive about what tools/models engineers can use? So, in insurance and at CDL generally, we have a lot of policies that restrict what we can use, so we have an AI group to which I, or anybody within the business, can submit a tool of their choice, but it has to go through compliance. AWS things are easy because they're already signed off, but we have things like Google Gemini, which I love for my job and find more useful. They do have access at some level to ChatGPT and others, but we have a list of approved tools. The designers, for example, have Figma, so they can do all the AI features in that, but, yes, we have a list of approved tools that they can use. To be blunt, we like to rinse the partnership with AWS, so everything they offer, we take and bring in, and that's probably why we're more mature in the AWS space.

And then in the hackathons and innovation days, is it the same tool set, or do people get to experiment on those days specifically? So, yes, when it comes to tools, you've still got to put them through the process, if you like. What you can get is a status of "investigating", and then you've just got to be careful about the data you put in, so we do allow that to happen, it's just that you've got to do a bit of paperwork to get it approved. Everyone's really supportive and it's becoming a lot easier. We ask questions about which foundation model it is, and if it's using Claude or Gemini, then it becomes a lot easier for us to approve. So it's just picking those, but the company is receptive to it, it's just that we have to do our due diligence.

Yeah, that makes sense. All right, how do you give the team space to learn and adopt these practices versus the pressure you had to deliver? Really great question, and I think it's changed recently. We do have things like the hackathons and the enablement days. We also give everybody, and it might not sound like a lot, 5% of their time to do what they want to do. The reason that might feel low is that we've recently moved to a product operating model where we use initiatives to drive the work, and if you're doing learning as part of that initiative, it just counts towards it, so really, it's unlimited. If you need to do that learning to achieve that goal, you can use that time, and you don't have to log it elsewhere, you can just put it as part of that. It's just discovery, right? It's how you're going to build the thing. It did scare people at first when we dropped it from their 10% time or whatever, but once we explained, look, just articulate it as part of the initiative, it actually had a secondary benefit. It helps with our R&D tax credits, so if you're in technology in the UK, you're probably claiming those, and it made it really easy to log because it's research, right? You're researching the tools to do the job. So yeah, that's how we do it. We don't have a limit, but obviously, we try to be very focused on the things we're trying to achieve, so it's those tools that you think will help.