In the last two or three years, they've had to become more aware of front-end and full-stack best practices. But there's still a lot of emphasis on front-end design, and front-end is a very big topic: somebody could be using Next.js to build out an app, or they could be using other kinds of things entirely. A lot of the emphasis right now is on front-end simply because it's slightly easier for us to build benchmarks around it for model teams to optimize against. That's not to say models aren't good at back-end, but there's still a lot of work to be done on full-stack applications, scaffolding, and development. I think this is a matter-of-time issue. So right now, I would say yes, it's probably impacting front-end jobs more, but it's also going to impact those back-end ones, probably within the next year or two. Right, makes sense.
Any thoughts on tagging the level of AI use for PRs? Yes. I have a lot of people ask me variations of this question: should we be annotating our individual source files with the lines that AI generated? Should we be annotating components, modules, any of those things? I think we now have to assume people are going to be using AI, and that maybe 70, 80, 90% of the code is being generated by AI. This becomes a philosophical issue, especially for open source projects, because it's now very easy for people to just throw some code over the fence and feel good if their contribution is accepted. I feel like you need to come up with some sort of policy that discourages laziness and encourages contributors to at least have done a first-pass review themselves to understand how the code works. So if somebody sends me a PR and it's obvious they used AI to generate not just the code but also the summary of what the change is about, I'm going to call bullshit. And I think it's okay for us to call bullshit. We just need to start making sure there are agreed-on practices, whether it's an open source project or a team, and that if we're asking people for reviews, there's some evidence they've actually thought about the problem. Maybe that means we introduce practices like a very quick first-pass review: if it looks like you don't understand the change, or you just solved it the wrong way, sorry, you're going to have to redo this in some fashion. For pull requests, I'm much more open to people tagging that AI was used. For individual files, I think that's a losing battle, because especially in a team project, it goes out of date so fast.
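One lightweight way to make a policy like this concrete is an AI-disclosure section in the repository's pull request template. The sketch below is hypothetical, not a standard: the section names and checklist items are assumptions a project would adapt to its own norms, and the file path follows GitHub's convention for PR templates.

```markdown
<!-- .github/PULL_REQUEST_TEMPLATE.md — hypothetical example of an AI-use disclosure policy -->

## AI assistance disclosure
- [ ] No AI assistance was used for this change
- [ ] AI generated or suggested part of this change (tools used: ______)
- [ ] The PR summary above was written by me, not generated

## Self-review
- [ ] I have read every line of this diff and can explain what it does
- [ ] I ran the tests locally and verified the behavior manually
```

The point of the checklist is not enforcement but friction: a contributor has to assert, on the record, that they did the first-pass review themselves, which gives reviewers grounds to push back when that clearly isn't true.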
If we reduce hiring of junior engineers, what does the pipeline look like to train qualified senior engineers? Absolutely. I think this is a fundamental problem we're dealing with. We're not going to be able to lean on the same pool of senior developers forever. If you assume that model and agent quality are going to get much, much better in the next couple of years, maybe that will help solve some of the problem, but I don't think it's going to solve all of it. My personal perspective is that companies that are already aggressively reducing junior hiring are not taking a long-term view on this. As an industry, we need to be much more mindful about cutting those junior roles. We need to keep them in place. Agreed.