So in effect the company is losing money instead of gaining value. We've also seen company leaders, for example the CEO of Fiverr, sending a memo to all employees saying AI is coming for your jobs, and there are many more examples like this. I've spoken with many CEOs and company leaders, and I've seen this play out in many different situations. In some cases I've even seen CEOs redoing the homepage of the current website themselves with AI and then telling engineering leaders and engineers how easy it is. It all trickles down to engineering leaders and their teams, who have to manage these unrealistic expectations. And that's what we're going to talk about next.

I've heard a lot of unrealistic expectations from many different people. One is believing AI is a plug-and-play solution: that you can just plug it into existing workflows and things will work out very well. In reality, plug and play rarely works for any improvement, and especially not for processes. Expecting AI to eliminate all manual work is equally sensational, and it's simply not possible.

Understanding the cost and complexity of building internal LLMs

There's a big difference between building a model yourself and using an existing model like GPT. The difference can be between three days and three hundred days when you're building a certain feature. If you call an existing model, you can just call that API and it will do a lot of the heavy lifting for you. But if you build your own model, you're going to need a much bigger team and a lot more time and research.
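To make the "just call the API" side of that comparison concrete, here is a minimal sketch of what such a call looks like. It assumes a generic chat-completions-style endpoint; the URL, model name, and payload fields are illustrative conventions, not any specific vendor's current API.

```python
import json

# Hypothetical hosted-LLM endpoint, for illustration only.
API_URL = "https://api.example.com/v1/chat/completions"

def build_completion_request(prompt: str, model: str = "gpt-4o") -> str:
    """Serialize a chat-completions-style request for a hosted model.

    The hosted model does the heavy lifting; the feature code only
    has to build a request like this and send it over HTTPS.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return json.dumps(payload)

body = build_completion_request("Summarize this support ticket: ...")
print(json.loads(body)["model"])  # prints: gpt-4o
```

The entire integration is a serialized request and a parsed response, which is why a feature built this way can take days; building and hosting your own model replaces that one call with data collection, training, evaluation, and serving infrastructure.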
Other unrealistic expectations: assuming AI will just magically understand the whole context of your specific domain; expecting AI to replace entire teams; assuming AI will just work without failures; and thinking there are no ethical, legal, and security risks with AI. If you're not careful, if you're generating a lot of code and not reviewing it properly, you're going to have problems with security, and with ethical and legal issues as well, because accountability should always sit with the human, never with the AI. There's also the assumption that AI will exponentially increase performance over time, and so on.

This is a good illustration of what's happening. It all starts with sensationalistic takes by prominent public figures. That trickles down to company leaders experiencing FOMO with AI and forcing AI usage and AI feature development. We've seen a lot of companies rebranding themselves as AI companies and forcing the development of AI features. A lot of the time this backfires: by trying to force too many features, you lose sight of what problem you're actually solving for your users. A good example is Duolingo, which went all in on an AI transformation; a lot of users didn't like it and stopped using the product. This FOMO from company leaders then trickles down to engineering leaders and teams, who end up operating within unrealistic expectations. That becomes a big problem for engineering leaders and engineers, and it's also why a lot of engineering leaders feel that AI is impacting the industry negatively.