And if there's a very high tolerance, well, great. If you really don't get a lot of value from the AI, then you might have some kind of hybrid collaboration: maybe some folks use AI for their particular work, others don't. If there's not a lot of value in AI and there's a very high tolerance, it really doesn't matter too much either way. But if there's very high value in using AI and a very high entropy tolerance, where inaccuracy is okay, where it's not going to be a big risk to you, to the organization, to the business, to whatever it is you're leading, that could be an AI-led process. That could even be a vibe-coded process. So by having this decision framework, we can make deliberate choices.
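One way to picture this framework is as a simple lookup from two inputs, value of AI and entropy tolerance, to a collaboration mode. The following is a minimal sketch; the mode names and the levels ("low", "medium", "high") are illustrative assumptions, not anything prescribed here.

```python
# A minimal sketch of the decision framework described above.
# Levels and mode names are illustrative assumptions.

def collaboration_mode(ai_value: str, entropy_tolerance: str) -> str:
    """Map (value of AI, entropy tolerance) to a collaboration mode."""
    if entropy_tolerance == "low":
        # Inaccuracy is unacceptable: humans stay in charge.
        return "human-led (AI assists at most)"
    if ai_value == "low":
        # Little value, some tolerance: let individuals choose.
        return "hybrid (optional, per-person AI use)"
    if entropy_tolerance == "high":
        # High value, high tolerance: AI can lead the process.
        return "AI-led (possibly vibe-coded)"
    # High value, medium tolerance: keep a human in the loop.
    return "human-in-the-loop collaboration"
```

For instance, `collaboration_mode("high", "medium")` returns the human-in-the-loop mode, which matches the support-ticket example that follows.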
For example, let's suppose the task is dealing with customer support tickets in some kind of ticketing system that starts with a chatbot. The question is: does this process tolerate randomness? That might depend on what the system is and what kinds of tickets we're getting from customers. If it's a banking system, and a support ticket comes in because something went wrong with a financial transaction, that's very different from a support ticket about something far less urgent. Whatever the case, let's say we decide as an organization, as leaders, that it's medium tolerance: we have some flexibility. Then the question becomes, what's the value? Let's suppose the value is very high, because if we add an LLM to sort the tickets, respond to them, and highlight urgency, that frees us up for other, potentially more important problems and surfaces the really important things we need to focus on.
So for this, perhaps you would decide on human-in-the-loop collaboration. We might have an AI be the first level of interface with the customer but allow the customer to escalate to a human, or have the AI categorize tickets by urgency and then have humans review them depending on that urgency (see the sketch below). So as we can see, by starting with the question, what is the entropy tolerance, we can make those choices, and that helps define how we deal with AI within the process itself. There are other examples we could think of, of low, medium, and high entropy tolerance processes. What might some low-tolerance processes be? Medical diagnosis systems: very low tolerance for inaccuracy. Financial transaction processing: very low tolerance for numbers that are slightly off, that aren't deterministic, that aren't consistent.
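Here is a hypothetical sketch of that human-in-the-loop ticket flow: the AI handles first contact and classifies urgency, the customer can always escalate, and humans review whatever the AI flags as urgent. The `classify_urgency` function stands in for an LLM call; it is a placeholder, not a real API.

```python
# A hypothetical sketch of the human-in-the-loop ticket flow described
# above. classify_urgency() is a stand-in for an LLM-based classifier.

from dataclasses import dataclass

@dataclass
class Ticket:
    id: str
    text: str
    customer_requested_human: bool = False

def classify_urgency(ticket: Ticket) -> str:
    """Placeholder for an LLM classifier (assumed, not a real API)."""
    # e.g., a failed financial transaction would come back "high".
    return "high" if "transaction" in ticket.text.lower() else "low"

def route(ticket: Ticket) -> str:
    # The customer can always upgrade from the chatbot to a human.
    if ticket.customer_requested_human:
        return "human agent"
    # Humans review what the AI flags as urgent; the AI handles the rest.
    urgency = classify_urgency(ticket)
    return "human review queue" if urgency == "high" else "AI chatbot reply"
```

The design choice this illustrates is exactly the medium-tolerance trade-off: the AI takes the first pass for speed and volume, while a human checkpoint catches the cases where inaccuracy would be costly.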