While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website, and it makes no difference whether that information comes from a static page or a chatbot. The stakes were relatively low in this case; Air Canada only had to pay out about 812 Canadian dollars. But the precedent this legal case sets is really important.

And speaking of court, a Manhattan judge imposed a $5,000 fine on two lawyers, Peter LoDuca and Steven Schwartz, who used ChatGPT to generate a legal brief for their 2023 lawsuit, which, as it turned out, included six fictitious case citations. In this case, ChatGPT hallucinations, or the AI confidently presenting misinformation as fact, landed them in some pretty serious hot water. Notably, this is different from that first case with the Chevrolet dealership, where the AI was manipulated into returning false information. Here, the lawyers simply asked a question and were given incorrect, fabricated information in return. In addition to the fine and the sanctions on the lawyers, and perhaps more importantly, the judge dismissed their lawsuit entirely.
And of course, not everything is quite so clear cut as a chatbot giving factually wrong information. Sometimes it's not about a single piece of content, but rather about trends and biases that only become noticeable when we look at large samples of AI output over time. For example, The Conversation, a non-profit independent news organization, looked at over 100 Midjourney-generated images over the course of six months and found several recurring biases. These included ageism and sexism: images were more likely to show older men in senior positions, and signs of aging like wrinkles or gray hair appeared only in depictions of men. They also found signs of racial bias in what they describe as an assumption of whiteness. They tested results for the titles "journalist" and "reporter" and found that, when race was not specified in the prompt, the resulting images depicted exclusively light-skinned individuals. There are also biases around urbanism, where the AI tended to place individuals in cities rather than rural areas, even though only about half of the world's population actually lives in a city. So what can we do to help mitigate some of these issues?
At this point in time, it is fair to assume that any gen-AI tech we incorporate into our applications has the potential to return these hallucinations, biases, misinformation, and other similar shortcomings. We cannot simply ignore that reality. But does that mean we have to throw the whole thing in the garbage and walk away? Not necessarily. In an October 2023 article shedding light on AI bias with real-world examples, IBM states that identifying and addressing bias in AI begins with AI governance, or the ability to direct, manage, and monitor the AI activities of an organization. Note that they are not suggesting we throw the baby out with the bathwater here either. Rather, they're recommending the inclusion of human checkpoints: human involvement in the processes we combine with these AI tools. They've created the following list of practices to help implement AI ethically. First, compliance: AI solutions and AI-related decisions must be consistent with relevant industry regulations and legal requirements. This one is, of course, just setting the baseline; anything we build with AI needs to be within the bounds of the law. That becomes slightly more complex when you're building a global product, as many of us are, because these laws are changing quickly. In the European Union, the Parliament, Commission, and Council reached a political agreement on the Artificial Intelligence Act in December 2023, which, at this point, is looking to be the world's first comprehensive regulation of AI.
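To make that idea of a human checkpoint a little more concrete, here is a minimal sketch in Python of what such a gate might look like in an application. This is not IBM's implementation or any particular library's API; the names generate_draft, human_review, and publish are hypothetical, and the model call is a placeholder. The point it illustrates is simply that AI output is held for human review and nothing unapproved ever reaches the end user.

```python
# Minimal sketch of a human-checkpoint pattern for gen-AI output.
# All names here are illustrative; the "model call" is a stand-in.

from dataclasses import dataclass
from enum import Enum, auto


class ReviewStatus(Enum):
    PENDING = auto()
    APPROVED = auto()
    REJECTED = auto()


@dataclass
class DraftResponse:
    prompt: str
    ai_text: str
    status: ReviewStatus = ReviewStatus.PENDING
    reviewer_notes: str = ""


def generate_draft(prompt: str) -> DraftResponse:
    """Stand-in for a call to a gen-AI model; returns an unreviewed draft."""
    ai_text = f"[model output for: {prompt}]"  # placeholder, not a real model call
    return DraftResponse(prompt=prompt, ai_text=ai_text)


def human_review(draft: DraftResponse, approved: bool, notes: str = "") -> DraftResponse:
    """Record a human reviewer's decision before anything reaches end users."""
    draft.status = ReviewStatus.APPROVED if approved else ReviewStatus.REJECTED
    draft.reviewer_notes = notes
    return draft


def publish(draft: DraftResponse) -> str:
    """Only approved drafts are ever returned; everything else is blocked."""
    if draft.status is not ReviewStatus.APPROVED:
        raise ValueError(f"Draft not approved: {draft.status.name} ({draft.reviewer_notes})")
    return draft.ai_text


if __name__ == "__main__":
    draft = generate_draft("Summarize our refund policy for bereavement fares.")
    reviewed = human_review(draft, approved=False, notes="Policy wording is wrong; escalate.")
    try:
        publish(reviewed)
    except ValueError as err:
        print("Blocked before reaching the customer:", err)
```

In a real system the review step would likely be a queue and a dashboard rather than a function call, but the design choice is the same one the Air Canada case argues for: the organization, not the model, signs off on what users see.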