Ethical AI for the Rest of Us


When you were a teenager, someone probably sat you down to explain “the birds and the bees”. For many, this was an uncomfortable topic; maybe even one that was avoided for as long as possible. In the development community, I’ve noticed a similar approach being taken to discussing the ethics of AI. But in the famous words of Salt-N-Pepa (mostly): “Let’s talk about AI, baby! Let’s talk about you and me! Let’s talk about all the good things and the bad things that may be.” AI is not going away anytime soon: it’s wildly interesting, full of potential, and capable of so much good – however, it also has the potential to cause serious harm. So let’s get real for a bit and talk about what should be considered in order to use AI responsibly: bias, misinformation, dataset sources, accountability and more. After all…if you’re not ready to talk about AI, then you’re probably not ready to have it.

This talk was presented at React Summit 2024.

FAQ

Kathryn Grayson Nanz is a developer advocate at Progress Software.

The inclusion of AI features is a major trend in the tech industry, with many companies incorporating AI into their products, often due to pressure from stakeholders who fear being left behind.

Examples of generative AI tools mentioned include ChatGPT, DALL-E, and Midjourney.

Using AI without intention can lead to user frustration, damage to brand reputation, bias, discrimination, and misinformation, which can have real-world harmful effects.

The Progress study found that 65% of organizations experience data bias in AI today, and 78% are concerned that data bias will become a bigger issue as AI use increases.

The Canadian Civil Resolution Tribunal ruled that Air Canada had to pay damages after a chatbot on their website provided incorrect information, establishing that companies are responsible for information provided by AI on their websites.

Best practices for ethically implementing AI include compliance with legal regulations, building trust with users, maintaining transparency, ensuring efficiency, and incorporating human oversight at critical decision points.

Trust is crucial because users need to feel confident in the reliability and ethical standards of the AI systems. Companies need to build an established culture of trust and ethical development to make users comfortable with AI features.

The Human-in-the-Loop system involves human oversight in AI processes, ensuring that AI recommendations or decisions are reviewed by humans before being finalized, which adds a layer of quality assurance and reduces the risk of bias and errors.

Retrieval Augmented Generation (RAG) is a system that supplements base AI knowledge with personalized content and data from specific sources, enhancing accuracy and reliability of AI-generated results by providing specific documents as references.

Kathryn Grayson Nanz
21 min
18 Jun, 2024

Video Summary and Transcription
AI implementation without considering user benefits can lead to harm and bias. Legal cases highlight the need for AI accountability and addressing biases. Trust, transparency, and efficiency are crucial for building AI systems. Consider the impact of AI on user experience and engage with users. Human oversight is necessary to ensure safety and respect.

1. Introduction to AI and Its Challenges

Short description:

AI is a major trend in our industry, but there is a push to incorporate it without considering the benefits to users. Rushing to implement AI without intention can lead to harm, frustration, and bias. A recent study found that 65% of organizations experience data bias in AI today.

Hey there. I'm Kathryn Grayson Nanz, a developer advocate at Progress Software. Now, if you haven't been living under a rock for the last year, you've probably been hearing a lot about a particular technology these days: AI. We are well out of the awkward beginning phase now, past the point where it was fun to play with but also, you know, consistently turning out images of people with 12 fingers.

These days, we are seeing more and more serious uses of generative AI in our everyday lives. From Instagram's Ask Meta AI to Notion's AI writing prompts to Twitter, the inclusion of an AI feature is a pretty major trend within our industry right now. And to be clear, I am not saying "trend" here in a negative way. I'm just attempting to capture the fascination with this in the current moment, because I've heard, and seen myself, the push that so many of us are getting to incorporate AI into whatever we're building, often regardless of whether it would actually be beneficial to our users or not.

This can often come from folks who aren't directly involved in the development process: maybe a product manager, a VP, a salesperson, someone similar. There can be a lot of fear right now that a product is getting left behind if it's not leveraging AI in some way, so there's this big push for us all to get on the bandwagon.

When that kind of pressure is being applied to implement a new technology, especially in our industry, with the pre-existing pressure to move fast and get there first, things can get a little messy. There are tons of amazing ways that AI can be leveraged to improve a product. But I think most of us have also seen instances where AI has just been kind of elbowed in for the sake of saying that a product is AI-powered, like that earlier example with the Instagram search bar and its AI chat. When we use AI technology without intention, we can actually do a lot more harm than good. Sometimes that just looks like frustration or disappointment from our users, which damages our brand and our reputation. But other times it can look like bias, discrimination, and misinformation, which can do a lot of real-world harm. In fact, a recent global study commissioned by Progress found that 65% of organizations experience data bias in AI today, and 78% are concerned that data bias will become a bigger issue as their AI use increases.

2. Building AI Safely and Legal Implications

Short description:

I am not anti-AI, but I am against moving fast and breaking things. We need to build safely with this rapidly evolving technology. Let's focus on generative AI, which is accessible through tools like ChatGPT, DALL-E, and Midjourney. While AI-powered features can enhance user experience, they can also create problems, as in the cases of the Chevrolet dealerships and Air Canada's chatbot. Courts are recognizing the legal implications of AI on websites.

I want to make it clear as we're diving into this: I am not anti-AI. What I am is anti-move-fast-and-break-things. Because when we do that, when we move fast and break things, what we harm is our users: actual human beings. This implementation of AI is still relatively new, and there is a lot we don't know yet. It's all changing so rapidly. The kind of content that we were getting back from GPT-3 is so different from what we're seeing today with GPT-4, and that's going to be so different from what we'll see years from now with GPT-5. It's incredibly exciting to see the tech develop so quickly, to make such awesome leaps and bounds forward. It's a very cool time to be a developer and a privilege to get to work with this technology. But it's also a challenge, and it's our responsibility to build safely with this fast-moving tech. So let's talk a little bit about how we can do that while still keeping our users' best interests at heart.

Now, the word AI is doing a lot of heavy lifting these days, and not all of that is accurate or descriptive. In addition to being kind of a buzzword, it's also become a catch-all phrase, especially for less technical folks, that lumps together everything from algorithms to predictive text to pre-made chatbots, which may or may not include any actual AI tech. But for our purposes here, in this talk, I want to focus primarily on generative AI. Gen AI is what's making the most waves for our particular audience, developers, here and now at the time of writing, because it has become so easily accessible. Things like ChatGPT, DALL-E, Midjourney. Let's be real here: most of us don't have the resources to train our own datasets, so we're using these Google or Meta or OpenAI products because they're our most accessible entry into the technology. So that's what we're going to be focused on today.

All of those tools, and of course plenty of others out there, have APIs that we can leverage to start building AI-powered features into our applications really easily. And of course, plenty of folks, including, I'm betting, many of you, have already done so. We have also seen a fair share of situations where AI-powered features have created pretty terrible user experiences or done some serious damage. For example, at the end of last year, we saw a handful of Chevrolet dealerships regret their placement of ChatGPT-powered chatbots on their websites after it became a Twitter joke to manipulate said chatbots into selling cars for a fraction of the retail price. Now, did Chevrolet actually lose any money on this? Almost certainly not. In fact, in one of these screenshots, we can even see the "please confirm all information with the dealership" safety net that they baked right in from the beginning. However, they did lose serious reputation. When you Google "Chevrolet of Watsonville," several of the first autocomplete search results are still references to this mishap months and months later, and they probably will continue to be for many months yet to come. Not what you want from a brand positioning standpoint.

Sometimes, though, we do see legal repercussions, not just social ones. Courts are beginning to rule that what an AI says on your website actually does hold weight. Just a few months ago, in February, the Canadian Civil Resolution Tribunal determined that Air Canada would have to pay damages after a customer was given incorrect information regarding bereavement flight costs by a chatbot on their website. As Tribunal member Christopher Rivers stated: in effect, Air Canada suggests that the chatbot is a separate legal entity that's responsible for its own actions.

3. AI Accountability and Addressing Biases

Short description:

While a chatbot has an interactive component, Air Canada's responsibility for information on its website is undeniable. Legal cases involving AI, such as the one with the Chevrolet dealership and the two lawyers who used ChatGPT, highlight the need for accountability. Biases in AI output, whether related to age, race, or urbanism, need to be addressed. IBM suggests AI governance and the inclusion of human checkpoints to mitigate these issues. Compliance with regulations is crucial, especially with the changing legal landscape, such as the European Union's Artificial Intelligence Act.

While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot. While the stakes were relatively low in this case (Air Canada only had to pay out about $812 Canadian dollars), the precedent that this legal case sets is really important. And speaking of court, a Manhattan judge imposed a $5,000 fine on two lawyers, Peter LoDuca and Steven Schwartz, who used ChatGPT to generate a legal brief for their 2023 lawsuit, which, as it turned out, included six fictitious case citations. In this case, ChatGPT hallucinations, or the AI confidently stating misinformation as fact, landed them in some pretty serious hot water. Notably, this is different from that first case with the Chevrolet dealership, where the AI was manipulated into returning false information. In this case, the lawyers simply asked a question and were given incorrect, falsified information in return. In addition to the fine and the sanctions on the lawyers, and perhaps more importantly, the judge dismissed their lawsuit entirely.
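
Incidents like these are exactly why guardrails belong in even the simplest chatbot integration. As a minimal sketch, assuming the OpenAI Node SDK (the dealership scenario, prompt wording, and model name are placeholder assumptions, not anything from the talk), you can pair a narrowly scoped system prompt with an always-attached disclaimer:

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const SYSTEM_PROMPT = `You are a customer-support assistant for Example Motors (a hypothetical dealership).
Only answer questions about vehicle features, service hours, and scheduling.
Never quote, negotiate, or agree to prices, discounts, or contracts.
If asked about pricing or anything off-topic, direct the user to a human representative.`;

export async function askChatbot(userMessage: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o", // placeholder; use whichever model your account provides
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: userMessage },
    ],
    temperature: 0.2, // lower temperature for more predictable support answers
  });

  const answer = response.choices[0]?.message?.content ?? "";
  // Always attach a disclaimer so generated text is never presented as a binding offer.
  return `${answer}\n\n(Please confirm all information with a human representative.)`;
}
```

A system prompt alone won't stop determined prompt injection, as the Chevrolet incident shows, which is why the disclaimer and the human checkpoints discussed later still matter.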

And of course, not everything is quite so clear-cut and obvious as a chatbot giving factually wrong information. Sometimes it's not about a single piece of content, but rather trends and biases that are only noticeable when we look at large samples of AI output over time. For example, The Conversation, a non-profit independent news organization, looked at over 100 Midjourney-generated images over the course of six months and found several recurring biases. These included ageism and sexism, where images were more likely to show older men in senior positions and only included signs of aging, like wrinkles or gray hair, in depictions of men. They also found signs of racial bias in what they describe as an "assumption of whiteness." They tested results for the titles "journalist" and "reporter" and found that, when race was not specified in the prompt, the resulting images depicted exclusively light-skinned individuals. There were also biases around urbanism, where the AI tended to place all individuals in cities rather than rural areas, even though only about half of the world's population actually lives in a city. So what can we do to help mitigate some of these issues?

At this current point in time, it is fair to assume that any gen AI tech we're incorporating into our applications has the potential to return these hallucinations, biases, misinformation, and other similar shortcomings. We cannot simply ignore that reality. But does that mean we have to throw the whole thing in the garbage and walk away? Not necessarily. In an October 2023 article shedding light on AI bias with real-world examples, IBM states that identifying and addressing bias in AI begins with AI governance, or the ability to direct, manage, and monitor the AI activities of an organization. Note that they are not suggesting we throw the baby out with the bathwater here, either. Rather, they're recommending the inclusion of human checkpoints: human involvement in the processes that we combine with these AI tools. They've created the following list of practices to help ethically implement AI. First, compliance: AI solutions and AI-related decisions must be consistent with relevant industry regulations and legal requirements. This one is, of course, just setting the baseline. Anything that we build with AI needs to be within the bounds of the law. Of course, this becomes slightly more complex when you're building a global product, as many of us are, because these laws are changing pretty quickly. In the European Union, the Parliament, Commission, and Council reached a political agreement on the Artificial Intelligence Act in December of last year, which, at this point, is looking to be the world's first comprehensive regulation of AI.

4. Building Trust, Transparency, and Efficiency in AI

Short description:

The AI Act establishes risk categories and regulations for AI systems, while transparency requirements ensure unbiased data is used. Trust and transparency are crucial for building AI systems that users feel comfortable with. Companies should prioritize protecting customer information and establish a culture of trust. Transparency in AI algorithms and datasets used is essential. Implementing a RAG system can enhance accuracy and reliability. Efficiency is another important aspect, with AI designed to achieve business goals and reduce costs.

The AI Act includes the establishment of different risk categories for AI systems and associated regulations for them, as well as transparency requirements. In the U.S., President Biden issued an executive order in late 2023 regarding AI, requiring all federal agencies to complete a series of AI safety- and security-related actions on strict timelines, all of which are reported to have been met as of April 2024. Many of these actions included updating existing rules and regulations to account for the existence of AI, like affirming that existing prohibitions against housing and employment bias do include things like AI screening tools. They also created guidelines and committees to inform future AI legislation. This is obviously just one piece of a puzzle that is still developing, but it is something that we all need to keep an eye on if we plan to leverage AI in our products. Ensure that you're aware of the most recent legal requirements and comply with new regulations as quickly as possible.

Trust. Companies that work to protect customers' information build brand trust and are more likely to create trustworthy AI systems. Trust is always a crucial aspect of product development. What we do with our users' information, how reliable our services are, whether we're avoiding dark UX patterns: all of that has a strong impact on both the user experience and the overall success of the company. AI adds a new layer to this, because most laypeople, and even some developers, don't fully understand how it works. Making use of a black-box technology requires a lot of trust. For users to feel comfortable taking advantage of any AI features we build into our product, we need to have already demonstrated our reliability to them in other ways. We need to have built an established culture of trust and ethical development, ideally before we start introducing AI into the picture. If that doesn't exist at your company, now is the perfect time to take a look at what user responsibility means to you personally, to your team, and to your company and brand as a whole. Ensure that any announcements, documentation, or marketing communications related to AI in your app explain the situation clearly. Will user input data be used to further train the LLM? Can users opt into or out of that? All of that information needs to be shared with users in plain language they can understand.
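
One concrete way to back up that kind of promise is to gate any data retention behind an explicit, default-off consent flag, so conversations are only ever kept for model improvement when the user has opted in. Here's a minimal sketch; every name in it (User, generateAnswer, logForModelImprovement) is hypothetical, for illustration only:

```ts
// A minimal sketch of an explicit, default-off consent gate for AI training data.
interface User {
  id: string;
  optedIntoAITraining: boolean; // defaults to false: users opt IN, they are never opted in silently
}

async function generateAnswer(prompt: string): Promise<string> {
  // Stand-in for a real model call, e.g. the askChatbot sketch shown earlier.
  return `echo: ${prompt}`;
}

async function logForModelImprovement(userId: string, prompt: string, answer: string): Promise<void> {
  // Hypothetical sink: a dataset bucket, analytics table, etc.
  console.log(`[training-data] user=${userId}`, { prompt, answer });
}

export async function handlePrompt(user: User, prompt: string): Promise<string> {
  const answer = await generateAnswer(prompt);
  if (user.optedIntoAITraining) {
    // Conversations are retained for model improvement only with explicit consent.
    await logForModelImprovement(user.id, prompt, answer);
  }
  return answer;
}
```

The important design choice is the default: consent is something users grant, not something they have to discover and revoke.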

Transparency. Because of the complexity of AI, an algorithm can be a black-box system with little insight into the data used to create it. Transparency helps ensure that unbiased data is used to build the system and that the results will be fair. Speaking of that black-box nature of AI: it also means that we need to be as transparent as possible about what dataset is informing our results and where that data came from. For those of us who are working with OpenAI or similar, that might look like implementing a RAG (Retrieval-Augmented Generation) system to help enhance accuracy and reliability. RAG allows us to supplement base AI knowledge with personalized content and data. For example, if you wanted to use GPT-4 to answer user questions about your product, you could give it access to things like your SharePoint, your marketing or documentation files in Google Drive, or your ticketing system. That not only helps return more accurate and relevant results, but it also maps those results to specific documents, which adds sources and credibility. Bad data is going to generate bad results. We might not have the resources to completely train an LLM from scratch, nice as that might be, but there are still things we can do to improve the efficacy and accuracy of the tools we are using.
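
To make that concrete, here's a minimal RAG sketch in TypeScript, assuming the OpenAI Node SDK; the document chunks, model names, and prompt wording are placeholder assumptions rather than anything prescribed in the talk. The flow is: embed the question, find the most similar document chunks, then ask the model to answer only from those chunks and cite them.

```ts
import OpenAI from "openai";

const client = new OpenAI();

// Hypothetical in-memory corpus; in practice these chunks would come from your
// docs, SharePoint, Google Drive exports, ticketing system, etc.
const documents = [
  { id: "pricing.md", text: "Bereavement fares must be requested before travel begins..." },
  { id: "refunds.md", text: "Refund requests are processed within 30 days of submission..." },
];

const embed = async (text: string): Promise<number[]> =>
  (await client.embeddings.create({ model: "text-embedding-3-small", input: text }))
    .data[0].embedding;

const cosine = (a: number[], b: number[]): number => {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
};

export async function answerWithSources(question: string): Promise<string> {
  // 1. Embed the question and score every document chunk against it.
  const qVec = await embed(question);
  const scored = await Promise.all(
    documents.map(async (d) => ({ ...d, score: cosine(qVec, await embed(d.text)) }))
  );
  // 2. Keep the most relevant chunks as grounding context.
  const top = scored.sort((a, b) => b.score - a.score).slice(0, 2);

  // 3. Ask the model to answer ONLY from that context, and to cite its sources.
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content: "Answer using only the provided context, and cite the source id. " +
                 "If the context does not contain the answer, say you don't know.",
      },
      {
        role: "user",
        content: `Context:\n${top.map((d) => `[${d.id}] ${d.text}`).join("\n")}\n\nQuestion: ${question}`,
      },
    ],
  });
  return response.choices[0]?.message?.content ?? "";
}
```

In production you would pre-compute and cache the document embeddings in a vector store rather than re-embedding on every question, but the shape of the pipeline stays the same: retrieve first, then generate, with sources attached.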

Efficiency. AI should be designed to help achieve business goals, improve speed to market, and reduce costs. This one seems self-evident, right? And yet I think there's some really good guidance here as to where and when we include AI features in our product.

5. Considering AI Usage and Human Oversight

Short description:

Consider the need for AI, its impact on user experience and cost reduction. Engage with users to understand their preferences for automation. Human oversight is necessary to ensure the safety and respect of users. Stay updated on legal requirements, build trust, and use transparent AI systems. Implement checkpoints and reviews to maintain the human touch. Developers have a responsibility to use AI ethically and consider its high risks and rewards.

It's tempting to make something AI-powered because that's what everyone's doing, but I'd encourage you to think more about this. Do we need AI here? Will it speed things up? Does it reduce costs? And perhaps most importantly, does it truly improve the user experience? Is this use case actually enhanced by AI? At the risk of sounding cliché, we have to make sure we're not so preoccupied with whether or not we could that we didn't stop to think if we should.

I would also really encourage you to speak with some actual users about AI while you're making this determination. Sometimes there are things that we can technically speed up or enhance, but those may not be the things that users are interested in automating. Content creation is a great example here, because yes, AI can draft a blog post, can write song lyrics, can create illustrations, but do users really want that?

Finally, human touch. Processes like the Human-in-the-Loop system offer options or make recommendations that are then reviewed by humans before a decision is made, providing another layer of quality assurance. AI may be a powerful tool, but at least at this point in time, it's not completely trustworthy. As we saw in those earlier examples, there are lots of ways in which unsupervised AI usage can land us in some pretty bad situations. Consider where a human review checkpoint would be most useful in your AI approach, and ensure that it's not being skipped. These aren't just boxes to be checked; they're crucial to the safety and respect of our users. It is easier than ever to take advantage of AI to enhance our websites and applications, but part of doing so responsibly is being aware of the inherent risk and doing whatever we can to mitigate it.
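
As a minimal sketch of what that checkpoint can look like in practice (the types and function names here are hypothetical, for illustration only), AI output can be routed into a review queue, where nothing reaches users until a named human reviewer approves it:

```ts
// A human-in-the-loop checkpoint: the model drafts, a person approves or
// rejects, and nothing is released to users unreviewed.
type ReviewStatus = "pending_review" | "approved" | "rejected";

interface Draft {
  id: string;
  userQuestion: string;
  aiDraft: string;
  status: ReviewStatus;
  reviewer?: string;
}

const reviewQueue: Draft[] = [];

export function submitDraft(id: string, userQuestion: string, aiDraft: string): void {
  // AI output always enters the queue as pending; it is never auto-sent.
  reviewQueue.push({ id, userQuestion, aiDraft, status: "pending_review" });
}

export function review(id: string, reviewer: string, approve: boolean): Draft | undefined {
  const draft = reviewQueue.find((d) => d.id === id);
  if (!draft) return undefined;
  draft.status = approve ? "approved" : "rejected";
  draft.reviewer = reviewer; // record who signed off, for accountability
  return draft;
}

export function publishable(): Draft[] {
  // Only human-approved drafts are ever released to users.
  return reviewQueue.filter((d) => d.status === "approved");
}
```

Recording who approved each draft also gives you an accountability trail, which matters if, as in the Air Canada case, you're held responsible for what the AI said.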

So keep an eye on legal updates, be ready to potentially make changes in order to comply with new regulations, build trust with your users by sharing information freely and removing or reducing that black box feeling as much as you can. Consider implementing a retrieval augmented generation system to enhance the accuracy and reliability of your results and reduce hallucinations. Make sure you're listening to what users want and implementing AI features that enhance rather than diminish their experience. And establish checkpoints and reviews to make sure that the human touch has not been removed from the equation entirely.

Software exists for humans; humans do not exist for software. AI is, right now, high risk and high reward. As developers, we have a responsibility to build steps into our processes for using it ethically. It's a privilege to have the opportunity to work with such groundbreaking tech, to be developers in such an exciting time. I know that I am not alone in my eagerness to find out where AI is going to take us in the next 3, 5, or 10 years. I cannot wait to see what you build with it. Thank you.
