AI Engineer End-to-End Workflow: Productivity Challenges and Their Solutions

This ad is not shown to multipass and full ticket holders
JSNation US
JSNation US 2025
November 17 - 20, 2025
New York, US & Online
See JS stars in the US biggest planetarium
Learn More
In partnership with Focus Reactive
Upcoming event
JSNation US 2025
JSNation US 2025
November 17 - 20, 2025. New York, US & Online
Learn more
Bookmark
Rate this content

Building generative AI applications requires a GenAIOps mindset that takes you on a journey from model selection to prompt ideation, prototype evaluation, and app deployment. In this talk, we’ll explore the GenAIOps journey from an AI Engineer perspective and identify productivity challenges, and the tooling solutions that can mitigate these issues and help streamline their developer experience.

This talk has been presented at Productivity Conf for Devs and Tech Leaders, check out the latest edition of this Tech Conference.

FAQ

Nithya Narasimhan is an AI advocate at Microsoft.

The focus of the talk is on productivity challenges and solutions for AI engineers, particularly in the journey from catalog to code to cloud.

An AI engineer is someone who can pick a model, customize it through various techniques, evaluate it for quality, and deploy it for real-world applications.

'Exposure to AI' refers to a task being exposed to AI if AI-powered software can reduce the time needed to complete the task by at least 50%.

Contoso Chat is a generic retail application used to demonstrate the integration of a chatbot into a retail website to improve customer interaction and sales.

The desired features are conversational interaction, grounding responses in catalog data, context awareness, and safety from malicious activities.

The three stages are ideation, augmentation, and operationalization.

Infrastructure as code allows defining infrastructure needs through code, ensuring consistency across team members and enabling use of tools like Azure Developer CLI for provisioning.

The Azure AI Inference API provides a unified abstraction to interact with various models, allowing developers to swap models with minimal effort and enhance productivity during the ideation phase.

AI-assisted evaluation involves using another AI to grade the responses of the first AI, often referred to as 'LLM as a judge,' to scale evaluation against large datasets.

Nitya Narasimhan
Nitya Narasimhan
22 min
27 Mar, 2025

Comments

Sign in or register to post your comment.
  • Va Da
    Va Da
    P4
    AI is taking over the world.
Video Summary and Transcription
Today's Talk explores productivity challenges and solutions for AI engineers, focusing on an application called Contoso Chat. The end-to-end workflow is examined from a productivity perspective, introducing the concept of Exposure to AI. Building an AI application involves ideation, augmentation, and operationalization stages. Provisioning and setup are crucial steps, with infrastructure as code being a productivity tool. GitHub Codespaces and development containers provide consistent development environments. Azure AI Inference API allows easy iteration and prototyping. AI-assisted evaluation involves training AI models and using evaluators for grading responses. Custom evaluators can be created. Overall, the Talk emphasizes the importance of productivity throughout the AI engineer's journey.

1. Introduction

Short description:

Today, I want to talk about productivity challenges and solutions for the AI engineers journey. I will distinguish between apps and ops and explore the use of AI-based efficiencies. We will focus on an application called Contoso Chat to discover how AI can improve productivity throughout the workflow. An AI engineer has the skills to pick and customize a large-language model, evaluate its performance, and deploy it for real-world applications.

Hello, my name is Nithya Narasimhan and I am an AI advocate at Microsoft. Today, I want to talk to you about productivity challenges and solutions for the AI engineers journey from catalog to code to cloud. But to set the stage, I actually want to start by asking, what does productivity mean and who is this AI engineer? And I want to think of it in two steps.

First, I want to distinguish between apps and ops. And second, I want to use this really nice question, is my task exposed to AI, as a way to ask, is there room for AI-based efficiencies in this step of my workflow? For the bulk of this talk, we'll really focus on setting the stage with an application called Contoso Chat that I've been working on for a year or more and use that to kind of walk through this end-to-end workflow and say, in every stage of this, is there room for efficiencies and productivity with AI?

So to set the stage, we all, when we think about AI and productivity, we're really thinking of ourselves as app developers. And we're using AI for efficiency in our coding workflows. That means we might be scaffolding out a new app. We might be writing tests. We might be writing documentation, debugging, getting explainers, et cetera, using tools like Copilot. But when I think about an AI engineer, an AI engineer is somebody who has the skills to go all the way from the very first step of picking a large-language model and then customizing it through prompt engineering or fine-tuning or retrieval-augmented generation, and then evaluating it to make sure that it's providing good quality responses for a lot of diverse inputs, and then deploying it so it can actually be integrated with real-world applications to deliver that service.

2. Productivity Perspective

Short description:

Let's look at the end-to-end workflow from a productivity perspective. Chip Huyan's AI Engineering book introduces the concept of Exposure to AI, where tasks can be considered exposed to AI if it reduces the completion time by 50% or more.

Now, that's a lot. But how can I now look at that end-to-end workflow from a productivity perspective? And I want to recommend all of you read this book if you haven't. This is Chip Huyan's AI Engineering. And within the first chapter or so, she actually has this term that caught my attention called Exposure to AI. And it comes from this paper called Elundo et al. But what I really liked is it kind of says exposure to AI. It defines a task as being exposed to AI if AI and AI-powered software can reduce the time needed to complete that task by at least 50%. And I thought this is something I can measure.

3. Contoso Chat Application

Short description:

In this part, we explore an application called Contoso Chat. It is a retail website that uses a chatbot to help customers find products. The chatbot needs to be conversational, grounded in data, contextual, and safe. Building such AI applications can be complicated and requires a paradigm shift.

So if in my end-to-end workflow, if there are parts of that workflow where I can actually use AI to reduce the time it would have taken me to do the task, then that's a productivity measure.

So now let's actually talk about an application. So I'm going to play this. And this is Contoso Chat. So Contoso Chat is this application that kind of is probably familiar to a lot of you. It's a generic retail application. And I'm going to actually see if I can play this and let it speed up just a little bit.

So when we look at this, this is the equivalent of a retail website. And it has a ton of products. It's a camping and hiking website. And what they've realized is that customers coming to this are not able to find the things they want. So they call the customer hotline and they're losing sales. So they decide to integrate a chatbot right into the site so the customers can chat with their product catalog. So they want four features.

First, it has to be conversational. A customer should be able to ask questions the exact same way they would on a customer hotline using natural language. And it should respond in natural language, conversational. Second, it should be able to ground the responses in the data on that site. So if a customer says, what should I buy for a trip to Andalusia? It should come back with content from that catalog, this tent, that camping stuff, etc. Third, it should be contextual. Since the customer has been chatting, it should know about the chat history. It could also use the customer's login to say, okay, let me go fix your, look at your previous purchases. And can use that contextually to help provide relevant responses to their questions. Last but not least, it has to be safe. What that means is that we want to make sure malicious users can't do things like jailbreak the system. They shouldn't be able to ask me what my rules are or try to change my rules. And I need to program that into the system as well.

That's a lot. Building generative AI applications can be complicated. And it requires us to have a paradigm shift.

4. Building an End-to-End AI Application

Short description:

Building an end-to-end AI application involves three stages: ideation, augmentation, and operationalization. In the ideation phase, the AI engineer selects models and tests them with a single prompt. In the augmentation phase, the model is customized and evaluated using diverse inputs. Finally, the application is operationalized by going through multiple steps, with a focus on productivity and consistency at both individual and team levels.

Because natural language for inputs and outputs means there is no end of possibilities. And we need to check against them all. So when you think about building an end to end, we're really thinking of three stages.

The first ideation is where your AI engineer says, what is my application? What model should I use? So over here, I know I'm going to do retrieval augmented generation. So I know I need to pick at least one embedding model and one chat model. But then in this first phase, they get a model and there are so many options. Which model do I choose? I pick one. And then I test it out with a single prompt and say, does it do what I think I need for my app? And if it does, OK, I've got my model, I can move to the next stage.

In the next stage, I then start augmenting it. I start customizing that model. This is when I do prompt engineering. This is when I bring data from other sources for RAG. This is when I might have to fine tune. But regardless, I have now got myself to a point where I've really made that into a robust prototype. And I've been doing this by testing it with a single prompt manually. But at this stage, I also do evaluation. I take a lot of data sets and I say, I want to run a scale evaluation, test against diverse inputs and see if the quality and safety metrics hold. At that point, I'm ready to go to operationalization.

Now, this looks very simple, right? Not so fast. If you look under the covers, there's a ton of steps. This is where I need productivity because I have to make sure I go through all these steps. And if I could somehow use AI to speed it up, that would be an efficiency. So let's look at what the challenges are. We can actually think of challenges in two different ways. First, there's productivity at an individual developer level. Second, is there's productivity at the level of the team. And those are two different things. At an individual level, it's like saying whatever task I'm working on, can AI help me make that efficient? But when we think about the team as a whole, because this is a complex project, we really want to think about consistency and reproducibility. Whatever kind of technique or tool we're going to use has to work across everybody. And so, when we think of the workflow, I really want you to think about these steps.

5. Provisioning and Setup

Short description:

Provisioning involves ensuring the availability of necessary resources in the cloud, such as Azure Managed Identity for authentication, Azure Container Apps to host the application, Azure OpenAI Service to deploy models, Azure AI Search for knowledge retrieval, and Cosmos DB for customer database and history. Provisioning is done on the infrastructure side, while setup focuses on the development environment. To improve productivity in these steps, infrastructure as code can be used, which allows defining infrastructure needs through files in the code base.

Provisioning and setup. Provisioning is where I say that is the architecture diagram for my application. And I need to make sure I have all those resources provisioned in the cloud. So here, I have Azure Managed Identity. I need some resource for authentication. I have Azure Container Apps. I need some resource to host my application. Azure OpenAI Service. I need a resource that will deploy my models. Azure AI Search. I need a resource that will do knowledge retrieval for me. Cosmos DB. I need a resource that has my customer database and history. And so on.

Provisioning is done on the infrastructure side. Setup is really about my development environment. And so, those first two steps are just me getting ready to kind of start building, right? Once I do that, I have to go through ideate, evaluate, and deploy. So let's take a look at how we can start looking at productivity in these steps. Productivity and provisioning. So I'm really quickly going to show you that, on the one hand, I have a code space running, which has the Contoso chat repo. And on the other side, I have a code space running, which has a GitHub models code spaces. And we'll talk about each of these in a minute. But in my first step, I want to look at provisioning infrastructure. How could I make that efficient? So I want to ask, what is exposed to AI in this step? And it turns out that the solution to both the challenge of consistency across the team and exposure to AI come from the same source. I am going to use infrastructure as code. Infrastructure as code is where I'm able to define my infrastructure needs through files that I can put into my code base. So over here, let me actually close this out a little bit so you can see clearly. I actually have this Azure YAML file, and this is telling me what resources are going to be needed. So it says, OK, first of all, here are the top level apps and resources that I will need to set up in environment variables. There's an infrastructure folder over here, which has more details on every resource I need to implement.

6. Infrastructure and Development

Short description:

The minute I've defined my infrastructure as code, everyone on my team gets the same experience. The Azure Developer CLI or AZD can read infrastructure files and handle provisioning. Copilot Edits allows quick modifications of code. Infrastructure as code and configuration as code ensure consistency across the team. Development containers provide a container with pre-installed dependencies for a consistent development environment.

And these are all files. Why is that important? The minute I've defined my infrastructure as code, I can check it in, and now everyone on my team gets the exact same experience. There's a tool called the Azure Developer CLI or AZD, which knows how to read this infrastructure files and actually do the provisioning for me, which means if I want to change something, I can commit it to the data into the repo, and everyone on my team will get that same change and the same provisioning experience out of the box.

But wait, the minute it's code, I can now use Copilot to do something when I want to modify it. So over here, let's go back in here. I actually am using this new feature called, if you haven't tried it, you definitely should. Let me go ahead and first get my prompt. Copilot edits allows you to basically do edits with multiple files. It's a new feature, so you can, if you install the Copilot chat extension and look at the stab, you can try it out. But over here, what I really wanted to do, and I'm going to actually close this file just to give you the full experience, I can go in here and I'm going to copy over that prompt that I had in my slide, which says, hey, I know AZD kind of manages this infra and I know nothing about Bicep or Terraform. Can you just locate wherever the file is that talks about AI models and change the GPT-4 capacity to 40? Because it looks like I need more code. And lo and behold, if you saw that, it found the file and it went ahead and said, hey, this is the line to change the capacity. Should I go ahead and do it now? For now, I'm going to undo this because this is actually working code and I don't want to change it. But you get how quick that was. So, look at the amount of time I saved. I didn't know anything about AZD. I don't know anything about YAML. I don't know anything about Bicep, don't know where the file is. Save me all the time. I said what I wanted done. It got it done because infrastructure was as code.

Next, that was for the provisioning part. Now we want to talk about the development. So provisioning was about infrastructure. How about the development environment? Same deal. The way we have infrastructure as code, we now have this ability called configuration as code. If I can define my configuration in files that get checked into my code base, then everyone on the team gets the exact same experience. In this case, we achieved this through something called dev container or development container. A dev container is a Docker container configuration that you can check into your code base. And everyone who kind of checks out your code basically activates that container and they're getting a container with all the dependencies pre-installed.

7. GitHub Codespaces and Development Environment

Short description:

In the case of GitHub Codespaces, you can launch the dev container from the GitHub repo and get the same experience. The configuration is a code file, allowing easy modifications. Modify the development environment with minimal effort. Add features for different dependencies in the dev container.

In the case of GitHub Codespaces, you can go to GitHub repo and literally launch the dev container from the repo and you will get this container set up in a VM in the Azure cloud all ready to go with no effort on your part. And everyone on the team gets the exact same experience.

Again, now the configuration is a code file, so I can change it. So here I'm going to do a kind of funny demo. Before I kind of started this, I looked at what extensions I could kind of install. So there's this really cool extension called VS Code Pets, I'm going to show it to you just in a minute. And what it does is it puts little pets on your timeline that keep walking around while you code. I thought it was pretty cute. Now, I'm not saying this is going to boost your productivity or maybe it does, but what I really wanted to tell you is you can see that I actually have a dev container.json.

Now, in here, I just showed you how you can modify your development environment and then check it in with very minimal effort. But you could not think about that dev container being where you're adding in other dependencies. Please go ahead and add a feature for the Node environment, add a feature for the Azure CLI, add a feature to support Jupyter Notebooks, etc.

8. Adding Extensions to Dev Container

Short description:

Talking about the ease of adding extensions, such as the VS Code pets extension, to the dev container configuration. Modify the development environment with minimal effort and add features for different dependencies in the dev container.

Now, I'm talking about it as a pets extension, but it could have been something a little bit more substantial. It could have been an Azure for copilot extension, etc. But for now, just to show you how easy it is, sorry, that's the wrong prompt. Let me go ahead and grab the new prompt.

So this new prompt, I'm saying, OK, in this code base, I would like you, copy and paste errors. OK. So I'm going to have you add the VS Code pets extension to my dev container configuration. And remember, I kind of maybe don't know where the dev container file is. I don't know where to add it within that file, etc. What it's able to do is it finds it out for me, goes through, and there, it's added that in there. Right? So I'm going to undo this for now, but you get the drift. It's super easy.

Now, in here, I just showed you how you can modify your development environment and then check it in with very minimal effort. But you could not think about that dev container being where you're adding in other dependencies. Please go ahead and add a feature for the Node environment, add a feature for the Azure CLI, add a feature to support Jupyter Notebooks, etc. So this is development.

9. Building with Azure AI Inference API

Short description:

Picking a model and testing it with a prompt using the Azure AI Inference API allows for easy iteration and prototyping. The API provides a unified abstraction to any supported model, enabling code to be written once and models to be swapped with minimal effort.

Now let's come to where we're starting to build the application. So what's exposed to AI here? So at this point, I want to build an application by picking a model and testing it with a prompt. And then I want to iterate on it until I get a working prototype. To do this, I'm actually going to show you something called the Azure AI Inference API, which is something we all use, but there will be others who do similar things. And the way this works is as follows. When you think about a model and you want to program to it during the ideation phase, you have to pick a model, you have to test the prompt. And if it's not good enough, you have to throw it away, go and deploy a new model, test a prompt, and so on. Wouldn't it be cool if in this first phase, you have the ability to just, from one place, try out various models with a prompt without having to go through all this hassle? Could I make it more productive? Well, it turns out that every model has an SDK or some library for you to talk to it. So OpenAI has an OpenAI SDK, Mistral has its own SDK, Coher might have its own. So no matter what, if I start writing code, I have to write each one of these and then I have to change that. The Azure AI Inference API takes that problem away from you. It gives you a unified abstraction to any model that it supports. And so what you would do is you'd write your code to this, and this would then map to the specific model. All you tell it is, hey, today I'm using this model. It'll figure out how to map these calls onto a model-specific SDK. And so you're able to now write your code once and swap models with minimal effort. That's a productivity hack right there.

10. GitHub Codespaces Models and Azure Inference API

Short description:

GitHub Codespaces Models allows for easy experimentation with cutting-edge models and ideation. The examples provided by GitHub include raw SDKs and Azure inference. By running a basic.py example, the user can utilize the Azure Inference API to ask a question and receive a response while also making changes to the code to call two models and modify the question for a specific use case.

Well, wait, we can do more. Because the minute I have an API, it's code. And now I can again go ahead and try to do things with it from a copilot to speed up my ideation process. So here I'm actually using a different code space for a very simple reason. This is called GitHub Codespaces Models, and I've actually launched a Codespaces on that. Codespaces Models is associated with the GitHub Marketplace. So let me actually show you that GitHub Marketplace models. OK. So if you've never played with it, you should. The GitHub Marketplace has got a whole bunch of models available, and you don't need an Azure account. All you need is a GitHub account. They're free to use. They're rate limited. But it gives you a really nice place for you to go play around with a lot of cutting-edge models and ideate. Like, get a sense of, is this going to be good for my use case? Now, what we've got with the Codespaces Models repo is GitHub has set up all of these examples using the raw SDKs, but also Azure inference.

So you can kind of use a Mistral SDK if you want, but there's also this inference SDK. And what I want to do is I want to go ahead and say, hey, I know that over here, and I'm going to run this first, this basic.py, I know that there is this basic.py, which is an Azure Inference API example. And what it does is it uses a model. Let's see where that is. It uses gpt40mini, and it's asking a simple question. What's the capital of France? But what I want to do is I want to use this to ideate, right? So kind of here is my new prompt. I'm going to put it in here. And I'm going to say, can you please update this? But I now want you to change the code so it calls two models, not one. So because I want to be able to look at both of them. And I want you to change the question because I'm actually building an app for tourists, right? So I can go ahead and say, OK, this time, I'm actually going to accept it so we can run it. And when I take these responses, it wrote the code for me. I'm going to run this. And now, without me even having to know how the API was written, I said what I wanted done and it changed the code. And there you go. First, it's running it on that very first model, gpt40.

11. Evaluation and AI-assisted Evaluation

Short description:

gpt40 and DeepSeek are different models used for chat completion and reasoning respectively. The AI-assisted evaluation process involves training one AI to answer customer questions and another AI to grade the responses. The grading is done using evaluators, such as the custom evaluator for coherence, which assesses the response based on a scale of one to five. A new custom evaluator can be created, such as the Emojiness metric, which evaluates the number of emojis in a response.

And for what it's worth, gpt40 is a chat completion model, but DeepSeek is a reasoning model. So gpt40 is already back. Here are the answers because you just ask for it. I'm done. Reasoning is still thinking through and saying, wait, let me understand the nuance of this question. And I'm going to come back. And there it did. Right? Now, immediately, this is great for me because with one prompt, I was able to run my app question on multiple models, see it, and I'm looking at it going, you know what? I think I'm going to use gpt40. Pretty nice, right?

So now next, that was productivity and ideation. How about evaluation? How does evaluation work? Now, by default, we use something called AI-assisted evaluation, which is already exposed to AI. Let me explain. When you do an app with AI, you're basically kind of writing a prompt and you're testing it manually, right? You write this prompt, you check out for the responses and so on. But when you want to test it against a large number of inputs, you need to scale it. How are you going to scale it? Natural language means that it could be anything. You scale it by creating another AI to grade the first one. They call it LLM as a judge. And so effectively what you've got is you first train one AI to answer customer questions, and then you've trained another AI to grade the first AI. And the way it does that, and I'm going to show that to you in a second, we use a technology called Prompty, but there are other ways for you to do it, is it writes what's called an evaluator. And in here, let me see if I can find that folder for you for just a second. It'll be under source. We'll take a look at what that looks like. So under here, under evaluators, I have a custom evaluator for coherence. So coherence is saying, hey, the response coming back, I want you, my AI, to use this prompt template, take the response from my chat AI, and I want to use these instructions to grade it. And the instructions are saying grade it on a scale of one to five. And here are examples of what a one looks like, what a two looks like. Use this to figure out what to grade it and give me a grading. And so if I look at this, I have an example of a custom evaluator. But I am somebody who wants to write a new custom evaluator for my app. So what I'm going to say is, hey, I want to have a new metric called the Emojiness. And what I want you to do is I want you to take this coherence as an example, and I want you to create a new evaluator for me that is going to build a rating one to five, that sees how many emojis there are in that particular response.

12. AI-assisted Evaluation and Custom Evaluator

Short description:

The AI-generated text was evaluated using an AI-assisted evaluation process. Initially, the evaluation rated the text as one, indicating the absence of emojis. However, after adding emojis to the response, the evaluation rated it as five. A custom evaluator was created to assess the responses using AI.

And see, it built that for me. So let's go ahead and look at this. And you can see it took that example that it had, and it actually came up with a similar text, like a similar way of thinking. And now if I run this, it's coming back to me with a rating of one.

And let's see if it's right. It says, OK, a rating of one means there were no emojis. And apparently there were no emojis in this. So let's check this out. I'm actually going to wrap this so we can see all the thing. And so you can see in the sample question, oh, there were actually a lot of emojis. So let's try this again. Oh, OK, this was in the sample question, and that's the context, but it wasn't in the answer. The answer did not have any emojis. That's why it rated it. But now let's see if we can put these emojis in the answer and try it again. And this time, it gave me a five because now it's able to use the thing that it came up with. I just wrote a custom evaluator to evaluate my answers, and I did it with AI.

Check out more articles and videos

We constantly think of articles and videos that might spark Git people interest / skill us up or help building a stellar career

Building a Voice-Enabled AI Assistant With Javascript
JSNation 2023JSNation 2023
21 min
Building a Voice-Enabled AI Assistant With Javascript
Top Content
This Talk discusses building a voice-activated AI assistant using web APIs and JavaScript. It covers using the Web Speech API for speech recognition and the speech synthesis API for text to speech. The speaker demonstrates how to communicate with the Open AI API and handle the response. The Talk also explores enabling speech recognition and addressing the user. The speaker concludes by mentioning the possibility of creating a product out of the project and using Tauri for native desktop-like experiences.
The Ai-Assisted Developer Workflow: Build Faster and Smarter Today
JSNation US 2024JSNation US 2024
31 min
The Ai-Assisted Developer Workflow: Build Faster and Smarter Today
Top Content
AI is transforming software engineering by using agents to help with coding. Agents can autonomously complete tasks and make decisions based on data. Collaborative AI and automation are opening new possibilities in code generation. Bolt is a powerful tool for troubleshooting, bug fixing, and authentication. Code generation tools like Copilot and Cursor provide support for selecting models and codebase awareness. Cline is a useful extension for website inspection and testing. Guidelines for coding with agents include defining requirements, choosing the right model, and frequent testing. Clear and concise instructions are crucial in AI-generated code. Experienced engineers are still necessary in understanding architecture and problem-solving. Energy consumption insights and sustainability are discussed in the Talk.
AI and Web Development: Hype or Reality
JSNation 2023JSNation 2023
24 min
AI and Web Development: Hype or Reality
Top Content
This talk explores the use of AI in web development, including tools like GitHub Copilot and Fig for CLI commands. AI can generate boilerplate code, provide context-aware solutions, and generate dummy data. It can also assist with CSS selectors and regexes, and be integrated into applications. AI is used to enhance the podcast experience by transcribing episodes and providing JSON data. The talk also discusses formatting AI output, crafting requests, and analyzing embeddings for similarity.
The Rise of the AI Engineer
React Summit US 2023React Summit US 2023
30 min
The Rise of the AI Engineer
Top Content
Watch video: The Rise of the AI Engineer
The rise of AI engineers is driven by the demand for AI and the emergence of ML research and engineering organizations. Start-ups are leveraging AI through APIs, resulting in a time-to-market advantage. The future of AI engineering holds promising results, with a focus on AI UX and the role of AI agents. Equity in AI and the central problems of AI engineering require collective efforts to address. The day-to-day life of an AI engineer involves working on products or infrastructure and dealing with specialties and tools specific to the field.
Web Apps of the Future With Web AI
JSNation 2024JSNation 2024
32 min
Web Apps of the Future With Web AI
Web AI in JavaScript allows for running machine learning models client-side in a web browser, offering advantages such as privacy, offline capabilities, low latency, and cost savings. Various AI models can be used for tasks like background blur, text toxicity detection, 3D data extraction, face mesh recognition, hand tracking, pose detection, and body segmentation. JavaScript libraries like MediaPipe LLM inference API and Visual Blocks facilitate the use of AI models. Web AI is in its early stages but has the potential to revolutionize web experiences and improve accessibility.
Code coverage with AI
TestJS Summit 2023TestJS Summit 2023
8 min
Code coverage with AI
Premium
Codium is a generative AI assistant for software development that offers code explanation, test generation, and collaboration features. It can generate tests for a GraphQL API in VS Code, improve code coverage, and even document tests. Codium allows analyzing specific code lines, generating tests based on existing ones, and answering code-related questions. It can also provide suggestions for code improvement, help with code refactoring, and assist with writing commit messages.

Workshops on related topic

AI on Demand: Serverless AI
DevOps.js Conf 2024DevOps.js Conf 2024
163 min
AI on Demand: Serverless AI
Top Content
Featured WorkshopFree
Nathan Disidore
Nathan Disidore
In this workshop, we discuss the merits of serverless architecture and how it can be applied to the AI space. We'll explore options around building serverless RAG applications for a more lambda-esque approach to AI. Next, we'll get hands on and build a sample CRUD app that allows you to store information and query it using an LLM with Workers AI, Vectorize, D1, and Cloudflare Workers.
AI for React Developers
React Advanced 2024React Advanced 2024
142 min
AI for React Developers
Top Content
Featured Workshop
Eve Porcello
Eve Porcello
Knowledge of AI tooling is critical for future-proofing the careers of React developers, and the Vercel suite of AI tools is an approachable on-ramp. In this course, we’ll take a closer look at the Vercel AI SDK and how this can help React developers build streaming interfaces with JavaScript and Next.js. We’ll also incorporate additional 3rd party APIs to build and deploy a music visualization app.
Topics:- Creating a React Project with Next.js- Choosing a LLM- Customizing Streaming Interfaces- Building Routes- Creating and Generating Components - Using Hooks (useChat, useCompletion, useActions, etc)
Vibe coding with Cline
JSNation 2025JSNation 2025
64 min
Vibe coding with Cline
Featured Workshop
Nik Pash
Nik Pash
The way we write code is fundamentally changing. Instead of getting stuck in nested loops and implementation details, imagine focusing purely on architecture and creative problem-solving while your AI pair programmer handles the execution. In this hands-on workshop, I'll show you how to leverage Cline (an autonomous coding agent that recently hit 1M VS Code downloads) to dramatically accelerate your development workflow through a practice we call "vibe coding" - where humans focus on high-level thinking and AI handles the implementation.You'll discover:The fundamental principles of "vibe coding" and how it differs from traditional developmentHow to architect solutions at a high level and have AI implement them accuratelyLive demo: Building a production-grade caching system in Go that saved us $500/weekTechniques for using AI to understand complex codebases in minutes instead of hoursBest practices for prompting AI agents to get exactly the code you wantCommon pitfalls to avoid when working with AI coding assistantsStrategies for using AI to accelerate learning and reduce dependency on senior engineersHow to effectively combine human creativity with AI implementation capabilitiesWhether you're a junior developer looking to accelerate your learning or a senior engineer wanting to optimize your workflow, you'll leave this workshop with practical experience in AI-assisted development that you can immediately apply to your projects. Through live coding demos and hands-on exercises, you'll learn how to leverage Cline to write better code faster while focusing on what matters - solving real problems.
Building Full Stack Apps With Cursor
JSNation 2025JSNation 2025
46 min
Building Full Stack Apps With Cursor
Featured Workshop
Mike Mikula
Mike Mikula
In this workshop I’ll cover a repeatable process on how to spin up full stack apps in Cursor.  Expect to understand techniques such as using GPT to create product requirements, database schemas, roadmaps and using those in notes to generate checklists to guide app development.  We will dive further in on how to fix hallucinations/ errors that occur, useful prompts to make your app look and feel modern, approaches to get every layer wired up and more!  By the end expect to be able to run your own AI generated full stack app on your machine!
Please, find the FAQ here
Free webinar: Building Full Stack Apps With Cursor
Productivity Conf for Devs and Tech LeadersProductivity Conf for Devs and Tech Leaders
71 min
Free webinar: Building Full Stack Apps With Cursor
Top Content
WorkshopFree
Mike Mikula
Mike Mikula
In this webinar I’ll cover a repeatable process on how to spin up full stack apps in Cursor.  Expect to understand techniques such as using GPT to create product requirements, database schemas, roadmaps and using those in notes to generate checklists to guide app development.  We will dive further in on how to fix hallucinations/ errors that occur, useful prompts to make your app look and feel modern, approaches to get every layer wired up and more!  By the end expect to be able to run your own ai generated full stack app on your machine!
Working With OpenAI and Prompt Engineering for React Developers
React Advanced 2023React Advanced 2023
98 min
Working With OpenAI and Prompt Engineering for React Developers
Top Content
Workshop
Richard Moss
Richard Moss
In this workshop we'll take a tour of applied AI from the perspective of front end developers, zooming in on the emerging best practices when it comes to working with LLMs to build great products. This workshop is based on learnings from working with the OpenAI API from its debut last November to build out a working MVP which became PowerModeAI (A customer facing ideation and slide creation tool).
In the workshop they'll be a mix of presentation and hands on exercises to cover topics including:
- GPT fundamentals- Pitfalls of LLMs- Prompt engineering best practices and techniques- Using the playground effectively- Installing and configuring the OpenAI SDK- Approaches to working with the API and prompt management- Implementing the API to build an AI powered customer facing application- Fine tuning and embeddings- Emerging best practice on LLMOps