Let’s Build K.I.T.T. With JavaScript


In the TV series Knight Rider, the concept of an intelligent car was pure sci-fi. Over 40 years later, we now have the tools to bring it to life - purely in the browser. This talk explores combining AI techniques like RAG and function calling with advanced browser APIs (e.g., WebBluetooth) to create a browser-based AI agent with personality, memory, and autonomy, powered by WebAssembly and WebGPU - fully offline and privacy-preserving!

This talk was presented at JSNation 2025; check out the latest edition of this JavaScript conference.

FAQ

Nico is a frontend developer, part of the Google Developer Experts program in web technologies and machine learning and AI, who presented at JS Nation about recreating the iconic car KIT from Knight Rider using JavaScript.

KIT is short for Knight Industries 2000, an AI-powered car from the 1980s TV series Knight Rider, recreated by Nico in JavaScript as part of a side project.

The key features of KIT include natural language communication, intelligence, a personality, a history with Michael Knight, and a physical form, all of which Nico aimed to recreate using JavaScript and machine learning.

Nico uses Transformers.js for processing language, Whisper for transcribing spoken words, Kokoro.js for text-to-speech conversion, and large language models (LLMs) for intelligence and conversation, including prompt engineering for personality.

Nico uses Transformers.js and the WebLLM library to run machine learning models directly in the browser, allowing KIT to process language and interact intelligently.

Vector databases are used to store and retrieve scenes from the Knight Rider series based on semantic similarity, allowing KIT to reference past events and provide contextually relevant responses.

Tool calling is implemented by allowing the application to execute functions based on structured output generated by the LLM, enabling KIT to perform actions like searching for specific scenes.

Nico uses a Lego model of KIT with an Arduino microcontroller and Bluetooth low energy communication to control the car's movements via the browser.

Nico undertook the project for fun and to explore the capabilities of JavaScript and machine learning, enjoying the process of learning and building new things.

Nico faced challenges such as ensuring function calling consistency with smaller models, parsing screenplay data for vector databases, and managing performance constraints with available hardware.

Nico Martin
24 min
12 Jun, 2025

Video Summary and Transcription
Speaker introduces a side project involving JavaScript and the recreation of an iconic car at JS Nation. Nico, a frontend developer, explains the concept of KIT from Knight Rider and its human-like features, emphasizing the use of machine learning in building it. Using Transformers.js involves defining tasks and models, with options like Whisper for transcribing speech to text and Kokoro.js for text-to-audio conversion. Adding intelligence through LLMs enhances the capabilities of the pipeline. Models small enough to run on device can be used in the browser via the WebLLM library: creating an engine, starting new conversations, and streaming replies token by token. Finally, Kit gets a reasoning process for problem solving, built mostly for the fun of learning and building with LLMs.

1. Introduction to Side Project

Short description:

Speaker introduces a side project involving JavaScript and iconic car recreation at JS Nation.

Okay. So, hi everyone. So cool to be here at JS Nation and to end the first day with a talk about my latest side project, where I try to recreate the maybe most iconic car in television history in JavaScript. But before we start, I have one little thing. Yes, I have a lot of demos in my talks, and yes, you are allowed to clap. And I'm telling you this because I'm Swiss, and in Switzerland we really, really like to get permission before we show emotions. And if you don't feel like clapping, you can go to yay.nico.de. I see it's already working, and you can launch some fireworks in my slides. I built this for JS Nation. I've never tried it before, but hey, what could go wrong? Yeah.

2. Exploring KIT and Machine Learning

Short description:

Nico, a frontend developer, explains the concept of KIT from Knight Rider and its human-like features, emphasizing the use of machine learning in building it.

But who am I? My name is Nico. I'm a frontend developer or as we learned today, I'm a frontend. I mean, I can't turn it off. I can't turn it off if it's too much. Yeah. I'm also part of the Google Developer Experts program in web technologies and also machine learning and AI, which means that I try to bridge the gap between on-device AI and the browser.

Let's start with a very simple question. Who is KIT? So KIT is short for Knight Industries 2000, and it's this advanced AI-powered car from the 1980s TV series Knight Rider. The whole series is about Michael Knight, who tries to solve cases and fight the bad guys, and KIT, this sophisticated AI supercomputer that helps him. Maybe we should also rephrase the question: what is KIT? And in the 1980s it was pretty simple. KIT is just a very advanced computer doing very advanced computer things. Simple.

Then we learned about machine learning, about artificial neural networks, so maybe KIT is just a huge artificial neural network that learns by changing the connections between those neurons. But now in the age of AI, I have a simpler explanation. So maybe KIT is just an AI agent that connects different services with a decision-making process. Sounds good. Let's build it. If you want to build it, you first need to dig into the features. What are the features that make KIT human-like? First of all, we can talk to it in natural language. It is intelligent, so it can process information. It has a personality, it's not just a soulless machine. It has a history, so Michael Knight and KIT, they lived through so many adventures, so they can talk about all that history. And last but not least, it has a body, so it has a physical form that we can interact with. Now, if you want to build that, and I'm a front-end engineer, so naturally I want to see how far I can go in the browser. If you want to build that, we need a lot of machine learning. And as we learned today, we can actually use machine learning in the browser. And there's one library that I use quite a lot. We heard about that already, and that is Transformers.js. I will keep this part very short. So in the end, if you want to use Transformers.js, we can basically import this pipeline function from Transformers.js.

3. Utilizing Transformers.js for Tasks

Short description:

Using Transformers.js involves defining tasks and models, with options like Whisper for transcribing speech to text and Kokoro.js for text to audio conversion. Adding intelligence through LLMs, including closed and open models, enhances the capabilities of the pipeline.

We can then define a task and also the model that we want to use. And then we can basically pass anything through this pipe, it will run it through the model, and we get the output back. So the input type and the output type heavily depend on the use.

How can we use Transformers.js to solve our first problem? There are two models we can use. The first one is Whisper, allowing us to transcribe spoken words into text. On the other hand, Kokoro.js is an abstraction layer on top of Transformers.js for converting text into audio blobs.
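For illustration, a minimal sketch of those two building blocks, not Nico's actual code. Package names, model IDs, and the voice name are assumptions, and the audio URL is just a placeholder:

```js
// A minimal sketch, assuming the @huggingface/transformers and kokoro-js packages;
// model IDs and the voice name are assumptions, the audio URL is a placeholder.
import { pipeline } from '@huggingface/transformers';
import { KokoroTTS } from 'kokoro-js';

// Speech to text: a Whisper model behind the generic pipeline() helper.
const transcriber = await pipeline(
  'automatic-speech-recognition',
  'Xenova/whisper-tiny.en'
);
const { text } = await transcriber('https://example.com/recording.wav');

// Text to speech: Kokoro.js sits on top of Transformers.js and returns audio.
const tts = await KokoroTTS.from_pretrained('onnx-community/Kokoro-82M-v1.0-ONNX', {
  dtype: 'q8',
});
const audio = await tts.generate(text, { voice: 'af_heart' });
audio.save('reply.wav'); // in the browser you would rather turn this into a Blob and play it
```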

Adding intelligence to the pipeline involves utilizing LLMs, large language models like GPT-4, Claude, Gemini, and more. Different types of LLMs, including closed models that run on server farms and open models with open-source weights and architectures, offer various options for enhancing intelligence in the pipeline.

4. Enhancing Pipeline Intelligence with LLMs

Short description:

Adding intelligence through LLMs enhances the capabilities of the pipeline. LLMs include closed models like GPT-4, Claude, and Gemini, and open models like Meta with Llama, Microsoft with Phi, and Google with the Gemma series.

So let's see how that works. I have this little pipeline here. So I can now say, for example, hey, everyone, I'm right now at JS Nation, and it's an amazing conference. Let's hope this works. Hey, everyone, I'm right now at JS Nation, and it's an amazing conference. Do you hear the... thanks. Do you hear the fireworks all the time? Okay, that's annoying. That's not like that. Okay, that works, but it's actually not very intelligent.

So how can we add intelligence to that pipeline? Now, the closest thing we have to intelligence is LLMs. So we have large language models: we can add some text, and it will output some text. There are quite a lot of LLMs that we can use right now. There's the whole range of GPT-4, Claude, and Gemini. So we have closed models that run in some server farms and that we can use via an API or through their chat interface. There are also smaller models, like Gemini Nano or GPT-4o mini, which are small enough that we can actually also run them on device, but they are all closed.

There's also another range of models, and that's open models. And open means that the trained weights and also the architecture are open source, so we are allowed to use them in our own systems. There's Meta with Llama, there's Microsoft with Phi, there's Google with the Gemma series. And there are also small open models like Phi-3, Gemma 3, and Qwen 3. And there are even open-source models: there's SmolLM2, which is actually, as far as I know, the only truly open-source model, because the training data that Hugging Face used to train it is open source as well. Now, with models that are small enough that we can run them on device, and that we're also allowed to use on device, we can also run them in the browser.

5. Customizing AI Outputs with System Prompts

Short description:

Models small enough to run on device can be used in the browser via the WebLLM library: creating engines and new conversations, and streaming replies token by token for efficiency. Utilizing prompt engineering to customize AI outputs with precise prompts and context.

Now, with models that are small enough that we can run them on device, and that we're also allowed to use on device, we can also run them in the browser. There is a library that I use for that called WebLLM. With WebLLM, we can basically create an engine, and we can then create a new conversation, where we pass an array of messages. And then we can either wait for the full reply, or we can use streaming, where tokens are added to the reply as they are generated.
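For illustration, a minimal sketch assuming the library is WebLLM (@mlc-ai/web-llm) and using a Gemma 2 model ID from its prebuilt list; both are assumptions rather than the exact setup from the talk:

```js
// A minimal WebLLM sketch (assumed library and model ID, not Nico's exact setup).
import { CreateMLCEngine } from '@mlc-ai/web-llm';

// Create the engine: this downloads the weights and compiles them for the GPU.
const engine = await CreateMLCEngine('gemma-2-2b-it-q4f16_1-MLC', {
  initProgressCallback: (progress) => console.log(progress.text),
});

const messages = [
  { role: 'system', content: 'You are a helpful assistant running fully in the browser.' },
  { role: 'user', content: 'Introduce yourself in one sentence.' },
];

// Option 1: wait for the full reply.
const result = await engine.chat.completions.create({ messages });
console.log(result.choices[0].message.content);

// Option 2: stream the reply, token by token, as it is generated.
const stream = await engine.chat.completions.create({ messages, stream: true });
let reply = '';
for await (const chunk of stream) {
  reply += chunk.choices[0]?.delta?.content ?? '';
}
console.log(reply);
```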

Let's try that. So we have the same pipeline as before, but we have this new processing step. Hey there, my name is Nico, and we are right now at JS Nation. It's an amazing conference and we are on stage right now. So could you quickly introduce yourself to the audience? Hello, it's good to be here. Smiling face with smiling eyes. JS Nation is amazing. Looking forward to connecting with all the incredible attendees. Microphone.

Okay, that works. Thank you. But in the end, we just created another random chatbot, because we are now talking to some random Gemma LLM from Google. And I wanted to talk to Kit, the car from Knight Rider. So what could we do about that? The easiest way to fine-tune or customize the output of an AI model is to use prompt engineering. And prompt engineering is the process of crafting precise and effective input prompts to optimize the output of AI models. What does that mean in our case?

So here I have the same pipeline as before, but now I have a different system prompt. And in the system prompt, I tell it who it is: it is Kit from Knight Rider. I give it context: it's speaking to Nico. I also have a summary of Kit's tone. And I also have guardrails, because all those smileys, asterisks and all of that, they don't work well in that setup. So let's try the exact same thing with a different system prompt. Hey there, we are right now at JS Nation.
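A minimal sketch of what such a system prompt setup could look like; the wording is invented for illustration, and engine is the assumed WebLLM engine from the earlier sketch:

```js
// An illustrative system prompt (invented wording, not Nico's actual prompt);
// `engine` is the assumed WebLLM engine created above.
const systemPrompt = [
  'You are KIT (Knight Industries 2000), the AI car from the series Knight Rider.',
  'You are speaking with Nico, who is presenting you on stage at a conference.',
  'Tone: calm, precise, a little dry humour, always loyal and protective of Michael.',
  'Rules: reply in plain sentences only. No emojis, no asterisks, no markdown.',
].join('\n');

const result = await engine.chat.completions.create({
  messages: [
    { role: 'system', content: systemPrompt },
    { role: 'user', content: 'Could you quickly introduce yourself to the audience?' },
  ],
});
console.log(result.choices[0].message.content);
```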

6. Leveraging Memories and Vector Databases

Short description:

Introducing oneself at a conference, exploring memories with Kit, and leveraging a vector database for scene references.

It's an amazing conference, and we are on stage right now. So could you quickly introduce yourself to the audience? Of course, Nico, let's refine the presentation data. Shall I optimize the delivery for maximum engagement? And as you see, that's the exact same pipeline, exact same model as before. But it's behaving very, very differently.

Let's talk about memories. Because oftentimes, when we interact with each other, we talk about things that happened in our past. And in the case of Kit, there were a lot: whole seasons and episodes full of memories of things that they lived through. My idea was: how can I talk to Kit about those memories?

The other problem that I have is: I have a question about something that happened in the past, and now I want my LLM, Kit, robot, whatever, to know what scene I am referring to. The solution to that is called a vector database. IndexedDB is not per se a vector database, but every database can be one if you store the text and its vector representation.

7. Implementing Vector Search Functionality

Short description:

Writing a Playwright script to scrape screenplays, using vector representations for semantic similarity, and implementing tool calling in the pipeline.

You then need to write a little Playwright script that scrapes all the screenplays in plain text. Then you will figure out that those screenplays were written in pre-World-Wide-Web times, so you need to write your own parser that parses the script into structured JSON.

Now, what is a vector representation? That's basically just a numeric vector, an array of numbers, and it is generated by another AI model from the category of feature-extraction models. In my case, I am using the all-MiniLM-L6-v2, which is trained for semantic similarity.
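For illustration, a minimal feature-extraction sketch with Transformers.js; the exact model ID ('Xenova/all-MiniLM-L6-v2') is an assumption based on the model named above:

```js
// A minimal feature-extraction sketch; the model ID is an assumption.
import { pipeline } from '@huggingface/transformers';

const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

// Mean-pooled, normalized sentence embedding: a plain array of 384 numbers.
const output = await extractor('KIT and Michael chase a truck through the desert', {
  pooling: 'mean',
  normalize: true,
});
const embedding = Array.from(output.data);
console.log(embedding.length); // 384
```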

The other problem is now I somehow need to call this vector search function in my pipeline. The solution to that is a concept called tool calling.

8. Optimizing Vector Search Integration

Short description:

Analyzing IndexedDB as a vector database, using all-MiniLM-L6-v2 for semantic similarity, and employing an LLM summarizer for efficient plot extraction and vector search integration.

The problem is that IndexedDB is not per se a vector database, because in my IndexedDB I basically just have the whole scene and some metadata. But every database is a vector database if you store the thing, the text, and its vector representation. In my case, I am using the all-MiniLM-L6-v2, which is trained for semantic similarity. I can pass any text into the extractor, and it will return an array of numbers in the end.

The other problem I have is that putting the whole episode or scene into one embedding is not ideal for vector search due to dilution. My solution involves using an LLM summarizer to extract main plot lines efficiently. The process involves inputting the scene, determining expected plots, providing prompts, and strict formatting of output. This detailed summarization process on my MacBook involved processing millions of tokens to generate the necessary output for vector search functionality.
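A hedged sketch of what that per-scene summarization step might look like; the prompt wording is invented, and engine is the assumed WebLLM engine from earlier:

```js
// A hedged sketch of the per-scene summarization step (invented prompt wording);
// `engine` is the assumed WebLLM engine from the earlier sketch.
async function summarizeScene(sceneText) {
  const result = await engine.chat.completions.create({
    messages: [
      {
        role: 'system',
        content:
          'Summarize the main plot points of the following scene. ' +
          'Output one plot point per line as a single short sentence. ' +
          'No numbering, no headings, no extra commentary.',
      },
      { role: 'user', content: sceneText },
    ],
  });
  // One embedding-friendly summary string per plot line.
  return result.choices[0].message.content
    .split('\n')
    .map((line) => line.trim())
    .filter(Boolean);
}
```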

Now, with an array of summaries and embeddings for scenes, the vector search function can operate effectively. Creating vector embeddings for queries, comparing them with the summaries in IndexedDB, and calculating cosine similarity leads to relevant scene recommendations. To integrate this function into the pipeline, a concept called tool calling is employed. The application facilitates function execution based on prompts, enabling interaction between the LLM and the application for seamless processing of structured outputs.

9. Integrating Vector Search Function

Short description:

Creating vector search function with summaries and embeddings, utilizing tool calling concept for function execution and output processing.

It did work. It took me a couple of days. It also crashed a couple of times, but I was able to do that on my MacBook. In the end, I ended up having something like this. Now I have an array of summaries for each scene and an array of embeddings for those summaries. That also allowed me to create this vector search function, where I have the question, the query as an input.

I will then create the vector embedding of that query. I will then go through the IndexedDB, and I will calculate the cosine similarity between the query and the summary. I will then return the four scenes with the highest similarity between the summary and the query. The other problem is that now I somehow need to call this vector search function in my pipeline.
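Put together, that search might look roughly like this minimal sketch; it assumes each record loaded from IndexedDB (for example via getAll()) looks like { sceneId, summary, embedding } and reuses the extractor from the feature-extraction sketch above:

```js
// A minimal vector-search sketch; record shape and helper names are hypothetical,
// `extractor` is the feature-extraction pipeline from the earlier sketch.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function vectorSearch(query, records, topK = 4) {
  const output = await extractor(query, { pooling: 'mean', normalize: true });
  const queryEmbedding = Array.from(output.data);

  return records
    .map((record) => ({ ...record, score: cosineSimilarity(queryEmbedding, record.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK); // the four most similar scenes
}
```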

The solution to that is a concept called tool calling. The concept behind tool calling is that an LLM cannot execute a function, but the application around it can. The LLM can detect when a function should be called. It can then generate structured output, could be text, XML, JSON, or whatever. The application can then execute the function, return the output to the LLM, and the LLM can then work with that output.

10. Utilizing System Prompts for Tool Calling

Short description:

Creating tool calling function with system prompts and user queries for function execution and response processing.

There are libraries to do that, but I wanted to see how far I can go with just basic text in, text out. I created this little tool-calling function where I have a very big system prompt. In the system prompt, I tell the LLM: if you want to call a function, you can use this XML snippet called function_call, with the name of the function and the query. I then have some instructions and some examples, and then I basically just have the user query as the question.
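A hedged sketch of what that text-in, text-out tool-calling loop could look like; the XML tag, the search_episode tool name, and the regex are assumptions based on the description, and engine is again the assumed WebLLM engine:

```js
// A hedged sketch of plain-text tool calling; tag names, tool names, and the
// regex are assumptions. `engine` is the assumed WebLLM engine from earlier.
const toolSystemPrompt = [
  'You can use tools. To call one, output exactly:',
  '<function_call name="search_episode"><query>what to search for</query></function_call>',
  'Available tools: search_episode (finds scenes from the series by description).',
  'If no tool is needed, answer directly.',
].join('\n');

async function askWithTools(question, tools) {
  const messages = [
    { role: 'system', content: toolSystemPrompt },
    { role: 'user', content: question },
  ];
  let result = await engine.chat.completions.create({ messages });
  let text = result.choices[0].message.content;

  // Did the model ask for a tool?
  const match = text.match(
    /<function_call name="(.+?)"><query>([\s\S]+?)<\/query><\/function_call>/
  );
  if (match && tools[match[1]]) {
    const toolOutput = await tools[match[1]](match[2]);
    // Feed the tool result back into the conversation and let the LLM answer with it.
    messages.push({ role: 'assistant', content: text });
    messages.push({ role: 'user', content: `Result of ${match[1]}:\n${toolOutput}` });
    result = await engine.chat.completions.create({ messages });
    text = result.choices[0].message.content;
  }
  return text;
}
```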

So, let's see how that works. Let's head over to kit.niko.dev. Then we can wake it up. There we go. And then we can ask something basic. So, whenever I press the space bar, it will listen. Good morning, Kit, how are you? Good morning, Niko. I'm well, thank you. How can I assist you today? So, that's a pretty basic input. But let's try something more special. Let's, for example, ask, listen, do you remember the connection between Sonny and the deadly poison gas exchange and did something tragic happen back then?

I need to look up the episode to provide you with the correct information. I remember a tragic incident occurred during an exchange of poison gas between Sonny and Fidel. A canister was accidentally discharged, killing a bystander named Laurie. That event left Michael deeply affected. Okay. Thank you. So, what you see here is that I have a very big system prompt. In the system prompt, I have all the tools.

11. Integrating Kit's Tools and Car Control

Short description:

Exploring tool integration through the system prompt, Bluetooth communication for car control via the browser, and a microcontroller board for controlling the car's speed.

First of all, I have Kit's tone and all of that, and then I have a list of all the tools it can use. Now I have my question, and then it generates this little XML snippet. My application will then parse the response and figure out: oh, there's a search-episode function, I need to call that. It will then return the output of that function as a new message in the conversation, and Kit can then use that to generate a response with all the context that it needs to actually do that.

Okay. So, that works. But I also wanted it to have a body. So, in the series, Kit is a Pontiac Firebird, and Pontiac Firebirds start at around 20,000 francs. Believe it or not, that's not in my budget for side projects, and also, I am not a mechanic, so I had no idea how to deal with a real car. But I'm still a kid at heart, so I was able to create this little Lego Kit. Now the question is, how can I communicate with my car through the browser? And I can do that using an API called Web Bluetooth, a browser API that allows me to communicate with Bluetooth Low Energy devices.

On the car, I basically have a little Arduino microcontroller board that is connected to this LED strip, to the wheels, and then I have two batteries, one for the wheels and one for the microcontroller board. And on the board, I then have a very small C++ script that exposes one BLE server with one BLE service with one BLE characteristic, and that characteristic has two bytes. And one byte controls the speed of the wheels on the left side, and one byte controls the speed of the wheels on the right hand side. In my application, I can then connect to the device, connect to the characteristic, change the value of those bytes, which then changes the speed of the car.
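A minimal Web Bluetooth sketch of that setup, with placeholder UUIDs (the real ones live in the Arduino sketch) and hypothetical helper names:

```js
// A minimal Web Bluetooth sketch: one BLE service with one characteristic
// holding two bytes (left and right wheel speed). UUIDs are placeholders.
const SERVICE_UUID = '0000ffe0-0000-1000-8000-00805f9b34fb';        // placeholder
const CHARACTERISTIC_UUID = '0000ffe1-0000-1000-8000-00805f9b34fb'; // placeholder

let speedCharacteristic;

async function connectToCar() {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: [SERVICE_UUID] }],
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService(SERVICE_UUID);
  speedCharacteristic = await service.getCharacteristic(CHARACTERISTIC_UUID);
}

// One byte per side, e.g. 0 = stop, 255 = full speed.
async function setSpeed(left, right) {
  await speedCharacteristic.writeValue(new Uint8Array([left, right]));
}

// await connectToCar();
// await setSpeed(255, 255); // full speed forward
```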

12. Implementing Kit Reasoning and Fun

Short description:

Exploring Kit implementation with reasoning process for problem-solving and fun in learning and building using LLMs.

Okay, I have one minute left, so let's see how that works. Let's turn it on. Now you see it is blinking, and I can then connect to that. There we go. Beautiful. And I can now use arrow keys to control that. But using arrow keys is quite boring, and I wanted to see if I can implement that into my LLM workflow. So let's check that. Hey Kit, I think we need to get out of here. Could you go full speed forward? I'm accelerating to full speed forward to get us out. Actually, I did add a little timeout, so I didn't want it to go off the cliff in the end. But that is a very simple example, and I could have achieved that by just basic pattern recognition where I'm listening for words like full speed and forward, but I wanted to have an actual reasoning process.

So what I can do next is I can, for example, say, Hey Kit, we have a problem. There is a road block in front of us, and on the right-hand side, there's a huge river, so we can't go right either. What should we do next? I need to navigate around the road block. Let's try turning left to find an alternative route. Okay. As you can see, there's an actual reasoning process where it tries to figure out the best tool to solve the problem.

So let's wrap it up. We were actually able to create Kit in pure JavaScript. It listens and talks using Whisper and Kokoro. It is intelligent using the Gemma 2 LLM. It has a personality using just prompt engineering. It has a history using all the scenes from the Knight Rider series and agentic RAG. And it finally has a body using Web Bluetooth and tool calling. So we now know what I did and how I did it, but the real question is why. Why should I spend hours trying to recreate a car from a series that I didn't even watch when I was a kid? And the answer is quite simple: just because it's fun. It's fun to learn stuff. It's fun to build stuff. And now with LLMs, we just have one more tool. We have one more tool in our toolbox that we can use to build amazing stuff.

13. Challenges with Small Models and XML Usage

Short description:

Discussing challenges with small models and the necessity of function parameter lists for efficient LLM usage.

So that was it from my side. Thank you so much for your attention.

The thing is, with the small models: if you use big models like Gemini, Claude, or ChatGPT, then the whole structure works, where you tell the LLM, hey, call this function once, and if you have the results, then call it again. One thing that often happens when I have a small model is that it would first call the function, then it would get the result back, and then it would try to call the same function again. No idea why, just because somehow it is in the context that it has to run this function. Therefore, I had to keep a list of the functions and the parameters that I already called. So if it's the same function with the same parameters, don't call it again, and you're done in the end.
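A hedged sketch of that guard; the helper name and key format are hypothetical:

```js
// A hedged sketch: remember which function calls were already executed in this
// conversation and skip exact repeats. Names and key format are hypothetical.
const calledFunctions = new Set();

function shouldExecute(name, query) {
  const key = `${name}:${query}`;
  if (calledFunctions.has(key)) {
    return false; // same function with the same parameters: don't call it again
  }
  calledFunctions.add(key);
  return true;
}
```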

That's an interesting problem. I find it's a weird thing to go from the big models, where function calling is built in and you don't have to have your own XML thing for that either. That's true, yeah. They're built for doing that, and then we step back to these small models and you've got a lot more work to do. Yeah. But you kind of have to do that when you're on the web. And also the format, whether I should use JSON or XML or just text, was a real question. The thing is, I tried it with JSON before, but if you want to force an LLM to always output the same JSON structure just via a system prompt, then that works sometimes in the first prompt, but already in the second prompt it will forget about it or output invalid JSON. So for me, I figured out it was easier to have just those XML blocks. They're easier to parse. Also, if the structure isn't correct, it's not a big problem; you can still have the rest of the text. So therefore, I did use XML. Fair enough. Fair enough.

14. Exploring LLMs and Enhancing Response Speed

Short description:

Discussing the importance of understanding code by writing it oneself and exploring ways to speed up responses in coding tasks.

So therefore, I did use XML. Fair enough. Fair enough.

How much LLM help did you have to figure out working with the LLMs? None. None? No. Let's admit it, everyone uses LLMs to get some new ideas, to figure out what's wrong. So of course I heavily used LLMs, but it's not vibe coded. I wrote it myself. I think that's important. And when you're learning something, especially when you want to go and talk about it in front of a bunch of people, you can't just have an LLM write all the code for you, because then you don't know what it did. That's the whole point. I wanted to understand how function calling works or tool calling works. That's why I went into the whole structure, XML and all of that. All right.

We've got the last question here. I'm going to pop in. If you'd like to speed up the responses, what would the steps be? Have you got further work planned to make that happen? Well, one problem is that right now I have all those steps: first step one, then step two, then step three. And if I were able to split up the response and, maybe after the first sentence, already start generating the text-to-speech stuff, that would be faster. Or I would maybe talk to Joshua, because his example was way faster than mine. So I need to dig into his code and see what magic he was using. And he also had a newer MacBook. That's a big point. The power does help, doesn't it? Yeah. Yeah. Fair enough. All right. Well, with incredible demos and a fantastic way to finish it up here on the residence track, let's give Nico another big round of applause. Thank you. Thank you.
