Building a Voice-Enabled AI Assistant With JavaScript


In this talk, we'll build our own Jarvis using Web APIs and LangChain. There will be live coding.

This talk was presented at JSNation 2023; check out the latest edition of this JavaScript conference.

FAQ

Tejas Kumar is the founder of a developer relations consultancy that helps developer-oriented companies build and maintain strong relationships with developers through strategic discussions, mentorship, hiring, and hands-on execution.

The consultancy operates on the philosophy of 'DevRel, not DevSell,' emphasizing building genuine relationships with developers rather than trying to sell them products.

The consultancy works on projects that involve building tools and technology for fun and learning, such as creating a voice-activated AI assistant using web APIs and JavaScript.

The project uses the Web Speech API for speech recognition, the OpenAI GPT-3.5 Turbo model for processing text, and the Speech Synthesis API for converting text to speech.

The main goal of the AI assistant project is to have fun while learning about JavaScript and AI, rather than building a product to sell.

The project uses web APIs, JavaScript, Vite for the dev server, and Visual Studio Code for coding.

Although the AI assistant uses non-standard APIs requiring prefixes, it could potentially be used in production with custom grammars and further development.

Tauri is a tool that allows you to create native desktop applications using web technologies like HTML and JavaScript, with Rust as the backend. It can be used to turn the AI assistant into a native desktop app.

You can support Tejas Kumar's DevRel work by following him and engaging with his content.

Tejas Kumar's consultancy helps developer-oriented companies build great relationships with developers through high-level strategic discussions, mentorship, hiring, and hands-on execution such as writing documentation and giving talks.

The demo uses Chrome because the Speech Recognition API works reliably there, although it can be made to work in other browsers with different implementations.

Tejas Kumar
21 min
05 Jun, 2023

Comments

  • GitNation resident
    Hi, your video conference is amazing, thanks a lot for that! Question: how would you associate this voice-enabled AI assistant with an avatar that is lip-synced? Thx again!
Video Summary and Transcription
This Talk discusses building a voice-activated AI assistant using web APIs and JavaScript. It covers using the Web Speech API for speech recognition and the Speech Synthesis API for text to speech. The speaker demonstrates how to communicate with the OpenAI API and handle the response. The Talk also explores enabling speech recognition and addressing the user. The speaker concludes by mentioning the possibility of creating a product out of the project and using Tauri for native desktop-like experiences.

1. Introduction to DevRel and AI

Short description:

Hi, I'm Tejas Kumar, and I run a small but effective developer relations consultancy. We help other developer oriented companies have great relationships with developers through strategic discussions, mentorship, and hands-on execution. Today, we're going to build a voice activated AI assistant using web APIs and JavaScript. The purpose is to have fun while learning and celebrating JavaScript and AI.

Hi, I'm Tejas Kumar, and I run a small but effective developer relations consultancy. What that means is we help other developer oriented companies have great relationships with developers. And we do this through high level strategic discussions, and mentorship, and hiring. Or we do it through low level, hands on execution, like we literally sometimes write the docs, do the talks, etc.

In that spirit, it's important for us to kind of, you know, stay in the loop, and be relevant and be relatable to developers to have great DevRel developer relationships. And sometimes to do that, you just have to build stuff. You see, a lot of conferences these days, are a bunch of DevRel people trying to sell you stuff, and we don't like that. It's DevRel, not DevSell.

And in that spirit, we're not going to sell you anything here, we're just going to hack together. The purpose is to have some fun, to learn a bit, and so on. What we're gonna do in our time together is we're going to build a voice activated AI assistant, like Jarvis from Iron Man, using only web APIs, just JavaScript. We'll use Vite for a dev server, but that's it, this works. We're gonna be using some non-standard APIs that do require prefixes and stuff, but if you really wanted to, you could use it in production. You could supply your own grammars and so on. The point today, though, is not that, it's to have fun while learning a bit and also vibing a little bit. All in the spirit of celebrating JavaScript and AI.

2. Building the AI Assistant Plan

Short description:

We're going to use the Web Speech API for speech to text and the speech synthesis API for text to speech. We'll give the text to OpenAI's GPT 3.5 Turbo model and then speak the response. It's a straightforward process using browser APIs that have been around for a while.

So with that, let's get into it by drawing a plan in tldraw. We're gonna go to tldraw, and what do we want to do? Well, we want to first have speech to text. This is using the Web Speech API. From there, we want to take this text and give it to OpenAI, the GPT 3.5 Turbo model. From there, we want to speak. So text to speech from OpenAI. This is the plan. We want to do this with browser APIs. We want to reopen the microphone after the model talks and have it come back here. This is what we want to do. Let's draw some lines. So it's really just speech to text, an AJAX request and text to speech. This is what we want to do. Not necessarily hard. There are some functions here. This is the Speech Recognition API we're going to use. That's actually a thing introduced in 2013. It's been around for a while. This is the Speech Synthesis API. So both of these exist in JavaScript in your browser runtime. They're just ready to use. What we're going to do is use them to fulfill this diagram.
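Sketched as code, the diagram is a single loop; askOpenAI and speak here are placeholders that only get written later in the session, so treat this as an outline rather than the final code:

    // Rough sketch of the plan: listen, ask the model, speak, then listen again.
    const recognition = new webkitSpeechRecognition();

    recognition.addEventListener('result', async (event) => {
      const text = event.results[0][0].transcript; // speech to text
      const answer = await askOpenAI(text);        // hand the text to GPT-3.5 Turbo
      await speak(answer);                         // text to speech
      recognition.start();                         // reopen the microphone
    });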

3. Building the Speech Recognition Functionality

Short description:

To build ourselves an assistant, we'll use Chrome's speech recognizer. We'll create a new speech recognition object and add an event listener for the result event. When we get a result, we'll extract the transcript from the first attempt. This API may provide multiple guesses, but we'll stick with the first one.

Now, to do that, we're going to use Chrome because this really works in Chrome, but there are ways you can get it to work in other browsers. We're going to open up VS Code and get started. We have a blank page with a button that says hi. If we want to look at the code, index.html is HTML, some head, removing the default margin. There's actually a little black box here just so I know what my face is covering. You can see it if I bring this down a little bit. That's where my face goes. Anyway. And then we have this button that does literally nothing, in index.tsx.

Let's start by recognizing my speech. Chrome has a speech recognizer built in. It has had it since 2013 and it just works. Other browsers have different implementations and so on. But the goal is to build ourselves an assistant. We're not building a product to sell, we're just learning, having fun to build ourselves an assistant. So in that spirit, what we'll do is we'll say const recognition is new SpeechRecognition. And this will predictably fail because you need a vendor prefix in Chrome, but Chrome doesn't use WebKit, Safari uses WebKit. What's the prefix to use this in Chrome? It's WebKit. I don't know why, but there. And this now should give us no error. So it is there. So what do we want to do? We need an event listener. So we'll add an event listener to this called result, or rather listen on the result event. And when we get a result, we are going to const text is the results. Oops, we should maybe get the event. The event's results, the first result and the first attempt of the first result. This API will, if we let it, make many guesses about what I said. And I feel like it's good enough that we just run with the first one. So we'll iterate if we need, but we get the first result and then the first attempt of that result. Transcript.
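Reconstructed from the live coding, that setup looks roughly like this:

    // Chrome exposes the recognizer behind a WebKit prefix; this is non-standard.
    const recognition = new webkitSpeechRecognition();

    recognition.addEventListener('result', (event) => {
      // First result, first attempt: the API can return several guesses, we take guess zero.
      const text = event.results[0][0].transcript;
      console.log('You said:', text);
    });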

4. Communicating with Open AI API

Short description:

And let's console.log that and say you said, text. We have speech to text. Now, let's talk to OpenAI and see what it says. We'll communicate with the OpenAI API by following the API documentation and using a curl request. We'll convert the curl command to a JavaScript fetch request using the GitHub Copilot Labs extension. It's like Copilot but on steroids and allows code conversions. It works pretty well.

And let's console.log that and say you said, text. We also need to start recognizing: recognition.start. Hello. My name is Tejas and I run a DevRel agency. Oh, fantastic. Hello. My name is Tejas and I run DevRel. Close enough. It's working. We have speech to text.

What do we do now? Let's talk to OpenAI. Give it the text and then see what it says. To do that we're going to communicate with the OpenAI API. So to do that we're going to open up the API documentation. We're going to grab a curl request right here. This is an image edit. I want a chat completion.

So I'm going to come here, copy this curl snippet, open Visual Studio Code and create a function const askOpenAI, and this is probably an async function here. And what we'll do is we have a curl. I want to turn this into a fetch. And there's a powerful extension called GitHub Copilot Labs. And this is new. It's like Copilot but on steroids and it allows code conversions and things. It doesn't work very reliably but I figured we could try. So to try that, let's go here. Copilot Labs. I'm going to open that and I'm going to highlight this text and say, using the custom brush, convert this curl command to a JavaScript fetch request. And it's going to spin a bit. Okay, wow. Not bad.

5. Authorization, Body, and Logging

Short description:

We need an authorization, a bearer token, and a request body. The body should be a JSON string with a model and messages. We'll use the gpt-3.5-turbo-0301 model and start with a system prompt introducing Jarvis, Tony Stark's personal AI assistant. We'll keep responses concise. We'll log everything said in a list and map it as user content.

We need an authorization, which contains a bearer token. And we of course also need a body. What's the matter here, right? We need another curly. We need a request body. That's very important. So we'll do comma body. And what does this thing expect? A JSON string, first of all. And it needs a model and messages. So we'll do that. We'll just give it this object here.

I'm going to use gpt-3.5-turbo-0301 just because it's under less load oftentimes. And we'll say, we'll start with a system prompt. So system, and we'll tell it who it is. We'll give it an identity statement. Okay. You are Jarvis, Tony Stark's personal AI assistant. Tony Stark, of course, is also Iron Man. Keep your responses as terse and concise as possible. Okay. So that's an instruction.

Now, what we need to do is keep everything that's said in a log because, you know, ChatGPT is conversational. So every time we recognize speech, we need to append that to a list. Okay. So let's do that. We'll say const thingsSaid is an empty array. And instead of only console.logging the text, we'll thingsSaid.push the text, which is a string. Okay, perfect. Now, we'll just map. So we'll say thingsSaid.map to role user, content text. This is perfect.
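Putting sections 4 and 5 together, a sketch of askOpenAI as described might look like this; OPENAI_API_KEY is a stand-in for the real key and should never ship in client-side code:

    const thingsSaid = [];

    async function askOpenAI() {
      const response = await fetch('https://api.openai.com/v1/chat/completions', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${OPENAI_API_KEY}`, // stand-in for the real key
        },
        body: JSON.stringify({
          model: 'gpt-3.5-turbo-0301',
          messages: [
            {
              role: 'system',
              content:
                "You are Jarvis, Tony Stark's personal AI assistant. Keep your responses as terse and concise as possible.",
            },
            // Spread the conversation log in as user messages.
            ...thingsSaid.map((text) => ({ role: 'user', content: text })),
          ],
        }),
      });
      return response.json();
    }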

6. Asking Open AI and Handling Response

Short description:

And so now we're asking OpenAI. We're pushing it there. We'll console.log the response and see what we get. It's 401 because I don't have a bearer token. Hello, I need a suit immediately. Probably talking to the wrong model. Error, invalid request error. Role, user, content. Spread the request. We got back undefined, but the request passes; the answer is at choices[0].message.content.

And so now we're asking OpenAI. So we're pushing it there. And then we'll add another const: response is await askOpenAI. Oh, this is not an async function, let's fix that. And now that looks good. So we'll just console.log the response and see what we get.

Okay, let's take a look. So, so far, so good. Wait, hello. I need a suit immediately. Okay, well, nothing. It's 401. And that's because I don't have a bearer token. I'm about to show you my API key, please don't copy it. Be a nice person. Okay, it can be expensive if you abuse it. Anyway, so, got him. You saw nothing, you saw nothing, you saw nothing.

Hello, I need a brand new suit of armor immediately. How do I do it? 400. Probably because I'm talking to the wrong model. Let's take a look here. What's the problem? Error, invalid request error. Role, user, content. Okay, is not of type object. Right, I need to spread that. Thank you. Hello, I need a suit of armor immediately. Okay, we got back undefined, but the request passes. What we want is choices, zero, message, content, and that's what we want to console.log from the response.

7. Speaking the Answer Using Speech Synthesis API

Short description:

First, serialize to JSON. Get the answer and speak it using the Speech Synthesis API. Use the speakStringOfText function and set the voice to the desired one.

First of all, let's return this. Serialize this to JSON. And now we need response.choices[0].message.content. Alright, this will be our answer, and then we'll just console.log this answer just to be sure. Right, answer.
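In code, inside the async result handler, that is roughly:

    const response = await askOpenAI();
    const answer = response.choices[0].message.content;
    console.log(answer);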

Okay, let's try this again. I need a suit of armor around the world. What should I call it? Avengers Initiative. Oooh, it's happening. So we have speech to text. We are talking to OpenAI. Now we need text to speech, okay? How can we do this? We can do this using the Speech Synthesis API. This is also just a native web API. Keep in mind, we're writing TypeScript but there's no build tool or anything. This is just straight in the browser.

So let's use Speech Synthesis. So we get the answer, we need to speak the answer. So how do we do this? We'll have a function called speakStringOfText, and what we want to do is const utterance. Exactly, I should have let Copilot write this. Utterance. So a SpeechSynthesisUtterance is an utterance of a string. And what we want to do is, okay, that's pretty basic, but we also want to do some voices. So we'll say const voice is speechSynthesis.getVoices, and we'll just get the first voice. Which is usually the British one, the one that I want. And we'll say utterance.voice is this voice. And then we speak. And then, let's actually just leave it there. And what we'll do is we'll say, you know, speak answer. How much money do I need to build Avengers Tower? That's cool. But it didn't speak it.
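A sketch of speakStringOfText as described here, assuming the first operating-system voice is the one we want:

    function speakStringOfText(text) {
      const utterance = new SpeechSynthesisUtterance(text);
      // getVoices() can be empty until the browser fires 'voiceschanged'; we just take the first OS voice.
      const [voice] = speechSynthesis.getVoices();
      if (voice) utterance.voice = voice;
      speechSynthesis.speak(utterance);
    }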

8. Enabling Speech Recognition and Addressing User

Short description:

To enable speech recognition, a click event needs to be added to the button. This ensures that the browser doesn't randomly speak without user interaction. By assigning an ID to the button and using event listeners, we can start the recognition process. However, the AI assistant may still address the user as Mr. Stark unless specified otherwise through the system prompt.

It didn't speak it because it needs an event. So, what we're going to do is, this is a security consideration. You can't just have things speak to you without a user interaction. You need a click event or something like this.

So, to start listening, we'll add a click event to the button that exists. Just so that the browser isn't protective of the computer just randomly speaking at you. Which can be a bit of a scary experience.

Okay. So, what we'll do is, instead of recognition.start, we'll go back to our button in the HTML. What's the ID? Let's give it an ID. ID is start. And this will now make it a global variable. Isn't that ridiculous? So, what we'll do is, instead of recognition.start, we'll do start.addEventListener on click, and then we'll recognition.start. We'll do this, save. So now, it's not listening by default, but I'll click this and then speak and then it should work.
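In code, because the id="start" on the button makes the element reachable as a global, the wiring is roughly:

    // <button id="start">hi</button> in index.html
    start.addEventListener('click', () => {
      recognition.start();
    });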

Hey Jarvis, how much money is it going to take to build a new car? I'm sorry, Mr. Stark has not provided me with sufficient details to estimate the cost of building a new car. Please provide more information. Why did it talk about Mr. Stark in the third person? It doesn't know that I'm Mr. Stark. Maybe we can, through the system prompt, tell it I'm Mr. Stark. Okay, let's do that. System prompt: you are Jarvis; Tony Stark, of course, is also Iron Man. Your user is Iron Man, or Tony. Let's try this again. Jarvis, what is my favorite color on my soup? I'm sorry, Tony.

9. Closing the Loop and Enabling Conversation

Short description:

We have speech-to-text, we're talking to OpenAI, and now we need text-to-speech. I want it to just be on forever and have a long conversation. Let's close the loop and summarize everything we did. When we finish speaking, we'll resolve the promise. Now, we can start recognition again and have an actual conversation.

I cannot determine your favorite soup color as it is not a standard preference. Thinks on Tony.

Okay, so, it's good. We have speech-to-text, we're talking to OpenAI and now we need text-to-speech. Or rather, we have text-to-speech, but it's not a conversation, like it just stops and then it's done. And then I have to click the button to start speaking again. I want it to just be on forever and just have a long conversation. Okay? Let's close the loop and then summarize everything we did. So, how are we gonna do this? When we finish speaking, so here, what we want to do is, utterance.onEnd, we want to, let's return a new promise. Promise, resolve, and notice how we're not handling errors, that's because I like chaos sometimes. When we finish, when it finishes speaking, we'll resolve the promise. Now, we can await speak and when speak is over, we can start recognition again, and now we can have an actual conversation.
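A sketch of the promise-wrapped version described here, with no error handling, as noted:

    function speakStringOfText(text) {
      return new Promise((resolve) => {
        const utterance = new SpeechSynthesisUtterance(text);
        const [voice] = speechSynthesis.getVoices();
        if (voice) utterance.voice = voice;
        utterance.onend = resolve; // resolve once speech finishes
        speechSynthesis.speak(utterance);
      });
    }

    // Inside the result handler, the loop then becomes:
    //   await speakStringOfText(answer);
    //   recognition.start();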

10. JavaScript Conference and Flight to Amsterdam

Short description:

Hey, Jarvis, what's the greatest JavaScript conference? Some popular conferences include JSConf, ReactConf, and Node Summit. Can I fly to Amsterdam in my suit? Yes, it would take approximately 7 to 8 hours to fly from New York to Amsterdam at supersonic speeds. Thanks, Jarvis. I'm going to close the tab now. We just created a voice-commanded Jarvis-like AI using browser-based JavaScript APIs. The Speech Recognition API varies across browsers and is not yet standard. The Speech Synthesis API uses voices from the user's operating system. Let's browse the final code and identify any mistakes or improvements. We have new WebKit speech recognition that starts when the button is clicked.

So, let's start. Hey, Jarvis, what's the greatest JavaScript conference on Earth? Sorry, Tony, as an AI assistant, I am programmed to provide factual responses. There is no objectively greatest conference as it can vary depending on individual preferences. However, some popular conferences for JavaScript include JSConf, ReactConf, and Node Summit.

Okay, around June 1st in Amsterdam, what is the greatest conference around? The greatest JavaScript conference in Amsterdam around the 1st of June is JavaScript Conference. Jarvis, that's not a real conference. I apologize. Tony, there is no conference around June 1st in Amsterdam that matches your criteria for being the greatest.

Okay, can I still fly to Amsterdam in my suit? How fast? How long is it going to take me to get to Amsterdam from the United States? Yes, you can fly to Amsterdam in your suit. It would take approximately 7 to 8 hours to fly from New York to Amsterdam, traveling at supersonic speeds. Thanks a lot, Jarvis. I'm going to close the tab now, okay? Because I didn't create a stop mechanism. Understood, user. Stay safe! We just did that. We just created a voice-commanded Jarvis-like AI using only browser-based, native JavaScript APIs. The Speech Recognition API uses whatever browser recognizer is available. That's why we used Chrome. It varies from browser to browser. It is not standard yet. It probably will be, although it has been 10 years in the browser and still not standard. So, your mileage may vary. The Speech Synthesis API uses voices from a user's operating system only. You can't give it custom voices, although there are other solutions for that. I would invite you to experiment and play. Let's browse the code one last time, the final code, identify maybe mistakes we made or things we could do better, and then wrap up. Okay, let's go. I have Google here. Let's open up the thing again, that's fine. What did we do? We have new webkitSpeechRecognition. When we click the button, we start the recognition.

11. Final Conversations and Conclusion

Short description:

The user event is important as it prevents the browser from randomly listening and spying on people. We keep an array of things said and feed it to OpenAI for more context. We have a loop to listen, speak, and resolve the promise. We make a fetch request to the OpenAI completions API. This project is less than 50 lines of code and uses only native web APIs. You can create a product out of this and consider using Tauri, a tool for creating native desktop-like experiences using web languages and Rust. Thank you for joining the session and supporting our DevRel work.

This user event is important because you don't want your browser, rather your browser doesn't want to just randomly start listening to things and, you know, spy on people. We keep an array of things said and feed this to OpenAI. Notice, we're making a bit of a mistake because when we get an answer we should actually append this, so things said.push, and this will give the AI more context.
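A sketch of that fix, right after the answer comes back:

    thingsSaid.push(answer);
    // Caveat: the map shown earlier turns every entry into a "user" message; storing
    // { role, content } pairs instead would let the model see its own replies as "assistant".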

This looks good and then we can maybe remove some console.logs, and we have this loop where we start listening, and then once you say something and the machine answers, you start listening again. To speak, we are using the SpeechSynthesisUtterance that just utters some text, and we set the voice to a system voice. This is the default one, we can maybe even change this and see what happens. And this looks good, and then when it ends we resolve the promise so that we can come back and start. Lastly we have a fetch to the OpenAI completions API. This is just a copy-paste and we send all the things said. So this isn't really that hard, it's less than 50 lines of code and we have a voice-activated Jarvis-style assistant using only native web APIs.

Let's have one last conversation with it, in an optimised way, with a different voice, and then wrap up. Ok, let's do it. So, hey Jarvis, what is the coolest thing about Amsterdam, June 1st? Sorry, I am not programmed to provide subjective opinions. Would you like me to look up some interesting events happening in Amsterdam on the first of June? Sure, that sounds good. Sometimes it takes a while. Based on my search, here are some events happening in Amsterdam on June 1st. One such event is the Exly dance festival, a music festival featuring various DJs. Another is the Apple Arts & Culture Festival, featuring a variety of performances and events.

You can also create a product out of this, with the caveats of browser compatibility and so on. You could turn it into an open source project, invite contributions, and actually have something. Second, I would like to recommend the use of an app, or a system, or tool, like Tauri. For those who haven't heard of Tauri, it's a way to create native desktop-like experiences using web languages, HTML, JavaScript, and the back end is then Rust, where you can pass messages between your front end with browser-based technologies and Rust to create performant things. Indeed, everybody is rewriting things in Rust, and they think they're cool because of it. And indeed, Rust is very cool. So you could really make a native desktop app using Tauri and this, and just give people their own Jarvis. I think that's actually pretty cool, especially if it's connected to their own OpenAI account that really knows them. There are many ways you can take this forward, but I'm going to leave it here. One last thing: Tauri.app, if you want to look into that. But I'm going to leave it here. Thank you so much for entertaining this fun little session, and I hope it was meaningful and valuable for the rest of you. If you'd like to support me and our DevRel work, feel free to follow me. And with that, I want to thank you so much for having me, and I hope you enjoy the rest of the conference.

Check out more articles and videos

We constantly curate articles and videos that might spark GitNation people's interest, skill you up, or help you build a stellar career.

Debugging JS
React Summit 2023
24 min
Top Content
Debugging JavaScript is a crucial skill that is often overlooked in the industry. It is important to understand the problem, reproduce the issue, and identify the root cause. Having a variety of debugging tools and techniques, such as console methods and graphical debuggers, is beneficial. Replay is a time-traveling debugger for JavaScript that allows users to record and inspect bugs. It works with Redux, plain React, and even minified code with the help of source maps.
A Framework for Managing Technical Debt
TechLead Conference 2023
35 min
Top Content, Premium
Today's Talk discusses the importance of managing technical debt through refactoring practices, prioritization, and planning. Successful refactoring requires establishing guidelines, maintaining an inventory, and implementing a process. Celebrating success and ensuring resilience are key to building a strong refactoring culture. Visibility, support, and transparent communication are crucial for addressing technical debt effectively. The team's responsibilities, operating style, and availability should be transparent to product managers.
A Practical Guide for Migrating to Server Components
React Advanced 2023
28 min
Top Content
React query version five is live and we'll be discussing the migration process to server components using Next.js and React Query. The process involves planning, preparing, and setting up server components, migrating pages, adding layouts, and moving components to the server. We'll also explore the benefits of server components such as reducing JavaScript shipping, enabling powerful caching, and leveraging the features of the app router. Additionally, we'll cover topics like handling authentication, rendering in server components, and the impact on server load and costs.
The AI-Assisted Developer Workflow: Build Faster and Smarter Today
JSNation US 2024
31 min
Top Content
AI is transforming software engineering by using agents to help with coding. Agents can autonomously complete tasks and make decisions based on data. Collaborative AI and automation are opening new possibilities in code generation. Bolt is a powerful tool for troubleshooting, bug fixing, and authentication. Code generation tools like Copilot and Cursor provide support for selecting models and codebase awareness. Cline is a useful extension for website inspection and testing. Guidelines for coding with agents include defining requirements, choosing the right model, and frequent testing. Clear and concise instructions are crucial in AI-generated code. Experienced engineers are still necessary in understanding architecture and problem-solving. Energy consumption insights and sustainability are discussed in the Talk.
Power Fixing React Performance Woes
React Advanced 2023
22 min
Top Content
This Talk discusses various strategies to improve React performance, including lazy loading iframes, analyzing and optimizing bundles, fixing barrel exports and tree shaking, removing dead code, and caching expensive computations. The speaker shares their experience in identifying and addressing performance issues in a real-world application. They also highlight the importance of regularly auditing webpack and bundle analyzers, using tools like Knip to find unused code, and contributing improvements to open source libraries.
Monolith to Micro-Frontends
React Advanced 2022
22 min
Top Content
Microfrontends are considered as a solution to the problems of exponential growth, code duplication, and unclear ownership in older applications. Transitioning from a monolith to microfrontends involves decoupling the system and exploring options like a modular monolith. Microfrontends enable independent deployments and runtime composition, but there is a discussion about the alternative of keeping an integrated application composed at runtime. Choosing a composition model and a router are crucial decisions in the technical plan. The Strangler pattern and the reverse Strangler pattern are used to gradually replace parts of the monolith with the new application.

Workshops on related topic

AI on Demand: Serverless AI
DevOps.js Conf 2024
163 min
Top Content
Featured Workshop, Free
Nathan Disidore
In this workshop, we discuss the merits of serverless architecture and how it can be applied to the AI space. We'll explore options around building serverless RAG applications for a more lambda-esque approach to AI. Next, we'll get hands on and build a sample CRUD app that allows you to store information and query it using an LLM with Workers AI, Vectorize, D1, and Cloudflare Workers.
AI for React Developers
React Advanced 2024
142 min
Top Content
Featured Workshop
Eve Porcello
Knowledge of AI tooling is critical for future-proofing the careers of React developers, and the Vercel suite of AI tools is an approachable on-ramp. In this course, we’ll take a closer look at the Vercel AI SDK and how this can help React developers build streaming interfaces with JavaScript and Next.js. We’ll also incorporate additional 3rd party APIs to build and deploy a music visualization app.
Topics:
- Creating a React Project with Next.js
- Choosing an LLM
- Customizing Streaming Interfaces
- Building Routes
- Creating and Generating Components
- Using Hooks (useChat, useCompletion, useActions, etc.)
Vibe coding with Cline
JSNation 2025
64 min
Featured Workshop
Nik Pash
The way we write code is fundamentally changing. Instead of getting stuck in nested loops and implementation details, imagine focusing purely on architecture and creative problem-solving while your AI pair programmer handles the execution. In this hands-on workshop, I'll show you how to leverage Cline (an autonomous coding agent that recently hit 1M VS Code downloads) to dramatically accelerate your development workflow through a practice we call "vibe coding" - where humans focus on high-level thinking and AI handles the implementation.
You'll discover:
- The fundamental principles of "vibe coding" and how it differs from traditional development
- How to architect solutions at a high level and have AI implement them accurately
- Live demo: Building a production-grade caching system in Go that saved us $500/week
- Techniques for using AI to understand complex codebases in minutes instead of hours
- Best practices for prompting AI agents to get exactly the code you want
- Common pitfalls to avoid when working with AI coding assistants
- Strategies for using AI to accelerate learning and reduce dependency on senior engineers
- How to effectively combine human creativity with AI implementation capabilities
Whether you're a junior developer looking to accelerate your learning or a senior engineer wanting to optimize your workflow, you'll leave this workshop with practical experience in AI-assisted development that you can immediately apply to your projects. Through live coding demos and hands-on exercises, you'll learn how to leverage Cline to write better code faster while focusing on what matters - solving real problems.
Building Full Stack Apps With Cursor
JSNation 2025
46 min
Featured Workshop
Mike Mikula
In this workshop I'll cover a repeatable process on how to spin up full stack apps in Cursor. Expect to understand techniques such as using GPT to create product requirements, database schemas, and roadmaps, and using those in notes to generate checklists to guide app development. We will dive further into how to fix hallucinations/errors that occur, useful prompts to make your app look and feel modern, approaches to get every layer wired up, and more! By the end, expect to be able to run your own AI-generated full stack app on your machine!
Free webinar: Building Full Stack Apps With Cursor
Productivity Conf for Devs and Tech Leaders
71 min
Top Content
Workshop, Free
Mike Mikula
In this webinar I'll cover a repeatable process on how to spin up full stack apps in Cursor. Expect to understand techniques such as using GPT to create product requirements, database schemas, and roadmaps, and using those in notes to generate checklists to guide app development. We will dive further into how to fix hallucinations/errors that occur, useful prompts to make your app look and feel modern, approaches to get every layer wired up, and more! By the end, expect to be able to run your own AI-generated full stack app on your machine!
Working With OpenAI and Prompt Engineering for React Developers
React Advanced 2023
98 min
Top Content
Workshop
Richard Moss
In this workshop we'll take a tour of applied AI from the perspective of front end developers, zooming in on the emerging best practices when it comes to working with LLMs to build great products. This workshop is based on learnings from working with the OpenAI API from its debut last November to build out a working MVP which became PowerModeAI (A customer facing ideation and slide creation tool).
In the workshop there'll be a mix of presentation and hands-on exercises to cover topics including:
- GPT fundamentals
- Pitfalls of LLMs
- Prompt engineering best practices and techniques
- Using the playground effectively
- Installing and configuring the OpenAI SDK
- Approaches to working with the API and prompt management
- Implementing the API to build an AI-powered customer-facing application
- Fine tuning and embeddings
- Emerging best practice on LLMOps