Transformers.js: State-of-the-Art Machine Learning for the Web


Learn about Transformers.js, an innovative JavaScript library for running state-of-the-art machine learning models 100% locally in your browser. With support for nearly 2,000 pretrained models, spanning a wide range of modalities and tasks, Transformers.js empowers developers to seamlessly integrate advanced ML capabilities into their web applications. We'll also explore how we leverage emerging web technologies like WebGPU and WebNN to create interactive, privacy-preserving, and scalable web experiences.

This talk was presented at JSNation 2025. Check out the latest edition of this JavaScript conference.

FAQ

Joshua, a 25-year-old machine learning engineer from South Africa, is the creator of Transformers.js.

Hugging Face is a platform where the machine learning community collaborates on models, datasets, and applications, aiming to democratize machine learning.

Running models in the browser with Transformers.js offers benefits such as improved security and privacy, real-time applications without network latency, lower costs, and high scalability.

Recent developments include support for 155 architectures, integration with WebGPU and WebNN, and a growing community with 1.4 million unique monthly users.

Yes, models can be pre-cached in applications, allowing them to run without fetching from a server.

The main goal of Transformers.js is to bring state-of-the-art pre-trained models directly to the web and browsers, allowing easy integration of machine learning functionality into web applications.

There are around 1.7 million pre-trained models available on the Hugging Face Hub.

Transformers.js supports a variety of tasks including text, vision, audio, and multimodal tasks, with around 155 different architectures across 27 tasks.

Transformers.js leverages web technologies like WebGPU and WebNN to run models in modern browsers, and it is compatible with multiple environments including Node.js and Electron.

The licensing status of models varies. Some models are commercially usable under licenses like Apache 2.0, but it is essential to check the specific license on the model card on the Hugging Face Hub.

Joshua Lochner
27 min
12 Jun, 2025

Video Summary and Transcription
Joshua introduces Transformers.js and Hugging Face, emphasizing community collaboration and pre-trained models. Transformers.js has grown to 1.4 million monthly users and supports 155 architectures. The library's browser-based capabilities offer real-time processing, cost-efficiency, and scalability. Integration enhancements include native WebGPU execution and a React Native implementation. The web ML implementation relies on ONNX Runtime for on-device execution and WebGPU for resource optimization. Browser-based ML applications cover vision, speech recognition, and text-to-speech. Advanced implementations include multimodal applications and educational tools. Interactive AI demonstrations showcase semantic search and conversational AI scenarios. Model licensing, the transition from CommonJS to ECMAScript, and the factors behind model redownloading are also discussed.

1. Introduction to Transformers.js and Hugging Face

Short description:

Joshua introduces Transformers.js and his background as a machine learning engineer. Hugging Face is a platform for collaboration on models, datasets, and applications with a vast community. The platform offers a large number of pre-trained models, datasets, and applications for interaction and exploration.

Hi there, everyone. My name is Joshua and I'm really excited to talk to you today about Transformers.js. So, first up, a little introduction, starting with who am I. Well, I'm a 25-year-old machine learning engineer from South Africa, and one of my main goals when I joined Hugging Face in 2023 was to bring the power of machine learning directly to the web and to your browser. One of the missions at Hugging Face is to democratize good machine learning, and extending our Python libraries and ecosystem to the JavaScript developer community was really important; that is where this project came from. And lastly, I'm very passionate about open source: creator of Transformers.js, GingerJS (I think you see a pattern there), and a few more.

So maybe a quick step back: what is Hugging Face? Well, Hugging Face is a platform where the machine learning community is able to collaborate on models, datasets, and applications. We're a very large and growing community, and if you have any models, datasets, or Spaces (applications, as we like to call them), we'd love you to join and share those demos on our platform. If you want to search for models on the Hugging Face Hub, we have around 1.7 million pre-trained models, with a bunch of filters and tags you can select to narrow your search from the 1.7 million down to something you're really looking for.

It stays up to date, and of course you can select the Transformers.js library tag to see which models are compatible with the Transformers.js library. We also have a large selection of datasets. In this demo, you can see that if you have a dataset you'd like to view, you can go to the dataset viewer and even interact with it in natural language; that gets converted into a query so you can select data from the dataset in an interactive way. And lastly, Spaces, which is what we call our application directory. If you want to, let's say, generate a 3D model from an image, you search for that in the interface and it'll pop up with something you can use. This is where our community really shines: we have a very large community of people who want to showcase their applications and the models they've created, and this is the best way they've been able to do that directly from the browser. We also maintain a large collection of open-source libraries. You may be familiar with the Transformers library, Diffusers, Tokenizers, just to name a few. But today we'll be focusing on one in particular, Transformers.js.

2. Evolution and Growth of Transformers.js

Short description:

Transformers.js is a JavaScript library enabling browser-based use of pre-trained models with ease. The project, fully open source and community-driven, experienced rapid growth to 1.4 million monthly users. Starting from a spam classifier, the library evolved through various versions, now supporting 155 architectures. The community's contributions and engagement are highly appreciated.

So what is Transformers.js? Well, Transformers.js is a JavaScript library that allows you to run state-of-the-art pre-trained models directly in the browser. The goal is to make it extremely easy to use, allowing you to create and add machine learning functionality directly into your web applications with as few as three lines of code. Of course, as I mentioned before, we're fully open source, publishing everything on GitHub, and development is very community-driven. If people would like a feature, they open an issue on GitHub; we love being able to collaborate with the community in that way.
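As a rough illustration of that "three lines of code" claim, a minimal sketch might look like this (the task and example input are mine; the default model is whatever the library ships for that task):

```js
import { pipeline } from '@huggingface/transformers';

// Create a pipeline for a task, then run it on some input
const classifier = await pipeline('sentiment-analysis');
const result = await classifier('I love Transformers.js!');
// e.g. [{ label: 'POSITIVE', score: 0.99 }]
```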

And just another indication of the growth over time: this is the number of unique monthly users of the Transformers.js set of models. We're at around 1.4 million unique monthly users currently, roughly double what we saw around six months ago, and the trend is continuing upwards. We're really grateful for the support and for the people who have been able to create really cool applications with the library. And also a little bit of development timeline for those interested; it's a relatively new project. The origin of the library was a pre-trained spam classifier that I had created in my spare time and wanted to run as a browser extension. Unfortunately, nothing really existed at the time to allow me to do so, so the next logical step was to do it myself and try to get my pre-trained model running in the browser.

The V1 release of Transformers.js happened in March of 2023, and at that point we only supported around five different architectures, but as you'll soon see the number slowly and then quickly started to grow. For V2, the next logical step was a complete rewrite from CommonJS to ECMAScript modules, and at that point, maybe two months later, we were at 19 supported architectures. V3, our largest release yet, introduced WebGPU and WebNN support, and with it we were able to support 119 different architectures. Currently we're at around 155 different architectures, and there are a lot of things planned for the library, which I'll cover shortly. Once again, a huge shout-out to our community all over the world: creating demos, contributing to the library, and posting to social media about what they've created. It's really great to see from our side as well, so thank you. And one more set of slides on statistics: we're at around a million NPM downloads in the last 30 days.

3. Capabilities and Benefits of Transformers.js

Short description:

Transformers.js supports various tasks with 155 different architectures and 1,850 models on the Hugging Face Hub. Integration with different browsers enables running models on Chrome, Edge, and Brave, powers the Firefox AI runtime, and will soon be supported in Safari. Running models in the browser offers benefits like security, real-time processing, cost-efficiency, scalability, and versatility across different runtimes.

At the bottom is the number of jsDelivr CDN requests, for those who don't want to install via NPM. So what can Transformers.js do? We support a variety of tasks, which we like to classify into text, vision, audio, and multimodal tasks, and you can see some of the lists here. Like I mentioned before, we support around 155 different architectures across 27 different tasks, whether it's text classification or, let's say, speech recognition or text-to-speech, those kinds of things. And we have around 1,850 ready-to-use models on the Hugging Face Hub, so you just have to replace the model ID with a model that you like or that you have fine-tuned, and you're able to run it in the browser.

We also have really great integration in different browsers. Chromium-based browsers, such as Chrome, Edge, and Brave, have really good WebGPU support, for example. Transformers.js actually powers the Firefox AI runtime in collaboration with ONNX Runtime. And lastly, Safari, which just a few days ago announced that WebGPU support will be shipping in Safari 26 Beta. That means we'll be able to run Transformers.js models on macOS, iOS, iPadOS, and even visionOS, which is really exciting.

Just a few benefits of why you'd want to run models in the browser. Number one, security and privacy: since none of your data, and especially none of your sensitive sensor data like microphone or camera input, gets sent to a remote server, everything happens locally. Real-time applications are a huge benefit because there's no network latency. Take real-time background removal, for example: you don't want to send a picture of your screen for every single frame, which just does not make sense. Then there are lower costs, both for developers, who don't have to pay for inference or server costs, and for users, who use their own compute, so you don't have to deal with API keys or anything like that. And I think a huge benefit of distributing over the web is that it's highly scalable: all I need to do to share an application with everyone in the audience today is give them a link and they can run it on their device. Another reason, as you may have seen a few days ago, is when OpenAI or ChatGPT was down for quite a few hours; that's another benefit of running on-device. And we also greatly benefit from the versatility of JavaScript. As I mentioned, we're able to run in browsers, but that's only one part of the story: you can run in Node.js, Bun, all these different runtimes.

4. Integration and Execution Enhancements

Short description:

Support for various libraries, frameworks, build tools, and environments. Development of native WebGPU execution for GPU-accelerated applications and a React Native implementation with GPU support. Process overview: model selection, conversion using libraries like Optimum or ONNX Runtime, and execution with ONNX Runtime for versatile machine learning experiences.

You can use your favourite libraries or frameworks, whether it be React, Next.js, Svelte, or Angular, and we have support for different build tools as well. And then, of course, the environments: you can run in a web worker (see the sketch below), as a browser extension, serverless in something like Supabase Edge Functions, or in desktop applications with Electron. Those are all possible.
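As a minimal sketch of the web-worker setup mentioned above (the task, file names, and message shape are my own illustration, not from the talk):

```js
// worker.js — loads the pipeline once and answers messages from the page
import { pipeline } from '@huggingface/transformers';

let classifier = null;

self.onmessage = async (event) => {
  // Lazily create the pipeline on first use so the page stays responsive
  classifier ??= await pipeline('text-classification');
  const result = await classifier(event.data);
  self.postMessage(result);
};
```

```js
// main.js — keep inference off the main thread
const worker = new Worker(new URL('./worker.js', import.meta.url), { type: 'module' });
worker.onmessage = (event) => console.log(event.data);
worker.postMessage('Transformers.js runs great in a worker!');
```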

We're also working on native WebGPU execution in collaboration with ONNX Runtime and ONNX Runtime Web, which will greatly improve support for GPU-accelerated applications, for example in your Electron applications or, let's say, an edge function. Another goal of ours is to be able to implement and run in React Native with GPU support as well. Those are some things we're working on.

A quick slide on how it works. You first take a model that you've trained in Python land, or you can use one of our pre-converted models. There are a few libraries, like Optimum or ONNX Runtime, that you can use to convert your models from PyTorch, JAX, or TensorFlow to the ONNX format. ONNX stands for Open Neural Network Exchange; it's a format that allows for highly interoperable machine learning experiences.

5. Web Application ML Implementation

Short description:

Creating ML pipelines in web applications using ONNX Runtime for model execution on various devices. Introduction to WebGPU as the successor to WebGL, providing a modern API for accelerated graphics and compute. Importance of quantization for resource optimization and considerations for developing web applications, including device features and target hardware. Performance achievements with LLMs on WebGPU, showcasing model speeds on different hardware configurations.

Then you write your JavaScript code; in this case, just a few lines of code to run Whisper. Behind the scenes, it uses ONNX Runtime to run your model with either WebAssembly on the CPU, WebGPU on the GPU, or WebNN on the CPU, GPU, or NPU. So let's see how you would start adding ML to your web applications. Just a few lines of code: number one, you import the library; number two, you create a pipeline; and number three, you run it on some input. You're also able to specify a custom model: as the second parameter to the pipeline function, you can specify the model you'd like to use. Then there are some other options. You have the initialization parameters, where you specify whether you want to run on the GPU (so WebGPU) as well as quantization settings like q4 or fp16; that's in the first step. And in the second step, you add runtime parameters like the maximum number of tokens to generate.
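A minimal sketch of those steps for Whisper, assuming a Whisper checkpoint from the Hub (the model ID and audio URL are placeholders):

```js
import { pipeline } from '@huggingface/transformers';

// Steps 1 + 2: import the library and create a pipeline, with initialization options
const transcriber = await pipeline(
  'automatic-speech-recognition',
  'Xenova/whisper-tiny.en',            // placeholder model ID — swap in any compatible Whisper model
  { device: 'webgpu', dtype: 'q4' },   // run on WebGPU with 4-bit quantized weights
);

// Step 3: run it on some input, with runtime options such as the token budget
const output = await transcriber('https://example.com/audio.wav', { max_new_tokens: 128 });
console.log(output.text);
```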

A quick overview of what WebGPU is: it's meant to be the successor to WebGL, a new web standard that provides a modern API for accelerated graphics and compute. More importantly, it's a general-purpose API, which means we can run machine learning operations, which is really important. You can enable WebGPU support by simply specifying the device as WebGPU. Availability is still a little limited, as I was mentioning earlier, but we hope to see browsers move towards better support in future. WebNN works similarly: you specify the device as WebNN and can target the NPU, GPU, or CPU. And then a quick slide on quantization, which is very important: because browsers are extremely resource-limited, we encourage users to quantize their models and reduce the precision in exchange for lower resource consumption — less bandwidth for the user, since the model only needs to be downloaded once, and lower memory consumption at runtime.
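One way to put that into practice is to detect WebGPU and fall back to WebAssembly, lowering the precision when running on the CPU. This is my own sketch; the model ID is a placeholder, and WebNN device strings ('webnn-gpu', 'webnn-npu', etc.) are an assumption to verify against your library version:

```js
import { pipeline } from '@huggingface/transformers';

// Prefer WebGPU when the browser exposes it; otherwise fall back to WebAssembly on the CPU
const device = 'gpu' in navigator ? 'webgpu' : 'wasm';

const detector = await pipeline('object-detection', 'Xenova/detr-resnet-50', {
  device,
  // Heavier quantization on CPU to keep download size and memory in check
  dtype: device === 'webgpu' ? 'fp16' : 'q8',
});

const detections = await detector('https://example.com/street.jpg');
console.log(detections); // [{ label, score, box }, ...]
```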

We also expose other APIs for users who would like a little more control, for example for a segmentation demo like this. Some factors to consider as you develop for the web: bandwidth is important, since the user needs to download the model once, so choose models that can run on the target hardware. Accuracy versus speed: what level of quantization are you going to use, and what latency and precision are required? Device features: which browser APIs are required — microphone input and WebGPU support are among the top ones. And target devices: are you building for mobile, desktop, or anything in between? Now a quick run-through of some applications you can build. Of course, privacy-focused chatbots: being able to run LLMs in the browser is maybe not new now, but the performance we're able to achieve is quite remarkable, especially on WebGPU. In this case, a 4B model running at around 90 tokens per second on an RTX 4090, and on this Mac here a 1.7B model running at around 130 tokens per second, which is really great to see. Major improvements, with more still coming; with native WebGPU, I think there are a lot of improvements we can still make.
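For the in-browser LLM case, a sketch of a streaming text-generation pipeline might look like the following. The model ID is a placeholder for any Transformers.js-compatible chat checkpoint, and the streamer options are as documented for recent versions of the library, so verify them against the version you use:

```js
import { pipeline, TextStreamer } from '@huggingface/transformers';

// Placeholder model ID — pick a small instruct model converted to ONNX on the Hub
const generator = await pipeline('text-generation', 'onnx-community/Qwen2.5-0.5B-Instruct', {
  device: 'webgpu',
  dtype: 'q4',
});

const messages = [{ role: 'user', content: 'Explain WebGPU in one sentence.' }];

// Stream tokens to the UI as they are generated instead of waiting for the full reply
const streamer = new TextStreamer(generator.tokenizer, {
  skip_prompt: true,
  callback_function: (text) => console.log(text),
});

await generator(messages, { max_new_tokens: 128, streamer });
```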

6. Browser-Based ML Applications

Short description:

Real-time captioning with vision-language models, reasoning models like DeepSeek R1 distillations reaching GPT-4o levels, and multimodal applications for image tasks. Speech recognition and text-to-speech applications with models like Whisper and Kokoro. In-browser inference for image editing, educational tools, and visualization of attention mechanisms in vision transformers.

Real-time captioning is exemplified by using webcam input with a vision-language model created by Hugging Face to describe visual content. Support for reasoning models like DeepSeek R1, with a distilled version performing on par with GPT-4o, showcases the impressive capabilities of smaller models. Multimodal applications combine text and image input/output, enabling tasks like generating images or converting images to LaTeX locally in the browser. Speech recognition with Whisper and text-to-speech with Kokoro demonstrate powerful and efficient model applications.

Background remover models and image editing with models like Segment Anything highlight the utility of in-browser inference for tasks like image manipulation and editing. Educational use cases include visualizing attention mechanisms in vision transformers and understanding tokenization processes locally. Tools for dissecting and visualizing models, supported by Netron, provide insights into model structures and workings. Mobile and desktop games are also supported, expanding the range of applications for browser-based inference.

7. Advanced Browser-Based ML Implementations

Short description:

Various multimodal applications like captioning, speech recognition with Whisper, and text-to-speech with Kokoro. In-browser inference for background removal and image editing with models like Segment Anything. Educational use cases for attention mechanisms, tokenization, model visualization, and support for mobile and desktop games.

You can also take an image and, let's say, convert an image of LaTeX into the LaTeX code. We support various other multimodal applications, like captioning. Speech recognition with models like Whisper is a very important, powerful, and popular use case that people have built cool applications with. Text-to-speech with models like Kokoro, which is a very small 82-million-parameter model; I'll play a quick demo of the quality: "Kokoro is an open-weight TTS model with 82 million parameters and a lightweight architecture. It delivers comparable quality to larger models while being significantly faster and more cost-efficient."
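Kokoro itself ships with its own small JavaScript wrapper, but as a generic sketch of in-browser text-to-speech through the Transformers.js pipeline API (the model ID and speaker-embeddings URL follow the library's documented SpeechT5 example and are placeholders here):

```js
import { pipeline } from '@huggingface/transformers';

// Placeholder model ID — any text-to-speech model converted for Transformers.js
const synthesizer = await pipeline('text-to-speech', 'Xenova/speecht5_tts');

// SpeechT5-style models need speaker embeddings to pick a voice
const speaker_embeddings = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/speaker_embeddings.bin';

const result = await synthesizer('Hello from the browser!', { speaker_embeddings });
// result: { audio: Float32Array, sampling_rate: 16000 } — ready to play via the Web Audio API
console.log(result.sampling_rate);
```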

We also have background-removal models, which, as I mentioned before, are one of the canonical examples of why you would choose in-browser inference. There's also image editing with models like Segment Anything; people have actually been able to create Figma plugins that utilize Segment Anything, and in this case it's only around 15 megabytes in size. Being able to run that in the browser locally is really great, and this is running in real time here.
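A rough sketch of the in-browser segmentation idea with the generic image-segmentation pipeline (the model ID is a placeholder, not necessarily the Segment Anything variant used in the talk):

```js
import { pipeline } from '@huggingface/transformers';

// Placeholder model ID — any Transformers.js-compatible segmentation model from the Hub
const segmenter = await pipeline('image-segmentation', 'Xenova/segformer_b2_clothes', {
  device: 'webgpu',
});

const segments = await segmenter('portrait.jpg');
// Each entry has a label, a score, and a mask (a RawImage) that can be drawn onto a canvas
for (const { label, score, mask } of segments) {
  console.log(label, score, mask.width, mask.height);
}
```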

We also support a bunch of educational use cases, such as visualizing the attention mechanism in vision transformers to see what the models are focusing on. That's another example where everything runs locally. Other educational tools cover tokenization — how tokenization works — and being able to run that locally is also a great use case. You can also visualize the models themselves: I'll skip forward a bit, but you can dissect and really go into detail about what the models are and how they're built. This is powered by Netron. We also support mobile and desktop games. In this case, it's a simple image classification model: you draw on a canvas and the model predicts what you're drawing. The game is similar to Google's Quick, Draw!, where the goal is to draw as many items as you can in an allotted amount of time. Another example is object detection. We have depth estimation and many other normal vision applications. Music generation is another one — for example, lo-fi music.

QnA

Advanced Interactive AI Demonstrations

Short description:

Lo-fi music generation and a semantic search use case, converting raw data to embeddings for semantic search. Live demo of conversational AI with interactive role-playing scenarios like joke-telling and gift requests. Showcase of SmolVLM for real-time analysis and encouragement to explore browser-based machine learning opportunities.

One other use case I would say is very important, and very simple, is semantic search: taking raw data, whether an image (as you can see on the left) or audio, converting it to embeddings, and then doing semantic search at runtime. And then this is the most recent demo, a conversational AI, which we will try to run. Remember, as in a previous talk with Thomas: if the demo works, we applaud, and if it doesn't work, we applaud louder. So let's see if it does work. I'm going to pull it up.
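Before the conversational demo below, here's a minimal sketch of that embedding-based semantic search for text (the model ID is a common feature-extraction example on the Hub; the documents and helper are my own illustration):

```js
import { pipeline } from '@huggingface/transformers';

const embed = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

const docs = [
  'Remove the background from an image',
  'Transcribe speech to text in real time',
];

// Mean-pool and normalize so that cosine similarity reduces to a dot product
const docEmbeddings = (await embed(docs, { pooling: 'mean', normalize: true })).tolist();
const [queryEmbedding] = (await embed('convert audio into a transcript', { pooling: 'mean', normalize: true })).tolist();

const dot = (a, b) => a.reduce((sum, v, i) => sum + v * b[i], 0);
const ranked = docs
  .map((text, i) => ({ text, score: dot(queryEmbedding, docEmbeddings[i]) }))
  .sort((a, b) => b.score - a.score);

console.log(ranked[0].text); // expected: the transcription document
```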

Hey there, my name is Hart. How can I help you today? Hey there, I'm on stage at JS Nation. Can you tell me a joke? Sure, here's one for you. Why don't programmers like to watch old movies? Because they're too bugged to sit through. Okay, now can you tell me what the capital of Netherlands is? Sure, the capital of the Netherlands is Amsterdam. And now I want you to role play as Santa, and I'm going to be a little boy asking you for a gift.

Okay, I'll be Santa. What kind of gift are you looking for? Could I have a lump of coal, please? I'm sorry, but as a character, I can't actually give you a lump of coal. But I can tell you that in the world of ice hockey, a lump of ice is a crucial part of the game. The more you know. And one last live demo I'd like to show is SmolVLM running in real time. So what I'm going to do is click start here and ask it what it sees. In this case it says: with his hands on his chest, the word Hugging Face on the shirt. I guess that seems right, so that's great to see. And let's continue; I think one or two more slides. Just some final thoughts. I would like to say that there are endless possibilities, and I want to encourage you to explore in-browser, on-device machine learning and share what you create — we'd love to see it. And if I leave you with one thought, it's: always bet on JS. As Brendan Eich would say, first they said JS couldn't be used to build rich internet apps, then they said it couldn't be fast, then they said it couldn't be fixed, then that it couldn't do multicore/GPU. Wrong every time. So his advice: always bet on JS.

Model Licensing and Transition to ECMAScript

Short description:

Discussion of the transition to ECMAScript for better efficiency and faster development. Explanation of shipping Electron applications with pre-cached models. Licensing details for different models and factors affecting model redownloading.

So naturally I quote-tweeted him and said: then they said it couldn't run LLMs. Wrong every time. And he seemed to like that. So I hope we're on the right track with being able to build cool demos with browser AI, and we'd love to see what you build. So thanks so much.

The first question here is: if I'm using an Electron-based build, can I ship the application with pre-cached models, or do I always need to specify a mirror? So, we do support loading models at build time. You can specify the directory you would like to load them from, and if you ship your Electron application with that model, you'll be able to run it without having to fetch anything from a server. Great.
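A small sketch of what that can look like with the library's env settings (the path and model folder name are placeholders for wherever your Electron build bundles the files):

```js
import { env, pipeline } from '@huggingface/transformers';

// Load models that ship with the app instead of fetching them from the Hugging Face Hub
env.allowRemoteModels = false;      // never hit the network
env.localModelPath = './models/';   // directory containing the pre-converted model folders

// Resolves to ./models/my-classifier (placeholder name) on disk
const classifier = await pipeline('text-classification', 'my-classifier');
console.log(await classifier('Works offline!'));
```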

What was the reason for starting with CommonJS in the first place? I would actually say a bit of inexperience on my side; I was traditionally more of a Python developer, more of a machine learning engineer. Making the switch to ECMAScript modules was a massive benefit. We've seen, with being able to run in workers and a bunch of other examples, that if someone is still using CommonJS, I would recommend switching over. I've been able to build much faster and much more efficiently after the switch. Nice.

Model Licensing, Commercial Use, and Redownloading

Short description:

Recommendation to switch from CommonJS to boost efficiency. Details on commercial use licensing for various models. Factors influencing model redownloading, browser caching leniency, and storage impact.


How does licensing work with different models? Are they open to use commercially? Yeah, so this all depends on the model you're going to run. As an example, the TTS model I was showcasing earlier, Kokoro, is commercially usable; it's under an Apache 2.0 license. For other models it depends: whether it's an MIT or Apache 2.0 license, those are commercially allowed. But it is very model-specific, and the company or person who makes the model specifies the license, which you can see on the model card when you search for it on the Hugging Face Hub. Great.

What factors could necessitate redownloading the model? Yeah, so basically we're at the mercy of the browser's caching implementation. Those models I was demoing earlier have actually been cached for probably the last month, just because I regularly visit them to showcase the demos. But if you are running low on storage space, the browser has permission to evict them from the cache. That said, browsers are actually very lenient about what level of system or disk space needs to be available, and the different caching mechanisms that Transformers.js uses are able to leverage that. So it depends — it is at the browser's discretion when it decides to evict — but I have seen that, especially if you don't have disk-space issues, the cache can be very long-lasting. Awesome.
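If eviction is a concern, the standard Storage API can be used to inspect quota and request persistent storage; this is a general browser API, not something specific to Transformers.js:

```js
// Ask the browser to treat this origin's storage (including cached models) as persistent
if (navigator.storage?.persist) {
  const persisted = await navigator.storage.persist();
  const { usage, quota } = await navigator.storage.estimate();
  console.log(`Persistent: ${persisted}; using ${usage} of ${quota} bytes`);
}
```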

Browser Agents, Performance, and Model Deployment

Short description:

Use of agents in the browser with Transformers.js. Performance considerations for CPU-only usage. Running a model in a project without downloading it, using .onnx files.

Next one here — I think this one is really interesting. Is it possible to use agents in the browser with Transformers.js? Yeah, so this is a really great example of where the community has been great and where we encourage the community to explore more. The use case of Transformers.js is to do the inference of the models: running the language models, generating the text. Then it would be up to the developer, depending on what application they're trying to build, to use Transformers.js to output function-calling code. Or, if you have an API that's exposed to the browser — as a website developer you allow something like a window function — and you let a library like Transformers.js see that and then run it, that could be a really interesting use case. But yeah, I would say it's up to developers to take the inference of the large language models, build around it, and explore and create agentic applications. Sick.
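To make that concrete, here's a very rough, hypothetical sketch of the pattern described: expose a browser-side function as a "tool", ask the model to emit a JSON tool call, and dispatch it. Everything here (the tool, the prompt format, the parsing) is my own illustration, not an API of Transformers.js:

```js
import { pipeline } from '@huggingface/transformers';

// Hypothetical browser-side "tool" the page chooses to expose
window.getTime = () => new Date().toLocaleTimeString();

const generator = await pipeline('text-generation', 'onnx-community/Qwen2.5-0.5B-Instruct', {
  device: 'webgpu',
  dtype: 'q4',
});

const messages = [
  { role: 'system', content: 'Reply ONLY with JSON like {"tool":"getTime","args":[]}.' },
  { role: 'user', content: 'What time is it?' },
];

const [output] = await generator(messages, { max_new_tokens: 64 });
const reply = output.generated_text.at(-1).content; // last message is the assistant's reply

try {
  const call = JSON.parse(reply);
  if (typeof window[call.tool] === 'function') {
    console.log(window[call.tool](...call.args)); // run the requested tool
  }
} catch {
  console.log('Model did not produce a parseable tool call:', reply);
}
```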

How good is the performance when using CPU only? That's a great question. There's always the "it depends" answer, but some of the Whisper models I've showcased here run perfectly fine on my iPhone 12 in WebAssembly, in the browser — multiple layers of sandboxing and abstraction on a relatively old phone, maybe not one of the oldest, but even in that case it runs perfectly well. If you're going to be running large language models, I would highly recommend utilizing the GPU; going via WebAssembly, or even a CPU Node.js backend, can be pretty slow. So we recommend running on WebGPU if you have the option. If not, choose the model to align with your users' and target audience's device specifications, and whether you're building for mobile or desktop; those factors all contribute to what model, quantization, and model size you choose. Awesome.

One last one: is it possible to run a model by placing it in a project without downloading it? If I understand the question correctly, it's basically having the model either pre-downloaded or dragged into the interface. That could be great. All you need is a .onnx file, and if it follows the correct signature — if it's exported with the tools we recommend — then it can run. The thing we're currently considering as a feature request is using that data directly instead of going via the Hugging Face Hub, and those are things we're definitely open to. Amazing. Thank you so much. It's been awesome. Cool. Thank you. Thank you.
