Migration from WebGL to WebGPU


In this presentation, I'll explore the transition from WebGL to WebGPU, demonstrating how these changes affect game development. The talk includes practical examples and code snippets to illustrate key differences and their implications for performance and efficiency.

This talk was presented at JS GameDev Summit 2023.

FAQ

WebGL is a web technology that allows web developers to incorporate 3D objects into browsers without requiring extra plugins. The first stable version, WebGL 1.0, was released in 2011, and it was based on OpenGL ES 2.0. WebGL 2.0, introduced in 2017, brought several improvements and new features.

WebGPU is a new graphics API designed to offer developers more control and flexibility over graphics hardware resources. It is built on the foundation of other APIs like Vulkan, Direct3D 12, and Metal. WebGPU is available on platforms like Mac, Windows, and Chrome OS, with support for Linux and Android expected soon.

WebGL and WebGPU differ in several ways, including their initialization processes, graphics pipeline management, and handling of uniform variables. WebGL uses a context object tied to a specific canvas, while WebGPU uses a device that can render on multiple canvases. WebGPU encapsulates more aspects of rendering into a single pipeline object, making the process more predictable and error-resistant.
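To make that encapsulation concrete, here is a minimal sketch of creating a WebGPU render pipeline. The WGSL source and entry point names are illustrative, and `device` is assumed to have been obtained during initialization; this is a sketch of the general shape of the API, not code from the talk.

```js
// Minimal sketch: shaders and fixed-function state are bundled into one pipeline object.
const shaderModule = device.createShaderModule({
  code: /* wgsl */ `
    @vertex
    fn vs_main(@builtin(vertex_index) i: u32) -> @builtin(position) vec4<f32> {
      var pos = array<vec2<f32>, 3>(
        vec2<f32>(0.0, 0.5), vec2<f32>(-0.5, -0.5), vec2<f32>(0.5, -0.5));
      return vec4<f32>(pos[i], 0.0, 1.0);
    }
    @fragment
    fn fs_main() -> @location(0) vec4<f32> {
      return vec4<f32>(1.0, 0.0, 0.0, 1.0);
    }
  `,
});

const pipeline = device.createRenderPipeline({
  layout: 'auto', // let the implementation derive the bind group layouts
  vertex:   { module: shaderModule, entryPoint: 'vs_main' },
  fragment: { module: shaderModule, entryPoint: 'fs_main',
              targets: [{ format: navigator.gpu.getPreferredCanvasFormat() }] },
  primitive: { topology: 'triangle-list' },
});
```

Because all of this state is validated when the pipeline is created, errors surface up front rather than at draw time.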

WebGPU is currently supported on Mac, Windows, and Chrome OS. Support for Linux and Android is expected to be added soon.

Several graphics engines either support or are experimenting with WebGPU. Babylon.js fully supports WebGPU, while Three.js has experimental support. PlayCanvas is still in development, Unity has announced early experimental support, and Cocos Creator 3.6.2 officially supports WebGPU. Construct supports WebGPU on Chrome version 113 or later on Windows, MacOS, and Chrome OS.

In WebGL, uniform variables can be set directly via API calls, but this requires multiple calls for each variable. WebGL2 allows grouping uniform variables into buffers for better performance. WebGPU, on the other hand, exclusively uses uniform buffers, allowing data to be loaded in large blocks, which is preferred by modern GPUs for increased performance.
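A hedged sketch of the two approaches side by side; the uniform name, the 16-byte buffer size, and the `program` and `pipeline` objects are illustrative rather than taken from the talk.

```js
// WebGL: one API call per uniform, with locations looked up from the linked program.
gl.useProgram(program);
gl.uniform4f(gl.getUniformLocation(program, 'uColor'), 1.0, 0.0, 0.0, 1.0);

// WebGPU: uniforms live in a buffer that is written in one block and bound via a bind group.
const uniformBuffer = device.createBuffer({
  size: 16, // one vec4<f32> = 4 * 4 bytes
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(uniformBuffer, 0, new Float32Array([1.0, 0.0, 0.0, 1.0]));

const bindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [{ binding: 0, resource: { buffer: uniformBuffer } }],
});
// Later, inside a render pass: passEncoder.setBindGroup(0, bindGroup);
```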

One tool available for converting GLSL to WGSL is Naga, a Rust library; with the help of WebAssembly, it can even be used directly in the browser.
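As an illustration of what such a conversion involves, here is the same trivial fragment shader written by hand in GLSL ES 3.00 and in WGSL, shown as strings the way they would appear in a web project. This is only an approximation of the kind of translation Naga automates; its actual output may differ in details.

```js
const glslFragment = /* glsl */ `#version 300 es
precision mediump float;
in vec2 vUv;
out vec4 fragColor;
void main() {
  fragColor = vec4(vUv, 0.0, 1.0);
}`;

const wgslFragment = /* wgsl */ `
@fragment
fn main(@location(0) vUv: vec2<f32>) -> @location(0) vec4<f32> {
  return vec4<f32>(vUv, 0.0, 1.0);
}`;
```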

In WebGL, images are usually loaded so that the first pixel is in the bottom left corner, while in WebGPU, images are loaded from the top left corner. This follows the practice used by Direct3D and Metal systems, making it more straightforward for most developers.
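A small, hedged example of handling the difference at upload time in each API; `image`, `imageBitmap`, and `gpuTexture` are assumed to exist already and are not from the talk.

```js
// WebGL: images are commonly flipped at upload so UVs match the bottom-left origin.
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);

// WebGPU: the top-left origin usually needs no flip, but flipY is available
// when porting content authored with WebGL conventions.
device.queue.copyExternalImageToTexture(
  { source: imageBitmap, flipY: false },
  { texture: gpuTexture },
  [imageBitmap.width, imageBitmap.height]
);
```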

WebGL uses a coordinate system where the depth range is from -1 to 1, while WebGPU uses a range from 0 to 1. This difference can cause issues such as flipped images or objects being clipped too soon, and adjustments may be needed when transitioning from WebGL to WebGPU.
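One hedged way to adapt existing projection math is to post-multiply a WebGL-style projection matrix by a correction matrix that remaps z from [-1, 1] to [0, 1]. The column-major layout below matches libraries such as gl-matrix, which also provide zero-to-one helpers like mat4.perspectiveZO that build the matrix directly.

```js
// Remaps clip-space depth: z' = 0.5 * z + 0.5 * w (column-major storage).
const glToGpuDepth = new Float32Array([
  1, 0, 0,   0,  // column 0
  0, 1, 0,   0,  // column 1
  0, 0, 0.5, 0,  // column 2
  0, 0, 0.5, 1,  // column 3
]);

// Applied as P' = C * P, e.g. with gl-matrix:
// mat4.multiply(webgpuProjection, glToGpuDepth, webglProjection);
```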

Transitioning from WebGL to WebGPU represents a step towards the future of web graphics. WebGPU combines successful features and practices from various graphics APIs, offering significant benefits in terms of flexibility, performance, and resource management. However, it requires a thorough understanding of technical and philosophical changes.

Dmitrii Ivashchenko
21 min
28 Sep, 2023

Video Summary and Transcription
This talk explores the differences between WebGL and WebGPU, with a focus on transitioning from WebGL to WebGPU. It discusses the initialization process and shader programs in both APIs, as well as the creation of pipelines in WebGPU. The comparison of uniforms highlights the use of uniform buffers for improved performance. The talk also covers the differences in conventions between WebGL and WebGPU, including textures, viewport and clip spaces. Lastly, it mentions the differences in depth range and projection matrix between the two APIs.

1. Introduction to WebGL and WebGPU

Short description:

In this talk, we will explore the differences between WebGL and the soon-to-be-released WebGPU and learn how to get a project ready for the transition. WebGL's roots trace back to OpenGL, whose desktop version debuted in 1993; the first stable version, WebGL 1.0, was released in 2011. WebGL 2.0, released in 2017, brought several improvements and new features. WebGPU, built on Vulkan, Direct3D 12, and Metal, has been making significant progress and is supported by several engines.

Hello, everyone. I am Dmitrii Ivashchenko, a Lead Software Engineer at My.Games. And in this talk, we will explore the differences between WebGL and the soon-to-be-released WebGPU, and learn how to get a project ready for the transition.

Let's begin by exploring the timeline of WebGL and WebGPU, as well as the current state of both technologies. WebGL, like many web technologies, has roots that go back a long way. Its desktop counterpart, OpenGL, debuted way back in 1993. In 2011, WebGL 1.0 was released as the first stable version. It was based on OpenGL ES 2.0, which was introduced in 2007, and this release allowed web developers to incorporate 3D graphics into browsers without requiring extra plugins. In 2017, a new version called WebGL 2.0 was introduced. It arrived six years after the initial version and was based on OpenGL ES 3.0, which was released in 2012. WebGL 2.0 came with several improvements and new features, making it even more capable of producing powerful 3D graphics on the web.

Lately, there has been a growing interest in new graphics APIs that offer developers more control and flexibility. Three notable APIs here are Vulkan, Direct3D 12, and Metal, and together they form the foundation for WebGPU. Vulkan, developed by the Khronos Group, is a cross-platform API that provides developers with lower-level access to graphics hardware resources, allowing for high-performance applications with better control of the graphics hardware. Direct3D 12, created by Microsoft, is exclusive to Windows and Xbox and offers developers deeper control over graphics resources. And Metal, an API exclusive to Apple devices, was designed by Apple with the performance of their own hardware in mind. WebGPU has been making significant progress lately. It has expanded to platforms like Mac, Windows, and Chrome OS, and is now available in Chrome and Edge starting with version 113. Linux and Android support is expected to be added soon. There are several engines that either support or are experimenting with WebGPU. For example, Babylon.js fully supports WebGPU, while Three.js currently has experimental support. PlayCanvas support is still in development, but its future looks promising. Unity announced early, experimental WebGPU support in alpha version 2023.2. Cocos Creator 3.6.2 officially supports WebGPU. And finally, Construct currently supports it only in Chrome version 113 or later on Windows, macOS, and Chrome OS machines. Taking this into consideration, it seems like a wise move to start transitioning towards WebGPU, or at least to prepare projects for a future transition. Now let's explore the main high-level differences.

2. Graphics API Initialization and Shader Programs

Short description:

When working with graphics APIs like WebGL and WebGPU, the first step is to initialize the main object for interaction. WebGL uses a context that represents an interface for drawing on a specific HTML5 canvas element, while WebGPU introduces the concept of a device that provides more flexibility. In WebGL, the shader program is the primary focus, and creating a program involves multiple steps. However, this process can be complicated and error-prone.

And when beginning to work with graphics APIs, the first step is to initialize the main object for interaction. This process has some differences between WebGL and WebGPU, which can cause some issues in both systems. In WebGL, this object is called a context, and it represents an interface for drawing on an HTML5 canvas element. Obtaining this context is easy, but it's important to note that it's tied to a specific canvas. This means that if you need to render on multiple canvases, you will need multiple contexts.
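For reference, a minimal sketch of that WebGL initialization; the canvas selector is illustrative.

```js
// WebGL: the context is obtained from (and tied to) one specific canvas.
const canvas = document.querySelector('#game-canvas'); // illustrative selector
const gl = canvas.getContext('webgl2') || canvas.getContext('webgl');
if (!gl) {
  throw new Error('WebGL is not supported in this browser');
}
```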

And WebGPU introduces a new concept called device. The device represents a GPU abstraction that you will interact with. The initialization process is a bit more complex than in WebGL, but it provides more flexibility. One advantage of this model is that one device can render on multiple canvases or even none. This provides additional flexibility, allowing one device to control rendering in multiple windows or contexts.
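And a minimal sketch of the WebGPU equivalent, assuming an async context and the same `canvas` element as above. Note that obtaining the device and configuring a canvas are separate steps, which is what allows one device to serve several canvases or none at all.

```js
// WebGPU: one device, obtained through an adapter, can drive any number of canvases.
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

// Configuring a canvas is a separate step; the same device could configure several.
const context = canvas.getContext('webgpu');
context.configure({
  device,
  format: navigator.gpu.getPreferredCanvasFormat(),
  alphaMode: 'opaque',
});
```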

WebGL and WebGPU are two distinct methods for managing and organizing the graphics pipeline. In WebGL, the primary emphasis is on the shader program, which combines vertex and fragment shaders to determine how vertices are transformed and how each pixel is colored. To create a program in WebGL, you need to follow several steps. First, you write and compile the source code for the shaders. Next, you attach the compiled shaders to the program and then link them. Then you activate the program before rendering. And lastly, you transmit data to the activated program. This process provides flexible control over graphics, but it can be complicated and prone to errors, particularly for large and complex projects.
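A sketch of those steps in code; the shader sources and the uniform name are placeholders, not code from the talk.

```js
// The classic WebGL shader program setup, step by step.
function createProgram(gl, vertexSource, fragmentSource) {
  // 1. Write and compile the shader source code.
  const vs = gl.createShader(gl.VERTEX_SHADER);
  gl.shaderSource(vs, vertexSource);
  gl.compileShader(vs);

  const fs = gl.createShader(gl.FRAGMENT_SHADER);
  gl.shaderSource(fs, fragmentSource);
  gl.compileShader(fs);

  // 2. Attach the compiled shaders to a program and link it.
  const program = gl.createProgram();
  gl.attachShader(program, vs);
  gl.attachShader(program, fs);
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    throw new Error(gl.getProgramInfoLog(program));
  }
  return program;
}

// 3. Activate the program before rendering, then 4. transmit data to it.
const program = createProgram(gl, vertexSource, fragmentSource);
gl.useProgram(program);
gl.uniform1f(gl.getUniformLocation(program, 'uTime'), 0.0); // illustrative uniform
```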
