Video Summary and Transcription
This talk explores the differences between WebGL and WebGPU, with a focus on transitioning from WebGL to WebGPU. It discusses the initialization process and shader programs in both APIs, as well as the creation of pipelines in WebGPU. The comparison of uniforms highlights the use of uniform buffers for improved performance. The talk also covers the differences in conventions between WebGL and WebGPU, including textures, viewports, and clip spaces. Lastly, it covers the differences in depth range and projection matrix between the two APIs.
1. Introduction to WebGL and WebGPU
In this talk, we will explore the differences between WebGL and the soon-to-be-released WebGPU and learn how to get a project ready for the transition. WebGL's lineage traces back to OpenGL, which debuted in 1993; the first stable version, WebGL 1.0, was released in 2011, and WebGL 2.0, released in 2017, brought several improvements and new features. WebGPU, built on Vulkan, Direct3D 12, and Metal, has been making significant progress and is supported by several engines.
Hello, everyone. I am Dmitry Vaschenko, a Lead Software Engineer at My.Games. And in this talk, we will explore the differences between WebGL and the soon-to-be-released WebGPU and learn how to get a project ready for the transition.
Let's begin by exploring the timeline of WebGL and WebGPU, as well as their current state. WebGL, like many web technologies, has roots that reach back decades: OpenGL, its desktop predecessor, debuted way back in 1993. In 2011, WebGL 1.0 was released as the first stable version of WebGL. It was based on OpenGL ES 2.0, which was introduced in 2007, and this release allowed web developers to incorporate 3D graphics into browsers without requiring extra plugins. In 2017, a new version was introduced, WebGL 2.0. Released six years after the initial version, it was based on OpenGL ES 3.0, which came out in 2012. WebGL 2.0 brought several improvements and new features, making it even more capable of producing powerful 3D graphics on the web.
Lately, there has been a growing interest in new graphics APIs that offer developers more control and flexibility. Three notable APIs here are Vulkan, Direct3D 12, and Metal, and together these three create the foundation for WebGPU. Vulkan, developed by the Khronos Group, is a cross-platform API that provides developers with lower-level access to graphics hardware resources, allowing for high-performance applications with better control of the graphics hardware. Direct3D 12, created by Microsoft, is exclusive to Windows and Xbox, obviously, and offers developers deeper control over graphics resources. And Metal, an API exclusive to Apple devices, was designed by Apple, of course, with the maximum performance of their hardware in mind.

WebGPU has been making significant progress lately. It has expanded to platforms like Mac, Windows, and ChromeOS, and is now available in Chrome and Edge starting with version 113, with Linux and Android support expected to be added soon. There are several engines that either support or are experimenting with WebGPU. For example, Babylon.js fully supports WebGPU, while Three.js currently has experimental support. PlayCanvas support is still in development, but its future looks promising. Unity announced early, experimental WebGPU support in alpha version 2023.2. Cocos Creator 3.6.2 officially supports WebGPU. And finally, Construct currently supports it only in Chrome version 113 or later on Windows, macOS, and ChromeOS machines. Taking this into consideration, it seems like a wise move to start transitioning towards WebGPU, or at least preparing projects for a future transition. Now let's explore the main high-level differences.
2. Graphics API Initialization and Shader Programs
When working with graphics APIs like WebGL and WebGPU, the first step is to initialize the main object for interaction. WebGL uses a context, which represents an interface for drawing on a specific HTML5 canvas element, while WebGPU introduces the concept of a device that provides more flexibility. In WebGL, the shader program is the primary focus, and creating a program involves multiple steps. However, this process can be complicated and error-prone.
And when beginning to work with graphics APIs, the first step is to initialize the main object for interaction. This process has some differences between WebGL and WebGPU, which can cause some issues in both systems. In WebGL this object is called a context, and it represents an interface for drawing on an HTML5 canvas element. Obtaining this context is easy, but it's important to note that it's tied to a specific canvas. This means that if you need to render on multiple canvases, you will need multiple contexts.
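A minimal sketch of how that looks in practice (the canvas selector here is just for illustration):

```js
// WebGL: the context is obtained from, and tied to, one specific canvas.
const canvas = document.querySelector('canvas');
const gl = canvas.getContext('webgl2'); // or 'webgl' for WebGL 1.0
if (!gl) {
  console.error('WebGL is not supported in this browser');
}
```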
And WebGPU introduces a new concept called a device. The device represents a GPU abstraction that you will interact with. The initialization process is a bit more complex than in WebGL, but it provides more flexibility: one device can render onto multiple canvases, or even none at all, allowing a single device to control rendering in multiple windows or contexts.
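By contrast, here is a sketch of WebGPU initialization; note that the device is created independently of any canvas, which is only attached afterwards:

```js
// WebGPU: initialization is asynchronous and not tied to a canvas.
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

// A canvas is connected later through its own context; one device
// can drive several canvases, or none at all (e.g. compute-only work).
const context = canvas.getContext('webgpu');
context.configure({
  device,
  format: navigator.gpu.getPreferredCanvasFormat(),
});
```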
WebGL and WebGPU take two distinct approaches to managing and organizing the graphics pipeline. In WebGL, the primary emphasis is on the shader program, which combines vertex and fragment shaders to determine how vertices are transformed and how each pixel is colored. To create a program in WebGL, you need to follow several steps. Firstly, you write and compile the source code for the shaders. Next, you attach the compiled shaders to the program and then link them. Then you activate the program before rendering. And lastly, you transmit data to the activated program. This process provides flexible control over graphics but can be complicated and prone to errors, particularly for large and complex projects.
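Those steps map onto the API roughly like this; a minimal sketch, with error handling reduced to thrown exceptions and the shader sources assumed to be defined elsewhere:

```js
// WebGL: compile, attach, link — then activate before rendering.
function createProgram(gl, vertexSrc, fragmentSrc) {
  const compile = (type, src) => {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, src);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
      throw new Error(gl.getShaderInfoLog(shader));
    }
    return shader;
  };

  const program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSrc));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSrc));
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    throw new Error(gl.getProgramInfoLog(program));
  }
  return program;
}

const program = createProgram(gl, vertexSource, fragmentSource);
gl.useProgram(program); // activate, then transmit uniforms and attributes
```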
3. WebGPU Pipeline Creation
In WebGPU, a pipeline replaces separate programs and includes shaders and other rendering parameters. Creating a pipeline involves defining the shader, creating the pipeline, and activating it before rendering. This approach simplifies the process and allows for optimized and efficient graphics on the web.
When developing graphics for the web, it's essential to have a streamlined and efficient process, and in WebGPU this is achieved through the use of a pipeline. The pipeline replaces the need for separate programs and includes not only shaders but also other critical information that is established as state in WebGL. Creating a pipeline in WebGPU may seem more complicated initially, but it offers greater flexibility and modularity. The process involves three key steps.
First, you must define the shader by writing and compiling the shader source code just as you would in WebGL. Second, you create the pipeline by combining the shaders and other rendering parameters into a cohesive unit. And finally, you must activate the pipeline before rendering. Compared to WebGL, WebGPU encapsulates more aspects of rendering into a single object. This approach creates a more predictable and error-resistant process, and instead of managing shaders and rendering states separately, everything is combined into one pipeline object. By following these steps, developers can create optimized and efficient graphics for the web with ease.
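Here is a sketch of those three steps; the WGSL source and the entry point names vs_main and fs_main are assumptions for illustration:

```js
// 1. Define the shader: write and compile the WGSL source.
const shaderModule = device.createShaderModule({ code: wgslSource });

// 2. Create the pipeline: shaders plus render state in one object.
const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: { module: shaderModule, entryPoint: 'vs_main' },
  fragment: {
    module: shaderModule,
    entryPoint: 'fs_main',
    targets: [{ format: navigator.gpu.getPreferredCanvasFormat() }],
  },
  primitive: { topology: 'triangle-list' },
});

// 3. Activate the pipeline inside a render pass before drawing.
passEncoder.setPipeline(pipeline);
```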
4. Comparison of Uniforms in WebGL and WebGPU
Uniform variables in WebGL and WebGPU can be consolidated into larger structures using uniform buffers, leading to fewer API calls and improved performance. WebGL2 allows subsets of a large uniform buffer to be bound through the bindBufferRange API call, while WebGPU uses dynamic uniform buffer offsets. These optimizations provide flexibility and efficiency for developers working on WebGL and WebGPU projects.
Now, let's compare uniforms in WebGL and WebGPU. Uniform variables offer constant data that can be accessed by all shader instances, and with basic WebGL we can set uniform variables directly via API calls. This approach is straightforward but necessitates multiple API calls for each uniform variable. With the advent of WebGL2, developers are now able to group uniform variables into buffers, a highly efficient alternative to setting each uniform separately. By consolidating different uniforms into a larger structure using uniform buffers, all uniform data can be transmitted to the GPU at once, leading to fewer API calls and superior performance. In the case of WebGL2, subsets of a large uniform buffer can be bound through a special API call known as bindBufferRange. Similarly, in WebGPU, dynamic uniform buffer offsets are utilized for the same purpose, allowing a list of offsets to be passed when invoking the setBindGroup API. This level of flexibility and optimization has made uniform buffers a valuable tool for developers looking to optimize their WebGL and WebGPU projects.
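The two calls look like this; a minimal sketch, where blockIndex, uniformBuffer, offset, size, bindGroup, and dynamicOffset are placeholders:

```js
// WebGL2: bind a slice of one large uniform buffer to a binding point.
gl.bindBufferRange(gl.UNIFORM_BUFFER, blockIndex, uniformBuffer, offset, size);

// WebGPU: the equivalent idea via dynamic uniform buffer offsets.
// The bind group layout entry must declare { hasDynamicOffset: true }.
passEncoder.setBindGroup(0, bindGroup, [dynamicOffset]);
```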
5. Transitioning from WebGL to WebGPU
In WebGPU, work is done exclusively through uniform buffers rather than individual uniform variables, since modern GPUs prefer loading data in one large block instead of many small ones. Transitioning from WebGL to WebGPU involves modifying both the API and the shaders. The WGSL specification facilitates a smooth and intuitive transition while ensuring optimal efficiency and performance on contemporary GPUs, although some built-in GLSL functions have different names or have been replaced; there are tools available that can automate the process of converting GLSL to WGSL. The talk also covers differences in conventions between WebGL and WebGPU, specifically textures, viewports, and clip spaces; when you migrate, you may come across an unexpected issue where your images are flipped.
A better method is available in WebGPU. Instead of supporting individual uniform variables, work is done exclusively through uniform buffers. Modern GPUs prefer loading data in one large block rather than many small ones, so instead of recreating and rebinding small buffers each time, creating one large buffer and using different parts of it for different draw calls can significantly increase performance. And while WebGL is more imperative, resetting global state with each call and striving to be as simple as possible, WebGPU aims to be more object-oriented and focused on resource reuse, which leads to efficiency, of course.
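A sketch of the one-large-buffer pattern, assuming objectCount, objectData, bindGroup, and vertexCount are defined elsewhere:

```js
// Slices of one big uniform buffer, one per object, aligned to the
// required minUniformBufferOffsetAlignment (256 bytes on most hardware).
const alignment = 256;
const bigBuffer = device.createBuffer({
  size: alignment * objectCount,
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});

for (let i = 0; i < objectCount; i++) {
  device.queue.writeBuffer(bigBuffer, i * alignment, objectData[i]);
}

// At draw time, reuse one bind group and only vary the dynamic offset.
for (let i = 0; i < objectCount; i++) {
  passEncoder.setBindGroup(0, bindGroup, [i * alignment]);
  passEncoder.draw(vertexCount);
}
```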
Although transitioning from WebGL to WebGPU may seem difficult due to the differences in approach, starting with a transition to WebGL2 as an intermediate step can simplify the work. Transitioning from WebGL to WebGPU involves modifying both the API and the shaders. The WGSL specification facilitates a smooth and intuitive transition while ensuring optimal efficiency and performance on contemporary GPUs. I have an example shader for a texture that uses GLSL and WGSL. WGSL serves as a connection between WebGPU and native graphics APIs. Although WGSL appears to be more verbose than GLSL, the format is still recognizable. The following tables display a comparison between the basic and matrix datatypes found in GLSL and WGSL. Moving from GLSL to WGSL indicates a preference for more stringent typing and clear specification of data sizes, resulting in better legibility and a lower chance of mistakes. The way structures are declared has also been altered, with the addition of explicit syntax for declaring fields in WGSL structures, and this highlights the need for improved clarity and simplification of data structures in shaders. And by altering the syntax of functions, WGSL promotes a unified approach to declarations and return values, which results in more consistent and predictable code.
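To give a flavor of that stricter, more explicit style, here is a minimal WGSL texture-sampling fragment shader, embedded the way WebGPU consumes it, as a string in JavaScript (the binding numbers and names are illustrative):

```js
// WGSL: explicit attributes, typed bindings, declared return types.
const fragmentWgsl = `
  @group(0) @binding(0) var mySampler: sampler;
  @group(0) @binding(1) var myTexture: texture_2d<f32>;

  @fragment
  fn fs_main(@location(0) uv: vec2f) -> @location(0) vec4f {
    return textureSample(myTexture, mySampler, uv);
  }
`;
```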
If you are working with WGSL, you will notice that some of the built-in GLSL functions have different names or have been replaced. This is actually helpful because it simplifies the function names and makes them more intuitive, which will make it easier for developers who are familiar with other graphics APIs to transition to WGSL. If you are planning to convert your WebGL projects to WebGPU, there are tools available that can automate the process of converting GLSL to WGSL. One such tool is Naga, a Rust library that can be used to convert GLSL to WGSL, and best of all, it can even be used right in your browser with the help of WebAssembly.
Let's talk about some of the differences in conventions between WebGL and WebGPU. Specifically, we will go over disparities in textures, viewports, and clip spaces. And when you migrate, you may come across an unexpected issue where your images are flipped. This is a common problem for those who have moved applications from OpenGL to Direct3D. In OpenGL and WebGL, images are usually loaded so that the first pixel is in the bottom left corner. However, many developers load images starting from the top left corner, which results in a flipped image. Direct3D and Metal use the upper left corner as the starting point for textures, and the developers of WebGPU have decided to follow this practice, since it appears to be the most straightforward approach for most developers. If your WebGL code selects pixels from the framebuffer, it's important to keep in mind that WebGPU uses a different coordinate system. To adjust for this, you may need to apply a straightforward y = 1 - y operation to correct the coordinates.
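In practice the correction is a one-liner, and WebGPU can also flip images at upload time; a sketch, where y, imageBitmap, and texture are placeholders:

```js
// Correct WebGL-style coordinates for WebGPU's top-left origin.
const flippedY = 1 - y; // normalized [0..1] texture coordinate

// When uploading an image, WebGPU can flip it during the copy instead.
device.queue.copyExternalImageToTexture(
  { source: imageBitmap, flipY: true },
  { texture },
  [imageBitmap.width, imageBitmap.height],
);
```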
6. Differences in Depth Range and Projection Matrix
WebGL and WebGPU have different definitions for the depth range of the clipping space. WebGL uses a range from minus one to one, while WebGPU uses a range from zero to one. The projection matrix is responsible for transforming the positions of your model into clip space. Adjustments can be made by ensuring the projection matrix generates outputs ranging from zero to one. Transitioning to WebGPU is a step towards the future of web graphics, combining successful features and practices from various graphics APIs.
If a developer encounters a problem where objects are disappearing or being clipped too soon, it may be due to differences in the depth range. WebGL and WebGPU define the depth range of the clip space differently. While WebGL uses a range from minus one to one, WebGPU uses a range from zero to one, which matches other graphics APIs like Direct3D 12, Metal, and Vulkan. This decision was made based on the advantages of the zero-to-one range that were discovered while working with those other graphics APIs.
So the projection matrix is primarily responsible for transforming the positions of your model into clip space, and one useful way to adjust your code is to ensure that the projection matrix generates depth outputs ranging from zero to one. This can be achieved with functions available in libraries like glMatrix, such as the perspectiveZO function; other matrix libraries offer comparable functions. And in the event that you are working with an existing projection matrix that cannot be modified, there is still a solution: you can transform it to fit the zero-to-one range by pre-multiplying it with another matrix that remaps the depth range. This pre-multiplication technique can be an effective way to adjust the range of your projection matrix to fit your needs.
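Both options sketched with glMatrix; fovY, aspect, near, far, and existingProjection are placeholders:

```js
import { mat4 } from 'gl-matrix';

// Option 1: generate a zero-to-one depth projection directly.
const projection = mat4.create();
mat4.perspectiveZO(projection, fovY, aspect, near, far);

// Option 2: remap an existing [-1, 1] projection by pre-multiplying
// with a matrix that maps depth as z' = 0.5 * z + 0.5 (column-major).
const depthRemap = mat4.fromValues(
  1, 0, 0,   0,
  0, 1, 0,   0,
  0, 0, 0.5, 0,
  0, 0, 0.5, 1,
);
mat4.multiply(projection, depthRemap, existingProjection);
```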
So, as you can see, transitioning to WebGPU is more than just switching graphics APIs. It's a step towards the future of web graphics, combining successful features and practices from various graphics APIs.