HTML5 games have come a long way, leveraging the power of modern web technologies to deliver impressive experiences. Despite advancements in hardware and software, optimization remains crucial. This article delves into the key areas of optimization for HTML5 games, drawing on insights from over a decade of experience with the PlayCanvas game engine.
The focus lies on two main areas: load times and frame rate optimization. Each plays a significant role in ensuring games are accessible and enjoyable for players across various devices.
Understanding PlayCanvas and Its Impact
PlayCanvas is an open-source game engine built with JavaScript and WebGL. It offers a cloud-based, real-time collaborative visual editor, making game development more accessible. PlayCanvas powers Snap Games on Snapchat and is popular among developers worldwide for creating diverse game genres, from casual games to first-person shooters.
PlayCanvas's usage by major platforms underscores its versatility and capability in handling HTML5 games. Its wide adoption highlights the importance of optimizing games for a global audience.
The Importance of Load Time Optimization
Load times are a critical factor in retaining players. Research shows that after six seconds of waiting, 40% of users may abandon a game. Thus, minimizing load times is essential to maintain player engagement.
Tools like Chrome DevTools offer valuable insights into resource loading. By analyzing the Network and Performance tabs, developers can identify large files and redundant resources, focusing on optimizing the largest assets first.
Texture data often constitutes the bulk of game resources. Optimizing these can significantly reduce load times. Techniques such as hardware texture compression and using formats like DXT, PVR, and ETC can mitigate memory usage and decode costs.
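As a rough illustration (not PlayCanvas-specific code), a game can query WebGL at startup to see which compressed texture formats the device actually supports and request its assets accordingly:

```js
// Minimal sketch: pick a compressed texture format based on the WebGL
// extensions the device exposes. Falls back to uncompressed images.
const gl = document.createElement('canvas').getContext('webgl');

function pickCompressedFormat(gl) {
  // DXT (desktop GPUs), PVRTC (older iOS GPUs), ETC1 (many Android GPUs)
  if (gl.getExtension('WEBGL_compressed_texture_s3tc')) return 'dxt';
  if (gl.getExtension('WEBGL_compressed_texture_pvrtc')) return 'pvr';
  if (gl.getExtension('WEBGL_compressed_texture_etc1')) return 'etc1';
  return 'png'; // no hardware compression available
}

console.log('Preferred texture format:', pickCompressedFormat(gl));
```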
Utilizing Basis for Texture Compression
Basis is a texture format that reduces file size while retaining the benefits of hardware texture compression. It transcodes textures to the device's supported compressed formats at runtime, offering a solution to large image downloads.
In the PlayCanvas Editor, compressing textures to Basis is straightforward, enabling developers to efficiently manage texture data and improve load times without sacrificing quality.
Mesh Data Optimization
Mesh data is another significant contributor to download size, especially in 3D games. JSON is a common format for mesh data, but it can be large and slow to parse. GLTF, often described as the JPEG of 3D, offers a more efficient alternative.
GLTF stores data in a GPU-ready format, reducing parse times and memory usage. Further compression with Draco can decrease mesh sizes dramatically, although decompression at load time is required.
Server-Side Compression Techniques
Compressing game resources on the server can also enhance load times. Ensuring that resources are served with gzip or Brotli compression is crucial. The method varies by provider, but the benefits are universal.
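As a quick sanity check, the response headers can be inspected at runtime. The DevTools Network panel remains the authoritative place to confirm this, since some setups do not expose the header to scripts; the following is only an illustrative sketch:

```js
// Check whether a hosted asset is served with gzip or Brotli compression.
// Note: in some browser/CORS configurations this header is not visible to
// scripts, so also verify in the DevTools Network panel.
async function checkCompression(url) {
  const response = await fetch(url);
  const encoding = response.headers.get('content-encoding'); // e.g. 'gzip' or 'br'
  console.log(`${url} -> content-encoding: ${encoding || 'none reported'}`);
  return encoding;
}

checkCompression('assets/level1.json'); // hypothetical asset path
```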
Applying these techniques to games like Swoop shows tangible improvements in load times, retaining a larger portion of the audience by simply optimizing media formats and data handling.
Design Considerations for Load Time Reduction
Game design can influence load times significantly. Techniques such as asynchronous loading during gameplay transitions, as seen in games like Metroid Prime, can create a seamless experience without loading bars.
Integrating minimal asset sets for initial menus allows for background streaming of game assets, enhancing user experience by reducing perceived load times.
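A minimal sketch of this pattern, with hypothetical asset names and helper functions (showMenu, onPlayPressed, startGame) standing in for real engine calls:

```js
// Load only what the menu needs up front, then stream gameplay assets in the
// background while the player is still on the menu.
const menuAssets = ['menu-atlas.basis', 'ui-font.json'];        // hypothetical
const gameplayAssets = ['level1.glb', 'character.glb', 'music.mp3'];

async function loadAsset(url) {
  const res = await fetch(url);
  return res.arrayBuffer(); // real code would hand this to the engine
}

async function start() {
  await Promise.all(menuAssets.map(loadAsset)); // block only on the menu set
  showMenu();

  // Fire and forget: gameplay data streams while the user reads the menu
  const gameplayReady = Promise.all(gameplayAssets.map(loadAsset));

  onPlayPressed(async () => {
    await gameplayReady; // usually already resolved by now
    startGame();
  });
}
```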
Frame Rate Optimization for Diverse Hardware
With the disparity in mobile hardware performance, frame rate optimization ensures games run smoothly across devices. Tools like hierarchical profilers in Chrome DevTools help identify CPU-intensive hotspots.
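One simple, engine-agnostic way to make those hotspots visible is the User Timing API: marks and measures recorded this way show up in the Chrome DevTools Performance panel alongside the call tree. An illustrative sketch:

```js
// Wrap a suspect section of the frame in User Timing marks so it appears as
// a named span in the DevTools Performance recording.
function update() {
  performance.mark('update-start');
  // ... game logic for this frame ...
  performance.mark('update-end');
  performance.measure('update', 'update-start', 'update-end');
}
```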
Rendering performance can be assessed using extensions like Spector.js, which provides a detailed view of WebGL activity, helping developers pinpoint areas for improvement.
Resolution and Graphical Complexity
Choosing the right resolution is crucial. High resolutions may not be suitable for devices with weaker GPUs. Offering users resolution options or detecting GPU types can help tailor the experience.
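A hedged sketch of resolution scaling; the lowEndGpu flag is a placeholder for whatever detection or user-facing quality setting you use:

```js
// Cap the backing-store resolution on weaker devices while keeping the
// canvas at the same CSS size, trading sharpness for fill-rate headroom.
function resizeCanvas(canvas, lowEndGpu) {
  const maxPixelRatio = lowEndGpu ? 1 : window.devicePixelRatio;
  canvas.width = Math.floor(canvas.clientWidth * maxPixelRatio);
  canvas.height = Math.floor(canvas.clientHeight * maxPixelRatio);
}
```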
Limiting graphical complexity by using simpler shaders and fewer dynamic effects can enhance performance, allowing games to run at higher frame rates on low-end devices.
Optimizing Draw Calls and Batching
Draw calls represent a performance bottleneck in rendering. Techniques such as texture atlasing and batching can consolidate these calls, reducing CPU and GPU overhead.
By minimizing draw calls, developers can achieve smoother gameplay, particularly on devices with limited resources.
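As an illustration of why atlasing helps, sprites that share one atlas texture can be submitted in a single draw call; the only bookkeeping is mapping each sprite's pixel rectangle to UV coordinates. A hypothetical helper:

```js
// Convert a sprite's pixel rectangle within an atlas into the UV coordinates
// a batched vertex buffer would use (the quad's four corners).
function atlasFrameToUVs(frame, atlasWidth, atlasHeight) {
  const u0 = frame.x / atlasWidth;
  const v0 = frame.y / atlasHeight;
  const u1 = (frame.x + frame.w) / atlasWidth;
  const v1 = (frame.y + frame.h) / atlasHeight;
  return [u0, v0, u1, v0, u1, v1, u0, v1];
}

// Example: a 64x64 coin sprite at (128, 0) in a 512x512 atlas
console.log(atlasFrameToUVs({ x: 128, y: 0, w: 64, h: 64 }, 512, 512));
```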
Key Takeaways for Game Optimization
Game optimization should begin early in the development process, considering performance at every stage. It's essential to test on baseline devices throughout development to ensure broader compatibility.
By applying these techniques, developers can create HTML5 games that load quickly and run smoothly, enhancing user experience and reaching a wider audience.
The open source PlayCanvas game engine is built specifically for the browser, incorporating 10 years of learnings about optimization. In this talk, you will discover the secret sauce that enables PlayCanvas to generate games with lightning fast load times and rock solid frame rates.
This talk was presented at JS GameDev Summit 2022. Check out the latest edition of this JavaScript conference.
PlayCanvas powers Snap Games on Snapchat, which has had over 200 million players. It is also used by game developers worldwide to create various types of games, including casual games, .io games, and FPS games.
PlayCanvas is an open-source game engine written in JavaScript and based on WebGL. It includes a browser-based, real-time collaborative visual editor built in the cloud.
Techniques to improve frame rates include using hierarchical profiling, adjusting rendering resolution based on device GPU, limiting graphical complexity, and reducing the number of draw calls through texture atlasing and batching.
Chrome DevTools provides advanced tools for optimizing load times, including the Network and Performance tabs. These tools help identify large resources and redundant data that can be optimized.
Testing on different devices ensures that the game performs well across a range of hardware, from high-end to budget devices. This helps in identifying and fixing performance issues early in the development cycle.
A common mistake is rendering games at full device resolution without considering the device's GPU capabilities. Using overly complex shaders and not optimizing draw calls can also negatively impact performance.
Performance optimization is crucial for HTML5 games to ensure fast load times and smooth frame rates. Long load times can cause users to leave, and poor frame rates can affect the gameplay experience.
Texture data can be optimized using hardware texture compression formats like DXT, PVR, and ETC. Additionally, the Basis texture format can be used to compress textures further and reduce GPU memory usage.
GLTF is designed to be the JPEG of 3D, providing a standard format for 3D models. It reduces file size, improves parse times, and lowers peak memory usage compared to JSON, making it ideal for HTML5 games.
You can use user agent sniffing and properties like window width and device pixel ratio to infer device capabilities. For Android, Chrome reports the GPU family, allowing for more precise optimizations based on hardware.
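A sketch of this kind of detection, using the real WEBGL_debug_renderer_info extension; the renderer strings it returns vary by browser and platform, so any parsing of them should be treated as heuristic:

```js
// Gather rough device capability hints: GPU renderer string, pixel ratio,
// viewport width, and a simple user agent check.
function getGpuInfo() {
  const gl = document.createElement('canvas').getContext('webgl');
  const ext = gl && gl.getExtension('WEBGL_debug_renderer_info');
  const renderer = ext ? gl.getParameter(ext.UNMASKED_RENDERER_WEBGL) : 'unknown';
  return {
    renderer,                                 // e.g. an Adreno or Mali family string
    pixelRatio: window.devicePixelRatio,
    width: window.innerWidth,
    isAndroid: /Android/i.test(navigator.userAgent)
  };
}

console.log(getGpuInfo());
```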
PlayCanvas is an open-source game engine used by game developers worldwide. Optimization is crucial for HTML5 games, focusing on load times and frame rate. Texture and mesh optimization can significantly reduce download sizes. GLTF and GLB formats offer smaller file sizes and faster parsing times. Compressing game resources and using efficient file formats can improve load times. Frame rate optimization and resolution scaling are important for better performance. Managing draw calls and using batching techniques can optimize performance. Browser DevTools, such as those in Chrome and Firefox, are useful for debugging and profiling. Detecting device performance and optimizing based on specific devices can improve game performance. Apple is making progress with its WebGPU implementation. HTML5 games can be shipped to the App Store using Cordova.
1. Introduction to PlayCanvas and Game Optimization
Hi, my name's Will Eastcott. I'm the creator of PlayCanvas. Today, I'll be talking about optimizing HTML5 games based on 10 years of experience with the PlayCanvas game engine. PlayCanvas is an open-source game engine written in JavaScript and based on WebGL. It includes a browser-based visual editor for real-time collaborative game development. It powers Snap Games on Snapchat and is used by game developers worldwide for various types of games. My journey in game optimization began with RenderWare, a game engine used in the PlayStation 2 generation. HTML5 game developers now have powerful hardware and built-in tools, but optimization is still crucial, focusing on load times and frame rate optimization.
2. Investigating Load Times and Texture Optimization
So why does it matter whether we present our users with loading bars? After 6 seconds of waiting, we tend to lose 40% of our audience. To investigate load times, we can use advanced tools built into the browser, such as Chrome DevTools. By sorting resources based on size, we can identify opportunities for optimization. In HTML5 games, most of the data is texture-based, and large images can cause crashes and long load times. However, hardware texture compression can help by reducing GPU memory usage and eliminating decode costs.
3. Texture and Mesh Optimization
Fortunately, there's another texture format called Basis, which compresses the original JPEG to a smaller size while maintaining the benefits of native compressed formats. Compressing textures to Basis in the PlayCanvas Editor is a simple process. Mesh data is another significant contributor to download size, especially in 3D games. JSON is a commonly used format for loading mesh data, but it can result in large file sizes and slow parsing times. GLTF, an open standard owned by the Khronos Group, offers a solution to this problem and has a thriving ecosystem.
4. GLTF and GLB Optimization
We now use GLTF as the primary format for the engine. GLB, the binary form of GLTF, is significantly smaller than JSON and has a parse time of just 50 milliseconds. The GLTF format stores data in a GPU-ready layout, allowing direct parsing to WebGL with no processing. Compressing the GLB file using Draco technology from Google reduces its size to 1.84 megabytes, with a decompression time of 0.4 seconds. Offloading decompression to a Web Worker thread can further optimize performance.
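To illustrate why GLB parses so quickly, the container is just a 12-byte header followed by a JSON chunk and a raw binary chunk. A minimal reader (not the PlayCanvas loader) might look like this:

```js
// Read the GLB header and its first (JSON) chunk, per the glTF 2.0 spec.
// The binary chunk that follows can be handed to WebGL almost as-is.
function parseGlbHeader(arrayBuffer) {
  const dv = new DataView(arrayBuffer);
  const magic = dv.getUint32(0, true);            // 0x46546C67 == 'glTF'
  if (magic !== 0x46546c67) throw new Error('Not a GLB file');
  const version = dv.getUint32(4, true);          // should be 2
  const totalLength = dv.getUint32(8, true);
  const jsonChunkLength = dv.getUint32(12, true);
  const jsonChunkType = dv.getUint32(16, true);   // 0x4E4F534A == 'JSON'
  if (jsonChunkType !== 0x4e4f534a) throw new Error('Expected JSON chunk');
  const json = JSON.parse(new TextDecoder().decode(
    new Uint8Array(arrayBuffer, 20, jsonChunkLength)));
  return { version, totalLength, json };
}
```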
5. Game Resource Compression
It's important to compress game resources to improve load times. Verify that your server serves compressed resources by checking the Content-Encoding header. The compression technique will vary depending on your back-end services provider. For example, with Google Cloud, you can use the gsutil tool to specify which file types should be gzipped.
6. Optimizing Load Times and Game Design
Let's apply these techniques to the game Swoop. By converting JPEGs to Basis and using GLB instead of JSON, we can reduce load time by a second. Unloading and asynchronously loading areas can create a seamless environment without loading bars. Bitmoji Party uses this technique to load assets while the user selects a gameplay option.
7. Framerate Optimization and Resolution
Let's talk about framerate optimization and why it's important to scale your game from high-end to budget devices. Investigate framerate using the Performance tab and hierarchical profiler. Focus on hotspots in the render function for performance gains. Use tools like Spector.js to capture and analyze rendering frames. Choose the right resolution based on device capabilities and limit graphical complexity for better performance.
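Before reaching for a full profiler, a rolling average of requestAnimationFrame deltas is often enough to see whether a device is holding its frame budget. An illustrative sketch:

```js
// Track a smoothed frame time; 60 fps corresponds to roughly 16.7 ms/frame.
let last = performance.now();
let smoothed = 16.7;

function tick(now) {
  const dt = now - last;
  last = now;
  smoothed = smoothed * 0.95 + dt * 0.05; // exponential moving average
  if (smoothed > 20) {
    // Over budget: a good moment to drop resolution or effect quality
  }
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```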
8. Optimizing Draw Calls and Performance
The complexity of shaders affects GPU strain and frame rates. Carefully manage draw calls to minimize CPU and GPU costs. Techniques like texture atlasing and batching can optimize draw calls and reduce processing overhead. Three key pieces of advice: optimize early, design for performance, and test on your baseline device.
9. Browser Usage and DevTools
Let's start by taking a look at the poll results. There's an overwhelming majority of Chrome users, which is not surprising. I primarily use Chrome, but I also test the engine on other browsers. Firefox's DevTools have caught up in recent years and are worth looking at. The CPU profiler is particularly useful. Opera shares Chrome DevTools, but I'm not sure if they have their own engine. Safari's DevTools are great for iOS debugging, allowing remote debugging on a connected phone. However, there are limitations when connecting to signed and production apps. The experience of connecting to Chrome on Android is also smooth.
QnA
Detecting Device Performance and Optimizations
Dan asks about tools for detecting device performance. PlayCanvas has a mini-stats profiler that shows CPU and GPU utilization, as well as draw calls. GPU profiling is difficult on mobile, but a WebGL extension called disjoint timer query can provide accurate timings. Mark asks about optimizing games based on specific devices. Detecting the device in the browser can be limited, but user agent sniffing and window properties can provide clues. On Android, Chrome reports the GPU family, allowing for specific optimizations. Many game developers use if statements to target older GPUs and limit rendering complexity.
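A hedged sketch of the disjoint timer query extension (WebGL 2 variant; availability varies widely across browsers and GPUs); gl and renderScene are assumed to exist:

```js
// Measure GPU time for a frame's draw calls using EXT_disjoint_timer_query_webgl2.
// Results arrive asynchronously, so issue a query on one frame and poll later.
const ext = gl.getExtension('EXT_disjoint_timer_query_webgl2');
let pending = null;

function drawFrameWithGpuTiming() {
  const startTiming = ext && !pending;
  if (startTiming) {
    pending = gl.createQuery();
    gl.beginQuery(ext.TIME_ELAPSED_EXT, pending);
  }

  renderScene(); // your normal draw calls

  if (startTiming) {
    gl.endQuery(ext.TIME_ELAPSED_EXT);
  } else if (pending) {
    const ready = gl.getQueryParameter(pending, gl.QUERY_RESULT_AVAILABLE);
    const disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
    if (ready) {
      if (!disjoint) {
        const ns = gl.getQueryParameter(pending, gl.QUERY_RESULT); // nanoseconds
        console.log('GPU frame time:', (ns / 1e6).toFixed(2), 'ms');
      }
      gl.deleteQuery(pending);
      pending = null;
    }
  }
}
```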
Micro-optimization and Apple's Progress
It's sad that we have to do micro-optimization for web games based on device differences. Apple has made progress with WebGPU implementation, supporting WebGL 2 and WebXR. They're catching up rapidly, and things will be exciting with WebGPU. HTML5 games can be shipped to the App Store using Cordova, which is easy and quick.