The Future of Numerical Computing in JavaScript


JavaScript is evolving beyond web development and into the realm of high-performance numerical computing. In this talk, we’ll explore the current landscape of scientific computing in JavaScript, compare its performance with Python and C, and discuss what’s missing. Through live demos – including AI, real-time data processing, and mathematical computing – we’ll see how libraries like stdlib and a few others are shaping the future.

This talk was presented at JSNation 2025.

FAQ

Why perform numerical computing in the browser?
Performing numerical computing tasks inside a web browser offers instant access without needing to install additional software, ensures privacy and security by keeping data on the client's machine, provides low latency by eliminating server processing delays, and increases accessibility for users without high-end machines.

What does TensorFlow.js enable?
TensorFlow.js allows machine learning tasks to be performed entirely in the browser, from data pre-processing to inference, without involving any server-side processing.

What is Tesseract.js?
Tesseract.js is a popular tool used for OCR in JavaScript. It is a JavaScript port of the Tesseract OCR engine and can extract text from images entirely within the browser.

What role does WebAssembly play?
WebAssembly allows performance-heavy applications to run at near-native speeds in web browsers by compiling code from languages like C or C++ into a binary format. It supports SIMD instructions, enabling efficient computations.

What is JavaScript missing for scientific computing?
JavaScript lacks native support for data structures like ndarrays and data frames, which are essential for scientific computing. It also faces challenges in providing consistent performance and ease of use compared to languages like Python.

Which Python libraries serve as models?
Python libraries like NumPy for numerical operations, pandas for data manipulation, and scikit-learn for machine learning serve as models for JavaScript libraries that aim to provide similar capabilities in the web environment.

Why does vectorization matter?
Vectorization allows operations to be applied to entire arrays rather than individual elements, significantly improving performance and efficiency, especially with large datasets.

How does stdlib compare with Pyodide?
stdlib generally outperforms Pyodide, especially for smaller datasets, due to its efficient use of ndarrays, universal functions, and optimized performance without the overhead of loading Python libraries.

What does WebGazer do?
WebGazer uses the webcam to estimate where a user is looking on the screen, enabling interactive applications that respond to eye movements directly in the browser.

What does stdlib provide?
stdlib provides a common foundation with standardized APIs for building domain-specific libraries, supports ndarrays, offers functions for linear algebra and statistical analysis, and facilitates fast numerical computations through vectorization.

Gunj Joshi
21 min
16 Jun, 2025

Video Summary and Transcription
Gunj Joshi explores the future of numerical computing in JavaScript, showcasing benefits like privacy, low latency, and accessibility. The talk delves into training models inside browsers, challenges with TensorFlow.js, and the importance of syntactical niceties for readability. It highlights the significance of Jupyter Notebooks, Observable for live coding, and the impact of WebAssembly on JavaScript's performance. The discussion emphasizes NumPy's superiority in numerical applications, the importance of vectorization, and the potential of JavaScript's ecosystem for scientific applications. Additionally, it covers the efficiency of vectorized data processing, performance differences in data processing approaches, and the high performance of WebAssembly in web browsers.

1. Exploring Numerical Computing in JavaScript

Short description:

Gunj Joshi discusses the future of numerical computing in JavaScript, emphasizing benefits like privacy, low latency, and accessibility. Demonstrations include a digit classifier, a toxicity detection platform, OCR, and a visualization of MNIST model training.

Hi everyone, I'm Gunj Joshi, and today I'll be talking about the future of numerical computing in JavaScript. This is a space that has been dominated by languages such as Python and R, but things have started to change. As a core contributor to stdlib, I've had a front-row seat to some exciting developments, and today we'll explore what's possible, what's hard, and where we're headed.

So let's get started with the why. Why should we even bother to do heavy numerical tasks inside the browser? One, instant access. No need to install Python or set up any backend; you just open a tab and start computing. Two, privacy and security. The data stays on the client's machine, which is crucial for things such as medical information, personal finance, or anything sensitive in general. Three, low latency. No more waiting for a server to process your tasks; everything happens right away. And four, accessibility. This lowers the barrier to entry for people who don't have high-end machines or can't install complex software.

So you might be wondering what this looks like in practice. I've got some cool demos to show it. The first demo is a digit classifier, your classic handwritten digit recognition, powered by TensorFlow.js. The beauty of this is that everything, from data pre-processing to inference, runs fully inside your browser, with no server involved. The next example is a toxicity detection platform. It takes a block of text, like an abusive tweet or a comment on a social media platform, and flags it as either toxic or not toxic right away in your web browser. Suppose we have a toxic comment here and we want to classify it: it comes out as true. After that, we have an OCR demo. Here we have Tesseract.js, which is the JavaScript port of the popular OCR engine. It takes an image and extracts the text out of it. After that, we have the MNIST example, which visualizes model training right inside your browser. So yes, you can actually watch the model being trained right inside your browser.
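As a rough sketch of how a toxicity check like the one demoed above can run fully client-side, the TensorFlow.js toxicity model can be loaded and queried in a few lines; the threshold value and the sample sentence below are illustrative, not taken from the talk.

```js
// Sketch: in-browser toxicity classification with TensorFlow.js
// (assumes a bundler and the @tensorflow/tfjs and @tensorflow-models/toxicity packages).
import '@tensorflow/tfjs';
import * as toxicity from '@tensorflow-models/toxicity';

const threshold = 0.9; // minimum confidence for a label to be reported

toxicity.load( threshold )
  .then( ( model ) => model.classify( [ 'You are a terrible person.' ] ) )
  .then( ( predictions ) => {
    // Each prediction has a label (e.g. 'toxicity') and per-sentence results.
    predictions.forEach( ( p ) => console.log( p.label, p.results[ 0 ].match ) );
  });
```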

2. Building Advanced Browser Applications

Short description:

Training inside the browser, accessibility demos like WebGazer and Holobooth. Challenges: TensorFlow.js specialization, npm grab bag issues, and the importance of syntactical niceties for readability and maintainability.

When we start the training you can see the training is happening right inside your web browser rather than on a backend. And after the training is done, you can see the custom training charts: accuracy, confusion matrix, and so on.

Next we have demos of some accessibility applications, the first of which is WebGazer. It uses your webcam to estimate where you are actually looking on the screen. So wherever my eyes move, the red pointer moves in that direction. Next in our lineup we have Holobooth, which is a web-based AR filter. Wherever I move my face or blink my eyes, it follows along. And all of this also happens inside your web browser.

We have seen what's possible. Now here comes the big question: how do you actually build something like this? Let's figure this out together. Our first stop is TensorFlow.js. But it is highly specialized for machine learning, with little to no support for general numerical computing, linear algebra, statistics, or other scientific computing tasks. If you look at its development pace, it is slowing down; even its GitHub issues and pull requests suggest that Google's focus might have shifted elsewhere. Our second stop is an npm grab bag. Here what we do is grab some packages, such as mathjs for mathematical functions, simple-statistics for stats, ml.js for machine learning, random-js to generate some random numbers. Pretty soon you are juggling five or six different libraries, each with its own API style, its own quirks, and its own bugs. It's fragile, inconsistent, and it gets worse as your project grows. It shouldn't be this hard, right? Alright, let's pause for a second and zoom out. Let's take a step back and ask: what are the essential ingredients of a scientific computing system, and why is it currently so hard in JavaScript? First up on our ingredient list are syntactical niceties. These might seem cosmetic at first, but they have a big impact on how readable and maintainable your code is. Take R's pipeline operators, for example.

3. Enhancing JavaScript Computational Capabilities

Short description:

Comparing the nested function call style with pipelines, and the importance of Jupyter Notebooks in interactive computing. JavaScript's Observable for live coding. Hardware-optimized libraries like BLAS and LAPACK, and the impact of WebAssembly on JavaScript's performance level.

On the left we have a classic nested function call style, which is pretty dense. But on the right you can see how much clearer it becomes when we use the pipeline: each operation is chained step by step, just like reading plain English.
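To make that concrete in JavaScript terms, here is a minimal sketch contrasting the two styles. JavaScript does not yet have a built-in pipeline operator (it is still a TC39 proposal), so a small pipe() helper stands in for it; all names here are illustrative.

```js
const square = ( x ) => x * x;
const mean = ( xs ) => xs.reduce( ( a, b ) => a + b, 0 ) / xs.length;
const sqrt = Math.sqrt;

const data = [ 1, 2, 3, 4, 5 ];

// Nested style: read inside-out.
const rms1 = sqrt( mean( data.map( square ) ) );

// Pipeline style: read top-to-bottom, one step at a time.
const pipe = ( x, ...fns ) => fns.reduce( ( acc, fn ) => fn( acc ), x );
const rms2 = pipe( data, ( xs ) => xs.map( square ), mean, sqrt );
```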

Next on our list come interactive notebooks. In Python, Jupyter Notebooks have become the gold standard for interactive computing, mixing code, plots, and text all in one place. In JavaScript we have Observable, which brings the same live, reactive coding experience right inside your web browser.

Moving on to something under the hood but super important: bindings to hardware-optimized libraries. Libraries such as BLAS and LAPACK power everything from matrix multiplication and QR decomposition to SVD and solving linear equations. These libraries date back to the 1970s and were originally written in Fortran, and most tools we use today, such as NumPy, MATLAB, R, and Julia, are essentially wrappers around them. JavaScript is now catching up thanks to WebAssembly, which lets us tap into the same performance level right inside the web browser.

4. Exploring Data Handling and Vectorization

Short description:

The importance of NumPy's ndarrays in numerical applications. JavaScript's lack of a native equivalent. Pandas data frames for tabular data manipulation, Danfo.js in JavaScript. Vectorization's impact on scientific computing and performance.

Next up we have NumPy's ndarrays, which are the backbone of almost every numerical application. They allow you to handle multi-dimensional data efficiently, with operations like slicing, reshaping, and broadcasting baked in. In JavaScript we lack a native equivalent; there are libraries trying to fill the gap, but it's an area still under development.
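As a small sketch of what an ndarray looks like in JavaScript today, stdlib exposes one; the package path and output values below are my best understanding and should be treated as an assumption rather than code from the talk.

```js
// Sketch: creating and inspecting a 2x2 ndarray
// (assumes the @stdlib/ndarray/array package).
var array = require( '@stdlib/ndarray/array' );

var x = array( [ [ 1.0, 2.0 ], [ 3.0, 4.0 ] ] );

console.log( x.shape );       // => [ 2, 2 ]
console.log( x.get( 1, 0 ) ); // => 3.0

x.set( 0, 1, 99.0 );          // element assignment
```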

Beyond arrays we also need data frames. Data frames let you work with tabular data easily and support filtering, aggregation, and joining datasets. In Python this is handled by pandas, which is built on top of NumPy's ndarrays. JavaScript has projects like Danfo.js, which aim to build similar capabilities in the browser and Node.js, but the tooling and ecosystem around them are not yet as mature, mostly because we still lack efficient ndarray infrastructure underneath.
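For a flavor of what a data frame looks like in JavaScript, here is a rough Danfo.js sketch; the danfojs-node package name and the exact API may vary by version, so treat this as an assumption.

```js
// Sketch: a small Danfo.js data frame in Node.js.
const dfd = require( 'danfojs-node' );

const df = new dfd.DataFrame({
  city: [ 'Delhi', 'Mumbai', 'Pune' ],
  lat: [ 28.61, 19.07, 18.52 ],
  lon: [ 77.21, 72.88, 73.86 ]
});

df.head().print(); // tabular preview, similar to pandas' df.head()
```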

Now let's talk about one of the biggest performance levers in scientific computing: vectorization. This is what it looks like without vectorization. Here we are using a simple list comprehension in Python to square a list of numbers. It works, but under the hood this is just looping through each element one by one. It is readable, but when you're working with large datasets this approach bogs down very fast.

5. Optimizing Data Processing with Vectorization

Short description:

Vectorized approach using NumPy for efficient data handling and performance. Real-world example of calculating distances using latitude and longitude. Generating large datasets with pandas and utilizing vectorization for scalability.

Now check out the vectorized version. Using NumPy we convert the list into an ndarray and apply the squaring operation directly to the entire array. There's no explicit loop here; it's all happening at the C level under the hood, making it much faster.

Here's a more real-world example: a function to compute the distance between two points on the Earth's surface using latitude and longitude. This is based on the Haversine formula, and we're using NumPy functions to vectorize the entire calculation. Vectorization lets us run the function over entire arrays of coordinates, not just individual points, which is key when working with geospatial datasets.
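For readers following along in JavaScript, here is a minimal sketch of the same Haversine calculation; plain JavaScript has no NumPy-style array arithmetic, so the array version below loops explicitly. All names are illustrative.

```js
// Haversine distance between two points, in kilometers.
function haversine( lat1, lon1, lat2, lon2 ) {
  var R = 6371.0; // mean Earth radius (km)
  var toRad = Math.PI / 180.0;
  var dLat = ( lat2 - lat1 ) * toRad;
  var dLon = ( lon2 - lon1 ) * toRad;
  var a = Math.sin( dLat / 2.0 ) ** 2 +
          Math.cos( lat1 * toRad ) * Math.cos( lat2 * toRad ) * Math.sin( dLon / 2.0 ) ** 2;
  return 2.0 * R * Math.asin( Math.sqrt( a ) );
}

// Distances from a fixed point to every coordinate pair in two typed arrays.
function haversineMany( lats, lons, lat0, lon0 ) {
  var out = new Float64Array( lats.length );
  for ( var i = 0; i < lats.length; i++ ) {
    out[ i ] = haversine( lats[ i ], lons[ i ], lat0, lon0 );
  }
  return out;
}
```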

Of course, to really test performance we need large datasets. Here's a method to generate a pandas data frame filled with 100,000 random coordinates, which gives us a solid dataset for testing. Notice how we're using vectorized random number generation to quickly fill entire columns; it is efficient and it scales well. Now imagine you want to calculate distances from a fixed point to every row in that big data frame.

6. Comparing Data Processing Approaches

Short description:

Comparing non-vectorized and vectorized approaches for data processing speed. Demonstrating the performance difference between the two methods. The importance of specialized libraries like scikit-learn for machine learning in Python.

Here's the non-vectorized approach, using a basic for loop to iterate through each row. It gets the job done, but it's painfully slow for large datasets: each iteration is handled in Python space, and the performance penalty adds up quickly.

Here's the vectorized version of the same function we saw earlier. Notice how we pass entire arrays of latitude and longitude values from the data frame directly into the function. Thanks to NumPy and pandas under the hood, this runs way faster than looping. And the performance difference is very real.

Here's a benchmark showing the different methods. As the number of rows grows, you can see how the vectorized approach stays flat while the others, especially the for loop, blow up. Next in the ecosystem: specialized, domain-specific libraries. One of the most famous is scikit-learn, which makes machine learning in Python super approachable.

7. Exploring Numerical Computing Ecosystem

Short description:

Setting up linear regression models with scikit-learn and statistical analysis with statsmodels. Key components of a mature numerical computing ecosystem. JavaScript's potential to meet that bar and the emerging ecosystem's impact on web-based scientific applications.

This example shows how easy it is to set up a linear regression model and make predictions with just a few lines of code. Another critical piece is statsmodels, which focuses on statistical analysis. Here we're using a simple ordinary least squares regression and printing a full statistical summary of the model. So let's take a moment to recap what we have covered so far. These are the key ingredients that make up a mature numerical computing ecosystem.

First, syntactical niceties: things like pipeline operators and expressive syntax that make the code readable. Then hardware-optimized bindings: BLAS and LAPACK under the hood to make sure performance scales. Then data structures like ndarrays and data frames, which form the foundation for serious numerical work. Then vectorization, because for loops don't scale. And finally specialized libraries such as scikit-learn and statsmodels to help you build real workhorses, not just toy examples.

Now the question is: can JavaScript meet this bar? Pause and wonder for a second. The good news is we are seeing an emerging ecosystem. Individually these pieces are strong, but the challenge is still bringing them together into a seamless experience, and that's exactly the gap that projects like stdlib are aiming to fill. What's really exciting about this moment in time is that I think we are on the cusp of a new and burgeoning ecosystem for web-based scientific computing, an ecosystem which will power and supercharge the web applications of the future.

8. Introducing Standard Lib and Performance Benefits

Short description:

Spotlight on stdlib as a key project in the emerging ecosystem. Fancy indexing, BLAS operations for linear algebra and statistical analysis. Performance benefits of vectorization in stdlib.

Now it's time to put a spotlight on a key project which is driving this emerging ecosystem forward, and that project is stdlib. stdlib provides a common foundation with standardized APIs atop which others can build domain-specific libraries. Now let's revisit the ingredient list that we discussed and see if stdlib has everything covered.

First, fancy indexing, and by fancy indexing we mean Python-like indexing, in which you can create and set arbitrary slices of array data and apply things like Boolean and integer array masks for filtering and data extraction. Here we first import the standalone package, then we initialize our array x, apply the array2fancy method on that array, then we can do slice retrieval and slice assignment, and finally get the result. After that we have BLAS, which is a collection of fundamental operations used for linear algebra, statistical analysis, image transformation, and machine learning as well.
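Before moving on to BLAS, here is a rough sketch of that fancy-indexing workflow; it assumes the @stdlib/array/to-fancy package, and the Python-like slice expressions shown are my understanding of its syntax rather than code taken from the talk.

```js
var array2fancy = require( '@stdlib/array/to-fancy' );

// Wrap a plain array so it supports slice expressions and masks.
var x = array2fancy( [ 1, 2, 3, 4, 5, 6 ] );

// Slice retrieval: every other element starting from index 1.
var y = x[ '1::2' ]; // => [ 2, 4, 6 ]

// Slice assignment: overwrite the first three elements.
x[ ':3' ] = 0; // => x becomes [ 0, 0, 0, 4, 5, 6 ]
```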

It includes operations for matrix multiplication, manipulation, scaling, and transformations, such as ddot, dswap, dasum, dgemm, and so on. In this code segment, we first import just the packages that we require, then initialize arrays x and y, set the constant alpha, and then use the daxpy operation. stdlib also has the linear algebra package, LAPACK. This is an example of the dlacpy routine, which is used to copy all of a matrix A, or only a part of it, to another matrix B. stdlib supports ndarrays too.
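For the BLAS piece, a minimal sketch of the daxpy call described above (y = alpha*x + y) might look like this, assuming the @stdlib/blas/base/daxpy and @stdlib/array/float64 packages.

```js
var Float64Array = require( '@stdlib/array/float64' );
var daxpy = require( '@stdlib/blas/base/daxpy' );

var x = new Float64Array( [ 1.0, 2.0, 3.0, 4.0, 5.0 ] );
var y = new Float64Array( [ 1.0, 1.0, 1.0, 1.0, 1.0 ] );
var alpha = 5.0;

// N elements, scalar alpha, input x with stride 1, output y with stride 1.
daxpy( x.length, alpha, x, 1, y, 1 );
// y => [ 6.0, 11.0, 16.0, 21.0, 26.0 ]
```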

9. Exploring WebAssembly Performance and Applications

Short description:

Setting data types and the performance benefits of vectorization. stdlib for data analysis and the PageRank algorithm. WebAssembly's high performance in web browsers and comparisons with JavaScript and C for computational routines.

As shown in the code segment, you can set the data type, buffer, shape, order (the array memory layout), strides, and offset. We talked earlier about the performance we gain by using vectorization instead of simple for loops. Similarly, stdlib's universal functions (ufuncs) also support vectorization. In this code block, we once again import only the required modules, initialize the array, and apply the absolute-value function over the entire array at once instead of looping through each element.
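As a sketch of constructing an ndarray from those pieces (data type, buffer, shape, strides, offset, and order), assuming the @stdlib/ndarray/ctor package:

```js
var ndarray = require( '@stdlib/ndarray/ctor' );
var Float64Array = require( '@stdlib/array/float64' );

var buffer = new Float64Array( [ 1.0, 2.0, 3.0, 4.0 ] );
var shape = [ 2, 2 ];
var strides = [ 2, 1 ]; // row-major strides
var offset = 0;

var x = ndarray( 'float64', buffer, shape, strides, offset, 'row-major' );

console.log( x.get( 1, 0 ) ); // => 3.0
```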

To help facilitate interactive data analysis, stdlib comes with its own REPL, preloaded with all of stdlib's functionality, making it easy to load in a dataset for analysis. This example demonstrates implementing the PageRank algorithm using stdlib functionality. The import statements may appear extensive, but they allow importing only the necessary packages, reducing bundle sizes. stdlib serves as a comprehensive solution for numerical computing, with advantages like reduced library juggling.

WebAssembly, a high-performance binary format running in web browsers at near-native speeds, supports SIMD instructions and enables performance-heavy applications written in languages like C, C++, or Rust. Entire game engines have been ported to WebAssembly for running games directly in web browsers. Notably, applications like Adobe Photoshop are now entirely web-based, enabling full image editing within web browsers. Comparisons between JavaScript, C, and WebAssembly for routines like daxpy show performance differences, with WebAssembly excelling in computation performance when data copying is minimized.

10. Analyzing WebAssembly Performance and Libraries

Short description:

Comparing JavaScript, C, and WebAssembly for the daxpy routine. JavaScript's performance and WebAssembly's advantages. stdlib's superiority in performance and loading time compared to alternatives.

One more such example is Adobe Photoshop, an entire image-editing application that now runs in the web browser, so you can do all your image-editing operations directly inside the browser.

Here we compare JavaScript against C and WebAssembly for the BLAS routine daxpy. daxpy multiplies a vector by a scalar and then adds the result to another vector.

In the plot I am showing two different WebAssembly measurements. The first one is WASM and the second one is WASM copy. For the first WebAssembly measurement, I'm measuring WebAssembly performance when performing the computation over vectors which are already allocated in the WebAssembly module heap.

In the second WebAssembly measurement, WASM copy, I'm measuring WebAssembly performance when you need to copy data to and from the WebAssembly module memory in addition to performing the BLAS operation.

So in this plot, as you might have expected, performing the computation in WebAssembly without needing to copy performs much better than when needing to copy, especially for routines such as daxpy where copying data adds significant overhead compared to the actual operation.
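To make the WASM-copy case concrete, here is a rough sketch of what the copying step looks like with an Emscripten-style module; the module object, its exported _daxpy function, and the heap views are assumptions for illustration, not stdlib's actual WebAssembly API.

```js
// Hypothetical Emscripten-compiled module exposing a daxpy kernel.
// Every call pays for allocating Wasm memory and copying data in and out.
function daxpyViaWasm( Module, alpha, x, y ) {
  var nbytes = x.length * Float64Array.BYTES_PER_ELEMENT;

  // Allocate space on the Wasm heap and copy the JS typed arrays in.
  var xPtr = Module._malloc( nbytes );
  var yPtr = Module._malloc( nbytes );
  Module.HEAPF64.set( x, xPtr / 8 );
  Module.HEAPF64.set( y, yPtr / 8 );

  // Run the compiled routine on data already inside Wasm memory.
  Module._daxpy( x.length, alpha, xPtr, 1, yPtr, 1 );

  // Copy the result back out and free the Wasm-side buffers.
  var out = new Float64Array( Module.HEAPF64.buffer, yPtr, y.length ).slice();
  Module._free( xPtr );
  Module._free( yPtr );
  return out;
}
```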

But what may be unexpected is that JavaScript still performs quite well compared with WASM copy. That demonstrates that, especially for simpler operations, using vanilla JavaScript is just fine.

It is only about 2 to 2.5 times slower than C. There have been a few more efforts in this direction, such as Pyodide for running Python inside your web browser and WebR for running R inside your web browser.

Based on these charts we can conclude that stdlib is better than or equal to the alternatives in both loading time and speed.

So let's have a recap of what we have just covered. We first saw how vectorization enables fast and readable numerical code in JavaScript.

Then we saw the role of specialized libraries for ML and stats, such as scikit-learn and statsmodels. Then we also saw how stdlib provides ndarrays, universal functions, core math APIs, and much more.

Also, BLAS and LAPACK are now accessible inside web browsers using stdlib, and all of this is converging into a real emerging ecosystem with, of course, stdlib at its center.

So how can you get involved in all this? Well, these two QR codes link to the stdlib repository and its blog. If you feel like contributing or are facing any issue, you can just go ahead and open an issue on the GitHub repository.

You can also read the stdlib blog, which is updated quite regularly with recent activities. stdlib also participates in Google Summer of Code; it participated last year and this year as well.

That was all from my side. Hope you enjoyed listening to me. Thank you. Have a nice day.
