Video Summary and Transcription
Progressive enhancement is a strategy that provides a baseline experience for all users while enhancing it for those with modern browsers. Feature detection and graceful degradation are complementary approaches to achieving this, and polyfills can emulate new browser functionality in old browsers. Progressive enhancement is about meeting user needs while considering performance. SvelteKit makes it easy to build progressively enhanced apps: Svelte components and DOM content provide a convenient way to structure and update the UI, and forms can be enhanced so that file upload and processing still work when JavaScript is disabled.
1. Introduction to Progressive Enhancement
I'm Elliott Johnson, a software engineer at Vercel, specializing in front-end development. Progressive enhancement is a strategy I've found effective in building applications that provide a baseline experience for all users while enhancing it for those with modern browsers. It involves using widely supported HTML, CSS, and JavaScript features and enhancing them through server-side rendering and feature detection.
Hey there. I'm Elliott Johnson and I'm so glad you've chosen to be here with me today. A huge thank you to the organizers of the JS Nation Conf for their hard work coordinating everything and allowing me to speak.
By way of introduction, I'm a software engineer at Vercel working in our growth department. In my spare time, I help maintain Svelte and SvelteKit. I started my professional life as a business intelligence analyst and moved on from there to analytical data architecture. It was around this time that I realized that wasn't really what I wanted to be doing and started my long slide into software engineering, ending up a front-end engineer with an uncomfortable amount of backend experience.
The front-end space is very different to any of those spaces I worked in before, specifically regarding feature support. In other spaces, this isn't as big a concern. When you control the hardware and software, you can just deploy what you want for the most part. If you need more bandwidth, just buy it. If you need more processing power, just upgrade your instance. If you need a new version, upgrade your software. In the front-end space, though, you're shipping code to places you can't control. To Dan with the slow internet. Or Carla with the phone that's ten years old that she can't afford to replace. Or to Ebenezer, the owner of all the latest tech that can run anything you throw at him. To somebody in a developing country where the best phone they can afford is the equivalent of ten-year-old hardware. These people will experience the sites I write in different ways, and if I don't write my application with all of them in mind, I am going to degrade their experience.
But writing applications like this is challenging. It requires an understanding of HTML, CSS, JavaScript, and every other web technology that was beyond me even last year. One strategy that I have found for dealing with this that makes it simpler, and that I have become quite enamored with as of late, is progressive enhancement. Starting from MDN, progressive enhancement is a design philosophy that provides a baseline of essential content and functionality to as many users as possible, while delivering the best possible experience only to users of the most modern browsers that can run all the required code. Restated, we can say that progressive enhancement aims to provide a baseline functional experience for as many users as possible, enhancing this experience for those whose technology supports it. In practice, this is usually accomplished by building a base experience using widely supported HTML, CSS, and JavaScript features, and then enhancing it in one of a few ways. The first is executing code on the server. We took a brief step away from this as an industry with the advent of single-page apps. But, more recently, this approach has regained great popularity with newer meta-frameworks like SvelteKit or Next.js that take a more SSR-first approach to web development. The next way is detecting whether features are supported before we use them. Typically, we would use some construct like the @supports query in CSS to see if a CSS feature is supported.
2. Feature Detection and Graceful Degradation
Feature detection can be done with CSS @supports, with CSS.supports in JavaScript, with detection libraries, or with feature-specific checks. When guarding every feature isn't realistic, the complementary approach is graceful degradation: be ready for features to fail and fall back to better-supported defaults when they do. It is technically distinct from progressive enhancement, but the two are close enough to treat together.
You can accomplish the same thing through JavaScript using CSS.supports. There are libraries for feature detection that you can use. You can also use feature-specific methods of detection.
So, like, if you wanted to use the geolocation API, you could check to see that the key geolocation is defined in the Navigator object. There are a multitude of ways, but the important thing is thinking about, hey, I want to use this feature. I know it's not supported in a lot of browsers. Maybe it's really new. I should probably check to see that it's there before I use it.
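To make those checks concrete, here is a minimal sketch (not from the talk) showing both styles: CSS.supports as the JavaScript counterpart of the @supports query, and a feature-specific check before touching the Geolocation API.

```js
// Feature detection before use — a minimal sketch, not code from the LGTM app.

// CSS feature detection from JavaScript (the counterpart of @supports in a stylesheet).
if (typeof CSS !== 'undefined' && CSS.supports('display', 'grid')) {
  document.body.classList.add('has-grid'); // enhance the layout only when grid exists
}

// Feature-specific detection: only call the Geolocation API if it's on navigator.
if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(
    (position) => console.log('Located at', position.coords.latitude, position.coords.longitude),
    () => console.log('No position available — fall back to a manual location field')
  );
} else {
  console.log('Geolocation unsupported — fall back to a manual location field');
}
```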
But in some circumstances, that's not really realistic. We might be using a library that accesses features like this somewhat unpredictably and we don't really have a way of wrapping all of those in guard checks to provide fallbacks and enhancements. So, another way that we can do this is to not detect whether additional features are supported, but be ready for them to fail and fall back to better-supported defaults when they do. This is technically known as graceful degradation, not progressive enhancement, but they're basically the same thing in opposite directions, and they're so complementary that I like to lump them together. If that offends you, that's too bad.
3. Polyfills for Browser Functionality
Polyfills are a commonly used way to emulate new browser functionality in old browsers. They can have downsides like reduced performance or edge cases where they don't work correctly, but if you understand the trade-offs and are prepared to accept them, they can be a good solution.
And the last way that I've seen commonly used is polyfills. These are basically when you have some new browser functionality like, I don't know, Object.create that might not be supported in old browsers, and you want to be able to use it anyway. So you write code that emulates that behavior and inject it if the feature is not supported. These typically have some sort of downside, like reduced performance or edge cases where they don't work correctly. But if you know the trade-offs, and you're prepared to accept them, and you're knowledgeable about what accepting them is going to do to you, then they can be a good solution.
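For illustration, here is a simplified sketch of the classic Object.create polyfill (not from the talk); production polyfills handle more edge cases, such as the property-descriptor argument, which is exactly the kind of trade-off being described.

```js
// Simplified Object.create polyfill — a sketch only; real polyfills cover more
// edge cases (the property-descriptor argument, null prototypes, and so on).
if (typeof Object.create !== 'function') {
  Object.create = function (proto) {
    function Temp() {}       // throwaway constructor
    Temp.prototype = proto;  // borrow the desired prototype
    return new Temp();       // the new object delegates to proto
  };
}
```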
4. What Progressive Enhancement is Not
Progressive enhancement is not about shipping zero kilobytes of JavaScript, but rather meeting user needs while considering performance. JavaScript is essential for optimal performance. The goal is to be thoughtful about what we ship and how it meets user needs.
We should also just take a minute to discuss what progressive enhancement is not. I'm just going to quote Rich Harris here, because he said it better than I ever could. Something I've seen more and more of lately is people talking about zero kilobytes of JS, as in this framework ships zero kilobytes of JS by default. The implication is that JavaScript is inherently bad, and so a framework that doesn't serve JavaScript is inherently good. But zero kilobytes of JavaScript is not a goal. The goal is to meet some user need, or, if you're cynical, to meet some business need by way of meeting a user need. Sometimes performance is a factor in how effectively you can meet that need. We've all seen the studies showing that every millisecond of delay costs Amazon like a billion dollars or whatever. And sometimes you can improve startup performance by reducing the amount of JavaScript on the page. But doing so is always in service of some other larger objective. Collectively, we're in danger of mistaking the means for the ends. As we'll see later, if you want to get the best possible performance, JavaScript is actually essential. The goal of progressive enhancement is not to ship zero JavaScript, though that's an easy pit to fall into as we walk back the incredible progress that was made by single-page applications. Instead, it is to be thoughtful about what we ship and how that does or doesn't meet user needs.
5. Building Apps in SvelteKit
The goal of today's talk is to introduce you to building apps in SvelteKit and show you how easy it is to develop progressively enhanced apps. I created an app to generate humorous acronyms using a trained naïve Bayes classifier. The app meets my requirements of displaying random acronyms, enabling sharing, and working well on slow internet connections. Let me give you a tour of the implementation, starting with the root page that generates a random acronym and provides functionalities to reject, copy, and rotate it.
Now that we've established what we're talking about, let's abandon theory and jump into practice. The goal of today's talk is to both introduce you to building apps in SvelteKit and to show you how easy SvelteKit makes it to develop progressively enhanced apps.
We all know the acronym LGTM. Looks good to me or looks good to merge, depending on your context. For months, I've been coming up with bogus acronyms for this, like lumpy gorillas teach math, lunar grandeur tantalizes minimally, laggards growl to murder, etc., and posting these acronyms as my approval comments on PRs. I got a bit tired of having to be creative, so I came up with an app to create these acronyms for me. Simply using random combinations of words starting with L, G, T, and M is certainly not good enough. You would end up with too much nonsense and not enough comedy, so I went a slightly more thorough route, training a naïve Bayes classifier with a few thousand good and bad acronym combinations and running 100,000 random acronyms through it. It's not perfect, but it's plenty good.
From the start, there were a few requirements for this app for me. Display a random acronym when I go there, always a new one. Be able to share that acronym, so if I give somebody a link to go there, they should see the same one that I saw. It needed to enable me to train the model. I'm not a huge fan of fancy CLIs. I'd much rather just have something that I can pull up on my phone and use. And it also needed to work well on slow internet connections, where I may not have JavaScript or may just be taking a really long time to load web pages. The times when I have time to train the model are when I'm in line for groceries or at the gas station or something like that. Time to train it a few times and then move on. And my cell connection is not great. So let me give you a tour of what I came up with, and I'll show you how it's implemented. We have two pages here. The first page, the root page, is the main page that I send people to. This is going to give you an acronym. Layers greasing toadstool misrepresenting. And it's going to give you just a few functionalities. It's going to give you the ability to reject that acronym for a new one. It's going to give you the ability to copy that acronym to your clipboard. It is going to give you the ability to rotate that acronym. Some of these look really good in this orientation. They can be harder to read with longer words, though, so I kind of like the opportunity to see them both ways.
6. Sharing and JavaScript Features
We have the ability to share what we're seeing with query parameters. JavaScript-only features include automatic selection and copy behavior. There's a page for training the model and uploading acronyms. The page works without JavaScript. In SvelteKit, there are lib and routes folders. The routes folder uses a file system-based router to define routes.
We have the ability to share exactly what we're seeing because we've got two query parameters that represent both the orientation and the ID of this acronym so that it will never change. We can always share this. They'll come here and they'll see it.
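A load function can read those two parameters off the request URL. The following is a hedged sketch of that idea — the parameter names and lookup helpers are assumptions for illustration, not the app's actual code.

```ts
// +page.server.ts — illustrative sketch; param names and helpers are hypothetical.
import type { PageServerLoad } from './$types';

// Stand-ins for however the app really stores and picks acronyms.
const acronyms = ['lumpy gorillas teach math', 'lunar grandeur tantalizes minimally'];
const byId = (id: string) => acronyms[Number(id) % acronyms.length];
const random = () => acronyms[Math.floor(Math.random() * acronyms.length)];

export const load: PageServerLoad = ({ url }) => {
  const id = url.searchParams.get('id');                   // hypothetical param name
  const orientation = url.searchParams.get('orientation'); // hypothetical param name

  return {
    acronym: id !== null ? byId(id) : random(),
    orientation: orientation === 'vertical' ? 'vertical' : 'horizontal'
  };
};
```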
And then we have a couple of JavaScript-only features. So we can only do the automatic selection of this entire box with JavaScript enabled, so far as I know. And we can only do this copy behavior with JavaScript enabled. So like if I turn off this, you'll see when I reload this page, this button is disabled and I can no longer just click on this to select it, I have to actually select it. But my No button and my Yes, but rotated button both still work great. Let's re-enable JavaScript before I forget.
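In SvelteKit, one way to express that kind of JavaScript-only enhancement is the browser flag from $app/environment: the server-rendered HTML disables the button, and hydration re-enables it. A rough sketch, not the app's exact markup:

```svelte
<!-- Sketch of the copy enhancement: the button is rendered disabled on the server
     and only becomes usable once client-side JavaScript runs. -->
<script>
  import { browser } from '$app/environment';

  export let acronym = '';

  async function copy() {
    // The Clipboard API is itself worth guarding before use.
    if (navigator.clipboard) {
      await navigator.clipboard.writeText(acronym);
    }
  }
</script>

<button disabled={!browser} on:click={copy}>Copy to clipboard</button>
```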
And then we have a page that I built to enable myself to train this model. If you come here, you are going to see a login screen, and I ask you not to make a million requests trying to bypass it, but it enables me to basically sit on a treadmill of infinite acronyms and approve, or disapprove, or modify them so that I can improve them and train the model. It will give me an acronym like look out gleams tipsiest mold, and I will decide whether or not I like that. Yes, no, or not quite, where I can adjust it and submit. And I also wanted the ability to upload a file of a bunch of acronyms, because I often come up with dozens of them and I don't want to have to type all those into this little box. So I've got my little file button here and I can select a text file full of acronyms and upload them. And this whole page also works without JavaScript. And I want to show you how we do that in SvelteKit, because it is shockingly easy. It works really well. And I think it's going to blow your mind. So when we come in here to a SvelteKit app, you'll see that it's pretty simple.
We have a lib folder and a routes folder. The lib folder is where all of your library code lives: your components, your database connection, anything that you would consider shared code. The routes folder is, like in many other meta-frameworks, the way that we define our routes. In SvelteKit, we use a file system-based router: folder paths form your routes, and the files inside those folders determine the behavior of each route. So you can see here at the root, the one we saw that displays the square with the acronym in it, we have a layout file, a server file, and a Svelte component. Basically what this means is that everything under this route is going to be wrapped in this layout, which is just a main around the content with some formatting applied to it and some CSS imported. And then we're going to have a server-side load for this route, and a page for this route. And then we also have our train route, which you just saw, which also has a server file and a page file, and a success route, which you'll see later. I'm going to skip talking about that for now. If we have time, I'll come back to it, but I don't think we will.
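Mapped onto SvelteKit's file conventions, the structure being described looks roughly like this (reconstructed from the description, so treat the exact file list as an approximation):

```
src/
├─ lib/                    # shared code: components, database access, the classifier
└─ routes/
   ├─ +layout.svelte       # wraps everything below in a <main> with shared CSS
   ├─ +page.server.ts      # server-side load for the root acronym page
   ├─ +page.svelte         # the root page with the acronym square
   ├─ train/
   │  ├─ +page.server.ts   # load and form actions for training
   │  └─ +page.svelte      # the training UI
   └─ success/
      └─ +page.svelte      # the success route mentioned above
```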
7. Svelte Components and DOM Content
For those unfamiliar with Svelte components, they consist of a script tag for JavaScript, an HTML markup section, and a style tag for scoped CSS. The script tag behaves like JavaScript, with function definitions hoisted and variables assigned values. The label syntax with the dollar sign is used to rerun and update the DOM based on dependencies. The data and form exports are automatically typed and populated from the server. The DOM content resembles a native form submit, with an action pointing to the current URL and a query parameter for the form action in this file.
I think the most interesting stuff for you is going to be in this train route. For those of you that have not seen a Svelte component before, they're quite simple. Most of them have three sections. A script tag, where we have our JavaScript, a section where we put all of our HTML markup, which is right here, and then a style tag, which contains scoped CSS. This CSS is going to apply to paragraphs in this file, but not any other file.
So for people who have worked in JavaScript, the behavior of this script tag is probably going to be pretty self-explanatory. It runs just about like you would expect. Function declarations are hoisted to the top level. Const function definitions are not. It runs from top to bottom, so it's going to create this variable, assign false to it, it's going to create this variable, assign no to it, it's going to create this function, assign it to a variable, and then this little special syntax here, don't let it freak you out too much, it's just JavaScript syntax. It's a label. The name of the label happens to be dollar sign. What that tells Svelte is that I want this to rerun and assign to name every time one of the dependencies on the right side changes. In this case, we only have one dependency. It's form.name. So every time form or form.name changes, this will rerun and assign the result to name, which will in turn update all of my DOM to follow that.
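In code, that pattern looks roughly like the sketch below (variable names approximate the description rather than copy the real component):

```svelte
<!-- A condensed sketch of the script behavior described above. -->
<script lang="ts">
  import type { ActionData } from './$types';

  export let form: ActionData;   // populated by the form actions on the server

  let submitting = false;        // plain top-to-bottom assignments
  let selected = 'no';

  const handleSubmit = () => {
    // A const function expression is not hoisted, unlike a function declaration.
    submitting = true;
  };

  // The `$:` label tells Svelte to rerun this statement whenever `form` (and so
  // `form?.name`) changes; any markup that uses `name` updates along with it.
  $: name = form?.name ?? '';
</script>

<p>Latest acronym: {name}</p>

<style>
  /* Scoped CSS: applies to paragraphs in this component only. */
  p {
    font-weight: bold;
  }
</style>
```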
We have two special exports here. We have data and form. Both of those are populated from our page.server.ts file and they are automatically typed, so you can see our data is PageData and our form is ActionData. So if I type data.blah, those are things that I'm returning from my load function on the server. If I type form.blah, those are all of the responses that my form handlers can send from the back end. It's quite a nice little feature. I quite enjoy it. If you look at our DOM content here, this is pretty simple. It really is pretty close to a native form submit. We have a form with a method of post. All of these have an action that points to the current URL with a query parameter. This might look a little bit weird. What this is really saying is: I want to go to train, because that's the route that I'm on right now, and I want to append a query parameter called /train. That query parameter is going to point us at a form action in this file.
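On the server side, that wiring looks roughly like this sketch of a +page.server.ts (the action and field names are approximations based on the description, not the repository's code): what load returns becomes the typed data export, and what a named action returns becomes the typed form export.

```ts
// src/routes/train/+page.server.ts — an illustrative sketch; the real file also
// has `authenticate` and `upload` actions and its own load logic.
import type { Actions, PageServerLoad } from './$types';

export const load: PageServerLoad = () => {
  // Whatever is returned here arrives on the page as the typed `data` export.
  return { candidate: 'look out gleams tipsiest mold' };
};

export const actions: Actions = {
  // Reached by <form method="post" action="?/train"> — the query parameter in the
  // action attribute selects this named form action.
  train: async ({ request }) => {
    const formData = await request.formData();
    const verdict = formData.get('verdict'); // hypothetical field name
    // ...record the verdict, then return data the page receives as `form`...
    return { name: `recorded: ${verdict}` };
  }
};
```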
8. Form Submission and File Uploader
As you can see, we have a use:enhance function that emulates native form submission behavior on the client side. It reloads the page's data without destroying all the DOM contents. The configuration argument allows for defining behavior during submit, such as preventing the form reset and updating selected values. The file uploader provides a seamless upload experience without requiring a page reload.
As you can see, we also have one for authenticate and one inside this uploader for upload. Then we have this little use:enhance function here that I'll explain in a second. Without this use:enhance function, this will still work. If I go into my site here and I click no with JavaScript disabled, it's going to work. It's going to submit my form. It's going to take a second. It's going to reload and I get my new acronym, and it has submitted that. The reason being, this is just a native form submission. It's a post to the server, and then it reloads the page.
The nice thing is we have this use:enhance function. use is just a directive that Svelte provides that lets you give it a function that will be called with a reference to the DOM element when it's mounted. So, when this form gets mounted, it will call enhance with the form element. It will also call it with whatever configuration argument is right here as its second argument. What use:enhance does is it emulates the native form submission behavior on the client side. So, SvelteKit ships a client-side router, and if I re-enable JavaScript and reload this page, you'll see that we had gone to train?/train because we had reloaded that from a native form submission. If I click no now, it's a little bit faster and it also is not fully reloading my page. The reason being, this is emulating a full page reload by reloading all of my data as if the page had been reloaded, but it is not destroying all of the DOM contents and recreating them. It is only recreating the ones that need to be updated based on new data. So that's really nice.
And this configuration argument is also really nice because it enables me to define behavior that's going to happen during the submit. So here, I'm just setting a submitting variable to true, and I'm returning an async function to run after the submission has succeeded. This async function applies the default action by calling update, which is to emulate that reload. It prevents it from resetting the form, it turns off our submitting loading state, and it resets the selected value to no, which is really nice. So when we don't have JavaScript enabled and we click the submit button, we don't have any way to know that the button has been clicked or that we're doing any work. So if we're facing network latency, we're never going to know. But here, when I click submit, it's going to disable for a moment while it thinks and then it's going to reload. It's nice. We can layer that behavior on top. We also have this really interesting behavior with this file uploader here. If I click on this icon, it's going to give me the opportunity to select a file, and with JavaScript enabled, it's just going to give me this little toast down here that's going to tell me that I succeeded or failed in my upload, which is great, because it doesn't require us to reload the whole page.
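Put together, the enhanced form looks roughly like the following sketch (handler details follow the description above; they are not the app's exact code). The callback returned from the submit function receives update, and calling update({ reset: false }) re-runs the load-and-render step without clearing the form.

```svelte
<!-- A sketch of the use:enhance pattern described above. -->
<script lang="ts">
  import { enhance } from '$app/forms';

  let submitting = false;
  let selected = 'no';
</script>

<form
  method="post"
  action="?/train"
  use:enhance={() => {
    submitting = true;                 // runs as the submission starts
    return async ({ update }) => {
      await update({ reset: false });  // emulate the reload, but keep form values
      submitting = false;              // clear the loading state
      selected = 'no';                 // restore the default answer
    };
  }}
>
  <button name="verdict" value={selected} disabled={submitting}>Submit</button>
</form>
```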
9. Form Submission and Progressive Enhancement
When JavaScript is disabled, the form submit won't work. However, a file upload button will appear, allowing the upload and processing of files. This is achieved by having a separate form with a file input that triggers the form submission when the file changes. The button is enabled when the file input becomes valid, providing the ability to submit the form. Progressive enhancement is a philosophy that is worth adopting as it allows for a better user experience and a more resilient website. For more information and the code, visit github.com/tccsejohnson/lgtm.
It's a quick experience, it makes things great. But when JavaScript is disabled, I don't have the ability to run that form submit when I add that file. So if I disable JavaScript again, if I can locate my mouse, there we go, and reload this page, if I go to upload my file, when I upload it, it's going to turn into a little upload button, and when I click that, it's going to do the upload, process the file, and tell me that I have succeeded, and I can go back to my previous page. That's great.
How does it work? Pretty simple. We really just have another form. It's a very short section of DOM code, and this form has a handle submit, very like the other one, but it has an input that accepts a file. When that file changes, if JavaScript is enabled, it's going to submit the form. If JavaScript is not enabled, that handler will not be attached, and what's going to happen is this button is going to be enabled, because we never got the chance to disable it with this browser variable. When the file input becomes valid and the button is not disabled, the button will appear, which gives us the ability to submit this form, which then redirects us to the other page.
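A hedged sketch of that uploader (markup, names, and CSS approximate the described behavior, not the actual component): with JavaScript, picking a file submits the form immediately and use:enhance keeps it on the page; without JavaScript, the fallback button is never disabled, appears once a file is selected, and performs a plain POST.

```svelte
<!-- Sketch of the no-JS-friendly file uploader described above. -->
<script lang="ts">
  import { browser } from '$app/environment';
  import { enhance } from '$app/forms';

  let fileForm: HTMLFormElement;

  function submitOnChange() {
    // Only ever runs when JavaScript is available: submit as soon as a file is chosen.
    fileForm.requestSubmit();
  }
</script>

<form
  bind:this={fileForm}
  method="post"
  action="?/upload"
  enctype="multipart/form-data"
  use:enhance
>
  <input type="file" name="acronyms" accept=".txt" required on:change={submitOnChange} />
  <!-- After hydration `browser` is true, so the button is disabled (and hidden below).
       Without JavaScript it stays enabled and shows up once the required input is valid. -->
  <button disabled={browser}>Upload</button>
</form>

<style>
  /* Hide the fallback button while the form is invalid or the button is disabled. */
  form:invalid button,
  button:disabled {
    display: none;
  }
</style>
```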
So this works great, with or without JavaScript, and it only added a few lines of code. I don't have time to talk to you more about this, or I would, I could go on about it all day, but I really do hope that this has given you some philosophical questions to think about. Hopefully you can see that the philosophy of progressive enhancement, while it's definitely a mental shift from the implement-everything-in-JavaScript philosophy, is not as difficult as it may seem on the surface. And the payoff is certainly worth it. Reaching more customers and having a site that's more resilient to technology failures will pay dividends in the long run. This code is available at github.com/tccsejohnson/lgtm. Feel free to contact me with questions on Twitter at @_Gruntled, or by email at elliot.johnson@vercel.com.