Demystifying IPFS: A Web Developer's Guide to Content Distribution


IPFS is a peer-to-peer network for distributing content-addressed data, and can be thought of as a distributed CDN. Yet, given its evolutionary trajectory, IPFS is often misunderstood. In this talk, you'll learn about the current state of IPFS with a focus on integration with the web platform using HTTP(S), how it enables local-first distribution of web apps, and real applications for you as a web developer.

This talk was presented at JSNation 2025.

FAQ

IPFS, or InterPlanetary File System, is a set of protocols designed for addressing and sharing data on the web. Its main purpose is to provide a decentralized way to store and share files using content addressing and peer-to-peer networking.

IPFS addresses link rot by using content addressing through cryptographic hashes. This ensures that links represent the data itself, allowing for multiple providers to serve the content, thus preventing broken links.

The key benefits of using IPFS include censorship resistance, prevention of link rot, tamper-proofing, high cacheability, and data deduplication. It also enhances resilience by allowing multiple providers for a piece of data.

A CID, or Content Identifier, in IPFS is a hash that includes metadata like the codec and hash function used, which helps in uniquely addressing content in the IPFS network.

IPFS handles large files by chunking them into smaller pieces and organizing them into a Merkle DAG (Directed Acyclic Graph), which is a data structure that allows efficient storage and retrieval.

Helia is a lean and modular TypeScript implementation of IPFS designed for Node.js and the web. It allows developers to interact with the IPFS network, handling operations like encoding data and retrieving content by CID.

IPFS gateways provide an HTTP API for retrieving data from IPFS nodes, abstracting the complexities of peer-to-peer networking for easier access, especially in web browsers.

IPFS ensures data integrity through content addressing, where cryptographic hashes serve as content identifiers (CIDs). This allows verification of the data's integrity, ensuring it hasn't been tampered with.

IPFS faces challenges in web browsers due to limited connections, the need for TLS certificates, and restricted transport protocols. These limitations make direct peer-to-peer connections difficult, often requiring HTTP gateways.

IPFS provides censorship resistance by allowing multiple nodes to serve the same content, making it difficult for state actors to block access to specific data, as there is no single point of failure or control.

Daniel Norman
20 min
16 Jun, 2025

Video Summary and Transcription
Introduction to Demystifying IPFS, discussing the web platform's reach and flaws like link rot and censorship. The security challenges of web origin anchoring, the flaws of location addressing, and an introduction to IPFS with content addressing and peer-to-peer networking. IPFS content addressing with CIDs and the role of providers in data retrieval. Data encoding in IPFS with UnixFS and the role of peer-to-peer networking in content retrieval. The challenges of peer-to-peer networking in IPFS and the benefits of content addressing: censorship resistance, link rot prevention, and tamper-proofing. The importance of DNSLink in IPFS and the key operations of addressing data by CID, providing or pinning data, and retrieving data using protocols like Bitswap and HTTP from gateways. The challenges with centralized gateways in IPFS, efforts to eliminate them by turning every IPFS node into a gateway, and the introduction of Helia, a modular TypeScript implementation for Node.js and the web.

1. Introduction to Demystifying IPFS

Short description:

Introduction to Demystifying IPFS, discussing the web platform's reach and flaws like link rot and censorship.

Welcome to Demystifying IPFS, and thank you for joining JS Nation and this talk. So the way that this talk is going to be structured is into three parts. The first part, we're going to talk about the web platform. And then we're going to move on to IPFS and where that fits in with the web. And finally, we're going to take a look at IPFS for JS developers and how you can actually use IPFS in practice.

The web is amazing. The web platform empowers billions of people, and it has the widest reach, with over 5 billion users across all continents via browsers, web views, and mobile apps. It is also the most ubiquitous platform, working on practically every device: desktops, laptops, smartphones, TVs, and even fridges these days. And finally, it's an open ecosystem where anyone can publish content or build sites with minimal gatekeeping.

But the web has its flaws. Link rot is a big problem on the web, whereby links simply stop working. Research shows that one in five scientific papers has at least one broken link within a few years of publication. In fact, other research suggests that 30 to 70% of web links rot within a couple of years of being published. There's also the problem of censorship. A great example of this is the censorship of Wikipedia, of which there are many examples in countries like China, Turkey, India, Russia, and other authoritarian regimes. And the nature of the web is such that it's possible for these state actors to block large swaths of the Internet.

2. Security Challenges and Introduction to IPFS

Short description:

Discussing the security challenges of web origin anchoring, location addressing flaws, and the introduction to IPFS with content addressing and peer-to-peer networking.

There's also the problem of how security is anchored to origin, which means that you lack tamper proofing. A great example of this is your online banking web app: how do you know that it hasn't been tampered with? The way this works on the web is the same-origin policy, whereby your browser validates the certificate-authority-signed certificate presented by your online banking site. So essentially you're trusting who you're talking to rather than what you're actually getting. This is in contrast to how package managers implement tamper proofing using hashes and signatures. The same-origin policy delineates the trust and security boundaries, typically tied to the protocol and the domain name. But the key idea here is that you're trusting who rather than what.

The one exception to this, of course, is subresource integrity, a relatively recent feature introduced to allow importing scripts from other origins with an integrity attribute that validates the hash of the script you're importing. This has been a great addition. Of course, the root of a lot of these problems is the fact that the web is location addressed. Location addressing is foundational to the web and how the Internet works. Every network device has an IP, and DNS is used to map human-friendly names to IPs. This works extremely well because the network makes it possible to find multiple paths between any two IPs on the Internet. But it is also the cause of many of these problems: for a domain name you typically have a single source, and, as mentioned before, IPs and DNS can be easily blocked by state actors. If a copy of some data or a website disappears, there's no fundamental way to know where other copies might be. Moreover, it's not possible to verify the integrity of the content that you're getting, as in the visualization we have here.
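To make subresource integrity concrete, the value in an integrity attribute is just a base64-encoded hash of the script bytes. A minimal Node.js sketch (the script contents below are a made-up example):

```javascript
// Compute a subresource integrity (SRI) value the way a build tool would.
// The browser recomputes this hash over the fetched bytes and refuses to
// run the script on mismatch: trusting *what* you got, not just *who* served it.
import { createHash } from 'node:crypto'

const scriptSource = 'console.log("hello from another origin")' // example contents
const digest = createHash('sha384').update(scriptSource).digest('base64')
const integrity = `sha384-${digest}`

console.log(integrity)
// Used as: <script src="https://cdn.example/app.js" integrity="sha384-..." crossorigin="anonymous">
```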

This brings us to the second part of this talk: IPFS. IPFS is a set of protocols for addressing and sharing data on the web. It has popular implementations in TypeScript and Go, and it is designed to work natively on the web. What do we mean by sharing? Sharing typically means two things: publishing and retrieving. There are two core ideas that are key to understanding IPFS. The first is content addressing, and the second is peer-to-peer networking. It's worth flagging that IPFS fuses a lot of ideas from Git, the source code management tool, and BitTorrent, the peer-to-peer file-sharing protocol. Content addressing, in contrast to location addressing, is the idea that you derive the address of content from the content itself, using cryptographic hashes, rather than from the location where it is stored, as is common with hrefs and links on the web.
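The core property can be shown in a few lines of Node.js: the address is a pure function of the bytes, so the same bytes always get the same address, no matter where they are stored.

```javascript
// Content addressing in miniature: derive a fixed-length fingerprint from
// the data itself, independent of where the data lives.
import { createHash } from 'node:crypto'

const address = (data) => createHash('sha256').update(data).digest('hex')

console.log(address('hello world'))
// b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9
console.log(address('hello world!'))
// flipping even one byte yields a completely different, unrelated fingerprint
```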

3. IPFS Content Addressing and Provider Role

Short description:

Exploring IPFS content addressing with CIDs and the role of providers in data retrieval.

Hash functions are functions that take arbitrary length data and produce a fixed length fingerprint of that data. Now content addressing is not a new idea. In fact, it is very prevalent in a system known as Git, which I've previously mentioned, whereby each commit hash is a hash of the repo's full file tree at the time of the commit. PNPM also uses a content addressed store to deduplicate dependencies and make package management faster in the JavaScript ecosystem.

Now, the way IPFS implements content addressing is with content identifiers, commonly referred to as CIDs. A CID is essentially a hash with some additional metadata, like the codec, the hash function used, and the length of the hash. This property makes CIDs self-describing and upgradable. The typical URI scheme used is ipfs:// followed by a CID. A CID can be represented in binary form or in string form, and the example we have here with the IPFS URI is in fact a string representation.
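To show what "self-describing" means, the sketch below hand-rolls a valid CIDv1 for the raw codec with sha2-256. Real code would use the multiformats package instead; the input string is just an example.

```javascript
// A CIDv1 is <version><codec><multihash>, where the multihash is
// <hash-fn-code><digest-length><digest>. Every field is in the identifier,
// which is what makes CIDs self-describing and upgradable.
import { createHash } from 'node:crypto'

const digest = createHash('sha256').update('hello IPFS').digest()

const bytes = Buffer.concat([
  Buffer.from([0x01]),       // CID version 1
  Buffer.from([0x55]),       // codec: raw bytes
  Buffer.from([0x12, 0x20]), // multihash: sha2-256, 32-byte digest
  digest,
])

// String form: multibase prefix 'b' + lowercase base32 (RFC 4648, no padding)
const ALPHABET = 'abcdefghijklmnopqrstuvwxyz234567'
let bits = 0, value = 0, out = ''
for (const byte of bytes) {
  value = (value << 8) | byte
  bits += 8
  while (bits >= 5) {
    out += ALPHABET[(value >>> (bits - 5)) & 31]
    bits -= 5
  }
}
if (bits > 0) out += ALPHABET[(value << (5 - bits)) & 31]

const cid = 'b' + out
console.log(cid) // raw sha2-256 CIDv1 strings all start with 'bafkrei'
```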

The key benefit of content addressing is that it enables you to have multiple providers for a piece of data. A provider is essentially just an IPFS node that is reachable and is serving data addressed by that CID. Of course, you still need to fetch from a location, so content addressing can be thought of as an overlay on the location-addressed Internet, which still uses IPs and ports. There's no magic here: you still need to fetch from a location. And this is where the Kademlia DHT comes into play. The way you go from a CID to the actual IP of a node that has the data for that CID is using this Kademlia DHT, which is quite complex to understand, but we're not going to get into all of the details now.
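The mapping the DHT maintains can be pictured as a simple provider table, sketched here as an in-memory Map. The real Kademlia DHT shards this table across many peers; the CID and addresses below are made up for illustration.

```javascript
// Toy model of DHT provider records: CID -> set of peers serving that data.
const providerRecords = new Map()

function provide(cid, peerAddr) {
  if (!providerRecords.has(cid)) providerRecords.set(cid, new Set())
  providerRecords.get(cid).add(peerAddr)
}

function findProviders(cid) {
  return [...(providerRecords.get(cid) ?? [])]
}

// Three nodes announce the same JPEG, as in the diagram:
provide('bafk...jpeg', '/ip4/203.0.113.7/tcp/4001')
provide('bafk...jpeg', '/ip4/198.51.100.2/tcp/4001')
provide('bafk...jpeg', '/ip4/192.0.2.44/tcp/4001')

// Retrieval resolves the CID to candidate locations, then fetches from any:
console.log(findProviders('bafk...jpeg').length) // 3 providers to choose from
```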

4. IPFS Data Encoding and Peer-to-Peer Networking

Short description:

Exploring data encoding in IPFS with UnixFS and the role of peer-to-peer networking in content retrieval.

But the key idea is that the DHT essentially allows you to go from a CID to a provider or multiple providers. In the diagram here, we have a JPEG that has three providers. So how do you encode data in IPFS? You can't just, for example, hash a directory. UnixFS is the format used in IPFS to encode files and directories. The idea with UnixFS is that everything is essentially a block addressed by a CID, and large files are chunked into smaller chunks, which then comprise a larger data structure known as a Merkle DAG.
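A toy version of that chunking step (this is not the real UnixFS wire format, just the shape of the idea): chunks are hashed individually, and a root node hashes over its children's hashes, so one fingerprint covers the whole file.

```javascript
// Chunk a "file", hash each chunk, and derive one root hash over the chunk
// hashes. Identical chunks hash identically, which is where deduplication
// across files and data sets comes from.
import { createHash } from 'node:crypto'

const sha256 = (data) => createHash('sha256').update(data).digest('hex')

const file = 'x'.repeat(1024)  // stand-in for a large file
const chunkSize = 256
const chunks = []
for (let i = 0; i < file.length; i += chunkSize) {
  chunks.push(file.slice(i, i + chunkSize))
}

const leafHashes = chunks.map((chunk) => sha256(chunk))
const rootHash = sha256(leafHashes.join('')) // root links to its children

console.log(leafHashes.length) // 4 chunks
console.log(rootHash)          // one address for the whole file
```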

This is the same data structure that is used by Git. The benefit of a Merkle DAG is that you can have a single CID that represents a whole file tree. The example here is a very minimal build directory of a static website: it has an index.html file, some static assets, and a style.css. The reason I use this example is that it emphasizes how UnixFS, and IPFS in general, is ideal for publishing static websites and apps: you essentially take your build output and encode it with UnixFS, thereby getting a single CID that represents the whole website.

Now, the second key idea behind IPFS that is important to understanding it is peer-to-peer networking. The key idea behind peer-to-peer networking is that every client is also a server. There's a little asterisk there, because web browsers turn out to be very bad servers: you're very constrained in terms of what you can do in a browser. So it's sometimes useful to think of IPFS as a network in which some nodes are only doing retrieval, because they can't do the job of serving. Peer-to-peer networking complements content addressing because, as I mentioned, you can have multiple providers, and if you have multiple providers for a CID, you want to be able to connect to them directly.

5. IPFS Benefits and Challenges

Short description:

Discussing the challenges of peer-to-peer networking in IPFS and the benefits of content addressing. Emphasizing the advantages of censorship resistance, link rot prevention, and tamper-proofing in IPFS.

You also do away with the idea of canonical or trusted servers holding a certificate granted by a certificate authority that has authenticated their identity, as we have with the traditional web. Instead, we take the approach of verifying everything and treating everyone with a minimal level of trust. Now, peer-to-peer networking is hard. It's hard because networking is hard: you have NATs, you have firewalls, and you also have limited transports in browsers, as I mentioned in the beginning. A lot of this is handled by a library that spun out of IPFS called libp2p.

It handles a lot of the magic of NAT traversal and overcoming firewalls, and of course implements the Kademlia DHT, the distributed hash table, where we store the information of who has what. So what benefits do we get from IPFS and these ideas of content addressing and peer-to-peer networking? First, we get censorship resistance and resilience, because we can have multiple providers. We prevent link rot, because CIDs represent the data itself. Now, this doesn't guarantee that you actually have providers for the data.

So it doesn't guarantee the persistence of data. But it ensures that links actually represent the data, so you can always bring it back, and those links are fixed, so they don't break. You also get tamper-proofing, because you're essentially verifying every piece of content that you get back. And because everything is immutable, because it's hashed, it's highly cacheable, which enables things like local-first static websites, which we're going to look at in a moment. And of course, you get deduplication across data sets; this is the same feature as in Git.

6. IPFS DNS Link and Key Operations

Short description:

Discussing the importance of DNSLink in IPFS and the key operations of addressing data by CID, providing or pinning data, and retrieving data using protocols like Bitswap and HTTP from gateways.

So Git uses these Merkle DAGs, which essentially deduplicate data that persists across commits. DNSLink is another technology that is quite important to IPFS, because CIDs are not very human-friendly: they're long and not very memorable. DNSLink introduces mutability into IPFS and allows you to link DNS names, domains, to CIDs. You do that using a simple TXT record at _dnslink. followed by the domain, and in that record you store a path to your CID.
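For example, a TXT lookup on `_dnslink.<domain>` yields a record in the form below, and parsing it is trivial. The CID here is just an example; in Node.js you would fetch the real record with `dns.promises.resolveTxt('_dnslink.' + domain)`.

```javascript
// Parse a DNSLink TXT record value into an IPFS (or IPNS) path.
const record = 'dnslink=/ipfs/bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi'

const match = record.match(/^dnslink=(\/ip[fn]s\/\S+)$/)
const path = match ? match[1] : null

console.log(path) // '/ipfs/bafybeig...' — the site the domain currently points at
// Updating the site is just publishing a new CID and editing this TXT record.
```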

This brings us to the third and final part of this talk: IPFS for JS developers. After all, this is JS Nation, a JavaScript conference. So let's talk about the three key operations you have when interacting with IPFS. The first is addressing data by CID: you take input data and encode it such that it's addressed by a CID. Under the hood, it's Merkle-ized into the Merkle DAG data structure, but a lot of that is handled by the libraries and SDKs that IPFS has. The second operation is providing, also often known as pinning, where you run a node that announces itself as a provider for one or more CIDs and serves that data over the network. By analogy, it's similar to running an HTTP server, but a bit more involved, because you have to actively announce the data to the DHT.

The third operation is retrieval by CID, whereby you fetch data using the CID as an address. This can happen using two main protocols: Bitswap, a block exchange protocol that runs over libp2p connections, or, as we'll look at in more detail in a moment, HTTP from gateways. Now, I've hinted at this earlier in the talk: IPFS on the web is pretty hard. The web platform is amazing, but it is heavily constrained. If you're running an application in a browser, you're limited in the number of connections you can open. When you want to connect to a server from a secure context, the server needs a TLS certificate, and to have a TLS certificate you typically need a domain. And finally, you're limited in transports: you can't just open arbitrary TCP or UDP connections; you're constrained to HTTP, WebTransport, WebSockets, and WebRTC. This brings us to IPFS HTTP gateways and how they came to be. IPFS HTTP gateways are an HTTP API for retrieving data from IPFS nodes.
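Concretely, a gateway request is an ordinary HTTP URL built from the CID. The two common URL shapes look like this; the CID is an example, and dweb.link is one public subdomain gateway.

```javascript
// Path-style and subdomain-style gateway URLs for the same CID. The
// subdomain style gives each site its own web origin, which matters for
// the same-origin policy discussed earlier.
const cid = 'bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi'

const pathStyle = `https://ipfs.io/ipfs/${cid}/`
const subdomainStyle = `https://${cid}.ipfs.dweb.link/`

console.log(pathStyle)
console.log(subdomainStyle)
// Retrieval is then an ordinary fetch(pathStyle); because the address is a
// hash, the response bytes can in principle be re-verified against the CID.
```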

7. IPFS Centralized Gateways and Helia Introduction

Short description:

Introducing the challenges with centralized gateways in IPFS, efforts to eliminate them by turning every IPFS node into a gateway, and the introduction of Helia, a modular TypeScript implementation for Node.js and the web.

Probably the most common example of this is ipfs.io, which is essentially a public-good CDN that we provide. You can just pass it an IPFS path with the CID, and it will abstract all the complexities of peer-to-peer and IPFS in a browser, do the retrieval for you, and hand the data back. Now, this turned out to be a huge success for the IPFS ecosystem over the last 10 years since IPFS was conceived, but it was a double-edged sword. It was very convenient, and developers loved it, but it turned ipfs.io and similar gateways into a point of centralization, thereby undermining many of the core principles of IPFS: having multiple providers and an open, decentralized network.

This is why one of the big initiatives we're currently working on is eliminating the need for these centralized gateways. The way we're approaching this is by essentially turning every IPFS node into an IPFS gateway that also has a TLS certificate, so it can be reached directly from a web browser. There are a number of ways we're doing this, which I won't go into in great detail, but feel free to reach out or read some of our blog posts that go into more detail around WebTransport and how we approach this problem. The key idea is that, as we speak, it's becoming increasingly possible to do direct retrieval from providers without going through a centralized gateway.

Now I want to introduce you to Helia. A lot of this was a long preamble to get to the meat of this talk, which is Helia. Helia is a lean and modular TypeScript implementation of IPFS, intended for Node.js and the web, so it uses all of the standard web APIs. You can check it out on GitHub. Here we have an example where I show you how to take text and encode it as a file with UnixFS: you get the CID for it, provide it to the network, and then you can use that CID to retrieve it directly from this Helia node running in Node.js.
