Never Have an Unmaintainable Jupyter Notebook Again!


Data visualisation is a fundamental part of Data Science. The talk will start with a practical demonstration (using pandas, scikit-learn, and matplotlib) of how relying on summary statistics and predictions alone can leave you blind to the true nature of your datasets. I will make the point that visualisations are crucial in every step of the Data Science process and therefore that Jupyter Notebooks definitely do belong in Data Science. We will then look at how maintainability is a real challenge for Jupyter Notebooks, especially when trying to keep them under version control with git. Although there exists a plethora of code quality tools for Python scripts (flake8, black, mypy, etc.), most of them don't work on Jupyter Notebooks. To this end I will present nbQA, which allows any standard Python code quality tool to be run on a Jupyter Notebook. Finally, I will demonstrate how to use it within a workflow which lets practitioners keep the interactivity of their Jupyter Notebooks without having to sacrifice their maintainability.

This talk was presented at ML conf EU 2020.

FAQ

What are the main challenges of keeping Jupyter Notebooks maintainable?
One major challenge is version control, as traditional git diff commands produce unclear diffs with Jupyter Notebooks. Another challenge is the lack of integrated code-quality tools, which are commonly used with Python scripts.

What is nbQA?
nbQA is a tool that allows you to run typical Python code-quality tools on Jupyter Notebooks by temporarily converting them into Python scripts. It supports tools like black, isort, pyupgrade, and flake8 to ensure code quality.

How does pre-commit help?
pre-commit runs specified code-quality checks automatically before a commit is accepted, ensuring that notebooks maintain consistent code quality. It uses a configuration file to specify the tools and versions for the checks.

Do Jupyter Notebooks belong in a maintainable workflow?
Yes, Jupyter Notebooks are excellent for data visualization and can be used as a development environment for writing maintainable code, not just for testing. Tools like nbQA and nbdime enhance their functionality and maintainability.

Why are Jupyter Notebooks important for data science?
Jupyter Notebooks allow for interactive data visualization, which is crucial for understanding data beyond just summary statistics. This interactive environment supports better insights and storytelling with data.

Can notebook diffs be reviewed on GitHub?
Yes, tools like nbdime offer GitHub integrations for reviewing pull requests, making it easier to manage notebook versions and collaborate on projects hosted on GitHub.

How do you view a readable diff between notebook versions?
Using a specialized tool like nbdime can simplify version control by providing a more visually understandable view of diffs between notebook versions. The tool can be called from the command line and the diff viewed in a web browser.

Marco Gorelli · 26 min · 02 Jul, 2021

Video Summary and Transcription
The video discusses how to avoid having an unmaintainable Jupyter Notebook by addressing key challenges such as version control and code quality. One major challenge is that traditional git diff commands produce unclear diffs with Jupyter Notebooks, but using a specialized tool like nbdime can simplify version control by providing a more visually understandable view of diffs. Another challenge is the lack of integrated code-quality tools, which can be tackled using nbQA. This tool allows you to run typical Python code-quality tools on Jupyter Notebooks by temporarily converting them into Python scripts. The pre-commit tool can also help maintain code quality by running checks automatically before a commit is accepted. Jupyter Notebooks are crucial for data science, allowing for interactive data visualization and better storytelling with data. The video also mentions tools for integrating Jupyter Notebooks with GitHub, such as nbdime for reviewing pull requests. For long-term reproducibility, it might be better to keep code in a notebook rather than moving it to a Python package.

1. Introduction to Jupyter Notebooks

Short description:

We will discuss the importance of Jupyter Notebooks and the challenges of maintaining them. Then, I will demonstrate a workflow for keeping your Jupyter Notebooks maintainable.

Hello, friends. We are here today to talk about Jupyter Notebooks and how to keep them maintainable. We will start with a motivating example, in which I'll make the case for why you might care about using Jupyter Notebooks in the first place. Then, I'll address a couple of challenges which people often bring up when trying to keep their Jupyter Notebooks maintainable.

The first one has to do with version control, and anyone who's tried to look at the difference between two notebooks using git diff will know what I'm talking about. It's not easy. The second has to do with continuous integration and, more specifically, the lack of code-quality tools which are available to run on Jupyter Notebooks.

So, then, finally, I will demonstrate a workflow for keeping your Jupyter Notebooks maintainable. Let's dive straight in with our motivating example. I've prepared a pretty standard data science workflow here, absolutely standard. We'll go through it in a second. Now, you might be wondering why I'm showing you an absolutely standard data science workflow, and bear with me, there might be a twist at the end, might. So let's go through it.

2. Analyzing Summary Statistics

Short description:

We start by reading in four CSV files using Pandas read CSV. We print out summary statistics for all four data sets, which show that they are pretty similar.

We start by reading in four CSV files using Pandas read CSV, pretty standard. Each of these has two columns, x and y, pretty standard. So then we'll print out some summary statistics, so we'll print out the mean of x, the mean of y, the standard deviation of x, the standard deviation of y, and the correlation between x and y. We will do this for all four data sets, still pretty standard.

And then, using Scikit-learn, for each of these data sets we will fit a linear regression model, also pretty standard, and we will print out the mean squared error, also absolutely standard.
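For reference, a minimal sketch of what such a script might look like, assuming four hypothetical files data1.csv to data4.csv (the file names are placeholders, not taken from the talk):

```python
# Sketch of the "summary statistics only" workflow described above.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

for i in range(1, 5):
    df = pd.read_csv(f"data{i}.csv")  # each file has columns x and y

    print(f"dataset {i}:")
    print("  mean x :", df["x"].mean())
    print("  mean y :", df["y"].mean())
    print("  std x  :", df["x"].std())
    print("  std y  :", df["y"].std())
    print("  corr   :", df["x"].corr(df["y"]))

    model = LinearRegression().fit(df[["x"]], df["y"])
    mse = mean_squared_error(df["y"], model.predict(df[["x"]]))
    print("  MSE    :", mse)
```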

So where's the twist? Well, let's see what happens if we run this using Python. Right, look at that. If we look at what's been printed on the console, we'll see that the mean of x is the same for all four data sets, but so is the mean of y, the standard deviation of x, the standard deviation of y, the correlation between x and y, and the mean squared error from having fit a linear regression model is also almost identical. So if we look at this, we can tell that the four data sets must be pretty similar. That's what these summary statistics are telling us.

3. Analyzing Data Sets in Jupyter Notebooks

Short description:

Let's try doing the analysis in a Jupyter notebook instead of a Python script. We'll visualize the data sets and the linear regression lines. The plots reveal that the data sets are not the same, highlighting the importance of visualization. Jupyter notebooks can be criticized for version control issues, as shown by the diff after a trivial change.

Now, let's try doing something slightly different. Let's repeat this analysis, but instead of doing it in a Python script, let's do it in a Jupyter notebook. We'll do the same thing. We'll just read in these data sets using pandas.read_csv and we'll fit a linear regression model using scikit-learn. But then, instead of just printing out some summary statistics, we will visualize our data sets and we will also visualize the linear regression lines which we will have fit. And because we just printed out the summary statistics and they were the same for all four data sets, we expect the four plots to look almost identical. So, let's go. Ready, set, go.
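For reference, the extra visualization cell might look roughly like this (again a sketch, reusing the same hypothetical file names):

```python
# Sketch of the notebook cell: scatter each dataset and overlay its fitted line.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

fig, axes = plt.subplots(2, 2, figsize=(10, 8))

for i, ax in enumerate(axes.flatten(), start=1):
    df = pd.read_csv(f"data{i}.csv")
    model = LinearRegression().fit(df[["x"]], df["y"])

    xs = np.linspace(df["x"].min(), df["x"].max(), 100)
    ax.scatter(df["x"], df["y"])
    ax.plot(xs, model.predict(xs.reshape(-1, 1)), color="red")
    ax.set_title(f"dataset {i}")

plt.show()
```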

Oh, what's going on? Looks like maybe these four data sets aren't that similar after all. If we contrast this with what we saw a second ago, when we just printed out some numbers to the console, now we can tell that the four data sets aren't actually the same. They just happen to have some shared characteristics. But when we were just relying on single numbers as summary statistics, we couldn't tell that. Yet it's frustratingly common to see data science workflows in which people will just load in data, fit a model, and then print out a few numbers without ever bothering to visualize it. So that's the motivating example. I hope it has highlighted the importance of visualizing your data. And Jupyter notebooks are a great way to do that. But if Jupyter notebooks are so great, why do they sometimes get criticized? Well, I said earlier that they pose a problem when it comes to version control. And for anyone who's not tried doing that before, let's see together what I mean. Let's save the notebook as it is, and let's make a commit: git commit -m "run notebook". Now let's make an absolutely trivial change. Let's just add a little line here setting a figure title, something like fig.suptitle("DataFrames"). You know, really small change. Let's run the cell again. The only thing that's changed is that I've added this title. If this was a Python script and we had just changed one line of code, then if we did git diff, we would see a really small diff. However, this is not a Python script. It's a Jupyter notebook. And so if we save and do git diff, look at what happens.
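In shell terms, the steps just described are roughly the following (the notebook name and commit message are placeholders):

```sh
git add notebook.ipynb
git commit -m "run notebook"
# ...add fig.suptitle("DataFrames") in the notebook, re-run the cell, save...
git diff notebook.ipynb   # the raw JSON/base64 diff, shown next, is unreadable
```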

4. Challenges with Jupyter Notebooks and Solutions

Short description:

We encounter a horrendous diff between notebooks, which makes me want to stop using Jupyter notebooks. However, a specialized tool called nbdime provides a visually pleasing view of the diff, making Jupyter notebooks more desirable. Another challenge is the lack of code quality tools for notebooks. However, a tool called nbQA can convert notebooks to Python scripts, run code quality tools, and reconstruct the notebook. This allows for continuous integration and code quality checks in Jupyter notebooks.

We get this absolutely horrendous, unreadable raw diff. I look at this and I have no idea what's going on. It makes me want to stop using Jupyter notebooks forever. However, all is not lost: maybe it's not so much that Jupyter notebooks don't work under version control, maybe it's just that we need a more specialized tool.

And one such tool, which I will present to you today, is called nbdime. The way nbdime works is you call it from the command line as nbdiff-web (let me just allow that), and then you will get a URL which you can open up in your browser, and now we get a visually pleasing, easy-to-understand view of the diff between the notebooks. Now, if we look at this, it's absolutely clear that just one line of code has changed. We can also easily compare the diff in the outputs and see that just the title has changed. This is much easier to read compared to what we had a couple of minutes ago. Instead of that absolutely unreadable diff, we now have something visually pleasing which makes me want to use Jupyter Notebooks again. Great.
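A sketch of the invocation (the notebook file names are placeholders; nbdime can also diff git revisions and plug into git itself, which is not shown here):

```sh
pip install nbdime
# Compare two versions of a notebook and view the rendered diff in the browser:
nbdiff-web notebook_before.ipynb notebook_after.ipynb
```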

So it wasn't that Jupyter Notebooks didn't work with version control. It was more that we needed a specialized tool. So that's the first challenge when it comes to Jupyter Notebooks which I brought up earlier. Let's now look at the next one. Because if you're keeping things in version control, then chances are you're not just looking at the diff between versions of your code, you'll also be running continuous integration. If you're used to doing continuous integration on your Python scripts, then likely you'll be used to running a whole suite of linters and formatters on your code like black, isort, flake8, pyupgrade, mypy, the list goes on. If you tell someone who's used to doing that that all of a sudden they need to switch over to using Jupyter notebooks, for which they won't have available that large suite of tools, then they might quite rightly feel like crying.

But does that mean that all is lost, or again, does it mean that we just need a more specialized tool? Let's see. We would need a tool which would temporarily convert your notebook to a Python script and store it in a temporary directory, run these code quality tools on it, reconstruct the notebook, and pass on the output. And one such tool, which I'll present to you today, is called nbQA. Let's have a little look at how that works. I've prepared a notebook here for you, which produces a pretty plot at the end, taken from the matplotlib gallery, but inside it I've written some deliberately messy code. Let's have a look at what happens when we run nbQA and some code quality tools on it. You can run nbQA from the command line. In fact, you don't even need your notebook open or to have a Jupyter instance running. So, let's see what happens. Let's auto-format it using black. Then let's sort the imports using isort. We will then upgrade the syntax using pyupgrade, and then finally we will run flake8, which will not modify our notebook.
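The commands in this part of the demo look roughly like this (the notebook name is a placeholder; on the 0.3.x releases discussed here, the formatters only modify the notebook in place once mutation is enabled, for example via the pyproject.toml configuration shown later):

```sh
nbqa black notebook.ipynb       # auto-format code cells
nbqa isort notebook.ipynb       # sort imports
nbqa pyupgrade notebook.ipynb   # modernise syntax
nbqa flake8 notebook.ipynb      # report style violations (does not modify the notebook)
```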

5. Using Precommit for Future Code Quality

Short description:

We remove an unused import and use nbdime to view the diff between notebooks. The imports are sorted, the unused import is removed, the outdated constructor is replaced, and the inconsistent indentation is sorted out. To ensure future code quality, we can use the pre-commit tool, which runs code quality checks automatically and blocks commits if they don't pass. By enabling pre-commit in our workspace, we can ensure that our notebooks continue to pass code quality checks. Pre-commit can also be used during continuous integration to validate incoming changes.

It'll just let us know if there are any style guide violations. And, in fact, it tells us that there's an import which is unused, Seaborn. So, we can open up our notebook again, remove this unused import, and now, well, let's see what's changed.

To see what's changed, we're going to do git diff, except we're not going to do git diff. I just told you that we should be using nbdime to view the diff between notebooks. So, let's use it. Let's open up this link in our browser, and now, let's see what's changed.

So, first of all, you'll see that the imports have now been sorted thanks to isort. This unused import has been removed thanks to flake8. This outdated constructor of this dictionary has been replaced with a more modern one thanks to pyupgrade. This inconsistent indentation has been sorted out thanks to black, and all of a sudden, the code style feels a lot more uniform, and it's going to be easier to keep this at a consistent quality. It's going to be easier to compare diffs when different people have been modifying it, if the style is consistent.

Great, except we don't just want our notebook to be of a certain code quality today. We want to make sure that it stays this way in the future, and a popular way of doing that is via a tool called pre-commit. The way pre-commit works is you need a .pre-commit-config.yaml file in which you specify the repositories which host the code quality tools which you want to run on your files. So here I'll be using nbQA. You specify a revision; at the moment I'm putting 0.3.3, but you should always check to see what the latest one is, and probably put that one. And then you specify which hooks you want to run. So I'll be running nbqa-black, nbqa-isort, nbqa-pyupgrade and nbqa-flake8. So this is exactly what we had earlier, but now I've put it in my pre-commit file.
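A sketch of the .pre-commit-config.yaml just described (check the nbQA releases for the current revision; 0.3.3 is simply the version mentioned in the talk):

```yaml
repos:
  - repo: https://github.com/nbQA-dev/nbQA
    rev: 0.3.3
    hooks:
      - id: nbqa-black
      - id: nbqa-isort
      - id: nbqa-pyupgrade
      - id: nbqa-flake8
```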

So what will happen now is that if we make a commit which stages a notebook, then pre-commit will run all of these code quality checks automatically and it will block our commit if they don't all pass. Except, sorry, we need to enable pre-commit in our workspace for that to work. So let's git reset the notebook. Right, now let's add it again. Let's commit. All right, and now you'll see that it has run our code quality tools. I needed to do this twice to get them all passing, and the second time pre-commit let us actually make the commit. So if you use pre-commit, you will make sure that not only do your notebooks pass your code quality checks today, but also that they will continue passing your code quality checks in the future. You can also run pre-commit during your continuous integration, and so you'll make sure that any incoming change to your repository will pass these checks.
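Enabling the hooks and committing looks roughly like this (the commit message is a placeholder):

```sh
pip install pre-commit
pre-commit install             # enable the hooks for this repository
git add notebook.ipynb
git commit -m "tidy notebook"  # pre-commit runs the nbQA hooks and blocks the commit if any fail
```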

6. Configuring NBQA and Code Quality Tools

Short description:

Is it really as simple as running nbqa black on your notebooks? I've hidden away some complexity in the pyproject.toml file, where you can configure nbQA and other code quality tools. You can let nbQA modify your notebook in place by specifying it in the mutate section. Extra command line arguments can be added in addopts.

Now, is it really this simple? Is it really as simple as just running nbqa black, and then you can use black on your notebooks just as you would normally use black on your Python scripts? I have a confession to make. I've actually hidden away a little bit of complexity from you in the pyproject.toml file. You can configure nbQA entirely within this file. It's the same file you can use to configure your black formatter. So if any of your tools take config files, you can put them here. If you want any code quality tool to modify your notebook in place, you can let nbQA know here in the mutate section. Notice that I haven't put flake8, because flake8 just analyzes our notebook without actually modifying it. And then, if you want to pass any extra command line arguments, you can put them here in addopts.
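For the 0.3.x series of nbQA mentioned above, that configuration might look roughly like this (a sketch; newer nbQA releases have since dropped the mutate section, so check the documentation for your version):

```toml
# pyproject.toml
[tool.nbqa.mutate]           # allow these tools to modify the notebook in place
black = 1
isort = 1
pyupgrade = 1

[tool.nbqa.addopts]          # extra command-line arguments passed to each tool
pyupgrade = ["--py36-plus"]
```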

7. Importance of Jupyter Notebooks and Challenges

Short description:

Jupyter notebooks are crucial for data science because they allow for data visualization; their maintainability challenges can be addressed with nbdime, and code quality tools can be run on them via nbQA.

Great. So, in conclusion, we have seen how Jupyter notebooks play an integral role in data science. This is because they allow you to visualize your data, which helps you understand it in a way that simply printing some summary statistics to your console does not. We also saw that Jupyter notebooks present some challenges when it comes to keeping them maintainable. Namely, that viewing the diff between two notebooks is hard, and we saw how we can address this using nbdime. And also that there's a lack of code quality tools available for Jupyter notebooks; we saw how we can keep our same Python code quality tools but just run them on our Jupyter notebooks via nbQA. I've included links to the homepages of nbdime and nbQA here, as well as to this presentation. That's it from me. Now please go out and write a maintainable Jupyter notebook. Good to have you.

QnA

NBQA Integration and Benefits

Short description:

We've had PyMC3, Alibi, sktime, pandas-profiling and NLP Profiler use nbQA as part of their continuous integration workflows. We're looking at introducing a GitHub action for nbQA. nbdime has a GitHub integration for reviewing pull requests. It's free for open source projects but not for private projects. nbQA is similar to integrating an IDE into a Jupyter Notebook and adding some steroids. Jeremy Howard describes Jupyter Notebooks as an embodiment of the literate programming environment envisioned by Donald Knuth. Any tool that helps us program in a more comfortable and maintainable way within a notebook would be welcome.

So we're going to jump to the questions from our audience. Are you ready? Sure, it's good to be here. Good, good.

Question one. Have you introduced nbQA to the other data scientists in your workplace? How much has it helped in their workflow? Sure. So I have only recently spoken about it at work, and so there's limited buy-in. At the moment most of the buy-in has been in the open source world. We've had PyMC3, Alibi, sktime, pandas-profiling and NLP Profiler use it as part of their continuous integration workflows. I suspect that most of the buy-in is probably going to be there. We're looking at introducing a GitHub action, and hopefully that'll help bring it to more people. Yeah, so, can you elaborate on that? Because you were showing how to use it locally, but you can use it on GitHub in the future? Yeah, sure. So, yeah, in the future there will be a GitHub action. This is with reference to nbQA specifically. The other tool I showed, nbdime, which, just to clarify, I'm not affiliated with, I'm not a co-author of that one, has a GitHub integration which you can use to review pull requests on GitHub. There are some libraries such as PyMC3 which use that quite heavily. It's free for open source projects but not for private projects. If you want to use that in your workplace, then you will have to make the case to your employer as to why they should pay for it. Well, I can be very convincing, so that's not a problem.

Next question. Would it be safe to say nbQA is similar to integrating an IDE into a Jupyter Notebook and adding some steroids? I'm surprised no one came up with this before. Amazing work. Oh, well, thank you. That's very kind of you. I would like to think of it that way. I think... I'm trying to think of his name. The guy who did fast.ai, Jeremy Howard, he describes Jupyter Notebooks as being an embodiment of the literate programming environment, which was envisioned by Donald Knuth, if I'm not mistaken. And I think it's a pity that a lot of the standard development practices, which are available to us when we're programming in Python scripts, are not so readily available when we're programming in Jupyter Notebooks. And given some of the benefits that they provide when doing data science, I think that any tool which helps us program in a more comfortable and maintainable way within a notebook, I'd like to think it would be welcome. Okay.

Moving Code from Notebook to Python Package

Short description:

When deciding whether to move code from a Jupyter Notebook to a Python package, consider the long-term need for reproducibility. If the code is part of a report or analysis that needs to be produced consistently over time, it may be better to keep it in the notebook. However, if you want a self-contained solution or if the code is not directly related to data science, migrating it to a Python package might be more appropriate. Jeremy Howard and the Fast.AI team have tools for creating packages from notebooks, although I haven't personally used them yet.

Next question is from our audience member, Dido. Any recommendations on when to move code from Notebook to a Python package? That's a good question. I mean, my main use for notebooks is when I have some report or some analysis that I want not just to be able to produce today, but also that I want to be able to produce one month, two months from now and know that when I try to produce it again in two months it won't suddenly break. So with this use case in mind, I wouldn't typically migrate what I have in a notebook to a Python package. My usual thinking for making a Python package is when I want something somewhat self-contained that isn't part of an analysis or some model while the kind of work I do in a Jupyter notebook is more to do with pure data science. So I wouldn't typically migrate a notebook to a Python script. However, Jeremy Howard and the Fast.AI team, they do have some way of actually creating a package from a Jupyter notebook. They are very prolific in the number of tools that they put out. So there is a possibility for that. It's just not something I've used yet in my own work.
