Building Brain-controlled Interfaces in JavaScript


Neurotechnology is the use of technological tools to understand more about the brain and enable a direct connection with the nervous system. Research in this space is not new; however, its accessibility to JavaScript developers is.

Over the past few years, brain sensors have become available to the public, with tooling that makes it possible for web developers to experiment with building brain-controlled interfaces.

As this technology is evolving and unlocking new opportunities, let's look into one of the latest devices available, how it works, the possibilities it opens up, and how to get started building your first mind-controlled app using JavaScript.

This talk was presented at JSNation Live 2021; check out the latest edition of this JavaScript conference.

FAQ

The speaker is Charlie Gerard, a senior frontend developer at Netlify and a member of the Google Developer Experts group in Web Technologies.

The focus of the talk is on how to use brain-controlled interfaces to interact with devices and interfaces using JavaScript.

Brain sensors are devices containing electrodes that are placed on the scalp to transform electrical signals from the brain into digital data that can be used for various applications.

The talk focuses on the Neurosity Notion brain sensor; Neurosity has since released a newer model called the Crown.

Common applications include detecting focus to optimize workflow, scrolling web pages for accessibility, triggering music effects, and password-less authentication using brainwaves.

The Neurosity Notion detects focus and calm states by analyzing brainwave patterns that are common among people, allowing it to provide probability scores for these states without requiring additional training.
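
In JavaScript, reading those probability scores comes down to a short subscription. Below is a minimal sketch, assuming Neurosity's @neurosity/notion SDK and placeholder environment variables for the device ID and credentials; the calm() and focus() calls follow its documented Observable-style API, but exact payload shapes may differ between versions.

```js
// Minimal sketch: subscribing to calm and focus probabilities
// with the @neurosity/notion SDK (credentials are placeholders).
const { Notion } = require("@neurosity/notion");

const notion = new Notion({ deviceId: process.env.NEUROSITY_DEVICE_ID });

async function main() {
  await notion.login({
    email: process.env.NEUROSITY_EMAIL,
    password: process.env.NEUROSITY_PASSWORD,
  });

  // Both metrics emit a probability between 0 and 1 and need no training.
  notion.calm().subscribe((calm) => {
    console.log("calm probability:", calm.probability);
  });

  notion.focus().subscribe((focus) => {
    console.log("focus probability:", focus.probability);
  });
}

main().catch(console.error);
```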

Mental commands are specific thoughts or imagined movements that can be trained using the Neurosity Notion's UI or API. Users alternate between focusing on a command and resting while the device records brain data to train its machine learning model.
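
Once a command has been trained, its predictions can be consumed from code in a similar way. This is a minimal sketch, again assuming the @neurosity/notion SDK; the "rightFoot" label is an assumption, so use whichever command you actually trained.

```js
// Sketch: reacting to a trained mental command ("rightFoot" is an
// assumed label — replace it with the command you trained).
const { Notion } = require("@neurosity/notion");

const notion = new Notion({ deviceId: process.env.NEUROSITY_DEVICE_ID });

async function main() {
  await notion.login({
    email: process.env.NEUROSITY_EMAIL,
    password: process.env.NEUROSITY_PASSWORD,
  });

  notion.kinesis("rightFoot").subscribe((intent) => {
    // Fires when the model predicts you are thinking about the movement.
    console.log("Mental command detected!", intent);
  });
}

main().catch(console.error);
```

From there it is regular JavaScript: the callback could just as well scroll a page, trigger a sound, or toggle a light.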

Yes, there is potential for using brain-controlled interfaces in medical fields, such as detecting states of depression or providing real-time feedback on a patient's emotional state during psychiatric or psychological sessions.

Limitations include lower data quality compared to implants, potential delays in data processing, the need for training mental commands, and the current inability to detect complex thoughts like specific imagery.

JavaScript developers can help shape the future by providing feedback to companies, contributing to open-source projects, and developing creative applications that utilize brain sensor technology.

Charlie Gerard
27 min
09 Jun, 2021


Video Summary and Transcription

Learn how to build brain-controlled interfaces using JavaScript and brain sensors. Understand the functions of different parts of the brain and how they relate to sensor placement. Explore examples of calm and focus detection, as well as the Kinesis API for mental commands. Discover the applications of brain-controlled interfaces, such as scrolling web pages and password-less authentication. Understand the limits and opportunities of brain control and the potential for using brain sensors in medical applications.

1. Introduction to Brain-Controlled Interfaces

Short description:

Learn how to build brain-controlled interfaces using JavaScript. Charlie Gerard, senior frontend developer at Netlify, shares insights on using brain sensors to transform brain activity into digital data. Discover the Neurosity Notion, a commercial brain sensor, and how the number of electrodes impacts its use cases.

Hi everyone, thanks for joining me today to learn more about how to build brain-controlled interfaces using JavaScript. Before we dive into this topic, here's a little bit more about me. My name is Charlie Gerard. I'm a senior frontend developer at Netlify. I'm also part of the Google Developer Experts group in Web Technologies. It's a community group that's sponsored by Google for developers who would like to give back to the community in different ways. I'm also the author of a book about TensorFlow.js for JavaScript developers.

Most of all, I spend a lot of my personal time building and researching prototypes about human-computer interaction, also called HCI. That's the study of the design and use of computer technology, focused on the interfaces between people and computers. It can involve a lot of things like AR, VR, interactive arts, machine learning, et cetera. I've been interested in this since I started learning to code. Throughout the years, my research has led me to the topic of today. It has nothing to do with my day job at Netlify, but hopefully this talk will show you that you can use your JavaScript skills for a lot of different things.

The focus of today is our brain and how to use it to interact with interfaces directly using JavaScript: how we can get data directly from our brain activity and write some JavaScript code to use it to interact with interfaces or devices. How do we even get this data from our brain? We do this with the help of brain sensors. These are devices that contain electrodes that you place on the scalp. In contact with the skin, they are able to transform the electrical signals coming from the brain into digital data that we can work with. On this slide, I put a few of the commercial brain sensors that you can buy currently. You can see that they come in different shapes and have different numbers of electrodes. That will impact what you're able to track and what kind of applications you're able to build with them. There are probably more brain sensors available out there, but these are the ones that I have mostly heard of or played with. The one that this talk is going to focus on is the one on the bottom right, called the Neurosity Notion. They recently released a new model called the Crown, so if you're ever interested in buying it, it might be called the Crown now, but I experimented with one of their very first versions, which was called the Notion. To understand how the number of electrodes impacts the use cases, let's talk briefly about how that works. In the context of the Notion device, I highlighted in green the placement of the electrodes based on their reference number on the 10-20 EEG system. This system is a reference in neurotechnology, a kind of map representing the placement of electrodes on a user's head.
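
As a first step in code, the headset has to be claimed by its device ID and the owner authenticated before any brain data can be read. Here is a minimal connection sketch, assuming the @neurosity/notion SDK and placeholder credentials; the status() subscription follows the SDK's Observable pattern, but the payload fields shown in the comment are assumptions.

```js
// Sketch: pairing with the Notion/Crown and checking it is online
// before streaming any brain data (credentials are placeholders).
const { Notion } = require("@neurosity/notion");

const notion = new Notion({ deviceId: process.env.NEUROSITY_DEVICE_ID });

async function main() {
  await notion.login({
    email: process.env.NEUROSITY_EMAIL,
    password: process.env.NEUROSITY_PASSWORD,
  });

  notion.status().subscribe((status) => {
    // Assumed payload fields: something like
    // { state: "online", battery: 87, charging: false, ... }
    console.log("Device status:", status);
  });
}

main().catch(console.error);
```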

2. Brain Sensors and Data Analysis

Short description:

Learn about the different brain sensors and their placement on the head. Understand the functions of different parts of the brain and how they relate to sensor placement. Explore the raw data and available features in the Neurosity Notion UI, including focus and calm detection. Discover the process of training custom mental commands using the Notion headset.

At the top is the front of your head, and at the bottom is the back. Each electrode has a reference letter and number. These are important because they give you an idea of the type of brain waves you can track, depending on the area of the brain the electrodes are closest to.

So the Notion has eight electrodes, four on the left side of the brain and four on the right side, mostly focused on the top and the front of the head. This is important to know because, depending on the placement of the electrodes, you will get data from different parts of the brain, which means that what you can interpret from that data will vary. Here I made a small animation to explain what I'm talking about. Different parts of the brain have different purposes: at the front you have the frontal lobe, the cerebellum is at the lower back, the parietal lobe is at the top, etc. You don't have to know this by heart, and it might not mean too much to you right now, but these areas are in charge of different physiological functions.

So, for example, the frontal lobe is in charge of voluntary movement, concentration, and problem solving. The parietal lobe at the top is more focused on sensations and body awareness. And the temporal lobe is the one on the side that receives sensory information from the ears and processes it into meaningful units such as speech and words. So depending on what you'd like to track or build, you will want to check different brain sensors' electrode positions to see whether they focus on the area of the brain that you're interested in. For example, one of the brain sensors on one of the previous slides is called NextMind, and it mostly focuses on the occipital lobe at the middle back, because they claim to focus on the user's vision to try to predict what somebody is looking at.

So anyway, now that we've talked about brain sensors, what does it look like for us as JavaScript developers? With the Neurosity Notion you have access to a UI in which you can see different graphs. Here is the part of the UI where you can see your raw brain waves. You can see the different lines: there are eight of them, and each label corresponds to the name of an electrode position based on the 10-20 EEG system that I talked about a few slides ago. So this represents a graph of the raw data coming live from the brain sensor. But in general, when you get started in this space of neurotechnology, you don't start straight away experimenting with raw data. Most of the brain sensors out there have implemented things like focus detection or calm detection that you can use without having to build your own machine learning model. Focus and calm detection don't need any training because they rely on a pattern of brain waves that is pretty common amongst everybody. However, custom mental commands have to be trained. So what do I mean by that? I won't bother reading the entire list, but for the Notion headset the commands you can train are focused on imagining specific movements. So you can see biting a lemon, or pinching your left fingers, or thinking about pushing something in space. For example, here's what training the right foot mental command looks like. You can also do it with their API, but in general, to do it faster, you do it through their UI. You have two animations playing every few seconds to guide you into what you're supposed to do, and you have to alternate between states of focusing on that command, so thinking about tapping your right foot on the floor, and resting, where you're supposed to try to think about nothing at all.
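
To go beyond the built-in metrics, the same SDK also exposes the raw brainwave stream that feeds this graph. Below is a minimal sketch, again assuming the @neurosity/notion package; the epoch shape used here (one array of samples per channel, plus channel names under info) is an assumption based on its documented API and may differ between versions.

```js
// Sketch: logging raw brainwave epochs, one array of samples per electrode.
const { Notion } = require("@neurosity/notion");

const notion = new Notion({ deviceId: process.env.NEUROSITY_DEVICE_ID });

async function main() {
  await notion.login({
    email: process.env.NEUROSITY_EMAIL,
    password: process.env.NEUROSITY_PASSWORD,
  });

  notion.brainwaves("raw").subscribe((epoch) => {
    // Assumed shape: epoch.info.channelNames lists the eight electrode
    // labels (10-20 positions) and epoch.data holds one array per channel.
    epoch.info.channelNames.forEach((channel, i) => {
      console.log(channel, epoch.data[i].length, "samples");
    });
  });
}

main().catch(console.error);
```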


Check out more articles and videos

We constantly curate articles and videos that might spark your interest, skill you up, or help you build a stellar career.

Crafting the Impossible: X86 Virtualization in the Browser with WebAssembly
JSNation 2022
21 min
ChirpX is a technology to securely run binary code in the browser, written in C++ and compiled to JavaScript WebAssembly. It can run a full virtualized system in the browser, including Bash and other languages like Python and JavaScript. ChirpX aims for scalability and the ability to work with large code bases, supporting multiprocessing and multithreading. It uses a two-tiered execution engine with an interpreter and a JIT engine. Future plans include running the full X.Org server in the browser and implementing the Windows system call. WebVM, the underlying technology, has a virtual file system backed by Cloudflare.
TensorFlow.js 101: ML in the Browser and Beyond
ML conf EU 2020
41 min
TensorFlow.js enables machine learning in the browser and beyond, with features like face mesh, body segmentation, and pose estimation. It offers JavaScript prototyping and transfer learning capabilities, as well as the ability to recognize custom objects using the Image Project feature. TensorFlow.js can be used with Cloud AutoML for training custom vision models and provides performance benefits in both JavaScript and Python development. It offers interactivity, reach, scale, and performance, and encourages community engagement and collaboration between the JavaScript and machine learning communities.
Makepad - Leveraging Rust + Wasm + WebGL to Build Amazing Cross-platform Applications
JSNation 2022
22 min
Welcome to MakePad, a new way to build UI for web and native using WebAssembly and Rust. JavaScript is not suitable for complex applications like IDEs and design tools. Rust, a new programming language, was used to reimagine MakePad, resulting in a fast and efficient platform. MakePad offers live editing, high CPU performance, and the ability to load native instrument components. The future of MakePad includes an open-source release, a design tool, and support for importing 3D models.
WebHID API: Control Everything via USB
JSNation 2022
23 min
Today's Talk introduces the webHID API, which allows developers to control real devices from the browser via USB. The HID interface, including keyboards, mice, and gamepads, is explored. The Talk covers device enumeration, input reports, feature reports, and output reports. The use of HID in the browser, especially in Chrome, is highlighted. Various demos showcase working with different devices, including a DualShock controller, microphone, gamepad, and Stream Deck drum pad. The Talk concludes with recommendations and resources for further exploration.
How I've been Using JavaScript to Automate my House
JSNation 2022
22 min
The Talk covers various experiments with JavaScript and C++, including controlling lights and creating a car control system. The speaker shares his experiences with home automation and the challenges of hiding wires. He explores using JavaScript with Esperino for face recognition and discusses the benefits and limitations of the platform. The Talk concludes with suggestions for using JavaScript in hardware projects and learning opportunities.
Webdevelopment Tailored for 2024
React Summit 2024
7 min
Today's Talk covers native features in browsers, including upcoming ones and the Interop Project. Native features offer speed, compatibility, and improved debugging. Notable examples include dynamic viewport units, flexbox/grid gap, container queries, and motion path. The Interop Project ensures consistent web platform features across different browsers. Upcoming features like native CSS nesting and entry/exit animations are on the horizon.