But most importantly, an application can be whatever you're thinking of. I hope that listening to this talk, you might be coming up with ideas of things that you'd like to build using a brain sensor. It's a field that's growing with more and more devices, advancements in hardware and software. And I'd love to see more people come up with ideas for this technology.
Remember that you can use JavaScript in a lot of different contexts, and that's what's awesome about it. It's a real advantage to being a JavaScript developer in this space. We build websites that run on desktops, laptops, iPads, phones. You can build front-end and back-end things. You can build Electron apps. You can build AR and VR prototypes. You have frameworks to make music in JS. You can control robots. I mean, it's really super exciting. And it's not something that developers using a different language can necessarily do, because they might not have access to an ecosystem as great as the one JavaScript has.
So I've talked a lot about what's possible, but before I finish, let's briefly talk about the limits of this technology. First of all, it's important to know that the quality of the data is different with external brain sensors than it is with implants. Implants have direct access to the brain, as the device is placed right on the brain, whereas an external brain sensor gets data that has to pass through layers of skin, skull, and hair. So if you find a demo built by a company that uses implants, you have to be aware that the quality of the data you'll get with an external brain sensor is different, and that will impact what you can do.
Another limit is latency. The data is lossless, which is great, you get all of it, but it arrives with delays, and that will have an impact on the kind of applications you can build. If you're building things around focus or calm, I guess it doesn't really matter if your application changes state a second late, but if you're building applications that are really time-sensitive, working with delayed, timestamped data might be a difficulty, so it's something to think about.

When you're getting into something like this, you also have to remember that in a lot of ways, we still don't understand everything about the brain. We can't really detect things like, I want to trigger something when I'm thinking about the beach. It doesn't really work like that, so the types of thoughts we can detect at the moment are more limited. That's something you have to keep in mind when you're getting into a space like this.
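To make that timing point concrete, here's a minimal sketch of how you might guard a time-sensitive application against stale sensor readings. The sample shape and the latency budget are my own assumptions for illustration, not any real sensor SDK.

```javascript
// Hypothetical EEG sample: each reading carries the timestamp (in ms)
// at which the sensor captured it. A time-sensitive app can drop
// samples that arrive outside its latency budget.
function isFresh(sample, now, maxLatencyMs) {
  return now - sample.timestamp <= maxLatencyMs;
}

function filterFresh(samples, now, maxLatencyMs = 100) {
  return samples.filter((s) => isFresh(s, now, maxLatencyMs));
}

// Example: three samples, one of which arrived 250 ms late.
const now = 1000;
const samples = [
  { timestamp: 990, value: 0.42 }, // 10 ms old
  { timestamp: 750, value: 0.17 }, // 250 ms old: stale
  { timestamp: 950, value: 0.33 }, // 50 ms old
];
console.log(filterFresh(samples, now).length); // → 2
```

For a calm/focus app you could relax the budget to a second or more; for anything interactive, you'd tune it much tighter.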
And finally, mental commands need training. If you build an interface around calm or focus, remember that those work the same for all users, so you don't need any training; the user just has to wear the sensor. But if you want something a bit more personalized, a bit more complex, the fact that the user will have to do some training before being able to use your application might be a limitation.
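That distinction, built-in metrics working out of the box versus custom commands needing training, shapes the API of an application. Here's a small sketch of how you might model it; the class and method names are my own illustration, not a real sensor SDK.

```javascript
// Hypothetical brain-sensor client. Built-in metrics like calm work
// for every user immediately; personalized mental commands are only
// usable after the user completes a training session.
class BrainClient {
  constructor() {
    this.trained = new Set();
  }

  // Calm/focus: no per-user training required, subscribe right away.
  onCalm(callback) {
    this.calmCallback = callback;
  }

  // Record that the user has trained a custom command.
  train(command) {
    this.trained.add(command);
  }

  // Custom commands are rejected until the user has trained them.
  onCommand(command, callback) {
    if (!this.trained.has(command)) {
      throw new Error(`Command "${command}" requires training first`);
    }
    this.commandCallback = callback;
  }
}

const client = new BrainClient();
client.onCalm((score) => console.log(`calm: ${score}`)); // works immediately
client.train("push");
client.onCommand("push", () => console.log("pushed!")); // ok after training
```

The point is that the training step is a real piece of onboarding you'd have to design for, not just a sensor detail.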
The biggest opportunity that I see in this industry at the moment is that as JavaScript developers, we can help shape the future of brain-controlled interfaces. It's still in the early stages, but it's moving fast, and we're in a position where we can give feedback to the different companies about what we'd like to be able to do with a brain sensor. We can help shape what the web would look like if it was brain-controlled, and we can contribute to the different open source packages, et cetera. To me, that's really a big opportunity. I'm going to stop here, but hopefully this gives you a better idea of what is possible when we talk about brain control on the web. And I hope you're as excited as I am about the possibilities that this technology opens up.