Video Summary and Transcription
The video explores controlling apps with your mind, diving into the future of UI and UX. It emphasizes the transition from traditional 2D mediums to immersive VR and AR UI experiences. The speaker discusses the use of EEG headsets like the Muse for measuring brain waves through electrodes. This data, processed with a bandpass filter and a fast Fourier transform, can be used to control devices like drones. The talk also highlights the potential of adaptive UI and mind reading in enhancing user interaction. Machine learning plays a crucial role in classifying brain-wave data for practical applications. For those interested in learning more, the speaker recommends topics like React Native, AR, Bluetooth, and drones, and platforms like Egghead.io.
1. Introduction to Controlling Apps with Your Mind
Hi, everyone. Today, I want to talk about controlling apps with your mind and the future of UI and UX. We are in a transition phase, exploring new horizons and dimensions. We have the ability to adapt and transform things into something totally different. The foundations of UI and UX are based on a 2D medium, even though we have fake 3D elements.
Hi, everyone. I'm really excited to be here at React Summit Remote Edition, streaming from outer space. And today, I want to talk about controlling apps with your mind. My name is Vladimir Novik. I'm a software architect and consultant at Vladimir Novik Labs. I'm a Google Developer Expert, author, and engineer, and on a daily basis I work in the web, mobile, VR, AR, IoT, and AI fields.
I'm also CTO and co-founder of EventLoop, where we are creating a robust, performant, feature-rich online conference experience. Basically, instead of using Zoom for your virtual conference, we bring you a set of conference tools, plugins, widgets, and so on that help you organize and attend the conference. So whether you're an organizer, a speaker, or an attendee, reach out to us and sign up for our alpha. It will be an open-source product, so if you want to collaborate, you are welcome. You can find us at eventloop.ai or on Twitter at eventloopHQ.
Today, I want to talk about the future. I think we are in a kind of transition phase, technology-wise, where we are exploring new horizons and new mediums: we have VR, AR, mixed reality, web, mobile. Everything is constantly changing, and that's why we are sort of the ones who break the rules to make the new ones. Let's think about what medium we will use in the future. What dimensions will we use? Maybe VR? Maybe AR? Maybe something different. I'm just thinking out loud here, but quantum computing is getting traction, VR is getting traction, AR is getting traction. Everything is changing. Which dimensions will we use? Will the web exist as it is today, or mobile, or will we completely switch to a different medium? How should we prepare for that transition? How should we adapt? Should we adapt, or should we actually shatter the foundations that we have? Maybe invent new techniques to manipulate things, to interact differently? Maybe new UX patterns, maybe new best practices?
So, we have this ability right now to change things, to adapt and transform things into something totally different. Let's talk about what the very foundations of UI and UX are. I think it's the 2D medium. If you think about it, we've come a long way from cave drawings to mobile apps. But all the color theory, the lines, the history of art, everything that created the foundation of design, and everything on our screens is basically a 2D medium, because we fake the third dimension. It's not really a third dimension. We have 3D models in the browser or in headsets or wherever, but all the shapes and depth are based on shaders, which are basically functions of how light is reflected. So it's kind of fake, right? And we have things on our phones, we have screens, and everything is 2D.
2. The Future of UI and UX in VR and AR
XR is adding a new dimension to UI and UX in VR and AR. We need to create reality-based interactions in these mediums. Understanding the different dimensions and limitations is crucial. Adaptive UI and mind reading are also emerging trends.
And XR is adding a new dimension to that. So how have we adapted? We took existing forms. Let's say you have a sign-up form: in VR, we have this form hovering in thin air. Realistic? Not really. It's something we've carried over from the 2D medium, right? Or in AR, we have arrows pointing in different directions that completely break immersion. But we have them because we've adapted, and we haven't invented something new.
So I think it's crucial to create some sort of reality-based interactions in VR and AR. Think about it: if you need to log in inside VR, currently you have a login form with a hovering keyboard, you type your username and password on this keyboard, and you get inside. But is that something you would see in reality? Not really, right? It's more realistic to have some kind of password or key that you place on the spot, or a key you turn inside a door, and it lets you through. These are reality-based interactions. So we need to understand our reality in order to create them, and the medium is completely different.
Now, in VR there is another dimension: something happening behind the viewer. I'm looking at the camera, but something is happening behind me, so I cannot use color theory to create an amazing call-to-action button animation. I need to use different things like haptics, sounds, maybe slowing time, and so on. There is also adaptive UI. Adaptive UI is something that is used on the web, and the idea is that the UI learns from what you are doing with it. Forms learn and adapt. You can google that; it's kind of a new trend. Another thing that I propose is actual mind reading. Obviously, I cannot literally read your thoughts, but I can to some degree. And I want to ask you: what is this? Obviously, we're online, so you can answer in the chat. I'll pause. So, it's the known universe.
3. Understanding Neurons and EEG Headsets
All these dots are clusters of galaxies, but they look remarkably like the neurons in our brains. Neurons work in pairs, with excitatory neurons releasing glutamate and creating a dipole. The resulting potential change can be measured by electrodes on our skull. To analyze it further, we use EEG headsets, such as the Muse, which is designed to help with meditation.
All these dots are clusters of galaxies, and it looks amazing. But what is this? It looks pretty much the same, right? But it's actually the neurons in our brains. So, we are the universe. Act accordingly.
So, how do the neurons in our brain work? Neurons come in pairs. There is an excitatory neuron that releases glutamate and creates a dipole: basically a plus and a minus. It sort of acts like a battery, so you have a potential change between different neurons.
This creates a potential change that can be measured by electrodes on our skull, and it looks like this if you measure your brain. This is the wake state, and this is the sleep state. You can see they're different, right? But the signal is sort of random-ish, so we need to analyze it to figure out what it all means.
To do so, we will use EEG headsets. There are lots of consumer and research versions of EEG headsets, and the idea is to put electrodes on your skull and, based on that, measure the potential change under the skull. Research EEGs look like this, and they're quite costly. But there are consumer ones, and I actually have one here. It's called Muse. It's a nice product that helps you with meditation: if you meditate, it helps you to focus and so on. I will read my brain waves, and you will see what that looks like. It's fairly cheap, it has only five electrodes, and that's about it. But it's good for our example.
4. Connecting to Muse via Bluetooth
Now, I don't need to open my skull and plug it in; I can connect to it using Bluetooth. We're talking about experiments, right? It's a thought experiment, literally, but also an experiment in where technology will lead us. I can measure, to some degree, what's happening in my brain. I created a little app for this. Before connecting to the Muse, I want to see all the readings from this headset. I use the muse-js library, which exposes the readings as an RxJS stream that I can subscribe to.
So how do we connect this thing to our brain? Now, I don't need to open my skull and plug it in; I can connect it using Bluetooth, and specifically we will use Web Bluetooth. I can use it by accessing the navigator.bluetooth object and calling requestDevice, filtering by the Muse service. Then I just connect and read some attributes over Bluetooth.
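To make that concrete, here is a minimal sketch of the pairing step. It assumes the Muse's primary GATT service UUID is 0xfe8d, which is what muse-js filters on; treat the exact UUID as an assumption for other hardware.

```ts
// Minimal sketch: pair with a Muse headset over Web Bluetooth.
const MUSE_SERVICE = 0xfe8d; // GATT service UUID that muse-js filters on (assumed here)

async function pairWithMuse(): Promise<BluetoothRemoteGATTServer> {
  // Opens the browser's device chooser, showing only devices exposing the Muse service.
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: [MUSE_SERVICE] }],
  });
  // Connect to the GATT server; services and characteristics are read from it.
  return device.gatt!.connect();
}
```

In practice, muse-js wraps this pairing step for you; the sketch just shows what happens underneath.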
Now, the support is not quite there yet. We have it in Chrome and, for some reason, Opera, but the rest don't support it. But we're talking about experiments, right? It's a thought experiment, literally, but also an experiment in where technology will lead us. So let's look at the demo. I have this playground here, and I can pair with my headset, and you will see the brainwaves coming through. These are my actual brainwaves. As you can see, when I talk, the waves change. When I blink, you see these tiny voltage spikes. If I do something like this, you see higher spikes. So I can measure, to some degree, what's happening in my brain, which is pretty nice.
But what do I do with this data? I created a little app here. Before connecting to the Muse, let me explain what I want to do: I want to see all the readings from this headset. I will subscribe to the readings and just console.log them. I use the muse-js library, which exposes the readings as an RxJS stream that I can subscribe to, and I log whatever muse-js gives me. So let's connect and see what we have here.
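As a rough sketch of that step (following muse-js's documented MuseClient API, though this isn't the speaker's exact code):

```ts
import { MuseClient } from 'muse-js';

async function logReadings() {
  const client = new MuseClient();
  await client.connect(); // opens the Web Bluetooth pairing dialog
  await client.start();   // starts streaming EEG data from the headset
  // eegReadings is an RxJS Observable of { electrode, timestamp, samples } packets.
  client.eegReadings.subscribe((reading) => {
    console.log(reading);
  });
}
```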
5. Measuring Brain Waves and Blink Detection
To measure things more precisely, we use a bandpass filter to cut out frequencies and focus on the spikes. These spikes are then divided into epochs to get a time reference. By applying a fast Fourier transform, we can convert the data to the frequency domain and recognize different brain waves. Each wave represents a different state of mind, such as sleep, heightened perception, or relaxation. Before exploring ML, I will demonstrate how to subscribe to alpha-wave readings and detect blinks using filtering techniques.
And as you can see, similar to the graph earlier, the data is quite noisy, so we can't really do anything with it as is. So the question is: how do we measure things more precisely? To do that, we will use a bandpass filter. We cut out the frequencies outside the range we care about, as on this graph, and keep only the spikes.
Then we need to cut all of this into epochs, which are basically timeframes, because we want a time reference: if there is a spike within a specific window of time, that's probably a blink, right? So we cut the signal into epochs, and we also pass it through a fast Fourier transform. The data we get is in microvolts, and we want to convert it to the frequency domain, so we apply the fast Fourier transform. Instead of the raw data, we then see the different frequencies.
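Here is a rough, self-contained sketch of the epoch-plus-FFT idea (the bandpass step is omitted). A naive DFT is used for clarity; a real app would use a proper FFT implementation or a signal-processing library. The 256 Hz rate is the Muse's documented EEG sampling rate.

```ts
const SAMPLING_RATE = 256; // Muse streams EEG at 256 Hz

// Convert one epoch of microvolt samples into per-frequency-bin magnitudes.
function dftMagnitudes(epoch: number[]): number[] {
  const n = epoch.length;
  const magnitudes: number[] = [];
  for (let k = 0; k < n / 2; k++) { // bins up to the Nyquist frequency
    let re = 0;
    let im = 0;
    for (let t = 0; t < n; t++) {
      const angle = (2 * Math.PI * k * t) / n;
      re += epoch[t] * Math.cos(angle);
      im -= epoch[t] * Math.sin(angle);
    }
    magnitudes.push(Math.sqrt(re * re + im * im) / n);
  }
  return magnitudes;
}

// Bin k corresponds to k * SAMPLING_RATE / epoch.length Hz, so with a
// 256-sample epoch each bin is 1 Hz wide.
```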
Now we can recognize different brain waves based on these frequencies. They are split into gamma, beta, alpha, theta, and delta, and each one corresponds to a different state of mind. For instance, delta is sleep, loss of body awareness, repair, and so on. Gamma is heightened perception, learning, problem solving, task processing. As you can see, they are not super distinctive; each covers a broad range. Beta is generally being awake, and alpha is being relaxed, so I can measure the alpha state and see if I'm relaxed. We can react to brainwave spikes, and we can also feed this data into ML.
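As a sketch, the bin magnitudes from the previous step can be averaged into these classic bands. The frequency cutoffs below are commonly cited approximations (exact boundaries vary by source), and hzPerBin follows from the epoch length and sampling rate (1 Hz per bin for a 256-sample epoch at 256 Hz).

```ts
// Approximate EEG band boundaries in Hz (assumption; sources differ slightly).
const BANDS: Record<string, [number, number]> = {
  delta: [1, 4],
  theta: [4, 8],
  alpha: [8, 12],
  beta: [12, 30],
  gamma: [30, 45],
};

// Average the DFT magnitudes that fall inside each band.
function bandPowers(magnitudes: number[], hzPerBin: number): Record<string, number> {
  const powers: Record<string, number> = {};
  for (const [band, [low, high]] of Object.entries(BANDS)) {
    const bins = magnitudes.slice(
      Math.floor(low / hzPerBin),
      Math.ceil(high / hzPerBin),
    );
    powers[band] = bins.length ? bins.reduce((sum, m) => sum + m, 0) / bins.length : 0;
  }
  return powers;
}
```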
But before doing some amazing stuff with ML, I want to show you something. I want to subscribe to focus, which basically gives me the alpha-wave readings. I also want to subscribe to blinking. In order to detect a blink, I take the readings, filter them to keep only the electrode above my left eye, and take the maximum value, which is the spike. Then I use the RxJS switchMap operator to isolate the spike; I don't really care about the rest of the data, just the spike. If there is a spike, that is what I return. Here is how it looks. I need to remove this one, let's connect again, and see what happens. If I blink, you can see me blinking here.
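A hedged sketch of that blink detection follows. It assumes the electrode above the left eye is AF7 (channel index 1 in muse-js's ordering) and uses a simple threshold filter rather than the speaker's switchMap-based approach; the threshold value is purely illustrative.

```ts
import { MuseClient } from 'muse-js';
import { filter, map } from 'rxjs/operators';

const LEFT_EYE_ELECTRODE = 1; // AF7 in muse-js channel order (assumption)
const BLINK_THRESHOLD = 500;  // µV, illustrative — tune for your own headset placement

async function watchBlinks() {
  const client = new MuseClient();
  await client.connect();
  await client.start();

  client.eegReadings
    .pipe(
      // Keep only packets from the electrode above the left eye.
      filter((reading) => reading.electrode === LEFT_EYE_ELECTRODE),
      // Reduce each packet to its largest absolute voltage — the spike.
      map((reading) => Math.max(...reading.samples.map(Math.abs))),
      // Treat anything above the threshold as a blink.
      filter((peak) => peak > BLINK_THRESHOLD),
    )
    .subscribe(() => console.log('blink!'));
}
```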
6. Feeding Data to Machine Learning
We want to feed the brain-wave data to machine learning. I will connect to my Muse headset, get all the data, pass it through the necessary filters, and add it as a sample to a KNN classifier. By classifying the data, I can determine which card I'm thinking about. The main topics of interest turn out to be VR/XR and IoT/AI.
So, as you can see, it sometimes misses, because there's a threshold and maybe I didn't place it close enough to my skull. So, that's the blinking. Now we want to feed this data to machine learning. To do so, I will go to my app.js and add my prediction panel to predict some stuff. Here I have three cards: one is web and mobile, another is VR/XR, and another is IoT/AI. What I will try to do is connect to my Muse headset, get all the data, pass it through all the filters I need, and then add it as a sample to a KNN classifier, which is a machine learning algorithm, and start classifying which card I'm thinking about. So let me record these waves really quick. I'll click on this button while looking at web and mobile. Now VR/XR, and now IoT/AI. Now, if I click on classify, I can switch cards by just looking at them, hands staying here, so I can just look at the different cards and switch between them. And, as you can see, the main topics that I'm interested in are VR/XR and IoT/AI.
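One plausible way to wire that up is shown below. The talk doesn't name the exact library, so this sketch assumes TensorFlow.js's KNN classifier and uses band-power vectors as features; the card labels are hypothetical.

```ts
import * as tf from '@tensorflow/tfjs';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

const classifier = knnClassifier.create();

// Record a labelled training sample: a vector of band powers for one epoch,
// tagged with the card the user was looking at while recording.
function addSample(bandPowerVector: number[], label: string) {
  classifier.addExample(tf.tensor1d(bandPowerVector), label);
}

// Classify a fresh epoch and return the predicted card label.
async function classify(bandPowerVector: number[]): Promise<string> {
  const result = await classifier.predictClass(tf.tensor1d(bandPowerVector));
  return result.label; // e.g. 'web-mobile', 'vr-xr', or 'iot-ai' (hypothetical labels)
}
```

Recording a handful of samples per card while looking at it, then calling classify on each new epoch, reproduces the "switch cards by looking at them" behavior described above.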
7. Mind Control with AR and Learning Resources
So, this is quite cool. I have a store here with an enable-drone flag, and I have a drone here, this tiny fellow. I will connect to my Muse headset, connect to my drone, record brainwaves, and start classifying. The main takeaway is that the future is already here, and you are the ones to build it. Thank you. We're going to bring Vlad back to the stage for one quick question. After seeing something like this, where does somebody even start if they want to learn this for themselves? You can broaden your horizons by learning React Native and AR, or React with Bluetooth and drones. There are resources like Egghead.io and workshops available. Feel free to reach out to me on Twitter for guidance and learning materials. Thank you, Vlad.
So, this is quite cool, but if it works, I will bring another level of coolness here. I have a store here with an enable-drone flag, and I have a drone here, this tiny fellow. It only occasionally works, so let's see if it works this time. What I will do is first connect to my Muse headset, trying not to blink, and then connect to my drone. Okay, so here we have the drone, and hopefully you can see it. I will record brainwaves and start classifying. Now, I hope it's in the camera view. Okay, now I will try to move it by just looking at it, and let's land it. And it fell. I'm not sure if it was in view, so let me try to put it in the view again. Now it's probably here and, again, I'm just moving it with the power of my thoughts. So that was quite cool. And the main takeaway here is: what is the purpose of all of this? Why are we flying drones with our minds? Why are we using these devices? They are not that reliable, and we have Web Bluetooth support only in Chrome. The main reason is that we are the ones who make the rules, and we break the rules and invent new things. My main takeaway from this talk is that the future is already here, and you are the ones to build it. Thank you.
All right. That was incredible. I didn't think that we were going to see somebody fly a drone with their mind this early in the morning. Unfortunately, we don't have much time, but we're going to bring Vlad back to the stage for one quick question. And then we're going to move on to our next session.
So Vlad, after seeing something like this, the mind control with the AR, where does somebody even start if they want to learn this stuff for themselves? I mean, the whole point of this talk was that you can change the world, basically, right? You are the one who can change everything. Then you need to decide what you want to learn, technology-wise. Do you want to get more experienced just in React, or do you want to broaden your horizons? For instance, if you want to do React Native and AR, or React with Bluetooth and drones, you can find different resources. There's Egghead.io, which is an amazing website where I will also be doing courses. Actually, I will be doing a workshop on React Native and AR pretty soon; I created it for React Summit, so I will send the link in the community channel. If you want to join that workshop, you can do so. I've also given a bunch of talks, and I have a YouTube channel where I've streamed about VR, and I'll be recording a bunch of stuff on this topic, because I like to teach these things. That's also why I started a Twitch channel. So that's one of the places, but obviously there are lots of places to learn. And if you really want to get into this, just DM me on Twitter. Let me know what you want to do as far as technology is concerned, and I can probably direct you to learning materials, free resources, and so on. Excellent. Well, Vlad, thank you so much. We really appreciate it. I wish we had more time for Q&A.