So in this other example, it was a clone of Beat Saber. Originally the UI was an open source repo on GitHub that was supposed to be connected to a headset using WebVR, and you were supposed to be able to connect the app to joysticks using JavaScript. But I didn't have a VR headset, and I thought, well, I have hands, so I can just play Beat Saber with my hands. So I basically used two of the key points from PoseNet, my right wrist and my left wrist, took those coordinates, translated them into the VR world, and that let me play Beat Saber with my hands.
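The translation step can be sketched roughly like this: pick the two wrist keypoints out of a PoseNet-style pose (PoseNet labels them `leftWrist` and `rightWrist`) and map their pixel positions into scene coordinates. The video size, the [-1, 1] scene range, and the mirroring are assumptions for illustration, not the exact code from the demo.

```javascript
// Sketch: map PoseNet wrist keypoints to scene coordinates.
// The video dimensions and the [-1, 1] target range are assumptions.
const VIDEO_WIDTH = 640;
const VIDEO_HEIGHT = 480;

function getWrists(pose) {
  // Keep only the two keypoints we care about for the sabers.
  const byPart = {};
  for (const kp of pose.keypoints) byPart[kp.part] = kp.position;
  return { left: byPart.leftWrist, right: byPart.rightWrist };
}

function toSceneCoords(position) {
  // Translate pixel coordinates into a [-1, 1] range, flipping both
  // axes so the movement feels mirrored, like looking in a mirror.
  return {
    x: -((position.x / VIDEO_WIDTH) * 2 - 1),
    y: -((position.y / VIDEO_HEIGHT) * 2 - 1),
  };
}

// Example: a minimal fake pose object in PoseNet's output shape.
const pose = {
  keypoints: [
    { part: "leftWrist", position: { x: 480, y: 120 } },
    { part: "rightWrist", position: { x: 160, y: 120 } },
  ],
};
const wrists = getWrists(pose);
// toSceneCoords(wrists.left) → { x: -0.5, y: 0.5 }
```

In the real demo you would run this every frame and use the results to position the saber objects in the 3D scene.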
Another model that you can use is one that is specifically around key points on the hands, so right hand and left hand, and here I think you have about 21 key points per hand, and you can build something like this. So I started thinking, well, what if you could augment the tools you already use by adding some kind of motion detection to them? That was interacting with Figma. It's not necessarily that all of a sudden you'd build entire interfaces just with your fingers, but what if you were just augmenting the kinds of things you can already do?
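The hand model returns those 21 landmarks as `[x, y, z]` arrays per hand, with index 4 being the thumb tip and index 8 the index fingertip. A pinch can then be sketched as a simple distance check between those two tips; the pixel threshold here is an assumption you'd tune for your camera and how far you sit from it.

```javascript
// Sketch: detect a pinch from the 21 hand landmarks.
// Landmark 4 = thumb tip, landmark 8 = index fingertip.
// The threshold is an assumed value, tuned per setup.
const PINCH_THRESHOLD = 30;

function distance(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1]);
}

function isPinching(landmarks) {
  return distance(landmarks[4], landmarks[8]) < PINCH_THRESHOLD;
}
```

While `isPinching` is true you could forward the fingertip position to the canvas as a drag, and drop the shape when the pinch releases.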
In this example, I think I was using just a pinch gesture to be able to drag shapes around, and then here in this GIF, when I move my hands like this but with a palm gesture, that's a zoom. And you also have to write that logic yourself: what even is a palm gesture versus, say, a fist? Then you measure the distance between your hands and you can, you know, do whatever you want with it, really.
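One way to write that logic, as a sketch: call a hand an open palm when the fingertips sit far from the wrist (landmark 0) relative to the palm size, and derive the zoom from the distance between the two hands. The fingertip indices match the 21-landmark layout, but the ratio threshold and the normalisation by the wrist-to-middle-finger-base distance (landmark 9) are assumptions of this sketch, not the speaker's exact heuristic.

```javascript
// Sketch: open palm vs fist, plus a two-hand zoom factor.
// Fingertip landmarks in the 21-point hand layout.
const FINGERTIPS = [8, 12, 16, 20];

function dist(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1]);
}

function isOpenPalm(landmarks) {
  // Average fingertip distance from the wrist, normalised by the
  // palm size (wrist to middle-finger base) so the check works at
  // any distance from the camera. The 1.5 ratio is an assumption.
  const wrist = landmarks[0];
  const palmSize = dist(wrist, landmarks[9]);
  const avgTip =
    FINGERTIPS.reduce((sum, i) => sum + dist(wrist, landmarks[i]), 0) /
    FINGERTIPS.length;
  return avgTip / palmSize > 1.5; // fingers extended → open palm
}

function zoomFactor(leftHand, rightHand, baseline) {
  // Distance between the two wrists, relative to where the gesture
  // started, gives a zoom multiplier.
  return dist(leftHand[0], rightHand[0]) / baseline;
}
```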
And finally, another model you can use is around face detection key points. This one has more than, I think, 450 key points all around the head, and what I focused on specifically was building a tool around gaze detection. And with this you end up with even more questions, not only technically how do you build it, but also in terms of human-computer interaction and user experience. If you only have two inputs, left or right, how do you write something? So here it's kind of reimagining the keyboard as well. You know, if you have a keyboard and you want to select the very last letter, you could technically, by looking right, go letter by letter, but that would take a while.
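That letter-by-letter approach can be sketched in a few lines, which also makes the cost obvious: reaching the last letter takes 25 "right" inputs, whereas a binary-split layout (keep halving the remaining letters) would need only about log2(26) ≈ 5 inputs per letter. The clamping behaviour is a design assumption, not the speaker's exact implementation.

```javascript
// Sketch: a keyboard driven by only two inputs, "left" and "right".
const LETTERS = "abcdefghijklmnopqrstuvwxyz".split("");

function stepCursor(cursor, direction) {
  // Clamp at the ends rather than wrapping, so one accidental extra
  // glance can't jump from "z" back to "a". (Assumed design choice.)
  if (direction === "right") return Math.min(cursor + 1, LETTERS.length - 1);
  if (direction === "left") return Math.max(cursor - 1, 0);
  return cursor;
}

// Selecting the last letter from the start takes 25 "right" inputs.
let cursor = 0;
for (let i = 0; i < 25; i++) cursor = stepCursor(cursor, "right");
// LETTERS[cursor] is "z"
```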