Keyglove Serial Communication Protocol Draft

One of my main goals for the Keyglove is to allow it to be used on as many devices and operating systems as possible with no complicated driver installation or configuration every time you set it up. Using a USB connection and Human Interface Device (HID) profiles, along with the Bluegiga WT12 Bluetooth module that also supports HID profiles, the Keyglove can appear to the host devices as a standard keyboard, mouse, or joystick, none of which require drivers. This allows for basic usage with almost no work.

But what about special types of usage? What about reconfiguring the Keyglove’s behavior, even if most of the time you do just want to use it as a keyboard and mouse? What about extending the capabilities beyond what I imagined, or implementing your own special driver for your particular hardware or software application? Obviously a HID-only approach is too limited. So, we turn to basic serial communication, which is one of the easiest methods to communicate with a hardware device in a way that pretty much anyone can work with, regardless of the platform. It works the same over Bluetooth as it does over a USB virtual serial port (which is built into the main Keyglove processor), and it’s also easy to adapt to a direct connection to another microcontroller, should anyone want to do that.
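As a rough illustration of how easy such a serial interface is to consume, here's a minimal C++ sketch that splits one ASCII command line into a verb and its arguments. The protocol itself is still a draft, so the command strings in the test are placeholders, not actual Keyglove commands:

```cpp
#include <string>
#include <vector>
#include <sstream>

// Split one ASCII command line into whitespace-separated tokens:
// the first token is the command verb, the rest are its arguments.
std::vector<std::string> tokenize_command(const std::string &line) {
    std::vector<std::string> tokens;
    std::istringstream iss(line);
    std::string tok;
    while (iss >> tok) tokens.push_back(tok);
    return tokens;
}
```

Any host that can open a serial port and send lines of text could drive an interface like this, which is the whole point of falling back to plain serial.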

Acceleration, Velocity, Position, and Dead Reckoning

The core of the Keyglove’s motion capture system is made up of a digital accelerometer and gyroscope. Each of these devices measures only one quantity—the accelerometer measures linear acceleration, while the gyroscope measures rotational velocity. This means that if the accelerometer is perfectly still and perfectly level, it will experience only the acceleration due to gravity (9.8 m/s² downward), and if the gyroscope is held still in the same position, it will register no measurable rotation at all. All of the math for dealing with velocity as it relates to position applies to gyroscopes just as it does to accelerometers, but for the purpose of this post, I’ll focus only on the accelerometer.
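To make the acceleration-to-position relationship concrete, here's a minimal dead-reckoning sketch in plain C++. It assumes gravity has already been removed from the samples and that samples arrive at a fixed interval dt—both simplifications compared to the real device:

```cpp
#include <cstddef>

struct State {
    double v;  // velocity (m/s)
    double x;  // position (m)
};

// Integrate n acceleration samples (m/s^2, gravity already removed)
// into velocity and position, using a fixed timestep dt (seconds).
State integrate(const double *accel, std::size_t n, double dt) {
    State s{0.0, 0.0};
    for (std::size_t i = 0; i < n; ++i) {
        s.v += accel[i] * dt;  // velocity is the integral of acceleration
        s.x += s.v * dt;       // position is the integral of velocity
    }
    return s;
}
```

Note that any measurement error is accumulated through both integrations, which is why pure dead reckoning drifts over time and needs periodic correction.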

Controller Code Updates

I’ve spent a good deal of time lately completely rearranging the Keyglove controller code into a whole bunch of logically separated modular units to make everything more manageable. I know there is still plenty more code to write before all of the desired features are working, but even without that code it was becoming difficult to navigate and edit efficiently. When I brought the LUFA library into things and wrapped it around the existing Arduino sketch, I knew I had to do something drastic. So, here’s an overview of the changes.

Practicing with the Keyglove

This weekend, I finished most of the sensor mounting on my Prototype B glove. Actually, it should probably be called something more like “Prototype G” because of how many different things I’ve tried since Prototype A, but I’m going to stick with “B” since this is still only the second glove I’ve got quite this far on. All of the sensors are mounted and wired except for the three large ones on the palm. It’s the first time since my original attempt that I’ve been able to connect and test a nearly complete array of sensors, and I must say that it does work better than my first one.

However, I must also say that it still needs some work, specifically in the following areas:

  • The sensor size and placement are significantly better, but a few of them still need to be moved a bit due to the structure of the hand and the way the thumb reaches each of the other sensors.
  • The tiny stranded wire is more flexible than the larger solid wire I had before, but it is actually quite fragile (as far as wire goes, anyway). I’m going to try some enameled magnet wire instead.
  • This glove is nylon, not cotton. I’ve decided that I like the cotton one better, so I’m going to switch back to that for more testing.
  • I really need a faster way to connect and disconnect the sensor pins from the Arduino board. I think I may get some screw terminal sets and build a header breakout board or something.

Overall though, it’s a good step forward. With some more software modifications and a hacked-together serial interface program, I was even able to do some real typing in regular programs with the glove. Mouse movement should come soon as well. Now, I do want to reiterate here that using a serial driver (or foreground program, as I’m doing here) to type characters and move the mouse is not a permanent solution. I fully intend to make this into a complete OS-independent device that communicates with standard input device protocols—PS/2, USB, and Bluetooth. I’m only using a serial interface right now because it was easy and it works.

One thing I discovered in my five elated minutes of typing is that I have no idea how to type efficiently with the Keyglove. I expected this, of course, since I’ve never used it in a real typing environment before. I do have the beginnings of a letter-frequency-based touchset created, but I haven’t been able to practice with it at all since I didn’t have a fully working glove. This revelation of my sad inability prompted me to begin building a practice tool.

So, last night, I took a picture of each base touch combination as represented by my hand (60 in all), and then this afternoon I modified them to be more visually informative. Then, I created unique versions of the main sensor diagram to go along with each photo in order to eliminate any possible confusion as to what the photo is trying to show.

This evening, I put all of these images together into a very basic Training page. It’s currently only a full list of all 60 base touch combinations, but it will eventually grow into an interactive training app online. I may also make a standalone application for this, but I believe I can accomplish at least all of the training aspects with a web app. Customizing touchsets is another matter, but I’ll take care of that once I get a little farther along.

Take a look at the table on the Training page and see if there are any combinations that are very difficult for you to do. If there are, let me know. I’d love to get some feedback on the feasibility of what I have in mind. Although I’ve tried to stick with combinations that seem pretty easy, I am double-jointed (some say disturbingly so) and I might just be oblivious to the fact that some of them are impossible for normal people to make.

Once I make a few more changes to the Training page to add at least a minimal amount of interactivity, I’ll post the results of my first few practice sessions. My next immediate goal here is to create another proof-of-concept video that shows real typing, not just sensor touch recognition in a debug window.

Automatic Sensor Source Code Generation

At the same time as I’ve been trying all kinds of different approaches to sensor material and attachment (including paint, glue, wire, thread, tape, and fabric), I’ve been working on customization and efficient touch sensor test code as well. In a post quite a while ago, I mentioned that after spending some time figuring out all of the ergonomically possible combinations of touches, I actually came up with many hundreds of unique possibilities—a lot more than I’d anticipated.

As you might guess, it’s not very easy to work efficiently with hundreds of unique tests in the source code, especially after I came across a couple of base 1-to-1 combinations that I’d missed before. Making one slight change to the foundation of the system results in a tremendously complicated series of changes all through the code. I have to keep track of which pins correlate to which sensors, which base combinations exist, which complex combinations exist, and which order to check each of them in.

This last consideration has been the most difficult. I spent a few hours this past Wednesday working on the touch priority, and while it wasn’t difficult to do for the base combinations, it became extremely complicated as soon as I started to get into the complex ones.

So, I created a PHP script to automate the entire sensor code generation process. I realize that a PHP script may not be a good final solution, but I chose that approach because I use the language every day and it’s very easily accessible to me. I don’t know if web-based Keyglove configuration will be possible in the future, but if not, I can at least port it to other languages.

Anyway, the basic idea of the script is to use the sensor array, the base combinations, and a fun set of “impossible combination” arrays to build everything automatically. This takes care of literally all of the code necessary to paste into the Arduino IDE. On top of that, the PHP script also builds only the combinations needed for whatever touchset you want to define. That means smaller, simpler, more efficient code.

One of the problems I’ve had is in figuring out how to have the minimum possible code while still allowing touchset customization. It is possible to have a very flexible array list or hash table using pure Arduino code, but for a limited hardware scenario like the Arduino, it’s a bad idea to use dynamic memory allocation and complex data types all over the place. It can seriously slow down the process. So, for the moment, I’m relying on hard-coded sensor control. Actually, except for the touchset portion, the hard-coded output this script generates is perfectly acceptable in any situation. I’d like to make the touchset code more dynamic, but that can wait for now.

The most complicated part of the script is the set of “impossible combination” arrays. It is an associative array with one keyed element for each base touch combination, and each of those elements contains a list of other base touch combinations that are impossible while the element’s key touch combination is active. For example, the “AY” combination (index tip to thumb tip) would include “Y1” as an impossible combination (thumb tip to top of palm). Obviously, you cannot touch your index fingertip and your palm both at the same time with your thumb tip—unless you are freakishly double-jointed, anyway, and even then not in a very controlled and repeatable fashion.
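The actual script is PHP, but a minimal C++ sketch of the same lookup might look like this. Only "AY" and "Y1" come from the example above; any other combination names in the test are placeholders:

```cpp
#include <map>
#include <set>
#include <string>

// Keyed by an active base touch combination; the value is the set of
// other base combinations that cannot physically occur at the same time.
std::map<std::string, std::set<std::string>> impossible = {
    {"AY", {"Y1"}},  // index tip + thumb tip rules out thumb tip + palm
};

// A candidate combination is possible unless the currently active
// combination explicitly lists it as impossible.
bool is_possible(const std::string &active, const std::string &candidate) {
    auto it = impossible.find(active);
    if (it == impossible.end()) return true;
    return it->second.count(candidate) == 0;
}
```

A generator can walk a table like this to prune entire branches of the touch-test tree, which is what keeps the emitted Arduino code small.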

The rest of the script uses a recursive build function and a few convenient features of PHP to directly create all of the code, complete with the correct touch test order. It’s beautiful. And, more importantly, it will save me hours and hours of time and lots of frustration trying to figure out everything manually.

I hope to have a good test video up within a week showing some real typing with the glove. That, of course, is contingent on finishing a good Prototype B. I’ve been trying so many different things lately that I think maybe I ought to take a few hours and simply build something that fundamentally works even if I’m only 80% satisfied with it. The last 20% will come later.

VIDEO: Data Visualization

Yet another new video demonstrating more refined accelerometer cursor control:

Keyglove #03 – Data Visualization from Jeff Rowberg on Vimeo.

This video demonstrates the massively updated Processing sketch that I am now using to test the glove’s sensors. It relies on serial data flow for everything and connects directly to the Arduino’s virtual COM port, making it unsuitable for a finished product (you’d want a true HID-compatible connection). However, with this tool, you can very easily see everything about the glove’s current condition to test things like touch sensitivity and mouse control.

I have temporarily postponed the PS/2 interface tests while I work on the core of the glove functionality. Now that I have a reliable test tool, the PC interface is not necessary for continued development and testing. Obviously, it will be eventually, but I’m not going to focus on that right now. The Processing sketch will do nicely until everything else is working well.

For anyone interested, that visualization program is available from the repository, and of course the Processing IDE is open-source and available here.

VIDEO: Accelerometer Tests

New video demonstrating accelerometer cursor control, complete with jitter reduction:

Keyglove #02 – Accelerometer Tests from Jeff Rowberg on Vimeo.

Although the real mouse interface isn’t working, I’ve rigged up a Processing sketch to graphically represent the same movement that will be affecting the mouse cursor once I get that part working. This means I can at least test the accelerometer code I have.

This test shows raw input control from the accelerometer, followed by 5-point averaging, then 20-point averaging, then graduated averaging depending on speed. The idea is to make the movement very responsive if you’re moving it quickly, but much less jittery if you’re trying to hold it in one place. Therefore, it averages only 5 points (fast) during quick movements, but averages 20 points (precise and still) during very slow movement.
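Here's a rough C++ sketch of that graduated-averaging idea on a single axis. The 5- and 20-point windows match the description above, but the speed threshold is a placeholder value, not the one the real sketch uses:

```cpp
#include <cmath>
#include <cstddef>
#include <deque>

class GraduatedAverage {
public:
    double filter(double sample) {
        // Estimate speed as the change from the previous raw sample.
        double speed = history.empty() ? 0.0 : std::fabs(sample - history.back());
        history.push_back(sample);
        if (history.size() > 20) history.pop_front();

        // Fast movement -> short window (responsive);
        // slow movement -> long window (steady).
        std::size_t window = (speed > 2.0) ? 5 : 20;
        if (window > history.size()) window = history.size();

        double sum = 0.0;
        for (std::size_t i = history.size() - window; i < history.size(); ++i)
            sum += history[i];
        return sum / static_cast<double>(window);
    }

private:
    std::deque<double> history;
};
```

A smoother variant would blend the window size continuously with speed instead of switching between two fixed sizes, which is closer to "graduated" averaging.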

This video demonstrates the “tilt translated to position” approach described in the previous blog post.

I’ll be making that Processing sketch available in the Google Code repository after I make a couple more tweaks to it. That should be extremely useful for testing the whole glove without any specific hardware interfaces in place. I want to add a raw accelerometer reading graph, 3D tilt display, and of course regular touch displays as well. Processing is pretty awesome, really.

Incorporating an Accelerometer

The accelerometers that I ordered from SparkFun arrived late last week. I got two of them: one analog (ADXL335) and one digital (ADXL345). The analog one is supposed to be much simpler to use, and it’s a little bit ($3) cheaper, but the digital one requires less power, is less prone to errors, and has more features (including a 0g detection interrupt). I ordered the analog one, then decided to give the digital one a shot even though they said it’s more complicated. I considered cancelling the order for the analog one, but I realized that I’d really like both so I can see the differences in capability first-hand.

So, all that being said, I tried to make the digital one work first, because I know I’ll be able to make the other one work. I do love a challenge.

It turned out to be much easier than I was afraid it would be, thanks to detailed information from Euristic at Live Fast, Code Young. With his code and instructions (with slight modifications for the Mega board pin differences), I had a test program reading data from the accelerometer in 10 minutes of effort. Amazing.

The ADXL345 has a total of 8 pins on the breakout board, which are labeled GND, VCC, CS, INT1, INT2, SDO, SDA, and SCL. For my purposes (and the test program), it is safe to completely ignore INT1 and INT2. Connect GND to the Arduino board ground (obviously), and VCC to the 3.3V pin (not 5V, which can damage the accelerometer). Short CS to VCC right on the breakout board, and solder a small jumper wire between GND and SDO. Finally, plug SDA and SCL into pins 20 and 21 respectively of the board. Note these pins are specific to the Mega; on other Arduino boards, SDA and SCL are analog pins 4 and 5 respectively. With this arrangement and the source code on the above linked blog, I had a steady stream of data flowing in.

Euristic’s schematic has two 10k resistors acting, I believe, as pull-ups from VCC to SDA and SCL. I am honestly not sure what their purpose is, since the behavior seems to be the same whether they are there or not. I left them in though.

After getting the device to work using Euristic’s code, I switched over to the slightly cleaner and more complete ADXL345 library from user kstevenard on Google Code. It works the same and has methods to make use of all the settings available on the accelerometer.

Next, I wrote some code that converts the measurements into angles for each axis. It isn’t perfect, but it’s certainly pretty close. I think a sharp movement would generate unusable data, but we’ll see. I’m still trying to get the PS/2 mouse code to actually move the mouse on my computer; it’s giving me a lot more trouble than the keyboard code did. I figured it would just work since I have the active PS/2 adapter and everything.
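For reference, here's a sketch of one common way to derive tilt angles from a 3-axis accelerometer, assuming the only acceleration acting on it is gravity (my actual conversion code may differ). This is also why the sharp-movement caveat above exists: any linear acceleration contaminates the gravity vector and makes the angles wrong:

```cpp
#include <cmath>

const double kRadToDeg = 180.0 / 3.14159265358979323846;

// Roll: rotation about the X axis, from the Y and Z gravity components.
double roll_deg(double ay, double az) {
    return std::atan2(ay, az) * kRadToDeg;
}

// Pitch: rotation about the Y axis, using all three components so the
// result stays well-defined as the board tilts toward vertical.
double pitch_deg(double ax, double ay, double az) {
    return std::atan2(-ax, std::sqrt(ay * ay + az * az)) * kRadToDeg;
}
```

With the board level (0, 0, 1g) both angles come out to zero, and each reading tilts them toward ±90° as the corresponding axis rotates into the gravity vector.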

Types of motion detection

However, there’s another conceptual problem to solve here: how exactly should your physical motion be translated into mouse movement? There are three main approaches, and I’ll try to explain each. Two are tilt-based, and one is movement-based.

First, tilt translated to velocity is where the speed of cursor movement matches the steepness of tilt. This is kind of like the way the TrackPoint mouse on ThinkPads works, and it’s also kind of like how a game console controller works (imagine moving a small hand or crosshairs around using the D-pad). Interestingly, I love the TrackPoint, and I can’t stand D-pad cursor movement. I think the TrackPoint must be implemented extremely well, but I know it’s still something people have to get used to.

Using this method is simple to understand but difficult to master with precision. You have to speed up and slow down the cursor at just the right moment in order to “land” on the desired target. It’s very easy to overshoot (or undershoot). It would feel something like the cursor is a marble on a table, and you have to tilt and then re-balance the table at just the right moment. Personally, this sounds terrible to me for normal cursor movement. On the other hand, if you wanted to use the tilt of your hand to control lateral movement in a 3D game, you might prefer this method.

Second, tilt translated to position is where the cursor position changes directly based on the orientation of your hand. If you turn your hand to the right and leave it there, the cursor will move to the right and stay there. In contrast to the previous method, it won’t keep moving until you make your hand level again. This movement can be tracked on any axis. This seems much more intuitive to me than the previous option for normal cursor movement.

The downside of this approach is that you can only rotate your hand about 180 degrees left/right and more like 90 degrees up/down. This means one of two things: either the range of motion would have to be calibrated so you could reach all edges of the screen through one full sweep of each axis, or else you would need a way to temporarily disable motion while you reoriented your hand.

This second “fix” works much the same as a desktop mouse: when you run out of space while moving across the desk, you simply pick up the mouse (to prevent backwards movement) and put it back down on the desk so that you have more room to move. Most of us do this without thinking about it. For this reason, I believe a similar solution could be used in the Keyglove, perhaps by having a simple “hold” touch combination that would leave you in “mouse” mode but prevent any motion. Then, when you release the “hold” touch, you would be free to continue moving.

Third, movement translated to position is where the speed of the cursor movement matches the speed of your hand movement. This is the easiest to implement in the code, but requires a lot more effort and available space. It does feel very Minority-Report-esque though. Essentially, if you move your hand up and down, the cursor moves up and down with it. Left and right movement function the same way. The downside of this approach is that you need more room to move around, it’s far more visually distracting, and it is also far more physically demanding than the alternatives. It might be nice for a vigorous presentation though. I would think that if you can master the first or second approaches using tilt (which are much more subtle by definition), there’s no reason you’d ever want to use this approach—at least not for mouse cursor movement. Obviously, movement-based gestures could still be very valuable.
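To summarize the three approaches, here's a one-axis C++ sketch of each mapping. All of the scale factors are placeholders, and the real implementation would apply them per axis with filtering on top:

```cpp
enum class Mode { TiltVelocity, TiltPosition, MoveDelta };

// tilt_deg: current hand tilt; delta_mm: hand displacement since the last
// sample; cursor: current cursor coordinate. Returns the new coordinate.
double update_cursor(Mode mode, double cursor, double tilt_deg, double delta_mm) {
    switch (mode) {
    case Mode::TiltVelocity:
        return cursor + tilt_deg * 0.5;  // tilt sets cursor *speed*
    case Mode::TiltPosition:
        return tilt_deg * 10.0;          // tilt sets cursor *position* directly
    case Mode::MoveDelta:
        return cursor + delta_mm * 4.0;  // hand movement sets cursor *delta*
    }
    return cursor;
}
```

The structural difference is visible in the code: only the tilt-to-position mode ignores the previous cursor value, which is exactly why it needs either calibration across the screen or a "hold" gesture for reorienting.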

So what’s the plan?

I want to build support for all three of these methods into the glove code. Maybe there can be three different “mouse-on” toggle combinations, one for each method, so you can switch between them as you wish. All three methods should have a temporary “hold” combination (like picking up the desktop mouse off the desk).

Now, if I can just get this PS/2 mouse emulation code working, I’ll be able to really test it out.