The core of the Keyglove’s motion capture system is made up of a digital accelerometer and gyroscope. Each of these devices measures only one quantity: the accelerometer measures linear acceleration, while the gyroscope measures rotational velocity. This means that if the accelerometer is perfectly still and perfectly level, it will experience only the force due to gravity (9.8 m/s2 downward), and if the gyroscope is held still in the same position, it will register no rotation at all. All of the math for dealing with velocity as it relates to position applies as well to gyroscopes as it does to accelerometers, but for the purpose of this post, I’ll focus only on the accelerometer.

Note that this post doesn’t cover any compensation for gravity, which is very important when working with accelerometers in most cases. This means that without any adjustments, this math will only produce accurate results if you happen to be in a zero-gravity environment. Not very common, I know. I will write a follow-up post including gravity compensation.

Within the context of the Keyglove’s requirements, just knowing the linear acceleration really isn’t especially useful most of the time. I’m more interested in knowing linear velocity (speed and direction of movement), and in some cases the actual position. Fortunately, position, velocity, and acceleration are all related to each other. The process of obtaining velocity from acceleration and position from velocity is known as dead reckoning. The idea is to start from a known state (e.g. holding still) and calculate a new state (e.g. moving up or down) based on a measurement that indicates change, although it doesn’t actually give you the info you want directly.

Acceleration, for example, is the rate of change of velocity. I don’t really care how fast I’m accelerating, but I do care how fast I’m actually moving. The problem is that because of the way acceleration is related to velocity, I could be moving forwards at a constant one hundred miles per hour, and the accelerometer would tell me nothing about that velocity because it isn’t changing. But we know that I had to gradually get up to that velocity at some point; I didn’t just immediately start moving that fast. The trick therefore is to piece together each acceleration measurement and “reckon” a new velocity measurement.

If my acceleration is 10 m/s2, that means that my velocity is increasing by 10 m/s every second. If my initial state is holding still, then after one second, I will be moving at 10 m/s. After two seconds, I will be moving at 20 m/s. After three seconds, I will be moving at 30 m/s, and so on. If the acceleration reading then drops down to zero, I know that I’m still moving at 30 m/s.
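That running total can be sketched in a few lines of C (the `reckon_velocity` helper is hypothetical naming on my part, and a fixed one-second sample interval is assumed):

```c
#include <assert.h>

/* Sketch: reckon velocity from a constant acceleration sampled once per
 * second, starting from a known initial velocity. Illustrative only. */
static double reckon_velocity(double accel, double v0, int seconds) {
    double v = v0;
    for (int i = 0; i < seconds; i++) {
        v += accel;  /* each second, velocity grows by the acceleration */
    }
    return v;
}
```

Starting from rest with 10 m/s2 of acceleration, this gives 10 m/s after one second and 30 m/s after three, exactly as described above.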

#### Sample Data Points

To illustrate exactly what I’m trying to accomplish here, I put some sample data in a spreadsheet up on Google Docs (“Acceleration, Velocity, and Position” if you want to check it out). The spreadsheet contains the following tabular data:


| Iteration | Acceleration | Velocity | Position |
| --- | --- | --- | --- |
| #0 | 0 | 0 | 0 |
| #1 | 2 | 2 | 2 |
| #2 | 4 | 6 | 8 |
| #3 | 5 | 11 | 19 |
| #4 | 3 | 14 | 33 |
| #5 | 1 | 15 | 48 |
| #6 | 0 | 15 | 63 |
| #7 | -1 | 14 | 77 |
| #8 | -3 | 11 | 88 |
| #9 | -5 | 6 | 94 |
| #10 | -4 | 2 | 96 |
| #11 | -2 | 0 | 96 |
| #12 | 0 | 0 | 96 |

The important thing to notice is that the blue column (acceleration) is the only data that is read directly from the sensor, and only on one axis (the table and graphs only show one axis for simplicity, but the concept applies to all three). The other two columns are calculated from the acceleration. Physics tells us that acceleration is the rate of change of velocity, and velocity is the rate of change of position. So, for each iteration, velocity changes by whatever acceleration currently is, and position changes by whatever velocity currently is. Examining the first three rows shows us this:

Iteration #0
(this is the starting position and nothing is moving)

```
Acceleration(0) = 0
Velocity(0) = 0
Position(0) = 0
```

Iteration #1

```
Acceleration(1) = 2
Velocity(1) = Velocity(0) + Acceleration(1) = 0 + 2 = 2
Position(1) = Position(0) + Velocity(1) = 0 + 2 = 2
```

Iteration #2

```
Acceleration(2) = 4
Velocity(2) = Velocity(1) + Acceleration(2) = 2 + 4 = 6
Position(2) = Position(1) + Velocity(2) = 2 + 6 = 8
```

#### Visualizing the Concept

To illustrate further, I created a few simple graphs that show these sample measurements in relation to each other over the same period of time. The vertical axis shows the measurement, and the horizontal axis is time. This is arbitrary sample data, so there are no units, but you can assume time is in seconds, and measurements are in m/s2, m/s, or m depending on whether the measurement is acceleration (blue), velocity (red), or position (orange) respectively.

First, take a look at acceleration all by itself: Notice how it starts at 0, goes up to 5, then back across 0 and all the way to -5, and finally back to zero. This means it is accelerating forwards at first, then transitions to accelerating backwards, and finally stops accelerating at all. While you might guess this from the shape of the curve on the graph, we don’t actually know yet where it is (position) or how fast it’s going (velocity). All we know is that we aren’t changing velocity anymore—which is the same as saying our acceleration is zero.

Now let’s add velocity to the graph, shown over the same time period: Velocity also starts at zero, but notice something else: velocity increases while acceleration is above zero, and decreases while acceleration is below zero. When acceleration is right at zero, velocity doesn’t change (which is what happens at the very top of the bell-shaped curve). This demonstrates that it is moving forward (positive velocity) at increasing speed until the sixth second, then it slows down again, finally coming to rest. Although acceleration was negative for a time, velocity was never negative. That is, it was never moving backwards. The negative acceleration was just enough to slow it down until it stopped moving.

What about position? Let’s add that on top of acceleration and velocity here: See what happened? Position begins at 0, then increases while velocity is above zero. Once velocity goes back to zero, it stops moving. Based on this data, we calculate that we’ve moved forward 96 units by the time we stop moving. Acceleration and velocity both return to zero.

In summary, there are three metrics at play: acceleration, velocity, and position. We can only measure acceleration, but acceleration “controls” velocity, and velocity “controls” position—at least when we’re calculating backwards.

The problem is that this method is very inaccurate with cheap consumer-grade motion sensors. Typically, if you want to get good results, you’ll start with position and calculate velocity from that (ideally using derivatives), and then calculate acceleration from velocity. With reasonably accurate time measurements, you can get very good results for velocity and acceleration. Going the other direction, however—acceleration to velocity and velocity to position—is not so great.

One problem is that cheap motion sensors often have quite a bit of noise in the raw data. If each axis measures on a scale of +/-1000, for example, you might see as many as 50 units of random deviation for each measurement, even if the device is sitting perfectly still. This can be mitigated with various kinds of filtering, but it’s difficult to get rid of it entirely, and it can really throw a monkey wrench into things if you’re trying to be very precise.
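As a sketch of the kind of filtering that can knock down this noise, here is a simple moving average (the `smooth` helper is hypothetical, and the window size of 4 is an arbitrary choice, not what the Keyglove actually uses):

```c
#include <assert.h>

/* Minimal moving-average filter over raw sensor readings. Averaging the
 * last few samples suppresses random deviation at the cost of some lag. */
#define WINDOW 4

static int smooth(const int *raw, int i) {
    /* average up to WINDOW samples ending at index i (fewer near the start) */
    int start = (i + 1 >= WINDOW) ? i + 1 - WINDOW : 0;
    long sum = 0;
    for (int k = start; k <= i; k++) {
        sum += raw[k];
    }
    return (int)(sum / (i - start + 1));
}
```

A longer window smooths more aggressively but also delays the filter’s response to real motion, so the size is a trade-off each project has to tune.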

Another problem is that measurements are taken at intervals, not continuously. Ideally, we take measurements as quickly as possible to get a close approximation of a continuous curve. But if the main microcontroller is busy doing other things as well, sometimes there are significant gaps between each measurement. The calculations above assume each two measured points have a straight line between them, but what if there’s other motion that occurs between two measurements? This will cause our calculated result to be wrong compared to reality. A quick “jerk” of the motion sensor may show two readings that are the same, when there was actually a sharp increase and subsequent decrease between the two points.

These kinds of errors are bad enough for velocity calculations, but they are even worse for position calculations. If every velocity calculation is a little bit off, the calculated positions will be even further off because they are based on two error-prone calculations instead of one error-prone calculation (original velocity) and one direct measurement (current acceleration).

#### Calculating Velocity and Position

All that being said, for certain applications (like the Keyglove), dead reckoning can still be useful if a high level of accuracy isn’t necessary. For example, I really don’t need to know if I’ve moved 12 inches to the left as opposed to 18 inches to the left. I just want to know if I moved to the left by some amount, and if my margin of error is +/- 30%, that’s good enough to still get a lot of good data to use for motion control. Depending on the application, I might only care to know which direction I’m moving (velocity), regardless of how far I’ve actually gone (position). This reduces the need for precision.

Assuming this is still useful for your particular project (and it is for the Keyglove), the basic calculations to get started are not that complicated. Let’s assume that you have just a linear 3-axis accelerometer, which stores each axis measurement in `ax`, `ay`, and `az`. These are the current acceleration measurements. Remember that:

• Current Velocity = Previous Velocity + Current Acceleration
• Current Position = Previous Position + Current Velocity

So, we need to have variables for each of the following, for each axis:

• Current Acceleration (`ax, ay, az`)
• Previous Acceleration (`ax0, ay0, az0`)
• Current Velocity (`vx, vy, vz`)
• Previous Velocity (`vx0, vy0, vz0`)
• Current Position (`px, py, pz`)

A set of previous position variables is not necessary for our calculations, because the `+=` shorthand accumulates each update directly onto the previous position already stored in the same variable. Then, assuming we’ve already read the accelerometer measurements into `ax`, `ay`, and `az`, the code to calculate each is this:

```
// velocity
vx += (ax + ax0) / 2;
vy += (ay + ay0) / 2;
vz += (az + az0) / 2;

// position
px += (vx + vx0) / 2;
py += (vy + vy0) / 2;
pz += (vz + vz0) / 2;
```

The “Previous Velocity” and “Previous Position” values are actually still in there, assuming we haven’t reset the `vx/vy/vz` and `px/py/pz` variables between measurements. By using the shorthand “+=” operator, we’re just adding the new acceleration values to the previous velocity values and storing it right back into the same variable. The same concept applies to position.

The `(ax + ax0) / 2` expression uses the average of the two readings, rather than just the newest one, because it eliminates some error. The new “area” we’re adding to each approximated integral is a trapezoid instead of a plain rectangle, and the angled edge is usually much closer to the true continuous curve than the original rectangle.
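To see why the trapezoid helps, consider a contrived acceleration that ramps linearly, a(t) = 2t, so the true velocity at time t is exactly t2. The trapezoidal sum reproduces that exactly, while a plain rectangle rule using only the newest reading would overshoot (this is purely an illustration with made-up numbers, not Keyglove code):

```c
#include <assert.h>

/* Integrate a(t) = 2t from rest using trapezoidal slices at 1-second
 * intervals. Because the curve is a straight line, each trapezoid matches
 * the true area exactly, so the result equals t^2 with no error. */
static int trapezoid_velocity(int t_end) {
    int v = 0;
    for (int t = 1; t <= t_end; t++) {
        int a_now = 2 * t;
        int a_prev = 2 * (t - 1);
        v += (a_now + a_prev) / 2;  /* trapezoid slice: average height * width */
    }
    return v;
}
```

Real acceleration curves aren’t straight lines between samples, so the trapezoid isn’t exact in practice, but it tracks the true curve much more closely than a rectangle does.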

#### Approximating a (very rough) Integral

The above equations are really simple. However, they assume that the time between each measurement is exactly the same, which may not be the case. Ideally you’ll have an interrupt-driven processor that always takes a reading at the same interval (100 Hz, or 200 Hz, or whatever your project requires). However, if time intervals are not all equal, we can work around this by incorporating time into the equations. If you have another set of variables `t` and `t0` that contain the current measurement time and previous measurement time, respectively, the code to calculate each metric changes to this:

```
// velocity
vx += ((ax + ax0)/2) * (t - t0);
vy += ((ay + ay0)/2) * (t - t0);
vz += ((az + az0)/2) * (t - t0);

// position
px += ((vx + vx0)/2) * (t - t0);
py += ((vy + vy0)/2) * (t - t0);
pz += ((vz + vz0)/2) * (t - t0);
```

This code is basically doing a very simple approximation of an integral, first using acceleration to obtain velocity, and then using velocity to obtain position. It’s still not incredibly accurate, but it works better than the previous equations because it doesn’t assume a constant measurement interval.
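For one axis, the whole variable-interval update can be sketched as a per-sample function. The `on_sample` wrapper, the floating-point state, and the bookkeeping of the previous values are my assumptions about how this would sit in a sampling loop, not code from the Keyglove itself:

```c
#include <assert.h>

/* Per-axis dead-reckoning state: current/previous velocity, position,
 * previous acceleration, and previous timestamp (seconds). The zeroed
 * initial state assumes the device starts at rest at time 0. */
static double vx = 0.0, px = 0.0;
static double ax0 = 0.0, vx0 = 0.0, t0 = 0.0;

static void on_sample(double ax, double t) {
    vx0 = vx;
    vx += ((ax + ax0) / 2.0) * (t - t0);  /* trapezoid: avg accel * dt */
    px += ((vx + vx0) / 2.0) * (t - t0);  /* trapezoid: avg vel * dt   */
    ax0 = ax;  /* remember this sample for the next trapezoid */
    t0 = t;
}
```

Note that the very first slice is averaged against the assumed zero initial acceleration; a real implementation would seed `ax0` and `t0` from an initial reading instead.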

An integral, remember, is generically defined as the area under the curve of a particular function. Each time we take a new measurement, we are getting a new “slice” of the curve, and adding it to our previous stored value. The acceleration measurement gives us the height (in m/s2), and the time interval gives us the width (in seconds). This means that when we multiply them together, as above, we are left with a value that has the correct units for velocity:

m/s2 * s = m/s

So how well does this work in the Keyglove? Good question. I’m still working on the motion control, and while it’s constantly improving, I’m still not personally satisfied. There are still more features to add and weirdnesses to work out. I plan to get a new video up as soon as I’ve got it to a stage where I’m confident to show it off. Keep your eyes open!

Also, there’s an excellent document from Freescale (PDF) with most of this same information in a slightly different form, along with a sample code implementation. It’s definitely worth reading.
