|
Post by Frisbone on Mar 1, 2013 10:22:43 GMT -5
An accelerometer exists in most smartphones today but is also obtainable by hobbyists at a reasonable price. Basically what you get from it is the amount of G-force felt in all 3 spatial dimensions. Orientation is important, of course: the Z direction is assumed to be the direction of gravity, so 9.8 m/s² is felt in the negative direction. So the idea is that if you could measure the acceleration on each axis over regular intervals, then you could estimate generally how the device is moving through space. This doesn't have to be perfect, because in a game environment a human can adjust for slight differences between their actions and the resulting perceptual changes in the virtual environment. Here is a cheap accelerometer and its user manual:
dlnmh9ip6v2uc.cloudfront.net/datasheets/Sensors/Accelerometers/MMA8452Q.pdf
www.sparkfun.com/products/10953
There is example Arduino code linked off the product page above. With a compatible micro-controller and the Arduino it should be fairly easy to interface. Need to research I2C (en.wikipedia.org/wiki/I%C2%B2C). The Raspberry Pi has an I2C interface, so we could use it in conjunction. So basically we would have the Xbox controller connect to the RPi through digital interfaces. The trick is to interpret the accelerometer inputs and translate them into a joystick coordinate position. We would need to look at very small slices of time and the acceleration differences to produce the proper X-Y PWM code for the correct amount of time. The first step might be to translate the input to 3D space and sync up physical movements with actual 360-degree motion. It looks like you connect the interrupt pins to GPIO, and here is a forum post that discusses handling them:
www.raspberrypi.org/phpBB3/viewtopic.php?f=44&t=7509
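For what it's worth, decoding the sensor's output is a pure byte-twiddling exercise once the I2C read works. A quick sketch (function name is mine) of converting the 6-byte X/Y/Z burst read into g values, assuming the 12-bit left-justified sample format the MMA8452Q datasheet describes (1024 counts/g at the ±2g setting):

```python
def raw_to_g(data, g_range=2):
    """Convert the 6-byte OUT_X_MSB..OUT_Z_LSB burst read from the
    MMA8452Q into signed g values (12-bit, left-justified samples)."""
    counts_per_g = 4096 / (2 * g_range)   # 1024 counts/g at the +/-2g setting
    axes = []
    for i in range(0, 6, 2):
        raw = (data[i] << 8 | data[i + 1]) >> 4   # drop the 4 unused LSBs
        if raw > 2047:                            # two's-complement sign fix
            raw -= 4096
        axes.append(raw / counts_per_g)
    return tuple(axes)   # (x, y, z) in g
```

This is the format for the default 12-bit mode; the chip also has an 8-bit fast-read mode that would need different handling.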
|
|
|
Post by Frisbone on Apr 8, 2013 13:24:22 GMT -5
A bit of a disappointing thought process today. As I started "thinking" through the code to convert acceleration changes to movement, I realized that based on how the AMs work, there is no way to accurately detect rotation in the X-Y plane (assuming Z is in the direction of gravity).
The problem is that if the center of rotation is directly beneath the AM, then the AM will not pick up anything on the X and Y axes (in a perfect world).
Even if you offset it from the rotation point, you would have to assume a pivot point for the information to be useful. And it might be useful; hard to tell.
The only accurate way to do this would be to have two accelerometers at different points on the object. Then you could judge both movement and relative orientation in space (from a starting position).
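A toy calculation illustrates the problem: during rotation the sensor feels centripetal acceleration a = ω²r, so a sensor sitting on the pivot (r = 0) reads nothing, while two sensors at known different offsets give two equations and let you separate rotation rate from pivot distance (function name and numbers are mine):

```python
import math

def centripetal_g(omega, radius_m):
    """Acceleration magnitude (in g) felt by a sensor at radius_m
    from the pivot while rotating at omega rad/s (a = omega^2 * r)."""
    return omega**2 * radius_m / 9.8

# A sensor sitting exactly on the pivot reads nothing, however fast we spin:
at_pivot = centripetal_g(math.pi, 0.0)

# Two sensors at different offsets along the same line see readings in
# proportion to their radii, independent of the rotation rate:
a1 = centripetal_g(2.0, 0.05)
a2 = centripetal_g(2.0, 0.15)
# a2 / a1 == r2 / r1, so the pair pins down both omega and the pivot
```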
|
|
|
Post by Frisbone on Jun 25, 2013 12:18:01 GMT -5
I devised a basic approach to a 2-accelerometer scheme. Basically what you do is:
1. Assume a starting position facing in an arbitrary orientation. Establish a vector from a point of origin in that direction (a line).
2. Get inputs from the accelerometers.
3. Assume you are in a sphere of arbitrary size (say radius 10).
4. Get two points along the line that are 5% of the radius away in both directions - these two points serve as the locations of the two accelerometers.
5. Create two new points based on the accelerometer data (normalized against the maximum G ratings).
6. Project a new line from the sphere origin using the unit vector between these two points.
7. Determine the right ascension and declination angles between the old line and the newly projected line.
8. Convert the angles to a percentage of the max angle (PI for ascension, PI/2 for declination).
9. Send the ascension conversion to the X PWM.
10. Send the declination conversion to the Y PWM.
The idea is that this more accurately describes what we are experiencing with goggles on.
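The steps above can be sketched in code. This is a hypothetical implementation (all names, and the treatment of the normalized readings as point displacements in step 5, are my assumptions), returning the X/Y deflections as fractions of full PWM range:

```python
import math

R = 10.0           # arbitrary sphere radius (step 3)
OFFSET = 0.05 * R  # sensor spacing along the aim line (step 4)

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def aim_update(direction, accel_front, accel_back):
    """One pass of the 2-accelerometer scheme: displace the two sensor
    points by their normalized readings, re-derive the aim line, and
    return (x_pwm, y_pwm) as fractions of full deflection."""
    d = unit(direction)
    front = [ OFFSET * c for c in d]      # step 4: points along the line
    back  = [-OFFSET * c for c in d]
    # step 5: nudge each point by its (already normalized) reading
    front = [p + a for p, a in zip(front, accel_front)]
    back  = [p + a for p, a in zip(back,  accel_back)]
    # step 6: the new aim line is the unit vector between the two points
    new_d = unit([f - b for f, b in zip(front, back)])
    # step 7: right ascension / declination of the old vs new line
    ra_old,  ra_new  = math.atan2(d[1], d[0]), math.atan2(new_d[1], new_d[0])
    dec_old, dec_new = math.asin(d[2]),        math.asin(new_d[2])
    # steps 8-10: angle deltas as a fraction of max angle -> X/Y PWM
    return (ra_new - ra_old) / math.pi, (dec_new - dec_old) / (math.pi / 2)
```

With no acceleration input the aim line is unchanged and both PWM deltas come out zero, which is the sanity check I'd run first.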
|
|
|
Post by Frisbone on Jun 25, 2013 12:37:59 GMT -5
Now that I think about my original issue, I am wondering what the big deal was. Assuming a pivot point is actually very applicable for this situation, as the device will be held and probably at a fair distance from the center of gravity. I'm thinking it would be worthwhile to continue with the original plan, perhaps incorporating projection onto a sphere or a plane to aid in translating it into units that might be more helpful for the PWM. The sphere is obviously best for the virtual soldier environment.
|
|
|
Post by Frisbone on Sept 10, 2013 8:29:57 GMT -5
I'm thinking of using the built-in HPF (high pass filter) of the MMA8452Q to remove the static component of gravity before I read the data - but before I do, I want to make sure the other accelerometer has a similar capability - because if it has to be done in software - I want to know now. Can you check that?
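For reference, enabling the MMA8452Q's output-stage HPF comes down to a handful of register writes (addresses from the datasheet; the helper and the choice to return a write list instead of doing live smbus I/O are mine):

```python
# MMA8452Q register addresses, per the datasheet
CTRL_REG1        = 0x2A   # bit 0 = ACTIVE
XYZ_DATA_CFG     = 0x0E   # bit 4 (HPF_OUT) routes HPF output to the data regs
HP_FILTER_CUTOFF = 0x0F   # low two bits select the cutoff frequency

def hpf_setup(cutoff_sel=0):
    """Return the (register, value) writes that enable the built-in
    high-pass filter on the output data. The device must be in standby
    while XYZ_DATA_CFG is changed, hence the bracketing CTRL_REG1 writes."""
    return [
        (CTRL_REG1, 0x00),                     # standby for configuration
        (XYZ_DATA_CFG, 0x10),                  # HPF_OUT = 1, +/-2g full scale
        (HP_FILTER_CUTOFF, cutoff_sel & 0x03), # cutoff select bits
        (CTRL_REG1, 0x01),                     # back to active mode
    ]
```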
|
|
|
Post by lintball on Sept 10, 2013 20:37:37 GMT -5
|
|
|
Post by Frisbone on Sept 10, 2013 20:49:46 GMT -5
You know, I never looked at that data sheet in detail except very early on, when I didn't know much about the AMs and was just focusing on I2C. That AM is not very full-featured at all. Perhaps it would be better if you just picked up the one I have - not very interested in a downgrade. A high-pass filter isn't the only thing it's missing. Wouldn't want you to waste too much time on a driver.
What do you think?
|
|
|
Post by lintball on Sept 10, 2013 21:42:30 GMT -5
I don't see any harm in keeping that part consistent…esp since this is the piece that will be the trickiest to figure out. I'll work on getting one.
|
|
|
Post by Frisbone on Sept 15, 2013 9:15:31 GMT -5
I did a few quick calculations because I wanted to know what the maximum sampling rate was that we could support. Since our operations scenario has:
Poll AM - 2-byte request + 6-byte read = 8 bytes
Command PWM - 2 bytes per write of 4 bytes for on/off = 8 bytes, * 2 for the two-axis write = 16 bytes
We then have 16 + 8 = 24 bytes to transfer serially per cycle over a 100Kbps line. Add in 6 bytes for slop and you have 30. 100Kbps is 12.5KBps, which leaves you with 416 samples max per second. The closest setting is 400Hz for our AM. This would be cutting it really close and assumes no time gap between requests on the line (which is unrealistic) - so the next step down is probably our realistic max: 200Hz.
So my feeling is that we can't consider running this thing faster than 200Hz unless we go to a higher-rate I2C line - which I'm not sure is doable for the chips we are using.
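The budget above is simple enough to capture in one line (function name and defaults are mine, matching the byte counts in the post):

```python
def max_sample_rate(bus_bps=100_000, cycle_bytes=24, slop_bytes=6):
    """Upper bound on poll cycles per second over the I2C bus:
    each cycle moves cycle_bytes + slop_bytes serially."""
    bytes_per_sec = bus_bps / 8            # 100 Kbps -> 12.5 KB/s
    return bytes_per_sec / (cycle_bytes + slop_bytes)

rate = max_sample_rate()   # ~416 cycles/s, so 200 Hz is the safe ODR below it
```

A nice property of writing it this way is that we can immediately re-run the numbers for a 400Kbps fast-mode bus if that turns out to be supported.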
|
|
|
Post by lintball on Sept 15, 2013 19:19:57 GMT -5
I wonder if that is going to be enough to track the acceleration accurately enough. Any idea, at least for comparison, how quickly something like the Wiimote updates its accelerometer?
|
|
|
Post by Frisbone on Sept 15, 2013 19:59:35 GMT -5
I think once you get beyond 50Hz it won't matter much. Our biggest problem is hidden from us until we get the algorithm worked out - namely, the built-in detection delay, coupled with the delay in the high-pass filter, from the moment an impulse occurs to when our software sees it through sampling.
Haven't looked at their data sheets closely enough yet to see if they report that.
|
|
|
Post by Frisbone on Sept 17, 2013 6:11:49 GMT -5
www.osculator.net/forum/threads/765-wii-mote-data-resolution-and-sampling-rate
www.wiimoteproject.com/wiimote-accelerometer-and-motions-detecting-projects/wiimote-adxl330-accelerometer-bandwidth-question/?PHPSESSID=210ce282d67be1e8dfa91ef62842630f
A couple of Wiimote references - sounds like their sampling rate is likely 100Hz.
I observed the raw data a bit more closely yesterday. I bumped the sample frequency up to 50Hz and captured at-rest data with the HPF on. Each axis seemed to have a bias that favored one direction (instead of averaging out over time). Next I want to accumulate the acceleration and store it for a better test - and see how the number changes with increased sampling rates.
My initial tests of code to detect the start/stop of movement didn't go well. I think I need to think through the process a bit more before I go back to the code. The idea was to compute a rolling average of both the velocity and the acceleration over say 5 samples or so (1/10th sec). If the acceleration crosses a certain threshold, then it considers the object moving. If the velocity surpasses a certain percentage of the observed maximum velocity (say 5%), then it considers the object moving. Both have to agree to change the overall moving state of the gun. If we go from moving to not moving, we reset the velocity to zero. That was my overall idea - not sure if I implemented it properly.
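One way to pin the idea down before going back to the real code is a small reference model. This is a hedged sketch (class name, window size, and thresholds are all illustrative, not the actual implementation): both the acceleration and velocity tests must agree before the moving state flips, and velocity is zeroed on a moving-to-stopped transition:

```python
from collections import deque

class MotionDetector:
    """Rolling-average start/stop detector: acceleration and velocity
    averages over a short window must both agree before the overall
    moving state changes. Thresholds here are placeholders."""
    def __init__(self, window=5, accel_thresh=0.05, vel_frac=0.05):
        self.accel = deque(maxlen=window)   # recent acceleration samples
        self.vel = deque(maxlen=window)     # recent integrated velocities
        self.velocity = 0.0
        self.max_vel = 1e-9                 # observed max, avoids div-by-zero
        self.accel_thresh = accel_thresh
        self.vel_frac = vel_frac
        self.moving = False

    def update(self, a, dt):
        self.velocity += a * dt             # naive integration
        self.accel.append(a)
        self.vel.append(self.velocity)
        avg_a = sum(self.accel) / len(self.accel)
        avg_v = sum(self.vel) / len(self.vel)
        self.max_vel = max(self.max_vel, abs(avg_v))
        accel_says = abs(avg_a) > self.accel_thresh
        vel_says = abs(avg_v) > self.vel_frac * self.max_vel
        now_moving = accel_says and vel_says
        if self.moving and not now_moving:
            self.velocity = 0.0             # kill accumulated drift on stop
        self.moving = now_moving
        return self.moving
```

Feeding it a synthetic quiet-then-impulse trace is a cheap way to check the state machine before wiring in real sensor data.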
|
|
|
Post by Frisbone on Sept 18, 2013 6:09:06 GMT -5
I think I need to profile the data coming from the accelerometer before I make any more algorithmic changes/adjustments. I'm making too many assumptions at this point without confirmation.
The biggest assumption is what this high pass filter is doing for me. I assumed that it would negate gravity and negate any built in bias from the manufacturing process. But if this is not the case, it could certainly explain why I see so much velocity drift.
At rest in any position I would expect to see the numbers balance out to zero when accumulated over time. Instead I see that on some axes there is a slight offset to one direction - which causes the velocity drift.
My next goal is to collect at-rest sample data (a large set - maybe 1 minute's worth). Do this at varying sample rates (say 50Hz, 100Hz, 200Hz). Do this with the HPF enabled and disabled, and in what I believe to be a level state.
Accumulate after 10, 20, 30, 40, 50, then 60 seconds. See under what conditions the accumulation grows. With this data hopefully we can draw some conclusions that can help pick a direction from a design perspective.
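If the residual really is a constant per-sample bias, the accumulated sum at each checkpoint should grow linearly with sample count. A tiny model of that expectation (name and the example bias are mine), handy for comparing against the collected data:

```python
def drift_profile(bias_g, rate_hz, checkpoints=(10, 20, 30, 40, 50, 60)):
    """Expected accumulated at-rest sum after each checkpoint (seconds)
    for a sensor whose output carries a constant bias_g per sample."""
    return {t: bias_g * rate_hz * t for t in checkpoints}

# e.g. a 0.001 g/sample residual at 50 Hz doubles its sum every 20 s:
profile = drift_profile(0.001, 50)
```

If the measured accumulation bends away from this straight line, the residual isn't a simple constant bias and a different model is needed.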
|
|
|
Post by Frisbone on Sept 19, 2013 21:29:44 GMT -5
I ran a static analysis of data for 7 minutes in four scenarios (2G setting):
- 12.5Hz
- 12.5Hz, High Pass Filter enabled
- 50Hz
- 50Hz, HPF enabled
The HPF data does not accumulate around a zero center. There is definitely a bias offset remaining that the HPF is not filtering out - and it's pretty significant.
The non-HPF data is interesting. If I track the deviation from the average (4th column) and then accumulate this deviation (5th column), it sums to close to zero (and looks like a flat bell curve). This may just be a mathematical proof of equivalence - I haven't investigated the meaning of that approach - but what I was going for was a sense of its variance over the sample period.
Looking at the HPF 12.5Hz example in more detail, the X-axis has a bias of -0.000859 - which ultimately accumulates to -0.75 over 70 seconds. This is of course what causes the velocity drift. Each axis has a slight bias (X negative, Y negative, Z positive - but that's because of a conversion we do to flip that axis).
What I'm having trouble understanding is why this bias exists in the first place. The HPF is supposed to remove constants. So what is this then - a rounding error? A problem with their HPF algorithm? Noise (but if noise, why would noise be biased)? I should probably investigate HPF algorithms in the AM world.
Something to consider: does the bias change with orientation? Here are the current values from the last 12.5Hz test:
X=-0.000860414
Y=-0.000987187
Z=0.000951129
Their magnitudes are generally in the same ballpark, although Z is in a different direction. It would make sense that as you change the orientation these biases would also change - if so, how would we account for this bias in our calculations without knowing orientation? Would we need a gyro?
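The -0.75 figure checks out arithmetically; a constant per-sample bias integrates linearly with sample count:

```python
# Sanity check of the numbers in the post: at 12.5 Hz, the measured
# X-axis bias of -0.000859 per sample accumulates over 70 seconds to:
bias_per_sample = -0.000859
samples = 12.5 * 70                        # sample rate (Hz) * elapsed seconds
accumulated = bias_per_sample * samples    # roughly -0.75, as observed
```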
|
|
|
Post by Frisbone on Sept 20, 2013 6:54:09 GMT -5
Here is a discussion thread on HPF/LPF and BPF (band pass). Looks rather simple. Here is some code I found elsewhere to go along with it:
#define kFilteringFactor 0.1
static UIAccelerationValue rollingX = 0, rollingY = 0, rollingZ = 0;

- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
    // Subtract the low-pass value from the current value to get a simplified high-pass filter
    rollingX = (acceleration.x * kFilteringFactor) + (rollingX * (1.0 - kFilteringFactor));
    rollingY = (acceleration.y * kFilteringFactor) + (rollingY * (1.0 - kFilteringFactor));
    rollingZ = (acceleration.z * kFilteringFactor) + (rollingZ * (1.0 - kFilteringFactor));

    float accelX = acceleration.x - rollingX;
    float accelY = acceleration.y - rollingY;
    float accelZ = acceleration.z - rollingZ;

    // Use the acceleration data.
}
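That snippet is iOS-specific, but the filter itself ports straight to our RPi side. A hedged Python sketch (class and names are mine): the rolling value is a low-pass estimate of gravity plus bias, and subtracting it leaves the high-frequency motion component, so a constant input decays toward zero output.

```python
FILTER_FACTOR = 0.1   # same role as kFilteringFactor above

class HighPass:
    """Simplified high-pass filter: keep an exponential moving average
    (the low-pass "rolling" value) per axis and subtract it from each
    incoming sample, as in the Objective-C snippet."""
    def __init__(self):
        self.rolling = [0.0, 0.0, 0.0]

    def step(self, sample):
        out = []
        for i, s in enumerate(sample):
            self.rolling[i] = s * FILTER_FACTOR + self.rolling[i] * (1.0 - FILTER_FACTOR)
            out.append(s - self.rolling[i])   # high-pass = sample - low-pass
        return out
```

Note this filter only converges to zero for a truly constant input; if the bias we measured is orientation-dependent, the rolling average will lag every time the gun turns, which may be part of the drift story.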
|
|