Post by Frisbone on Oct 31, 2013 6:06:42 GMT -5
Using an accelerometer to track the position of an object in space encounters the following inherent issues:
1. Periodic sampling captures only some changes, not all of them - resulting in fidelity loss.
2. Increasing the sampling frequency helps, but will never eliminate the problem completely.
3. Gravity itself introduces some error, because removing its static component forces a HPF (High Pass Filter) technique that will also remove some changes that are relevant to the actual movement of the device.
4. Inherent detection precision error in the device.
Only the sampling frequency has the unique property of losing exactly the information that would tend to "balance the equation". Imagine the following scenario, where sample periods are shown (the vertical | is the sample point) along with the number of G's detected along a single axis:
S1___|___S2___|___S3___|___S4___|___S5___|___S6___|___S7___|___S8___|___S9___|___S10__|
_____1________2_______-1_______-1_______-1___________-1____1________2_______-2
Notice in this example that the changes in velocity (accelerations) occur at each sample point (the | sign), except for one between sample 6 and sample 7. If you sum up the detected samples you get: 1+2-1-1-1+1+2-2 == 1. In this example we took an object starting at rest and moved it along a single axis, pulling those G's at those points in time. By the time you get to S10 the object is at rest again. However, our program will think that, on net, it was affected by 1 G of acceleration, so it is left carrying a phantom residual velocity (1 G integrated over a sample period). The problem, of course, is that the object is not moving - we have a detection error because we never saw the -1 sample at S6.5. So let's say we double the sample frequency - well, that will cover this example, and the velocity will now be seen as zero at the end of the test.
However, if you keep shifting when that -1 appears in the timeline, there is always the possibility it could be missed - and if it's missed, the calculations will show that the object is moving when it is not (causing drift).
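The example above can be sketched in a few lines. The two lists below are made up to match the scenario: one is what the sampler caught, the other includes the -1 G event at S6.5 that fell between samples.

```python
# Sketch of the sampling example above. "samples_seen" is what a sampler
# running at the original rate would catch; "true_events" also includes
# the -1 G change at S6.5 that landed between sample points.
samples_seen = [1, 2, -1, -1, -1, 1, 2, -2]        # missed the event at S6.5
true_events  = [1, 2, -1, -1, -1, -1, 1, 2, -2]    # what actually happened

print(sum(samples_seen))  # 1 -> phantom net acceleration, i.e. drift
print(sum(true_events))   # 0 -> the object really ended at rest
```

Doubling the sample rate fixes this particular timeline, but as noted above, shifting the missed event keeps the problem alive at any finite rate.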
Now, the reality is that changes in acceleration do not happen in an instant. We didn't really go from 0 G to 1 G exactly at that sample point; more likely it ramped up during the period. So when we want to determine the influence on velocity, we take our observations and assume that the measured acceleration held over the entire sample period.
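That "hold it over the whole period" assumption is just a rectangle-rule integration. A minimal sketch, where the sample period DT is an assumed value and the samples are the ones from the earlier example:

```python
# Sketch: integrate accelerometer readings (in g) to velocity (m/s),
# assuming each reading held constant over its full sample period
# (zero-order hold / rectangle rule). DT is an assumed sample period.
G = 9.81   # m/s^2 per 1 g
DT = 0.01  # assumed sample period, seconds

def integrate_velocity(samples_g, dt=DT):
    """Accumulate velocity, treating each sample as constant over dt."""
    v = 0.0
    for a_g in samples_g:
        v += a_g * G * dt  # rectangle rule contribution of this period
    return v

# The earlier example's samples leave a residual drift velocity:
print(integrate_velocity([1, 2, -1, -1, -1, 1, 2, -2]))  # ~0.0981 m/s
```

A trapezoidal rule (averaging adjacent samples) would model the ramping more faithfully, but neither recovers an event that was never sampled.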
Now, if we were dealing with huge forces this problem could be significant, because you could conceivably miss a large increase and then decrease in velocity within a single period, and completely miss the change in position that occurred during it. In our practical application, though, sudden changes are limited by a person's physical movement constraints - which perhaps can be measured. However, you'll always have error. Take the following example.
An accelerometer is strapped to your hand and you punch a wall as hard as you can. The acceleration increases to the maximum a human can achieve and then the hand suddenly decelerates at a rate a human could not achieve on their own - due to the blocking wall. That deceleration occurred during a period of time so brief it is unlikely to be captured by any sampling device - at least not in a way that results in balanced measurements. Any recording program will think the hand continued to move right through the wall - albeit at a slowish velocity.
Ok - so how do we correct for the lack of information? In our last example it would have helped to know that the wall was there, blocking movement in that direction. We could have set that velocity component to zero, eliminating the drift.
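The wall correction is just a constraint: clamp the velocity component along the wall's normal. A sketch, where the wall normal is an assumed known vector (the source doesn't specify how we'd learn it):

```python
# Sketch: if we know an obstacle blocks motion along a direction,
# remove the velocity component pointing into it. The wall normal
# here is assumed to be known; detecting it is a separate problem.
def apply_wall_constraint(velocity, wall_normal):
    """Strip the component of `velocity` that points into the wall.
    `wall_normal` is a unit vector pointing from us toward the wall."""
    dot = sum(v * n for v, n in zip(velocity, wall_normal))
    if dot > 0:  # moving into the wall -> physically impossible
        return [v - dot * n for v, n in zip(velocity, wall_normal)]
    return velocity  # moving away or parallel: leave it alone

# Phantom 0.5 m/s "through the wall" velocity gets zeroed:
print(apply_wall_constraint([0.5, 0.0, 0.0], [1.0, 0.0, 0.0]))  # [0.0, 0.0, 0.0]
```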
However, a free moving object with no obstacles presents challenges.
In our use, however, we cannot say that the accelerometer is free to move in all directions. In fact, by our own definition it is tethered to a person's hand, and therefore to their body, which tends to be anchored in a general area of space. If we draw an imaginary line between the origin and the AM, as we do in the model, and then say the AM moved, we draw a new line and determine the angles (ascension/declination). We don't currently restrict where it's allowed to move - but we could. Since it's tethered, we can say that the original point is on the surface of a sphere and that its only choice is to move tangentially along that surface. So at a minimum we could eliminate drift that would carry the point away from the sphere - which currently is the biggest contributor to lack of sensitivity to movement (the further away the point gets, the less small changes in position affect the measured angles).
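The tether idea above amounts to projecting velocity onto the tangent plane of the sphere at the point's current position: any radial component is, by assumption, drift. A sketch under that assumption:

```python
# Sketch of the sphere-tether constraint above: the tracked point is
# assumed to stay on a sphere centered at the origin, so any radial
# component of its velocity is treated as drift and removed.
import math

def remove_radial_drift(position, velocity):
    """Project `velocity` onto the tangent plane of the sphere at
    `position`, keeping only the allowed tangential motion."""
    r = math.sqrt(sum(p * p for p in position))
    radial = [p / r for p in position]                      # unit radial direction
    v_r = sum(v * n for v, n in zip(velocity, radial))      # radial speed = drift
    return [v - v_r * n for v, n in zip(velocity, radial)]  # tangential part

# A 0.5 m/s outward drift component is stripped; tangential motion survives:
print(remove_radial_drift([0.0, 0.0, 1.0], [0.2, 0.0, 0.5]))  # [0.2, 0.0, 0.0]
```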
So the challenge is figuring out how to remove the bad components of velocity when recalculating for each sample period.