Post by Frisbone on Sept 24, 2013 19:15:56 GMT -5
Graph: % Applied Voltage vs. Rotation Rate

This is a graph of percentage of applied voltage versus rotation rate for 360 degrees (max sensitivity). The spreadsheet for it is attached. The graph roughly looks like y = 1/(Ax + B), where A is a coefficient (about 0.3 gives roughly the right hyperbolic shape) and B is approximately -0.9 when x is positive and +1.4 when x is negative. I suspect that different coefficients will map to different sensitivities. Attachments:
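As a quick sanity check of that fit, here is a minimal sketch of the curve in Python. The function name is mine, and the coefficients are the approximate values eyeballed from the graph above, so treat it as an illustration rather than the actual mapping:

```python
def rotation_rate(pct_voltage, a=0.3):
    """Hypothetical sketch of the fitted curve y = 1/(A*x + B).

    B is taken as -0.9 for positive inputs and +1.4 for negative inputs,
    per the approximate values read off the graph; different sensitivities
    would presumably need different coefficients.
    """
    b = -0.9 if pct_voltage >= 0 else 1.4
    return 1.0 / (a * pct_voltage + b)
```

Note the denominator goes through zero near x = 3 with these coefficients, so a real implementation would need to clamp or guard that region.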
Post by Frisbone on Oct 9, 2013 8:01:05 GMT -5
So two weeks ago (yeah, slow going finding time) I updated the code to allow me to manually manipulate the position of the virtual gun to test the mapping calculations and angular velocity conversions. I created a simple test where I moved the point in a rectangle (up, left, down, right). The results were encouraging - generally it did do this, but it was skewed (rhombus movement that didn't center back on the original position). I've attached a file with some velocity changes that should mirror the position data I tested before.

Here is the plan:
1. Retest the position data with some minor fixes (the original position did not match the input data - it was a factor of 4 off in the negative Z direction, which could have caused an initial "jump" - and I put a conversion factor in for sensitivity)
2. Make sure I update the sensitivity formula for a slower sensitivity
3. Observe/record results
4. Run again with velocity data instead, observe/record - ensure that it looks the same. If it doesn't, we may have a problem converting from velocity to position
5. Create acceleration data that translates properly into velocity when tracked, run the test again, and make sure it behaves the same (this tests the acceleration-to-velocity/position conversion)

Once successful, I'll know that all of the 3D-to-2D mapping functionality is mostly good. From there I'll need to focus on basic verification of acceleration data. For example, if I physically move the device one foot to the left in 0.25 seconds, I should be able to judge a few things:
- The sum of the acceleration changes in that direction should be zero
- The velocity calculated from the data should be positive and non-trivial, and should end up at zero (going from zero to +value back to zero)
- If I do this on a known level surface with known orientation, on a guide with stoppers at exact distances, I should be able to compare actual data with sampled data.
If I have video at 24 fps, I can use it to backfill data and generate expectations. Let's say I have a measuring stick that serves as the guide, and a new AM is mounted to a small block with long wires attached. The measuring stick is clamped to a table that has been made completely level. The block is oriented so its X axis runs along the measuring stick, with two stoppers clamped 0.5 meters apart (close enough that you could move the block without shifting). Start the 24 fps video and move the block from stopper to stopper in about 1 second. Use the video to calculate the exact time period. Figure out how many samples show the position during that period, and record the position for each frame. Reverse-calculate the instantaneous acceleration required at each sample point to reach the position in the following frame. Add every two frames together and compare to the sampled data - they should be similar. Attachments:
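The reverse calculation from frame positions can be sketched with a central second difference. This is my own rough sketch (names and structure are assumptions), and it assumes the frames are evenly spaced:

```python
def accel_from_frames(positions, fps=24.0):
    """Estimate acceleration at each interior frame from positions read
    off video, using the central second difference:

        a[i] ~= (p[i+1] - 2*p[i] + p[i-1]) * fps**2

    Returns one estimate per interior frame (endpoints are dropped).
    """
    dt = 1.0 / fps
    return [(positions[i + 1] - 2.0 * positions[i] + positions[i - 1]) / dt ** 2
            for i in range(1, len(positions) - 1)]
```

For positions generated under constant acceleration, the second difference recovers that acceleration exactly, which makes it easy to validate before pointing it at real video data.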
Post by Frisbone on Oct 24, 2013 20:54:24 GMT -5
Tonight I updated the code a bit and put velocity samples through so I could try to match the position samples I originally created. It came close enough (given that the rate was really 12.5 Hz and I could only report on a whole-Hz boundary, I'd be slightly off), but I did essentially duplicate the data.
I then did the same thing for acceleration data, and matched the velocity data I created. This was possible because I added code to separate the samples read in from the output samples generated (before, I was reusing the same buffers). So it looks like the theory is proving out in practice so far. As long as the raw data turns out to be relatively representative of reality, it should turn out OK.
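The acceleration-to-velocity and velocity-to-position steps above amount to cumulative integration at the sample rate. A minimal sketch, assuming a rectangle rule at the 12.5 Hz rate mentioned (the function name and signature are my own, not the project's):

```python
def integrate(samples, rate_hz=12.5, initial=0.0):
    """Cumulative rectangle-rule integration at a fixed sample rate.

    Feed it acceleration samples to get velocity samples, then feed
    those back in to get positions (the double-integration step).
    """
    dt = 1.0 / rate_hz
    out, total = [], initial
    for s in samples:
        total += s * dt
        out.append(total)
    return out

# positions = integrate(integrate(accel_samples))
```

Keeping the input and output buffers separate, as described above, is what lets the same routine be chained twice without clobbering its own input.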
Of course I didn't fix any major bugs in this process (except for a reversed axis - but that shouldn't have been that big of a deal, just a perception confusion issue). So I guess the next step is going to be trying to create a controlled test with the AM and determine if I'm getting what I'm expecting. One thing I do know is that I was converting at one point to percentage of max Gs and I need to make sure that I'm just dealing with Gs - otherwise the math will be very strange.
So for next time I'll:
1. Verify I'm not returning %G, but instead G, from AM collection data
2. Run the PWM output test with test acceleration data as input (verify I get essentially the box motion as before)
3. Re-run a live test to see where we stand
4. Create a live output stream that represents the position relative to a sphere - an external system can plot the data so I can visually see drift live (this might be a bit of an effort)
Current code checked into bitbucket.
Post by Frisbone on Oct 27, 2013 10:06:13 GMT -5
Well, all my tests so far are demonstrating positive results on the algorithm. Double integration and velocity tracking look good, and it's converting to positions that look right. Conversion to ascension/declination looks approximately right as well.
I think it's boiling down to an accelerometer sampling issue. Data is simply missing, which causes the velocity error to drift and compound. Eventually the small changes in acceleration have no discernible effect on the overall position. Will post YouTube videos of my visualization results. The first shows the read-AM-data-from-a-file approach (AM data attached) - basically counterclockwise rotation around a box shape.
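To illustrate how a missed sample compounds, here is a tiny sketch of my own (not the project code): a symmetric accelerate/decelerate move should end with zero velocity, but dropping one sample leaves a residual velocity that then drags the position off forever.

```python
def track(accels, rate_hz=12.5):
    """Naively integrate acceleration samples to velocity and position."""
    dt = 1.0 / rate_hz
    v = p = 0.0
    for a in accels:
        v += a * dt
        p += v * dt
    return v, p

clean = [1.0] * 5 + [-1.0] * 5    # symmetric move: net velocity is zero
dropped = [1.0] * 5 + [-1.0] * 4  # one deceleration sample lost

v_clean, _ = track(clean)      # 0.0 - device correctly comes to rest
v_dropped, _ = track(dropped)  # nonzero - position now drifts at this rate
```

The residual velocity here is only one sample's worth of acceleration, but because position integrates velocity, the error grows linearly with time from that point on.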
The second is just waving the gun back and forth horizontally and watching where the ball goes. Note that the box in the visualizer denotes the area of positive motion direction.
One thing I did note is that the ascension and declination movement is not quite right. This is likely due to the angular conversion logic I added to use the y = 1/(ax + b) curve and apply it to the converted PWM duty cycle - I need to get that equation right and have the right sensitivity settings, I think. Going to ignore that issue for now.
Post by Frisbone on Oct 31, 2013 6:22:40 GMT -5
Right now I have it at 0.4 meters from the origin. I'm thinking I'm going to move it a lot closer to the origin and reduce the size of the ball (for visual effect). The closer it is to the origin, the more extreme the calculated angles of movement will be (and if I zoom in close enough I should be able to see the movement better) - this is the next test I'll run. However, I do believe drift will get the better of me quickly.
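The distance/angle tradeoff can be seen with basic trig (a sketch under my own naming): the same lateral motion subtends a larger angle the closer the ball sits to the origin.

```python
import math

def subtended_angle_deg(lateral_offset, distance_from_origin):
    """Viewing angle for a lateral offset at a given distance from the origin."""
    return math.degrees(math.atan2(lateral_offset, distance_from_origin))

# The same 0.1 m of motion, viewed from 0.4 m vs. 0.1 m out:
far = subtended_angle_deg(0.1, 0.4)   # ~14 degrees
near = subtended_angle_deg(0.1, 0.1)  # 45 degrees
```

So moving the ball from 0.4 m in toward the origin should roughly triple the apparent angular motion for the same physical displacement, which is why the movement becomes easier to see, but also why any drift becomes more obvious just as quickly.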