|
Post by mikef522 on Jun 5, 2016 0:09:16 GMT -7
I've been digging through the movement algorithm implemented in open-dobot's MoveWithSpeed() function. It seems pretty solid to me, although I have a couple questions.
First, let me describe the algorithm just to check if I understand it. The user specifies a speed and an acceleration. A vector equation for a line in 3D space is calculated based on the starting and destination points. Based on the speed/acceleration, one can calculate the distance that the arm will move in 20ms and divide the line into slices based on the calculated distance for each 20ms interval. Looping through the slices, the number of steps for each motor is then calculated to get from the starting point of the slice (or interval) to the ending point of the slice. From here, the number of steps to move in 20ms can now be sent to the Arduino. Note that the motors will naturally always move in the same direction during these 20ms intervals (though the motors certainly do sometimes change direction during movement along a line). Also, I noticed that the algorithm keeps track of fractions of steps that are lost due to the limited stepper motor resolution (very nice).
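To check my reading, here is a rough Python sketch of the slicing and the fractional-step bookkeeping as I understand them (my own illustration, not the actual open-dobot code; the function names are made up):

```python
import math

def slice_line(start, end, speed_mm_s, window_s=0.020):
    """Divide a 3D line into the points the arm should pass through,
    one slice per 20 ms execution window at constant speed."""
    length = math.dist(start, end)
    mm_per_window = speed_mm_s * window_s
    n = max(1, math.ceil(length / mm_per_window))
    # Points along the parametric line P(t) = start + t*(end - start)
    return [tuple(s + (e - s) * i / n for s, e in zip(start, end))
            for i in range(n + 1)]

def steps_with_carry(deltas_deg, deg_per_step):
    """Carry fractional steps between slices so nothing is lost to
    the limited stepper resolution (the 'fractions of steps' part)."""
    carry = 0.0
    out = []
    for d in deltas_deg:
        exact = d / deg_per_step + carry
        whole = round(exact)
        carry = exact - whole       # remainder rolls into next slice
        out.append(whole)
    return out
```

For example, a 10 mm line at 100 mm/s gives 2 mm per window, so 5 slices (6 points); and three slices each needing 1.4, 1.4 and 1.2 steps come out as 1, 2 and 1 whole steps, with the fractions carried over so the total is preserved.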
If I understand the algorithm correctly (please correct me if I don't; I'm not 100% confident in my understanding of it), moving very slowly will generate a line with higher resolution (i.e. very linear movement). On the other hand, moving very fast will generate a line with less resolution (i.e. the arm movement is not as linear). One question I have is what is the maximum number of steps that can be executed in a 20ms interval? I'm having a hard time understanding the firmware code. I saw the comments that the timer 5 ISR toggles pins at maximum 20kHz, but I'm not sure how exactly that relates to the maximum number of steps per 20ms. Similarly, I'm confused as to how the number of ticks relates to the maximum number of steps per interval.
To better illustrate my confusion, I'm confused about the meaning of the following code:
// At 50kHz how many ticks pass between TIMER5_COMPA_vect ISR calls.
#define TICKS_PER_CALL 40
// Coefficient that is used in DobotDriver to calculate stepping
// period.
#define STEPS_COEFF 20000
// How long delay does the TIMER1_COMPA_vect ISR introduce.
// This is to be acoounted for make corresponding adjustments.
#define ISR_DELAY 80
and the following code:
// Timer5 compare match Interrupt Service Routine.
// Toggles pins at maximum 20kHz.
ISR(TIMER5_COMPA_vect) {
  if (stepsX) {
    ticksLeftX -= TICKS_PER_CALL;
    if (ticksLeftX < 0) {
      X_STEP_PORT ^= (1 << X_STEP_PIN);
      if (X_STEP_PORT & (1 << X_STEP_PIN)) {
        stepsX--;
      }
      ticksLeftX += ticksX;
    }
  }
For the first code block, I don't really understand the significance of those variables. My naive thought is that 40 ticks per call (which occurs every 20ms?) means that the step pins can be toggled a maximum of 40 times per call (and since the pins need to be toggled HIGH and LOW for a step to occur, that limits the number of steps per call to 20). Certainly though, my understanding must be severely flawed, as the comments state "Toggles pins at maximum 20kHz." and the code seems to be able to execute more than 20 steps per 20ms. Also, for the second code block, I don't know what the bitwise operators are doing (though I imagine they must be changing the step pin state). I'm also confused about the difference between PIN and PORT; I thought these were sort of the same thing.
Despite my confusion, assuming I understand the general algorithm correctly, the caveat that I wanted to bring up for discussion is that based on the current movement algorithm (Algorithm 1), the "movement resolution" (how linear the movement actually is) seems to be dependent on the speed. The faster the speed, the fewer slices the line will be divided into, and movement resolution suffers. At slower speeds, the line will be divided into more slices and movement resolution will be very good. Is this the best (or most ideal) algorithm? I'm really only worried about a loss of movement resolution at high speeds, but in all fairness it seems to work well at practical speeds.
Another approach (Algorithm 2) that I can think of is to divide the line into an arbitrary number of slices to ensure some maximal level of linear movement resolution. From there, a step sequence can be generated, but this sequence of steps will be time independent. Steps/sec could then be calculated based on a desired speed (mm/sec) and the length of the line. The formula would be: line length / (desired speed in mm/sec) = time; # of steps in step sequence / time = steps/sec. One caveat of this approach is that I think there might be a delay of up to 10ms when motor direction needs to be reversed during a line movement.
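The formula works out like this (a trivial sketch; the names are mine, not from any SDK):

```python
def step_rate(line_length_mm, desired_speed_mm_s, n_steps_in_sequence):
    # time = line length / desired speed
    # steps/sec = number of steps in the sequence / time
    move_time_s = line_length_mm / desired_speed_mm_s
    return n_steps_in_sequence / move_time_s

# e.g. a 100 mm line at 50 mm/s takes 2 s; a 400-entry step
# sequence then needs 400 / 2 = 200 steps per second
```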
Again, in all fairness, the first algorithm seems to work very well at realistic speeds and the second algorithm might be more time consuming for not much benefit. Also, my thinking could very much be flawed, especially since I'm still confused by the firmware code. Any clarification on that would be appreciated.
In sum:
1. Do I understand the movement algorithm in the MoveWithSpeed() function correctly?
2. What is the maximum number of steps that can be executed in a 20ms interval? And how does the code that I posted relate to that maximum number? (I'm most interested in the answer to this question).
3. Any thoughts on whether Algorithm 1 or 2 is better? I will try to implement algorithm 2 out of my own curiosity, and also because my current linear algorithm is more suited to that version. In any case, it's super simple for me to use the open-dobot MoveWithSpeed() function with my GUI, so I will do that first. Also, I'll add in error checking (seeing if the angles that the arm moves to along the line are valid). I think I did a decent job with that.
Best,
Mike
|
|
|
Post by Max on Jun 5, 2016 1:21:02 GMT -7
1. Correct. It is worth noting also that that function implements the simplest acceleration algorithm - trapezoidal - and as such finds the corresponding segments during which acceleration/deceleration must be executed and whether, with the given acceleration and maximum velocity, there will be a segment with constant velocity (flatSlices).
2. 20kHz is the maximum rate at which the firmware toggles the stepper driver's STEP pin. The driver makes a step when the pin goes from LOW to HIGH, hence the maximum step rate is 20kHz/2 = 10kHz, which is 10k steps per second. The execution window is 20ms, which makes 50 execution windows per second. Therefore, the number of steps per execution window (which is 20ms) = 10k/50 = 200.
3. Correct, the higher the speed the lower the resolution. However, the implementation is much simpler and is compatible with the FPGA board. That is the reason the original dobot software is so slow even when drawing with a brush (the laser is low power so it requires low velocity, but not the brush). example-sdk.py runs at a reasonable velocity with reasonable accuracy (if it weren't for the backlash). I checked the algorithm's accuracy at that velocity by plotting the actual pose and the error was still within fractions of a millimeter. Keep in mind that steppers have a speed limit at which they simply have no torque (the arm just falls down), so you always want to run the arm at a reasonable velocity (maybe a little faster than the examples). As to algorithm #2 - have a look at the complexity of Redeem bitbucket.org/intelligentagent/redeem/src/de9d0728f1c3c0a331c3b8134ea6e1921a3025cd/redeem/path_planner/?at=master , which runs on BeagleBone. Arduino won't be able to do that planning, and if planning is done on the host then you won't be able to transfer that amount of data via serial.
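That arithmetic, spelled out as code (just the constants stated above):

```python
# 20 kHz is the cap on STEP-pin toggles; a full step needs a
# LOW->HIGH->LOW pair of toggles, and the firmware executes
# 50 windows of 20 ms each per second.
TOGGLE_HZ = 20_000
STEPS_PER_SEC = TOGGLE_HZ // 2                 # 10 000 steps/s
WINDOWS_PER_SEC = 1000 // 20                   # 50 windows/s
MAX_STEPS_PER_WINDOW = STEPS_PER_SEC // WINDOWS_PER_SEC  # 200
```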
Sure, there are other ways to do that, but take into account the limitations - from the very beginning I was aiming at a solution that could be controlled in realtime, to stick it into ROS and have the arm manipulate the real world in realtime, so solutions that work for CNC (e.g. G-codes fed to a 3D printer) wouldn't work for ROS, as CNC most likely buffers a few dozen commands and plans them upfront.
The caret (XOR) operator on a port flips the bit (toggles the pin). A port is an 8-bit register, each bit of which is "connected" to a pin. In order to get the state of a specific pin you need to get the corresponding bit, which you do by masking the rest of the bits - "AND"ing them with zeroes. The result is 0 if that bit is 0, and non-zero if the bit is 1. Non-zero is treated as "true" in conditional statements, and 0 as "false". When you want to set a bit you don't want to change the rest of the bits. In this case you do a logical "OR" with that bit at 1 and the rest at 0 when you want the bit set to 1, or a logical "AND" with that bit at 0 and the rest at 1 otherwise.
If by "PIN and PORT" you meant the difference as in "PORTF and PINF", then in AVR "PORT" means the port to set and "PIN" means the port to read. If the pin in question is set as input and you try to read it by referring to "PORT", then you won't get the actual signal but rather the state of the "pull-up" (when a pin is an input you can enable a pull-up resistor on it by setting that pin's bit in "PORT" to 1 - useful for buttons to eliminate the need for external resistors, and it is used to enable accelerometer calibration). Arduino has some syntactic sugar to hide those port and pin specifics, but underneath it still refers to the AVR definitions. I didn't use any Arduino stuff as it messes up realtime applications.
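The same bit tricks can be tried in Python on a plain integer standing in for the 8-bit port register (illustrative only; real AVR code operates on the PORTx/PINx registers):

```python
X_STEP_PIN = 3           # bit position, as in the firmware's masks
port = 0b00000000        # stand-in for an 8-bit port register

port ^= (1 << X_STEP_PIN)                      # XOR toggles just that bit
pin_is_high = bool(port & (1 << X_STEP_PIN))   # AND-mask reads the bit

port |= (1 << X_STEP_PIN)            # OR sets the bit, others untouched
port &= ~(1 << X_STEP_PIN) & 0xFF    # AND with inverted mask clears it
```

After the toggle the bit reads as set; after the final clear the port is back to all zeroes with no other bits disturbed.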
|
|
|
Post by mikef522 on Jun 5, 2016 2:36:10 GMT -7
Awesome, thanks for the explanation! and the quick response as always! I'll probably just stick with the current algorithm for now then. I think the linear movement resolution it has now is more than good enough for my purposes anyway. And man, 200 steps/20ms! That's way more speed than I need! Honestly, 700mm/sec with 100mm/sec^2 acceleration is already plenty fast for my intended applications (maybe even too fast), though I can see how one might want faster. I'll keep in mind the effect that speed has on torque. Definitely already experienced that problem first hand. This is great; I saw that you accounted for the tool tip in the inverse kinematics. I think I'm ready to mount some end effectors and get Dobot to do some of my lab work. I'll post a youtube video of how to get open-dobot and open-dobot-gui (integrated with open-dobot) up and running (most likely next weekend at the latest).
|
|
|
Post by Max on Jun 5, 2016 11:10:46 GMT -7
Experimentally I found that dobot's motors lose torque at 7k steps/s, that's 140 steps per execution window, so 200 is not even reachable.
The rest of the stuff (TICKS_PER_CALL and 50 kHz) has to do with the timer and its ticks, to smooth things out. The signal that drives the STEP pin is not perfect in terms of pulse width and distance between pulses, but it does maintain the number of steps per window perfectly, and no pulse is so narrow that the driver misses it, nor are pulses so close together that the motor can't act on them, which is the main point. To improve the signal shape, another Arduino could be used so that its timers drive the pins in hardware rather than with interrupts. The firmware for the second Arduino would be dead simple.
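A Python model of that tick bookkeeping (my reading of the ISR quoted earlier, not the firmware itself): each ISR call advances a deficit counter by TICKS_PER_CALL; when it goes negative, the pin is toggled and the per-toggle period ticksX is added back. The fractional remainder carries over between calls, so individual pulse spacing jitters by up to one call, but the long-run toggle count is exact.

```python
TICKS_PER_CALL = 40  # as in the firmware's #define

def simulate(ticks_x, n_calls):
    """Count toggles the ISR logic would produce over n_calls,
    given ticks_x ticks per toggle."""
    ticks_left = 0
    toggles = 0
    for _ in range(n_calls):
        ticks_left -= TICKS_PER_CALL
        if ticks_left < 0:
            toggles += 1          # the pin would be XOR-toggled here
            ticks_left += ticks_x
    return toggles

# With ticksX = 100 ticks per toggle, 1000 calls supply 40 000 ticks,
# which yields exactly 400 toggles = 200 full steps, even though 100
# is not a multiple of 40.
```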
|
|
|
Post by mikef522 on Jun 6, 2016 23:24:00 GMT -7
Thanks, this is good to know.
I ended up implementing algorithm 2 (still need to add acceleration though) because I ran into a problem with algorithm 1. I still use DobotDriver though. Algorithm 2 works perfectly for me with my testing so far, and the extra computation doesn't seem to matter (the processor on my 5 year old laptop is more than fast enough, the computations are still essentially instantaneous even for the longest moves at high resolution).
When I run the following code with open-dobot's DobotSDK, I get some unacceptably nonlinear movement (note the starting pose has the rear arm vertical and the forearm horizontal). Also, I'm still not using the accelerometers yet.
speed = 700
acceleration = 100
while True:
    dobot.MoveWithSpeed(210.9, 0, 0, speed, acceleration)
    dobot.MoveWithSpeed(210.9, 0, 238, speed, acceleration)
    dobot.MoveWithSpeed(210.9, 150, 238, speed, acceleration)
    dobot.MoveWithSpeed(0, 150, 238, 50, acceleration)      # <-- movement starts to get very nonlinear here (lowering the speed from 700 to 50 doesn't seem to help)
    dobot.MoveWithSpeed(210.9, -150, 238, 50, acceleration) # <-- also a super nonlinear move
    dobot.MoveWithSpeed(210.9, 0, 238, speed, acceleration)
Don't know if you have the same problem or whether or not this is unintended behavior of algorithm 1. I didn't do any debugging, just implemented algorithm 2.
I'm still going to release open-dobot-gui with the buttons connected to the DobotSDK (algorithm 1), but I'll also release another version using algorithm 2.
|
|
|
Post by Max on Jun 6, 2016 23:40:35 GMT -7
dobot.MoveWithSpeed(0, 150, 238, 50, acceleration): x=0 is the base. Dobot physically can't reach there.
dobot.MoveWithSpeed(210.9, -150, 238, 50, acceleration): probably also not reachable.
There are no checks in the SDK whether the coordinate is reachable. The hope is the user wouldn't try to hack their own arm.
|
|
|
Post by mikef522 on Jun 6, 2016 23:57:35 GMT -7
hmm, yea, x = 0 is the base, but I moved the arm 150 to Dobot's right first. I think our coordinate systems are the same. dobot.MoveWithSpeed(0, 150, 238, 50, acceleration) should move the arm essentially 90 degrees to the right. Anyways, I didn't check for out of bounds/invalid angles yet.
|
|
|
Post by mikef522 on Jun 7, 2016 0:10:53 GMT -7
changing dobot.MoveWithSpeed(0, 150, 238, 50, acceleration) to dobot.MoveWithSpeed(0, 210.9, 238, 50, acceleration) fixed it. Yea, probably an out of bounds/invalid angle problem (the arm with the tool tip is longer than 150, so that can't possibly work). Well, I'll probably still stick to algorithm 2 for myself anyways, since resolution isn't as dependent on speed.
|
|
|
Post by Max on Jun 7, 2016 0:22:54 GMT -7
You are right, x=0 is not the only such coordinate - there are two more. However, I created a gist to check whether a coordinate is reachable: https://gist.github.com/maxosprojects/5777d2940fd6471d4c3def206b1baa22
Here are the results of checking the coordinates you posted:
Maximum arm stretch 295.0
(210.9, 0, 0) distance 281.3329699839676
(210.9, 0, 238) distance 294.5577023267258
(210.9, 150, 238) distance 337.8471983694284
(0, 150, 238) distance 242.04505778883404
(210.9, -150, 238) distance 337.8471983694284
(210.9, 0, 238) distance 294.5577023267258
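For flavor, a much-simplified version of such a check (the gist's exact geometry and offsets are not reproduced here - the shoulder-height parameter below is a placeholder, so the distances won't match the output above exactly):

```python
import math

MAX_STRETCH = 295.0  # rear arm + forearm fully extended, from above

def roughly_reachable(x, y, z, shoulder_height=0.0):
    """Crude check: is the target within the fully stretched arm's
    reach from the shoulder joint? A real check must also validate
    the individual joint-angle limits."""
    d = math.sqrt(x * x + y * y + (z - shoulder_height) ** 2)
    return d <= MAX_STRETCH
```

Even this crude version flags (210.9, 150, 238) and (210.9, -150, 238) as beyond the arm's stretch.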
As you can see, there are two coordinates dobot can't physically reach. Those mess up the currently tracked pose and consequently mess up further moves. Moreover, there are certain configurations of the links that dobot can't assume either. Those are rather non-straightforward, so I skipped the checks altogether to get something working with reachable coordinates at least.
|
|
|
Post by Max on Jun 7, 2016 0:39:41 GMT -7
Also, I feel either I didn't get your approach with algorithm 2, or you didn't get that there is a hard execution window size of 20ms, and unless you change the firmware (which would be quite complex and may not be doable on the Atmega2560 if it has to maintain an arbitrary number of steps for each motor) there is no more arbitrary a number of slices to be made than there already is in the SDK - the number of slices per second will always be 50, and the only thing you could change in application software is to skip a slice.
I don't understand why you see 50 slices per second as not working for you.
|
|
|
Post by mikef522 on Jun 7, 2016 1:48:49 GMT -7
I'll explain it in more detail when I release the code officially. The relevant code is here if you can't wait (github.com/mikef522/open-dobot-gui/blob/master/MovementAlgorithm2/DobotGUIMain.py). Be warned that I still need to clean that code up and reorganize it a good deal (there are several bits of old code and comments that won't be accurate or make sense). If you start from the pushButtonMoveToCoordinate_clicked function, you can follow the logic, but it might take you a bit to digest it. I'll write out a better explanation of the algorithm (probably tomorrow since I need to go to bed) and give example output at various points throughout its execution to better illustrate it.
Very quickly: I break the line into an arbitrary number of points (say 1000 xyz coordinates). Then I calculate how many steps it takes to get from, say, point 1 to point 2, then point 2 to point 3, and so forth until I reach point 1000. At each point-to-point calculation, I add the number of steps it takes (one at a time) to an array, thereby generating a "step sequence" that is time independent. Now, certainly, 1000 subdivisions is overkill for a lot of lines. If no steps are to be taken from point a to point b, nothing is added to the step sequence array. The step sequence array ends up looking something like this: 001011 001100 001011 011011..... The first 3 digits in each set of 6 correspond to the base, upper (rear arm), and lower (forearm) arm directions respectively (e.g. 1 - rotate counterclockwise, 0 - rotate clockwise). The last 3 digits indicate whether or not a step is to be taken for the base, upper (rear arm), and lower (forearm) arm stepper motors respectively (e.g. 1 - take a step, 0 - don't take a step). Like I said in my earlier post, based on the desired speed, the desired steps/sec can be calculated. From there, it's simple enough to tally the number of steps for each motor to take in 20ms while moving according to the step sequence.
Also, the final step sequence array actually looks a little different, since I divide the sequence into "chunks" (arrays within an array) where the steppers are all moving in the same direction. I realize that I only need to specify the direction for each chunk rather than for each step, but I left it like that for now. The processors on modern computers are so fast that any extra computational time needed for this algorithm doesn't really matter. It's probably easier to understand with example output at the various points of the algorithm, which I'll have to post later. I do recognize that I have to send the number of steps to take in 20msec, and I do account for that. I'm not executing slices per second, but rather steps per second (which I think your code is also ultimately doing). The only difference is my "step sequence" is different. Also, I haven't thought it through much, but I think my algorithm may often yield more steps to take per line, which would probably limit maximum velocity. In any case though, the speeds I've been able to reach (haven't tested max speed) are more than fast enough.
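A compact sketch of that sequence-building step (an illustrative reimplementation, not my actual open-dobot-gui code): given signed per-segment step counts for (base, rear arm, forearm), emit one 6-character entry per step taken - three direction bits followed by three step bits.

```python
def build_step_sequence(point_deltas_steps):
    """point_deltas_steps: per-segment (base, rear, fore) step counts;
    the sign encodes direction (1 - counterclockwise, 0 - clockwise)."""
    sequence = []
    for deltas in point_deltas_steps:
        dirs = ''.join('1' if d > 0 else '0' for d in deltas)
        remaining = [abs(d) for d in deltas]
        while any(remaining):
            # one entry per 'tick': step bits for motors that still
            # have steps left in this segment
            step_bits = ''.join('1' if r > 0 else '0' for r in remaining)
            remaining = [max(0, r - 1) for r in remaining]
            sequence.append(dirs + step_bits)
    return sequence
```

For example, one segment needing +1 base step and -2 rear-arm steps produces the two entries '100110' and '100010'; a segment with no steps contributes nothing.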
|
|
|
Post by mikef522 on Jun 7, 2016 2:11:09 GMT -7
Pictorially, I think the difference between these algorithms is explained by the picture linked below. The solid line is the path to move along. For algorithm 1, the subdivisions of the line get longer as speed increases. Consequently, movement from subpoint to subpoint is more "curvy", less linear. For algorithm 2, the length of the subdivisions is constant. The dashed perpendicular lines represent subdivisions. The curves are the movement of the robot. imgur.com/cfKAJtB
|
|
|
Post by Max on Jun 7, 2016 9:38:56 GMT -7
Currently your linearLineResolution defines velocity.
Essentially you're not doing anything different from how it is done in the SDK, you just call things differently. You can achieve the resolution you want by reducing the speed you provide to MoveWithSpeed in the SDK, in which case the graphs for both "algorithms" will look exactly the same, except the SDK is more accurate right now and accounts for acceleration.
If by "subdivisions" you mean the execution window of 20ms, then it is exactly what is done in the SDK and the algorithms you referred to are not different. If you think that subdivisions can somehow increase the resolution within one execution window, then you are wrong - the execution window duration is constant, and the only thing you can change in application software is the number of steps per execution window, which defines velocity. If velocity is not a concern, then just provide a lower velocity to the MoveWithSpeed call and the moves will be more linear.
In case you judge how linearly dobot moves by actual observations, then take into account that you don't have accelerometers installed yet and most probably you are not setting the starting pose correctly (we are just humans).
Also, there is significant backlash in the motor gears. I suspect the original dobot firmware accounts for it programmatically - just add a predefined (experimentally or analytically determined) number of steps to the motor that changes direction, to cover the backlash and engage the gear. That works only when dobot does not interact with the physical world (a non-push-pull task). In case there is interaction, there is no way to know in software when exactly contact with an object occurs, to account for the magnetic backlash at that exact moment. That's right, there is magnetic backlash too, and there are two kinds of it. One is regular magnetic backlash at full step. The other is worse and comes from microstepping. Microstepping really only gives resolution, not accuracy. Have a read on that. So for only holding its own weight dobot can account for the gear backlash, because the arms are pretty light, but when it comes to additional variable load (interaction with the physical world) there is no way to account for magnetic backlash in software. The only way to improve accuracy for interactive tasks is to switch to full step and increase the reduction ratio accordingly, preferably with a timing belt to eliminate gear backlash. But then, I suspect, you would reduce the maximum speed, because steps can only be issued so fast (the motor's limitation).
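The gear-backlash compensation described here could look something like this (a minimal sketch under the stated assumptions; backlash_steps is a placeholder to be calibrated, not a measured dobot value):

```python
class BacklashCompensator:
    """Pad a motor's move with extra steps whenever it reverses
    direction, so the gear re-engages before real motion starts."""

    def __init__(self, backlash_steps):
        self.backlash_steps = backlash_steps
        self.last_dir = 0  # +1, -1, or 0 when no move has happened yet

    def compensate(self, steps):
        """steps: signed step count for one move of one motor.
        Returns the padded count to actually issue."""
        if steps == 0:
            return 0
        direction = 1 if steps > 0 else -1
        reversed_now = self.last_dir != 0 and direction != self.last_dir
        extra = self.backlash_steps if reversed_now else 0
        self.last_dir = direction
        return steps + direction * extra
```

With a backlash of 5 steps, a +10 move issues 10 steps, a following -10 move issues 15 (5 of which only take up the slack), and a further -3 move issues just 3.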
I'm going to update SDK to account for gear backlash at least. Hopefully it will improve things for you.
I have reassembled my dobot in a slightly different way to eliminate backlash in the lever joints and I don't want to bring it back. So instead of hardcoding backlash I'm going to make it configurable and create a routine to calibrate any dobot (the more worn it gets, the greater the backlash becomes anyway).
Let me know if anything in this post is not clear.
|
|
|
Post by mikef522 on Jun 9, 2016 1:02:21 GMT -7
linearLineResolution doesn't define the velocity; stepsPer20msec defines the velocity. Based on the number of steps in a line, one needs to determine the correct value for the stepsPer20msec variable. I didn't write that in the code yet, but it's trivial to do. I'm also not referring to the 20msec window when I say "subdivisions". All "subdivisions" means is how many segments you divide a line into; in my case, these are equally sized segments. With algorithm 1, if moving at a high enough speed, the number of subdivisions will be low, to the point that how linear the movement actually is will suffer. To what degree this will be noticeable (i.e. how much the linear movement suffers) is unknown to me. I suspect it will not be very noticeable, since even at the higher speeds that are possible, the number of steps issued in 20msec is relatively low, but I haven't tested it.
Also, I'm aware of the problems of backlash and microstepping. That's why I want to add rotary encoders.
|
|
|
Post by Max on Jun 9, 2016 6:37:55 GMT -7
Mike, please look at your algorithm again before you waste time implementing it.
The segments the line is divided into in the SDK are always 20ms windows. In terms of number of steps they become larger as velocity increases, true, but you don't have to increase velocity when you want to achieve better accuracy. Say, in your application there are moves where there is no need for accuracy, and which, hence, can be executed faster, and some moves that require accuracy - those should be executed slower.
You can't divide a line into subdivisions that are smaller than the 20ms window at the given velocity - there is no way you can execute them. If you're planning to execute multiple subdivisions in one window, then: 1) the number of subdivisions in one window must be an integer; 2) it defeats the very point of having a subdivision size smaller than 20ms - you may as well have the subdivision size equal to 20ms, which is how the SDK is implemented.
So, with whatever algorithm you come up with, given the hard window of 20ms, accuracy will always be defined by velocity.
In case you're thinking of executing, say, 10 of your subdivisions in one window and then 1 subdivision in another, the robot will behave extremely jerkily.
|
|