<p>There are two main approaches to humanoid robot walking. In practice, the choice depends on the physical capabilities of your robot and what you want to achieve.</p> <p>If your robot can sustain strong impacts and you aim at cyclic walking on uneven terrain, I would recommend trying the Hybrid Zero Dynamics approach. The current best examples are the Cassie and Digit robots from Agility Robotics; see [1] for a recent example.</p> <p>For a more fragile robot, the usual steps to make a humanoid walk on flat ground without obstacles are:</p> <ol> <li>Plan the footsteps.</li> <li>Generate a ZMP trajectory from the footsteps, and the foot trajectories (usually polynomials with zero speed and zero acceleration at the beginning/end and an intermediate vertical point); see the sketch below.</li> <li>Generate the CoM trajectory from the ZMP trajectory.</li> <li>Use your IK solver to follow the CoM trajectory and the foot trajectories. An inverse dynamics solver would be better, but it is more difficult to achieve.</li> </ol> <p>Once you have a first version of this working in simulation, you can try to develop a stabilizer. It usually assumes an estimator of your robot's root attitude (often the waist), and feedback on the Divergent Component of Motion of your robot. For a good overview of this topic I would recommend the paper [2].</p> <p>From there, there are various approaches depending on what you want to do: multiple contacts, uneven terrain, manipulation, co-working with a human.</p> <p>[1] Inverse Dynamics Control of Compliant Hybrid Zero Dynamic Walking, Jenna Reher and Aaron D. Ames, ICRA 2021, <a href="https://arxiv.org/pdf/2010.09047.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2010.09047.pdf</a></p> <p>[2] Stair Climbing Stabilization of the HRP-4 Humanoid Robot using Whole-body Admittance Control, Caron et al, ICRA 2018, <a href="https://arxiv.org/abs/1809.07073" rel="nofollow noreferrer">https://arxiv.org/abs/1809.07073</a></p>
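<p>As an illustration of step 2, here is a minimal sketch (my own, not from the references above) of a swing-foot trajectory built from polynomials: a minimum-jerk blend for the horizontal motion, which has zero velocity and zero acceleration at both ends, and a simple bell-shaped lift for the vertical motion (a higher-order polynomial would be needed if you also want zero vertical acceleration at lift-off and touch-down):</p> <pre><code>#include &lt;cmath&gt;

// Minimum-jerk blend: zero velocity and acceleration at s = 0 and s = 1.
double minJerk(double s) { return 10*pow(s,3) - 15*pow(s,4) + 6*pow(s,5); }

// Horizontal component: interpolate from x0 to x1 over the step duration T.
double footX(double t, double T, double x0, double x1) {
  double s = t / T;
  return x0 + (x1 - x0) * minJerk(s);
}

// Vertical component: bell-shaped lift of height h with its apex at mid-step.
double footZ(double t, double T, double h) {
  double s = t / T;
  return 16.0 * h * s * s * (1.0 - s) * (1.0 - s);
}
</code></pre>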
8562
2015-11-29T09:31:09.707
|mobile-robot|stability|humanoid|
<p>I'm currently working on a humanoid robot. I've solved the forward &amp; inverse kinematic relations of the robot, and they turn out to be fine. Now I want to move on to walking. I've seen tons of algorithms &amp; research papers but none of them make the idea clear. I understand the concept of ZMP &amp; what the method tries to do, but I simply can't get my head around all the details that are required to implement it on a real robot. Do I have to plan my gait &amp; generate the trajectories beforehand, solve the joint angles, store them somewhere &amp; feed them to the motors in real time? Or do I generate everything at run time (a bad idea IMO)? Is there a step-by-step procedure that I can follow to get the job done? Or do I have to crawl all my way through those research papers, which never make sense (at least to me)?</p>
Humanoid balancing
<p>It sounds like you want some sort of <a href="http://www.cs.cmu.edu/~motionplanning/lecture/Chap2-Bug-Alg_howie.pdf" rel="nofollow">bug algorithm</a>.</p> <p>But the pseudo-code for what you want is simple:</p> <pre><code>while not at goal: if there is a wall in front: turn right a little else if there is a wall on the left: drive straight a little else: turn left a little stop </code></pre> <p>Of course tuning this thing can be tricky. For example, you probably don't want to turn in place when you see no wall. You should drive forward and left in an arc. The specifics of which depend on your robot and environment.</p> <p>Also, because you have a spinning ultrasonic sensor, you should not drive the robot too fast.</p> <p>Note that the pseudo code really only needs a single sensor on the left of the robot and in front of the robot. But since you have an array of range values, you can potentially do something more sophisticated, although more complicated. For example, use the few range readings on the left side of the robot to determine the distance and angle of the wall relative to the robot, and act accordingly. </p>
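<p>For the "more sophisticated" option, here is a rough sketch (mine, not part of the pseudo-code above) of how two of the left-side range readings could be turned into a wall distance and angle estimate; the sensor angles are assumed to be measured from the robot's forward axis:</p> <pre><code>#include &lt;cmath&gt;

// r1, r2: two range readings on the left side; a1Deg, a2Deg: the angles (in
// degrees, from the robot's forward axis) at which those readings were taken.
void estimateWall(double r1, double a1Deg, double r2, double a2Deg,
                  double &amp;wallAngle, double &amp;wallDistance) {
  // Convert the two polar readings to points in the robot frame.
  double x1 = r1 * cos(a1Deg * M_PI / 180.0), y1 = r1 * sin(a1Deg * M_PI / 180.0);
  double x2 = r2 * cos(a2Deg * M_PI / 180.0), y2 = r2 * sin(a2Deg * M_PI / 180.0);
  // The wall is approximated by the line through the two points.
  wallAngle = atan2(y2 - y1, x2 - x1);                      // relative to the robot heading
  // Perpendicular distance from the robot (at the origin) to that line.
  wallDistance = fabs(x1 * y2 - x2 * y1) / hypot(x2 - x1, y2 - y1);
}
</code></pre> <p>The difference between <code>wallDistance</code> and the desired distance, together with <code>wallAngle</code>, can then feed the steering decision in the loop above.</p>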
8570
2015-11-30T15:22:13.327
|wheeled-robot|navigation|
<p>I have a task that involves implementing robot behaviour that will follow a wall and avoid obstacles along its path. The robot must stay at a desired distance from the wall but also stick to it, so it should not lose sight of it. The robot senses its surroundings with an ultrasonic sensor that oscillates from left to right and fills a small array (10 values) with detected distances (every 10 degrees). From this reading I would like to calculate a heading vector that will result in a robot path similar to the one shown in the picture below:</p> <p><strong>Black (walls), red (obstacles), blue (robot), green (desired path)</strong> <a href="https://i.stack.imgur.com/YtVGG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YtVGG.png" alt="Path example"></a></p>
Calculate robot heading to follow wall and avoid obstacles
<p>$$y_d = a x^2 + b x + c$$ is your model, where I am using $y_d$ to represent the <em>desired</em> value of your control variable.</p> <p>Using the measured value of your encoder $y_m$, define an error value $y_e$ $$y_e = y_d - y_m$$</p> <p>Your PID controller can then be implemented as </p> <p>$${PWM}_{dc} = k_p y_e + k_i \int y_e dt + k_d \frac{dy_e}{dt}$$</p> <p>Here I am using ${PWM}_{dc}$ to be the duty cycle of your PWM signal.</p> <p>So if you have a mapping showing what PWM duty cycles will achieve which $x$ speeds (perhaps a lookup table?) you can use ${PWM}_{dc}$ to find the value of $x$ to command. Otherwise, go ahead and substitute your $x$ in place of my ${PWM}_{dc}$ and assume the mapping between these two variables is linear.</p>
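<p>A minimal implementation sketch of the controller above (variable names are mine, not from the question); call <code>update()</code> at a fixed period <code>dt</code> and use its return value as the PWM duty cycle, or map it to <code>x</code> via your lookup table:</p> <pre><code>struct Pid {
  double kp, ki, kd;
  double integral = 0.0, lastError = 0.0;

  // yDesired = y_d from the model, yMeasured = y_m from the encoder.
  double update(double yDesired, double yMeasured, double dt) {
    double error = yDesired - yMeasured;            // y_e = y_d - y_m
    integral += error * dt;                         // running integral of y_e
    double derivative = (error - lastError) / dt;   // d(y_e)/dt
    lastError = error;
    return kp * error + ki * integral + kd * derivative;  // PWM duty cycle
  }
};
</code></pre>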
8579
2015-12-01T12:36:14.390
|motor|pid|pwm|
<p>I have a motor with an encoder. When I set the speed of the motor, it should change its speed so that the encoder readings per second fit an equation $y = ax^2 + bx + c$, where <code>x</code> is the <code>speed</code> value that is given to the motor and <code>y</code> is the encoder reading per second that should be obtained from the motor.</p> <p>The encoder reading is counted every <code>1ms</code>, and if it is not equal to the expected encoder output (calculated using the equation), the PWM input to the motor should vary in order to get the desired encoder output.</p> <p>I want to control this value using a <code>PID</code> controller but I'm confused about writing the equations. Any help would be appreciated.</p>
Change PWM values according to encoder output
<p>The three “standard servo motors” – presumably standard-size hobby RC servos – will be the main load on your power supply, unless your LEDs are high-power devices. Pololu.com's <a href="https://www.pololu.com/blog/16/electrical-characteristics-of-servos-and-introduction-to-the-servo-control-interface" rel="nofollow">electrical characteristics of servos...</a> web page has some guidelines you probably should read. Some excerpts:</p> <blockquote> <p>... A standard servo will have a stall current around one amp, a micro servo will need a few hundred milliamps, and a giant servo can draw ten amps or more. ... quiescent currents are tricker to estimate ... typically, the current should be in the few dozen to one or two hundred milliamp range</p> </blockquote> <p>For your three servos, a 5V 2A power supply is reasonable, supposing you never stall more than one servo at once and that stall times are brief (a few seconds at most). If you were to stall all three servos at the same time on a 2A PS, and the Uno were powered by the same supply, it probably would reset or otherwise misbehave.</p> <p>Figure 35 to 50 mA for the Uno, and perhaps 50 mA for the ArduCam Mini 2MP camera (125 mW at 3V, IIUC), and 15 to 25 mA per LED. Note, for indicator LEDs (vs illumination) many high brightness LEDs work ok at sub-milliamp currents; for example, on some of my Arduino nano boards I've replaced the PWR indicator's 1KΩ dropping resistor with 20KΩ and it's adequately bright.</p> <p>In brief, allow two to three amps for the servos and perhaps a fourth of an amp for the rest.</p> <p>The Arduino will draw only the current it needs, as long as it's powered via USB or its Vin pin or DC socket. Several Arduino models (eg Uno, Mega2560, Nano) have a FET or diode automatic power-selection network, typically allowing them to be powered by USB at the same time as by Vin pin or DC socket. </p> <p>If you wish to connect a 5V PS to an Arduino's +5V line (which can be done), first review the “gotcha's” at <em><a href="http://www.rugged-circuits.com/10-ways-to-destroy-an-arduino/" rel="nofollow">10 ways to destroy an arduino </a></em> at rugged-circuits.com. </p> <p><em>Edit:</em> </p> <p>For a 12V adapter, hook into the Vin pin or DC socket. On an Uno, there's a diode after the DC socket and in this case using the socket is preferable to using Vin.</p> <p>With 12V into the Uno's +5V regulator (typically a NCP1117ST50T3G), you probably won't draw more than 0.3–0.5 A from the +5V line without overheating the regulator. A 9V supply, dropping 3V less across the regulator, dissipates 1.5 W less power in it at 0.5 A than does a 12V supply, with consequently less heating.</p> <p>An <a href="https://www.arduino.cc/en/Main/ArduinoBoardUno" rel="nofollow">Uno page at arduino.cc</a> says:</p> <blockquote> <p>The board can operate on an external supply from 6 to 20 volts. If supplied with less than 7V, however, the 5V pin may supply less than five volts and the board may become unstable. If using more than 12V, the voltage regulator may overheat and damage the board. The recommended range is 7 to 12 volts. </p> </blockquote> <p>Note that hooking a 5V PS to the +5V bypasses the regulator entirely. The earlier link (<em>10 ways...</em>) recommends against this in certain cases. The Uno page at arduino.cc advises against it:</p> <blockquote> <p>5V. This pin outputs a regulated 5V from the regulator on the board. 
The board can be supplied with power either from the DC power jack (7 - 12V), the USB connector (5V), or the VIN pin of the board (7-12V). Supplying voltage via the 5V or 3.3V pins bypasses the regulator, and can damage your board. We don't advise it.</p> </blockquote> <p>Options include: </p> <ul> <li><p>Power Arduino and low-current devices via USB or via a 9V 3-5 W adapter; use a higher-current, eg 2A or 10 W, 5V adapter for servos.</p></li> <li><p>With a 5 A or 60 W, 12V adapter, connect +12 to Vin or DC socket; also attach one or more inexpensive adjustable 2-amp LM2596-based buck regulators like <a href="http://www.ebay.com/itm/LM2596S-ADJ-DC-DC-Buck-Regulator-Power-Module-3A-Adjustable-5V-12V-24V-/191095436739?hash=item2c7e2cf5c3:g:27IAAOxydUJTMqQv" rel="nofollow">1</a>, <a href="http://www.ebay.com/itm/DC-DC-Buck-Step-Down-Converter-Module-LM2596-Voltage-Regulator-Led-Voltmeter-/160995621268?hash=item257c163994:g:uUsAAMXQQQZRyqOI" rel="nofollow">2</a>, or <a href="http://www.ebay.com/itm/LM2596-LED-Voltage-Regulator-DC-DC-Buck-Step-Down-Converter-Module-Voltmeter-/331701453975?hash=item4d3af28497:g:ihkAAOSw5ZBWPMuc" rel="nofollow">3</a> to the +12, dial the outputs in to +5, and use them to power the servos, with the Arduino ground common with the regulator grounds.</p></li> </ul>
8580
2015-12-01T12:55:21.780
|arduino|power|
<p>I'm shopping for my first Arduino with a specific goal in mind. I need to attach 3 standard servo motors, an ArduCam Mini 2MP camera, and several LEDs. I'm trying to figure out power requirements. I assume that USB power won't be sufficient. I'm looking at 12V AC-to-DC outlet adapters and I noticed that Amps vary from ~500MA to 5A. I don't want to use batteries.</p> <p>What would you recommend as minimum amperage for this setup? Is there a maximum amperage for Arduino boards? I don't want to plug it in and burn it out. If I plug in both the USB cable and a power adapter at the same time, is power drawn from both cables?</p> <p>Thanks!</p>
Arduino Power Adapters
<p>I'm not familiar with your device, but here are a few tips that might help...</p> <p>Since the device is using a quaternion to describe an orientation, they must also specify some base coordinate frame against which that rotation applies.</p> <p>The w term in a quaternion is the cosine of the half-angle of desired rotation around the (x,y,z) vector component. If you don't care about the angle of rotation, then w can be anything (subject to normalization requirements), including 0.</p>
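<p>For a pen-like tool, one common choice (an illustration, not Baxter-specific: it assumes the end-effector's approach axis is its local +z axis) is to build the quaternion from the axis-angle rotation that takes +z onto the surface normal; the rotation about the normal itself is left arbitrary, which is fine if the tool is symmetric:</p> <pre><code>#include &lt;cmath&gt;

struct Quat { double x, y, z, w; };

// Quaternion rotating the tool's +z axis onto the (not necessarily unit) normal n.
Quat quatFromZToNormal(double nx, double ny, double nz) {
  double len = sqrt(nx*nx + ny*ny + nz*nz);
  nx /= len; ny /= len; nz /= len;
  // axis = z x n = (-ny, nx, 0), angle = acos(z . n) = acos(nz)
  double ax = -ny, ay = nx;
  double alen = sqrt(ax*ax + ay*ay);
  if (alen &lt; 1e-9) {
    Quat q = {0.0, 0.0, 0.0, 1.0};            // normal already along +z
    if (nz &lt; 0.0) { q.x = 1.0; q.w = 0.0; }   // anti-parallel: half-turn about x
    return q;
  }
  double angle = acos(nz);
  double s = sin(angle / 2.0) / alen;          // scales the axis to unit length
  Quat q = {ax * s, ay * s, 0.0, cos(angle / 2.0)};   // (x, y, z, w)
  return q;
}
</code></pre>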
8592
2015-12-02T15:05:25.103
|inverse-kinematics|orientation|
<p>I have the following problem:</p> <p>Given 3 points on a surface, I have to adjust a manipulator end-effector (i.e. pen) on a Baxter Robot, normal to that surface.</p> <p>From the three points I easily get the coordinate frame, as well as the normal vector. My question is now, how can I use those to tell the manipulator its supposed orientation.</p> <p>The Baxter Inverse Kinematics solver takes a $(x,y,z)$-tuple of Cartesian coordinates for the desired position, as well as a $(x,y,z,w)$-quaternion for the desired orientation. What do I set the orientation to? My feeling would be to just use the normal vector $(n_1,n_2,n_3)$ and a $0$, or do I have to do some calculation?</p>
Manipulator end-effector orientation with quaternions
<p>The question is a bit old but an answer might help. I think you are getting confused by thinking of mapping, localization and exploration as separate processes in the context of grid-based FastSLAM. In the most basic form of the algorithm you have the three steps you described:</p> <p>At every timestep :</p> <ol> <li><p>Update the poses of your particles using your motion model</p></li> <li><p>Assign weights using your measurement model</p></li> <li><p>Use the calculated weights to resample the particle set</p></li> </ol> <p>For a given particle, once you have estimated $x_{t}$ based on $x_{t-1}$ through sampling (step 1), you simply update the previous map by integrating the new measurements you have made at time $t$ "trusting" that they were made from pose $x_{t}$.</p> <p>It's because the algorithm gives importance to assigning precise weights (step 2) and regularly resampling your particle set (step 3) that it can confidently use the $x_{t}$ estimates made through the sampling step to update the map.</p> <p>Because with every timestep you are marginalizing over the previous state - like all Bayesian filtering approaches to online SLAM - you do not need to keep track of the full history of your data.</p> <p>If you are still fuzzy on how these steps work together, you can take a look at the pseudocode for gmapping from <a href="http://www.cs.berkeley.edu/~pabbeel/cs287-fa11/slides/gmapping.pdf" rel="nofollow noreferrer">this presentation</a>:</p> <p><a href="https://i.stack.imgur.com/YCUmJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YCUmJ.png" alt="Pseudocode for gmapping"></a></p>
8595
2015-12-03T02:32:55.290
|slam|occupancygrid|
<p>It's unclear as to how one goes about integrating Occupancy Grid mapping and Monte Carlo localization to implement SLAM.</p> <p>Assuming <strong>Mapping</strong> is one process, <strong>Localization</strong> is another process, and some motion generating process called <strong>Exploration</strong> exist. Is it necessary to record all data as sequenced or with time stamps for coherence? </p> <p>There's Motion: $U_t$, Map: $M_t$, Estimated State: $X_t$, Measurement: $Z_t$</p> <p>so..</p> <ul> <li><p>each <strong>Estimated state</strong>, $X_t$, is a function of the <strong>current motion</strong>, $U_t$, <strong>current measurement</strong>, $Z_t$, and <strong>previous map</strong>, $M_{t-1}$;</p></li> <li><p>each <strong>confidence weight</strong>, $w_t$, of estimated state is a function of <strong>current measurement</strong>, $Z_t$, <strong>current estimate state</strong>, $X_t$, and <strong>previous map</strong>, $M_{t-1}$;</p></li> <li><p>then each <strong>current map</strong>, $M_t$ is a function of <strong>current measurement</strong>, $Z_t$, <strong>current estimated state</strong>, $X_t$, and <strong>previous map</strong>, $M_{t-1}$.</p></li> </ul> <p>So the question is, is there a proper way of integrating mapping and localization processes? Is it something you record with timestamp or sequences? Are you suppose to record all data, like FullSLAM, and maintain full history. How can we verify they are sequenced at the same time to be referred to as <strong>current</strong> (i.e. measurement) and <strong>previous</strong> (measurement).</p>
In order to integrate MCL and Occupancy Grid to implement Grid-based FastSLAM, do you have to record all data?
<p>Regarding the lift mechanism, I suggest you look at the semiconductor wafer-handling robots. You can see the insides of one version here: <a href="https://www.ifixit.com/Teardown/Hine+Design+Inc.+Automated+Wafer+Handling+Unit+Teardown/1651" rel="nofollow">https://www.ifixit.com/Teardown/Hine+Design+Inc.+Automated+Wafer+Handling+Unit+Teardown/1651</a> </p> <p>In that robot they used a lead screw vertical drive mechanism along with (it looks like) two cylindrical shafts for stability and linearity. When I've built systems using parallel rails or shafts, it is very difficult to keep them collinear, so we would usually fix both ends of one rail, and only one end of the second rail. The other end could float over a small range. That way we would prevent binding due to misalignment issues.</p> <p>That industry has made vertical lift mechanisms for robots out of everything from pneumatics, to linear motors, to ball screws, and more. All of their designs require compactness and cleanliness, so some of the designs are very clever. You can search for patents by companies such as Genmark, PRI Automation, Brooks Automation, Applied Materials, KLA Tencor, Hine Design, and Adept to find good examples.</p>
8598
2015-12-03T17:58:28.307
|mechanism|
<p>What kind of systems can be used to make a torso lifting system like the one used by this robot (the black part) :</p> <p><a href="https://i.stack.imgur.com/r6okd.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r6okd.jpg" alt="enter image description here"></a></p> <ul> <li>Rack and pinion</li> <li>lead screw</li> <li>scissor lift</li> <li>can a triple tree help ? </li> </ul> <p>What are the pro and cons of each system ? How do they ensure stability ? And finally, is there a way to draw current when lowering instead of drawing current when lifting ?</p>
Comparison of lifting systems
<p>A simple rule of thumb is based on the rise time that you want to achieve. This is basically the time that the final system needs to change from one position to another.</p> <p>A reasonable choice is a value between 5 and 100 samples within the desired rise time.</p> <p>The concrete choice may be influenced by a number of factors, for example:</p> <ul> <li>the method of controller design</li> <li>the available computing power</li> <li>the existing disturbances</li> </ul> <p>For a short overview, check my article <a href="https://www.imt.ch/en/insight/saving-computing-power-with-the-right-choice-of-sampling-time/" rel="nofollow noreferrer">https://www.imt.ch/en/insight/saving-computing-power-with-the-right-choice-of-sampling-time/</a></p>
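<p>A quick numeric illustration of the rule (the numbers are made up): if the controlled motion settles in about 0.5 s and you aim for roughly 20 samples within that rise time, the sample period comes out to 25 ms, i.e. a 40 Hz control loop.</p> <pre><code>const float riseTime       = 0.5f;   // seconds, estimated from a step response
const int   samplesPerRise = 20;     // anywhere in the 5..100 range
const float samplePeriod   = riseTime / samplesPerRise;   // 0.025 s -&gt; 40 Hz loop
</code></pre>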
8601
2015-11-28T09:07:21.703
|pid|
<p>I have a robotic system I'm controlling with Arduino, is there an heuristic way to determine a proper sampling time for my PID controller? Considering I have some other things to compute on my sketch that require time, but of course a good sampling time is crucial.</p> <p>Basically I have a distance sensor that needs to detect at a constant rate an object that is moving, sometimes slow, sometimes fast. I don't have a good model of my system so I can't actually tell the physical frequency of the system.</p>
Choosing a proper sampling time for a PID controller
<p>You'd probably have more success with Android; for example, NXT Remote Control took me 30 seconds to find.</p>
8611
2015-12-05T03:59:25.180
|software|research|
<p>I have an R/C car and there is a program on my computer in which I can code the car to perform movements. I would like to have an application with a visual design, where it shows the car's path.</p> <p>Is there available software code for this? It would save me lots of time.</p>
IPhone controlled RC car
<p>MEMS gyros all have a problem with drift due to temperature. This can be worked around using several techniques, and higher cost parts generally have some amount of compensation built in. However, in my experience even the most expensive gyros need some individual calibration or sensor fusion techniques to deal with drift.</p> <p>Taking a look at this sensor package, the 3 axis gyro is not calibrated for temperature, but does include a temperature sensor. This is pretty common in low cost gyros. </p> <p>As an aside, I would not consider this part an IMU since it does no sensor fusion or calibration between the gyro, temperature sensor, accelerometer and magnetometer. This is reflected in its price.</p> <p>Anyway, what that means is you will need to come up with a way to deal with the gyro drift yourself.</p> <p>The best technique is going to depend on your application and budget, but here are some approaches to look at:</p> <ol> <li>Buy a better IMU: Depending on your budget, this may be the simplest solution. Expect to spend between $50-$100.00 for such a device.</li> <li>Have an operational calibration process: In this case, you would measure the drift rate of each axis every time you knew it was not in motion, keeping it updated in a moving average and apply that as an offset to the measured rate. This is a simple solution if you have a way of knowing that the sensor is not moving and that will happen regularly during operation.</li> <li>Calibrate the offset for temperature: If you have access to a precision thermal chamber, you can pre-calculate the drift rate vs temperature function and update it on the fly. This works best if combined with an occasional stopped calibration.</li> <li>Implement a Complementary or Kalman filter to process the data. This is described in the answers to <a href="https://stackoverflow.com/questions/1586658/combine-gyroscope-and-accelerometer-data">https://stackoverflow.com/questions/1586658/combine-gyroscope-and-accelerometer-data</a> and can be combined with 2 and 3.</li> </ol>
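<p>Approach 2 can be as simple as the following sketch (my own illustration, not vendor code): whenever you know the sensor is stationary, fold the raw rate into a running bias estimate, and subtract that bias from every measurement:</p> <pre><code>float gyroBias[3] = {0.0f, 0.0f, 0.0f};
const float biasGain = 0.01f;          // smoothing factor of the running average

// axis: 0..2, rawRate: gyro reading, knownStationary: true while the robot is at rest
float correctedRate(int axis, float rawRate, bool knownStationary) {
  if (knownStationary) {
    // exponential moving average of the at-rest reading (the drift estimate)
    gyroBias[axis] = (1.0f - biasGain) * gyroBias[axis] + biasGain * rawRate;
  }
  return rawRate - gyroBias[axis];     // bias-compensated rate
}
</code></pre>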
8631
2015-12-07T17:43:47.353
|sensors|imu|sensor-fusion|
<p>Hi, I'm using a "minImu 9" 9 DOF IMU (gyro, accelerometer and compass) sensor and it gives pitch, roll and yaw values with a slope (drift) while resting on a desktop (no touch, no vibration, steady). The Y axis is the angle in degrees and the X axis is time in seconds. The X axis length is 60 seconds. How can I fix this?</p> <p>Pitch</p> <p><a href="https://i.stack.imgur.com/D0fH9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D0fH9.png" alt="enter image description here"></a></p> <p>Roll</p> <p><a href="https://i.stack.imgur.com/7hI8v.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7hI8v.png" alt="enter image description here"></a></p> <p>Yaw</p> <p><a href="https://i.stack.imgur.com/gr6XY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gr6XY.png" alt="enter image description here"></a></p> <p>Note1: <a href="https://github.com/pololu/minimu-9-ahrs-arduino" rel="nofollow noreferrer">minIMU code</a></p>
IMU sensor and compensation
<p>First, please respond to the questions I ask in the comments on your question.</p> <p>You mention that the device to be controlled is a motor. Bear in mind that rotating power is $P = \tau \omega$, or torque times speed, so you won't get the same response at high speed that you get at low speed because motor torque typically declines with increasing speed. </p> <p>I'm not sure if this is the cause of your differing responses or not, but I believe it would have a large impact <em>unless</em> you're just using "1.0" and "2.0" as placeholders for some other value, or they are literally 1 and 2 and are very close together percentage-wise in terms of the entire speed band of the motor.</p> <p>Regarding wind-up, mentioned in 50k4's answer, I wouldn't think it would be an issue unless your process is saturating, which is to say that it shouldn't be an issue unless your motor is hitting some physical limits that would prevent it from responding, like a current limit or something similar. Note however that if you <em>are</em> getting to the point where your response is saturating that you are also likely operating at a point where motor power is limited and thus you are unlikely to get the same response curves no matter what actions you take to tune a standard PID controller. </p> <p>Ultimately, if I were you, I would evaluate the errors and determine if there is a problem with your error accounting or if you are realizing the physical limitations of a motor. Keep in mind that the <em>entire point</em> of the integral term is to accumulate error - this will speed control response if it is "taking too long". Accumulated error before settling is both normal and desirable <em>as long as</em> the controller output is still able to effect change in the system. Only when the system becomes unresponsive: linear actuators at an end stop; motors at top speed; motor drivers at a current limit; etc., will wind-up become an issue. </p> <p>I would output integral error, set the motor setpoint to 1.0, then wait until integral error is (effectively) zero, then change the setpoint to 2.0. If the output response is still inconsistent, then you have a problem with the physical limitations of the system (increasing torque required at higher speeds). If you get consistent responses then you are entering "Phase 2" at time "t=N', which is still inside the response time for Phase 1. As the controller is still actively responding it will naturally generate a different response. At that point, consider waiting longer to enter Phase 2 or increasing the gains on your controller to achieve a settled response by time "t=N". </p> <p>As a final comment, the last line of your code has "Term = <strong>k</strong> * (<code>error terms</code>)". If you are doing everything else correctly (generally, have your sampling time correct), then you should leave $k=1$, or just use the sum of the PID terms as the controller output.</p>
8636
2015-12-08T09:33:10.837
|pid|
<p>I'm trying to implement a PID controller by myself and I've a question about the sum_error in I control. Here is a short code based on the PID theory.</p> <pre><code>void pid() { error = target - current; pTerm = Kp * error; sum_error = sum_error + error * deltaT ; iTerm = Ki * sum_error; dTerm = Kd * (error - last_error) / deltaT; last_error = error; Term = K*(pTerm + iTerm + dTerm); } </code></pre> <p>Now, I start my commands:</p> <p>Phase 1, If at t=0, I set target=1.0, and the controller begins to drive motor to go to the target=1.0, Phase 2, and then, at t=N, I set target=2.0, and the controller begins to drive motor to go to the target=2.0</p> <p>My question is, in the beginning of phase 1, the error=1.0, the sum_error=0, and after the phase 1, the sum_error is not zero anymore, it's positive. And in the beginning of phase 2, the error=1.0 (it is also the same with above), but the sum_error is positive. So, the iTerm at t=N is much greater than iTerm at t=0.</p> <p>It means, the curves between phase 2 and phase 1 are different!!!</p> <p>But to end-user, the command 1, and the command 2 is almost the same, and it should drive the same effort.</p> <p>Should I set the sum_error to zero or bound it? Can anyone tell me how to handle the sum_error in typical?</p> <p>Any comment will be much appreciated!!</p> <p>Kevin Kuei</p>
sum_error in PID controller
<p>Something's not right, either with your test, your conversion, or with the device itself. If you look at your graph of heading, pitch, and roll, you can see that heading rotates through 360 degrees, which is expected, but roll changes almost 180 degrees in one direction, then 180 degrees back. If your device is loose and rolling around on the turntable then maybe that's correct, but otherwise you've got something else going on. </p> <p><strong>Can you provide your initialization code?</strong> Notice that you can't just power on this device and use it - from <a href="http://www.pnicorp.com/wp-content/uploads/Sentral-MandM-Technical-Datasheet_rG.pdf" rel="nofollow">page 16 of the manual</a>:</p> <blockquote> <p><strong>Note:</strong> It is necessary to set the MagRate, AccelRate, AND GyroRate registers to non-zero values for the SENtral algorithm to function properly and to obtain reliable orientation and scaled sensor data. If a [Sensor]Rate register is left as 0x00 after power-up, or is changed to 0x00, this effectively disables that sensor within the SENtral algorithm. Also, the CalStatus, MagTransient, and AlgorithmSlow bits become undefined.</p> </blockquote> <p><strong>Are you getting any errors?</strong> Page 17, step (d) states you are to read the event register and then process the bits in the priority order specified in Figure 4-3 (at the top of that page), which means you are to check and act on an error bit <em>before</em> checking the sensor data available bits. </p> <p>Finally, <strong>can you provide your sensor read code?</strong> Your sample data with the discontinuity shows values in the range of -1 to 1, but Page 19 clearly states that the Full Scale Output for the quaternions is 0-1, or $\pm \pi$. All of your data appears to be bounded at +1 on the high end, which makes me believe that you are not operating in the $\pm \pi$ band, so maybe you are reconstructing the bytes incorrectly upon receipt. </p>
8642
2015-12-09T20:22:49.593
|sensors|calibration|orientation|
<p>Why is there a discontinuity in the quaternion representation of my device orientation?</p> <p>I'm using a <a href="http://www.pnicorp.com/products/sentral-mm/" rel="nofollow noreferrer">SENtral+PNI RM3100+ST LSM330</a> to track orientation. I performed the following test:</p> <ol> <li>Place the device in the center of a horizontal rotating plate ("lazy susan").</li> <li>Pause for a few seconds.</li> <li>Rotate the plate 360° clockwise.</li> <li>Pause for a few seconds.</li> <li>Rotate the plate 360° clockwise again.</li> </ol> <p>I got this output, which appears discontinuous at sample #1288-1289. <a href="https://i.stack.imgur.com/B4BQA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B4BQA.png" alt="enter image description here"></a></p> <p>Sample #1288 has <code>(Qx,Qy,Qz,Qw) = (0.5837, 0.8038, 0.0931, 0.0675)</code>, but sample #1289 has <code>(Qx,Qy,Qz,Qw) = (0.7079, -0.6969, -0.0807, 0.0818)</code>.</p> <p>Plugging in the formulas on page 32 of <a href="http://www.pnicorp.com/wp-content/uploads/Sentral-MandM-Technical-Datasheet_rG.pdf" rel="nofollow noreferrer">this document</a>, this corresponds to a change in orientation from <code>(Heading, Pitch, Roll) = (108°, 0°, 142°)</code> to <code>(Heading, Pitch, Roll) = (-89°, 0°, 83°)</code>.</p> <p>The <a href="https://i.stack.imgur.com/ZGvMI.png" rel="nofollow noreferrer">graph of (Heading, Pitch, Roll)</a> is also not continuous mod 90°.</p> <p>Does this output make sense? I did not expect a discontinuity in the first plot, since the unit quaternions are a covering space of SO(3). Is there a hardware problem, or am I interpreting the data incorrectly?</p> <p><strong>Edit:</strong> The sensor code is in <a href="http://pastebin.com/raw/sT2F7f5g" rel="nofollow noreferrer">central.c</a> and <a href="http://pastebin.com/raw/qu8RFx4z" rel="nofollow noreferrer">main.c</a>. It is read with this <a href="http://pastebin.com/raw/7QXJ00cq" rel="nofollow noreferrer">Python script</a>.</p>
Discontinuity in device orientation
<p>I guess you want to find a cubic polynomial for the end effector. You have 3 coordinates for your points A and B; from your question it is not clear whether they are $x,y,z$ or $x,y,\theta$. Anyway, I'll show here the procedure for $x$, and you can repeat it for the other two coordinates.</p> <p>Given the cubic parametric form $x = a_0+a_1 t + a_2 t^2 + a_3 t^3$ ($*$), you want to find the parameters $a_i$. </p> <p>At $t=0$ you know that $x = 1.5$, so from ($*$) $a_0 = 1.5$.</p> <p>At $t=10$ you know that $x = 1$, so from ($*$) $a_0+10a_1 + 100a_2 + 1000a_3 = 1$.</p> <p>Let's differentiate the polynomial: $\dot{x} = a_1 + 2a_2 t + 3a_3 t^2$ ($**$).</p> <p>At $t=0$ you know that $\dot{x} = 0$ (rest condition), so from ($**$) $a_1 = 0$.</p> <p>At $t=10$ you know that $\dot{x} = 0$ (stop condition), so from ($**$) $a_1 + 20a_2 + 300a_3 = 0$.</p> <p>So you have:</p> <p>$a_0 = 1.5$</p> <p>$a_1 = 0$</p> <p>$a_0+10a_1 + 100a_2 + 1000a_3 = 1 \Rightarrow 1.5 + 100a_2 + 1000a_3 = 1$</p> <p>$a_1 + 20a_2 + 300a_3 = 0 \Rightarrow 20a_2 + 300a_3 = 0$</p> <p>From the last equation you get $a_2 = -15a_3$; substituting into the second-to-last you get $a_3 = 0.001$, and substituting $a_3$ back you get $a_2 = -0.015$. </p> <p>So you have all the $a_i$ coefficients to replace in $(*)$ and you can get the solution $x(t) = 1.5 - 0.015t^2 + 0.001t^3$. You can do the same for the other 2 coordinates by changing the initial and final conditions.</p>
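<p>A small numeric check of the derivation (a sketch using the same boundary conditions in their general form: $a_2 = 3(x_f-x_0)/T^2$ and $a_3 = -2(x_f-x_0)/T^3$ when the start and end velocities are zero):</p> <pre><code>#include &lt;cstdio&gt;

int main() {
  double x0 = 1.5, xf = 1.0, T = 10.0;           // boundary conditions for x
  double a0 = x0;                                 // position at t = 0
  double a1 = 0.0;                                // rest at t = 0
  double a2 =  3.0 * (xf - x0) / (T * T);         // = -0.015
  double a3 = -2.0 * (xf - x0) / (T * T * T);     // =  0.001
  double t  = T;
  printf("a = [%g %g %g %g], x(T) = %g\n",
         a0, a1, a2, a3, a0 + a1*t + a2*t*t + a3*t*t*t);   // x(T) = 1.0
  return 0;
}
</code></pre>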
8654
2015-12-11T03:22:28.620
|motion-planning|inverse-kinematics|motion|jacobian|
<p>My professor gave us an assignment in which we have to find the cubic equation for a 3-DOF manipulator. The end effector is resting at A(1.5, 1.5, 1) and moves and stops at B(1, 1, 2) in 10 seconds. How would I go about this? Would I use the Jacobian matrix, or would I use path planning and the coefficient matrix to solve my problem? I'm assuming the coefficient matrix, but I am not given the original position in angle form. I was only taught how to use path planning when the original angles are given.</p>
Finding cubic polynomial equation for 3 joints
<p>:EDIT:</p> <p>I've edited out most of the content I had previously written because your code <em>does</em> work (except for the mis-matched parenthesis), but it threw me off because this is not really a complementary filter. You have a hodge-podge here that is confusing to look at initially.</p> <p>First, you have a <a href="http://my.execpc.com/~steidl/robotics/first_order_lag_filter.html" rel="nofollow">lag filter</a> on the accelerometer output:</p> <pre><code>alpha = 0.98; filteredAccelerometer = alpha*filteredAccelerometer + (1-alpha)*rawAccelerometer; </code></pre> <p>Then, instead of doing proper numeric integration on the gyro, which would look something like:</p> <pre><code>gyroAngle = gyroAngle + gyroAngleVelocityArray.Pitch*deltaTime; </code></pre> <p>you instead are substituting in the filtered accelerometer measurement as the "accumulated" angle:</p> <pre><code>gyroAngle = filteredAccelerometer + gyroAngleVelocityArray.Pitch*deltaTime; </code></pre> <p>and you are calling all of this a "complementary filter", such that:</p> <pre><code>pitchAngleCF = alpha*pitchAngleCF + (1-alpha)*angleAccelArray.Pitch + gyroAngleVelocityArray.Pitch*deltaTime; </code></pre> <p>However, this isn't really a complementary filter because there's no "option" or "setting" to blend the gyro and accelerometer outputs. I would suggest converting both sensors to a common unit (degrees, for instance), then using the complementary filter to combine the two measurements, then filtering <em>that</em> if you find that anything needs filtering. </p> <p>So, following your schema:</p> <pre><code>accelVelocityArray.Pitch = accelVelocityArray.Pitch + accelAccelArray.Pitch*deltaTime; accelAngleArray.Pitch = accelAngleArray.Pitch + accelVelocityArray.Pitch*deltaTime; gyroAngleArray.Pitch = gyroVelocityArray.Pitch*deltaTime; alpha = 0.95; pitchCF = alpha*gyroAngleArray.Pitch + (1-alpha)*accelAngleArray.Pitch; </code></pre> <p><em>THIS</em> is the output of a complementary filter. Now if you find that your craft is not responsive enough you can increase <code>alpha</code> (take more of the reading from the gyro), or if you find that there is too much drift you can decrease <code>alpha</code> (take more of the reading from the accelerometer). If you find that the reading is too noisy, then you can put this through a <em>lag filter</em>:</p> <pre><code>beta = 0.9; filteredPitch = beta*pitchCF + (1-beta)*filteredPitch; </code></pre> <p>NOW you adjust <code>beta</code> to determine how smooth you want the <em>combined</em> pitch reading to be. </p> <p>I hope this clarifies things for you. </p> <p>Now, regarding your actual question, I don't think it's possible to give a quality answer with the (lack of) data you have provided. </p> <p>It is true that, for a real system with inertia, if you set the proportional gain too high the system will oscillate. However, it is not clear if you are only using a proportional gain (if you have kI and kD set to zero). Also, because the craft is tethered and you are holding it, it's not clear if the craft is responding to its own controller or if it's responding to the external forces (you and the tether).</p> <p>I'm also not sure how stable your control loop timing is and/or how you are determining what <code>deltaTime</code> should be. 
If you're interested in manual tuning of a PID controller, <strong>first be sure that you are processing your sensor output correctly</strong> (see my above comments regarding sensor fusion and filtering), then give the <a href="https://en.wikipedia.org/wiki/Ziegler%E2%80%93Nichols_method" rel="nofollow">Ziegler-Nichols tuning method</a> a try:</p> <ol> <li>Set kP, kI, and kD to zero.</li> <li>Increase kP until the system achieves consistent oscillations.</li> <li>Record kP (as the "ultimate" gain, kU), and the oscillation period. </li> <li>Set kP, kI, and kD to values based on the ultimate gain and oscillation period. </li> </ol>
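<p>For step 4, the classic Ziegler-Nichols "PID" row gives the gains from the ultimate gain and oscillation period measured in steps 2-3 (the 0.60/1.2/0.075 constants are the textbook values; the Ku and Tu numbers below are placeholders you measure yourself):</p> <pre><code>float Ku = 3.0f;    // example only: the gain at which you saw steady oscillation
float Tu = 0.8f;    // example only: the measured oscillation period, in seconds

float kP = 0.60f  * Ku;          // classic Ziegler-Nichols PID row
float kI = 1.2f   * Ku / Tu;     // = kP / (0.5 * Tu)
float kD = 0.075f * Ku * Tu;     // = kP * (0.125 * Tu)
</code></pre>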
8657
2015-12-11T10:15:07.393
|quadcopter|pid|sensor-fusion|tuning|filter|
<p>Good day,</p> <p>I am currently working on a project using a complementary filter for sensor fusion and a PID algorithm for motor control. I viewed a lot of videos on YouTube as well as consulted various blogs and papers on what to expect when setting the P gain too high or too low.</p> <p>P gain too low</p> <blockquote> <p>easy over-correction and easy to turn by hand</p> </blockquote> <p>P gain too high</p> <blockquote> <p>oscillates rapidly</p> </blockquote> <p>I have a sample video of what I think a high P gain (3 in my case) looks like. Does this look like the P gain is too high? <a href="https://youtu.be/8rBqkcmVS1k" rel="nofollow">https://youtu.be/8rBqkcmVS1k</a> </p> <p>From the video:</p> <p>I noticed that the quad sometimes corrects its orientation immediately after turning a few degrees (4-5 deg). However, it does not do so in a consistent manner.</p> <p>It also overcorrects.</p> <p>The reason behind my doubt is that the quadcopter doesn't react immediately to changes. I checked the complementary filter. It updates (fast) the filtered angle reading from sudden angular acceleration from the gyro as well as updates the long-term filtered angle changes from the accelerometer (albeit slowly). If I am right, is the P gain responsible for compensating for the "delay"?</p> <p>The formula I used in the complementary filter is the following:</p> <pre><code>float alpha = 0.98; float pitchAngleCF=(alpha)*pitchAngleCF+gyroAngleVelocityArray.Pitch*deltaTime)+(1-alpha)*(accelAngleArray.Pitch); </code></pre> <p>Here is a video for a P gain of 1: <a href="https://youtu.be/rSBrwULKun4" rel="nofollow">https://youtu.be/rSBrwULKun4</a></p> <p>Your help would be very much appreciated :)</p>
P gain tuning for quadcopter (Is my perception for a P-gain too high correct?)
<p>You don't have axes shown, so the math is a little hard to follow, but based on the bottom row of $ J_2$ it looks like y is along the length of link 2. If this is the case, then I would expect rotation about x and z to have the same moment of inertia, because each would be rotating the cylinder about its end instead of about its longitudinal axis. </p> <p>This means that Ixx and Izz should cancel where their signs are different, as in J11 and J33, leaving only Iyy/2, but instead of that I see zeros as the entries there. As this is (a model of) a real link, Iyy must be something; I would assume $ mr^2/2$, or 0.0025. This should be in J11 and J33. I haven't checked your math on J22, but given that Iyy doesn't cancel out there and you end up with a very "round" 2/3 as an answer, I suspect it is incorrect as well. </p> <p>Check your math on your J2 matrix and/or provide drawings of link 2 with a set of datums (dati?) (side note: the plural of datum is data, but I think it would be confusing to ask for "drawings with a set of data" when I mean graphical reference points, so I'll leave it as datums). I think this is where your problem lies. </p>
8661
2015-12-11T14:58:24.993
|dynamics|matlab|torque|
<p>I have a 2DOF robot with 2 revolute joints, as shown in the diagram below. I'm trying to calculate (using MATLAB) the torque required to move it but my answers don't match up with what I'm expecting. <a href="https://i.stack.imgur.com/K7K0t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K7K0t.png" alt="Robot configuration"></a></p> <p>Denavit-Hartenberg parameters: $$ \begin{array}{c|cccc} joint &amp; a &amp; \alpha &amp; d &amp; \theta \\ \hline 1 &amp; 0 &amp; \pi/2 &amp; 0 &amp; \theta_1 \\ 2 &amp; 1 &amp; 0 &amp; 0 &amp; \theta_2 \\ \end{array} $$</p> <p>I'm trying to calculate the torques required to produce a given acceleration, using the Euler-Lagrange techniques as described on pages 5/6 in <a href="http://www.worldacademicunion.com/journal/1746-7233WJMS/wjmsvol05no01paper02.pdf" rel="nofollow noreferrer">this paper</a>. Particularly, $$ T_i(inertial) = \sum_{j=0}^nD_{ij}\ddot q_i$$ where $$ D_{ij} = \sum_{p=max(i,j)}^n Trace(U_{pj}J_pU_{pi}^T) $$ and $$ J_i = \begin{bmatrix} {(-I_{xx}+I_{yy}+I_{zz}) \over 2} &amp; I_{xy} &amp; I_{xz} &amp; m_i\bar x_i \\ I_{xy} &amp; {(I_{xx}-I_{yy}+I_{zz}) \over 2} &amp; I_{yz} &amp; m_i\bar y_i \\ I_{xz} &amp; I_{yz} &amp; {(I_{xx}+I_{yy}-I_{zz}) \over 2} &amp; m_i\bar z_i \\ m_i\bar x_i &amp; m_i\bar y_i &amp; m_i\bar z_i &amp; m_i \end{bmatrix} $$</p> <p>As I was having trouble I've tried to create the simplest example that I'm still getting wrong. For this I'm attempting to calculate the inertial torque required to accelerate $\theta_1$ at a constant 1 ${rad\over s^2}$. As $\theta_2$ is constant at 0, I believe this should remove any gyroscopic/Coriolis forces. I've made link 1 weightless so its pseudo-inertia matrix is 0. I've calculated my pseudo-inertia matrix for link 2: $$ I_{xx} = {mr^2 \over 2} = 0.0025\\ I_{yy} = I_{zz} = {ml^2 \over 3} = 2/3 $$ $$ J_2 =\begin{bmatrix} 1.3308 &amp; 0 &amp; 0 &amp; -1 \\ 0 &amp; 0.0025 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0.0025 &amp; 0 \\ -1 &amp; 0 &amp; 0 &amp; 2 \\ \end{bmatrix} $$</p> <p>My expected torque for joint 1: $$ T_1 = I\ddot \omega \\ T_1 = {ml^2 \over 3} \times \ddot \omega \\ T_1 = {2\times1\over3}\times1 \\ T_1= {2\over3}Nm $$</p> <p>The torque calculated by my code for joint 1:</p> <pre><code>q = [0 0]; qdd = [1 0]; T = calcT(q); calc_inertial_torque(1, T, J, qdd) </code></pre> <p>$$ T_1={4\over3}Nm $$</p> <p><strong>So this is my problem</strong>, my code $T_1$ doesn't match up with my simple mechanics $T_1$.</p> <p>The key functions called are shown below.</p> <pre><code>function inertial_torque_n = calc_inertial_torque(n, T, J, qdd) inertial_torque_n = 0; for j = 1:2 Mnj = 0; joint_accel = qdd(j); for i = 1:2 Uij = calcUij(T, i, j); Ji = J(:,:,i); Uin = calcUij(T, i, n); Mnj = Mnj + trace(Uin*Ji*transpose(Uij)); end inertial_torque_n = inertial_torque_n + Mnj * joint_accel; end end function U=calcUij(T,i,j) T(:,:,j) = derivative(T(:,:,j)); U = eye(4,4); for x = 1:i U = U*T(:,:,x); end end function T = derivative(T) dt_by_dtheta = [0 -1 0 0 1 0 0 0 0 0 0 0 0 0 0 0]; T = dt_by_dtheta*T; end </code></pre> <p>I realise this is a fairly simple robot, and a complicated process - but I'm hoping to scale it up to more DOF once I'm happy it works. </p>
Calculating required torques for a given trajectory using Lagrange-Euler
<p>You're trying to do numeric integration, which takes the form:</p> <p>$$ \mbox{integrated value } += \mbox{derivative} * \mbox{elapsed time} $$</p> <p>What you have instead of <code>elapsed time</code> is some value called <code>speed</code>. Try setting up your numeric integration code on an interrupt, where the interrupt timing is what you would use in place of <code>elapsed time</code>. </p> <p>I'm not sure what method you're using to get from quaternions to the rotated acceleration vector, but I would like to point out that you can't just do numeric integration on quaternions like you can with accelerations or velocities. See <a href="http://www-users.cs.umn.edu/~trawny/Publications/Quaternions_3D.pdf">page 11 of this document</a> for more detail, but briefly, take your gyroscope angular rates $\omega_x, \omega_y, \omega_z$ and the existing quaternion $q(t)$ and calculate the quaternion derivative:</p> <p>$$ \dot{q}(t) = \frac{1}{2} \left[ \begin{array}{cccc} 0 &amp; \omega_z &amp; -\omega_y &amp; \omega_x \\ -\omega_z &amp; 0 &amp; \omega_x &amp; \omega_y \\ \omega_y &amp; -\omega_x &amp; 0 &amp; \omega_z \\ -\omega_x &amp; -\omega_y &amp; -\omega_z &amp; 0 \end{array} \right] q(t) $$</p> <p>Then you numerically integrate <em>that</em>, such that, for a discrete system,</p> <p>$$ q_k = q_{k-1} + \dot{q}_k * dT $$</p> <p>You do not provide any code on how you're updating your acceleration vector, no code on how you're getting a quaternion, etc., so it's not possible to give you any more specific feedback than this.</p>
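<p>A minimal sketch of that update in plain C (my own illustration, not tied to the MPU library; the quaternion is stored as (x, y, z, w) to match the matrix above):</p> <pre><code>#include &lt;math.h&gt;

typedef struct { float x, y, z, w; } Quat;

// One Euler integration step: q_k = q_{k-1} + q_dot * dT, then re-normalize.
Quat integrateGyro(Quat q, float wx, float wy, float wz, float dT) {
  Quat qd;   // q_dot = 0.5 * Omega(w) * q, written out component-wise
  qd.x = 0.5f * ( wz*q.y - wy*q.z + wx*q.w);
  qd.y = 0.5f * (-wz*q.x + wx*q.z + wy*q.w);
  qd.z = 0.5f * ( wy*q.x - wx*q.y + wz*q.w);
  qd.w = 0.5f * (-wx*q.x - wy*q.y - wz*q.z);
  q.x += qd.x * dT;  q.y += qd.y * dT;  q.z += qd.z * dT;  q.w += qd.w * dT;
  float n = sqrtf(q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w);
  q.x /= n;  q.y /= n;  q.z /= n;  q.w /= n;
  return q;
}
</code></pre>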
8680
2015-12-14T12:53:17.020
|accelerometer|algorithm|
<p>Please help me with the following task. I have MPU 9150 from which I get acceleration/gyro and magnetometer data. What I'm currently interested in is to get the orientation and position of the robot. I can get the position using quaternions. Its quite stable. Rarely changes when staying still. But the problem is in converting accelerometer data to calculate the displacement.</p> <p>As I know its required to to integrate twice the accel. data to get position. Using quaternion I can rotate the vector of acceleration and then sum it's axises to get velocity then do the same again to get position. But it doesn't work that way. First of all moving the sensor to some position and then moving it back doesn't give me the same position as before. The problem is that after I put the sensor back and it stays without any movement the velocity doesn't change to zero though the acceleration data coming from sensors are zeros.</p> <p>Here is an example (initially its like this): the gravity: -0.10 -0.00 1.00<br> raw accel: -785 -28 8135<br> accel after scaling to +-g: -0.10 -0.00 0.99<br> the result after rotating accel vector using quaternion: 0.00 -0.00 -0.00</p> <p>After moving the sensor and putting it back it's acceleration becomes as: 0.00 -0.00 -0.01 0.00 -0.00 -0.01 0.00 -0.00 -0.00 0.00 -0.00 -0.01 and so on. If I'm integrating it then I get slowly increasing position of Z.</p> <p>But the worst problem is that the velocity doesn't come back to zero</p> <p>For example if I move sensor once and put it back the velocity will be at: -0.089 for vx and 0.15 for vy</p> <p>After several such movements it becomes: -1.22 for vx 1.08 for vy -8.63 for vz</p> <p>and after another such movement:</p> <p>vx -1.43 vy 1.23 vz -9.7</p> <p>The x and y doesnt change if sensor is not moving but Z is changing slowly. 
Though the quaternion is not changing at all.</p> <p>What should be the correct way to do that task?</p> <p>Here is the part of code for integrations:</p> <pre><code>vX += wX * speed; vY += wY * speed; vZ += wZ * speed; posX += vX * speed; posY += vY * speed; posZ += vZ * speed; </code></pre> <p>Currently set speed to 1 just to test how it works.</p> <p><strong>EDIT 1:</strong> Here is the code to retrieve quaternion and accel data, rotate and compensate gravity and get final accel data.</p> <pre><code> // display initial world-frame acceleration, adjusted to remove gravity // and rotated based on known orientation from quaternion mpu.dmpGetQuaternion(&amp;q, fifoBuffer); mpu.dmpGetAccel(&amp;aaReal, fifoBuffer); mpu.dmpGetGravity(&amp;gravity, &amp;q); //Serial.print("gravity\t"); Serial.print(gravity.x); Serial.print("\t"); Serial.print(gravity.y); Serial.print("\t"); Serial.print(gravity.z); Serial.print("\t"); //Serial.print("accell\t"); Serial.print(aaReal.x); Serial.print("\t"); Serial.print(aaReal.y); Serial.print("\t"); Serial.print(aaReal.z); Serial.print("\t"); float val = 4.0f; float ax = val * (float)aaReal.x / 32768.0f; float ay = val * (float)aaReal.y / 32768.0f; float az = val * (float)aaReal.z / 32768.0f; theWorldF.x = ax; theWorldF.y = ay; theWorldF.z = az; //Serial.print("scaled_accel\t"); Serial.print(ax); Serial.print("\t"); Serial.print(ay); Serial.print("\t"); Serial.print(az); Serial.print("\t"); theWorldF.x -= gravity.x; theWorldF.y -= gravity.y; theWorldF.z -= gravity.z; theWorldF.rotate(&amp;q); //gravity.rotate(&amp;q); //Serial.print("gravity_compensated_accel\t"); Serial.print(theWorldF.x); Serial.print("\t"); Serial.print(theWorldF.y); Serial.print("\t"); Serial.print(theWorldF.z); Serial.print("\t"); Serial.print(deltaTime); Serial.println(); </code></pre> <blockquote> <p>EDIT 2:</p> </blockquote> <p>dmpGetQuaternion, dmpGetAccel functions are just reading from the FIFO buffer of MPU. </p> <p>dmpGetGravity is:</p> <pre><code>uint8_t MPU6050::dmpGetGravity(VectorFloat *v, Quaternion *q) { v -&gt; x = 2 * (q -&gt; x*q -&gt; z - q -&gt; w*q -&gt; y); v -&gt; y = 2 * (q -&gt; w*q -&gt; x + q -&gt; y*q -&gt; z); v -&gt; z = q -&gt; w*q -&gt; w - q -&gt; x*q -&gt; x - q -&gt; y*q -&gt; y + q -&gt; z*q -&gt; z; return 0; } </code></pre> <p><strong>EDIT 3:</strong> the library for using MPU 9150: <a href="https://github.com/sparkfun/MPU-9150_Breakout" rel="nofollow">https://github.com/sparkfun/MPU-9150_Breakout</a></p> <p><strong>EDIT 4: Another example</strong> </p> <p>gravity vector: -1.00 -0.02 0.02<br> raw accel data: -8459 -141 125 accel data scaled (+-2g range): -1.03 -0.02 0.02<br> gravity compensation and rotation of accel data: -0.01 0.00 0.33 </p>
calculating position based on accelerometer data
<p>Barometers are cheap, easy to use, and very sensitive. They can be placed inside a sealed rubber ball and detect changes in pressure. See for example these sensors: <a href="http://www.takktile.com/" rel="nofollow">www.takktile.com</a>. (The makers of these sensors encase them in rubber, which you may or may not want to do).</p>
8684
2015-12-15T06:43:42.397
|wheeled-robot|
<p>I am thinking of a project proposal for my robotics course and we are required to make one that has a potential application on physical therapy or medical fields. One thing that came across my mind is a motorized wheelchair that moves when a stress ball control is squeezed by the user. As a robotics novice, I wonder if I could integrate a sensor circuit with a rubber ball so that when it is pressed, perhaps by a stroke patient, it triggers some driver circuit. is this possible? if so, how? My experience with robotics is limited to arduino, servo motors and basic sensors.</p>
Is there a way to use a stress-ball-like device as acceleration control interface
<p>I would add a few lines after you check that theta is between +/- 2pi:</p> <pre><code>meanDistance = (SL + SR)/2; posX = posX + meanDistance*cos (theta); posY = posY + meanDistance*sin(theta); </code></pre> <p>This of course assumes theta is positive CCW starting from the +x-axis. This is similar but not the same as your code for X and Y, but your code appears to put the X origin on the right wheel and Y origin on the left, and I'm not sure why you would do that. </p> <p>Additionally, your code for theta uses a small angle sine approximation, sin(theta) = theta = dy/dL. You could use a true asin for this instead.</p> <p>Otherwise, taking my comments above into account, this is how I would do differential-drive dead reckoning. Your next, least-expensive upgrade would be a compass or magnetometer, but that would probably come with a decent IMU, which would be your second upgrade, with GPS being the last. But, as I said, if all you need is dead reckoning, you've basically got it. </p>
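<p>Putting those additions into the asker's function, the whole update would look roughly like this (a sketch; I've kept the original 6.28 wrap, though using true +/- PI constants would be tidier):</p> <pre><code>// Relies on the question's globals: encoderRPos, encoderLPos,
// distancePerCount, wheelDistance, x, y, theta.
void updateOdometry() {
  static int encoderRPosPrev = 0;
  static int encoderLPosPrev = 0;
  float SR = distancePerCount * (encoderRPos - encoderRPosPrev);
  float SL = distancePerCount * (encoderLPos - encoderLPosPrev);
  encoderRPosPrev = encoderRPos;
  encoderLPosPrev = encoderLPos;

  theta += (SR - SL) / wheelDistance;
  if (theta &gt; 6.28) theta -= 6.28;
  else if (theta &lt; -6.28) theta += 6.28;

  // added per the answer: integrate the averaged wheel travel along the
  // current heading (theta positive CCW from the +x axis)
  float meanDistance = (SR + SL) / 2.0;
  x += meanDistance * cos(theta);
  y += meanDistance * sin(theta);
}
</code></pre>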
8693
2015-12-16T18:42:50.910
|arduino|kinematics|odometry|differential-drive|
<p>I am using a differential wheel robot for my project. I need to know the current coordinates of the robot with respect to it's initial position taken as the origin. I m doing the computation on an Arduino UNO and the only sensory input that I get is from the two encoders. I have the function updateOdomenty() called in the loop() and this is it's corresponding code:</p> <pre><code>void updateOdometry() { static int encoderRPosPrev = 0; static int encoderLPosPrev = 0; float SR = distancePerCount * (encoderRPos - encoderRPosPrev); float SL = distancePerCount * (encoderLPos - encoderLPosPrev); encoderRPosPrev = encoderRPos; encoderLPosPrev = encoderLPos; x += SR * cos(theta); y += SL * sin(theta); theta += (SR - SL) / wheelDistance; if(theta &gt; 6.28) theta -= 6.28; else if(theta &lt; -6.28) theta += 6.28; } </code></pre> <p>This is the code that me and my team mates made after reading <a href="http://rossum.sourceforge.net/papers/DiffSteer/" rel="nofollow">this</a> paper. I am wondering if this is the best possible way to solve this problem with an Arduino. If not, how is odometry done in differential wheeled systems?</p>
How to perform odometry on an arduino for a differential wheeled robot?
<p>In Radio Control lingo, we use PWM (Pulse Width Modulation) to describe a very specific <strong>control</strong> signal, as opposed to a power signal. Check the <a href="https://en.wikipedia.org/wiki/Servo_control#Pulse_duration" rel="nofollow">wiki article</a> for more info. This signal doesn't carry the power required to drive the servo, but encodes angle information for the servo to obey.</p> <p>In order to enclose PWM setpoint information (for more than one servo) on the same wire, we use a PPM signal (Pulse Position Modulation). Essentially, this is multiple PWMs, each one shifted in time a bit, and interleaved. More info <a href="http://rcarduino.blogspot.gr/2012/11/how-to-read-rc-receiver-ppm-stream.html" rel="nofollow">here</a>.</p> <p>In your case, sadly the repo author has made a mistake, and he meant to say <strong>PWM</strong> ESC, instead of CAN-bus, etc.</p> <p>Luckily, Arduino boards can drive RC PWM devices using the <em>Servo</em> library.</p>
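<p>For reference, driving one such RC-style ESC from an Arduino with the Servo library looks like this (the pin number, arming sequence and pulse limits below are typical values, not taken from a specific ESC's documentation):</p> <pre><code>#include &lt;Servo.h&gt;

Servo esc;

void setup() {
  esc.attach(9, 1000, 2000);      // signal pin, min/max pulse width in microseconds
  esc.writeMicroseconds(1000);    // hold minimum throttle so the ESC can arm
  delay(3000);
}

void loop() {
  esc.writeMicroseconds(1300);    // a low, constant throttle
}
</code></pre>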
8698
2015-12-17T08:22:56.087
|arduino|quadcopter|esc|
<p>First, I don't see any manufacturer or online store telling whether an ESC is PPM controlled or not. Second, I have also been Googling and asking in the comments of <a href="https://www.youtube.com/watch?v=OZNxbxL7cdc" rel="nofollow"><em>all about ESCs</em></a> YouTube videos for a long time, but I couldn't find anything useful.</p> <blockquote> <p>Why do I need PPM controlled ESCs?</p> </blockquote> <p>I'm doing a project based on <a href="https://github.com/romainbaud/andro-copter" rel="nofollow">AndroCopter</a> and it's clearly mentioned that it specifically requires the use of PPM controlled ESCs.</p> <blockquote> <p>Can I use any ESCs available in the market for this project?</p> </blockquote> <p>It's also mentioned in the GitHub repo that PPM controlled ESCs are the most common ones. However, someone who explains ESCs in a YouTube video replied to my question saying that the most common ESCs are PWM controlled, which contradicts the previous statement.</p> <p><strong>PS</strong>: I need to use an Arduino Mega to control the four ESCs. And the Arduino Mega is programmed to send out PPM signals, which is exactly why I need PPM controlled ESCs. Correct me if I made any mistakes.</p>
What exactly are PPM controlled ESCs? Are most ESCs available to build quadcopters PPM Controlled?
<p>The input to the autopilot is a 50 Hz PWM signal with pulse width varying from 1000 µs to 2000 µs (this is with the APM; it should be the same with the KK2). In the case of an RC transmitter, this signal is modulated on a 2.4 GHz channel and the PWM values are mapped to the output of a potentiometer on the remote. Now, you can keep an Arduino on your quadrotor with an XBee, and generate the same signal using the 'Servo' library (it also generates a signal of the same frequency). And on your PC you can write a Python script to transmit PWM values through an XBee connected on your serial port.</p>
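<p>A sketch of the Arduino side of that setup (the XBee is assumed to sit on the hardware serial port, and the one-integer-pulse-width-per-line protocol is only an example of my own, not something defined by the autopilot):</p> <pre><code>#include &lt;Servo.h&gt;

Servo channelOut;                 // feeds one input channel of the flight controller

void setup() {
  Serial.begin(57600);            // match your XBee's configured baud rate
  channelOut.attach(9);           // Servo library outputs ~50 Hz RC pulses
  channelOut.writeMicroseconds(1000);
}

void loop() {
  if (Serial.available()) {
    int us = Serial.parseInt();   // pulse width sent by the PC-side script
    if (us &gt;= 1000 &amp;&amp; us &lt;= 2000) channelOut.writeMicroseconds(us);
  }
}
</code></pre>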
8704
2015-12-18T23:22:08.967
|pid|quadcopter|
<p>I want to make a PC-controlled quadrotor. All the tutorials/projects are made with an RC receiver. I want to use an Arduino or XBee instead of the RC receiver for PC control of the quadrotor. How can I do this?</p> <p>Note: I have Arduino, BeagleBone, XBee, HC-05, KK2 and MultiWii parts.</p>
Using another device instead of RC transmitter
<p>I got a working solution using both a KD-tree and a Graph (iGraph) provided by Python. Below is the working code:</p> <pre><code>def makeLine(distance, q_near, index, xrand, nodes, graph, elem): num = int((distance)/0.01) for i in range(1, num+1): qnext = (xrand - q_near)/distance * 0.01 * i + q_near #check for collision at qnext, if no collision detected: nodes = numpy.vstack([nodes, qnext]) graph.add_vertex(len(nodes)-1) if i == 1 and elem == 1: graph.add_edge(index, len(nodes)-1) else: graph.add_edge(len(nodes)-2, len(nodes)-1) #else if there is collision, return 0, nodes, ((xrand - q_near)/distance*0.01*(i-1)+q_near return 1, nodes, qnext, graph def BIRRT(start, goal): gStart = Graph() gGoal = Graph() gStart.add_vertex(0) gGoal.add_vertex(0) startNode = start goalNode = goal limits = numpy.array([[-2.461, .890],[-2.147,1.047],[-3.028,3.028],[-.052,2.618],[-3.059,3.059],[-1.571,2.094],[-3.059,3.059]]) for i in range(1, 10000): xrand = numpy.array([]) for k in range(0, len(limits)): xrand = numpy.append(xrand, random.uniform(limits[k,:][0], limits[k,:][1])) kdTree = scipy.spatial.KDTree(startNode[:, 0:7]) distance, index = kdTree.query(xrand) q_near = kdTree.data[index] index = numpy.where(numpy.all(startNode==q_near, axis=1))[0][0] success, startNode, qFinal, gStart = makeLine(distance, q_near, index, xrand, startNode, gStart, 1) kdTree2 = scipy.spatial.KDTree(goalNode[:, 0:7]) distance2, index2 = kdTree2.query(qFinal) q_near2 = kdTree2.data[index2] index = numpy.where(numpy.all(goalNode==q_near2, axis=1))[0][0] success, startNode, qFinal2, gStart = makeLine(distance2, qFinal, q_near2, startNode, gStart, 0) if success: NodePath = numpy.array(start) graphPath = [] graphPath.append(gStart.get_all_shortest_paths(0, numpy.where(numpy.all(startNode==qFinal2, axis=1))[0][0])[0]) k = len(graphPath) graphPath.append(gGoal.get_all_shortest_paths(i, 0)[0]) for i in range(0, len(graphPath)-1): if i &lt; k: NodePath = numpy.vstack((NodePath, startNode[graphPath[i]])) else: NodePath = numpy.vstack((Nodepath, goalNode[graphPath[i]])) NodePath = numpy.vstack((NodePath, goal)) return 1, NodePath xrand = numpy.array([]) for k in range(0, len(limits)): xrand = numpy.append(xrand, random.uniform(limits[k,:][0], limits[k,:][1])) kdTree = scipy.spatial.KDTree(goalNode[:, 0:7]) distance, index = kdTree.query(xrand) q_near = kdTree.data[index] index = numpy.where(numpy.all(goalNode==q_near, axis=1))[0][0] success, goalNode, qFinal, gGoal = makeLine(distance, q_near, index, xrand, goalNode, gGoal, 1) kdTree2 = scipy.spatial.KDTree(startNode[:, 0:7]) distance2, index2 = kdTree2.query(qFinal) q_near2 = kdTree2.data[index2] index = numpy.where(numpy.all(startNode==q_near2, axis=1))[0][0] success, goalNode, qFinal2, gGoal = makeLine(distance2, qFinal, index, q_near2, goalNode, gGoal, 0) if success: NodePath = numpy.array(start) graphPath = [] graphPath.append(gStart.get_all_shortest_paths(0, i)[0]) k = len(graphPath) graphPath.append(gGoal.get_all_shortest_paths(numpy.where(numpy.all(goalNode==qFinal2, axis=1))[0][0], 0)[0]) for i in range(0, len(graphPath)-1): if i &lt; k: NodePath = numpy.vstack(NodePath, startNode[graphPath[i]]) else: NodePath = numpy.vstack(NodePath, goalNode[graphPath[i]]) NodePath = numpy.vstack((NodePath, goal)) return 1, NodePath return 0 </code></pre> <p>It's pretty messy, yes but it works for now. Now I need to figure out how to determine if a joint angle configuration will cause a collision in Gazebo...</p>
8709
2015-12-20T20:47:06.753
|inverse-kinematics|motion-planning|planning|rrt|
<p>I've kind of finished implementing a BiRRT for a 7 DOF arm, using a KD-tree from numpy.spatial in order to get nearest queries. A picture is below:</p> <p><a href="https://i.stack.imgur.com/QA2Qj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QA2Qj.png" alt="BIRRT pseudocode"></a></p> <p>I'm currently having trouble with the fact that it is impossible to retrieve a path from the start node to a particular node using a KD-tree, and while I do have an array of of all the nodes, and there are edges that can be calculated between subsets of the array, but the edges are not in any useful order. Can anyone give me some tips on how I'd retrieve a path from the starting node in the first array, and the ending node in the second array? Are there any useful data structures that would let me do this? Below is my code</p> <pre><code>def makeLine(distance, q_near, xrand, nodes): num = int((distance)/0.01) for i in range(1, num+1): qnext = (xrand - q_near)/distance * 0.01 * i + q_near #check for collision at qnext, if no collision detected: nodes = numpy.append(nodes, qnext) #else if there is collision, return 0, nodes, ((xrand - q_near)/distance*0.01*(i-1)+q_near return 1, nodes, qnext def BIRRT(start, goal): startNode = numpy.array([start]) goalNode = numpy.array([goal]) limits = numpy.array([[-2.461, .890],[-2.147,1.047],[-3.028,3.028],[-.052,2.618],[-3.059,3.059],[-1.571,2.094],[-3.059,3.059]]) for i in range(1, 10000): xrand = numpy.array([]) for k in range(0, len(limits)): xrand = numpy.append(xrand, random.uniform(limits[k,:][0], limits[k,:][1])) kdTree = scipy.spatial.KDTree(startNode[:, 0:7]) distance, index = kdTree.query(xrand) q_near = kdTree.data[index] success, startNode, qFinal = makeLine(distance, q_near, xrand, startNode) kdTree2 = scipy.spatial.KDTree(goalNode[:, 0:7]) distance2, index2 = kdTree2.query(qFinal) q_near2 = kdTree2.data[index2] success, startNode, qFinal2 = makeLine(distance2, qFinal, q_near2, startNode) if success: return 1, startNode, goalNode, 1, qFinal, qFinal2 xrand = numpy.array([]) for k in range(0, len(limits)): xrand = numpy.append(xrand, random.uniform(limits[k,:][0], limits[k,:][1])) kdTree = scipy.spatial.KDTree(goalNode[:, 0:7]) distance, index = kdTree.query(xrand) q_near = kdTree.data[index] success, goalNode, qFinal = makeLine(distance, q_near, xrand, goalNode) kdTree2 = scipy.spatial.KDTree(startNode[:, 0:7]) distance2, index2 = kdTree2.query(qFinal) q_near2 = kdTree2.data[index2] success, goalNode, qFinal2 = makeLine(distance2, qFinal, q_near2, goalNode) if success: return 1, startNode, goalNode, 2, qFinal, qFinal2 return 0 </code></pre>
BiRRT: Getting path from an array of 7 DOF angle configurations
<p>Your manipulator is almost identical to the Phantom Omni in the below picture,</p> <p><a href="https://i.stack.imgur.com/IY2N0.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IY2N0.jpg" alt="enter image description here" /></a></p> <p>In this paper <a href="https://ieeexplore.ieee.org/document/6318365" rel="nofollow noreferrer">Teleoperation with inverse dynamics control for PHANToM Omni haptic device</a>, the Kinematics and Dynamics of the device are provided.</p> <p>For fun, I've simulated the model in the aforementioned paper. I've chosen the PD controller to carry out the simulation. For the desired trajectories, I've chosen the following</p> <p><span class="math-container">$$ \theta^{d}_{1}(t) = 0.1\sin(\pi t) \\ \theta^{d}_{2}(t) = 0.1\cos(\pi t) \\ \theta^{d}_{2}(t) = 0.1\cos(\pi t) \\ $$</span></p> <p>The controller input is</p> <p><span class="math-container">$$ u = M(q)*( \ddot{q}^{d} + K_{d} \dot{e}(t) + K_{p}e(t) ) + V(q,\dot{q}) \dot{q} + N $$</span></p> <p>The result is</p> <p><a href="https://i.stack.imgur.com/nXgZ0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nXgZ0.png" alt="enter image description here" /></a></p> <p>If you have some uncertianties in your model, you should choose PID controller instead of the PD controller.</p>
8717
2015-12-21T21:32:03.710
|robotic-arm|
<p><a href="https://i.stack.imgur.com/IxIrc.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IxIrc.jpg" alt="enter image description here"></a></p> <p>I have got a robot that exactly looks like as shown in the figure above. I have worked out the inverse kinematics analytical solution without the base rotation (considering 2 dof alone) but I am not able to find the analytical solution including the base(3 dof). How do I find the anlytical solution for this robot ?? </p>
3 degrees of freedom analytical solution
<p>in my opinion one possibility on how the robot could find people with fire is by using image processing. You can obtain a real time Infra red image of the outside world and then check for levels of heat measured. Using recognition alogirthms the robot could decide whether a particular figure is a human or not and if it is a human then by checking the heat generated it could decide whether the person is in danger or not. The source of heat could be the main setpoint that the robot wants to reach.</p>
8720
2015-12-22T11:26:51.623
|algorithm|machine-learning|
<p>I am working on proposal of autonomous fire fight robots but I'm little bit confused about its sensor and algorithms. My friend suggested there are some path finding <code>AI</code> algorithms like <code>BFS</code>, <code>DFS</code>, <code>A*</code>, and <code>Dijkstra's Algorithm</code> which are used in robots, but I didn't believe it. </p> <p>I want to ask: Are these algorithms used in real world robots or some other genetic algorithms? How does a robot discover path to detect, and differentiate, a human from fire? I only want some explanation that gives knowledge.</p>
Which algorithms are used in autonomous robot
<p>Yes this is entirely possible, and as FuaZe said, it's called a bootloader.</p> <p>You essentially have two programs on your chip, each with their own memory area; the bootloader and the application. Preferably, the bootloader area is write-protected to make sure you don't accidentally destroy it.</p> <p>With the bootloader, you can use any algorithm you want. If you have enough space, the bootloader could be large enough to communicate with the outside world, get the hex file and program the application. If not, the older application can do the communication, write the hex file somewhere (external SD card for example as you said), restart to enter the bootloader and then it would read the hex file and write to flash.</p> <p>The bootloader is naturally the first program to run on restart, and it usually either has a timeout, where if there is no communication it would jump to the application, waits for an "exit bootloader" command, or with your example, checks if there is something to bootload and if not immediately jumps to the application.</p> <hr> <p>Note that with the bootloader, you need to be careful! You need to think about all cases where there could be power cuts in the middle of programming, or corruption of the program during write etc. In short, you need to make it bullet proof, or you may find yourself in a situation where your microcontroller is bricked. This is similar to how windows 10 bricks computers.</p> <p>If your bootloader communicates directly with the outside world to download the new firmware, you have the lowest risks. You just need to protect the bootloader itself and you should be able to recover from errors.</p> <p>If your bootloader reads the program from somewhere on the board, where the firmware is written by the application itself, you need to be sure that the application is always capable of writing new firmware to that somewhere on the board. This may not always be the case, for example because there could be bugs among other reasons. In such a case, I suggest having a read-only firmware on the board which contains the original well-tested application. A button for example could tell the bootloader that you have messed up and would like to have the original application bootloaded.</p> <p>The case with your external SD card fits better in the first case, since you can always recover with removing the SD card, writing the correct firmware on it with a computer and plugging it for bootload.</p>
8722
2015-12-22T11:51:59.883
|microcontroller|
<p>Can a micro controller flash itself? What i mean to say is, I have an STM32F103RG with 1Mb Flash Size. I have a UART Communication modem connected to it. Can i send a firmware (.HEX or .BIN) to the microcontroller via the radio verify checksums, on sucess the microcontroller saves the file into a SD Card ( via SPI ) and then restarts itself and start flashing itself reading from the file ( in the sD card )?</p> <p>Can something like this be done or an external MCU is required to carry out the flashing?</p> <p>The purpose is the microcontroller and radio will be sitting at a remote location and i need a way to change the microcontroller's firmware by sending it a firmware update file remotely. </p>
Microcontroller flashing itself
<p>The answer to what capabilities the quadcopter to have is: link loss algorithms and navigation capacity.</p> <p>Link loss procedure for a robot could be:</p> <ol> <li><p>determine that there's a link loss (no commands for 1 second?)</p></li> <li><p>try to aquire link back, for a while (a minute?) (increase altitude while circling a calm radius)</p></li> <li><p>increase altitude to the Safe to Navigate altitude (which is mission plan dependent, and maybe configured before each flight).</p></li> <li><p>navigate to Lat-Lon of the Home Point (using intermediate points or direct navigation, if no intermediate waypoints exist).</p></li> <li><p>slowly descend to the ground after some time (depends on the endurance capability of the vehicle); doing some alerting actions (playing music, waving around, displaying lights on and off) for any possible human/lives on ground would be cool.</p></li> </ol> <p>Important: Home Point must be chosen away from people, cars, etc. And it should also be relatively easy to retrieve the vehicle back.</p> <p>This procedure must be known by crew and any parties that are around the flight zone (other modelers, aviation authorities, friends, police, etc), and Home Point (and intermediate waypoints) must be set up / controlled before each takeoff.</p>
8726
2015-12-22T18:52:05.353
|arduino|quadcopter|sensors|localization|gps|
<p>What should a quadcopter have, or have access to, in order to make this 'return home' feature work? Is GPS enough? What is the approach needed to make this happen?</p> <p>I used a Arduino Mega 2560 with IMU to stable my quadcopter.</p>
How to have a 'Auto Go Home' feature, like the DJI Phantom 3, on a project built quadcopter?
<p>If the pinion will be close to the gearbox plate, use a washer or <a href="https://en.wikipedia.org/wiki/Thrust_bearing" rel="nofollow noreferrer">thrust bearing</a> between the pinion and plate. The washer can be made from teflon or polyethylene sheet, or you could use thin steel washers. <a href="https://en.wikipedia.org/wiki/Shaft_collar" rel="nofollow noreferrer">Shaft collars</a> also can be used to keep the pinion from shifting in and out, but a pinion with a setscrew (as pictured below) would be more compact. An <a href="http://www.aliexpress.com/item/N20-DC6V-100RPM-Gear-Motor-Miniature-High-Torque-Gear-Box-Motor/32337727477.html?spm=2114.01020208.3.64.a8rHF5&amp;ws_ab_test=searchweb201556_2,searchweb201644_4_79_78_77_82_80_62_81,searchweb201560_8" rel="nofollow noreferrer">N20 gear motor</a> apparently has a 3 mm diameter shaft; one of the shaft collars below is 3 mm. The pictured thrust bearing is 2.5 mm bore.</p> <p><a href="https://i.stack.imgur.com/VG8zd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VG8zd.png" alt="enter image description here"></a></p>
8728
2015-12-23T07:22:18.730
|motor|
<p>I'm hoping to use a DC motor to drive a cog bar horizontally along a track. AFAIK, I'd need to install a (plastic) cog on the motor shaft, which itself grips on the (plastic) cog bar. Does anyone know how to prevent the cog from shifting on the shaft? The shaft is 10mm long and I'd like to make sure the cog cog sits at 5mm, where the cog bar is.</p> <p>Any help will be appreciated.</p> <p>Thanks</p>
DC motor shaft and gear installation
<p>The Roomba expects binary data. When you type in gtkterm, you are sending <a href="http://www.asciitable.com/" rel="nofollow noreferrer">ASCII</a> data. (Unless there is a feature of gtkterm i don't know about). For example when you type "128" into the terminal, you are actually sending 3 bytes of data: [49 50 56]. And because many commands of the <a href="http://www.irobot.com/~/media/MainSite/PDFs/About/STEM/Create/create_2_Open_Interface_Spec.pdf?la=en" rel="nofollow noreferrer">OI Spec</a> are outside the typeable ASCII range, using the terminal in this manner won't work.</p> <p>You should use a programming language such as <a href="https://robotics.stackexchange.com/questions/7158/c-and-create2">C</a>, <a href="https://robotics.stackexchange.com/questions/7018/irobot-create-2-c-connection">C#</a>, <a href="https://robotics.stackexchange.com/questions/6831/where-can-i-get-openinterface-py">Python</a>, etc to send the data packet. </p>
8731
2015-12-23T11:39:36.453
|irobot-create|roomba|linux|
<p>I have a iRobot Create model 4400 and I need to send commands to the open interface through Ubuntu. I'm using gtkterm at 57600 baud but when I press play button, it only drives around itself. I have tried to send commands as raw data and as hexadecimal data but it doesn't work.</p> <p>What am I doing wrong?</p>
Sending commands from Ubuntu
<p>Apparently a magnetometer is useless in indoor environments like man made buildings. I cite from this paper <a href="http://plan.geomatics.ucalgary.ca/papers/afzal%20et%20al_navigation_2011c.pdf" rel="nofollow">Multi-Magnetometer Based Perturbation Mitigation for Indoor Orientation Estimation</a></p> <blockquote> <p>Nevertheless, the success of these sensors for orientation estimation is conditioned by their capacity to sense Earth’s magnetic field in environments full of magnetic anomalies like urban canyons and indoors. These artificial fields contaminate Earth’s magnetic field measurements, making orientation estimation very difficult in heavily perturbed areas.</p> </blockquote>
8732
2015-12-23T12:47:18.057
|mobile-robot|navigation|magnetometer|
<p>I bought a 3-axis magnetometer (Similar to <a href="https://www.adafruit.com/products/1746" rel="nofollow noreferrer">this one</a> ) And plugged into an Arduino in order to read the heading value. I mounted it on my robot and I drove with the robot for around 30 meters, turned 180 degrees and drove back to my starting position. I plotted the heading value and it shows inconsistent values. The 180 degrees turn started at sec 55, the rest is driving in one direction using a joystick and following a wall as reference so small deviations are expected but not that big as in the image.</p> <p><a href="https://i.stack.imgur.com/wWHaz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wWHaz.png" alt="enter image description here"></a></p> <p>When the robot is turning in-place, there is no such problem and the heading variation follows the rotation of the robot. The robot (Turtlebot) is a little bit shaky such that the magnetometer doesn't always have the x and y axes parallel to the floor but I don't think few degrees of offset can cause such a huge difference. I calculate the heading as follows:</p> <blockquote> <p>heading = atan2(y field intensity, x field intensity)</p> </blockquote> <p>Why does this happen? Could it be form some metals or electric wires under the floor? Can you suggest a more robust method/sensor for estimating the heading in indoor environments?</p> <p>EDIT:</p> <p>I drove the same path again and the pattern similarity is making it even weirder <a href="https://i.stack.imgur.com/G5eeT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G5eeT.png" alt="enter image description here"></a></p>
Weird magnetometer values
<p>Do you mean a threaded shaft <a href="http://m.aliexpress.com/item/32572201892.html" rel="nofollow">like this</a> or <a href="http://m.banggood.com/N20-DC-6V-55RPM-Gear-Motor-M3-20mm-Threaded-Shaft-Reduction-Gear-Motor-p-1010696.html" rel="nofollow">this</a>?</p> <p>If you wanted to use the folded reduction gear set you linked but have a threaded output shaft then you could either try to get someone to make you a shaft ($$$) or you could buy one of the motors I linked to and swap the shaft out. This assumes that the gear attached to the shaft is the same size. </p> <p>Regarding how - look around the shaft where it exits the reduction gear set. On the motors I linked there are two screws, which should be the same on yours. Take a lot of pictures before you remove them, remove them with the whole motor over a plate or pan, and expect all of the gears to fall out when you remove the screws. </p> <p>Then it's just a matter of pulling the old shaft out, putting the new one in, and trying to get all the gears in place. For that last step, try putting some extra grease on the gear shafts. The viscosity of the grease will act kind of like glue to hold them in place until you get the end plates reinstalled. </p>
8742
2015-12-24T06:33:36.110
|motor|
<p>I'm looking at the <a href="http://g01.a.alicdn.com/kf/HTB1i3odKpXXXXcgXpXXq6xXFXXXq/N20-DC-6V-74rpm-Micro-gear-Motor-with-Metal-Gear-Box-Low-speed-Motor-Flip-type.jpg" rel="nofollow">N20 DC motor</a> which is fairly popular. Does anyone know if the shaft could be swapped out for a threaded shaft?</p>
Could a motor shaft be swapped for a threaded shaft?
<p>I Would vote no.</p> <p>Building a robotic arm yourself is quite difficult, but like others said, is possible. However, this will mean that you spend a lot of time building this and not focussing on your goal of learning about neural networks.</p> <p>If you want to start with neural networks, I would vote to start with image recognition, or basic numerical problem solving, all of which can be done in software, without the need for mechanics.</p> <p>If you want to test a neural network on a robot, I would vote to take an existing robot platform (can be an arm, or a rolling platform, or anything else) and load your neural network on the controller.</p> <p>I think this way, you will learn much more in less time. But if you want to learn how to build an arm, by all means, go ahead!</p>
8749
2015-12-25T19:19:21.573
|control|
<p>I am thinking about building a small robotic arm with 4 small servo motors and an arduino uno to apply basic neural networks concepts.</p> <p>Is it a good idea to use a hand made robotic arm to learn more the power of neural networks?</p> <p>Thank your for your time and Merry Christmas</p>
Bulding a robot arm for neural networks understanding
<p>It sounds like you're using the camera frames to get a PnP solution, or something along those lines.</p> <ol> <li><p>A linear Kalman filter will usually work OK for most purposes if you're using roll/pitch/yaw and pose measurements coming from the camera algorithm. This is always the first port of call because it's much easier than EKF/UKF/etc. If this does not give adequate results then you should consider more complex filters.</p></li> <li><p>The "model" of the system is generally at least two models: a process model (how the system state evolves in time) and a measurement model (how the camera makes measurements based on the system state). Aside: "Pose estimation" is simply finding a pose, what the *KF does is <em>state filtering</em>. A widely used model ( and what seems to be the model you have pasted ) is the constant-velocity model; which assumes that between camera measurements the camera is a particle travelling with constant linear and angular velocity.</p></li> <li><p>The KF has no in-built way to discriminate if there is an uncharacteristically large spike in measurements. The term for this used in KF papers is "update rejection" and a popular empirical method thresholding each update based on Mahalanobis distance (see <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.128.9319&amp;rep=rep1&amp;type=pdf" rel="noreferrer">1</a> section IIIE). Before implementing this, simply increase the measurement noise in your filter and see if it gives you acceptable results. If you see a high correlation between unacceptable state error and the "outlier" then you will need update rejection.</p></li> </ol> <p>A very good read is: <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.128.9319&amp;rep=rep1&amp;type=pdf" rel="noreferrer">A Kalman Filter-based Algorithm for IMU-Camera Calibration</a>.</p> <p>Aside: You can also do things like after every measurement update, saturate the state vector's velocities or positions (e.g. make sure it's not travelling 100m/s when it can only go 1m/s). Also be aware if you use Euler angles to represent your orientation then you will undoubtebly run in to problems if you operate near +/- 90 degree Pitch angles. Also problematic is if operating near the Yaw angle limits (angle wrap), typically only if you are controlling something based off the Yaw angle.</p>
8751
2015-12-26T06:52:24.823
|kalman-filter|pose|
<p>I am currently in the process of writing a pose estimation algorithm using image data. I receive images at 30 fps, and for every image, my program computes the x,y,z and roll, pitch, yaw of the camera with respect to a certain origin. This is by no means very accurate, there are obvious problems such as too much exposure in the image, not enough feature points in the image, etc., and the positions go haywire every once in a while; so I want to write a Kalman filter that can take care of this part. </p> <p>I have read through the basics of KF, EKF etc. and then I was reading through an OpenCV <a href="http://docs.opencv.org/master/dc/d2c/tutorial_real_time_pose.html#gsc.tab=0" rel="nofollow">tutorial</a> that has an implementation of a Kalman Filter inside an algorithm for the pose estimation of an object. While this matches my use case very well, I don't understand why they are using a linear Kalman Filter while explicitly specifying parameters like (dt*dt) in the state transition matrix. For reference, the state transition matrix they are considering is </p> <pre><code> /* DYNAMIC MODEL */ // [1 0 0 dt 0 0 dt2 0 0 0 0 0 0 0 0 0 0 0] // [0 1 0 0 dt 0 0 dt2 0 0 0 0 0 0 0 0 0 0] // [0 0 1 0 0 dt 0 0 dt2 0 0 0 0 0 0 0 0 0] // [0 0 0 1 0 0 dt 0 0 0 0 0 0 0 0 0 0 0] // [0 0 0 0 1 0 0 dt 0 0 0 0 0 0 0 0 0 0] // [0 0 0 0 0 1 0 0 dt 0 0 0 0 0 0 0 0 0] // [0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0] // [0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0] // [0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0] // [0 0 0 0 0 0 0 0 0 1 0 0 dt 0 0 dt2 0 0] // [0 0 0 0 0 0 0 0 0 0 1 0 0 dt 0 0 dt2 0] // [0 0 0 0 0 0 0 0 0 0 0 1 0 0 dt 0 0 dt2] / [0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 dt 0 0] // [0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 dt 0] // [0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 dt] // [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0] // [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0] // [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1] </code></pre> <p>I'm a little confused, so my main question can be broken down into three parts:</p> <ol> <li>Would a linear Kalman Filter suffice for a 6DOF pose estimation filtering? Or should I go for an EKF?</li> <li>How do I come up with the "model" of the system? The camera is not really obeying any trajectory, the whole point of the pose estimation is to track the position and rotation even through noisy movements. I don't understand how they came up with that matrix.</li> <li>Can the Kalman Filter understand that, for instance, if the pose estimation says my camera has moved half a meter between one frame and other, that's plain wrong, because at 1/30th of a second, there's no way that could happen?</li> </ol> <p>Thank you!</p>
How do I go about implementing a Kalman Filter for a pose estimation algorithm?
<p>Use the basic formula's:</p> <p>$x_t = x_0+v_x*t+\frac{1}{2}*a_x*t^2$</p> <p>$y_t = y_0+v_y*t+\frac{1}{2}*a_y*t^2$</p> <p>This way you can calculate all your $x$ and $y$ positions as long as your time steps are small enough so $v$ and $a$ don't change too much during each time step.</p> <hr> <p>Edit:</p> <p>Since the magnetic heading ($\theta$) is known from the sensor, the values for $v_x$ and $v_y$ can be calculated:</p> <p>$v_x = v_1*cos(\theta) +v_2*sin(\theta)$</p> <p>$v_y = v_2*cos(\theta) -v_1*sin(\theta)$</p> <p>In which $v_1$ is x-direction of your robot frame, $v_2$ is y-direction of your robot frame. $v_x$ is x-direction in world frame, $v_y$ is y-direction in world frame.</p> <hr> <p>Edit 2:</p> <p>If your sensor outputs angular velocity instead of magnetic heading, you can calculate the heading like it is an angular position, so in the same way you would calculate position:</p> <p>$\theta_t = \theta_0 + \dot{\theta}*t+\frac{1}{2}*\ddot{\theta}*t^2$</p> <p>Here $\theta_0$ is your heading at the start (this should be known). $\dot{\theta}$ is your angular velocity and $\ddot{\theta}$ is your angular acceleration.</p>
8758
2015-12-28T05:26:35.417
|mobile-robot|
<p>Good day,</p> <p>I have a robot with an IMU that tells Yaw Rate, and Magnetic heading. It also tells Xvelocity and YVelocity at that instance of the vehicle, on the vehicle frame. (So irrespective of heading, if the robot moved forward, yvelocity would change for example)</p> <p>Assuming my robot starts at position (0,0) and Heading based on the Magnetic heading, I need to calculate the next position of the robot based on some world frame. How can I do this?</p>
2D Robot Motion
<p>You are absolutely correct that Dijkstra's Shortest Path Algorithm can tell you the correct path for the robot to follow. The problem seems to be that you cannot tell where the robot is, and what actions to make the robot take to get to the next node. </p> <p>There's no "right" answer here, but I can offer some guidance about how I'd do it.</p> <h3>Where is the robot</h3> <ul> <li>If the robot can observe the configuration of the outgoing edges, then he can narrow down his location to a set of possible nodes. For example, if there are two outgoing edges, he can only be in 2,3,5,6,8,9, 11, or 12. Similarly for 3, and 4 outgoing is uniquely the center</li> <li>Since you say you have to deal with the robot's orientation when moving, it is probably safe to assume the robot can measure its orientation or at least discover it relative to the current configuration of edges. If the robot has some way of knowing his orientation, then the number of edges and their cardinal direction would help even more. For example, with two edges <em>facing north and south</em>, then we know the robot is at 3,5,9, or 11.</li> <li>Furthermore, if the robot knew the history of possible locations, then we can also incorporate that information. If we knew the robot had two outgoing edges facing east and west, then he moved to the westmost edge and now had two edges facing north and south, then we know the robot is now at 3 and came from 2. What's cool here, is the robot did not know from just the edges, it was the <em>action of moving</em> that caused it to figure out where it was.</li> </ul> <p>I might do that as follows:</p> <ul> <li>Keep a separate list of all nodes and their edges. Assume all nodes are "unmarked"</li> <li>At the start <ul> <li>Observe the number of outgoing edges at the current node</li> <li>Mark all nodes with a different number of edges</li> <li>Observe the outgoing edge orientations, and mark all nodes which have different orientation (i.e., if we notice an edge going north, mark all nodes which don't have an edge going north).</li> <li>At this point, we know the robot is only in an unmarked node</li> <li>Travel along an arbitrary edge (but not back to start)</li> <li>Check the new outgoing edges. Mark all nodes which have different outgoing edges. </li> <li>Additionally, mark all nodes which have the same edges but are not connected to an unmarked node</li> </ul></li> </ul> <p>Repeat this process until only one unmarked node remains. Without orientation, this will never happen though (for example, 4 and 10 are identical up to orientation)</p> <p>If you have no orientation information, and no way to observe outgoing edges, then history is all you have. If you know what node you started at, then by all means, you must come up with a complete sequence of actions to get the robot to the desired node. Here's how we might do that.</p> <h3>Getting there</h3> <p>Let's assume the robot starts at 1 and has to get to 21. I'd program a small subroutine that did the following.</p> <ul> <li>Call dijkstra to get the shortest path.</li> <li>Until reached: <ul> <li>Turn the robot toward the next node</li> <li>take the edge</li> <li>Update the robot's current node</li> <li>Update the next node from the list in Dijkstra's output</li> </ul></li> </ul> <p>Using this, the robot always knows his current node and next node. We only assumed that we knew the starting orientation. </p>
8761
2015-12-28T15:13:51.943
|mobile-robot|automatic|dynamic-programming|
<p>I have to program an autonomous robot to traverse through a grid given in the following figure.<br/> But the main problem is that the nodes to visit is not known beforehand, it will be received by the bot in real time.<br/> E.g.- after reaching the node 19, the bot now has to go to node 6. The shortest path(19-17-7-6) can be calculated by Dijkstra algo but i don't know how to make the robot traverse that path.<br/> Any idea ?</p> <p><a href="https://i.stack.imgur.com/y7EBi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y7EBi.png" alt="enter image description here"></a></p> <p>Edit: Sorry for not making the question clear enough.<br/> I am facing the problem in determining the current position and the direction the robot is facing so i can't define the set of commands (turn left/right/forward) to traverse to the next desired node.<br/> I am thinking about an extra array for previously visited nodes and the current node and an extra variable for facing direction.<br/> But for that i will have to define command sets for each node from each and every node.<br/> Any better idea ?</p>
Completely autonomous traversal of a planar graph
<p>of course, it is a roomba create 2. sorry for this.. </p> <p>The last comment helped: the requested variable stands for: the sum of days of a week, the roomba is meant to work.</p> <p>Value Day: 1 Sunday, 2 Monday, 4 Tuesday, 8 Wednesday, 16 Thursday, 32 Friday, 64 Saturday</p> <p>Examples: 127 = every day, 40 = Wednesday and Friday</p> <p>Thank you! :) </p>
8785
2016-01-02T01:41:47.947
|irobot-create|
<p>Just a short question: The iRobot Create 2 Open Interface spec says:</p> <p>Serial sequence: [167] [Days] [Sun Hour] [Sun Minute] [Mon Hour] etc.</p> <p>Can somebody explain to me, what "Days" stands for?</p>
Roomba schedule opcode: 167, byte 1
<p>I agree with @Greenonline 's recommendation regarding LiPo batteries along with his warnings on battery care. It seems you will need a fairly small battery, considering your current requirements (about a 1000-2000mAh 2S LiPo).</p> <p>However, you also need to add 2 5V BECs (5V regulators in RC lingo) to power your circuits; The 7.4 or 11.1V provided by the battery isn't very useful. You can find more on BECs on @Greenonline's <a href="https://gr33nonline.wordpress.com/2015/08/15/power-up/" rel="nofollow">link</a>. Especially the youtube channel RCModelReviews provides good info.</p> <p>I suggest having 2 BEC to avoid servo noise on the RPi power rail, as you mentioned. Servos and motors apply intermittent loads on power supplies. Most of the time (especially on cheap power supplies), this causes severe voltage ripples on the power supply line. If sensitive (from a power supply point of view) microcontrollers or ratiometric sensors are to be used in the system, it's best to attach them on a separate regulator and leave the battery to act as a "buffer" from the servo ripples</p> <p>I'm not familiar with e-bay sellers, but something <a href="http://www.hobbyking.com/hobbyking/store/__18789__5V_5A_UBEC_2_5S_Lipoly_7_2_21v_.html" rel="nofollow">like</a> this would be adequate. If you prefer paying an extra buck than taking chances with Chinese products (I do take chances usually), you can also look at <a href="https://www.adafruit.com/products/1385" rel="nofollow">this product</a>.</p>
8787
2016-01-02T11:08:08.410
|mobile-robot|raspberry-pi|power|battery|walking-robot|
<p>I've built quadruped robot which is using 12 servos (TowerPro SG90 Servo) and Raspberry Pi (model B 1). Right now I'm "feeding" it with 5V 2.5A charger. </p> <p>How can I make it un-tethered? What should I look out for when selecting batteries? Also, I think that I need to separate powering of RPi and servos because power is "jumping" when it moves and that isn't good for RPi. </p> <p>A little video - <a href="https://www.youtube.com/watch?v=o63akKmY07w" rel="nofollow">Testing Walking Algorithm</a></p>
Best power solution for my robot
<p>Supposedly the OpenPilot Ground Control Station (GCS) will allow you to download your existing configuration. In general, you probably want to search for something that will allow you to "back up CC3D configuration" from the UAV.</p>
8804
2016-01-05T10:07:39.000
|quadcopter|
<p>I just received my first hobby-grade quadcopter.</p> <p>It's the Eachine racer 250 and comes preassembled with transmitter and receiver also included.</p> <p>It comes with some kind of CC3D flight board, most people say it's not the original one, but can be configured with the same software.</p> <p>It is actually flying very well right out of the box so I'm not sure if I want to touch the FC config.</p> <p>I'm mostly interested in learning to fly in manual/acro mode, the transmitter seems to have a switch with 3 flight modes, first 2 looks like low/high rates in self-level mode, I expect the third to be the acro mode, but I'm not sure right now, I couldn't test it because of the weather, it could be a third higher rate?.</p> <p>So, is there any way I can look at the actual FC config without changing anything? what software do I need? and are the flight modes actually set on the FC or transmitter so I could be able to see and edit them? </p>
See CC3D actual configuration
<p>I'm not sure why you would want to turn a gear on a shaft without turning the shaft unless it is mechanically linked to another gear. The reason for the brass sleeve is probably for the set screw to have a stronger material than plastic to anchor in. </p>
8807
2016-01-05T16:11:43.460
|motor|mechanism|gearing|
<p>I'm looking at <a href="https://img.alicdn.com/imgextra/i1/650686955/TB2i_qfbpXXXXXPXpXXXXXXXXXX-650686955.jpg" rel="nofollow noreferrer">this setup</a>: </p> <p><a href="https://i.stack.imgur.com/UYPrg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UYPrg.jpg" alt="enter image description here"></a></p> <p>where the POM bevel gears are fitted with some kind of metal (bronze?) tube inside which fits over the shaft. What benefits does this method provide? Is it to allow the shaft to free-spin? The metal fitting wouldn't be able to grip on the shaft - right?</p> <p>Is it supposed to be a canonical approach to fitting a POM gear for (relatively) high-load applications?</p>
POM gears and metal fittings
<p>Your motors are rated for 466W max, which with a 6S battery gives about 20A max current draw per motor (This also tells you your ESC are a good choice, provided that they can handle the 6S voltage).</p> <p>I'd estimate the hover voltage at about 10A, so you'd get a flight duration of t = Capacity(C) / Current(I) = 5.2/40 = 0.13h = 7.8mins.</p> <p>This flight duration sounds a bit small. I'd try something that would get me to 10+minutes, depending on application.</p> <p><strong>That said, the correct way to approach this</strong>, would be to get some real data from your power system, either experimentally or online. The course of action is:</p> <p>(1) Set a bench test with one propeller and motor fixed on a thrust bench or scale.</p> <p>(2) Run your motor to the maximum and record the current $I_{max}$. This will verify the motor and ESC maximum specs.</p> <p>(3) Run your motor to produce thrust equal to 1/4 of the estimated quadrotor weight. This will give you your hover current, $I_H$. (4) Use $I_H$ and your required flight time to calculate the required battery capacity.</p>
8811
2016-01-06T14:57:31.220
|quadcopter|
<p>I am building a Quadcopter and I was wondering if a 5200mAh 30C 22.2V 6S Lipo battery will work with a 40Amp Esc's, MT4108 370 KV Motors, and GEMFAN 1470 Carbon Fiber Props. The over all payload will be about 5-6 pounds.</p>
Will a 5200mAh 30C 22.2V 6S Lipo battery work with
<p>In most cases, ALL of your ground reference and power return pins and wires should be tied together, preferably with a star configuration to avoid ground loops.</p> <p>In a star configuration, the ground (return) wires from all your devices are connected to a single, physical point in your circuit. It is best that this point is close to your battery, preferably a power distribution board (board with multiple parallel power ports) connected at the battery plug.</p> <p>Devices that commonly share ground are microcontrollers, batteries, RC receivers and servo motors.</p>
8833
2016-01-09T07:54:04.823
|mobile-robot|control|power|
<p>I am using arduino mega to run 4 motors via 4 motor controllers. I am using a ps2 controller as the remote control. When the joystick is at rest, the motor should stop.</p> <p>The problem is the motor still moves randomly when the joystick is at rest. Sometimes, the joystick can only produce forward motion on the motor but not backward. Is this the grounding issue or the PS2 remote control issue ir others.. Does the GND from the arduino board have to be connected to the GND from the external battery? </p> <p>How can I troubleshoot this? </p> <p>Thanks.</p>
Erratic motor behavior. Is it due to the faulty remote control or Grounding or something else?
<p>While there <em>are</em> other H-bridges that handle higher current and it might be cost-effective to just buy a higher-amperage pre-packaged ESC (electronic speed controller), you might also consider using higher-voltage motors (<a href="http://www.jameco.com/1/1/48659-ms5n-07500b-r-m-6440c367-24vdc-motor-2475-rpm-dc-direct-drive.html" rel="nofollow">1</a>, <a href="http://www.jameco.com/1/1/2528-gh35gh241640y-00-r-24v-dc-plastic-reversible-gear-head-motor.html" rel="nofollow">2</a>). Adjust the voltage up or down as the H-bridge amperage limit allows. </p> <p>Also consider using a <a href="https://www.google.com/search?hl=en&amp;site=imghp&amp;tbm=isch&amp;source=hp&amp;biw=1404&amp;bih=881&amp;q=slip+clutch&amp;oq=slip+clutch&amp;gs_l=img.3..0l4j0i5i30j0i8i30l2j0i24l3.1240.1240.0.2434.1.1.0.0.0.0.158.158.0j1.1.0....0...1ac..64.img..0.1.157.aReU-8d9CCc" rel="nofollow">slip clutch</a> between the motor and the door operator. It would allow the motor to keep running, instead of stalling, if door motion is blocked. For trying the idea out, you could salvage the clutch or torque-control parts from an old battery-operated screwdriver.</p> <p><em>Edit:</em></p> <p>Regarding use of a brushed motor vs a brushless motor, I don't see this as an important issue unless you are going to build a significant number of door-openers, rather than a one-off. But generally for a given amount of motor power, I'd expect lower cost, higher weight, and shorter life from brushed motors than from brushless motors. </p> <p>A brushless motor would require use of an ESC (<a href="http://www.digikey.com/en/articles/techzone/2013/mar/an-introduction-to-brushless-dc-motor-control" rel="nofollow">3</a>), while a typical brushed DC motor can just be switched on and off. Both types of motors can be controlled by PWM signals.</p> <p>If you use a gear motor, then gear train wear and tear (which doesn't a priori depend on motor type, but may depend on motor speed) may be more of a limiting factor than is motor life.</p>
8844
2016-01-10T03:44:53.250
|motor|h-bridge|
<p>I want to open a door using a DC motor. I've estimated that the required power in the worst case would be around 35-40W (considering a ~80% efficiency). The whole is controlled by a <a href="https://docs.particle.io/datasheets/photon-datasheet/" rel="nofollow">Particle Photon</a>.</p> <p>I was thinking to use a <a href="https://www.sparkfun.com/products/9479" rel="nofollow">L298N</a> to control the output of current to the motor. However, when I looked for powerful enough motors, they would all consume too much current when stalling (> 4A part of the L298N datasheet).</p> <p>Do you have ideas of how to overcome this? Maybe there's another dual-bridge that can handle more current, maybe there exists a DC motor that is ok for a L298N, or maybe I need to have simultaneous DC motors?</p> <p>Edit: this part should be a question by itself. I'll keep it here so that future visitors know what the sub-question was about, but please ignore it from now on if answering.</p> <blockquote> <p>As a sub-question, would it be better to use a brushed or a brushless DC motor?</p> </blockquote>
DC Motor to open a door
<p>This is no question of robotics... Keep it simple (:</p> <p>Google for "Louver Bath Vent": <a href="https://www.google.de/search?q=Louver+Bath+Vent" rel="nofollow">https://www.google.de/search?q=Louver+Bath+Vent</a></p> <p>These kinds of ducts open by the air pressure generated from a ventilator.</p>
8845
2016-01-10T21:08:40.800
|actuator|valve|
<p>My application is composting with worms outdoors inside an a styrofoam cooler. I use a heat lamp and a thermo-electric cooler to maintain the temperature in the bin when the temperature outside is out of bounds for healthy worms. When the temperature outside <em>is</em> in bounds, I'd like to exchange the air in the bin with fresh air from outside, but I don't want to permanently compromise the insulating properties of my bin with lots of air holes. So I'm looking for actuator solutions that would allow me to open/close a window of sorts. I'm considering a solenoid air valve but I don't necessarily need/want an air compressor - a simple fan is sufficient to circulate the air. Any suggestions?</p>
Exchange air and maintain thermal insulation
<p>A standard approach (using opencv solvePnP) is using at least 4 points in the image that define landmarks of a known geometry. You can then get the pose of the camera relative to the object.</p> <p>For example if you had a blue rectangle of which you could detect the corners in the image, and you knew the dimensions of this rectangle, you could work out the relative pose. You must have some way to identify which corner is which (say you had them each painted different colours just in the corners).</p> <p>The relative position between your robot would be a (rotated) error, which you could feed into a PID.</p> <p>See also: <a href="https://stackoverflow.com/questions/16265714/camera-pose-estimation-opencv-pnp">https://stackoverflow.com/questions/16265714/camera-pose-estimation-opencv-pnp</a></p>
8849
2016-01-11T05:00:45.677
|control|pid|quadcopter|tuning|opencv|
<p>I am working on a project where I should perform object tracking using the camera of Parrot AR Drone 2.0. So the main idea is, a drone should be able to identify a specified colour and then follow it by keeping some distance.</p> <p>I am using the <a href="https://github.com/puku0x/cvdrone" rel="noreferrer">cvdrone</a> API to establish communication with the drone. This API provides function:</p> <pre><code>ARDrone::move3D(double vx, double vy, double vz, double vr) </code></pre> <p>which moves the AR.Drone in 3D space and where </p> <ul> <li>vx: X velocity [m/s]</li> <li>vy: Y velocity [m/s]</li> <li>vz: Z velocity [m/s]</li> <li>vr: Rotational speed [rad/s]</li> </ul> <p>I have written an application which does simple image processing on the images obtained from the camera of the drone using OpenCV and finds needed contours of the object to be tracked. See the example below: <a href="https://i.stack.imgur.com/NVDgg.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/NVDgg.jpg" alt="enter image description here"></a></p> <p>Now <strong><em>the part I am struggling</em></strong> is finding the technique using which I should find the <strong><em>velocities</em></strong> to be sent to the <strong><em>move3D</em></strong> function. I have read that common way of doing controlling is by using <strong><em>PID</em></strong> controlling. However, I have read about that and could not get how it could be related to this problem.</p> <p>To summarise, my question is how to move a robot towards an object detected in its camera? How to find coordinates of certain objects from the camera? </p>
Tracking objects from camera; PID controlling; Parrot AR Drone 2
<p><strong>Total RMS Noise</strong></p> <p>Yes, this should be approximately equal to your standard deviation as long as there is no bias.</p> <p><strong>Rate Noise Spectral Density</strong></p> <p>You cannot obtain this from the statistics you list. You will need to take a <a href="https://en.wikipedia.org/wiki/Fast_Fourier_transform" rel="nofollow noreferrer">Fast Fourier Transform</a> (FFT) of your raw data and look at the results in the frequency domain. Look in the data sheet to see at what frequency or frequency band the spectral noise density is specified at and then compare what your FFT says.</p> <p><strong>Low-frequency RMS noise</strong></p> <p>Again, you will need to know (or guess) what frequency cutoff your data sheet is assuming and use the same cutoff. You may need to run your data through a low-pass digital filter then look at the results in the time domain.</p> <p>You can do the above analysis with a tool like MATLAB. If you don't have MATLAB (It's kind of expensive) check out <a href="https://www.rstudio.com/" rel="nofollow noreferrer">R Studio</a> or <a href="https://www.scilab.org/" rel="nofollow noreferrer">Scilab</a>. They are both available for free. Personally I prefer R - It has great statistical analysis tools and there is a lot of open source floating around for data visualization.</p>
8860
2016-01-11T14:57:36.837
|sensors|imu|gyroscope|sensor-fusion|statistics|
<p>I’ve made a datalog from a MPU6050 (IMU: gyroscope and accelerometer) at 500Hz sample rate. Now I want to calculate the characteristics from the gyro to evaluate the sensor. </p> <p>For the gyro I’ve found following values in the datasheet:</p> <p>Total RMS Noise = 0.05 °/s</p> <p>Low-frequency RMS noise = 0.033 °/s </p> <p>Rate Noise Spectral Density = 0.005 °/s/sqrt(Hz)</p> <p>Now I want to ask how I can calculate these values from my dataset?</p> <p>At the moment I’ve the following values from the dataset:</p> <p>Standard deviation = 0.0331 °/s</p> <p>Variance = 0.0011</p> <p>Angular Random Walk (ARW) = 0.003 °/sqrt(s) (From Allan deviation plot)</p> <p>Bias Instability = 0.0012 °/s</p> <p>Is the ARW equal to the Rate Noise Spectral Density mentioned in the datasheet? And also is the RMS Noise from the datasheet equal to the standard deviation? </p> <p>edit: I found following website: <a href="http://www.sensorsmag.com/sensors/acceleration-vibration/noise-measurement-8166" rel="noreferrer">http://www.sensorsmag.com/sensors/acceleration-vibration/noise-measurement-8166</a> There is the statement: "...Because the noise is approximately Gaussian, the standard deviation of the histogram is the RMS noise" So I guess the standard deviation is the RMS noise from the datasheet. But how about the ARW?</p>
Angle Random Walk vs. Rate Noise Density (MPU6050)
<p>It looks like most of your parts have no rotation, but <em>some</em> of them do, so I'm going to guess that you didn't mate your assembly to the origin planes in Solidworks.</p> <p>First, on your base plate, open the Solidworks part file and check that the origin planes run through what you want the origin of the part to be. If they don't and it's a pain to re-draw the part, then go to the "Reference Feature" button and then "Plane" and draw what you want the origin planes to be. SAVE THE PART. </p> <p>Open your assembly, and check if it says "underdefined" at the bottom. Solidworks (annoyingly) puts the first part down fixed, but not mated, to a random position in the assembly space.</p> <p>So, in the parts tree, right click the top part (should have (f) after the name, indicating it's fixed), and click "float". You may have to expand the option list to see this. </p> <p>Now go to Mates, then expand the part list for your base plate, then mate the origin (or reference) planes of your base plate to the origin planes of your assembly space. </p> <p>You may also need to mate the wheels to prevent them from rotating in order to get the "fully defined" message at the bottom.</p> <p>I'm on my phone at the moment; I'll add some figures after I get to work.</p> <p>:EDIT:</p> <p>Here are some pictures OP provided. I've marked up what I believe to be the error. I think that the URDF Global Origin is set to be the same as the Solidworks assembly origin. In this case, for the robot, the robot (global) origin <em>should</em> be the origin of the Base Plate, but I am betting that the Base Plate is not mated to the assembly origin, so this is causing the discrepancy in the simulation visualization. </p> <p><a href="https://i.stack.imgur.com/wjubW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wjubW.png" alt="Solidworks URDF Assembly Origin"></a></p> <p><a href="https://i.stack.imgur.com/iH9pT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iH9pT.png" alt="Solidworks URDF Global Origin"></a></p> <p>:EDIT 2:</p> <p>Here's a picture walk-through of mating a part to the assembly origin. If you need anything clarified, please let me know in a comment, but if you have a new question about something then please just make a new question (they're free!)</p> <p><a href="https://i.stack.imgur.com/QPBvy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QPBvy.png" alt="Solidworks URDF Part Origin"></a></p> <p><a href="https://i.stack.imgur.com/EpPt6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EpPt6.png" alt="Solidworks URDF Under defined Assembly"></a></p> <p><a href="https://i.stack.imgur.com/f9Bpz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f9Bpz.png" alt="Solidworks URDF Part Assembly Mating"></a></p> <p><a href="https://i.stack.imgur.com/bAuGZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bAuGZ.png" alt="Solidworks URDF Fully Defined Assembly"></a></p>
8866
2016-01-12T11:27:14.293
|mobile-robot|ros|navigation|odometry|gazebo|
<p>I have built my differential drive mobile robot in solidworks and converted that to URDF file using soliworks2urdf converter. I successfully launched and robot and simulated with tele-operation node. Since i am intended to use navigation stack in i viewed the transform of the robot in rviz which resulted as below.</p> <p><a href="https://i.stack.imgur.com/2FgIo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2FgIo.png" alt="enter image description here"></a></p> <p>As you can see the base plate is the one which supports the wheels and castors but the tf of base plate is shown away from the actual link and even odom is away from the model. Where have i gone wrong and how to fix this. Refer the URDF of model below.</p> <pre><code> &lt;?xml version="1.0"?&gt; &lt;robot name="JMbot"&gt; &lt;link name="Base_plate"&gt; &lt;inertial&gt; &lt;origin xyz="-0.3317 0.71959 -0.39019" rpy="0 0 0" /&gt; &lt;mass value="0.55378" /&gt; &lt;inertia ixx="0.0061249" ixy="0.00016086" ixz="-8.6651E-18" iyy="0.0041631" iyz="-1.4656E-17" izz="0.010283" /&gt; &lt;/inertial&gt; &lt;visual&gt; &lt;origin xyz="0 0 0" rpy="0 0 0" /&gt; &lt;geometry&gt; &lt;mesh filename="package://jmbot_description/meshes/Base_plate.STL" /&gt; &lt;/geometry&gt; &lt;material name=""&gt; &lt;color rgba="0.74902 0.74902 0.74902 1" /&gt; &lt;/material&gt; &lt;/visual&gt; &lt;collision&gt; &lt;origin xyz="0 0 0" rpy="0 0 0" /&gt; &lt;geometry&gt; &lt;mesh filename="package://jmbot_description/meshes/Base_plate.STL" /&gt; &lt;/geometry&gt; &lt;/collision&gt; &lt;/link&gt; &lt;link name="Wheel_R"&gt; &lt;inertial&gt; &lt;origin xyz="0.010951 1.1102E-16 -1.1102E-16" rpy="0 0 0" /&gt; &lt;mass value="0.45064" /&gt; &lt;inertia ixx="0.00091608" ixy="-1.2355E-19" ixz="1.0715E-18" iyy="0.00053395" iyz="-6.7763E-20" izz="0.00053395" /&gt; &lt;/inertial&gt; &lt;visual&gt; &lt;origin xyz="0 0 0" rpy="0 0 0" /&gt; &lt;geometry&gt; &lt;mesh filename="package://jmbot_description/meshes/Wheel_R.STL" /&gt; &lt;/geometry&gt; &lt;material name=""&gt; &lt;color rgba="0.74902 0.74902 0.74902 1" /&gt; &lt;/material&gt; &lt;/visual&gt; &lt;collision&gt; &lt;origin xyz="0 0 0" rpy="0 0 0" /&gt; &lt;geometry&gt; &lt;mesh filename="package://jmbot_description/meshes/Wheel_R.STL" /&gt; &lt;/geometry&gt; &lt;/collision&gt; &lt;/link&gt; &lt;joint name="Wheel_R" type="continuous"&gt; &lt;origin xyz="-0.14688 0.40756 -0.73464" rpy="-2.7127 -0.081268 -3.1416" /&gt; &lt;parent link="Base_plate" /&gt; &lt;child link="Wheel_R" /&gt; &lt;axis xyz="1 0 0" /&gt; &lt;/joint&gt; &lt;link name="Wheel_L"&gt; &lt;inertial&gt; &lt;origin xyz="-0.039049 2.2204E-16 2.498E-15" rpy="0 0 0" /&gt; &lt;mass value="0.45064" /&gt; &lt;inertia ixx="0.00091608" ixy="-9.6693E-19" ixz="-1.7816E-18" iyy="0.00053395" iyz="1.3553E-19" izz="0.00053395" /&gt; &lt;/inertial&gt; &lt;visual&gt; &lt;origin xyz="0 0 0" rpy="0 0 0" /&gt; &lt;geometry&gt; &lt;mesh filename="package://jmbot_description/meshes/Wheel_L.STL" /&gt; &lt;/geometry&gt; &lt;material name=""&gt; &lt;color rgba="0.74902 0.74902 0.74902 1" /&gt; &lt;/material&gt; &lt;/visual&gt; &lt;collision&gt; &lt;origin xyz="0 0 0" rpy="0 0 0" /&gt; &lt;geometry&gt; &lt;mesh filename="package://jmbot_description/meshes/Wheel_L.STL" /&gt; &lt;/geometry&gt; &lt;/collision&gt; &lt;/link&gt; &lt;joint name="Wheel_L" type="continuous"&gt; &lt;origin xyz="-0.46668 0.40756 -0.70859" rpy="2.512 0.081268 3.4272E-15" /&gt; &lt;parent link="Base_plate" /&gt; &lt;child link="Wheel_L" /&gt; &lt;axis xyz="-1 0 0" /&gt; &lt;/joint&gt; 
&lt;link name="Castor_F"&gt; &lt;inertial&gt; &lt;origin xyz="2.2204E-16 0 0.031164" rpy="0 0 0" /&gt; &lt;mass value="0.056555" /&gt; &lt;inertia ixx="2.4476E-05" ixy="-2.8588E-35" ixz="1.0281E-20" iyy="2.4476E-05" iyz="-1.2617E-20" izz="7.4341E-06" /&gt; &lt;/inertial&gt; &lt;visual&gt; &lt;origin xyz="0 0 0" rpy="0 0 0" /&gt; &lt;geometry&gt; &lt;mesh filename="package://jmbot_description/meshes/Castor_F.STL" /&gt; &lt;/geometry&gt; &lt;material name=""&gt; &lt;color rgba="0.75294 0.75294 0.75294 1" /&gt; &lt;/material&gt; &lt;/visual&gt; &lt;collision&gt; &lt;origin xyz="0 0 0" rpy="0 0 0" /&gt; &lt;geometry&gt; &lt;mesh filename="package://jmbot_description/meshes/Castor_F.STL" /&gt; &lt;/geometry&gt; &lt;/collision&gt; &lt;/link&gt; &lt;joint name="Castor_F" type="continuous"&gt; &lt;origin xyz="-0.31952 0.39256 -0.57008" rpy="-1.5708 1.1481 -1.3614E-16" /&gt; &lt;parent link="Base_plate" /&gt; &lt;child link="Castor_F" /&gt; &lt;axis xyz="0 0 1" /&gt; &lt;/joint&gt; &lt;link name="Castor_R"&gt; &lt;inertial&gt; &lt;origin xyz="-1.1102E-16 0 0.031164" rpy="0 0 0" /&gt; &lt;mass value="0.056555" /&gt; &lt;inertia ixx="2.4476E-05" ixy="0" ixz="-3.9352E-20" iyy="2.4476E-05" iyz="-1.951E-20" izz="7.4341E-06" /&gt; &lt;/inertial&gt; &lt;visual&gt; &lt;origin xyz="0 0 0" rpy="0 0 0" /&gt; &lt;geometry&gt; &lt;mesh filename="package://jmbot_description/meshes/Castor_R.STL" /&gt; &lt;/geometry&gt; &lt;material name=""&gt; &lt;color rgba="0.75294 0.75294 0.75294 1" /&gt; &lt;/material&gt; &lt;/visual&gt; &lt;collision&gt; &lt;origin xyz="0 0 0" rpy="0 0 0" /&gt; &lt;geometry&gt; &lt;mesh filename="package://jmbot_description/meshes/Castor_R.STL" /&gt; &lt;/geometry&gt; &lt;/collision&gt; &lt;/link&gt; &lt;joint name="Castor_R" type="continuous"&gt; &lt;origin xyz="-0.34387 0.39256 -0.86909" rpy="1.5708 -0.93144 3.1416" /&gt; &lt;parent link="Base_plate" /&gt; &lt;child link="Castor_R" /&gt; &lt;axis xyz="0 0 1" /&gt; &lt;/joint&gt; &lt;gazebo&gt; &lt;plugin name="differential_drive_controller" filename="libgazebo_ros_diff_drive.so"&gt; &lt;leftJoint&gt;Wheel_L&lt;/leftJoint&gt; &lt;rightJoint&gt;Wheel_R&lt;/rightJoint&gt; &lt;robotBaseFrame&gt;Base_plate&lt;/robotBaseFrame&gt; &lt;wheelSeparation&gt;0.235&lt;/wheelSeparation&gt; &lt;wheelDiameter&gt;0.12&lt;/wheelDiameter&gt; &lt;publishWheelJointState&gt;true&lt;/publishWheelJointState&gt; &lt;/plugin&gt; &lt;plugin name="joint_state_publisher" filename="libgazebo_ros_joint_state_publisher.so"&gt; &lt;jointName&gt;Castor_F, Castor_R&lt;/jointName&gt; &lt;/plugin&gt; &lt;/gazebo&gt; &lt;/robot&gt; </code></pre>
Tf frame origin is offset from the actual base_link
<p>A possible approach would be to construct a virtual potential field, which is tangent to your desired circle and use it for reference (target) orientation (yaw angle).</p> <p>There is an algorithm for a plane here: Beard, R., &amp; McLain, T. (2012). Small unmanned aircraft: Theory and practice, Ch. 10.2</p>
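<p>To make the idea concrete, here is a minimal sketch (not taken from the book; the function name, gain and geometry below are my own assumptions) of how such a tangent-plus-convergence field can be turned into a reference yaw angle:</p> <pre><code>import numpy as np

def loiter_heading(pos, center, radius, k=1.0):
    """Reference yaw for circling 'center' at 'radius' (counter-clockwise).

    Blends the circle's tangent direction with a radial correction term so
    the vehicle converges onto the circle.  'k' is an assumed gain that
    trades off how aggressively the radial error is corrected.
    """
    d = pos - center                    # vector from circle center to vehicle
    rho = np.linalg.norm(d)             # current distance from the center
    bearing = np.arctan2(d[1], d[0])    # angle of that vector
    # Tangent heading (90 deg ahead of the bearing) plus a term that points
    # inward when rho is larger than the radius and outward when smaller.
    return bearing + np.pi / 2 + np.arctan(k * (rho - radius) / radius)

# Example: a vehicle at (12, 0) circling a 10 m circle around the origin
# should head slightly inward of the pure tangent direction.
psi_ref = loiter_heading(np.array([12.0, 0.0]), np.array([0.0, 0.0]), 10.0)
print(np.degrees(psi_ref))   # ~101 deg: tangent (90 deg) plus inward correction
</code></pre> <p>One way to use it is to track this reference yaw with your existing attitude PID while commanding a constant forward speed along the body axis.</p>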
8877
2016-01-13T23:05:56.890
|quadcopter|pid|multi-rotor|
<p>I've got a project that will require my drone to perform a circle turn while the drone is always facing the tangent of the turning curve, similar to a car performing a frictionless banked turn. Just wondering which method I should use to achieve it; the throttle control can be ignored since I already have a PID on height control.</p> <p>Any suggestions would be appreciated.</p>
Need the uav to perform a circle turn
<p>I would definitely look at a brushed motor when the budget is low. They have a reputation of being less powerful, but the main problem is they have a very high RPM. This means you either have to use very small inefficient propellers or a geared system. Brushed geared mini-drones have pretty good performance when it comes to battery life because the propeller is big compared to the weight. Small, coreless brushed motors are pretty efficient but wear out after some time.</p>
8879
2016-01-14T06:32:39.270
|quadcopter|motor|brushless-motor|
<p>I want to make a drone, but my budget is very low. Brushless motors are very expensive, so I want to use the cheap brushed ones. Can I use them?</p>
Can i use DC Brushed motors for building a drone?
<p>In this case the precision is the most important factor.</p> <p>Industrial robots typically have a precision of 0.1 mm. If you are building one without in-depth machine design and manufacturing experience, you can expect the precision of your robot arm to be considerably worse.</p> <p>Instead of a robot arm I would recommend a portal (gantry) type structure. It is much easier to build from COTS components.</p> <p>If the disks are round (enough) you can use a distance sensor (laser or ultrasound, depending on the material/coating you are trying to measure) at the side of the conveyor belt. As the disk passes the sensor you can see how the measured distance changes; if you know the velocity of your conveyor belt you can calculate the radius and then the center of the disk. It will be a bit tricky if the conveyor belt stops and moves in the middle of a measurement, but passing the disk in front of the sensor and knowing the velocity of the conveyor gives you enough measurements to eliminate noise.</p> <p>Just wait until the center is under your portal tool on the conveyor belt (the distance to the sensor is constant), then move the tool to the center and drill.</p> <p><a href="https://i.stack.imgur.com/nwT6f.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nwT6f.jpg" alt="enter image description here"></a></p>
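<p>As a rough illustration of the calculation (the function name, the constant belt speed and the sensor geometry are assumptions on my part), each range sample taken while the disk edge is in view is a point on a circle in belt coordinates, so a simple algebraic circle fit recovers the radius and the center:</p> <pre><code>import numpy as np

def fit_disk(times, distances, belt_speed):
    """Estimate the disk center and radius from a side-mounted range sensor.

    'times' are the sample timestamps while the disk edge is in view,
    'distances' the corresponding lateral range readings, and 'belt_speed'
    the (assumed constant) conveyor speed.  Each sample is a point on the
    disk's edge, so a Kasa-style algebraic circle fit gives center/radius.
    """
    x = belt_speed * np.asarray(times)    # position of the sample along the belt
    y = np.asarray(distances)             # lateral distance to the disk edge
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + xc**2 + yc**2)
    return (xc, yc), radius
</code></pre> <p>Knowing the belt speed, the fitted x-coordinate of the center tells you when that center will be under the drill of the portal.</p>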
8883
2016-01-14T10:35:57.577
|robotic-arm|mechanism|electronics|movement|
<p>Hello, I am new to the field of robotics but have some knowledge of Raspberry Pi, Arduino and Python. I want to make a robotic arm which can be used to find the centre of any disk. There may be disks of different diameters coming one after another on a conveyor. I need to make a hole at the center of each disk using the robotic arm. How can I do this? What techniques and sensors should I use to implement the mechanical and electronic parts? (I don't want to use a camera and OpenCV.) Thanks in advance.</p> <p><a href="https://i.stack.imgur.com/xwqCh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xwqCh.jpg" alt="enter image description here"></a></p>
How to find center of a disk using robotic arm
<p><strong>Quadrotor Control Problem</strong></p> <p>It depends! Do you have a good mathematical model? Once you have one, you might need to choose a good system identification method. Comparing the model's output with the real system's output should give you enough data to start thinking about the controller design.</p> <p>Beyond this, I can't say precisely what you want or need...</p>
8891
2016-01-14T21:00:04.013
|quadcopter|pid|
<p>I have all the specs from a quadcopter, everything. Would it be possible to figure out the PID gains from those specs?</p>
Figure out PID values from drone specs
<p>It is possible to stabilize a quadcopter using only angle measurements in a single-loop PID. However, it is easier to stabilize a quadcopter using a cascaded PID controller, even though you are tuning more parameters. First you tune the inner-loop rate PID controller using the gyroscope's angular-rate readings (the fast sensor, but it drifts). Then you tune the outer-loop stabilize PID, which takes an angle setpoint and angle measurements obtained by fusing the accelerometer angles with the angles integrated from the gyroscope's angular-velocity readings. I found that this was the easiest way to achieve stable flight, coupled with my current control loop rate of 530 Hz.</p> <p>Other related helpful questions with answers:</p> <ol> <li><p><a href="https://robotics.stackexchange.com/questions/2800/pid-output-does-not-reach-setpoint-precisely-enough/2817#2817">PID output does not reach setpoint precisely enough</a></p></li> <li><p><a href="https://robotics.stackexchange.com/questions/8354/need-help-for-a-quadcopter-pid/8903#8903">Need help for a quadcopter PID</a></p></li> </ol> <p>Resources:</p> <ol> <li><a href="https://www.quora.com/What-is-rate-and-stabilize-PID-in-quadcopter-control" rel="nofollow noreferrer">https://www.quora.com/What-is-rate-and-stabilize-PID-in-quadcopter-control</a></li> </ol>
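<p>For reference, here is a minimal sketch of the cascaded structure described above (this is not the poster's flight code; the gains and the output handling are placeholders):</p> <pre><code>class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Outer (angle) and inner (rate) loops; the gains are purely illustrative.
angle_pid = PID(kp=4.5, ki=0.0, kd=0.0)   # often a plain P term is enough here
rate_pid  = PID(kp=0.7, ki=0.1, kd=0.01)

def stabilize_step(angle_setpoint, angle_measured, gyro_rate, dt):
    # Outer loop: turn the angle error into a desired angular rate.
    rate_setpoint = angle_pid.update(angle_setpoint - angle_measured, dt)
    # Inner loop: turn the rate error into a correction for the motor mixer,
    # using the raw (fast) gyro rate as feedback.
    return rate_pid.update(rate_setpoint - gyro_rate, dt)
</code></pre> <p>Note that the inner-loop setpoint is whatever the outer loop asks for: it only goes to zero once the quadcopter is already at the desired angle.</p>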
8895
2016-01-15T07:10:40.770
|control|quadcopter|pid|raspberry-pi|stability|
<p>Good day,</p> <p>I am a student currently working on an autonomous quadcopter project, specifically the stabilization part as of now. I am using a tuned propeller system and I also already considered the balancing of the quadcopter during component placements. I had been tuning the PID's of my quadcopter for the past 3 1/2 weeks now and the best I've achieved is a constant angle oscillation of the quadcopter by +-10 degrees with 0 degrees as the setpoint/desired angle. I also tried a conservative 7 degrees setpoint with the same results on the pitch axis.</p> <p>As of now my PID code takes in the difference of the angle measurement from the complementary filter <code>( FilteredAngle=(0.98)*(FilteredAngle + GyroAngleVel*dt) + (0.02)*(AccelAngle) )</code> and the desired angle.</p> <p><a href="https://i.stack.imgur.com/QbCmL.png" rel="noreferrer"><img src="https://i.stack.imgur.com/QbCmL.png" alt="enter image description here"></a></p> <p>I have read somewhere that it is IMPOSSIBLE to stabilize the quadcopter utilizing only angle measurements, adding that the angular rate must be also taken into consideration. But I have read a lot of works using only a single pid loop with angle differences (Pitch Yaw and Roll) as the input. </p> <p>In contrast to what was stated above, I have read a comment from this article (<a href="https://www.quora.com/What-is-rate-and-stabilize-PID-in-quadcopter-control" rel="noreferrer">https://www.quora.com/What-is-rate-and-stabilize-PID-in-quadcopter-control</a>) by Edouard Leurent that a Single PID control loop only angle errors and a Cascaded PID loop (Angle and Rate) that utilizes both angle errors and angular velocity errors are equivalent Mathematically.</p> <p>If I were to continue using only the Single PID loop (Angle) method, I would only have to tune 3 parameters (Kp, Ki &amp; Kd).</p> <p>But if I were to change my code to utilize the Cascaded Loop (Angle and Angular Velocity),</p> <ol> <li>Would I have to tune two sets of the 3 parameters (Kp, Ki &amp; Kd for angle and Kp, Ki &amp; Kd for the angular velocity)?</li> <li>Would the cascaded PID control loop give better performance than the single PID control loop?</li> <li>In the Cascaded Loop, is the set point for the angular velocity for stabilized flight also 0 in deg/sec? What if the quadcopter is not yet at its desired angle?</li> </ol> <p>Thank you :)</p>
Quadcopter PID Control: Is it possible to stabilize a quadcopter considering only angle measurements?
<p>The following are mostly based on <a href="http://www.cs.cmu.edu/%7Ekaess/pub/Dellaert17fnt.pdf" rel="nofollow noreferrer">&quot;Factor Graphs for Robot Perception&quot; by Frank Dellaert and Michael Kaess</a>, with additional notes:</p> <p>As a reminder, marginalization is about having a joint density <span class="math-container">$p(x, y)$</span> over two variables <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, and we would like to marginalize out or &quot;eliminate a variable&quot;, lets say <span class="math-container">$y$</span> in this case:</p> <p><span class="math-container">\begin{equation} p(x) = \int_{y} p(x, y) \end{equation}</span></p> <p>resulting in a density <span class="math-container">$p(x)$</span> over the remaining variable <span class="math-container">$x$</span>.</p> <p>Now, if the density was in covariance form with mean <span class="math-container">$\boldsymbol{\mu}$</span> and covariance <span class="math-container">$\mathbf{\Sigma}$</span>, partitioned as follows:</p> <p><span class="math-container">\begin{equation} p(x, y) = \mathcal{N}( % Mean \begin{bmatrix} \boldsymbol\mu_{x} \\ \boldsymbol\mu_{y} \end{bmatrix}, % Covariance \begin{bmatrix} \mathbf\Sigma_{xx}, \mathbf\Sigma_{xy} \\ \mathbf\Sigma_{yx}, \mathbf\Sigma_{yy} \end{bmatrix} ) \end{equation}</span></p> <p>marginalization is simple, as the corresponding sub-block <span class="math-container">$\mathbf{\Sigma}_{xx}$</span> already contains the covariance on <span class="math-container">$x$</span> after marginalizing out <span class="math-container">$y$</span>, i.e.,</p> <p><span class="math-container">\begin{equation} p(x) = \mathcal{N}( % Mean \boldsymbol\mu_{x}, % Covariance \mathbf\Sigma_{xx} ) \end{equation}</span></p> <p>However, in the nonlinear least squares formulation, we don't have the covariance <span class="math-container">$\mathbf{\Sigma}$</span>, we can however estimate it via the following property:</p> <p><span class="math-container">\begin{equation} \mathbf{\Sigma} = \mathbf{H}^{-1} \end{equation}</span></p> <p>where <span class="math-container">$\mathbf{H}$</span> is the hessian in a Gauss-Newton system (approximated with <span class="math-container">$\mathbf{J}^{T} \mathbf{J}$</span>), it just so happens that when we're trying to invert the Hessian via Schur's complement, we are actually:</p> <ol> <li>Inverting a sub-block of <span class="math-container">$\mathbf{H}$</span>: to solve <span class="math-container">$\mathbf{H} \delta\mathbf{x} = \mathbf{b}$</span> for <span class="math-container">$\delta\mathbf{x}$</span></li> <li>Performing marginalization to remove old states</li> </ol> <p>both at the same time. The marginalized <span class="math-container">$\mathbf{H}$</span> (after Schur's complement) will be equivalent to <span class="math-container">$\mathbf{\Sigma}_{xx}^{-1}.$</span></p> <h2>Derivation of Schurs Complement in Gauss-Newton</h2> <p>From the Gauss-Newton system, <span class="math-container">$\mathbf{H} \delta\mathbf{x} = \mathbf{b}$</span>, we can derive the marginalization of the old states <span class="math-container">$\mathbf{x}_{2}$</span> algebraically. 
Let us decompose the system as: <span class="math-container">\begin{equation} % H \begin{bmatrix} \mathbf{H}_{11} &amp; \mathbf{H}_{12} \\ \mathbf{H}_{21} &amp; \mathbf{H}_{22} \end{bmatrix} % x \begin{bmatrix} \delta\mathbf{x}_{1} \\ \delta\mathbf{x}_{2} \end{bmatrix} = % b \begin{bmatrix} \mathbf{b}_{1} \\ \mathbf{b}_{2} \end{bmatrix} \end{equation}</span> If we multiply out the block matrices and vectors out we get: <span class="math-container">\begin{equation} % Line 1 \mathbf{H}_{11} \delta\mathbf{x}_{1} + \mathbf{H}_{12} \delta\mathbf{x}_{2} = \mathbf{b}_{1} \\ % Line 2 \mathbf{H}_{21} \delta\mathbf{x}_{1} + \mathbf{H}_{22} \delta\mathbf{x}_{2} = \mathbf{b}_{2} \end{equation}</span> Now if we want to marginalize out the <span class="math-container">$\mathbf{x}_{2}$</span>, we simply rearrange the second equation above to be w.r.t. <span class="math-container">$\mathbf{x}_{2}$</span> like so: <span class="math-container">\begin{align} % Line 1 \mathbf{H}_{21} \delta\mathbf{x}_{1} + \mathbf{H}_{22} \delta\mathbf{x}_{2} &amp;= \mathbf{b}_{2} \\ % Line 2 \mathbf{H}_{22} \delta\mathbf{x}_{2} &amp;= \mathbf{b}_{2} - \mathbf{H}_{21} \delta\mathbf{x}_{1} \\ % Line 3 \delta\mathbf{x}_{2} &amp;= \mathbf{H}_{22}^{-1} \mathbf{b}_{2} - \mathbf{H}_{22}^{-1} \mathbf{H}_{21} \delta\mathbf{x}_{1} \\ \end{align}</span> substitute our <span class="math-container">$\delta\mathbf{x}_{2}$</span> above back into <span class="math-container">$\mathbf{H}_{11} \delta\mathbf{x}_{1} + \mathbf{H}_{12} \delta\mathbf{x}_{2} = \mathbf{b}_{1}$</span>, and rearrange the terms so it is w.r.t <span class="math-container">$\delta\mathbf{x}_{1}$</span> to get: <span class="math-container">\begin{align} % Line 1 \mathbf{H}_{11} \delta\mathbf{x}_{1} + \mathbf{H}_{12} (\mathbf{H}_{22}^{-1} \mathbf{b}_{2} - \mathbf{H}_{22}^{-1} \mathbf{H}_{21} \delta\mathbf{x}_{1}) &amp;= \mathbf{b}_{1} \\ % Line 2 \mathbf{H}_{11} \delta\mathbf{x}_{1} + \mathbf{H}_{12} \mathbf{H}_{22}^{-1} \mathbf{b}_{2} - \mathbf{H}_{12} \mathbf{H}_{22}^{-1} \mathbf{H}_{21} \delta\mathbf{x}_{1} &amp;= \mathbf{b}_{1} \\ % Line 3 (\mathbf{H}_{11} - \mathbf{H}_{12}\mathbf{H}_{22}^{-1}\mathbf{H}_{21}) \mathbf{x}_{1} &amp;= \mathbf{b}_{1} - \mathbf{H}_{12} \mathbf{H}_{22}^{-1} \mathbf{b}_{2} \end{align}</span> We end up with the Schur Complement of <span class="math-container">$\mathbf{H}_{22}$</span> in <span class="math-container">$\mathbf{H}$</span>: <span class="math-container">\begin{align} \mathbf{H} / \mathbf{H}_{22} := \mathbf{H}_{11} - \mathbf{H}_{12}\mathbf{H}_{22}^{-1}\mathbf{H}_{21} \\ \mathbf{b} / \mathbf{b}_{2} := \mathbf{b}_{1} - \mathbf{H}_{12} \mathbf{H}_{22}^{-1} \mathbf{b}_{2} \end{align}</span> If you want to marginalize out <span class="math-container">$\delta\mathbf{x}_{1}$</span> you can follow the same process above but w.r.t <span class="math-container">$\delta\mathbf{x}_{1}$</span>, that is left as an exercise to the reader ;)</p>
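<p>A quick numerical sanity check of the equivalence (using NumPy with a randomly generated information matrix) is to compare the Schur complement against the "invert, take the block, invert back" route through the covariance:</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((10, 5))
H = J.T @ J + np.eye(5)           # a well-conditioned stand-in for J^T J

n1 = 2                            # keep x1 (first two states), marginalize x2
H11, H12 = H[:n1, :n1], H[:n1, n1:]
H21, H22 = H[n1:, :n1], H[n1:, n1:]

# Marginal information of x1 via the Schur complement of H22 in H.
H_marg = H11 - H12 @ np.linalg.inv(H22) @ H21

# Same thing via the covariance: invert H, take the x1 block, invert back.
Sigma = np.linalg.inv(H)
H_marg_cov = np.linalg.inv(Sigma[:n1, :n1])

print(np.allclose(H_marg, H_marg_cov))   # True
</code></pre>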
8900
2016-01-15T23:20:05.253
|slam|computer-vision|
<p>Consider the system $$ \tag 1 H\delta x=-g $$ where $H$ and $g$ are the Hessian and gradient of some cost function $f$ of the form $f(x)=e(x)^Te(x)$. The function $e(x)=z-\hat{z}(x)$ is an error function, $z$ is an observation (measurement) and $\hat{z}$ maps the estimated parameters to a measurement prediction. </p> <p>This minimization is encountered in each iteration of many SLAM algorithms, e.g.one could think of $H$ as a bundle adjustment Hessian. Suppose $x=(x_1,x_2)^T$, and let $x_2$ be some variables that we seek to marginalize. Many authors claim that this marginalization is equivalent to solving a smaller liner system $M\delta x_1=-b$ where $M$ and $g$ are computed by applying Schur's complement to (1), i.e. if $$H= \begin{pmatrix} H_{11} &amp; H_{12}\\ H_{21} &amp; H_{22} \end{pmatrix} $$ then $$ M=H_{11}-H_{12}H_{22}^{-1}H_{21} $$ and $$ b=g_1-H_{12}H_{22}^{-1}g_2 $$</p> <p>I fail to understand why that is equivalent to marginalization... I understand the concept of marginalization for a Gaussian, and I know that schur's complement appears in the marginalization if we use the canonical representation (using an information matrix), but I don't see the link with the linear system. </p> <p><strong>Edit:</strong> I understand how Schur's complement appears in the process of marginalizing or conditioning $p(a,b)$ with $a,b$ Gaussian variables, as in the link supplied by Josh Vander Hook. I had come to the same conclusions, but using the canonical notation: If we express the Gaussian $p(a,b)$ in canonical form, then $p(a)$ is gaussian and its information matrix is the Schur complement of the information matrix of $p(a,b)$, etc. Now the problem is that I don't understand how Schur's complement appears in marginalization in bundle adjustment (for reference, in these recent papers: <a href="https://www.google.fr/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;uact=8&amp;ved=0ahUKEwiDgNiC9K3KAhVL1RoKHcXvAWkQFgggMAA&amp;url=http%3A%2F%2Fwww-users.cs.umn.edu%2F~stergios%2Fpapers%2FRSS_2013_Workshop_CKLAM_Esha_Kejian.pdf&amp;usg=AFQjCNEsKknpqqMNJcYSWawCinb5R3qzAg&amp;sig2=gikqqcwFFZWWA_g_p60J3w" rel="noreferrer">c-klam</a> (page 3 if you want to look) and in <a href="https://www.google.fr/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;ved=0ahUKEwjStJyd9K3KAhWBCxoKHfLWAK4QFgglMAA&amp;url=http%3A%2F%2Fwww.roboticsproceedings.org%2Frss09%2Fp37.pdf&amp;usg=AFQjCNHpE92Meqxmuja-N5gs0Oh3HbqB_g&amp;sig2=wh5gcv6W3tGrJ9TN1lGJOA" rel="noreferrer">this</a> (part titled marginalization). In these papers, a single bundle adjustment (<strong>BA</strong>) iteration is performed in a manner similar to what I initially described in the question. I feel like there is a simple connection between marginalizing a Gaussian and the marginalization in BA that I am missing. For example, one could say that optimizing $f$ (one iteration) is equivalent to drawing a random variable following a denstiy $$e^{-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)}$$ where $\Sigma$ is the inverse of the Hessian $H$ of $f$, and $\mu$ is the true value for $x$ (or an approximation of that value), and that marginalizing this density is equivalent to using Schur's compelement in the bundle? I am really confused... </p>
SLAM : Why is marginalization the same as schur's complement?
<p>It looks like maybe the author ditched the zero-based numbering, but I agree that it is hard to tell. </p> <p>Why not try to contact the author directly? His <a href="https://www.linkedin.com/in/mavrogiannis" rel="nofollow">LinkedIn page</a> shows he's currently a graduate research assistant at Cornell. I've contacted authors in the past and everyone I've gotten in touch with seemed pleased that I did so. </p>
8907
2016-01-16T18:17:17.803
|inverse-kinematics|
<p>I am trying to find the inverse kinematics formulation of DLR/HIT II hand. Till now I had success in finding an analytical method described in the thesis of Mavrogiannis, Christoforos I. named <a href="http://dspace.lib.ntua.gr/handle/123456789/7858" rel="nofollow">Grasp Synthesis Algorithms for Multifingered Robot Hands</a>, given in appendix B. My question is regarding the A.28 where the author calculates q3 but has mentioned previously in the text that q3 is equal to q2. </p> <p>Note: q here denotes joint angles</p>
Inverse Kinematics of DLR/HIT II Hand
<p>I don't think you'll be able to use PID control successfully because you don't have a continuous feedback signal. Your feedback is binary on each of three sensors. </p> <p>From your description, you need to monitor or eliminate the gap between the sensors. Monitor meaning add more sensors in the gap, and eliminate meaning move the existing sensors close enough together that the gap is smaller than the line thickness.</p> <p>You have a good description of your problem, but you have a poor description of the scenario. For instance:</p> <ol> <li>Why did you set the gap to be wider than the line?</li> <li>What happens when the robot is supposed to stop? Does the line you were following just end? </li> <li>How quickly do you need to stop after detecting all white?</li> <li>Why are you using three sensors? </li> </ol> <p>If I were you, I would use two sensors to follow the line. Set them equal distances on either side of the "center" you would like to use for line following. The total distance between the sensors should be less than or equal to the line thickness. </p> <p>Now, with two sensors, if both register black, you are on the line and should go straight. If the left sensor fails to register black then you are too far to the left and you need to turn right. When both register black then turn a little to get straight again then resume going straight. Same process for the right sensor: when it fails to register black, start turning, then once both register black then course-correct back to straight and resume straight driving again. </p> <p>With one sensor only to detect if you're on the line you won't know what action to take after you depart from your intended course because you have no way of knowing which way you need to go back for course correction. </p>
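<p>The two-sensor logic in sketch form (the sensor reads and the drive/stop calls are placeholders for whatever your robot's I/O actually looks like):</p> <pre><code>def follow_line_step(left_black, right_black, drive, stop):
    """One control step of the two-sensor scheme described above."""
    if left_black and right_black:
        drive(1.0, 1.0)     # both sensors on the line: go straight
    elif not left_black and right_black:
        drive(1.0, 0.4)     # drifted left: slow the right wheel to turn right
    elif left_black and not right_black:
        drive(0.4, 1.0)     # drifted right: slow the left wheel to turn left
    else:
        stop()              # both on white: line lost (or end of line)
</code></pre>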
8909
2016-01-16T20:05:28.683
|microcontroller|line-following|avr|
<p>I am working on building a line follower robot using ATmega2560 and I want its <strong>movement to be more precise</strong>. I am facing a very typical problem. It consists of three(3) IR sensors. The <strong>thickness of the line</strong> to be followed is <strong>1.2cm</strong> and the <strong>gap between the sensors</strong> is more than that, around <strong>1.8cm</strong>. </p> <p>So if the <strong>black line comes between the center and any of the side sensors</strong>, <strong>all the three sensors</strong> are on <strong>white</strong> and it stops. And I need the robot to stop over white, due to my application. So please can anyone suggest me a good algorithm to tackle this situation. I think PID control can be of good use, as i searched on Google. But I don't understand how to implement it with three sensors. Please Help</p>
Typical Problem in Simple Line Follower Using 3 sensors
<p><strong>Kinematics of mobile robots</strong></p> <p><a href="https://i.stack.imgur.com/SjQrN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SjQrN.png" alt="enter image description here"></a></p> <p>For the figure on the left:</p> <ol> <li>I = Inertial frame;</li> <li>R = Robot frame;</li> <li>S = Steering frame;</li> <li>W = Wheel frame;</li> <li>$\beta$ = Steering angle;</li> </ol> <p>For the figure on the right:</p> <ol> <li>L = Distance between the wheels;</li> <li>r = radius of the wheel;</li> </ol> <p>Now we can derive some useful equations.</p> <ol> <li>Kinematics:</li> </ol> <p>$\hspace{2.5em}$ $\vec{v}_{IW} = \vec{v}_{IR} + \vec{\omega}_{IR} \times \vec{r}_{RS}$</p> <p>If we express the equation above in the wheel frame:</p> <p>$\hspace{2.5em}$ $\begin{bmatrix} 0 \\ r\dot{\varphi} \\ 0 \end{bmatrix} = R(\alpha+\beta)R(\theta)\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} + \begin{bmatrix} 0 &amp; -\dot{\theta} &amp; 0 \\ \dot{\theta} &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 \end{bmatrix}\begin{bmatrix} l\cos(\beta) \\ -l\sin(\beta) \\ 0 \end{bmatrix}$</p> <p>We obtain the rolling constraint and the no-sliding constraint respectively:</p> <p>$\hspace{2.5em}$ $[-\sin(\alpha+\beta)\hspace{1.0em}\cos(\alpha+\beta)\hspace{1.0em}l\cos(\beta)]\dot{\xi}_{R} = \dot{\varphi}r$</p> <p>$\hspace{2.5em}$ $[\cos(\alpha+\beta)\hspace{1.0em}\sin(\alpha+\beta)\hspace{1.0em}l\sin(\beta)]\dot{\xi}_{R} = 0$</p> <p>where $\dot{\xi}_{R} = [\dot{x_{R}}\hspace{1.0em}\dot{y_{R}}\hspace{1.0em}\dot{\theta}]^{T}$</p> <p>Now we need to apply each of these constraints to the two differential-drive wheels:</p> <ul> <li>For the left wheel: $\alpha = -\frac{\pi}{2}$, $\beta = 0$, $l = -\frac{L}{2}$</li> <li>For the right wheel: $\alpha = -\frac{\pi}{2}$, $\beta = 0$, $l = \frac{L}{2}$</li> <li>Stacked equation of motion:</li> </ul> <p>$\hspace{2.5em}$ $\begin{bmatrix} 1 &amp; 0 &amp; \frac{L}{2} \\ 1 &amp; 0 &amp; -\frac{L}{2} \\ 0 &amp; -1 &amp; 0 \\ 0 &amp; -1 &amp; 0 \end{bmatrix}\dot{\xi}_{R} = \begin{bmatrix} r &amp; 0\\ 0 &amp; r \\ 0 &amp; 0 \\ 0 &amp; 0 \end{bmatrix}\begin{bmatrix} \dot{\varphi}_{r} \\ \dot{\varphi}_{l} \end{bmatrix} $</p> <p>$\hspace{2.5em}$ $A\dot{\xi}_{R} = B\dot{\varphi} $</p> <p>For the forward kinematic solution, just do:</p> <p>$\hspace{2.5em}$ $\dot{\xi}_{R} = \big( A^{T}A \big)^{-1}A^{T}B\dot{\varphi} $</p> <p>which yields:</p> <p>$\hspace{2.5em}$ $\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} \frac{r}{2} &amp; \frac{r}{2} \\ 0 &amp; 0 \\ \frac{r}{L} &amp; -\frac{r}{L} \end{bmatrix} \begin{bmatrix} \dot{\varphi}_{r} \\ \dot{\varphi}_{l} \end{bmatrix}$</p> <p>An excellent chapter that I suggest is <a href="http://www.cs.cmu.edu/~rasc/Download/AMRobots3.pdf" rel="nofollow noreferrer">here</a>.</p>
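<p>As a numerical check of that last step (the wheel radius, wheel separation and wheel speeds below are arbitrary example values):</p> <pre><code>import numpy as np

r, L = 0.05, 0.30                  # example wheel radius and wheel separation

# Stacked rolling / no-sliding constraints:  A xi_dot = B phi_dot
A = np.array([[1.0,  0.0,  L / 2],
              [1.0,  0.0, -L / 2],
              [0.0, -1.0,  0.0],
              [0.0, -1.0,  0.0]])
B = np.array([[r,   0.0],
              [0.0, r],
              [0.0, 0.0],
              [0.0, 0.0]])

# Least-squares forward kinematics:  xi_dot = (A^T A)^-1 A^T B phi_dot
F = np.linalg.solve(A.T @ A, A.T @ B)
print(F)                           # [[r/2, r/2], [0, 0], [r/L, -r/L]]

phi_dot = np.array([2.0, 1.0])     # right and left wheel speeds in rad/s
print(F @ phi_dot)                 # body-frame [x_dot, y_dot, theta_dot]
</code></pre>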
8911
2016-01-16T21:33:28.190
|mobile-robot|kinematics|wheeled-robot|differential-drive|two-wheeled|
<p>I'm working on a little project where I have to do some simulations on a small robot.</p> <p>In my case I'm using a differential-drive robot as one of the wheels of a bigger robot platform (which has two differential-drive casters), and I really do not understand how to find its kinematics in order to describe it in a model for finding the speed <strong>V_tot</strong> of the platform.</p> <p><a href="https://i.stack.imgur.com/uBhDr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uBhDr.png" alt="enter image description here"></a></p> <p>This is my robot and I know the following parameters</p> <ul> <li><strong>d</strong> is the distance between a joint where my robot is constrained</li> <li><strong>blue point</strong> is the joint where the robot is linked to the robot platform</li> <li><strong>L</strong> is the distance between the wheels</li> <li><strong>r</strong> is the radius of the wheel</li> <li>the robot can spin around the <strong>blue point</strong> by an angle <strong>THETA</strong></li> </ul> <p>As I know all these dimensions, I would like to apply two velocities <strong>V_left</strong> and <strong>V_right</strong> in order to move the robot.</p> <p>Let's assume that <strong>V_left</strong> = <strong>- V_right</strong>; how do I find analytically the ICR (Instantaneous Center of Rotation) of this constrained robot?</p> <p>I mean that I cannot understand how to introduce <strong>d</strong> in the formula.</p>
How to find kinematics of differential drive caster robot?
<p>It turns out that the default I2C baud rate becomes a bottleneck when reading the measurement data from the IMUs, so increasing it from 100 kbps to 400 kbps boosted my execution frequency from 150 Hz (after optimizing) to ~210 Hz (0.00419 s per loop).</p>
8916
2016-01-17T11:26:46.747
|quadcopter|pid|raspberry-pi|sensor-fusion|c++|
<blockquote> <p>Is it possible to speed up execution time of a c++ program in raspberry pi solely by increasing the i2c baudrate and increasing the sampling frequency of the sensors? </p> </blockquote> <p>I have the issue of sudden jerkiness of my quadcopter and found the culprit which is the frequency at which my loop excecutes which is only about 14Hz. The minimum requirement for a quadcopter is 100-200hz. It is similar to the issue he faces here <a href="https://robotics.stackexchange.com/questions/6720/raspberry-pi-quadcopter-thrashes-at-high-speeds">Raspberry Pi quadcopter thrashes at high speeds</a></p> <p>He said that he was able to increase his sampling rate from 66hz to 200hz by increasing the i2c baudrate. I am confused on how that is done.</p> <p>In the wiring pi library, it says that we can set the baudrate using this command:</p> <pre><code>gpio load i2c 1000 will set the baud rate to 1000Kbps – ie. 1,000,000 bps. (K here is times 1000) </code></pre> <p>What I am curious about is how to set this baudrate to achieve my desired sampling rate?</p> <p>I plan on optimizing it further to achieve at least a 100Hz sampling rate</p> <p>As of now, the execution time of each loop in my quadcopter program is at 0.07ms or 14Hz.</p> <p>It takes 0.01ms to 0.02ms to obtain data from the complementary filter.</p> <p>I have already adjusted the registers of my sensors to output readings at 190Hz (Gyroscope L3GD20H) and 200Hz (Accelerometer LSM303) and 220Hz (Magnetometer LSM303).</p>
Quadcopter program execution time optimization using Raspberry Pi by increasing i2c baudrate
<p>Khalil himself says that it is a form of the modified D-H parameters. See, <em>e.g.,</em> Section 2.1 of the 2000 Springer-Verlag book <em>Advances in Robot Kinematics.</em></p>
8930
2016-01-18T21:50:12.727
|dh-parameters|
<p>Is the notation of the geometry of robots from Khalil and Kleinfinger be considered as one of the probably "many" Modified DH Parameters?</p>
Modified DH Parameters?
<p>As the other answers suggest, 100ms of delay in control may not be significant to your application. It would be prudent to first solve your data-fusion problem and then see if the delay is an issue to your controller. I would first record some data of a closed-loop path and do some offline filtering to see if the results are good. For instance, make your robot drive a square and record that data.</p> <p>Your first problem is then to synchronise the data streams; since your IMU is lagging by 100ms, then simply operate on the newest IMU data you have, with the odometry from the corresponding time. </p> <p>The data-fusion could be done using a Kalman filter for the 2D case (X,Y,heading). The heading is updated by the IMU and the velocity (and heading, depending on the model) is updated by the odometry. See the system model in <a href="http://home.isr.uc.pt/~rui/publications/isie20052.pdf" rel="nofollow">this paper</a>, which uses odometry. A good solution would involve a nonlinear KF such as EKF, but I suspect for low speeds and high sensor rates you can get away with a linear KF. I think the states would be $\mathbf{[x, y, \phi, x', y', \phi']}^\top$, where your process model would be the same as the first paper I linked, and the measurement model is simply $\mathbf{y}= \mathbf{u} + v$, where v is Gaussian white noise with known standard deviation. Maybe see <a href="https://github.com/acfr/snark/blob/master/math/filter/test/constant_speed_test.cpp" rel="nofollow">this kalman filter framework</a>.</p> <p>After you have some working results with your offline data, you can worry about implementing it online. The implementation will greatly depend on your choice of data-fusion algorithm.</p> <p>See also: <a href="http://www.mrpt.org/tutorials/programming/odometry-and-motion-models/probabilistic_motion_models/" rel="nofollow">particle filter methods</a> and a full <a href="http://wiki.ros.org/robot_localization" rel="nofollow">ROS implementation</a>.</p>
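<p>To give a flavour of the delay handling, here is a deliberately simplified heading-only fusion sketch (a 1-D Kalman-style update rather than the full EKF discussed above; the noise values, buffer length and interfaces are assumptions):</p> <pre><code>from collections import deque
import math

class HeadingFuser:
    """Fuse odometry heading increments with an IMU heading that arrives late.

    The lag is handled by keeping a short history of past heading estimates,
    so an IMU sample time-stamped ~100 ms ago is compared against the
    estimate from that time, and the resulting correction is applied now.
    """

    def __init__(self, q=0.01, r=0.05, history_s=0.5):
        self.theta, self.P = 0.0, 1.0
        self.q, self.r = q, r              # process / measurement variances (assumed)
        self.history = deque()             # (timestamp, theta, P) snapshots
        self.history_s = history_s

    def predict(self, t, dtheta_odom):
        """Advance the heading with the odometry increment measured at time t."""
        self.theta += dtheta_odom
        self.P += self.q
        self.history.append((t, self.theta, self.P))
        while self.history and t - self.history[0][0] &gt; self.history_s:
            self.history.popleft()

    def imu_update(self, t_measured, theta_imu):
        """Apply an IMU heading that was actually taken at t_measured."""
        if not self.history:
            return
        # Estimate (and covariance) closest in time to when the IMU sampled.
        _, theta_then, P_then = min(self.history,
                                    key=lambda s: abs(s[0] - t_measured))
        innovation = math.atan2(math.sin(theta_imu - theta_then),
                                math.cos(theta_imu - theta_then))
        K = P_then / (P_then + self.r)
        self.theta += K * innovation       # approximate: no re-propagation
        self.P *= (1.0 - K)
</code></pre> <p>Applying the old correction directly to the current state (instead of rolling the filter back and re-propagating) is an approximation, but for a slow differential-drive base and a 100 ms lag it is often acceptable.</p>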
8931
2016-01-19T01:06:02.250
|mobile-robot|imu|odometry|
<p>I'm working with a differential-drive robot that has odometry measurements from wheel shaft encoders and heading information from an IMU (I'm using a BNO055 in IMU mode to get Euler angles, primarily the heading angle).</p> <p>I'd like to use the IMU heading angle to augment the odometry, which is prone to slipping and other errors, but the IMU lags the odometry by up to 100 ms.</p> <p>How can I combine these measurements to get the best estimate of the robot pose?</p> <p>Thanks for any word on this.</p>
How to combine odometry information with time-shifted information from IMU?
<p>Vision will typically use as much processing power as you can provide. Large humanoid (human size) control systems also take a lot of processing power. It is common for a large humanoid to have at least two large computers (one for vision, one for control) or more. Also, each motor controller and many sensor subsystems will have microcontrollers.</p> <p>There is no preferred OS that I know of. Linux with Xenomai or preempt patch seems to be popular and what I use. I would not be surprised to find vxworks, qnx, xPC or realtime windows running the current generation of humanoids. The PC doing the motor control will need a real-time capable OS (which all of the above are).</p> <p>The Darpa Robotics Challenge last year generated a lot of good documentation describing various humanoid systems. You can find lots of papers describing in detail the computer systems used for many of the contestants. An example is <a href="http://drc.mit.edu/docs/2014_jfr_fallon.pdf" rel="nofollow">http://drc.mit.edu/docs/2014_jfr_fallon.pdf</a></p>
8934
2016-01-19T09:16:44.173
|embedded-systems|humanoid|
<p>I ask this question to clear up my understanding of the hardware structure of a humanoid autonomous fire robot. Here is the scenario: a fire robot detects humans in a fire, and there are some vision cameras plus temperature and smoke sensors which help to perform this task. Nowadays there are many processors on the market like <code>ARMV7</code> and <code>Snapdragon</code> which process tasks in a device and control the system, but I don't think an autonomous fire robot uses this kind of processor.</p> <p>Do autonomous robots like a fire robot use processors or microcontrollers? Do they require an <code>OS</code> or <code>RAM</code> environment, like any computer system which uses these kinds of things?</p>
Autonomous robots hardware structure planning
<p>The Jacobian in that equation is from the joint velocity to the "spatial velocity" of the end effector.</p> <p>The spatial velocity of an object is a somewhat unintuitive concept: it is the velocity of a frame rigidly attached to the end effector <em>but currently coincident with the origin frame</em>. It may help to think of the rigid body as extending to cover the whole space, and you're measuring the velocity by standing at the origin and looking at what is happening to the end effector where you are, instead of at the end of the arm.</p> <p>In the example given, the origin is at the same location as the first joint. This means that rotating the joint will sweep the end effector in a circle, but that the frame at the origin will only pivot around the vertical axis; this is encoded by the first column of the Jacobian being [0 0 0 0 0 1]'.</p> <p>Rotating joints two and three will pull the origin-overlapping frame away from the origin, and hence have translational components.</p> <p>To see how this works in action, note that the Jacobian that you asked about from MLS can be simplified to</p> <p>$J_{st,SCARA}^{s} = \begin{bmatrix} 0 &amp; \phantom{-}y_{2} &amp; \phantom{-}y_{3} &amp; 0\\ 0 &amp; -x_{2} &amp; - x_{3} &amp; 0 \\ 0 &amp;0&amp;0&amp;1 \\ 0&amp; 0&amp; 0&amp;0\\ 0&amp; 0&amp; 0&amp;0\\ 1&amp;1&amp;1 &amp;0 \end{bmatrix}, $</p> <p>in which the first three columns encode the velocity at the origin of an object rotating around the corresponding axis and the fourth column gives the velocity at the origin of an object sliding up the fourth axis.</p> <p>You can convert the spatial-velocity Jacobian into a world-frame Jacobian by incorporating the Jacobian from base-frame motion to motion at the end effector's current position and orientation. For the SCARA arm, this works out fairly cleanly, with the only difference between the two frame velocities being the "cross product" term that accounts for the extra motion of the end effector sweeping around the base,</p> <p>$ J^{w}_{st, SCARA} = \begin{bmatrix} % \begin{pmatrix} 1 &amp;&amp;\\ &amp;1&amp;\\ &amp;&amp; 1 \end{pmatrix} % &amp; % \begin{pmatrix} 0 &amp; &amp; -y_{4}\\ &amp; 0 &amp; \phantom{-}x_{4} \\ &amp; &amp; \phantom{-}0 \end{pmatrix} \\ \begin{pmatrix} 0&amp;&amp; \\ &amp;0&amp;\\ &amp;&amp;0 \end{pmatrix} &amp; \begin{pmatrix} 1 &amp; &amp; \\ &amp; 1 &amp; \\ &amp;&amp; 1 \end{pmatrix} \end{bmatrix} J^{s}_{st,\text{SCARA}}. 
$</p> <p>This product evaluates to</p> <p>$ J_{st,SCARA}^{w} = \begin{bmatrix} -y_{4} &amp; -(y_{4}-y_{2}) &amp; -(y_{4}-y_{3}) &amp; 0\\ \phantom{-}x_{4} &amp; \phantom{-}x_{4}-x_{2} &amp; \phantom{-}x_{4}- x_{3} &amp; 0 \\ 0 &amp;0&amp;0&amp;1 \\ 0&amp; 0&amp; 0&amp;0\\ 0&amp; 0&amp; 0&amp;0\\ 1&amp;1&amp;1 &amp;0 \end{bmatrix}, $</p> <p>which matches what we would expect to see: Each of the rotary joints contributes to the end effector velocity by the cross product between its rotational velocity and the vector from the joint to the end effector (note that $x_{4}-x_{3}$ and $y_{4}-y_{3}$ are both zero).</p> <p>===</p> <p>In the general case, where the rotations are not only around the $z$ axis, you would want to use the full form of the matrix mapping between the spatial to world Jacobians,</p> <p>$ J_{w} = \begin{bmatrix} % \begin{pmatrix} 1 &amp;&amp;\\ &amp;1&amp;\\ &amp;&amp; 1 \end{pmatrix} % &amp; % \begin{pmatrix} \phantom{-}0 &amp; \phantom{-}z &amp; -y\\ -z&amp; \phantom{-}0 &amp; \phantom{-}x \\ \phantom{-}y&amp; -x &amp; \phantom{-}0 \end{pmatrix} \\ \begin{pmatrix} 0&amp;&amp; \\ &amp;0&amp;\\ &amp;&amp;0 \end{pmatrix} &amp; \begin{pmatrix} 1 &amp; &amp; \\ &amp; 1 &amp; \\ &amp;&amp; 1 \end{pmatrix} \end{bmatrix} J^{s}, $</p> <p>which encodes the cross product terms for all three rotation axes when the end effector is at $(x,y,z)$ relative to the base frame.</p>
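<p>A small numerical illustration of that conversion (NumPy, with made-up joint positions; the helper names are mine):</p> <pre><code>import numpy as np

def skew(p):
    x, y, z = p
    return np.array([[0.0, -z,   y],
                     [z,    0.0, -x],
                     [-y,   x,   0.0]])

def spatial_to_world(J_s, p_ee):
    """Map a spatial-velocity Jacobian to the end-effector-point Jacobian.

    Implements v_ee = v_spatial + omega x p_ee block-row by block-row,
    i.e. the [[I, -skew(p)], [0, I]] mapping from the answer.
    """
    T = np.eye(6)
    T[:3, 3:] = -skew(p_ee)
    return T @ J_s

# SCARA spatial Jacobian from above, with arbitrary numbers substituted
# (note x4 = x3 and y4 = y3 for this arm).
x2, y2 = 0.1, 0.2
x3, y3 = 0.3, 0.5
x4, y4, z4 = x3, y3, 0.4
J_s = np.array([[0.0,  y2,  y3, 0.0],
                [0.0, -x2, -x3, 0.0],
                [0.0, 0.0, 0.0, 1.0],
                [0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0],
                [1.0, 1.0, 1.0, 0.0]])
J_w = spatial_to_world(J_s, np.array([x4, y4, z4]))
print(np.round(J_w, 3))   # first column is [-y4, x4, 0, 0, 0, 1]^T, as expected
</code></pre>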
8940
2016-01-19T19:56:25.943
|manipulator|jacobian|product-of-exponentials|screw-theory|
<p>I've taken a class and started a thesis on robotics and my reference for calculating the Jacobian by product of exponentials seems incorrect, see:</p> <p><a href="http://www.cds.caltech.edu/~murray/books/MLS/pdf/mls94-complete.pdf" rel="nofollow">http://www.cds.caltech.edu/~murray/books/MLS/pdf/mls94-complete.pdf</a></p> <p>Specifically the resulting Jacobian matrix for the SCARA manipulator on page 118 would have us believe that the end effector translational velocity depends on joints 2 and 3 rather than 1 and 2.</p> <p>Could someone please explain me why?</p>
Robotic manipulator Jacobian by product of exponentials
<p>After studying the code I cited in the <strong>EDIT</strong> of my question, I came up with this solution, which is working so far:</p> <pre><code>#define encoderRight 2
#define encoderLeft 3

volatile int countR = 0;
volatile int countL = 0;
volatile boolean firstChangeR = true;   // expecting the 1st change of the bounce
volatile boolean firstChangeL = true;   // same
boolean right_set = true;
boolean left_set = true;

void setup() {
  Serial.begin(9600);

  // Right encoder
  attachInterrupt(digitalPinToInterrupt(encoderRight), doEncoderRight, CHANGE);
  pinMode(encoderRight, INPUT);
  digitalWrite(encoderRight, HIGH);     // turn on pullup resistor

  // Left encoder
  attachInterrupt(digitalPinToInterrupt(encoderLeft), doEncoderLeft, CHANGE);
  pinMode(encoderLeft, INPUT);
  digitalWrite(encoderLeft, HIGH);
}

void loop() {
  firstChangeR = true;   // we reset to true to expect the next change triggered
  firstChangeL = true;   // by the interrupt

  // printing
  String txt = String(countL);
  txt += "__";
  txt += String(countR);
  Serial.println(txt);
}

void doEncoderRight(){
  if (firstChangeR) delay(1);   // if this is the first detection then we wait
                                // for the bounce to be over
  // if the current state is different from the last saved state then:
  // a real change happened and it's not part of the bouncing but
  // actually the real beginning of the change: the first bounce !
  if (digitalRead(2) != right_set) {
    right_set = !right_set;     // so we change the real state
    countR ++;                  // we also increment the right encoder
    // since this was the firstChange the next are part of bouncing, so:
    firstChangeR = false;
  }
}

void doEncoderLeft(){
  if (firstChangeL) delay(1);
  if (digitalRead(3) != left_set) {
    left_set = !left_set;
    countL ++;
    firstChangeL = false;
  }
}
</code></pre> <p>What do you think about it? Do you think it's reliable, and is there any improvement you can propose?</p>
8950
2016-01-20T11:51:47.767
|arduino|wheel|two-wheeled|interrupts|
<p>I am building a simple robot with two driving wheel. I want to control the wheel rotation using a wheel encoder like this <a href="http://www.tinyosshop.com/index.php?route=product/product&amp;product_id=541" rel="nofollow noreferrer">one</a>.</p> <p>Here is a code I have on Arduino to try to understand the problem I'm facing:</p> <pre class="lang-cc prettyprint-override"><code>int count = 0; void setup() { Serial.begin(9600); attachInterrupt(digitalPinToInterrupt(2), upL, RISING); pinMode(2, INPUT); } void loop() { Serial.println(String(digitalRead(2)) + "__" + String(count)); } void upL(){ count++; } </code></pre> <p>What I notice is:<br> The interrupt is triggered <strong>multiple times</strong> when the sensor beam is cut <strong>once</strong>.<br> But when I <em>digitalRead</em> the pin, then there is only one change. </p> <p>I also noticed that the interrupt is also triggered when going from HIGH to LOW.</p> <p>Here is an example of the ouput I have:</p> <pre><code>0__0 0__0 0__0 0__0 ... ... 0__0 0__0 0__0 &lt;&lt;&lt; change from LOW to HIGH... 1__9 &lt;&lt;&lt; the interrupt must have incremented only once... 9 instead ! 1__9 1__9 1__9 ... ... 1__9 1__9 &lt;&lt;&lt; change from HIGH to LOW. the interrupt shouldn't be triggered 0__24 &lt;&lt;&lt; still... we have 15 increments 0__24 0__24 0__24 0__24 ... ... 0__24 0__24 &lt;&lt;&lt; same here... 1__51 &lt;&lt;&lt; 26 increments 1__51 ... ... 1__51 &lt;&lt;&lt; same here... 0__67 &lt;&lt;&lt; 16 increments 0__67 0__67 </code></pre> <p>The only way I can explain that is that during a change of state, the signal received from the sensor is not really square, but somehow noisy.<br> Like in this image :<br> <a href="https://i.stack.imgur.com/DzMIK.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DzMIK.gif" alt="explanation?"></a><br> Therefore we would have, indeed, many RISING on one change....<br> <em>(However reading the output of the sensor on an analog pin shows a direct variation from 880(HIGH) to 22(LOW))</em></p> <p>Does anyone have another explanation? Or a solution to this problem ?</p> <hr> <p><strong>EDIT</strong></p> <p>Thanks to @TobiasK I know that this is called a <em>bouncing effect</em>. By doing further research I came across this solution:<br> playground.arduino.cc/Main/RotaryEncoders (Ctrl+F for rafbuff).<br> I'm trying it and i'll let you know.</p>
Wheel encoder triggers interrupt too many times
<p>The Raspberry Pi's UART interface can handle up to 4 Mbps, which fits my needs, as I am sending a 480-element float vector from Pi_1 to Pi_2.</p>
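<p>For what it's worth, a minimal sketch of how such a transfer could look with pySerial and a fixed-size binary frame (the device path, baud rate and sync header are assumptions, not tested values):</p> <pre><code>import serial
import struct

port = serial.Serial('/dev/serial0', baudrate=1000000, timeout=1)  # assumed port/baud

def send_vector(vec):                       # on Pi_1
    payload = struct.pack('&lt;480f', *vec)    # 480 little-endian float32 = 1920 bytes
    port.write(b'\xAA\x55' + payload)       # simple two-byte sync header

def receive_vector():                       # on Pi_2
    while port.read(2) != b'\xAA\x55':      # resynchronise on the header
        pass
    data = port.read(480 * 4)
    return struct.unpack('&lt;480f', data)
</code></pre> <p>A real implementation would want byte-wise resynchronisation and a checksum, but at 2 Hz times roughly 2 kB per frame the raw bandwidth is not the limiting factor.</p>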
8951
2016-01-20T13:08:48.823
|quadcopter|raspberry-pi|communication|
<p>Good day</p> <p>I am currently implementing an autonomous quadcopter with stereo vision using raspberry Pi. One (Let's call this Pi_1) is responsible for stereo vision the other is responsible for motor control and trajectory planning (Pi_2). I was looking for a way to transfer a 480 element float vector via GPIO from Pi_1 to Pi_2. Pi_1 stereovision program runs at 2Hz while Pi_2 motor control runs at 210Hz. </p> <blockquote> <p>Is there any protocol fast enough to deliver this amount of information to the second raspberry pi via GPIO? </p> </blockquote> <p>I am currently looking at SPI but I saw that the Raspberry Pi cannot be turned to a Slave making it not an option. I also looked at UART however it is too slow for my needs. All the I2c ports on the Pi are currently being used by the stereo vision cameras and the IMU's. If the gpio option is not feasible, I am also open for other suggestions such as using other hardware (middle man) or wireless options. </p>
Establishing Data Transfer between two Raspberry Pi's using GPIO
<p>Here you go. This will control 2 steppers on an Arduino.</p> <pre><code>#include &lt;Stepper.h&gt;

#define STEPS 200   // change this to the number of steps on your motor

// create an instance of the stepper class, specifying
// the number of steps of the motor and the pins it's attached to
Stepper stepperX(STEPS, 2, 3, 4, 5);     // Arduino pins attached to the stepper driver
Stepper stepperY(STEPS, 8, 9, 10, 11);

int previous = 0;   // the previous reading from the analog input

void setup() {
  // set the speed of the motor to # RPMs
  stepperX.setSpeed(60);
  stepperY.setSpeed(60);

  // Initialize the Serial port:
  Serial.begin(9600);
}

void loop() {
  int sensorPosition = analogRead(0);

  // Step forward # steps:
  Serial.println("Moving Forward...");
  stepperX.step(500);
  stepperY.step(500);
  delay(1000);

  // Step backward # steps:
  Serial.println("Moving Backward...");
  stepperX.step(-100);
  stepperY.step(-100);
  delay(1000);
}
</code></pre> <p>As mentioned in @Chuck's post, the Arduino doesn't directly control stepper motors. What happens is that you plug in a stepper driver to the Arduino, and that driver is wired to &amp; controls the motor. You'll see that my pins are 2, 3, 4, and 5 for one driver &amp; motor, and 8, 9, 10, and 11 for the other driver &amp; motor.</p> <pre><code>Stepper stepperX(STEPS, 2, 3, 4, 5);     // Arduino pins attached to the stepper driver
Stepper stepperY(STEPS, 8, 9, 10, 11);
</code></pre> <p>This is because I'm using 4-phase unipolar motors. Your choice of stepper motors will determine your own setup.</p> <p>Here's a great writeup on the topic of motor selection:</p> <p><a href="https://learn.adafruit.com/all-about-stepper-motors/types-of-steppers" rel="nofollow">AdaFruit stepper motor knowledge</a></p> <p>Good luck!</p> <p>Ryan</p>
8954
2016-01-20T18:02:48.327
|arduino|robotic-arm|stepper-driver|
<p>I would like to know what software is available to control a timing belt with stepper motors on an Arduino board, much like how it's done in 3D printing. But in this case I won't be making a 3D printer, just one simple setup.</p>
Software to control an Arduino setup with a timing belt and stepper motors
<p>In 1981 Raibert and Craig wrote a paper <em>Hybrid Position/Force Control of Manipulators</em> which was published in the June 1981 issue of <em>Journal of Dynamic Systems, Measurement, and Control.</em> It was republished in Brady's book <em>Robot Motion: Planning and Control.</em> You can find many similar concepts today, some being called "position and torque control," and others using velocity and/or force variables for the combination. If you are able to sense joint torques (or motor currents), this would be a great approach. If you cannot sense the joint torques, you can use an end effector force/torque sensor. This would work for the scenario in which the robot cannot finish its positioning task when the object being gripped (or the gripper itself) hits an obstacle. It would not, however, detect when an individual arm link encounters an obstacle. </p>
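<p>The core idea can be sketched in a few lines (this is a toy illustration of the selection-matrix concept, not Raibert and Craig's actual formulation; the gains and names are assumptions):</p> <pre><code>import numpy as np

def hybrid_command(S, x, x_des, f, f_des, kp_x=100.0, kp_f=0.5):
    """Very simplified hybrid position/force law in task space.

    S is a diagonal 0/1 selection matrix: 1 marks a position-controlled
    direction, 0 a force-controlled one.  The returned task-space force
    would then be mapped to joint torques through J^T.
    """
    I = np.eye(len(x))
    position_part = S @ (kp_x * (x_des - x))              # act only on selected axes
    force_part = (I - S) @ (f_des + kp_f * (f_des - f))   # regulate contact force
    return position_part + force_part

# Example: keep position control in x and y, push with 5 N along z.
S = np.diag([1.0, 1.0, 0.0])
cmd = hybrid_command(S,
                     x=np.zeros(3), x_des=np.array([0.1, 0.0, 0.0]),
                     f=np.array([0.0, 0.0, 4.0]), f_des=np.array([0.0, 0.0, 5.0]))
print(cmd)
</code></pre> <p>In the scenario from the question, detecting an unexpected force (or motor current) above a threshold and switching the affected directions from position to force control is one way to prevent the tracking error, and hence the commanded torque, from winding up against the obstacle.</p>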
8960
2016-01-21T06:44:08.133
|control|robotic-arm|joint|
<p>What are strategies used when trajectories, which are applied to a robotic joint, are interrupted? Say a robotic arm hits an obstacle, the controller just keeps applying the trajectory. Then at the end, the error gets so large, the torque can get quite strong and damage the robot or snap.</p>
What to do when position control with trajectories is interrupted?
<p>I could finally control the CC3D from the RPi.</p> <p>The problem was a bad configuration of an RC input. I also upgraded the CC3D firmware from the new Ground Control Station software developed by LibrePilot, another community that provides support for the OpenPilot boards.</p>
8965
2016-01-21T16:19:02.343
|raspberry-pi|quadcopter|uav|
<p>I am trying to control a quadcopter using the OpenPilot CC3D board and a Raspberry Pi. The main idea was to first replace the signals from the RC emitter to the CC3D RC receiver with an RPi connected directly to the RC receiver inputs of the CC3D.</p> <p>As far as I know the RC signals to the CC3D are PWM, so the RPi should be able to control the channels using the RPIO library to create the PWM in software. </p> <p>But after making some tests I haven't found any way to move the motors. I am using the Ground Control Station (OpenPilot software) to configure the CC3D.</p> <p>I am not sure whether I need to send the PWM signals in any particular order or something like that. I am also not sure how the Flight Mode Switch works; I suppose it works the same way as the other channels, using PWM.</p> <p>Has anyone made anything similar to this? </p>
CC3D - Replacing RC emitter with an RPi
<p>You've got most of the needed parts right. However, since you are a beginner, I would recommend you just buy an ARF (Almost Ready to Fly) kit from HobbyKing or Amazon. An example is the DJI F450 kit with the E300/E305 tuned propulsion system. The motors and propellers are already tuned so you do not need to balance them, and the motors that come with either of these kits are designed to work seamlessly with the ESCs they come with. These also work with your KK flight controller. The recommended takeoff weight for this kit is 1.2 kg, which is enough for most applications. I've also seen people doing their thesis projects using this kit, including me.</p> <p>If you want to pursue building your own (which I think you shouldn't do), you should first take into consideration the weight you are expecting. </p> <ol> <li><p>Get the weight of your frame and all the peripherals you would mount, such as the flight controller, the camera and the frame that you are going to use. Take into consideration the maximum weight capacity of your frame while doing this. You should also leave allowances for the addition of motors, propellers, ESCs and the battery. </p></li> <li><p>Once you've got the weight, it's time to select motors. You should select motors based on their recommended takeoff weight (you can check this in their specs). If the weight you've calculated plus the weight of the motors you've selected fits into the recommended takeoff weight, then you're good. Take note I said recommended takeoff weight, not maximum weight/thrust (see the small calculation sketch after this list). </p></li> <li><p>I'm not that familiar when it comes to selecting propellers though. You can check this link instead. He gives a better explanation: <a href="http://blog.oscarliang.net/how-to-choose-motor-and-propeller-for-quadcopter/" rel="nofollow">http://blog.oscarliang.net/how-to-choose-motor-and-propeller-for-quadcopter/</a> </p></li> <li><p>When selecting batteries you can use this calculator to check your estimated flight time: <a href="http://multicopter.forestblue.nl/lipo_need_calculator.html" rel="nofollow">http://multicopter.forestblue.nl/lipo_need_calculator.html</a></p></li> </ol>
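<p>A small back-of-the-envelope sketch of the bookkeeping in steps 1 and 2 (every number below is a placeholder; substitute the weights from your actual part list and the thrust table of the motors you are considering):</p> <pre><code># Rough all-up-weight / thrust check (all values are placeholders, in grams).
frame_g       = 280          # frame
electronics_g = 150          # flight controller, receiver, wiring, ...
battery_g     = 190          # 2200 mAh 3S pack
motors_g      = 4 * 55       # four motors including propellers
esc_g         = 110          # ESC(s)

all_up_weight_g    = frame_g + electronics_g + battery_g + motors_g + esc_g
required_thrust_g  = 2 * all_up_weight_g   # common rule of thumb: ~2:1 thrust-to-weight
thrust_per_motor_g = required_thrust_g / 4

print(all_up_weight_g, thrust_per_motor_g)  # compare against the motor's thrust data
</code></pre>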
8988
2016-01-25T18:41:50.633
|quadcopter|motor|
<p>Before I start, I am 13 years old, and I would like to apologise because I am a beginner to all this and I wouldn't really understand any professional talk; I am only doing this for a hobby.</p> <p>I am building a quadcopter,</p> <ul> <li>Flight controller: KK 1.2.5</li> <li>ESC: Q Brain 25amp</li> <li>Frame: KK 260 / FPV 260</li> <li>Frame Addon: KK/FPV 250 Long Frame Upgrade Kit</li> <li>Tx &amp; Rx: HobbyKing 6ch tx rx (Mode 2)</li> <li>Battery: Turnigy Nano-Tech 2200 mAh 3S</li> </ul> <p>I am not sure about what motor and propellers I should use. All I know is that for the frame the motor mounts are 16 mm to 19 mm with M3 screws. I am not sure what 1806 and 2208 mean.</p> <p>Here are my questions:</p> <ol> <li>What calculations should I do to find out how much thrust the quad needs to produce, plus any other useful calculations?</li> <li>Using the calculations, what would be the best and CHEAPEST motors I could have?</li> <li>And finally, what propeller would be best suited for the motor?</li> </ol> <p>P.S.: I am also looking for durable and really cheap motors. I live in London, so shipping might be a problem if there is an immense bill.</p> <p>Thanks a lot for your time, Sid</p>
Building a quadcopter, what motors, props and what are the calculations?
<p>Besides the answers posted above, you can use Matlab to acquire <strong>H</strong>. </p> <pre><code>syms my mx x y r p a

r = sqrt( (my-y)^2 + (mx-x)^2 );    % range equation (r)
p = atan( (my-y)/(mx-x) ) - a;      % bearing equation (\phi)

input  = [x y a];
output = [r p];

H = jacobian(output, input)
</code></pre> <p>The result is </p> <pre><code>H =
[ -(2*mx - 2*x)/(2*((mx - x)^2 + (my - y)^2)^(1/2)), -(2*my - 2*y)/(2*((mx - x)^2 + (my - y)^2)^(1/2)),  0]
[   (my - y)/((mx - x)^2*((my - y)^2/(mx - x)^2 + 1)),      -1/((mx - x)*((my - y)^2/(mx - x)^2 + 1)),  -1]
</code></pre>
8992
2016-01-26T07:10:31.750
|slam|
<p>I am reading SLAM for Dummies, which you can find on Google, or at this link: <a href="http://ocw.mit.edu/courses/aeronautics-and-astronautics/16-412j-cognitive-robotics-spring-2005/projects/1aslam_blas_repo.pdf" rel="nofollow">SLAM for Dummies - A Tutorial Approach to Simultaneous Localization and Mapping</a>.</p> <p>They do some differentiation of matrices on page 33 and I am getting different answers for the resulting Jacobian matrices. </p> <p>The paper derived </p> <p>$$ \left[ {\begin{array}{c} \sqrt{(\lambda_x - x)^2 + (\lambda_y - y)^2} + v_r \\ \tan^{-1}\left(\frac{\lambda_y - y}{\lambda_y - x}\right) - \theta + v_\theta \end{array}} \right] $$</p> <p>and got $$ \left[ {\begin{array}{ccc} \frac{x - \lambda_y}{r},&amp; \frac{y - \lambda_y}{r},&amp; 0\\ \frac{\lambda_y - y}{r^2},&amp; \frac{\lambda_y - x}{r^2},&amp; -1 \end{array}} \right] $$</p> <p>I don't get where the $r$ came from. I got completely different answers. Does anybody know what the $r$ stands for? If not, is there a different way to represent the Jacobian of this matrix?</p>
In the SLAM for Dummies, why are there extra variables in the Jacobian Matricies?
<p>Yes there is, and the principle is the same as in wheeled vehicles. There is a part in a vehicle chassis called the differential, which transmits the raw power from the engine to the axle. Tanks and other tracked vehicles use a similar system to utilize a single engine output for both sides.</p> <p>Although going forward is pretty simple, a major problem still persists: since tanks don't have a separate mechanism for turning, they use two different methods as a solution. </p> <p>The first one is a basic solution: there are two separate brake clutches, one for each track. When one of them is applied, the corresponding track slows down and the vehicle turns in that direction by dragging the tracks.</p> <p>The second one is called neutral steering and is not that common. It was used in tanks such as the German Panther. This system has a hybrid gearbox that can drive the two tracks at different speeds, and even in different directions. Vehicles with this mechanism can do a stationary 360-degree turn. Neutral steering looks elegant, but implementing it in a robotic platform requires deep mechanical engineering knowledge, as this kind of steering system is very complex.</p> <p>More information about tracked vehicle steering can be found here: <a href="http://www.gizmology.net/tracked.htm" rel="nofollow">Tracked Vehicle Steering</a></p>
8994
2016-01-26T08:14:29.397
|tracks|gearing|chassis|
<p>I see that in small robots tracked chassis is implemented with 2 motors, each powering one side of the vehicle, like this: <a href="https://i.stack.imgur.com/DLfvd.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DLfvd.jpg" alt="enter image description here"></a></p> <p>(image stolen from here)</p> <p>But in real scale tanks I assume there is only one motor so there must be some way of applying power to both sides independently.</p>
How to implement transmission in tracked chassis with one motor?
<p>All PID controllers act on an <em>error</em> signal, so you definitely should stick with your first implementation. The derivative term in the PID controller wants to "see" how the error is changing - if the error is getting bigger, then the derivative term gets larger and "kicks in" a higher contribution. If the error is getting smaller, then the derivative term backs off. </p> <p>Your problem description is vague; something about "pushing" the quadcopter and damping and feeling something. </p> <p>Engineers don't have feelings, they have data. </p> <p>For the moment, with no data, all I can tell you is to reuse your first implementation and double check how you are acquiring your <code>deltaTime2</code> information. Typically, in industrial controls, the derivative term is left off of the controller and only a PI controller is implemented. This is because any bit of temporal sampling problems (not taking the sample on time or mis-judging the sampling rate) can cause <em>huge</em> problems with the derivative term.</p> <p>Imagine your sampling rate is supposed to be 0.005 seconds (200Hz). If instead of that sampling rate, you mis-judged the sample rate to be 0.006 seconds for <em>one sample</em>, then the following happens:</p> <ol> <li>No change to proportional error because it doesn't interact with the sampling rate. </li> <li>Integral error multiplies proportional error by this sampling rate <em>and then adds it to all of the previous integral error samples</em>, so overall it's a pretty small impact. </li> <li>Derivative error <em>divides</em> proportional error by this sampling rate <em>with no other means of context</em>. </li> </ol> <p>So, conceivably, you could have something like:</p> <pre><code>prevIntError = 1.8; prevDerError = 1.1 PropError = 1; IntError = prevIntError + 1*0.006; derError = (1 - 1.1)/0.006; </code></pre> <p>So where the integral error is now 1.806 instead of 1.805 (an error of 0.05%), your derivative error is -16.6 instead of what it should be, (1 - 1.1)/0.005, which is -20. That means that error in the proportional error signal is 0, the error in the integral error signal is 0.05%, and the error in the derivative error signal is 17%! </p> <p>As your sampling rate increases, your sampling period goes down. Errors in timing can begin to become significant after a point. For example, at 10Hz, you are sampling every 0.1s. If your timing accuracy is +/- 0.001s, then this means that your time sampling error is +/- 1%. If you push to 200Hz (0.005s), and your timing accuracy is still +/- 0.001s, then suddenly your time sampling error is +/- 20%! </p> <p>So, in summary:</p> <ol> <li>Post data, not feelings. </li> <li>Revert to your first implementation.</li> <li>Check how you acquire <code>deltaTime2</code> and report back. </li> </ol> <p>Please update your question to include the answers to 1 and 3 (don't post it as a comment to this answer please). </p> <p>:EDIT:</p> <p>I would move your timestamp evaluation to one location in your code, and I would put that location immediately before you use it. At the moment, you don't update the sampling time until <em>after</em> you've done your derivative/integral calculations. This is even more problematic given the fact that you are performing communications in the same sampling window. </p> <p>Say your first sweep, you read some data, do your calculations, and that sample time is 5ms. Then, on the second sweep, something happens to the communications (no data, data error, IMU reset, etc.). 
The time it takes to get back to the derivative/integral calculations <em>won't</em> be 5ms, it'll be 1ms or 10ms or something similar. So you use the wrong sample time for that sample, but then your sample time gets updated <em>after the fact</em> to reflect that sweep took the 10ms or whatever, then that gets used on the following sample, which again may not be correct. </p> <p>I would revise your code to the following:</p> <pre><code>/* Initialize I2c */ /* Open Files for data logging */ deltaTimeInit=(float)getTickCount(); //&lt;--- Placeholder value for first pass while(1){ /* Get IMU data */ deltaTime2=((float)getTickCount()-deltaTimeInit)/(((float)getTickFrequency())); //&lt;--- Get the time that elapsed to NOW deltaTimeInit=(float)getTickCount(); //&lt;--- Start the timer for the next loop /* Filter using Complementary Filter */ /* Compute Errors for PID */ //&lt;--- Use the actual time elapsed for the calculations /* Update PWM's */ /* (more stuff) */ } </code></pre> <p>Basically, you care about the time that has elapsed from when you get data to the next time that you get data, and you want to act on current information. Your sample time should be calculated before you use it, on the current sweep. </p>
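<p>If you want to sanity-check those percentages yourself, here is a quick numeric illustration in plain Python, using the same example numbers as above; it is only meant to show how much harder a one-sample timing error hits the derivative term than the integral term:</p> <pre><code>prev_int_err = 1.8
prev_meas    = 1.1
prop_err     = 1.0

def terms(dt):
    int_err = prev_int_err + prop_err * dt    # integral accumulates error * dt
    der_err = (prop_err - prev_meas) / dt     # derivative divides by dt
    return int_err, der_err

int_ok,  der_ok  = terms(0.005)   # the sample period you think you have
int_bad, der_bad = terms(0.006)   # the period you actually had

print("integral term error:   %.2f%%" % (100 * abs(int_bad - int_ok) / abs(int_ok)))
print("derivative term error: %.2f%%" % (100 * abs(der_bad - der_ok) / abs(der_ok)))
</code></pre>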
8998
2016-01-26T14:50:25.613
|control|quadcopter|pid|raspberry-pi|stability|
<p>Good day,</p> <p>I am currently implementing a single loop PID controller using angle setpoints as inputs. I was trying out a different approach for the D part of the PID controller. </p> <p>What bought this about is that when I was able to reach a 200Hz (0.00419ms) loop rate, when adding a D gain, the quadcopter seems to dampen the movements in a non continous manner. This was not the case when my algorithm was running at around 10Hz. At an angle set point of 0 degrees, I would try to push it to one side by 5 degrees then the quad would try to stay rock solid by resisting the movements but lets go after while enabling me to get it of by 2 degrees (the dampening effect weakens over time) then tries to dampen the motion again.</p> <p>This is my implementation of the traditional PID:</p> <blockquote> <p>Derivative on Error:</p> </blockquote> <pre><code>//Calculate Orientation Error (current - target) float pitchError = pitchAngleCF - pitchTarget; pitchErrorSum += (pitchError*deltaTime2); float pitchErrorDiff = pitchError - pitchPrevError; pitchPrevError = pitchError; float rollError = rollAngleCF - rollTarget; rollErrorSum += (rollError*deltaTime2); float rollErrorDiff = rollError - rollPrevError; rollPrevError = rollError; float yawError = yawAngleCF - yawTarget; yawErrorSum += (yawError*deltaTime2); float yawErrorDiff = yawError - yawPrevError; yawPrevError = yawError; //PID controller list float pitchPID = pitchKp*pitchError + pitchKi*pitchErrorSum + pitchKd*pitchErrorDiff/deltaTime2; float rollPID = rollKp*rollError + rollKi*rollErrorSum + rollKd*rollErrorDiff/deltaTime2; float yawPID = yawKp*yawError + yawKi*yawErrorSum + yawKd*yawErrorDiff/deltaTime2; //Motor Control - Mixing //Motor Front Left (1) float motorPwm1 = -pitchPID + rollPID - yawPID + baseThrottle + baseCompensation; </code></pre> <p>What I tried to do now is to implement a derivative on measurement method from this article to remove derivative output spikes. However the Derivative part seems to increase the corrective force than dampen it.</p> <blockquote> <p>Derivative on Measurement:</p> </blockquote> <pre><code>//Calculate Orientation Error (current - target) float pitchError = pitchAngleCF - pitchTarget; pitchErrorSum += (pitchError*deltaTime2); float pitchErrorDiff = pitchAngleCF - pitchPrevAngleCF; // &lt;---- pitchPrevAngleCF = pitchAngleCF; float rollError = rollAngleCF - rollTarget; rollErrorSum += (rollError*deltaTime2); float rollErrorDiff = rollAngleCF - rollPrevAngleCF; // &lt;---- rollPrevAngleCF = rollAngleCF; float yawError = yawAngleCF - yawTarget; yawErrorSum += (yawError*deltaTime2); float yawErrorDiff = yawAngleCF - yawPrevAngleCF; // &lt;---- yawPrevAngleCF = yawAngleCF; //PID controller list // &lt;---- The D terms are now negative float pitchPID = pitchKp*pitchError + pitchKi*pitchErrorSum - pitchKd*pitchErrorDiff/deltaTime2; float rollPID = rollKp*rollError + rollKi*rollErrorSum - rollKd*rollErrorDiff/deltaTime2; float yawPID = yawKp*yawError + yawKi*yawErrorSum - yawKd*yawErrorDiff/deltaTime2; //Motor Control - Mixing //Motor Front Left (1) float motorPwm1 = -pitchPID + rollPID - yawPID + baseThrottle + baseCompensation; </code></pre> <p>My question now is:</p> <blockquote> <p>Is there something wrong with my implementation of the second method? 
</p> </blockquote> <p>Source: <a href="http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-derivative-kick/" rel="nofollow noreferrer">http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-derivative-kick/</a></p> <p>The way I've obtained the change in time or DT is by taking the timestamp from the start of the loop then taking the next time stamp at the end of the loop. Their difference is obtained to obtain the DT. getTickCount() is an OpenCV function.</p> <pre><code>/* Initialize I2c */ /* Open Files for data logging */ while(1){ deltaTimeInit=(float)getTickCount(); /* Get IMU data */ /* Filter using Complementary Filter */ /* Compute Errors for PID */ /* Update PWM's */ //Terminate Program after 40 seconds if((((float)getTickCount()-startTime)/(((float)getTickFrequency())))&gt;20){ float stopTime=((float)getTickCount()-startTime)/((float)getTickFrequency()); gpioPWM(24,0); //1 gpioPWM(17,0); //2 gpioPWM(22,0); //3 gpioPWM(18,0); //4 gpioTerminate(); int i=0; for (i=0 ; i &lt; arrPitchCF.size(); i++){ file8 &lt;&lt; arrPitchCF.at(i) &lt;&lt; endl; } for (i=0 ; i &lt; arrYawCF.size(); i++){ file9 &lt;&lt; arrYawCF.at(i) &lt;&lt; endl; } for (i=0 ; i &lt; arrRollCF.size(); i++){ file10 &lt;&lt; arrRollCF.at(i) &lt;&lt; endl; } for (i=0 ; i &lt; arrPitchAccel.size(); i++){ file2 &lt;&lt; arrPitchAccel.at(i) &lt;&lt; endl; } for (i=0 ; i &lt; arrYawAccel.size(); i++){ file3 &lt;&lt; arrYawAccel.at(i) &lt;&lt; endl; } for (i=0 ; i &lt; arrRollAccel.size(); i++){ file4 &lt;&lt; arrRollAccel.at(i) &lt;&lt; endl; } for (i=0 ; i &lt; arrPitchGyro.size(); i++){ file5 &lt;&lt; arrPitchGyro.at(i) &lt;&lt; endl; } for (i=0 ; i &lt; arrYawGyro.size(); i++){ file6 &lt;&lt; arrYawGyro.at(i) &lt;&lt; endl; } for (i=0 ; i &lt; arrRollGyro.size(); i++){ file7 &lt;&lt; arrRollGyro.at(i) &lt;&lt; endl; } for (i=0 ; i &lt; arrPWM1.size(); i++){ file11 &lt;&lt; arrPWM1.at(i) &lt;&lt; endl; } for (i=0 ; i &lt; arrPWM2.size(); i++){ file12 &lt;&lt; arrPWM2.at(i) &lt;&lt; endl; } for (i=0 ; i &lt; arrPWM3.size(); i++){ file13 &lt;&lt; arrPWM3.at(i) &lt;&lt; endl; } for (i=0 ; i &lt; arrPWM4.size(); i++){ file14 &lt;&lt; arrPWM4.at(i) &lt;&lt; endl; } for (i=0 ; i &lt; arrPerr.size(); i++){ file15 &lt;&lt; arrPerr.at(i) &lt;&lt; endl; } for (i=0 ; i &lt; arrDerr.size(); i++){ file16 &lt;&lt; arrDerr.at(i) &lt;&lt; endl; } file2.close(); file3.close(); file4.close(); file5.close(); file6.close(); file7.close(); file8.close(); file9.close(); file10.close(); file11.close(); file12.close(); file13.close(); file14.close(); file15.close(); file16.close(); cout &lt;&lt; " Time Elapsed = " &lt;&lt; stopTime &lt;&lt; endl; break; } while((((float)getTickCount()-deltaTimeInit)/(((float)getTickFrequency())))&lt;=0.00419){ //0.00209715|0.00419 cout &lt;&lt; " DT end = " &lt;&lt; deltaTime2 &lt;&lt; endl; deltaTime2=((float)getTickCount()-deltaTimeInit)/(((float)getTickFrequency())); } cout &lt;&lt; " DT end = " &lt;&lt; deltaTime2 &lt;&lt; endl; } </code></pre> <p>Here's my data:</p> <p><a href="https://i.stack.imgur.com/H9O7s.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H9O7s.jpg" alt="Derivative on Error without D gain"></a></p> <p><a href="https://i.stack.imgur.com/flipn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/flipn.jpg" alt="Derivative on Error with D gain"></a></p> <p><a href="https://i.stack.imgur.com/B8x2r.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B8x2r.jpg" alt="Derivative on Measurement with D 
gain"></a></p>
Quadcopter PID Controller: Derivative on Measurement / Removing the Derivative Kick
<p>I faced the same problem: I used DC motors and an L293D motor driver. Like everyone else said, you need to use a sensor. I used the MPU6050 gyroscope, which is cheap and somewhat inaccurate but good enough for driving straight. Implementing a sensor can be <strong>cheap and simple</strong>, as shown in the guide below (it uses an Arduino, but the same concept can be used on a Raspberry Pi):</p> <p><a href="https://www.instructables.com/How-to-Make-a-Robot-Car-Drive-Straight-and-Turn-Ex/" rel="nofollow noreferrer">https://www.instructables.com/How-to-Make-a-Robot-Car-Drive-Straight-and-Turn-Ex/</a></p> <p>I have also tried adjusting a fixed PWM offset for each motor, which does NOT work, since the surface/ground the car runs on changes.</p>
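<p>The core of that approach is a heading-hold loop: read the heading from the gyro and trim the two PWM duty cycles in opposite directions. Below is a minimal Python sketch of the idea for a Raspberry Pi; the pin numbers, base duty cycle, gain and the <code>read_heading()</code> helper are all placeholders you would replace with your own wiring, tuning and MPU6050 driver (the sign of the correction also depends on how the sensor is mounted).</p> <pre><code>import time
import RPi.GPIO as GPIO

LEFT_PIN, RIGHT_PIN = 18, 19   # hypothetical PWM pins - change to match your wiring
BASE_DUTY = 90.0               # base speed (duty cycle %)
KP = 2.0                       # proportional gain - needs tuning

def read_heading():
    """Placeholder: return the current heading in degrees from your MPU6050 driver."""
    raise NotImplementedError("hook up your IMU driver here")

GPIO.setmode(GPIO.BCM)
GPIO.setup(LEFT_PIN, GPIO.OUT)
GPIO.setup(RIGHT_PIN, GPIO.OUT)
left, right = GPIO.PWM(LEFT_PIN, 100), GPIO.PWM(RIGHT_PIN, 100)
left.start(BASE_DUTY)
right.start(BASE_DUTY)

target = read_heading()        # heading you want to hold

while True:
    error = read_heading() - target
    correction = KP * error    # sign depends on your sensor orientation
    # slow one side down and speed the other up to steer back onto the heading
    left.ChangeDutyCycle(max(0.0, min(100.0, BASE_DUTY - correction)))
    right.ChangeDutyCycle(max(0.0, min(100.0, BASE_DUTY + correction)))
    time.sleep(0.02)
</code></pre>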
9001
2016-01-26T17:56:10.027
|control|motor|wheeled-robot|raspberry-pi|
<p>I am using <strong>2 identical</strong> DC motors and a castor wheel. The motors are connected to an <strong>L293D motor driver</strong> and are controlled by an <strong>RPi</strong>. </p> <p>The robot is not going straight. It veers off to the right. I am running both motors at <strong>100% PWM</strong>.</p> <p>What I tried to correct the error:</p> <ol> <li>I adjusted the PWM of the faster wheel down to <strong>99%</strong>, but the robot just turns to the other side; </li> <li>I adjusted the weight on the robot, and the problem still persists.</li> </ol> <p>I once tried to run the motors without any load. Could that be the cause, as I was later told that running a DC motor without any load damages it?</p> <p>If that is not the cause, then please tell me how to solve this problem without using any <strong>sensors</strong> to control it. </p>
Robot never goes straight
<p>This depends on what you mean by "cost". If you've got your own (or access to a) 3D printer and you're just paying for raw material, then you can calculate the raw material cost per unit volume (cubic inches, cubic centimeters, whatever your preferred units are), determine the volume of the part to be printed, and multiply the two together. </p> <p>Typically the CAD software used to create the 3D model will have a button that gives you the volume of the part. </p> <p>Sometimes, as in selective laser sintering (SLS), the high temperatures involved degrade the <em>unused</em> nylon powder. This means that some percentage of the powder needs to be exchanged after some number of builds, and you'll wind up paying a higher cost than strictly the cost per unit volume of the powder (to pay for the replacement of the unused powder). </p> <p>If you don't have access to the printer, then you'll wind up paying the same costs mentioned above, plus whatever their going rate is for time on the machine, time to set up the part, time to inspect/depowder/clean the part, shipping, and markup. </p> <p>If you're interested in getting someone else to 3D print the part (and give you an instant quote), try a site like <a href="http://www.shapeways.com/" rel="nofollow">Shapeways</a>.</p>
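<p>As a rough sketch of that arithmetic for a home FDM printer (the volume, density and filament price below are assumptions for illustration, not quotes):</p> <pre><code># Rough material-cost estimate for a small FDM print such as a Raspberry Pi case.
part_volume_cm3   = 60.0    # read this from your CAD package's volume readout (assumed value)
pla_density_g_cm3 = 1.24    # typical PLA density
filament_price_kg = 25.0    # assumed price per kg of filament

mass_g = part_volume_cm3 * pla_density_g_cm3
cost   = mass_g / 1000.0 * filament_price_kg
print("about %.0f g of material, roughly %.2f in material cost" % (mass_g, cost))
</code></pre>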
9025
2016-01-28T10:34:46.153
|3d-printing|
<p>Can someone please share the typical cost of material to 3D print an object like a raspberry pi case? Thank you.</p>
Cost of material to 3D print
<p>Like Andy said, a Harmonic drive seems to be the most likely answer, but your final question is very vague. The other option is that it is a direct-drive system (with a clutch to protect the motor) using a very high-torque motor. The joints certainly seem to have the diameter for it, as long as you aren't trying to lift something too heavy.</p> <p>As for your last question: if you must avoid shafts while trying to achieve rotational motion, you're probably going to have to use a pneumatic/hydraulic system. Most companies avoid this - I know of only two that use hydraulics for rotational motion - because leaks will be the bane of your existence. The other option is to start playing with some fairly advanced materials (say, electro-reactive ones that change shape with electrical current).</p>
9033
2016-01-29T01:49:47.067
|mechanism|joint|robotic-arm|
<p>Hello, I'm trying to figure out how modular arm joints are designed and what kind of bearings/shafts are used in a modular-type robotic arm. Take the "UR arm" for example. I believe those 'T-shaped pipes' include both a drive and a bearing system, and as you can see from the second image, a joint can be detached easily. So I think it's not just a simple "motor shaft connecting to the member that we want to rotate" mechanism. I'm wondering which type of mechanism and bearing system is inside those T-shaped pipes. How can I transfer rotational motion to a member without using shafts? </p> <p><a href="https://i.stack.imgur.com/aJTMs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aJTMs.png" alt="UR3 Robotic Arm"></a></p> <p><a href="https://i.stack.imgur.com/LiRoD.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LiRoD.jpg" alt="UR10 Robotic Arm Joint"></a></p>
How modular arm joints work
<p>Multiple robots and environments can be loaded. Simply load them and position the robots. Here is a Python example:</p> <pre><code>#! /usr/bin/env python from openravepy import * env = Environment() env.SetViewer('qtcoin') env.Load("pr2-beta-sim.robot.xml") env.Load("puma.robot.xml") pr2 = env.GetRobot('pr2') puma = env.GetRobot('PumaGripper') puma.SetTransform([1,0,0,0,0,1,0]) </code></pre> <p>Run this with something like: <code>ipython -i my2robots.py</code></p> <p>For an example in C++, see <a href="http://openrave.org/docs/latest_stable/coreapihtml/orloadviewer_8cpp-example.html" rel="nofollow">this example</a>. </p>
9038
2016-01-29T23:58:33.287
|robotic-arm|motion-planning|
<p>I have a PR2 robot in an environment, which can be seen on the GUI of OpenRAVE. </p> <p>Now, how can I load a <a href="https://en.wikipedia.org/wiki/Programmable_Universal_Machine_for_Assembly" rel="nofollow">PUMA</a> robot arm in the same environment?</p>
How to load a PUMA robot in the existing environment in OpenRAVE 0.9
<p>Since the bevel gear itself is made of plastic, you don't need to put a lot of thought into the shaft. <br> Basically you are talking about the relationship between a hole and a shaft. This relationship is called a fit. Usually you define a fit by assigning specific tolerance fields and tolerance grades to your nominal size.<br> In this case you have a cheap plastic part, to which you can hardly apply any special tolerances. This comes from the nature of such plastics: they are quite flexible, can't withstand large forces, and are very hard to machine (I have never seen a broach made for plastics). Therefore most plastic parts like this are made by injection moulding (which is most likely true for your parts). <br> So let's end this excursion into design theory and come back to your question. Just buy a cheap 6mm metal pin (in most cases it will be 6h7). If you measure the pin, you will see it is somewhere between 5.97 and 6.00mm. Create a flat spot (use a file or an electric drill) to prevent twisting between the gear and the shaft, and you are done.</p>
9040
2016-01-30T04:15:12.857
|mechanism|torque|gearing|
<p>I have a small POM bevel gear with these dimensions:</p> <p><a href="https://i.stack.imgur.com/ARMjQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ARMjQ.png" alt="enter image description here"></a></p> <p>It has a 6mm hole for the shaft and a M4 hole for the set screw.</p> <p>Suppose this bevel gear is meshed with a 45T bevel gear and give a max. output torque of 0.4kg/cm. How should the design of the 6mm shaft be? Should the diameter be precisely 6mm? Should it be flattened into a 'D' shape (so that the set screw can hold the shaft)? I'm planning to use a metal shaft.</p> <p>Any help will be appreciated.</p> <p>Thanks</p>
Metal shaft design for a 6mm plastic bevel gear
<p>Yes, it will work! The electrical interface is the same between the Create 2 and the 700 series Roomba; however, the position of the mini-DIN is changed. Instead of being to the right, under the top cover, the connector can be found under a rubber gasket under the handle. As long as any Create cable can physically fit in that location (and the one iRobot sells can), it will work the same as it does on a Create. <p><sup> Please note that I am an iRobot employee, but the postings on this site are my own and don't necessarily represent iRobot's positions, strategies, or opinions. </sup></p></p>
9044
2016-01-30T17:57:32.447
|ros|irobot-create|roomba|
<p>For the last few months I have been playing with ROS on an nVidia Jetson TK1 development board. Up until this point, it has mostly been playing with the GPIO header, an Arduino Uno, a couple physical contact sensors, and a few custom motor and servo boards that I slapped together. But lately I've been eyeing an old 700 series Roomba that has been gathering dust (was replaced by an 800 series).</p> <p>Does anyone know if the Communication Cable for Create 2 will work with a 700 series Roomba?</p> <p>I know there are DIY designs out there, but I have always been a fan of using off-the-shelf components if they exist - you rarely save more money than your time is worth if it is something like a cable or similar component. So if the Create 2 cable will work, I'll use that. If not, I'll see what I can do to make my own.</p>
Create 2 Cable with 700 Series Roomba
<p>There are a few tests that will help in this sort of situation. Try either of these.</p> <p>Test 1. Get a logic probe (a simple low-cost device that will serve you for many years!) and use it <strong>carefully</strong> on the L293D. Test each of the two Enable pins and each of the four Input pins, and make sure each of those pins reads either High or Low as you would expect. This is the most certain way to find a poor connection or incorrect wires.</p> <p>(Note about the logic probe: always make sure the probe tip is placed centrally and firmly on top of the IC pin you are testing; if you slip sideways and "short" the probe tip between two pins, you could damage an output pin on the Raspberry Pi, depending on the situation.)</p> <p>Test 2. First disconnect the Raspberry Pi completely and put it aside. Take the wires that go to the motor controller inputs, and physically connect each of these to power, either zero volts or the VCC connection. You are effectively sending the signals by hand instead of by software. If the motor now works as expected, then perhaps your Raspberry Pi has a damaged output pin. If not, either you have misunderstood the connections or there is a problem with the driver.</p> <p>(As you probably don't have a logic probe, I suggest you at least try test 2 in the first instance. That alone will probably help.)</p>
9048
2016-01-30T20:11:56.457
|wheeled-robot|raspberry-pi|
<p>My small robot has two motors controlled by an L293D and that is controlled via a Raspberry Pi. They will both go forwards but only one will go backwards. </p> <p>I've tried different motors and tried different sockets in the breadboard, no luck. Either the L293D's chip is broken (but then it wouldn't go forwards) or I've wired it wrong. </p> <p>I followed the tutorial, <a href="http://computers.tutsplus.com/tutorials/controlling-dc-motors-using-python-with-a-raspberry-pi--cms-20051" rel="nofollow noreferrer">Controlling DC Motors Using Python With a Raspberry Pi</a>, exactly.</p> <p>Here is a run down of what works. Let the 2 motors be A and B:</p> <p>When I use a python script (see end of post) both motors go "forwards". When I change the values in the Python script, so the pin set to HIGH and the pin set to LOW are swapped, motor A will go "backwards", this is expected. However, motor B will not move at all. </p> <p>If I then swap both motors' wiring then the original python script will make both go backwards but swapping the pins in the code will make motor A go forwards but motor B won't move.</p> <p>So basically, motor A will go forwards or backwards depending on the python code but motor B can only be changed by physically changing the wires.</p> <p>This is <code>forwards.py</code></p> <pre><code>import RPi.GPIO as GPIO from time import sleep GPIO.setmode(GPIO.BOARD) Motor2A = 23 Motor2B = 21 Motor2E = 19 Motor1A = 18 Motor1B = 16 Motor1E = 22 GPIO.setup(Motor1A, GPIO.OUT) GPIO.setup(Motor1B, GPIO.OUT) GPIO.setup(Motor1E, GPIO.OUT) GPIO.setup(Motor2A, GPIO.OUT) GPIO.setup(Motor2B, GPIO.OUT) GPIO.setup(Motor2E, GPIO.OUT) print("ON") GPIO.output(Motor1A, GPIO.HIGH) GPIO.output(Motor1B, GPIO.LOW) GPIO.output(Motor1E, GPIO.HIGH) GPIO.output(Motor2A, GPIO.HIGH) GPIO.output(Motor2B, GPIO.LOW) GPIO.output(Motor2E, GPIO.HIGH) </code></pre> <p>And this is <code>backwards.py</code></p> <pre><code>import RPi.GPIO as GPIO from time import sleep GPIO.setmode(GPIO.BOARD) Motor2A = 21 Motor2B = 23 Motor2E = 19 Motor1A = 16 Motor1B = 18 Motor1E = 22 GPIO.setup(Motor1A, GPIO.OUT) GPIO.setup(Motor1B, GPIO.OUT) GPIO.setup(Motor1E, GPIO.OUT) GPIO.setup(Motor2A, GPIO.OUT) GPIO.setup(Motor2B, GPIO.OUT) GPIO.setup(Motor2E, GPIO.OUT) print("ON") GPIO.output(Motor1A, GPIO.HIGH) GPIO.output(Motor1B, GPIO.LOW) GPIO.output(Motor1E, GPIO.HIGH) GPIO.output(Motor2A, GPIO.HIGH) GPIO.output(Motor2B, GPIO.LOW) GPIO.output(Motor2E, GPIO.HIGH) </code></pre> <p>If you see this diff <a href="https://www.diffchecker.com/skmx6084" rel="nofollow noreferrer">https://www.diffchecker.com/skmx6084</a>, you can see the difference:</p> <blockquote> <p><a href="https://i.stack.imgur.com/q2YDS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q2YDS.png" alt="Screenshot of code diff"></a></p> </blockquote> <p>Below are some pictures. You can use the colour of the cables to link them between pictures</p> <p><a href="https://i.stack.imgur.com/Ke9VG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ke9VG.jpg" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/W9gbw.jpg" rel="nofollow noreferrer">enter image description here</a> <a href="https://i.stack.imgur.com/VcWml.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcWml.jpg" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/Oq9ho.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Oq9ho.jpg" alt="enter image description here"></a></p>
L293D won't turn motor backwards
<p>As <a href="https://robotics.stackexchange.com/a/9061/37">SteveO suggested</a>, it is clear from the exploded diagram that a bearing is part of each <em>Tail Gear Case Plate</em>, and the <em>Tail Output Shaft Stopper</em> is locked to the output shaft using the set screws, which sit in the D grooves in the shaft.</p> <p>The Washer and Poly Slider precisely set the distance between the left bearing and the mesh gear (actually a <a href="https://en.wikipedia.org/wiki/Gear#Bevel" rel="nofollow noreferrer">mitre gear</a>), while stopping the shaft from sliding up/left on the exploded diagram and disengaging the leader and follower.</p> <p>The purpose of the Tail Output Shaft Stopper is to stop the shaft being pulled in the opposite direction, which is the direction the blades are going to be causing the shaft to be pulled when rotating.</p> <p>With the stopper in place, the sideways force would be applied from the shaft to the bearing, through the Stopper, with very little force applied to the mesh gear.</p> <p>Without the Tail Output Shaft Stopper all of the sideways force would be applied via the mesh gear, potentially wearing it out sooner, and possibly even causing it to bind and stop rotating altogether.</p> <p>This sort of arrangement is fairly common in <em>robotics</em> application.</p>
9057
2016-01-31T18:17:54.100
|mechanism|torque|gearing|
<p>I'm looking at the assembly of a tail rotor that should look like this</p> <p><a href="https://i.stack.imgur.com/UuQ5b.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UuQ5b.jpg" alt="enter image description here"></a> <sup><a href="https://i.stack.imgur.com/8ElkZ.jpg" rel="nofollow noreferrer">original image</a></sup></p> <p>I wonder if the "tail output shaft stopper" (circled in red) is meant to be a bearing or just a piece of metal stopper:</p> <p><a href="https://i.stack.imgur.com/ZH64j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZH64j.png" alt="enter image description here"></a></p> <p>My reading is that, since it's held by 2 set screws, the whole part should rotate with the rod. While rotating, it'd rub against the bevel gear on the tail drive though. Am I missing something?</p>
Is this supposed to be a bearing?
<p>Basically you can utilize one global frame for both robots since each robot runs its own SLAM. SLAM provides an estimate of a robot's pose (i.e. location and orientation). If you unify the global frame for both SLAMs, then you can determine the poses of both robots in that common frame. I've drawn a picture to illustrate my approach. </p> <p><a href="https://i.stack.imgur.com/u8S9A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u8S9A.png" alt="enter image description here"></a></p> <p>As you can see from the picture, the two robots are placed in one global frame, and there is one landmark that both can observe once they start running. Each SLAM will use this landmark to estimate its own robot's pose in the shared frame; since the robots can communicate with each other, each robot can therefore also obtain an estimate of the other robot's pose. </p>
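<p>Once both poses are expressed in that shared global frame, each robot can compute where the other one is relative to itself with a single frame transformation. A minimal 2D sketch (the numbers are made up; in practice the poses come from each robot's SLAM estimate, exchanged over wifi):</p> <pre><code>import numpy as np

def pose_to_matrix(x, y, theta):
    """Homogeneous transform of a 2D pose expressed in the global frame."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Poses in the shared global frame (example values only)
T_A = pose_to_matrix(1.0, 2.0, np.deg2rad(30))    # robot A's SLAM estimate
T_B = pose_to_matrix(4.0, 0.5, np.deg2rad(-90))   # robot B's estimate, received over wifi

# Robot B's pose expressed in robot A's body frame
T_B_in_A = np.linalg.inv(T_A) @ T_B
print(T_B_in_A[:2, 2])   # position of B as seen from A
</code></pre>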
9073
2016-02-02T00:38:23.743
|mobile-robot|localization|precise-positioning|
<p><strong>Scenario</strong></p> <p>I have 2 roaming robots, each in different rooms of a house, and both robots are connected to the house wifi. Each robot only has access to the equipment on itself.</p> <p><strong>Question</strong></p> <p>How can the robots be aware of each other's exact position using only their own equipment and the house wifi?</p> <p><strong>EDIT: Additional Info</strong></p> <p>Right now the robots only have:</p> <ul> <li>RGBDSLAM via Kinect</li> <li>No initial knowledge of the house or their location (no docks, no mappings/markings, nada) </li> <li>Can communicate via wifi and that part is open ended</li> </ul> <p>I'm hoping to be able to stitch the scanned rooms together before the robots even meet. Compass + altimeter + gps will get me close but the goal is to be within an inch of accuracy which makes this tough. There IS freedom to add whatever parts to the robots themselves / laptop but the home needs to stay dynamic (robots will be in a different home every time).</p>
Determine robot's position in a nearby room
<p><strong>Hard real time</strong>: whatever cycle time you set for your loop must be met strictly, every single cycle. In most robot control systems the cycle time is <em>1 ms</em>. <a href="https://i.stack.imgur.com/JwbxX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JwbxX.png" alt="enter image description here"></a></p> <p>You can go to this <a href="https://roscon.ros.org/2015/presentations/RealtimeROS2.pdf" rel="nofollow noreferrer">link</a> for more information about the image above. Current ROS versions use TCPROS (not real-time) for their node-to-node communication, but in ROS 2 the community is redesigning ROS for hard real-time applications and is using DDS (Data Distribution Service) instead of TCPROS for node-to-node communication. </p>
9074
2016-02-02T03:34:47.990
|microcontroller|ros|real-time|
<p>As far as I know, a hardware real-time robot control system requires a specific computing unit to solve the kinematics and dynamics of a robot, such as IntervalZero RTX, which assigns CPU cores exclusively to the calculation, or a DSP board, which does exactly the same calculation. This configuration makes sure that each calculation completes strictly within, say, 1 ms. </p> <p>My understanding is that ROS, which runs under Ubuntu, doesn't have an exclusive computing unit for that. Kinematics and dynamics run under different threads of the same CPU which operates the Ubuntu system, path planning, and everything else. </p> <p>My question is: how does ROS achieve software real-time? Does it slow down the sampling time to maybe 100 ms and make sure each calculation can be done in time? Or does the sampling time change at each cycle, maybe from 5 ms to 18 ms to 39 ms, in order to be as fast as possible, with ROS somehow compensating for it at each cycle?</p>
Software real-time of ROS system
<p>Based on the videos, it looks like the answer is, unfortunately, "the test setup you have might be insufficient to say one way or the other". </p> <p>There is too much slack in the tethers to draw any meaningful conclusions about whether your code is doing the right thing. The quadcopter seemed to be reacting somewhat correctly to the way it was being tugged by the ropes. There were some moments where it looked incorrect as well, but they were not isolated enough to point to a specific logic error.</p> <p>Spend some time on the quality and stability of your test setup so that you can rigidly constrain two axes while letting the other move freely. (For example, disable one pair of motors and hang the quadcopter from strings around those propeller shafts.) That will let you troubleshoot one axis at a time in a very reasonable way, and provide more accurate feedback on the correctness of your PID values.</p> <p>Looking at your setup, another problem seems to be that you're attaching the tether too low and creating an "inverted pendulum" effect. (See also, <a href="https://aviation.stackexchange.com/q/9787">this question on quadcopter center of lift</a>).</p> <p><a href="https://i.stack.imgur.com/o3rnQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o3rnQ.png" alt="enter image description here"></a></p> <p>You need to create a mounting bracket for your tether that will rigidly mount it to the quadcopter body at the level of the center of lift that the propellers provide. It would also help to use something structural (like a metal post vertically attached to a sturdy base), instead of just rope or wire. </p>
9077
2016-02-02T15:52:58.220
|pid|raspberry-pi|quadcopter|tuning|
<p>Good day,</p> <p>I am working on an autonomous flight controller for a quadcopter ('X' configuration) using only angles as inputs for the setpoints used in a single loop PID controller running at 200Hz (PID Implementation is Here: <a href="https://robotics.stackexchange.com/questions/8998/quadcopter-pid-controller-derivative-on-measurement-removing-the-derivative-k">Quadcopter PID Controller: Derivative on Measurement / Removing the Derivative Kick</a>). For now I am trying to get the quadcopter to stabilize at a setpoint of 0 degrees. The best I was able to come up with currently is +-5 degrees which is bad for position hold. I first tried using only a PD controller but since the quadcopter is inherently front heavy due to the stereo cameras, no amount of D or P gain is enough to stabilize the system. An example is the image below which I added a very small I gain:</p> <p><a href="https://i.stack.imgur.com/OvijX.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OvijX.jpg" alt="Small/Almost Negligible I-gain"></a></p> <p>As you can see from the image above (at the second plot), the oscillations occur at a level below zero degrees due to the quadcopter being front heavy. This means that the quad oscillates from the level postion of 0 degrees to and from a negative angle/towards the front. To compensate for this behaviour, I discovered that I can set the DC level at which this oscillations occur using the I gain to reach the setpoint. An image is shown below with [I think] an adequate I gain applied:</p> <p><a href="https://i.stack.imgur.com/o9Qmh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o9Qmh.jpg" alt="Ok I-gain, I think"></a></p> <p>I have adjusted the PID gains to reduce the jitters caused by too much P gain and D gain. These are my current settings (Which are two tests with the corresponding footage below):</p> <p><a href="https://i.stack.imgur.com/JfZX8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JfZX8.png" alt="Current Settings"></a></p> <p>Test 1: <a href="https://youtu.be/8JsraZe6xgM" rel="nofollow noreferrer">https://youtu.be/8JsraZe6xgM</a></p> <p>Test 2: <a href="https://youtu.be/ZZTE6VqeRq0" rel="nofollow noreferrer">https://youtu.be/ZZTE6VqeRq0</a></p> <p>I can't seem to tune the quadcopter to reach the setpoint with at least +-1 degrees of error. I noticed that further increasing the I-gain no longer increases the DC offset. </p> <blockquote> <p>When do I know if the I-gain I've set is too high? How does it reflect on the plot?</p> </blockquote> <p>EDIT: The Perr in the graphs are just the difference of the setpoint and the CF (Complementary Filter) angle. The Derr plotted is not yet divided by the deltaTime because the execution time is small ~ 0.0047s which will make the other errors P and I hard to see. The Ierr plotted is the error integrated with time.</p> <p>All the errors plotted (Perr, Ierr, Derr) are not yet multiplied by the Kp, Ki, and Kd constants</p> <p>The 3rd plot for each of the images is the response of the quadcopter. The values on the Y axis correspond to the value placed as the input into the gpioPWM() function of the pigpio library. 
I had mapped using a scope the values such that 113 to 209 pigpio integer input corresponds to 1020 to 2000ms time high of the PWM at 400Hz to the ESC's</p> <p>EDIT:</p> <p>Here is my current code implementation with the setpoint of 0 degrees:</p> <pre><code>cout &lt;&lt; "Starting Quadcopter" &lt;&lt; endl; float baseThrottle = 155; //1510ms float maxThrottle = 180; //This is the current set max throttle for the PITCH YAW and ROLL PID to give allowance to the altitude PWM. 205 is the maximum which is equivalent to 2000ms time high PWM float baseCompensation = 0; //For the Altitude PID to be implemented later delay(3000); float startTime=(float)getTickCount(); deltaTimeInit=(float)getTickCount(); //Starting value for first pass while(1){ //Read Sensor Data readGyro(&amp;gyroAngleArray); readAccelMag(&amp;accelmagAngleArray); //Time Stamp //The while loop is used to get a consistent dt for the proper integration to obtain the correct gyroscope angles. I found that with a variable dt, it is impossible to obtain correct angles from the gyroscope. while( ( ((float)getTickCount()-deltaTimeInit) / ( ((float)getTickFrequency()) ) ) &lt; 0.005){ //0.00209715|0.00419 deltaTime2=((float)getTickCount()-deltaTimeInit)/(((float)getTickFrequency())); //Get Time Elapsed cout &lt;&lt; " DT endx = " &lt;&lt; deltaTime2 &lt;&lt; endl; } //deltaTime2=((float)getTickCount()-deltaTimeInit)/(((float)getTickFrequency())); //Get Time Elapsed deltaTimeInit=(float)getTickCount(); //Start counting time elapsed cout &lt;&lt; " DT end = " &lt;&lt; deltaTime2 &lt;&lt; endl; //Complementary Filter float pitchAngleCF=(alpha)*(pitchAngleCF+gyroAngleArray.Pitch*deltaTime2)+(1-alpha)*(accelmagAngleArray.Pitch); float rollAngleCF=(alpha)*(rollAngleCF+gyroAngleArray.Roll*deltaTime2)+(1-alpha)*(accelmagAngleArray.Roll); float yawAngleCF=(alpha)*(yawAngleCF+gyroAngleArray.Yaw*deltaTime2)+(1-alpha)*(accelmagAngleArray.Yaw); //Calculate Orientation Error (current - target) float pitchError = pitchAngleCF - pitchTarget; pitchErrorSum += (pitchError*deltaTime2); float pitchErrorDiff = pitchError - pitchPrevError; pitchPrevError = pitchError; float rollError = rollAngleCF - rollTarget; rollErrorSum += (rollError*deltaTime2); float rollErrorDiff = rollError - rollPrevError; rollPrevError = rollError; float yawError = yawAngleCF - yawTarget; yawErrorSum += (yawError*deltaTime2); float yawErrorDiff = yawError - yawPrevError; yawPrevError = yawError; //PID controller list float pitchPID = pitchKp*pitchError + pitchKi*pitchErrorSum + pitchKd*pitchErrorDiff/deltaTime2; float rollPID = rollKp*rollError + rollKi*rollErrorSum + rollKd*rollErrorDiff/deltaTime2; float yawPID = yawKp*yawError + yawKi*yawErrorSum + yawKd*yawErrorDiff/deltaTime2; //Motor Control - Mixing //Motor Front Left (1) float motorPwm1 = -pitchPID + rollPID - yawPID + baseThrottle + baseCompensation; //Motor Front Right (2) float motorPwm2 = -pitchPID - rollPID + yawPID + baseThrottle + baseCompensation; //Motor Back Left (3) float motorPwm3 = pitchPID + rollPID + yawPID + baseThrottle + baseCompensation; //Motor Back Right (4) float motorPwm4 = pitchPID - rollPID - yawPID + baseThrottle + baseCompensation; //Check if PWM is Saturating - This method is used to fill then trim the outputs of the pwm that gets fed into the gpioPWM() function to avoid exceeding the earlier set maximum throttle while maintaining the ratios of the 4 motor throttles. 
float motorPWM[4] = {motorPwm1, motorPwm2, motorPwm3, motorPwm4}; float minPWM = motorPWM[0]; int i; for(i=0; i&lt;4; i++){ // Get minimum PWM for filling if(motorPWM[i]&lt;minPWM){ minPWM=motorPWM[i]; } } cout &lt;&lt; " MinPWM = " &lt;&lt; minPWM &lt;&lt; endl; if(minPWM&lt;baseThrottle){ float fillPwm=baseThrottle-minPWM; //Get deficiency and use this to fill all 4 motors cout &lt;&lt; " Fill = " &lt;&lt; fillPwm &lt;&lt; endl; motorPwm1=motorPwm1+fillPwm; motorPwm2=motorPwm2+fillPwm; motorPwm3=motorPwm3+fillPwm; motorPwm4=motorPwm4+fillPwm; } float motorPWM2[4] = {motorPwm1, motorPwm2, motorPwm3, motorPwm4}; float maxPWM = motorPWM2[0]; for(i=0; i&lt;4; i++){ // Get max PWM for trimming if(motorPWM2[i]&gt;maxPWM){ maxPWM=motorPWM2[i]; } } cout &lt;&lt; " MaxPWM = " &lt;&lt; maxPWM &lt;&lt; endl; if(maxPWM&gt;maxThrottle){ float trimPwm=maxPWM-maxThrottle; //Get excess and use this to trim all 4 motors cout &lt;&lt; " Trim = " &lt;&lt; trimPwm &lt;&lt; endl; motorPwm1=motorPwm1-trimPwm; motorPwm2=motorPwm2-trimPwm; motorPwm3=motorPwm3-trimPwm; motorPwm4=motorPwm4-trimPwm; } //PWM Output gpioPWM(24,motorPwm1); //1 gpioPWM(17,motorPwm2); //2 gpioPWM(22,motorPwm3); //3 gpioPWM(18,motorPwm4); //4 </code></pre>
PID Tuning for an Unbalanced Quadcopter: When do I know if the I-gain I've set is too high?
<p>You might want to have a look at [1] where the author uses iSAM for estimating - among other quantities - the position and attitude of a rigid body spinning in microgravity.</p> <p>In section 4.5 p111 he mentions using a combination of {attitude error vector + reference quaternion} within the nodes of the pose graph :</p> <blockquote> <p>*It is clear that any representations of attitude include nonlinear transformations and kinematics. This causes a problem for modelling and propagating probability distribution functions with Gaussian random variables, such as those typically used in Extended Kalman Filters or the iSAM system. It is well understood that the covariance matrix of a quaternion is rank deficient due to its normalization constraint. While there is active research in a number of estimation systems that do not use Gaussian random variables [83, 109, 127, 118, 119], a typical approach for dealing with this is to use three vector error parameterization and reset the quaternion (see [26, 70, 82, 134]), which is what will be used here since it fits well with the iSAM system for Gaussian random variables and has a history of good performance.</p> <p>This error vector and reference quaternion approach can be applied to pose graph optimization methods such as iSAM. For each of the nodes that specify the vehicle’s 6DOF trajectory, the reference quaternion approach is mirrored. This means that at the vehicle’s state nodes for each timestep, both a four parameter reference quaternion and a three parameter attitude error is stored. Each time the optimization problem is re-linearized, a reset step is performed. This reset step transfers all of the attitude error into the reference quaternion.&quot;</p> </blockquote> <p>In his particular case he uses Modified Rodriguez Parameters (MRP) but you could probably choose an alternative representation if you keep the idea of combining an error vector with a quaternion within the nodes.</p> <p><a href="http://ssl.mit.edu/files/website/theses/PhD-2013-TweddleBrent.pdf" rel="nofollow noreferrer">[1] Tweddle, B. E., Computer Vision-Based Localization and Mapping of an Unknown, Uncooperative and Spinning Target for Spacecraft Proximity Operations, Ph.D. thesis, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, Cambridge, MA, 2013.</a></p>
9096
2016-02-04T11:27:22.330
|slam|pose|
<p>I want to implement my own pose graph SLAM following [1]. Since my vehicle is moving in 3D-space i represent my pose using a 3D-translation vector and a quaternion for orientation. [1] tells me that it's necessary to adapt their algorithm 1 by using manifolds to project the poses into euclidean space. I also studied the approach of [2]. In section "IV.B. Nonlinear Systems" they write that their approach remains valid for nonlinear systems. I conclude that for their case it's not obligatory to make use of a manifold. But I don't understand how they avoid it. So my questions are:</p> <ol> <li>Is it correct that there is an alternative to manifolds?</li> <li>If yes, how does this alternative look like?</li> </ol> <hr> <p>[1] Grisetti, G., Kummerle, R., Stachniss, C., &amp; Burgard, W. (2010). A tutorial on graph-based SLAM. Intelligent Transportation Systems Magazine, IEEE, 2(4), 31-43.</p> <p>[2] Kaess, M., Ranganathan, A., &amp; Dellaert, F. (2008). iSAM: Incremental smoothing and mapping. Robotics, IEEE Transactions on, 24(6), 1365-1378.</p>
Is there an alternative to manifolds when using quaternions for orientation representation in Pose Graph SLAM?
<ol> <li><p>For the range of motion you are talking about, I would recommend stepper motors. Motion control will be easier to implement because they move a fixed distance per step.</p></li> <li><p>Assuming the drum (idea B) is larger than the gear/pulley you would use for idea A, then I would say idea A is better. This is because the motor you use will turn some finite angle $\theta$, which means the carriage or drum will traverse a distance $r\theta$ - whichever device has a shorter radius (pulley or drum) will result in smaller motions, giving you a smoother curve. Also, putting paper correctly indexed on the drum (and holding it there!!) will be considerably more difficult than laying it flat in a frame. For example, if your stepper motor has a step size of 1.8 deg (0.0314 rad), and you use it on a pulley for a carriage where the pulley radius is 1cm, then your minimum motion on that axis is (0.0314 * 1)cm = 0.314mm. However, if your drum is big enough to hold the 30x30cm paper (perimeter = 30cm), then the drum radius is about 4.75cm, so your new minimum motion is (0.0314*4.75)cm = 1.5mm, or almost 5 times the distance between points! (A short calculation sketch for checking these numbers is given at the end of this answer.) </p></li> <li>Typically you won't control the stepper motors directly from the microcontroller; you'll use a <a href="https://www.sparkfun.com/products/12779" rel="nofollow">stepper driver</a> to run the motors. The microcontroller then tells the driver what size steps to take (full, half, quarter, etc.) and how many steps to take, and the driver does all of the power electronics and phasing for you. </li> </ol> <p>As a reminder - you'll need a mechanism to raise and lower the pen or your entire drawing will be one connected line. </p> <h2>:EDIT:</h2> <p>Finding a mechanism to raise and lower the pen frame is outside the scope of this site [shopping recommendation], but I would try a spring-loaded pen attached to a servo motor. </p> <p>Regarding micro-stepping, I'm not sure what your question is. For the driver I <a href="https://cdn.sparkfun.com/datasheets/Robotics/A3967-Datasheet.pdf" rel="nofollow">linked above</a>, you set two bits for full [0,0], half [1,0], quarter [0,1] or eighth [1,1] stepping. That's it. You would use GPIO pins to do this, wired to the MS2 and MS1 pins on the stepper driver. When you want to take a step, you send a pulse to the STEP pin on the driver. The driver takes care of everything else.</p>
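<p>The calculation sketch referenced in point 2, if you want to re-run the resolution numbers for your own pulley or drum size (the arithmetic is just step angle in radians times radius):</p> <pre><code>import math

step_deg = 1.8                      # full-step angle of a typical stepper
step_rad = math.radians(step_deg)   # microstepping divides this further (1/2, 1/4, 1/8, ...)

for name, radius_cm in [("1 cm pulley", 1.0), ("drum for 30 cm paper", 30.0 / (2 * math.pi))]:
    print("%s: %.2f mm per full step" % (name, 10 * step_rad * radius_cm))
</code></pre>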
9097
2016-02-04T16:13:29.313
|microcontroller|stepper-motor|servomotor|
<p>To plot any curve or a function on a paper we need points of that curve, so to draw a curve, I will store a set of points in the processor and use motors, markers and other mechanism to draw straight lines attaching these points and these points are so close to each other that the resultant will look an actual curve.</p> <p>So I am going to draw the curve with a marker or a pen.</p> <ol> <li>Yes to do this project I need motors which would change the position of a marker but which one?</li> </ol> <p>With my knowledge stepper motor and servo motors are appropriate but not sure whether they are appropriate since I have never used them, so will they work?</p> <p>The dimension of paper on which I will be working on is 30x30 cms.</p> <p>I have two ideas for this machine </p> <p>a. A rectangular one as shown <a href="https://i.stack.imgur.com/CplxN.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CplxN.jpg" alt="enter image description here"></a></p> <p>I would make my marker holder movable with help of rack and pinion mechanism but I am not sure that this would be precise and I may have to alter to some other mechanism and if you know such then that can really help me.</p> <p>b. A cylindrical one <a href="https://i.stack.imgur.com/Q4ytj.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q4ytj.jpg" alt="enter image description here"></a></p> <p>Here I would roll a paper on this cylinder and this paper will get unrolled as the cylinder rotates and even the marker holder is movable but only in X direction and the rolling of paper is nothing but change of Y position.</p> <ol start="2"> <li><p>Which one of the above two methods is good?</p></li> <li><p>I know about microcontrollers and I want to control the motors using them so I decided to go with Atmega 16 microcontroller. But here i might need microstepping of signals how would I be able to do that with microcontrollers?</p></li> </ol> <p>If you know the answer to atleast one of the questions then those answers are always welcomed. If you need any clarifications about these then please leave a comment.</p> <p>Thankyou for your time.</p> <p>Your sincerely, Jasser</p> <p>Edit : To draw lines of particular slope I would have to know the slope between two points and the depending on the slope I would rotate motors with particular speed so that marker will move in a straight fashion with that slope.</p>
A simple function plotter project
<p>The forward kinematics of the manipulator will correctly identify the larger displacements of the end effector for small rotations of the proximal joints, as opposed to the smaller displacements of the end effector for small rotations of the distal joints. When these motions are due to errors - all real mechanical systems have them - the established process for relating joint errors to task-space errors involves analytical perturbation analysis, and/or physical calibration of the system.</p> <p>A good starting point would be Siciliano and Khatib, <em>Handbook of Robotics.</em> Check out the end of Chapter 14 (they only hit upon the topic but the references will certainly help). You can also look at the papers which describe 3D sensors for robot calibration. Those papers frequently derive the perturbation analysis, then show how the new sensor allowed the end effector errors to be reduced after calibration. I recommend many of the editions of Lenarcic's <em>Advances in Robot Kinematics.</em> The 2000 edition with Stanisic has a paper by Khalil et al regarding calibration techniques. Or a web search will find many such papers, e.g.,</p> <p><a href="http://www.columbia.edu/~yly1/PDFs2/wu%20recursive.pdf" rel="nofollow">http://www.columbia.edu/~yly1/PDFs2/wu%20recursive.pdf</a></p> <p><a href="http://lup.lub.lu.se/luur/download?func=downloadFile&amp;recordOId=535825&amp;fileOId=625590" rel="nofollow">http://lup.lub.lu.se/luur/download?func=downloadFile&amp;recordOId=535825&amp;fileOId=625590</a></p> <p><a href="http://math.loyola.edu/~mili/Calibration/index.html" rel="nofollow">http://math.loyola.edu/~mili/Calibration/index.html</a> (follow the references in this one).</p> <p>Hope this helps.</p>
9099
2016-02-04T17:24:54.047
|algorithm|industrial-robot|joint|
<p>I am currently reviewing a path accuracy algorithm. The measured data are points in the 7 dimensional joint space (the robot <strike>under test</strike> is a 7 axes Robot, but this is not of importance for the question). As far as I know path accuracy is measured and assessed in configuration (3 D) space. Therefore I am wondering if a path accuracy definition in joint angle space has any practical value. Sure, if one looks at the joint angle space as a 7 dimensional vector space in the example (with Euclidean distance measure) one can do formally the math. But this seems very odd to me. For instance, an angle discrepancy between measured and expected for the lowest axis is of much more significance than a discrepancy for the axis near the <strike>actuator</strike> end effector.</p> <p>So here is my Question: Can anyone point me to references where path accuracy in joint space and/or algorithms for its calculation is discussed ? </p> <p>(I am not quite sure what tags to use. Sorry if I misused some.)</p>
Reference request: Path accuracy algorithm in the joint angle space
<p>Yes. As @hauptmech mentioned, you can use your forward kinematics to get the center of mass of each link in the base frame. Then you can simply compute the <em>weighted</em> average of the masses and positions to get the overall center of mass. </p> <p>In other words:</p> <p>$$ M = \sum_{i=0}^n m_i $$ $$ \mathbf{P}_i^0 = pos( \mathbf{T}_i^0(\mathbf{q}) \mathbf{T}_{i_m}^i) $$ $$ \mathbf{COM}^0 = \frac{1}{M} \sum_{i=0}^n m_i \mathbf{P}_i^0 $$</p> <p>Where:</p> <ul> <li>$m_i$ = mass of link $i$</li> <li>$M$ = total mass</li> <li>$\mathbf{q}$ = vector of joint angles</li> <li>$\mathbf{T}_i^0$ = Transform from base frame $0$ to link $i$. (The forward kinematics which is a function of $\mathbf{q}$).</li> <li>$\mathbf{T}_{i_m}^i$ = Transform from the link $i$ frame to it's center of mass. </li> <li>$pos()$ = function to extract position vector from full transform.</li> <li>$\mathbf{P}_i^0$ = position of mass of link $i$ in base frame $0$.</li> <li>$\mathbf{COM}^0$ = center of mass of system in base frame $0$.</li> </ul>
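<p>As a concrete sketch, if your forward kinematics already gives you each link's transform in the base frame, the weighted average is only a few lines of numpy. The <code>link_transform()</code> argument and the per-link data are placeholders for your own robot model:</p> <pre><code>import numpy as np

def center_of_mass(q, links, link_transform):
    """links: list of (m_i, T_im_i) pairs, where T_im_i is the link-to-CoM transform.
    link_transform(q, i) must return T_i^0, the 4x4 pose of link i in the base frame."""
    total_mass = 0.0
    weighted = np.zeros(3)
    for i, (m_i, T_im_i) in enumerate(links):
        T = link_transform(q, i) @ T_im_i   # T_i^0 * T_{i_m}^i
        weighted += m_i * T[:3, 3]          # pos(): keep only the translation part
        total_mass += m_i
    return weighted / total_mass            # COM^0
</code></pre>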
9100
2016-02-04T22:28:29.503
|mobile-robot|inverse-kinematics|humanoid|balance|
<p>I've been working on Humanoid Robot, and I face the problem of finding the Center of Mass of the Robot which will help in balancing the biped. Although COM has a very simple definition, I'm unable to find a simple solution to my problem.</p> <p>My view: I have already solved the Forward and Inverse Kinematics of the Robot with Torso as the base frame. So, if I can find the position(and orientation) of each joint in the base frame, I can average all of them to get the COM. Is this approach reasonable? Will it produce the correct COM?</p> <p>Can anyone offer any series of steps that I can follow to find the COM of the biped? Any help would be appreciated. </p> <p>Cheers! </p>
Finding Center of Mass for Humanoid Robot
<p>I am trying to do the same project. </p> <p>I asked this question in the LibrePilot forum (the new community which gives support to the OpenPilot project) and they told me about the UAVTalk protocol, an object oriented protocol that is used to exchange telemetry data.</p> <p>I give you some links about the protocol and also some modules programmed in Python to use it as interface with the protocol. By now I haven't test those modules but they seems to work properly.</p> <p>If you achieve something, please let me know. Here are the links</p> <p><a href="https://bitbucket.org/alessiomorale/librepilot/commits/812babc4406e8063f04997e9f5214efe66ca4cac?at=next" rel="nofollow">https://bitbucket.org/alessiomorale/librepilot/commits/812babc4406e8063f04997e9f5214efe66ca4cac?at=next</a></p> <p><a href="https://librepilot.atlassian.net/wiki/display/LPDOC/UavTalk" rel="nofollow">https://librepilot.atlassian.net/wiki/display/LPDOC/UavTalk</a></p>
9103
2016-02-05T08:15:49.290
|serial|communication|
<p>I want to get telemetry data in my Raspberry Pi that will be connected to a <a href="http://www.openpilot.org/product/coptercontrol/" rel="nofollow">CC3D board</a> either via USB cable or Serial communication. How can I get the data? I plan to have wifi communication between the Pi and my Laptop. Also <a href="http://www.openpilot.org/product/oplink-rf-modems/" rel="nofollow">OPLink modems</a> will be used both in the Pi and the CC3D for the telemetry. Does anyone have a python example that may help to build an interface or output in the Linux shell to get raw telemetry data in RPi? </p>
Connecting a CC3D board with Raspberry Pi to get telemetry data
<p>How <em>quickly</em> do you want to go from stopped to 10rpm? This will define your angular acceleration. </p> <p>Regarding calculations, first you should convert to standard units, so meters instead of centimeters and radians per second instead of revolutions per minute:</p> <p>$$ \omega_{\mbox{rad/s}} = N_{\mbox{rpm}}*\frac{2\pi}{60} \\ \omega_{\mbox{rad/s}} = N_{\mbox{rpm}}*0.1 \\ \omega_{\mbox{rad/s}} = 1 \mbox{rad/s} \\ $$</p> <p>$$ L = 0.1 \mbox{m} \\ $$</p> <p>Now, the equations you'll need are:</p> <p>$$ \tau_{\mbox{min}} = \tau_{\mbox{dynamic}} + \tau_{\mbox{static}_\mbox{max}} \\ $$</p> <p>where</p> <p>$$ \tau_{\mbox{static}_\mbox{max}} = mgL \\ $$</p> <p>and</p> <p>$$ \tau_{\mbox{dynamic}} = I\alpha \\ $$</p> <p>where $g$ is the gravitational constant $9.81\mbox{m/s}^2$, $I$ is the moment of inertia and $\alpha$ is the angular acceleration. These can be further defined as:</p> <p>$$ I = mL^2 \\ \alpha = \frac{\omega_{\mbox{desired}}}{t_{\mbox{desired}}} $$</p> <p>where $t_{\mbox{desired}}$ is how long you want the motor to take to get from stopped to full speed and $L$ and $\omega$ are your arm length and rotational speeds in meters and rad/s, respectively. </p> <p>So, putting this all together:</p> <p>$$ \tau_{\mbox{min}} = (mL^2)(\frac{\omega_{\mbox{desired}}}{t_{\mbox{desired}}}) + mgL $$</p> <p>The power required to achieve this torque will peak the instant before you stop accelerating, when you are at top speed. This power is given by:</p> <p>$$ P = \tau \omega \\ $$</p> <p>where $\tau$ is the torque calculated above and again $\omega$ is rotational speed in rad/s. Power is in Watts. </p> <p>Please note this is the <em>theoretical minimum</em>; in reality you will need more torque (and thus more power) because the arm is <em>not</em> massless and your load is <em>not</em> a point mass, but most importantly because whatever gear box you use to get to 10rpm will introduce significant inertia and frictional losses. I would shoot for <em>at least</em> double whatever those calculations give you as a performance margin. </p> <p>Once you have torque, speed, and power, you should be have enough specifications to purchase the correct motor. </p>
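<p>For example, plugging the question's numbers (m = 1 kg, L = 0.1 m, 10 rpm) into the equations above, and <em>assuming</em> you want to reach full speed in 1 second (that spin-up time is an assumption on my part, not something given in the question):</p> <pre><code>import math

m, L  = 1.0, 0.1                 # point mass (kg) and arm length (m)
omega = 10 * 2 * math.pi / 60    # 10 rpm in rad/s
t_des = 1.0                      # assumed time to reach full speed (s)
g     = 9.81

tau = (m * L**2) * (omega / t_des) + m * g * L   # dynamic + worst-case static torque (N*m)
P   = tau * omega                                # peak power at full speed (W)

print("theoretical minimum torque: %.2f N*m, peak power: %.2f W" % (tau, P))
print("with a 2x performance margin: %.2f N*m" % (2 * tau))
</code></pre>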
9114
2016-02-06T21:21:50.667
|motor|torque|
<p>Suppose I have a DC motor with an arm connected to it (arm length = 10cm, arm weight = 0), motor speed 10rpm.</p> <p>If I connect a 1 kg weight to the very end of that arm, how much torque is needed for the motor to do a complete 360° spin, provided the motor is placed horizontally and the arm is vertical?</p> <p>Is there a simple equation where I can input any weight and get the required torque (provided all other factors remain the same)?</p>
Simple equation to calculate needed motor torque
<p>I have used MicroMo's stepper motors in the past. You can find their information here: <a href="http://www.micromo.com/products/stepper-motors/stepper-motors-datasheets">http://www.micromo.com/products/stepper-motors/stepper-motors-datasheets</a></p> <p>However, I don't know if a design process of "make decisions then see what fits" is the best approach. You should determine the performance you need, then look for components, and base your architectural decision on what you find available. You may need to relocate the motors and use some kind of mechanical transmission to get the hand performance you're looking for. It is hard to say with such little required performance information. </p>
9124
2016-02-07T18:09:49.297
|stepper-motor|
<p>I'm developing a robotic hand, and decided to place motors inside joints (as in picture) and I'm stuck with finding a stepper motor that can fit there. Approximate size of motor body is radius - 10mm, length - 10 mm. </p> <p>Any suggestions?</p> <p><a href="https://i.stack.imgur.com/HP4Ri.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HP4Ri.png" alt="Diagram of hand joint with stepper motor"></a></p>
Choosing stepper motor for hand
<p>Yes, you have an issue, and it has to do with the parameterization of rotations. As you have it, your $e_{ij}$ will be a $7\times 1$ column, but your pose has only six degrees of freedom (three for translation, three for rotation). There is actually a constraint that isn't properly represented in your optimization function (the norm of a unit quaternion must be one). This means that when you try to do the optimization, it will not "know" about this redundancy and solving for $X^*$ will actually optimize a degree of freedom that doesn't exist.</p> <p>So how do we deal with this? It's a <em>good</em> idea to use unit quaternions because they avoid the singularities present in all minimal parameterizations (i.e., three parameter representations) of rotations. A good approach to this problem is using a <em>dual</em> parameterization of rotations. First, we use a singularity-free parameterization of the rotation in the state (e.g., your use of unit quaternions is fine), but we represent rotation <em>errors</em> using a <em>minimal parameterization</em>. If the differences between your measurements $z_{ij}$ and predicted measurements $f$ are relatively small, you can safely convert the error rotation to a minimal parameterization without fear of it encountering a singularity (depending on your choice of minimal parameterization).</p> <p>For example, let's use a <em>rotation vector</em> $\mathbf{r} = \theta\mathbf{a}$ for the minimal parameterization (where $\theta$ is the angle of rotation and $\mathbf{a}$ is the axis of rotation). Your new error vector is</p> <p>$$ e_{ij} = \begin{bmatrix} t_{ij} - t_j - t_i \\ \log\left(q_{ij}(q_jq_i^{-1})^{-1}\right) \end{bmatrix} $$</p> <p>where $\log(q)$ converts a quaternion to a rotation vector. Note that you should also be representing the uncertainty of your rotation measurement using a minimal parameterization (i.e., $\Sigma_{ij}$ is a $6\times 6$ matrix).</p> <p>After you solve for the optimal $6 \times 1$ perturbation of the state $\Delta\mathbf{x}^*$, you need to apply it to your state by converting the rotation vector back to a unit quaternion; i.e.,</p> <p>$$ q \gets q\, \exp(\Delta \mathbf{r}^*) $$</p> <p>where $\Delta \mathbf{r}^*$ is the optimal perturbation of one of the state's quaternions (i.e., it's part of $\Delta\mathbf{x}^*$), and $\exp$ converts a rotation vector to a unit quaternion.</p> <p>I didn't really go into too much detail here, and skipped some mathematical rigour for the sake of keeping things brief. For more information on this approach, see the below references. This approach is not limited to rotations.</p> <p>C. Hertzberg, R. Wagner, U. Frese, and L. Schröder, “Integrating generic sensor fusion algorithms with sound state representations through encapsulation of manifolds,” Information Fusion, vol. 14, no. 1, pp. 57–77, Jan. 2013.</p> <p>G. Grisetti, R. Kümmerle, C. Stachniss, and W. Burgard, “A tutorial on graph-based SLAM,” IEEE Intelligent Transportation Systems Maga- zine, vol. 2, no. 4, pp. 31–43, 2010.</p>
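<p>In case it helps, here is a small numpy sketch of the two conversions used above, with quaternions stored as (w, x, y, z) with unit norm (an assumed convention); the error and update expressions simply mirror the equations in this answer.</p> <pre><code>import numpy as np

# Quaternions below are unit quaternions stored as (w, x, y, z) -- an assumed convention.

def quat_mult(a, b):
    """Hamilton product a*b."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_inv(q):
    """Inverse of a unit quaternion is its conjugate."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_log(q):
    """Convert a unit quaternion to a rotation vector r = theta * axis."""
    v = q[1:]
    n = np.linalg.norm(v)
    if n &lt; 1e-12:
        return np.zeros(3)
    theta = 2.0 * np.arctan2(n, q[0])
    return theta * v / n

def quat_exp(r):
    """Convert a rotation vector to a unit quaternion."""
    theta = np.linalg.norm(r)
    if theta &lt; 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = r / theta
    return np.concatenate(([np.cos(theta / 2.0)], np.sin(theta / 2.0) * axis))

def rotation_error(q_ij, q_i, q_j):
    """Rotation part of e_ij: log( q_ij * (q_j * q_i^-1)^-1 )."""
    return quat_log(quat_mult(q_ij, quat_inv(quat_mult(q_j, quat_inv(q_i)))))

def apply_update(q, delta_r):
    """State update: multiply q on the right by exp(delta_r)."""
    return quat_mult(q, quat_exp(delta_r))
</code></pre>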
9129
2016-02-08T11:06:11.680
|slam|errors|
<p>Given a pose $x_i = (t_i, q_i)$ with translation vector $t_i$ and rotation quaternion $q_i$ and a transform between poses $x_i$ and $x_j$ as $z_{ij} = (t_{ij}, q_{ij})$ I want to compute the error function $e(x_i, x_j) = e_{ij}$, which has to be minimized like this to yield the optimal poses $X^* = \{ x_i \}$:</p> <p>$$X^* = argmin_X \sum_{ij} e_{ij}^T \Sigma^{-1}_{ij} e_{ij}$$</p> <p>A naive approach would look like this:</p> <p>$$ e_{ij} = z_{ij} - f(x_i,x_j) $$</p> <p>where $z_{ij}$ is the current measurement of the transform between $x_i$ and $x_j$ and $f$ calculates an estimate for the same transform. Thus, $e_{ij}$ simply computes the difference of translations and difference of turning angles:</p> <p>$$ e_{ij} = \begin{pmatrix} t_{ij} - t_j - t_i \\\ q_{ij} (q_j q_i^{-1})^{-1} \end{pmatrix} $$</p> <p>Is there anything wrong with this naive approach? Am I missing something?</p>
How to compute the error function in graph SLAM for 3D poses?
<p><strong>Mapping With Coordinate Systems</strong></p> <p>Suppose you have a robotic arm with 3 links:</p> <p><a href="https://i.stack.imgur.com/6ln9c.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6ln9c.png" alt="robotic arm"></a></p> <p>Using generalized coordinates:</p> <p>$\hspace{2.5em}$ $\vec{q}$ = $[q_{1}\hspace{1em}q_{2}]^{T}$ $\hspace{1.5em}$ [Generalized coordinates]</p> <p>We can evaluate the system as follows:</p> <p><a href="https://i.stack.imgur.com/p3IRz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p3IRz.png" alt="enter image description here"></a></p> <p>The kinematic equation that maps the origin to point A is:</p> <p>$\hspace{5.em}$ $_{0}\vec{r}_{OA}$ = $_{0}\vec{r}_{O1}$ + $_{0}\vec{r}_{12}$ + $_{0}\vec{r}_{2A}$ $\hspace{1.5em}$ [kinematic equation]</p> <p>We have two rotations, about joints 1 and 2. Substituting into the equation above:</p> <p>$\hspace{5.em}$ $_{0}\vec{r}_{OA}$ = $_{0}\vec{r}_{O1}$ + $R{(q_{1})_{01}}$ $ _{1}\vec{r}_{12}$ + $R{(q_{1}+q_{2})_{12}}$ $ _{2}\vec{r}_{2A}$</p> <p>Where $R(\bullet)$ is the <a href="https://en.wikipedia.org/wiki/Rotation_matrix" rel="nofollow noreferrer">rotation matrix</a>.</p> <blockquote> <p>If you want to map from point 1 to A:</p> <p>$\hspace{5.em}$ $_{1}\vec{r}_{1A}$ = $_{1}\vec{r}_{12}$ + $_{1}\vec{r}_{2A}$ $\hspace{1.5em}$ [New kinematic equation]</p> <p>$\hspace{5.em}$ $_{1}\vec{r}_{1A}$ = $R{(q_{1})_{01}}$ $ _{1}\vec{r}_{12}$ + $R{(q_{1}+q_{2})_{12}}$ $ _{2}\vec{r}_{2A}$</p> <p>It's pretty straightforward! For more on direct kinematics, see <a href="http://www.diag.uniroma1.it/~deluca/rob1_en/09_DirectKinematics.pdf" rel="nofollow noreferrer">here</a>.</p> </blockquote> <p>The term $R{(q_{1})_{01}}$ $ _{1}\vec{r}_{12}$ works out to:</p> <p>$\hspace{5.em}$ $R{(q_{1})_{01}}$ $ _{1}\vec{r}_{12}$ = $\begin{bmatrix} \cos(q_{1}) &amp; -\sin(q_{1}) &amp; 0 \\ \sin(q_{1}) &amp; \cos(q_{1}) &amp; 0 \\ 0 &amp; 0 &amp; 1\end{bmatrix}$ $\begin{bmatrix} l_{1} \\ 0 \\ 0 \end{bmatrix}$ </p> <p>With a little bit of algebra, you arrive at the answer:</p> <p>$\hspace{5.em}$ $ _{0}\vec{r}_{OA}$ = $\begin{bmatrix} l_{0} + l_{1}\cos(q_{1}) + l_{2}\cos(q_{1}+q_{2}) \\ 0 + l_{1}\sin(q_{1}) + l_{2}\sin(q_{1}+q_{2}) \\ 0 \end{bmatrix}$</p> <p>That equation makes it possible to map your whole system, at any time, relative to the origin.</p>
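<p>As a quick numerical check of that last expression, here is a small Python sketch that evaluates $_{0}\vec{r}_{OA}$ for given link lengths and joint angles (the numbers in the example call are made up purely for illustration):</p> <pre><code>import numpy as np

def r_OA(q1, q2, l0, l1, l2):
    """Position of point A in frame 0 for the planar 2-joint arm above."""
    return np.array([l0 + l1*np.cos(q1) + l2*np.cos(q1 + q2),   # x component
                     l1*np.sin(q1) + l2*np.sin(q1 + q2),        # y component
                     0.0])                                      # planar arm: z stays 0

# Example with made-up link lengths (metres) and joint angles (radians):
print(r_OA(np.deg2rad(30), np.deg2rad(45), 0.1, 0.3, 0.2))
</code></pre>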
9132
2016-02-08T15:22:10.583
|robotic-arm|dh-parameters|
<p>I have a robot with 3 rotational joints that I am trying to simulate in a program I am creating. So I have 4 frames: one base frame, and each joint has a frame. I have 3 transformation functions to go from frame 1, 2 or 3 to frame 0.</p> <p>By using the transformation matrix, I want to know how much each frame has been rotated (about the X, Y and Z axes) compared with the base frame. Any suggestions?</p> <p>The reason I want this is because I have made some simple 3D shapes that represent each joint. By using the DH parameters I made my transformation matrices. Whenever I change my θ (it does not matter how the θ changes, it just does), I want the whole structure to update. I take the translation from the last column. Now I want to get the rotations.</p>
Find orientation through Transformation matrix
<p>Let me put homotopy into the context of planning algorithms</p> <p>Suppose you want to get from point A to point B. Clearly, the easiest way is to traverse a straight line. But if there is an obstacle in the way of this straight line path, what should you do? If you want to obtain the "optimal" path (e.g. traversing the least total distance to the goal), you need only to travel in straight lines around the convex hull of the obstacle. This forms a non-smooth, piecewise linear path.</p> <p>However, sometimes the "path of least traversal" is not the best path to take or may even be infeasible. For example, what if you don't have omni-directional wheels or it's just impossible to pivot in place (e.g. you have ackermann steering). Then, instead of traversing the straight line path along the convex hull, you need find some other optimal way to move around the obstacle. By using homotopy, you can continuously move away from the optimal straight line path (around the obstacle) to a smoother path around the obstacle that your robot can actually take.</p> <p>More generally, homotopy allows you to relax the constraints of the original problem so that you can solve an easier problem that almost gives the best solution to the original problem. The key "trick" is to parameterize the problem so that in the limit as the parameter approaches zero, you get closer and closer to the original problem such that for any non-zero choice of the parameter, the "perturbed" problem is easier to solve. This is a very powerful tool for many problems, especially optimization problems where discreteness or non-smoothness make it very difficult to solve. </p>
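<p>As a toy illustration of that idea of continuously deforming one path into another, here is a small Python sketch of a linear homotopy $H(s,t) = (1-t)\,p_0(s) + t\,p_1(s)$ between a straight-line path and a smoother detour; both endpoint paths are made up purely for illustration.</p> <pre><code>import numpy as np

def straight_path(s):
    """Straight line from A=(0,0) to B=(1,0), parameterized by s in [0,1]."""
    return np.array([s, 0.0])

def detour_path(s):
    """A smooth path between the same endpoints that bulges around an obstacle."""
    return np.array([s, 0.5 * np.sin(np.pi * s)])

def homotopy(s, t):
    """H(s, t): t=0 gives the straight path, t=1 the detour path, and every
    intermediate t gives a continuously deformed path in between."""
    return (1.0 - t) * straight_path(s) + t * detour_path(s)

# Sample a few intermediate paths of the deformation:
for t in (0.0, 0.5, 1.0):
    pts = np.array([homotopy(s, t) for s in np.linspace(0.0, 1.0, 5)])
    print(f"t={t}:", np.round(pts, 2).tolist())
</code></pre>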
9143
2016-02-10T07:04:36.803
|motion-planning|rrt|
<p>What is an intuitive understanding of homotopy? At what stage is homotopy (I understand it as stretching or bending of a path) used in a planning algorithm? Is homotopy involved, for example, while implementing an algorithm like RRT?</p>
How is homotopy used in planning algorithms?
<p>You might want to try <a href="http://openrave.org/docs/latest_stable/openravepy/ikfast/" rel="nofollow">IKFast</a> in OpenRave. It can supposedly handle "robots with arbitrary joint complexity like non-intersecting axes". </p> <p>However, I had the same problem as you. After kinematic calibration, my joint axes were no longer at right angles. But when I tried my new robot kinematics in IKFast, it would crash and I was never able to get valid IK. I had a 7 DOF arm on a 1 DOF torso though. You might have better luck with a 6 DOF robot.</p>
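<p>For reference, this is roughly how an IKFast solver is generated and queried from Python in OpenRave. This is a sketch from memory, so treat the exact module and enum names (and the scene file) as assumptions to be checked against your OpenRave version:</p> <pre><code>from openravepy import Environment, IkParameterization, IkFilterOptions, databases

env = Environment()
env.Load('myscene.env.xml')          # assumed scene file containing your 6 DOF robot
robot = env.GetRobots()[0]
manip = robot.GetActiveManipulator()

# Generate (or load a cached) analytic IK solver for this manipulator.
ikmodel = databases.inversekinematics.InverseKinematicsModel(
    robot, iktype=IkParameterization.Type.Transform6D)
if not ikmodel.load():
    ikmodel.autogenerate()           # this is the step where non-ideal DH may fail

# Query a solution for a desired 4x4 end-effector pose Tgoal.
Tgoal = manip.GetEndEffectorTransform()   # e.g. start from the current pose
sol = manip.FindIKSolution(Tgoal, IkFilterOptions.CheckEnvCollisions)
print(sol)                                # None if no collision-free solution is found
</code></pre>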
9152
2016-02-11T02:39:31.640
|inverse-kinematics|calibration|
<p>I am working on a 6DOF robot arm project and I have one big question. When I first derived the inverse kinematics (IK) algorithm after decoupling (spherical wrist), I could easily get the equations based on nominal DH values, where alpha are either 0 or 90 degrees and there are many zeros in $a_i$ and $d_i$. However, after kinematics calibration, the identified DH parameters are no longer ideal ones with a certain small, but non-zero, bias added to the nominal values. </p> <p>So my question is, can the IK algorithm still be used with the actual DH parameters? If yes, definitely there will be end-effector errors in actual operation. If not, how should I change the IK algorithm? </p> <p>P.S. I am working on a modular robot arm which means the DH bias could be bigger than those of traditional robot arms. </p>
Inverse kinematics after calibration
<p>An STL file converts a solid model to an approximation that is represented by a bunch of triangular faces. There is no way for an STL file to represent joints or any other motion constraints because it's simply not part of the file protocol. </p> <p>This means that, if your assembly has motion constraints or movable parts, you <em>don't</em> actually want to export that assembly to one STL file. Instead, you should export each part individually and then re-build the assembly in SimMechanics using the frame transforms and motion constraint blocks provided there. </p> <p>I understand that this can be a (very) painful and tedious process, but unfortunately <a href="http://www.mathworks.com/help/physmod/sm/ref/smimport.html" rel="nofollow">SimMechanics only supports SolidWorks, Inventor, and Pro/E</a> (stated in the tip at the bottom of the page).</p> <p>That said, if you're using SimMechanics, I'd say there's a fair bet you're a student (because SimMechanics is EXPENSIVE given the prerequisites) and Autodesk Inventor is free to students. </p> <p>If your model is only a few parts, it will probably be easiest to just reassemble it with the joint constraints. If it's especially detailed, you might be better off fusing the non-moving parts into a "simulation model" such that you wind up with only a few parts. If you absolutely need to import an assembly that has a lot of parts (probably over 10 would be a lot to me), or you intend to import models frequently, then I would pursue acquiring Inventor or SolidWorks. SolidWorks is better but not free to students; however, they have an education version that is less expensive than the commercial version. </p>
9155
2016-02-11T06:25:36.523
|matlab|
<p>In CATIA, the .stl format is available only for part files, not for assembly files. How can I import an assembly into SimMechanics?</p> <p>Can a .CATProduct be converted to .stl, or is there any other way to do this?</p>
How to import catia assembly to Matlab. Simmechanics
<p>This is how I do it:</p> <pre><code># env and robot are the OpenRAVE Environment and robot objects from the calling script.
# Return True if in collision, False if no collision.
# Pass in joint angles q to test a configuration other than the current one.
def inCollision(q=None):
    with env:  # lock the environment while checking
        if q is None:
            return env.CheckCollision(robot) or robot.CheckSelfCollision()
        else:
            initq = robot.GetDOFValues()   # remember the current configuration
            robot.SetDOFValues(q)          # temporarily apply the test configuration
            ret = env.CheckCollision(robot) or robot.CheckSelfCollision()
            robot.SetDOFValues(initq)      # restore the original configuration
            return ret
</code></pre> <p>NOTE: OpenRave uses ODE as its collision checker by default. And for some reason, the default build of ODE for Ubuntu does not enable cylinder-cylinder collision checking. You can use Bullet instead if you like, but I have found it to be slower than ODE. Use these lines to change which collision checker OpenRave uses:</p> <pre><code>collisionChecker = RaveCreateCollisionChecker(env, 'bullet')
env.SetCollisionChecker(collisionChecker)
</code></pre> <p>Or you can just download the ODE source and recompile. These are the steps I follow:</p> <ol> <li>Download the latest ODE (<a href="https://bitbucket.org/odedevs/ode/downloads/ode-0.13.1.tar.gz" rel="nofollow">https://bitbucket.org/odedevs/ode/downloads/ode-0.13.1.tar.gz</a>)</li> <li><code>./configure --enable-double-precision --enable-libccd --disable-demos --disable-asserts --enable-shared</code></li> <li><code>make -j8</code></li> <li><code>sudo make install</code></li> </ol>
9167
2016-02-13T00:15:52.303
|robotic-arm|motion-planning|python|
<p>I have a robot arm in an environment. How can I check for collision between this robot arm and the environment?</p>
Check collision between robot and environment in OpenRAVE
<p>I don't think this is related to integral windup at all.</p> <blockquote> <p>I noticed that the I-error does not converge to zero</p> </blockquote> <p>That's a good thing, because it means your integral term is not useless.</p> <p>The integral term is there to compensate for steady-state errors. If you set the integral gain to 0, you should see that your system never reaches the setpoint.</p> <p>The I-error that builds up will fix that.</p> <blockquote> <p>and continues to increase</p> </blockquote> <p>I'm not sure if that's really the case. From the image that you posted it might as well converge to some constant value. Hence my request to let the system run for a longer period to see how the I-error behaves in the long run.</p> <p>Maybe there actually is something in your system that increases the steady-state error over time. I imagine that the increasing temperature of an electrical resistance (in the driver or the motor) might cause that.</p> <p>It's somewhat hard to tell what the reason is because it's one PID for everything. If you had several (cascaded) controllers, you could locate the reason by looking at the individual controllers. In case of a rising electrical resistance due to rising temperature, that should be visible in the current controller.</p> <p><strong>tl:dr;</strong> There's a steady-state error in your system and the integral part of your controller compensates it. If that keeps growing, it means that the steady-state error is growing over time, which I think suggests that there's some kind of drift.</p>
9169
2016-02-13T10:04:01.023
|control|quadcopter|pid|stability|tuning|
<p>Good day,</p> <p>I had been recently reading up more on PID controllers and stumbled upon something called integral wind up. I am currently working on an autonomous quadcopter concentrating at the moment on PID tuning. I noticed that even with the setpoint of zero degrees reached in this video, the quadcopter would still occasionally overshoot a bit: <a href="https://youtu.be/XD8WgVFfEsM" rel="nofollow noreferrer">https://youtu.be/XD8WgVFfEsM</a></p> <p>Here is the corresponding data testing the roll axis: <a href="https://i.stack.imgur.com/gAatW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gAatW.png" alt="enter image description here" /></a></p> <p>I noticed that the I-error does not converge to zero and continues to increase: <a href="https://i.stack.imgur.com/jS8oS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jS8oS.png" alt="enter image description here" /></a></p> <blockquote> <p>Is this the integral wind-up?</p> <p>What is the most effective way to resolve this?</p> </blockquote> <p>I have seen many implementations mainly focusing on limiting the output of the system by means of saturation. However I do not see this bringing the integral error eventually back to zero once the system is stable.</p> <p>Here is my current code implementation with the setpoint of 0 degrees:</p> <pre><code>cout &lt;&lt; &quot;Starting Quadcopter&quot; &lt;&lt; endl; float baseThrottle = 155; //1510ms float maxThrottle = 180; //This is the current set max throttle for the PITCH YAW and ROLL PID to give allowance to the altitude PWM. 205 is the maximum which is equivalent to 2000ms time high PWM float baseCompensation = 0; //For the Altitude PID to be implemented later delay(3000); float startTime=(float)getTickCount(); deltaTimeInit=(float)getTickCount(); //Starting value for first pass while(1){ //Read Sensor Data readGyro(&amp;gyroAngleArray); readAccelMag(&amp;accelmagAngleArray); //Time Stamp //The while loop is used to get a consistent dt for the proper integration to obtain the correct gyroscope angles. I found that with a variable dt, it is impossible to obtain correct angles from the gyroscope. 
while( ( ((float)getTickCount()-deltaTimeInit) / ( ((float)getTickFrequency()) ) ) &lt; 0.005){ //0.00209715|0.00419 deltaTime2=((float)getTickCount()-deltaTimeInit)/(((float)getTickFrequency())); //Get Time Elapsed cout &lt;&lt; &quot; DT endx = &quot; &lt;&lt; deltaTime2 &lt;&lt; endl; } //deltaTime2=((float)getTickCount()-deltaTimeInit)/(((float)getTickFrequency())); //Get Time Elapsed deltaTimeInit=(float)getTickCount(); //Start counting time elapsed cout &lt;&lt; &quot; DT end = &quot; &lt;&lt; deltaTime2 &lt;&lt; endl; //Complementary Filter float pitchAngleCF=(alpha)*(pitchAngleCF+gyroAngleArray.Pitch*deltaTime2)+(1-alpha)*(accelmagAngleArray.Pitch); float rollAngleCF=(alpha)*(rollAngleCF+gyroAngleArray.Roll*deltaTime2)+(1-alpha)*(accelmagAngleArray.Roll); float yawAngleCF=(alpha)*(yawAngleCF+gyroAngleArray.Yaw*deltaTime2)+(1-alpha)*(accelmagAngleArray.Yaw); //Calculate Orientation Error (current - target) float pitchError = pitchAngleCF - pitchTarget; pitchErrorSum += (pitchError*deltaTime2); float pitchErrorDiff = pitchError - pitchPrevError; pitchPrevError = pitchError; float rollError = rollAngleCF - rollTarget; rollErrorSum += (rollError*deltaTime2); float rollErrorDiff = rollError - rollPrevError; rollPrevError = rollError; float yawError = yawAngleCF - yawTarget; yawErrorSum += (yawError*deltaTime2); float yawErrorDiff = yawError - yawPrevError; yawPrevError = yawError; //PID controller list float pitchPID = pitchKp*pitchError + pitchKi*pitchErrorSum + pitchKd*pitchErrorDiff/deltaTime2; float rollPID = rollKp*rollError + rollKi*rollErrorSum + rollKd*rollErrorDiff/deltaTime2; float yawPID = yawKp*yawError + yawKi*yawErrorSum + yawKd*yawErrorDiff/deltaTime2; //Motor Control - Mixing //Motor Front Left (1) float motorPwm1 = -pitchPID + rollPID - yawPID + baseThrottle + baseCompensation; //Motor Front Right (2) float motorPwm2 = -pitchPID - rollPID + yawPID + baseThrottle + baseCompensation; //Motor Back Left (3) float motorPwm3 = pitchPID + rollPID + yawPID + baseThrottle + baseCompensation; //Motor Back Right (4) float motorPwm4 = pitchPID - rollPID - yawPID + baseThrottle + baseCompensation; //Check if PWM is Saturating - This method is used to fill then trim the outputs of the pwm that gets fed into the gpioPWM() function to avoid exceeding the earlier set maximum throttle while maintaining the ratios of the 4 motor throttles. 
float motorPWM[4] = {motorPwm1, motorPwm2, motorPwm3, motorPwm4}; float minPWM = motorPWM[0]; int i; for(i=0; i&lt;4; i++){ // Get minimum PWM for filling if(motorPWM[i]&lt;minPWM){ minPWM=motorPWM[i]; } } cout &lt;&lt; &quot; MinPWM = &quot; &lt;&lt; minPWM &lt;&lt; endl; if(minPWM&lt;baseThrottle){ float fillPwm=baseThrottle-minPWM; //Get deficiency and use this to fill all 4 motors cout &lt;&lt; &quot; Fill = &quot; &lt;&lt; fillPwm &lt;&lt; endl; motorPwm1=motorPwm1+fillPwm; motorPwm2=motorPwm2+fillPwm; motorPwm3=motorPwm3+fillPwm; motorPwm4=motorPwm4+fillPwm; } float motorPWM2[4] = {motorPwm1, motorPwm2, motorPwm3, motorPwm4}; float maxPWM = motorPWM2[0]; for(i=0; i&lt;4; i++){ // Get max PWM for trimming if(motorPWM2[i]&gt;maxPWM){ maxPWM=motorPWM2[i]; } } cout &lt;&lt; &quot; MaxPWM = &quot; &lt;&lt; maxPWM &lt;&lt; endl; if(maxPWM&gt;maxThrottle){ float trimPwm=maxPWM-maxThrottle; //Get excess and use this to trim all 4 motors cout &lt;&lt; &quot; Trim = &quot; &lt;&lt; trimPwm &lt;&lt; endl; motorPwm1=motorPwm1-trimPwm; motorPwm2=motorPwm2-trimPwm; motorPwm3=motorPwm3-trimPwm; motorPwm4=motorPwm4-trimPwm; } //PWM Output gpioPWM(24,motorPwm1); //1 gpioPWM(17,motorPwm2); //2 gpioPWM(22,motorPwm3); //3 gpioPWM(18,motorPwm4); //4 </code></pre>
PID Control: Integral error does not converge to zero
<p>As stated above, the power of a servo is limited. You may instead need to use a motor with an optical (light-interrupter) encoder, as found in printers, combined with a series of reduction gears, to provide the force needed to suspend your robot without adding much weight.</p>
9175
2016-02-13T19:27:20.780
|otherservos|3d-printing|
<p>I am trying to make custom parts that fit directly onto a servo. Doing this has proved more difficult than I expected so far.</p> <p>I was hoping to avoid incorporating the provided servo horns into the 3D printed part, so I've been trying this method out. Below are images of my current test - a 3D printed attachment to the servo, with an indentation for an M3 nut (the servo accepts an M3 bolt) for attachment to the servo. The plastic ring doesn't have the spline (I can't print that level of detail, I think) but is tight around it. The top piece attaches to a 3/8" nut for use with the 3/8" threaded rod I had lying around.</p> <p><a href="https://i.stack.imgur.com/T7Q8s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T7Q8s.png" alt="The nut casing"></a></p> <p><a href="https://i.stack.imgur.com/LGIfh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LGIfh.png" alt="The servo attachment piece - note the M3 nut indent"></a></p> <p>So far, I'm having difficulty getting this setup to work at any level of torque without it just spinning in place.</p> <p>So... is this the correct approach? Am I going to have to design a piece with the servo horn inside of it to get the servo to connect? Are there better approaches I haven't considered?</p>
What is the best way to attach a 3D printed part to a servo for robotics use?
<p>Frontier based exploration is concerned primarily with exploring the physical space in order to produce an occupancy grid (or cost map) of the terrain traversability. The control actions follow a set of rules which work empirically well (but not <em>theoretically optimally</em>) to achieve the frontier-exploration goal.</p> <p>Information-gain methods can be used to specify <em>any</em> objective that you can model mathematically. Because they are probabilistic, the prior beliefs of the system can also be modelled, which gives the system a better idea than a set of simple rules.</p> <p>Take the example of the Mars Rover <em>Curiosity</em> looking for stromatolites (fossilised bacteria) on an unexplored region of Mars:</p> <ul> <li>A grid of 100x100 represents the area to be explored <ul> <li>For each point on the grid, the cell could be either an Ordinary rock, an Interesting rock or a Stromatolite (class labels $\{O, I, S\}$)</li> </ul></li> <li>The roboticists work with the geologists to provide Curiosity with expert knowledge of geology concerning stromatolite discovery: <ul> <li>$P(x=S | x_n=I) = 0.05$, which means the probability of the grid cell at x being a stromatolite, given that any of x's neighbours is an Interesting rock, is 5%.</li> <li>$P(x=S | x_n=S) = 0.35$, which similarly encodes the domain knowledge that stromatolites occur spatially close to one another</li> </ul></li> </ul> <p>In such a system, the info-gain approach would choose the action that would tell us the most about <em>the population of rock classes</em> in this map, taking into account the domain knowledge, the sensor uncertainty model and the current state of the map.</p> <p>This is in contrast to the frontier mapping method, which <em>can only find the shape</em> of the map, generally using non-probabilistic methods (though probabilistic frontier-exploration methods exist).</p>
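<p>As a concrete (toy) sketch of what "choosing the action that tells us the most" can look like, here is some Python that scores candidate sensing positions by the Shannon entropy of the cells they would observe; the grid size, sensing radius and class probabilities are all made up for illustration:</p> <pre><code>import numpy as np

def cell_entropy(p):
    """Shannon entropy (bits) of one cell's class distribution p = [P(O), P(I), P(S)]."""
    p = np.asarray(p, dtype=float)
    nz = p[p &gt; 0]
    return float(-(nz * np.log2(nz)).sum())

def score_candidate(belief, pose, radius):
    """Sum of entropy over the cells a sensor at `pose` would observe.
    A higher score means more uncertainty that this action could reduce."""
    rows, cols, _ = belief.shape
    r0, c0 = pose
    score = 0.0
    for r in range(rows):
        for c in range(cols):
            if (r - r0) ** 2 + (c - c0) ** 2 &lt;= radius ** 2:
                score += cell_entropy(belief[r, c])
    return score

# Toy belief map: every cell starts maximally uncertain over {O, I, S}.
belief = np.full((20, 20, 3), 1.0 / 3.0)
belief[0:10, 0:10] = [0.9, 0.08, 0.02]   # pretend this quadrant is already well known

candidates = [(5, 5), (15, 15), (5, 15)]
best = max(candidates, key=lambda pose: score_candidate(belief, pose, radius=4))
print(best)   # picks a candidate overlooking the most uncertain cells
</code></pre>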
9180
2016-02-14T04:28:58.663
|mapping|exploration|
<p>I've recently come across the concept of using information gain (or mutual information criteria) as a metric for minimizing entropy on a map to aid in robotic exploration. I have somewhat of a basic question about it. </p> <p>A lot of papers that talk about minimizing entropy consider an example case of something like a laser scanner and try to compute the 'next best pose' so that the maximum entropy reduction is achieved. Usually this is mentioned like "information gain based approaches help finding the best spot to move the robot such that the most entropy is minimized using raycasting techniques, as opposed to frontier based exploration which is greedy" etc. But I don't understand what the underlying reason is for information gain/entropy based exploration being better. </p> <p>Let's say a robot in a room with three walls and open space in front. Because of range limitations, it can only see two walls: so in frontier based exploration, the robot has two choices; move towards the third wall and realize it's an obstacle, or move towards the open space and keep going. How does an information gain based method magically pick the open space frontier over the wall frontier? When we have no idea what's beyond our frontiers, how can raycasting even help?</p>
How does information gain based exploration differ from frontier based?
<p>Using evolutionary methods (GA is one of them) for parameter tuning is indeed a common method [1] in the control domain. Especially for non-linear systems, the analytic solutions for optimal parameters can be difficult to find. Evolutionary methods are one way to efficiently perform a search for near-optimal parameters. </p> <p>A very successful and universal method that is widely used is <a href="https://en.wikipedia.org/wiki/CMA-ES" rel="nofollow">CMA-ES</a>. There are a large number of <a href="https://www.lri.fr/~hansen/cmaes_inmatlab.html" rel="nofollow">implementations</a> out there, including for MATLAB. I know that pole balancing in its various forms is often used as a benchmark. </p> <p>Applying the algorithm is usually not that difficult. Rating the performance of your result - this is called the fitness function in EA - is usually the most involved part.</p> <p>[1] P.J Fleming, R.C Purshouse, Evolutionary algorithms in control systems engineering: a survey, Control Engineering Practice, Volume 10, Issue 11, November 2002, Pages 1223-1241, ISSN 0967-0661, <a href="http://dx.doi.org/10.1016/S0967-0661(02)00081-3" rel="nofollow">http://dx.doi.org/10.1016/S0967-0661(02)00081-3</a>.</p>
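<p>The question asks for MATLAB, but just to show the idea compactly, here is a Python sketch of a very simple (1+λ)-style evolution strategy tuning two controller gains on the pendulum model from the question. The PD control law, gain bounds and fitness definition are my own assumptions, and a real application would replace this loop with CMA-ES.</p> <pre><code>import numpy as np

M, K, L, G = 0.5, 0.0001, 0.2, 9.81       # pendulum parameters from the question
DT, T_END = 0.001, 5.0
THETA_REF = 1.0                            # step reference (rad) -- an assumed task

def simulate(kp, kd):
    """Integrate theta'' = u - (K/M)*theta' - (G/L)*sin(theta) with a PD law
    and return the accumulated absolute tracking error (the fitness)."""
    theta, omega, cost = 0.0, 0.0, 0.0
    for _ in range(int(T_END / DT)):
        err = THETA_REF - theta
        u = kp * err - kd * omega                        # assumed PD control law
        alpha = u - (K / M) * omega - (G / L) * np.sin(theta)
        omega += alpha * DT
        theta += omega * DT
        cost += abs(err) * DT
    return cost

def evolve(generations=50, pop=10, sigma=2.0, seed=0):
    """Tiny (1+lambda) evolution strategy; a real setup would use CMA-ES instead."""
    rng = np.random.default_rng(seed)
    best = np.array([10.0, 1.0])                         # initial guess for [kp, kd]
    best_cost = simulate(*best)
    for _ in range(generations):
        children = best + sigma * rng.standard_normal((pop, 2))
        children = np.clip(children, 0.0, 200.0)         # keep gains positive/bounded
        costs = [simulate(kp, kd) for kp, kd in children]
        i = int(np.argmin(costs))
        if costs[i] &lt; best_cost:
            best, best_cost = children[i], costs[i]
    return best, best_cost

print(evolve())   # prints the tuned [kp, kd] and its tracking cost
</code></pre>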
9186
2016-02-15T05:19:52.713
|control|
<p>I've read some <a href="http://link.springer.com/chapter/10.1007%2F978-0-387-73137-7_14" rel="noreferrer">papers</a> on controlling nonlinear systems (e.g. a nonlinear pendulum). There are several approaches for targeting nonlinear systems. The most common ones are <strong><a href="https://en.wikipedia.org/wiki/Feedback_linearization" rel="noreferrer">feedback linearization</a></strong>, <strong><a href="https://en.wikipedia.org/wiki/Backstepping" rel="noreferrer">backstepping</a></strong>, and <strong><a href="https://en.wikipedia.org/wiki/Sliding_mode_control" rel="noreferrer">sliding mode</a></strong> controllers. </p> <p>In my case, I've done the theoretical and practical parts of controlling the nonlinear model of a simple pendulum, plus other manipulator problems, in C++. For the pendulum, I've utilized a backstepping controller for solving the tracking task for the angular displacement and velocity. The model is </p> <p>$$ \ddot{\theta} + (k/m) \dot{\theta} + (g/L) \sin\theta= u $$</p> <p>where $m=0.5, k=0.0001, L=.2$ and $g=9.81$. The results are </p> <p><a href="https://i.stack.imgur.com/yo9LV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/yo9LV.png" alt="Graph 1"></a></p> <p><a href="https://i.stack.imgur.com/ljCHu.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ljCHu.png" alt="Graph 2"></a></p> <p>The results are good. However, tuning the controller is time consuming. The majority of papers use <em>genetic algorithms</em> for tuning their controllers, such as PD, PID, and backstepping controllers. <em>I'm clueless in this field and I hope someone sheds some light on this concept, preferably with a MATLAB sample for at least controlling a simple pendulum.</em></p> <p>So far I've designed a simple GUI in C++/Qt in order to tune the controller manually. The picture below shows the response of the controller to a step function. </p> <p><a href="https://i.stack.imgur.com/1MPVB.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/1MPVB.jpg" alt="Screenshot of GUI of the controller tuning application"></a> </p>
Using genetic algorithm for tuning controllers
<p>I'm having a hard time following your code, partly because I don't know Python but I think mostly because I'm not sure I understand your variables. That said, I think I <em>do</em> understand your problem. </p> <p>An approach I would take to solve this would be to "warp" the data before evaluation, then make your decisions, then "de-warp" the output. For example, consider the following pseudo-code:</p> <pre><code>headingOffset = 0;
while(histogram[0] != peak)
    headingOffset = headingOffset + 1;
    histogram = CircularShift(histogram, -1);
end
&lt;find heading sector in the shifted histogram&gt;
heading = mod(heading + headingOffset, nSectors);  % undo the shift
</code></pre> <p>Hopefully I've made this clear; basically, re-orient your data such that you always start at a peak. This will ensure a valley always ends &lt;= 360 degrees. Then, once you find your heading, shift it back according to how far you shifted your data to start with.</p>
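<p>Since the implementation in the question already uses numpy, here is a short sketch of that warp/de-warp idea with <code>np.roll</code>; the threshold and the way the anchor peak is picked are assumptions, so adapt them to your <code>self.Hist</code> and sector size:</p> <pre><code>import numpy as np

def pick_direction(hist, target_sector, thrs=2.0):
    """Warp the histogram so it starts at its highest peak, pick the best
    below-threshold sector near the target, then de-warp the answer."""
    n = hist.shape[0]
    offset = int(np.argmax(hist))            # anchor: start the array at a peak
    warped = np.roll(hist, -offset)          # now no valley wraps past the end
    warped_target = (target_sector - offset) % n

    valley = np.where(warped &lt; thrs)[0]      # candidate sectors in the warped frame
    if valley.size == 0:
        return -1                            # no free direction found
    best_warped = valley[np.argmin(np.abs(valley - warped_target))]

    return int((best_warped + offset) % n)   # de-warp back to the original sectors
</code></pre>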
9200
2016-02-17T13:51:54.473
|motion-planning|python|planning|
<p>I am trying to implement the Vector Field Histogram as described by <a href="http://www.cs.cmu.edu/~motionplanning/papers/sbp_papers/integrated1/borenstein_VFHisto.pdf">Borenstein, Koren, 1991</a> in Python 2.7 using the <a href="https://scipy.org/">SciPy stack</a>.</p> <p>I have already been able to calculate the polar histogram, as described in the paper, as well as the smoothing function to eliminate noise. This variable is stored in a numpy array, named <code>self.Hist</code>.</p> <p>However, the function <code>computeTheta</code>, pasted below, which computes the steering direction, is only able to compute the proper direction if the valleys (i.e. consecutive sectors in the polar histogram whose obstacle density is below a certain threshold) do not contain the section where a full circle is completed, i.e. the sector corresponding to <code>360º</code>.</p> <p>To make things clearer, consider these two examples: </p> <ul> <li>If the histogram contains a peak in the angles between, say, <code>330º</code> and <code>30º</code>, with the rest of the histogram being a valley, then the steering direction will be computed correctly.</li> <li><p>If, however, the peak is contained between, say, <code>30º</code> and <code>60º</code>, then the valley will start at <code>60º</code>, go all the way past <code>360º</code> and end in <code>30º</code>, and the steering direction will be computed incorrectly, since this single valley will be considered two valleys, one between <code>0º</code> and <code>30º</code>, and another between <code>60º</code> and <code>360º</code>.</p> <p>def computeTheta(self, goal):</p> <pre><code>thrs = 2. s_max = 18 #We start by calculating the sector corresponding to the direction of the target. target_sector = int((180./np.pi)*np.arctan2(goal[1] - self.VCP[1], goal[0] - self.VCP[0])) if target_sector &lt; 0: target_sector += 360 target_sector /= 5 #Next, we assume there is no best sector. best_sector = -1 dist_best_and_target = abs(target_sector - best_sector) #Then, we find the sector within a valley that is closest to the target sector. for k in range(self.Hist.shape[0]): if self.Hist[k] &lt; thrs and abs(target_sector - k) &lt; dist_best_and_target: best_sector = k dist_best_and_target = abs(target_sector - k) #If the sector is still -1, we return it as an error. print (target_sector, best_sector) if best_sector == -1: return -1 #If not, we can proceed... elif best_sector &gt; -1: #... by deciding whether the valley to which the best sector belongs is a "wide" or a "narrow" one. #Assume it's wide. type_of_valley = "Wide" #If we find a sector that contradicts our assumption, we change our minds. for sector in range(best_sector, best_sector + s_max + 1): if sector &lt; self.Hist.shape[0]: if self.Hist[sector] &gt; thrs: type_of_valley = "Narrow" #If it is indeed a wide valley, we return the angle corresponding to the sector (k_n + s_max)/2. if type_of_valley == "Wide": theta = 5*(best_sector + s_max)/2 return theta #Otherwise, we find the far border of the valley and return the angle corresponding to the mean value between the best sector and the far border. elif type_of_valley == "Narrow": for sector in range(best_sector, best_sector + s_max): if self.Hist[sector] &lt; thrs: far_border = sector theta = 5*(best_sector + far_border)/2 return theta </code></pre></li> </ul> <p>How can I address this issue? Is there a way to treat the histogram as circular? Is there maybe a better way to write this function?</p> <p>Thank you for your time.</p>
A Vector Field Histogram implementation in Python 2.7
<p>From the iRobot support site, <a href="http://homesupport.irobot.com/app/answers/detail/a_id/231/~/list-of-roomba-400%2Fdiscovery-error-codes." rel="nofollow noreferrer">List of Roomba 400/Discovery error codes</a>:</p> <blockquote> <p>As Roomba cleans, it communicates with songs, beeps, and indicator lights. It also uses sounds to let you know when there's a problem or it needs maintenance. Here are the most common error codes and what they mean. If the problem cannot be resolved, contact <a href="http://homesupport.irobot.com/app/ask" rel="nofollow noreferrer">iRobot Customer Care</a>.</p> </blockquote> <p>Your particular issue, if the tune being played is a 4-note tune (in conjunction with the flashing Power Button red LED, is due to a low battery which requires charging:</p> <blockquote> <p><a href="https://i.stack.imgur.com/HXFB1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HXFB1.png" alt="iRobot Low battery error"></a></p> </blockquote> <p>As it says:</p> <blockquote> <p>Connect Roomba to a battery charger (or Home Base) and recharge the battery.</p> </blockquote> <p>As your iRobot is continuously making this tune, even whilst charging, this would suggest that either:</p> <ul> <li>the battery is not taking on any charge (i.e. it is dead), or;</li> <li>the charging circuitry (within the iRobot) is not detecting the charge current, and assumes that it is not plugged in. As a sanity check, verify that the charger is working, and that the connections are OK. Otherwise, maybe the charging circuit is defective, and needs maintenance, or replacing?</li> </ul> <p>This is what should happen, during the charge cycle: <a href="http://homesupport.irobot.com/app/answers/detail/a_id/648/related/1" rel="nofollow noreferrer">Roomba 400 Series/Discovery Battery and Charging Overview</a>.</p> <blockquote> <p>Under normal conditions, Roomba should charge in this manner:</p> <ul> <li>Your Battery Charger should have a solid green light on when it is plugged into a wall outlet charging Roomba.</li> <li>Your Home Base, if you have one, should have two solid green lights (one for Power and one for Docked) when it is plugged into the Charger and Roomba is charging.</li> <li>Your Roomba’s POWER light should pulse red/amber while charging. Charging time under normal conditions is between 3 – 6 hours depending on your Battery Charger. Charging could take up to 16 hours if your battery is deeply discharged at the beginning of the charge cycle.</li> <li>When the charging cycle is complete, the POWER light on Roomba will turn solid green.</li> </ul> <p>While Roomba is running, the POWER light will display the state of the battery as follows:</p> <ul> <li>Green - fully charged</li> <li>Amber - partially discharged</li> <li>Red - nearly discharged</li> <li>Flashing Red - discharged</li> </ul> <p>Always store Roomba charging on the Home Base or with the Charger plugged into the robot. For long-term storage, iRobot recommends that the battery be fully charged and then removed from the robot and stored in a cool, dry place.</p> </blockquote>
9206
2016-02-18T15:44:22.193
|irobot-create|
<p>My iRobot Create is playing a tune about every 30 seconds and continuously flashing a red light when I attempt to charge it. What is the issue?</p>
iRobot Create: Making Noise and Flashing Red Light While Charging
<p>To compare two different arm configurations, Euclidean distance in joint space is usually sufficient. </p> <p>$$ d = \sqrt{\sum_{i=0}^n(q_i - q_{i_{ref}})^2} $$</p> <p>Where $q_i$ is joint $i$ of the test configuration, and $q_{i_{ref}}$ is the same joint in the reference configuration. This will work even if the end-effector has a different pose. But its usefulness goes down as the difference in end-effector poses grows. I believe you can combat this effect somewhat by using a weighted distance metric. </p> <p>Note that in general, you might want to have some global notion of the quality of an IK solution when you don't have a reference configuration. The rest of this post deals with this scenario. </p> <p>There are a few metrics for comparing IK solutions. The first, that I have found to be most useful is <em>manipulability measure</em>. Which is defined as:</p> <p>$$ \mathit{w}(q) = \sqrt{det(\mathbf{J}(q)\mathbf{J}^T(q)} $$</p> <p>This is a rough measure of how easy it is for the arm to move in any direction. It vanishes as singular configurations. In the image below, it is a measure of the roundness of the manipulability ellipsoid.</p> <p><a href="https://i.stack.imgur.com/ixg6a.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ixg6a.jpg" alt="enter image description here"></a></p> <p>Interesting side note: an arm's (velocity) manipulability is orthogonal to it's force manipulability. Which kind of makes intuitive sense. Where your arm can move quickly, it can't lift very much, and vice versa.</p> <p>Distance from mechanical joint stops is another common metric:</p> <p>$$ \mathit{w}(q) = -\frac{1}{2n}\sum_{i=1}^{n}\Big(\frac{q_i - \bar{q}_i}{q_{iM} - q_{im}}\Big)^2 $$</p> <p>Where $q_{iM}$ and $q_{im}$ are the maximum and minimum joint angles, and $\bar{q}_i$ is the middle value of the joint range. Note that because this is just a unitless metric, you can tweak to suit your needs. For example, using a higher exponent to have a flatter curve in the middle, and sharper penalties near the edges.<br> <a href="https://i.stack.imgur.com/vVHtQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vVHtQ.png" alt="Joint range costs with varying exponent"></a></p> <p>I have also seen something called the <em>condition number</em> used. Which is the ratio of the first and last eigen values of the SVD of the Jacobian. Which is kind of another measure of the roundness of the arm's Jacobian. But I have found this metric to be less useful. </p> <p>If you are only interested in forces or velocities in a specific direction $u$, you can compute the manipulability ellipsoid like so:</p> <p>$$ \alpha(q) = \Big(u^T\mathbf{J}(q)\mathbf{J}^T(q)u\Big)^{-1/2} $$ $$ \beta(q) = \Big(u^T\big(\mathbf{J}(q)\mathbf{J}^T(q)\big)^{-1}u\Big)^{-1/2} $$</p> <p>Where $\alpha$ is the force component, and $\beta$ is the velocity component. </p> <p>You could also calculate the theoretical current draw for each arm configuration to find the most power efficient one. </p> <p>Or you can make your own metric and combine some of these. </p> <p>Note that you <em>can</em> use these approaches while treating IKFast as a black box. IKFast gives you a number of potential solutions for the given end-effector pose. You can simply iterate through them, evaluating the desired metric on each, then pick the best. (Side note: IKFast works slightly differently in Python and C. In Python, OpenRave automatically iterates your fixed joints through their entire range with some discretization. 
So you can potentially get hundreds of solutions. But in C, it only gives you the few permutations of joint flips. So you only get a handful of solutions. You are forced to call it multiple times to iterate through your fixed joint range.)</p>
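<p>For completeness, here is a small numpy sketch of the manipulability measure, the joint-stop metric and the weighted joint-space distance described above; the Jacobian, joint limits and weights are inputs you would obtain from your own model, so treat the function signatures as assumptions:</p> <pre><code>import numpy as np

def manipulability(J):
    """w(q) = sqrt(det(J * J^T)) -- vanishes at singular configurations."""
    return float(np.sqrt(np.linalg.det(J @ J.T)))

def joint_limit_measure(q, q_min, q_max):
    """Distance-from-joint-stops metric; values nearer 0 mean joints nearer mid-range."""
    q, q_min, q_max = map(np.asarray, (q, q_min, q_max))
    q_mid = 0.5 * (q_min + q_max)
    n = q.size
    return float(-(1.0 / (2.0 * n)) * np.sum(((q - q_mid) / (q_max - q_min)) ** 2))

def weighted_joint_distance(q, q_ref, w=None):
    """Weighted Euclidean distance in joint space to a reference configuration."""
    q, q_ref = np.asarray(q), np.asarray(q_ref)
    w = np.ones_like(q) if w is None else np.asarray(w)
    return float(np.sqrt(np.sum(w * (q - q_ref) ** 2)))

# Typical use: iterate over the IK solutions and keep the best one, e.g.
# best = min(ik_solutions, key=lambda q: weighted_joint_distance(q, q_current))
</code></pre>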
9228
2016-02-23T18:41:34.997
|robotic-arm|inverse-kinematics|
<p>I am working on the Baxter robot, where I have a first arm configuration and a bunch of other arm configurations, and I want to find the closest arm configuration to the first one among the many others. The trick here is that the end-effector location/orientation is exactly the same for all the arm configurations; they are just different IK solutions. Can anyone point me in the right direction on this? Thank you.</p>
Evaluating the similarity of two 7 Degree of Freedom Arms
<blockquote> <p>I'm using the L3GD20H MEMS gyroscope with an Arduino ... How is the sensor's interrupt line intended to be used if the microcontroller can't handle the interrupt from an ISR using an interrupt-driven I2C subsystem?</p> </blockquote> <p>I'm assuming you've already looked at <a href="http://www.adafruit.com/datasheets/L3GD20H.pdf" rel="nofollow">the L3GD20H datasheet</a> and the <a href="http://www.st.com/web/catalog/sense_power/FM89/SC1288/PF254039" rel="nofollow">L3GD20H errata sheet</a>, and you understand that the data-ready pin (DRDY) is sometimes called an interupt pin (INT2), not to be confused with "the" programmable interrupt pin (INT1). In particular, p. 45 of the datasheet says that reading the IG_SRC register clears the interrupt signal on INT1.</p> <p>If I were you, my next step would be to look at Arduino software for the L3GD20H and see how they handled it. (If someone has already done it, it must be possible, right?)</p> <ul> <li><a href="http://learn.adafruit.com/adafruit-triple-axis-gyro-breakout/programming" rel="nofollow">The Adafruit L3GD20 Library for the Arduino</a></li> <li><a href="https://www.nordevx.com/content/l3gd20h-gyro" rel="nofollow">The NorDevX L3GD20H Gyro Arduino Code Using I2C</a></li> <li><a href="https://www.pololu.com/product/2468" rel="nofollow">The Pololu L3GD20 Arduino library</a></li> </ul> <h2>easy method</h2> <p>Perhaps the easiest method is polling: connect the interrupt pin of the L3GD20H up to some general-purpose digital input pin of the Arduino. Then write a function to check that pin and do the appropriate things. Then call that function every time through the main <code>loop()</code>.</p> <p>Just because a pin is labeled "interrupt" doesn't mean it has to be handled by an interrupt routine.</p> <h2>currently more difficult method</h2> <p>Several people have requested that someone improve the Wire library to make it non-blocking ( <a href="http://forum.arduino.cc/index.php?topic=37822.0" rel="nofollow">"Wire library and blocking"</a>; <a href="https://github.com/arduino/Arduino/issues/1476" rel="nofollow">"Make Wire library non-blocking"</a>; <a href="http://diydrones.com/forum/topics/i2c-wiring-library-lockup?id=705844%3ATopic%3A720939" rel="nofollow">"I2C Wiring Library - Lockup"</a>; etc. ).</p> <p>With such a theoretical future Wire library, you could write one interrupt handler that handles a pin-change interrupt of some Arduino pin connected to the interrupt pin of the L3GD20H. On a low-to-high change, that interrupt handler would do nothing and immediately return. On a high-to-low change, that interrupt handler would call the (non-blocking) functions of the Wire library that appends the appropriate message to the end of a queue in a RAM buffer. 
If the I2C hardware was idle at that instant (i.e., it wasn't already in the middle of sending out a byte, possibly in the middle of a message to some other device on the I2C bus), that (non-blocking) function would pull the first byte of the next message in the queue and start sending it out the I2C hardware.</p> <p>Much later, after that interrupt handler returned to the main <code>loop()</code>, the I2C hardware would finish transmitting a byte (the first byte of this message, or some byte in some previous message), and that I2C hardware would trigger a different interrupt handler (internal to the improved Wire library) that would pull the next byte from the queue in RAM and start sending that byte out the I2C hardware, check for and buffer up any incoming byte in the I2C hardware, etc.</p>
9232
2016-02-23T22:41:14.550
|arduino|microcontroller|gyroscope|i2c|interrupts|
<p><strong>Background:</strong> I'm using the L3GD20H MEMS gyroscope with an Arduino through a library <a href="https://github.com/pololu/l3g-arduino" rel="nofollow">(Pololu L3G)</a> that in turn relies on interrupt-driven I2C (Wire.h); I'd like to be able to handle each new reading from the sensor to update the calculated angle in the background using the data ready line (DRDY). Currently, I poll the STATUS register's ZYXDA bit (which is what the DRDY line outputs) as needed.</p> <p><strong>General question:</strong> With some digital output sensors (I2C, SPI, etc.), their datasheets and application notes describe using a separate (out-of-band) hardware line to interrupt the microcontroller and have it handle new sets of data. But on many microcontrollers, retrieving data (let alone clearing the flag raising the interrupt line) requires using the normally interrupt-driven I2C subsystem of a microcontroller. <strong>How can new sensor data be retrieved from the ISR for the interrupt line when also using the I2C subsystem in an interrupt-driven manner?</strong></p> <p>Possible workarounds:</p> <ol> <li><p>Use nested interrupts (as @hauptmech mentioned): re-enable I2C interrupt inside of ISR. Isn't this approach discouraged?</p></li> <li><p>Use non-interrupt-driven I2C (polling)--supposedly a dangerous approach inside of ISRs. The sensor library used depends on the interrupt-driven Wire library.</p></li> <li><p>[Edit: professors' suggestion] Use a timer to interrupt set to the sample rate of the sensor (which is settable and constant, although we measure it to be e.g. 183.3Hz rather than 189.4Hz per the datasheet). Handling the I2C transaction still requires re-enabling interrupts, i.e. nested interrupts or performing I2C reads from the main program.</p></li> </ol> <p>[Edit:] Here's a comment I found elsewhere on a similar issue that led me to believe that the hang encountered was from I2C reads failing inside an interrupt handler: <a href="https://www.sparkfun.com/tutorials/326#comment-4f4430c9ce395fc40d000000" rel="nofollow">https://www.sparkfun.com/tutorials/326#comment-4f4430c9ce395fc40d000000</a> </p> <blockquote> <p>…during the ISR (Interrupt Service Routine) I was trying to read the device to determine which bit changed. Bad idea, this chip uses the I2C communications which require interrupts, but interrupts are turned off during an ISR and everything goes kinda south.</p> </blockquote>
How to interrupt on a data ready trigger when communications to the sensor are interrupt driven?
<p>I'm going to give you a high-level overview without going into much math. The purpose here is to give you a somewhat intuitive understanding of what is going on, and hopefully this will help the more mathematical resources make more sense. I'm mostly going to focus on the <em>unscented transform</em>, and how it relates to the UKF.</p> <h1>Random variables</h1> <p>Alright, the first thing you need to understand is that modern (mobile) robotics is <em>probabilistic</em>; that is, we represent things we aren't sure about by <em>random variables</em>. I've <a href="http://marcgallant.ca/2015/12/16/you-dont-know-where-your-robot-is/" rel="noreferrer">written about this topic</a> on my personal website, and I strongly suggest you check it out for a review. I'm going to briefly go over it here.</p> <p>Take the (one-dimensional) position of a robot for example. Usually we use sensors (e.g., wheel encoders, laser scanners, GPS, etc.) to estimate the position, but all of these sensors are <em>noisy</em>. In other words, if our GPS tells us we are at $x = 10.2$ m, there is some uncertainty associated with that measurement. The most common way we model this uncertainty is using <em>Gaussian</em> (also known as <em>Normal</em>) distributions. The probabilistic density function (don't worry about what this means for now) of a Gaussian distribution looks like this:</p> <p><img src="https://i.stack.imgur.com/Hey7S.png" alt="Gaussian PDF"></p> <p>On the $x$ axis are different values for your random variable. For example, this could be the one dimensional position of the robot. The <em>mean</em> position is denoted by $\mu$, and the <em>standard deviation</em> is denoted by $\sigma$, which is a sort of "plus/minus" applied to the mean. So instead of saying "the robot's position is $x = 9.84$ m", we say something along the lines of "the mean of our estimate of the robot's position is $x = 9.84$ m with a standard deviation of $0.35$ m".</p> <p>On the $y$ axis is the relative <em>likelihood</em> of that value occurring. For example, note that $x = \mu$ has a relative likelihood of about $0.4$, and $x = \mu - 2\sigma$ has a relative likelihood of about $0.05$. And let's pretend that $\mu = 9.84$ m and $\sigma = 0.35$ m like the above example. That means</p> <p>$$ \mu = 9.84 \text{ m}, \quad \mu - 2\sigma = 9.84 - 2(0.35) = 9.14 \text{ m} $$</p> <p>What the $y$-axis values are telling you is that the likelihood that the robot is at $9.84$ is <em>eight times higher</em> than $9.14$ because the ratio of their likelihoods is $0.4/0.05 = 8$.</p> <p>The takeaway point from this section is that the things we often most interested in (e.g., position, velocity, orientation) are <em>random variables</em>, most often represented by <em>Gaussian distributions</em>.</p> <h1>Passing random variables through functions</h1> <p>An interesting property of Gaussian random variables is that the result of passing one through a linear function results in a Gaussian random variable. So if we have a function like</p> <p>$$ y = 12x - 7 $$</p> <p>and $x$ is a Gaussian random variable, then $y$ is also a Gaussian random variable (but with a different mean and standard deviation). On the other hand, this property does <em>not</em> hold for nonlinear functions, such as</p> <p>$$ y = 3\sin(x) - 2x^2. $$</p> <p>Here, passing the Gaussian random variable $x$ through the function results in a non-Gaussian distribution $y$. In other words, the shape of the PDF for $y$ would not look like the above plot. 
But what do I mean by "passing a Gaussian random variable through a function". In other words, what do I put in for $x$ in the above equation? As we saw in the above plot, $x$ has many possible values. </p> <p>As before, let's say $x$ has mean $\mu = 9.84$ m and standard deviation $\sigma = 0.35$ m. We are interested in <em>approximating</em> $y$ as a Gaussian random variable. One way to do this would be calculating a whole bunch of $y$s for different $x$s. Let's calculate $y$ for $x = \mu$, $x = \mu + 0.1\sigma$, $x = \mu - 0.1\sigma$, $x = \mu + 0.2\sigma$, etc. I've tabulated some results below.</p> <p>$$ \begin{array}{|c|c|} x &amp; y = 3\sin(x) - 2x^2 \\ \hline \mu &amp; -194.5 \\ \mu + 0.1\sigma &amp; -195.9 \\ \mu - 0.1\sigma &amp; -193.0 \\ \mu + 0.2\sigma &amp; -197.3 \\ \mu - 0.2\sigma &amp; -191.6 \\ \vdots &amp; \vdots \\ \mu + 10\sigma &amp; -354.5 \\ \mu - 10\sigma &amp; -80.3 \end{array} $$</p> <p>Although the values of $y$ do not form a Gaussian distribution, we create one by taking the mean of all the $y$ values to get $\mu_y$ (in this case $\mu_y = -201.8$), and we calculate the standard deviation the usual way:</p> <p>$$ \sigma_y = \sqrt{\frac{1}{n-1}\sum_{i=1}^n(y_i-\mu_y)^2} = 81.2. $$</p> <p>So voila! We have come up a way to pass Gaussian random variables through nonlinear functions. Problem solved, right? Well not necessarily. It took a lot of computations to come up with our solution, so solving problems this way when our data is streaming in many times per second may not be possible. And how did we choose how spread out our values of $x$ were? And why did we stop at $\mu \pm 10\sigma$? </p> <h1>The unscented transformation</h1> <p>The unscented transformation is a method of passing Gaussian distributions through nonlinear functions. It is not dissimilar to the brute force method I described above. In fact, it uses the exact same approach, but is really smart about how it chooses which values of $x$ to choose to pass through the nonlinear function.</p> <p>In fact, for the problem I described, the unscented transformation requires you to pass exactly three points (called <em>sigma points</em>) through the nonlinear function:</p> <p>$$ x = \mu, \quad x = \mu + \alpha\mu, \quad x = \mu - \alpha\mu, $$</p> <p>where $\alpha$ depends on the dimensionality of the random variable (one in this case) and other scaling factors. In other words, you pass the mean and one point on each side of the mean through the nonlinear function. Then you calculate the mean and standard deviation of the result to approximate $y$ as a random variable.</p> <h1>The unscented Kalman filter</h1> <p>Under the assumption that you have a basic understanding of Kalman filters, you'll recall that there are essentially two steps: prediction and correction. In the prediction step, you have a motion model that propagates the state forward in time. It might look something like</p> <p>$$ x_{k+1} = f(x_k, u_k) $$</p> <p>where $u_k$ is the input, and both $x_k$ and $u_k$ are (Gaussian) random variables. Now in a regular Kalman filter, $f(x_k, u_k)$ is a linear function, which results in $x_{k+1}$ being a Gaussian random variable. However, it is often the case that $f(x_k, u_k)$ is nonlinear. So what do we do? We calculate sigma points, pass them through the motion model, and then calculate the mean and variance of the result. This gives us our approximate estimate of $x_{k+1}$ as a Gaussian random variable.</p> <p>The correction step mostly works the same way. 
This time you have a measurement model that looks something like</p> <p>$$ z_k = h(x) $$</p> <p>where $h(x)$ may be nonlinear. So how do we get calculate our predicted measurement $z$? You guessed it, we use an unscented transformation again. I won't go into details on how you update the state from here (it's outside of the scope of my "intuitive" description), but it's relatively straightforward. </p> <h1>Summary</h1> <ol> <li>The state (what we are estimating) and the measurements come from "real-life" sensors, which all have noise. We model this uncertainty by representing them as Gaussian random variables.</li> <li>Passing Gaussian random variables through linear functions results in other Gaussian random variables. Passing them through nonlinear functions does not.</li> <li>One method of approximating the result of passing a Gaussian random variable through a nonlinear function as a Gaussian random variable is by sampling the input variable at a bunch of different points, passing those through the nonlinear function, and then considering the mean and standard deviation of the resulting output as a Gaussian distribution.</li> <li>The unscented transformation strategically picks points from distribution of the input variable that keep the most information about the distribution and passes them through the nonlinear function, then calculates the mean and standard deviation of the output.</li> <li>The unscented Kalman filter uses the unscented transformation to pass Gaussian random variables through the motion and measurement models. </li> </ol>
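<p>To tie the above together, here is a tiny Python sketch of the 1-D unscented transform applied to the example function $y = 3\sin(x) - 2x^2$. It uses the common sigma-point spread $\mu \pm \sqrt{(n+\lambda)}\,\sigma$ and the standard weights, so the exact scaling constants are an assumption of this sketch rather than the only valid choice:</p> <pre><code>import numpy as np

def unscented_transform_1d(mu, sigma, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Approximate y = f(x) for x ~ N(mu, sigma^2) using 3 sigma points (n = 1)."""
    n = 1
    lam = alpha**2 * (n + kappa) - n
    spread = np.sqrt(n + lam) * sigma

    # Sigma points: the mean and one point on each side of it.
    X = np.array([mu, mu + spread, mu - spread])
    Y = f(X)

    # Standard mean/covariance weights for the unscented transform.
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)

    mu_y = float(np.dot(wm, Y))
    var_y = float(np.dot(wc, (Y - mu_y) ** 2))
    return mu_y, np.sqrt(var_y)

# The running example: x with mean 9.84 m and standard deviation 0.35 m.
f = lambda x: 3.0 * np.sin(x) - 2.0 * x**2
print(unscented_transform_1d(9.84, 0.35, f))   # approximate mean and std-dev of y
</code></pre>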
9233
2016-02-23T22:43:47.620
|kalman-filter|
<p>I need some help here because I can't figure out how the Unscented Kalman Filter works. I've searched for examples, but all of them are too hard to understand.</p> <p>Can someone please explain how it works, step by step, with a trivial example like position estimation, sensor fusion or something else?</p>
Unscented Kalman Filter for Dummies