<p>Working with detached git HEADs is just a bad idea, so yes, this behaviour is expected.</p> <p>What you want to do is reset your current branch to the desired commit, i.e.</p> <pre><code>git reset --hard 995e018 </code></pre> <p>which will then be picked up by autoproj.</p>
3152
2014-06-24T11:30:09.800
|rock|
<p>I need to search the git history of a couple of packages to get back to a working state for a demo. I am searching by checking out commits manually until I have found the commits of all affected packages that work together.</p> <p>By checking out commits manually, I get into the detached HEAD state:</p> <pre><code>$ git checkout 995e018
-&gt; You are in 'detached HEAD' state. [...]
</code></pre> <p>To save the current state of all packages, a snapshot is created:</p> <pre><code>$ autoproj snapshot demo_working
</code></pre> <p>Now demo_working/overrides.yml will pin the commit that the branch is pointing to (e.g. 5e2e3a259) instead of the commit that I chose manually for the package (995e018).</p> <p>Is this the desired behaviour? In my opinion a snapshot should store the current state of all my git repositories, meaning that I can also select commits manually.</p>
autoproj snapshot with git detached HEAD
<p>The problem you are describing sounds like you need to calculate the Fréchet distance.</p> <p>If you take a point on each of your paths, connect them with a string and move them along the paths (not necessarily with the same velocities), then the Fréchet distance of the paths is the minimum length of string necessary. Not a good description :-)</p> <p>The formula from the <a href="https://en.wikipedia.org/wiki/Frechet_distance" rel="nofollow">wikipedia page</a> is much better:</p> <p>$$F(A,B) = \inf_{\alpha, \beta}\,\,\max_{t \in [0,1]} \,\, \Bigg \{d \Big ( A(\alpha(t)), \, B(\beta(t)) \Big ) \Bigg \}$$ where $\alpha$ and $\beta$ are mappings from time to the position on the curve.</p> <p>The only problem is that calculating the Fréchet distance between two polylines takes $O(n^{2}m + m^{2}n)$ memory (and even more time). There are some approximations you could use that only take the turning points of the polyline into account, but I don't know much about them.</p> <p>And some resources:</p> <ul> <li>More detailed description of the calculation: <a href="http://www.cim.mcgill.ca/~stephane/cs507/Project.html" rel="nofollow">http://www.cim.mcgill.ca/~stephane/cs507/Project.html</a></li> <li>Implementation in matlab: <a href="http://www.mathworks.com/matlabcentral/fileexchange/38714-frechet-distance-calculation/content/frechet_compute.m" rel="nofollow">http://www.mathworks.com/matlabcentral/fileexchange/38714-frechet-distance-calculation/content/frechet_compute.m</a></li> <li>My buggy implementation in python: <a href="https://github.com/bluecube/robomower2/blob/master/src_python/util/frechet_distance.py" rel="nofollow">https://github.com/bluecube/robomower2/blob/master/src_python/util/frechet_distance.py</a></li> </ul>
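<p>For illustration (not from the original answer): the <em>discrete</em> Fréchet distance, which only samples the polylines at their vertices, can be computed with a simple O(nm) dynamic program. A minimal sketch in Python, assuming the paths are given as arrays of points:</p> <pre><code>import numpy as np

def discrete_frechet(P, Q):
    """Discrete Frechet distance between polylines P and Q ((n, d) point arrays)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n, m = len(P), len(Q)
    ca = np.zeros((n, m))  # ca[i, j]: coupling distance of prefixes P[:i+1], Q[:j+1]
    for i in range(n):
        for j in range(m):
            d = np.linalg.norm(P[i] - Q[j])
            if i == 0 and j == 0:
                ca[i, j] = d
            elif i == 0:
                ca[i, j] = max(ca[0, j - 1], d)
            elif j == 0:
                ca[i, j] = max(ca[i - 1, 0], d)
            else:
                ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d)
    return ca[-1, -1]
</code></pre> <p>For densely sampled trajectories like the millisecond recordings here, the discrete version is usually a close approximation of the continuous one.</p>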
3170
2014-06-27T12:06:48.570
|localization|robotic-arm|
<p>Problem: the Cartesian position of an end effector (no orientation) of a robot arm is recorded, say, every millisecond (the time steps cannot be changed) during a motion. The robot arm is commanded along the same path but with different velocities, so I get different trajectories. I want to calculate the deviation of the paths, which is the distance between equivalent points of two paths. The problem is to find equivalent points. Since the two velocities are different, comparing the trajectories at the same time steps makes no sense. I can assume that the paths underlying the trajectories to be compared are rather similar, the deviation from the ideal path being smaller than 1% of a typical length dimension of the path. I want to detect deviations much less than that.</p> <p>I have to map the timestamp of the recorded points to the path length, and compare points at the same path length. But of course the path lengths also differ for different paths, so any deviation would distort the result for all later points. How can I compensate for this?</p> <p>Is there a reliable algorithm? Where can I find information?</p> <p>Note: time warp algorithms (even memory optimized ones) are out of the game because of memory consumption.</p>
Path comparison
<p>Mechanically, there are a variety of ways to fix the gripper to the arm. I think the easiest way is to have a little tab rotate under an overhang.</p> <p>At least one of the grippers shown in your video is pneumatic (it is quite possible both are). In that case, the pneumatic connection is just a hole with a gasket or O-ring to seal it.</p> <p>Electrically, you can have pins on a connector slide into the housing on the other end. But I think that requires some pretty tight tolerances, and might wear out quickly. More likely, you can use something called "pogo pins". These are spring-loaded contacts made for just this type of thing.</p> <p>It is also possible that your end-effector gripper can be actuated mechanically. I think this can be accomplished with gears, splines, or push rods.</p>
3172
2014-06-27T16:55:56.450
|robotic-arm|
<p>How do you make a gripper changer for a robotic arm like <a href="https://www.youtube.com/watch?v=_8ovd4khIBM" rel="nofollow">this</a>? I don't see how you could connect power/control wires or what you use to hold the gripper to the arm.</p>
Making a Gripper Changer for a Robotic Arm
<p>Since you are attempting to slow down a car when the road gets too bumpy, a PID controller is not what you want. Or rather, the bumpiness of the road will determine the <em>desired speed</em> that goes into the PID, instead of being part of the PID's sensor feedback loop.</p> <p>You may want to look at <a href="http://en.wikipedia.org/wiki/Additive_increase/multiplicative_decrease" rel="nofollow">additive increase, multiplicative decrease</a> as a function to control your desired speed. The basic idea would be:</p> <pre><code>good = 1
bad = 1
while True:
    if bumpy_level_detected() &gt; bumpy_level_allowed:
        bad = bad + 1
    else:
        good = good + 1
    desired_speed = (good_speed_constant * good) / (bad_speed_constant ** bad)
</code></pre> <p>In more mathematical notation: $$speed_{desired} = \frac{k_{good} * g}{({k_{bad}})^b} $$ Where $g$ represents the number of iterations where the bump sensor was within limits, $b$ represents the number of iterations where the bump sensor was over the limit, $k_{good}$ controls how quickly the car ramps up its speed when the road is smooth, and $k_{bad}$ controls how quickly the car slows down when the road is bumpy.</p> <p>(The above is just for explanatory purposes; in practice, rather than keeping $g$ and $b$ in memory and causing an overflow as they increase, you'd just apply the $k_{good} &gt; 0$ and $k_{bad} &gt; 1$ constants to the running total as appropriate.)</p>
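<p>A minimal sketch of that incremental form (added for illustration; the constants, limits, and sensor function are placeholders): instead of tracking $g$ and $b$, the running speed itself is increased additively on good iterations and decreased multiplicatively on bad ones:</p> <pre><code># Hypothetical constants - tune for the actual vehicle.
K_GOOD = 0.05     # additive speed increase per smooth iteration (m/s)
K_BAD = 1.5       # multiplicative decrease factor per bumpy iteration
MAX_SPEED = 2.0   # m/s

desired_speed = 0.0
while True:
    if bumpy_level_detected() &gt; bumpy_level_allowed:
        desired_speed /= K_BAD    # back off quickly when the road is bumpy
    else:
        desired_speed = min(desired_speed + K_GOOD, MAX_SPEED)  # creep back up
    # desired_speed then goes into the PID as the setpoint
</code></pre>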
3173
2014-06-27T21:02:19.073
|pid|accelerometer|navigation|
<p>I am working on a robot that has an accelerometer, which measures the vibration of the robot. When the robot reaches a certain vibration level, I would like it to slow down in order to reduce the vibration. I thought about a PID controller, but I don't think it would work. Does anybody have some input on different types of controllers I could use?</p> <ul> <li>Mechaman</li> </ul>
Velocity Control via Vibration
<p>First, consider that the purpose of the magnets in a motor is to create a permanent magnetic field. The magnetic flux associated with this magnetic field follows a path from one pole of the magnet, through the air gap to the armature, then through the opposite air gap, then through the other magnet, then around the housing of the motor back to the opposite pole of the first magnet. The magnets are embedded in the motor's magnetic circuit. If you place magnets on the outside of the motor, they will not be in the magnetic circuit of the motor. Rather, one pole of the magnets will be on the housing and the other will be facing out, away from the motor. Little to no flux will be added to the motor's magnetic circuit. </p> <p>If you took the motor apart and replaced the standard magnets with new magnets, then that is a different question. This is a complicated subject and there is no definite answer to your question other than "it depends." Most motors are designed so that the steel in the motor is near magnetic saturation. Increasing the strength of the magnet will drive more of the steel into saturation. How much, and what effect it will have, depends on the motor and how much you change your magnet by. But you will increase Kt, which means for a given torque, your current will be lower. And your speed will be lower. Your ability to dissipate heat will not change. However, your magnet may be more susceptible to demagnetization. You'd want to make sure your magnet's coercivity (Hc) is high enough. </p>
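<p>To make the Kt/speed trade-off concrete, these are the standard permanent-magnet DC motor relations (added for illustration, not from the original answer): $$\tau = K_t I, \qquad V = IR + K_e\,\omega, \qquad K_t \propto \Phi, \quad K_e \propto \Phi$$ where $\Phi$ is the magnetic flux. Stronger magnets raise both constants, so the current $I = \tau / K_t$ for a given torque drops, and the no-load speed $\omega \approx V / K_e$ drops by the same factor.</p>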
3183
2014-06-29T20:23:21.853
|motor|
<p>Is it possible to strengthen permanent magnet DC motors by simply attaching extra magnets on the outside of the motor casing, adding to the magnetic field?</p> <p>If this is not possible, the question becomes: what happens if I replace the magnets inside the motor with better magnets?</p> <p>I know that the coils will not handle more current than they currently do, so what will the net effect on the motor be?</p>
Adding external magnets to a DC motor
<p>Most <a href="http://www.bth.se/fou/cuppsats.nsf/all/eb95114803e5034ec1257a080030948f/$file/BTH2012Naveen.pdf" rel="nofollow noreferrer">laptops</a>, <a href="http://www.google.com.ar/patents/US8503691" rel="nofollow noreferrer">cell phones</a>, and <a href="http://www.clearone.com/uploads/resource/Advanced_Beamforming_Microphone_Array_Technology_for_Corporate_Conferencing_Systems.pdf" rel="nofollow noreferrer">conference phones</a> actually use an <em>array</em> of microphones to properly locate the source of the person speaking, and filter out noise that comes from other locations. This is called "beamforming", used with a "microphone array".</p> <p>See also: <a href="https://robotics.stackexchange.com/a/1625/350">this answer</a>.</p>
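<p>As a rough illustration of the idea (not from the original answer): the simplest beamformer is delay-and-sum, which time-aligns the microphone channels for one assumed source direction and adds them, reinforcing that direction and attenuating others. A toy sketch, assuming the per-microphone delays have already been computed from the array geometry:</p> <pre><code>import numpy as np

def delay_and_sum(signals, delays_samples):
    """Toy delay-and-sum beamformer.

    signals: (n_mics, n_samples) array of synchronized channels.
    delays_samples: per-mic integer delays (from the array geometry)
    that align a source in the steered direction.
    """
    out = np.zeros(signals.shape[1])
    for sig, d in zip(signals, delays_samples):
        out += np.roll(sig, -int(d))  # advance each channel; edges wrap (fine for a toy)
    return out / len(signals)
</code></pre>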
3207
2014-07-08T02:27:06.817
|digital-audio|speech-processing|
<p>I made several tests with different setups in order to achieve an acceptable speech recognition quality. It works well when I push a button to activate it, but now I want it to be automatically activated when a user speaks. This is a big problem, especially when I only use the energy of the audio signal to guess when the user is speaking. That is why I thought about using a headset and not a distant microphone. In a headset the microphone is very close to the user's mouth and it is easier to make correct guesses about when the user is speaking. Now my question is whether Bluetooth headsets used with mobile phones also have this property. They are not long enough and their microphone is not positioned exactly in front of the mouth. Is it possible that such devices also capture some speech/noise from a distant user? Is there a significant difference in signal energy between the user's speech and the speech of a person 1 meter away?</p>
Best microphone for speech recognition tasks
<p>I've cleaned the bearings of residue using dimethoxymethane aerosol and lubricated them with PTFE-based oil lubricant. It works pretty well now.</p> <p>More details <a href="http://lab.whitequark.org/notes/2014-07-08/cnc3020t-poor-positioning-accuracy/" rel="nofollow">here</a>.</p>
3218
2014-07-09T07:37:58.663
|actuator|linear-bearing|
<p>I have a Chinese CNC mill (CNC3020T, though several different devices go under this name), and its Z axis was very imprecise, often being randomly off position by as much as 0.5mm. I've disassembled the linear actuator and discovered several problems with it.</p> <p>The first problem is that they apparently forgot to lubricate the linear ball bearings. I draw this conclusion because the rails have a set of grooves ground into them, and after wiping the rails with a tissue the only thing that is left is finely powdered metal, with no traces of oil or other lubricant.</p> <p><img src="https://i.stack.imgur.com/eIQYD.jpg" alt="rails"> <img src="https://i.stack.imgur.com/bf2b4.jpg" alt="bearings"></p> <p>The second problem is the nut. I expected to see a ballnut, but in reality it is just a piece of threaded PTFE! The leadscrew rotates smoothly in it, but there is quite some lateral movement, i.e. I can tilt it slightly without any opposing force.</p> <p><img src="https://i.stack.imgur.com/8KVRS.jpg" alt="PTFE nut"></p> <p>The third problem is the overall mounting. In the picture below, the top left screw was sheared at the factory, and they then hid their mistake by tapping a larger thread and putting in a shorter screw that doesn't actually hold anything in the top plate. So the whole assembly was fixed at three, rather than four, points. However, the remaining screws were quite tight.</p> <p><img src="https://i.stack.imgur.com/MSvwB.jpg" alt="assembly"></p> <p>So my closely related questions are:</p> <ul> <li>Is the assembly even salvageable? How do I verify that the linear ball bearings and the PTFE nut are relatively undamaged?</li> <li>Can I just rotate the rails by 45° to get a smooth surface again?</li> <li>What do I lubricate the linear bearings with? Do I clean them before lubrication? I have an ultrasonic cleaner.</li> <li>Any other advice on maintenance of the whole assembly? There may be something that I missed.</li> </ul>
Repairing non-lubricated linear actuator
<p>Ball catching - which I believe should cover your intention - has been investigated a fair bit in robotics. There is some nice work on this from Udo Frese, which has been demonstrated on DLR's two-armed humanoid Justin. They use a multi-hypothesis filter to track the ball and predict the landing [1].</p> <p>Using the laws of physics will make it easier for you, and you should get good enough accuracy. I would start out with something that is already working. Using a learning approach you could definitely improve on the result. You could, for example, use the difference between your initial prediction and the final result as a signal for a reinforcement learning approach.</p> <p>[1] <a href="http://elib.dlr.de/74466/1/Birbach2011.pdf" rel="nofollow">Birbach, Oliver, Udo Frese, and B. Bauml. "Realtime perception for catching a flying ball with a mobile humanoid." Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011.</a></p>
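<p>As a baseline for the physics-based prediction (an illustrative sketch, not from the original answer or [1]): for a drag-free ballistic model, the landing point follows in closed form from the current filtered state and can be recomputed on every filter update. Assuming the state gives position <code>p = [x, y, z]</code> and velocity <code>v = [vx, vy, vz]</code>:</p> <pre><code>import numpy as np

G = 9.81  # m/s^2, gravity along -z

def predict_landing(p, v, z_ground=0.0):
    """Drag-free landing point of a ball at position p with velocity v."""
    p, v = np.asarray(p, float), np.asarray(v, float)
    # Solve z + vz*t - 0.5*G*t^2 = z_ground for the positive time root.
    a, b, c = -0.5 * G, v[2], p[2] - z_ground
    disc = b * b - 4 * a * c
    if disc &lt; 0:
        return None  # trajectory never reaches the ground plane
    t = (-b - np.sqrt(disc)) / (2 * a)  # positive root since a &lt; 0 and c &gt;= 0
    return np.array([p[0] + v[0] * t, p[1] + v[1] * t, z_ground])
</code></pre> <p>For a goalkeeper, the same idea applied to the goal plane (solve for the time when the ball crosses the goal line, then check the predicted crossing point against the goal mouth) tells you whether the ball is headed in.</p>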
3222
2014-07-09T13:35:11.407
|machine-learning|
<p>Suppose we have a moving object (<i>horizontal projectile motion as one of the most basic examples</i>). Is there any way to predict where it will finally hit? Please note that I'm looking for a machine learning method, not a closed-form solution.</p> <p>Although we can track the motion using a Kalman filter, that is only applicable when we want to predict the near future <i>(as far as I know)</i>. But I need to predict the ultimate destination of a moving object.</p> <p>To better express the problem, let's see the following example:</p> <p>Suppose a goalkeeper robot that of course uses filtering methods to smooth the ball motion. It needs to predict whether the ball is going to enter the goal or not, before it decides to catch the ball or let it go out.</p> <p><b>Input data is a time series of location and velocity [x,y,z,v].</b></p>
Predicting the impact point of a moving object
<p>Program a Bluetooth 4 (BLE) transmitter to send a short burst of data every n seconds. Choose n to balance cell life and response time: the smaller n is, the shorter the cell life. The data itself is dummy data, since we only need the signal strength; use the shortest data length possible to save cell life.</p> <p>Older transmitters used a separate MCU and BLE chip. It seems newer designs do both on a single chip.</p> <p>Program a cell phone app to read the RSSI signal strength and estimate the distance with reasonable accuracy.</p> <p>With mass-production factory-grade tools, facilities and know-how, the size can be about 1.2 times the diameter of a coin cell and 3 times its height: one unit of height for the cell, one for the electronics and PCB, and one for the plastic and miscellaneous parts.</p> <p>Google "child monitor bracelet" to see examples of real commercial products.</p> <p>Hope this helps.</p>
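<p>A minimal sketch of the distance-from-RSSI step (added for illustration; the calibration constant and exponent are placeholders that vary per device and environment), using the standard log-distance path-loss model:</p> <pre><code>def estimate_distance_m(rssi_dbm, rssi_at_1m_dbm=-59.0, path_loss_exponent=2.0):
    """Log-distance path-loss model.

    rssi_at_1m_dbm: calibrated RSSI at 1 m (device-specific placeholder).
    path_loss_exponent: ~2 in free space, higher indoors.
    Expect coarse accuracy only - tens of percent error is normal.
    """
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

# e.g. the phone app could trigger the 30 s vibration when
# estimate_distance_m(rssi) &gt; 3.0   (10 feet is about 3 m)
</code></pre>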
3225
2014-07-09T18:44:10.180
|arduino|
<p>I want to make a simple device that causes my cellphone to vibrate for 30 seconds when my phone is 10 feet away from it. How would I go about doing that? And how small could I make the device?</p>
Android Vibrating based on Arduino devices
<p>A linear actuator would be the ideal solution. You can find them very cheaply on sites like eBay or specialised robotics stores. If you need to keep position, there are models available with built-in position sensors and endstops.</p>
3231
2014-07-09T22:36:20.897
|servomotor|
<p>I'm working on an application where I need to apply a linear or angular force to operate a linkage mechanism, but I don't (yet) know what amount of force I will need. I anticipate that it will be less than 4.5 kg (44 N). The travel distance on the linkage input should be less than 15 cm.</p> <p>As I look through available servos, they seem to exist firmly in the scale-model realm of remote control vehicles, and as such I am uncertain if any will be suitable for my application. For example, one of Futaba's digital servos, the mega-high torque <a href="http://www.futaba-rc.com/servos/digital.html" rel="nofollow">S9152</a>, is listed at 20 kg·cm.</p> <p>From what I understand, this means that at 1 cm from the center of the servo shaft, I can expect approximately 20 kg of force. If I wanted 15 cm of travel distance I would need roughly a 10.6 cm radius, which would diminish the applied force to 20 / 10.6 = 1.9 kg, well below the 4.5 that might be required.</p> <p><strong>Question:</strong></p> <p>Is my understanding and calculation even remotely accurate? Should I be looking at other types of actuators instead of servos? They seem to become prohibitively expensive above 20 kg·cm of torque. <em>(For the purposes of this project, the budget for the actuator is less than $250 US.)</em></p> <p>For my application, I'd like to have reasonable control over intermediate positions across the travel range, good holding power, and fairly fast operation. For this reason I have dismissed the idea of using a linear actuator driven by a gearmotor and worm drive.</p> <p>I am relatively new to robotics in the usage of motorized actuators, but I've used pneumatic cylinders for many years. For this application, I can't use pneumatics.</p> <p><strong>Edit:</strong></p> <p>Per comments, some additional constraints that are important:</p> <ul> <li><strong>Linkage Details:</strong> The linkage is a planar, one-degree-of-freedom part of a portable system (similar to a scissor lift mechanism). It is for a theatrical effect where the motion is amplified and the force reduced (speed ratio and mechanical advantage are &lt; 1).</li> <li><strong>Power:</strong> It will be carried by a person. As such, the actuation needs to be battery-operated, as no tubing or wiring can tether the person. Tubing or wiring that is self-contained is okay. Because this is a portable system, battery power will be used. The control system will be designed specifically for an appropriate actuator. Rechargeable batteries up to 12V will most likely be employed. Actuators could operate on as high as 24V. Ideally a motor would not exceed 1-2 amperes draw, but as it is not in continuous operation, this is not a hard limit.</li> <li><strong>Not Pneumatic:</strong> I've considered pneumatic actuation, using CO2 cartridges, for example, but the client would prefer not to use pneumatics. Also, the ability to stop/hold at intermediate points in the motion range is desirable, and somewhat more complicated to do with pneumatic actuators.</li> <li><strong>Speed:</strong> An ideal actuator will be able to move the input coupling 15 cm in 1-2 seconds.</li> <li><strong>Weight:</strong> Weight constraints are not well-defined. As it will be carried by a person, it should be moderately lightweight. The actuator itself should probably be less than 1 kg, but certainly this can vary.
(The rest of the mechanism will probably be 6-8 kg.)</li> <li><strong>Size:</strong> The primary size constraint is that everything must fit within a space measuring no more than 500 x 500 x 120 mm (H x W x D). The linkage mechanism extends from and collapses outside the enclosure, parallel to the width.</li> <li><strong>Noise:</strong> The quieter the better, but noise is the least priority.</li> </ul> <p>Servos seemed like the best choice for the job, but they don't seem to be available with the sort of torque I need.</p>
Up to what force is a servo motor a reasonable choice as an actuator?
<p>To make compatible gears, you need to match the pitch and shape of the teeth.</p> <p>First, check out <a href="http://en.wikipedia.org/wiki/Gear" rel="nofollow noreferrer">this wiki article</a> about gears and especially <a href="http://en.wikipedia.org/wiki/Gear#Nomenclature" rel="nofollow noreferrer">the image about nomenclature</a>, so you have a good idea of the names for things:</p> <p><img src="https://i.stack.imgur.com/f5ADO.png" alt="Gear Nomenclature"></p> <p>You first need to determine the pitch of the gear you want to match, the 8-tooth gear. If it has a diameter of, say, 10 mm, then take its circumference (&pi;d = 31.4 mm) and divide by the number of teeth (8) to get a pitch of 3.925 mm. Note that I am using the root circle to calculate diameter, rather than the outside circle.</p> <p>Knowing the pitch, multiply by 33 to get the circumference of the new gear: 129.525 mm. We can now determine its diameter (d = c / &pi;), 41.25 mm.</p> <p>You'll need to use the same tooth shape, that is, the angles for the faces and flanks of the teeth, so that the gears mesh well.</p>
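<p>A small sketch of the arithmetic (added for illustration; the 10 mm pinion diameter is the answer's example value, and a real measurement should be used instead). Note that since the circular pitch is &pi;d/N, the new root diameter is simply d &times; 33/8, so &pi; cancels out:</p> <pre><code>import math

def matching_gear(pinion_teeth, pinion_root_diameter_mm, new_teeth):
    """Circular pitch of the existing pinion and root diameter of a meshing gear."""
    pitch = math.pi * pinion_root_diameter_mm / pinion_teeth  # mm per tooth
    new_diameter = pitch * new_teeth / math.pi                # same pitch, more teeth
    return pitch, new_diameter

pitch, d = matching_gear(8, 10.0, 33)
print(f"pitch = {pitch:.3f} mm, 33-tooth root diameter = {d:.2f} mm")
# pitch = 3.927 mm, 33-tooth root diameter = 41.25 mm
</code></pre>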
3233
2014-07-10T00:54:05.317
|motor|differential-drive|
<p>I'm trying to increase the torque on the output shaft of my robot's gearbox. I have a motor with a pinion attached to it with 8 teeth. I want to create a gear with 33 teeth that will mesh with the pinion that I currently have. I've got access to a 3D printer to make the gear, but I don't know how to design the second gear so that it will mesh properly.</p> <p>What parameters do I need to know about the first gear (8 teeth) to ensure that the second gear (33 teeth) will mesh correctly? How do I translate these parameters into the design of the second gear?</p>
Designing compatible spur gears for a robot gearbox
<p>The thrust vector is given by $f = qRg$ where $q$ is the magnitude of the thrust vector, $R$ is the rotation matrix for the current attitude and $g$ is the gravity vector in world coords, i.e. $[0~0~\pm1]^T$. Expanding $R$ for the ZYX Euler angles usually used for quadrotors, we have</p> <p>$$\ \begin{align} R &amp;= \begin{bmatrix} c_\psi c_\theta &amp; c_\psi s_\theta s_\phi - c_\phi s_\psi &amp; s_\psi s_\phi + c_\psi c_\phi s_\theta \\ c_\theta s_\psi &amp; c_\psi c_\phi + s_\psi s_\theta s_\phi &amp; c_\phi s_\psi s_\theta - c_\psi s_\phi \\ -s_\theta &amp; c_\theta s_\phi &amp; c_\theta c_\phi \end{bmatrix} \end{align} $$ where $\phi$ is the roll angle, $\theta$ is the pitch angle, and $\psi$ is the yaw angle.</p> <p>For simplicity, assume $\psi=0$ since it doesn't affect the vertical lift force. Multiplying $Rg$ and looking at just the vertical component we have $$ f_z = q c_\theta c_\phi \\ q = \frac{f_z}{c_\theta c_\phi} $$</p> <p>So just plug in your desired lift force (e.g. mass*gravity) and use your current roll and pitch angles to get the needed total thrust force.</p>
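<p>Applied to the code in the question, this means dividing the throttle term by $\cos\theta \cos\phi$ before mixing in the roll/pitch/yaw outputs. A small sketch (illustrative only; the tilt clamp is an arbitrary safety choice to keep the divisor sane at large tilt angles):</p> <pre><code>import math

def compensated_throttle(base_throttle, roll_rad, pitch_rad,
                         max_tilt_rad=math.radians(60)):
    """Scale throttle by 1/(cos(pitch)*cos(roll)) to keep vertical lift constant."""
    roll = max(-max_tilt_rad, min(max_tilt_rad, roll_rad))
    pitch = max(-max_tilt_rad, min(max_tilt_rad, pitch_rad))
    return base_throttle / (math.cos(pitch) * math.cos(roll))
</code></pre>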
4247
2014-07-11T20:22:05.567
|quadcopter|
<p>The propellers of a multicopter produce thrust. Unfortunately, the more the copter is tilted, the smaller the vertical component of the thrust. I was wondering whether there is an established method to calculate how much the overall thrust has to be modified to hold the current altitude, based on the current attitude.</p> <p>This is the way I calculate the motor output so far; rol/pit/yaw output has already run through the PIDs.</p> <pre><code>// Calculate the speed of the motors
int_fast16_t iFL = rcthr + rol_output + pit_output - yaw_output;
int_fast16_t iBL = rcthr + rol_output - pit_output + yaw_output;
int_fast16_t iFR = rcthr - rol_output + pit_output + yaw_output;
int_fast16_t iBR = rcthr - rol_output - pit_output - yaw_output;
</code></pre>
Tilt-compensated motor output to keep altitude for quadcopter
<p>This project looks promising <a href="http://gazebosim.org/" rel="nofollow">http://gazebosim.org/</a>.</p>
4255
2014-07-14T11:36:54.657
|mechanism|simulator|
<p>Is there any software where I can simulate production line elements (joints, motors, springs, actuators, movement)? For example, I want to simulate a mechanism to unwind paper from a big roll, weld it later with bubble foil, and finally make a bubble foil envelope. The mechanism will look like this:</p> <p><a href="https://i.stack.imgur.com/WVROy.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WVROy.jpg" alt="enter image description here"></a></p> <p>I need it as simple as possible and preferably free.</p>
Software to simulate mechanics of production line
<p>The <a href="http://www.visilibity.org/" rel="nofollow">VisiLibity</a> implementation appears to have python bindings.</p>
4263
2014-07-15T11:11:32.153
|motion-planning|python|
<p>I'm searching for a Python toolbox/library to do visibility-graph-based motion planning. I have searched on the internet, but couldn't find anything. I'm probably missing out...</p> <p>Is there any package you can recommend?</p>
Visibility Graph Toolbox for Python
<p>Totally off topic and not really robotics related but that's an old gearless angle drive. Kind of like this...</p> <p><a href="http://www.cal-vantools.com/p-5-90-degree-gearless-angle-drive38.aspx" rel="nofollow">http://www.cal-vantools.com/p-5-90-degree-gearless-angle-drive38.aspx</a></p>
4281
2014-07-19T01:14:31.850
|design|joint|identification|
<p><img src="https://i.stack.imgur.com/J9yMY.jpg" alt="enter image description here"></p> <p>Held and rotated by the knurled ends, one in each hand, the silver spokes rise and fall in order for the assembly to rotate. What is it, some companies' salesmen show tool? Found in an old building, unit has no markings.</p>
What type of mechanism is this?
<p>It is certainly your second guess, i.e.:</p> <p>$$ F_{x,j} = \begin{bmatrix} 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0\\ \end{bmatrix} $$</p> <p>If you pay attention, the columns that are repeated (with <code>...</code>) contain all zeros, both in the first column and in the last.</p> <p>If they wanted to show your first guess, they would have written something like this:</p> <p>$$ F_{x,j} = \begin{bmatrix} 1 \cdots 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 \cdots 0 \\ 0 \cdots 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 \cdots 0 \\ \underbrace{0 \cdots 1}_{N} &amp; 0 &amp; 0 &amp; 1 &amp; \underbrace{0 \cdots 1}_{N} \\ \end{bmatrix} $$</p> <p>Note that now the first element of the first column is 1 and the last element of the last column is also 1, showing that the 1 moves from top to bottom in each column. Note also that the number of columns (previously <code>3j-3</code> and <code>3N-3j</code>) is now changed to <code>N</code> (the number of rows, let's assume) because otherwise the pattern and the number of columns would be inconsistent.</p>
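<p>For concreteness, here is a small sketch (added for illustration) that builds the landmark block of $F_{x,j}$ for a given 1-based landmark index $j$ and map size $N$, reproducing the second guess:</p> <pre><code>import numpy as np

def F_landmark_block(j, N):
    """3 x 3N selector placing the 3x3 identity at landmark j's columns."""
    F = np.zeros((3, 3 * N))
    F[:, 3 * (j - 1):3 * j] = np.eye(3)  # 3j-3 zero columns before, 3N-3j after
    return F

print(F_landmark_block(2, 3))  # ones on columns 4-6, matching the matrix above
</code></pre>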
4285
2014-07-21T00:04:24.523
|slam|ekf|
<p>I've successfully implemented the EKF localization algorithm with known and unknown correspondences as stated in "Probabilistic Robotics". The results make perfect sense, so I can estimate the position of a robot without using GPS or odometry. Now I've moved on to EKF-SLAM with known correspondences in the same book. I don't understand this matrix </p> <p>$$ F_{x,j} = \begin{bmatrix} 1 &amp; 0 &amp; 0 &amp; 0 \cdots 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \cdots 0 \\ 0 &amp; 1 &amp; 0 &amp; 0 \cdots 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \cdots 0 \\ 0 &amp; 0 &amp; 1 &amp; 0 \cdots 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \cdots 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 \cdots 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 \cdots 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 \cdots 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 \cdots 0 \\ 0 &amp; 0 &amp; 0 &amp; \underbrace{0 \cdots 0}_{3j-3} &amp; 0 &amp; 0 &amp; 1 &amp; \underbrace{0 \cdots 0}_{3N-3j} \\ \end{bmatrix} $$ What exactly is the bottom of this matrix, i.e. the following? $$ F_{x,j} = \begin{bmatrix} 0 \cdots 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 \cdots 0 \\ 0 \cdots 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 \cdots 0 \\ \underbrace{0 \cdots 0}_{3j-3} &amp; 0 &amp; 0 &amp; 1 &amp; \underbrace{0 \cdots 0}_{3N-3j} \\ \end{bmatrix} $$ Is it as follows (assuming N = 3) $$ F_{x,j} = \begin{bmatrix} 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1\\ \end{bmatrix} $$ Or $$ F_{x,j} = \begin{bmatrix} 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0\\ \end{bmatrix} $$ where the ones represent a specific landmark. </p>
sparse matrix in EKF SLAM
<p>In regards to your question about differential drive, it sounds as though your robot is using the kinematic model to estimate the position, but this is being done "internally" (i.e., in the embedded on-board software). Whether or not this is okay depends on whether the wheel radii and track width (distance between left and right wheels) it uses are correct. If they aren't (e.g., you changed the wheels), the values it is spitting out will be very inaccurate. Assuming these are approximately correct, instead of the displacement of the left and right wheels, first calculate the translation and rotation since the last measurement, i.e.,</p> <pre><code>translation = sqrt( (dx - dx_prev)^2 + (dy - dy_prev)^2 )
rotation = dtheta - dtheta_prev
</code></pre> <p>where <code>dx</code>, <code>dy</code>, and <code>dtheta</code> are your current measurements, and <code>dx_prev</code>, <code>dy_prev</code>, <code>dtheta_prev</code> are the previous measurements. Note that I assumed your robot is reporting the heading (theta) as well. If it doesn't, that means if the robot is turning on the spot, you won't be able to tell because no change in position is occurring.</p> <p>Your new input <code>u</code> is now</p> <pre><code>u = [translation; rotation]
</code></pre> <p>Use an appropriate kinematic model where this is your input (e.g., change in <code>x</code> is <code>translation * cos(theta)</code>, etc.). Setting an appropriate covariance matrix for <code>u</code> is tricky. I would recommend determining it empirically (i.e., do a series of experiments and decide how much uncertainty you should add to <code>u</code> based on the results). Start with a guess such as</p> <pre><code>Q = [0.005^2, 0;
     0, 0.005^2]
</code></pre> <p>and check if the 3-sigma ellipse covers the actual position after the experiment. You can also get fancy and set <code>Q</code> at each time step, scaled according to the motion, i.e.,</p> <pre><code>Q = [(a*translation)^2 + (b*rotation)^2, 0;
     0, (c*translation)^2 + (d*rotation)^2]
</code></pre> <p>where you tune for values of <code>a</code>, <code>b</code>, <code>c</code>, and <code>d</code>. This allows you (for example) to add more uncertainty if the robot is turning (larger values of <code>c</code> and <code>d</code>).</p> <p>For your question about determining the <code>R</code> matrix, without getting into the details of how GPS works, your best bet is probably once again to do tests. One simple test would be to measure out some places on the ground (so you know their positions). Start by converting your initial latitude/longitude to (<code>x</code>, <code>y</code>) (see <a href="https://stackoverflow.com/questions/16266809/convert-from-latitude-longitude-to-x-y">here</a>). This will be your origin. Next drive to each of your known positions and calculate the (<code>x</code>, <code>y</code>) according to the GPS, and subtract off your initial position. Drive from known position to known position and so on (including revisiting positions) and record your (<code>x</code>, <code>y</code>) according to GPS. After this experiment you'll have a list of estimated positions and their corresponding true positions. From this you can use statistics to calculate the average uncertainty, and can set <code>R</code> appropriately.</p> <p>Good luck!</p>
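<p>For the last step, a small sketch of turning the collected samples into <code>R</code> (added for illustration; variable names are hypothetical):</p> <pre><code>import numpy as np

# gps_xy and true_xy: (K, 2) arrays of GPS-estimated positions and the
# surveyed ground-truth positions from the experiment described above.
def gps_measurement_covariance(gps_xy, true_xy):
    errors = np.asarray(gps_xy, float) - np.asarray(true_xy, float)
    errors -= errors.mean(axis=0)          # remove any constant offset/bias
    return np.cov(errors, rowvar=False)    # 2x2 covariance to use as R
</code></pre>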
4287
2014-07-21T17:27:39.073
|mobile-robot|kalman-filter|gps|odometry|
<p>I am working on estimating a robots pose using Odometry and GPS.</p> <p>My first problem is that all kinematic model i have seen for a differential drive robot proposes using the displacement of the left and right wheels to evaluate the robots next pose. However, in my situation the robot i have only spits out current X and Y pose relative to the starting point of the movement. can i use this as my state estimate P = [x,y]T</p> <p>P = [x0,y0] + [dx,dy] where dx and dy are change in respective coordinates gotten from the robots odometry.</p> <p>if the above is posible how do i calculate the state covariance Q of the filter.</p> <p>For GPS, how do i evaluate the covariance R; i have tried to collect multiple reading of latitude and longitude from a fixed point but i dont know if this is righ and i just dont get evaluate the covariance from these data (feeling dumb).</p> <p>Thank you in anticipation.</p>
Kalman filter Issue - GPS Odometry Fusion
<p>The Arduinos are mostly microcontroller boards. This means that they are good for controlling sensors and actuators in real time, but not so good at controlling complex tasks.</p> <p>Single Board Computers (SBCs) are better for the more complex tasks, such as video, SLAM, and communicating with other computers.</p> <p>I would not use a Raspberry Pi for this, because the Pi isn't all that powerful or fast.</p> <p>I like the Udoo. It is an x86 platform rather than ARM. However, one of its most interesting features is that it has an Arduino-compatible microcontroller (complete with headers for shields) on board, directly connected to the main processor.</p> <p>Another computer that I like using as the main brain of a robot is the Intel NUC. These are tiny little computers that can do a lot more than most people think. I've never had problems installing Linux on them.</p>
4297
2014-07-23T14:14:52.120
|arduino|mobile-robot|raspberry-pi|slam|stereo-vision|
<p>I am very new to robotics; however, I have a working stereo algorithm which I want to combine with a SLAM algorithm. I am developing this system for another application, but I decided that integrating it on a robot first and testing it might be a good way to get used to the system and test its behaviour in a realistic environment (rather than testing it in some kind of software simulator only). However, I want the system to be autonomous and running on board the rover.</p> <p>The system I am talking about will consist of:</p> <ul> <li>a stereo camera</li> <li>a rover with wheels, with one motor each</li> <li>possibly some kind of sensor that "measures" the movement, e.g. how much the wheels turned</li> <li>maybe some distance sensor</li> </ul> <p>Apart from this it's only software; the stereo software is already developed, the SLAM algorithm is not. Therefore it is currently impossible to say how much RAM it needs. I am currently running the stereo vision alone on an i7 in approx. 1 s.</p> <p>Now my question: as mentioned, I have no idea about robotics, and my electronics knowledge is also limited, so I have no idea what I need for this robot when it comes to the processor and the electronics.</p> <p>I read some stuff about the Raspberry Pi and Arduino boards but I have no idea what to make of this. I am afraid that an Arduino will not be able to handle the computational load of the stereo vision and the SLAM algorithm, but I read that Raspberry Pis are not the first choice when interfacing with sensors is needed (in this case my stereo cameras). Also I found the Leika kit, which is a robotics kit for the Raspberry Pi. Maybe this would be a good option for me?</p> <p>Maybe an entirely different system would be even more advisable?</p> <p>Possibly someone else has built an equally complex system before and can give me some advice from his/her experience?</p>
Which microcontroller/processor should be used for an autonomous stereo vision robot system?
<p>Yes, that looks like it will work. Keep in mind that the Arduino I/O pins aren't powering any logic. Most Arduinos can provide more than 500 mA from their 5V pin, assuming that they are connected to a power supply other than USB (not that you want to draw much current from it). The I/O pins on ATmega328-based boards can provide 40 mA, so powering anything more than an LED from them is generally not a good idea.</p> <p>The one thing I would worry about is that the ACS714 is rated at 30 amps, and it has a sensitivity of 66 mV/amp. The ATmega328's 10-bit ADC has a theoretical resolution of 4 mV. While this sounds like it should be enough, I personally doubt the ADC would be reliably accurate below 39 mV when noise is taken into account (3 LSBs). This doesn't take thermal drift and other factors into account. It would still probably work fine, but don't expect much from the current sensors.</p>
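<p>The arithmetic behind that concern, as a quick sketch (added for illustration, using the nominal 5 V reference):</p> <pre><code>VREF = 5.0           # ADC reference voltage (V)
COUNTS = 1024        # 10-bit ADC
SENS = 0.066         # ACS714-30A sensitivity (V per amp)

lsb_volts = VREF / COUNTS          # ~4.9 mV per ADC count
amps_per_count = lsb_volts / SENS  # ~74 mA of motor current per count
print(lsb_volts, amps_per_count)   # small currents disappear into a few counts of noise
</code></pre>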
4299
2014-07-23T23:47:33.803
|arduino|control|actuator|current|circuit|
<p>I am using the L298N motor driver to drive two HAD1 linear actuators (12V each and a no-load drive current of ~950mA each).</p> <p><strong>Linear Actuator</strong>: <a href="http://www.alibaba.com/showroom/mini-linear-actuator-had1.html" rel="nofollow">http://www.alibaba.com/showroom/mini-linear-actuator-had1.html</a></p> <p><strong>Motor Driver</strong>: an L298N dual H-bridge motor driver controller board/module for Arduino robots.</p> <p>I am also using one current sensor per motor to get feedback on what the motor is doing (the only sensors I have available, but I can detect whether the motors are moving or stopped). I am using two ACS714 current sensors. The supply voltage for each is 4.5V to 5.5V and the supply current is 10mA to 13mA.</p> <p><strong>Current Sensor</strong>: an ACS714 current sensor.</p> <p>And here is the circuit diagram that I made for my actual setup (an Arduino UNO, two current sensors, two linear actuators, and one motor driver):</p> <p><strong>Circuit Diagram</strong>: <img src="https://www.dropbox.com/s/ouxfrcutw5lcj6a/Actuator.jpeg?dl=1" alt=""></p> <p>Will this setup work? Will I have enough current/power coming out of the 5V of the Arduino to power both the L298N logic and the two ACS714 sensors?</p>
Arduino with two Linear Actuators, two ACS714 Current Sensors, and an L298N Motor Driver setup
<p>After reviewing the Pololu stepper driver manual, I realized I was wiring the stepper wires in the wrong order.</p> <p>The controller pin labels are: 1A, 1B, 2A, 2B. The controller card's manual said 'use letters as a pair', but it was supposed to say 'use numbers as a pair'.</p> <p>Everything works perfectly after correcting the wires.</p>
4307
2014-07-26T14:11:45.063
|stepper-motor|cnc|
<p>I always wanted to have a CNC to make PCBs quickly at home.</p> <p>Finally, I recently got a <a href="http://rads.stackoverflow.com/amzn/click/B002ARTLUG" rel="nofollow noreferrer">7x7 kit</a> from zentools and put it together. I attached a battery-powered screwdriver to the 2nd shaft of the stepper and moved each axis all the way back and forth before wiring. All 3 axes move smoothly; I can even turn the steppers by hand. Every piece works smoothly, with no mechanical jam.</p> <p>I decided to use <a href="https://github.com/grbl/grbl" rel="nofollow noreferrer">GRBL</a> as controller software. I tested the software without the shield or stepper (qv: <a href="http://cadduino.wordpress.com/2013/11/18/testing-grbl-in-arduino-board-without-the-motors/" rel="nofollow noreferrer">Testing GRBL in Arduino Board without the steppers</a>). I use <a href="https://github.com/winder/Universal-G-Code-Sender" rel="nofollow noreferrer">Universal Gcode Sender</a> to communicate with GRBL.</p> <p>I got an <a href="http://blog.protoneer.co.nz/arduino-cnc-shield/" rel="nofollow noreferrer">Arduino CNC Shield</a> for the Arduino UNO, put it together, attached it to the Arduino UNO, and re-tested GRBL; it worked.</p> <p>I used RepRap's <a href="http://reprap.org/wiki/Stepper_wiring#.22pair.22_wires_on_6_wire_motors" rel="nofollow noreferrer">Stepper wiring</a> article to connect the stepper to the driver, and wired 1 stepper to the stepper driver (X axis). I powered the shield with a 20V 17.5A (350W) DC regulated power supply. (It was the power adaptor for an old 17" notebook. The notebook died; I kept the adaptor.)</p> <p>When the move-5-steps command (G1 X5) is sent, the stepper makes a small move in the commanded direction and then makes a grinding noise. (<a href="https://www.youtube.com/watch?v=cNiFV6u2nSk" rel="nofollow noreferrer">Can be seen on YouTube</a>)</p> <p>I tried switching the 1st pair's cables, using other stepper drivers (3 of them), and turning the potentiometer to increase the current, but still no luck.</p> <p>I attached 2 photos of the CNC and the controller unit.</p> <p>I tried everything I can think of; any suggestions?</p> <p><img src="https://i.stack.imgur.com/Ubgr5.jpg" alt="CNC"> <img src="https://i.stack.imgur.com/hu2nb.jpg" alt="Controller (the DC adapter at the bottom)"></p>
Stepper does not turn
<p>Indeed, I'd also go for an interrupt-based method, as it allows you to execute code on an arbitrary time base, which I think you'll need due to the non-constant dt.</p>
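<p>Whichever triggering scheme is used, the integral (and derivative) terms simply scale by the measured dt. A minimal sketch (added for illustration, not from the original answer):</p> <pre><code>import time

class PID:
    """PID whose I and D terms use the measured, non-constant dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None
        self.prev_time = time.monotonic()

    def update(self, error):
        now = time.monotonic()
        dt = now - self.prev_time
        self.prev_time = now
        self.integral += error * dt  # rectangle rule: accumulate area, not samples
        if self.prev_error is None or dt == 0.0:
            derivative = 0.0
        else:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
</code></pre>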
4313
2014-07-27T16:41:27.927
|quadcopter|pid|real-time|
<p>Is integration over a non-constant dt (∆time) possible? Let's say you have a PID loop running at a varying frequency; can the integral part of it still work? (Assume you know the dt from the last iteration.)</p> <p>Could I just use a variable dt (∆time) in my calculation, and would the PID principle still function correctly?</p>
PID integration over non-constant dt (∆time)
<p>I have done this myself, controlling the exact position of a brushless DC motor as its velocity ramps up and down. And I did it using a position controller only. Sounds obvious, but it worked extremely well.</p> <p>The integral term of the controller is key to this. Since you want the position error to be as close to zero as possible the whole time, but you need a large output from the controller to drive the motor, most of this comes from the I term.</p> <p>The other thing you need to add is a feed forward term, which feeds some of the demanded acceleration to the output. This will give the controller the extra power it needs to track rapid velocity changes. We could call this an FPID controller. </p>
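<p>A rough sketch of that structure (added for illustration; the gains and state handling are placeholders, not the exact controller described above): a position PID plus a feed-forward term proportional to the demanded acceleration from the motion profile:</p> <pre><code>def fpid_step(pos_error, accel_demand, dt, state, kp, ki, kd, kff):
    """One update of a position PID with acceleration feed-forward ("FPID").

    state: dict carrying 'integral' and 'prev_error' between calls.
    accel_demand: demanded acceleration from the trajectory generator.
    """
    state['integral'] += pos_error * dt
    derivative = (pos_error - state['prev_error']) / dt
    state['prev_error'] = pos_error
    return (kp * pos_error
            + ki * state['integral']   # carries most of the drive effort
            + kd * derivative
            + kff * accel_demand)      # feed-forward for rapid velocity changes
</code></pre>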
4318
2014-07-28T14:33:16.217
|arduino|control|microcontroller|pid|wheeled-robot|
<p>I'm trying to build a robot with a differential drive powered by two DC motors. First I implemented a PID controller to control the velocity of each motor independently. I estimated the transfer function of the open-loop system using MATLAB's System Identification Toolbox, by acquiring the velocity of each wheel's encoder as a function of the PWM signal applied by an Arduino microcontroller. All went well and I successfully dimensioned the PID gains for this controller.</p> <p>What I'm trying to accomplish now is to control the exact (angular) position of the DC motor. I thought of cascading a PID controller at the input of the one already implemented. This way, I can give a position to the first controller, which will be capable of generating an output reference for the second (velocity) controller, so that it generates the appropriate PWM signal to drive the DC motor accordingly.</p> <p>Will it work? Is that a good approach? Or should I try to implement a different controller which outputs the PWM signal in response to a position reference signal?</p> <p>Many thanks for your attention and I hope somebody can help me with these doubts.</p>
Cascading PID DC Motor Position & Velocity Controllers
<p>When selecting an actuator, you need to consider not just the force but also the power.</p> <p>Power = force x speed</p> <p>The faster you need to lift the lever, the more power you'll need. Companies like Maxon provide detailed graphs showing exactly how much torque you can expect at various speeds for their motors. Other companies may just state the motor's wattage.</p> <p><img src="https://i.stack.imgur.com/WatDr.png" alt="Maxon torque graph"></p> <p>You will also need to consider acceleration.</p> <p>Force = mass x acceleration</p> <p>When you're lifting your lever, you need to add its weight and the force required for acceleration. If you want to maintain maximum acceleration right up to the maximum speed, then the total power is:</p> <p>power = (weight + mass x acceleration) x maxspeed</p> <p>or:</p> <p>power = mass x (9.8+acceleration) x maxspeed</p> <p>Next check the efficiency of the gears. Again, companies like Maxon specify the gear efficiency, so divide your calculated power by the gear efficiency. If you're unsure of the efficiency, then assume it's about 60% efficient.</p> <p>Lastly, you need to estimate how efficient the rest of your mechanism is. A tendon driven robot might be only 50% efficient in its power transmission, whereas attaching your lever directly to the motor shaft will be 100% efficient.</p> <p>After all that, if you allow another 20% - 50% power on top of that, you should be OK.</p>
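<p>Putting those steps together as a small sketch (added for illustration; the efficiency defaults are the rough guesses mentioned above):</p> <pre><code>def required_motor_power_w(mass_kg, accel_ms2, max_speed_ms,
                           gear_eff=0.6, mech_eff=1.0, headroom=1.35):
    """Estimated motor power to lift and accelerate a load, per the steps above.

    gear_eff: ~0.6 if unknown; mech_eff: 1.0 for direct drive, ~0.5 for tendons;
    headroom: the suggested extra 20-50% margin.
    """
    force_n = mass_kg * (9.8 + accel_ms2)   # weight plus acceleration force
    power_w = force_n * max_speed_ms        # mechanical power at max speed
    return power_w / gear_eff / mech_eff * headroom

# e.g. a 2 kg lever load, 1 m/s^2 acceleration, 0.5 m/s max speed:
# required_motor_power_w(2.0, 1.0, 0.5) -&gt; about 24 W at the motor
</code></pre>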
4326
2014-07-29T22:23:41.050
|actuator|force|
<p>When installing a servo or other actuator, I measure the force needed to perform whatever action is needed and find an actuator that can generate <em>more</em> than the necessary force. However, I recently wondered if there's a rule of thumb or guideline for <em>how much</em> overhead is useful, before it becomes a waste.</p> <p>For a (perhaps oversimplified) example, say I have a lever to lift something, and the force needed is 100 Newtons. An actuator that can manage 100 N maximum will have problems with this and stall, with any sort of friction or other imperfections. I would use an actuator that can produce 150 or 200 N - whatever is available and fits the design and budget. After testing, it may become apparent that 200 is overkill, 120 is sluggish, but 150 is good. Other than trial and error, is there a way to measure this, or a rule of approximation?</p> <p>I realize that variables in the mechanics and construction can significantly alter what force might be needed to be considered ideal, but is there a commonly accepted value for simple applications? Something like "If you need x force, install an actuator with x + 20% force."</p>
Is there a rule of thumb for actuator torque overhead?
<p>Look for a makerspace in your area. In Boston there is <a href="http://artisansasylum.com/site/" rel="nofollow">http://artisansasylum.com/site/</a>; they have various CNC machines, laser cutters, etc., which you are allowed to use after some simple, low-cost training.</p>
4329
2014-07-31T09:21:59.710
|cnc|chassis|
<p>For my robotic projects I need some aluminium parts. Currently I am looking for a way to build a chassis including a simple gearbox, so I need relatively high precision. Which options do I have to machine aluminium without investing in expensive tools?</p> <p>This is what I could think of so far.</p> <ul> <li>Design parts in CAD and send them to a third-party company for fabrication. The problem with this is that hobby projects almost never need large quantities, and piece production can still be expensive.</li> <li>Buy cheap tools to work aluminium by hand. I don't know which tools would fit this task best. Moreover, the results might be inaccurate, which is a problem for designs with moving parts.</li> <li>Find someone with a CNC who lets me machine my parts. This would most likely result in very slow prototyping cycles though.</li> </ul> <p>A method that I can do at home with not-too-expensive tools would be perfect, but I welcome any solution.</p>
How to machine aluminium on a low budget?
<p>The image X and Y coordinates will reflect the x and y coordinates on the floor. The lens has distortion; use calibration (e.g. OpenCV's) if high accuracy is needed (it may not be needed in this case).</p> <p>My original answer was misunderstood, so to clarify:</p> <p>The height and width of the bounding box indicate distance (the farther the block, the smaller the image), so use a PID to control speed and slow down when near.</p> <p>Whether the block is left, central, or right in the image gives the angle of the block in the vehicle coordinate frame, from which you calculate the motion angle.</p>
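<p>A minimal sketch of that control idea (added for illustration; the gains and the height-as-distance proxy are placeholders to tune):</p> <pre><code>def steering_command(cx, bb_height, frame_width, frame_height,
                     k_turn=1.0, k_speed=1.0):
    """Turn toward the detected block and slow down as it gets closer.

    cx, bb_height: centroid x and bounding-box height from the blob analysis.
    Returns (speed, turn), both roughly in [-1, 1], to map onto the servos.
    """
    error_x = cx - frame_width / 2.0                 # + means block is to the right
    turn = k_turn * error_x / (frame_width / 2.0)    # steer proportionally
    nearness = bb_height / float(frame_height)       # bigger box = closer block
    speed = k_speed * max(0.0, 1.0 - nearness)       # slow down when near
    return speed, turn
</code></pre>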
4340
2014-08-02T18:00:09.710
|mobile-robot|microcontroller|wheeled-robot|robotc|
<p>The MATLAB code below is used to detect a red colored object, but I want to control a bot to move towards the detected object. I just need a simple algorithm or idea; I will be able to control the servos myself.</p> <p><img src="https://i.stack.imgur.com/x0Jvm.png" alt="Detailed Diagram"></p> <pre><code>%get snapshot
data = imread('image.jpg');
% Now to track red objects in real time
% we have to subtract the red component
% from the grayscale image to extract the red components in the image.
diff_im = imsubtract(data(:,:,1), rgb2gray(data));
%Use a median filter to filter out noise
diff_im = medfilt2(diff_im, [3 3]);
% Convert the resulting grayscale image into a binary image.
diff_im = im2bw(diff_im,0.18);
% Remove all those pixels less than 300px
diff_im = bwareaopen(diff_im,300);
% Label all the connected components in the image.
bw = bwlabel(diff_im, 8);
% Here we do the image blob analysis.
% We get a set of properties for each labeled region.
stats = regionprops(bw, 'BoundingBox', 'Centroid');
% Display the image
imshow(data)
hold on
%This is a loop to bound the red objects in a rectangular box.
for object = 1:length(stats)
    bb = stats(object).BoundingBox;
    bc = stats(object).Centroid;
    rectangle('Position',bb,'EdgeColor','r','LineWidth',2)
    plot(bc(1),bc(2), '-m+')
    a=text(bc(1)+15,bc(2), strcat('X: ', num2str(round(bc(1))), ' Y: ', num2str(round(bc(2)))));
    set(a, 'FontName', 'Arial', 'FontWeight', 'bold', 'FontSize', 12, 'Color', 'yellow');
end
hold off
</code></pre>
Matlab for moving a robot towards the detected block
<p>Have you tried the <code>line()</code> function, which plots a straight line on a figure?</p> <p>You do something like</p> <pre><code>imshow(myImage);
hold on
% X = [x1 x2] and Y = [y1 y2] hold the endpoint coordinates
line(X, Y);
</code></pre> <p>Check <code>help line</code> for details.</p>
4346
2014-08-03T11:45:57.823
|computer-vision|motion-planning|navigation|matlab|
<p><img src="https://i.stack.imgur.com/J61he.jpg" alt="Curve"></p> <p>I am able to locate centroids of each blocks, but i am unable to join two blocks with a line segment by avoiding the obstacle as shown in the figure. Please need help how do i achieve this using matlab.</p>
how to plot a line between two centroids in matlab
<p>Rough estimates of the resonant angular frequency $\hat{\omega}_i$ for joint $i$ can always be obtained from the current configuration of the manipulator.</p> <p>Let's focus on joint $i$ and consider all downstream joints $\left \{j:i&lt;j \le n \right \}$, where $n$ is the number of DOFs of the manipulator. Put the angular positions $\theta_j$ of these latter joints in such a way as to determine a worst-case estimate (see below) of the <em>moment of inertia</em> $\hat{I}_i^{\text{max}}$ computed with respect to joint $i$; that is:</p> <p>$$ \hat{I}_i^{\text{max}}= \max_{\mathbf{\theta} \in \mathbb{R}^{n-i}} I_i(\mathbf{\theta}). $$</p> <p>Then, neglecting the internal dynamics and friction components: $$ \hat{\omega}_i=\sqrt\frac{\hat{k}_i}{\hat{I}_i^{\text{max}}}, $$</p> <p>where $\hat{k}_i$ represents an estimate of the stiffness of the downstream structure. See this <a href="http://en.wikipedia.org/wiki/Torsion_spring#Torsional_harmonic_oscillators" rel="nofollow">link</a> for further details. Here, $\hat{I}_i^{\text{max}}$ can be retrieved through simple geometrical inspections, and also by resorting to tools such as the <a href="http://en.wikipedia.org/wiki/Parallel_axis_theorem" rel="nofollow">Huygens–Steiner theorem</a>.</p> <p>Importantly, since $\hat{\omega}_i$ is inversely proportional to $\hat{I}_i^{1/2}$, we try to maximize $\hat{I}_i$ in order to account for the worst impact (i.e. the lowest $\hat{\omega}_i$) on the control design.</p> <p>Of course, the validity of such an approach remains limited, but it is still worth pursuing, particularly considering that you're not forced to experiment on the real manipulator to get the Bode plot.</p>
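<p>Tying this back to the gain formula in the question, a tiny sketch (added for illustration; the stiffness and inertia values are placeholders you must estimate per joint):</p> <pre><code>import math

def joint_gain(k_stiffness_nm_per_rad, inertia_max_kgm2, a_inertia):
    """Worst-case natural frequency and the question's Kp = 3*a*w^2 rule."""
    w = math.sqrt(k_stiffness_nm_per_rad / inertia_max_kgm2)  # rad/s (lower bound)
    kp = 3.0 * a_inertia * w ** 2
    return w, kp
</code></pre> <p>Using the worst-case (maximum) inertia gives the lowest $\hat{\omega}_i$, hence a conservative gain.</p>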
4360
2014-08-05T10:54:00.240
|control|pid|robotic-arm|
<p>I am currently trying to parametrize the low-level gains of a robotic arm. This arm uses a classical PID for each joint.</p> <p>I am trying to use a method based on computation rather than a trial-and-error/tweaking approach. The method I use considers each joint independently and assumes the system driven by the PID is linear. Hence I infer a transfer function, a characteristic polynomial, and poles, and this gives me gains $K_p$, $K_i$, and $K_d$ for each joint.</p> <p>Now, computed as I did, these gains depend on the natural angular frequency. For example: $$ K_p = 3 a w^2 $$ where $a$ is the inertia and $w$ is the natural angular frequency.</p> <p>Hence my question: <strong>how shall I compute $w$, the natural angular frequency for my system</strong>? Is this an enormous computation involving the geometry and other complex characteristics of the robot, which only a computer can do, or are there simple assumptions to be made which can already give a rough result for $w$?</p> <p>I guess this is a complex computation and this is one of the reasons why PID gains are most often found by trial-and-error rather than by computation. Though I am looking for some more details on the subject to help me understand what is possible and what is not.</p> <p>Kind regards,</p> <p>Antoine</p>
Natural frequency computation (for PID gains computations)
<p>You can either have <span class="math-container">$d_1 \neq 0$</span> or set <span class="math-container">$d_1=0$</span> and embed the vertical displacement in <span class="math-container">$d_3^*$</span>. </p>
4364
2014-08-05T17:03:24.343
|forward-kinematics|dh-parameters|
<p>I'm going through the textbook Robot Modeling and Control, learning about the DH convention and working through some examples. </p> <p>I have an issue with the following example. Given below is an image of the problem, and the link parameter table which I tried filling out myself. I got the same answers, except I believe there should be a parameter d1 representing the link offset between frames 1 and 2. This would be analogous to the d4 parameter. </p> <p><img src="https://i.stack.imgur.com/jc3Ax.png" alt="enter image description here"></p> <p>If anyone could explain why I might be wrong, or confirm that I have it right, that would be great. I hate it when it's me against the textbook lol. </p> <p>Cheers.</p>
Denavit-Hartenberg parameters for SCARA manipulator
<p>Oussama Khatib's potential field algorithms suffer from finding local minima/maxima rather than global ones. As such, you will get trapped in "box canyons" from which the potential field cannot escape. You need to add extra algorithms on top of potential fields to avoid local-minima paths.</p> <p>I did my Master's thesis on artificial neural networks that avoid local minima and find global ones. But there are lots of others to choose from:</p> <ul> <li><em>Non-learning artificial neural network approach to motion planning for the Pioneer robot</em> <ul> <li>DOI: 10.1109/IROS.2003.1250614</li> <li>Available at <a href="http://dx.doi.org/10.1109/IROS.2003.1250614" rel="nofollow">IEEE (paywall)</a></li> <li>Also available at <a href="http://www.academia.edu/1169595/Motion_Planning_of_an_Autonomous_Mobile_Robot_Using_Artificial_Neural_Network" rel="nofollow">academia.edu (non-paywall)</a></li> </ul></li> </ul>
4369
2014-08-06T13:06:13.740
|mobile-robot|
<p>I've been working on my two-wheeled mobile robot, trying to perfect my obstacle avoidance algorithm, which is the Artificial Potential Field method. I also use an Arduino Uno kit. The basic concept of the potential field approach is to compute an artificial potential field in which the robot is attracted to the target and repulsed from the obstacles. The artificial potential field is used due to its computational simplicity. The mobile robot applies a force generated by the artificial potential field as the control input to its driving system. In its computations, the Artificial Potential Field method depends on the distance between robot and goal or target and the distance between the robot and the obstacles affecting it (which can easily be obtained from ultrasonic sensors).</p> <p>I applied the Artificial Potential Field method in a MATLAB simulation and it worked successfully. All I need in the simulation is the current position of the mobile robot and the position of the goal as x, y coordinates (to compute the distance between robot and goal), plus the obstacle positions.</p> <p>The output of the Artificial Potential Field method is the desired angle to avoid obstacles and reach the goal. The method gives the robot the angle that points to the goal, and the robot moves toward that angle. If the robot faces an obstacle in its way (detected from a sensor reading), the Artificial Potential Field method updates the angle to avoid the obstacle and then re-gives the robot the angle pointing to the goal, and so on.</p> <p>The question is: how can I apply the Artificial Potential Field method in the real world? What do I need? Is it easy to do, or is it impossible?</p> <p>I have a Rover 5 with two normal DC motors and two encoders (incremental rotary encoders), one per wheel.</p> <p>Any help or suggestion on the topic will be highly appreciated.</p> <hr> <p><strong>Edit:</strong> Based on the response from Shahbaz.</p> <p>The case is very simple, but first there is something to know: I am constrained by some limitations that I can't overstep. One of them is that the real world should be set up exactly like the simulation. For example, in the simulation I assumed that the robot starts at (0,0) on the x, y axes, and I should place the goal point at, for example, (10,20), feed this point into the Artificial Potential Field method, and then compute the distance between robot and goal (so I don't need any technique to determine the position of the goal). I don't know if I can apply that.</p> <p>The second constraint is that I should use the wheel encoders to determine the current position of the mobile robot and its orientation, using a calculation formula (something like this <a href="http://rossum.sourceforge.net/papers/DiffSteer/" rel="nofollow">here</a>), even if that will be inaccurate.</p> <p>I have a Rover 5 with two normal DC motors and two encoders (incremental rotary encoders), one per wheel. Each encoder has four wires; I don't know how to deal with them yet, how to translate the encoder pulses, or how to work out the x, y position of my robot based on the shaft encoder data.</p> <p>I am still searching for ….</p>
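<p>For the last point, a minimal dead-reckoning sketch (added for illustration; the constants are placeholders to measure on the Rover 5, and it assumes the quadrature signals have already been decoded into signed tick counts, e.g. by an encoder library or pin-change interrupts):</p> <pre><code>import math

TICKS_PER_REV = 1000.0   # encoder ticks per wheel revolution (placeholder)
WHEEL_RADIUS = 0.03      # m (placeholder)
TRACK_WIDTH = 0.19       # m, distance between the two wheels (placeholder)

x = y = theta = 0.0      # robot pose, starting at (0, 0) as in the simulation

def update_pose(d_ticks_left, d_ticks_right):
    """Update (x, y, theta) from encoder tick deltas since the last call."""
    global x, y, theta
    dl = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    d = (dl + dr) / 2.0                  # distance travelled by the robot centre
    theta += (dr - dl) / TRACK_WIDTH     # heading change
    x += d * math.cos(theta)
    y += d * math.sin(theta)
</code></pre> <p>The (x, y) from this dead reckoning is then what the potential field method compares against the goal point, exactly as in the simulation.</p>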
Artificial Potential Field navigation
<p>There are basically two types of components in a network from the point of view of syskit: functional components and device drivers.</p> <p>Functional components behave as pure functions. It means that two components whose inputs are connected to the same source and that have the same configuration should produce the exact same output. Most components are like this, and that's how syskit works: it is able to find out which components are identical and merge them.</p> <p>Devices are there to provide entry/exit points for the data, i.e. they tell syskit that the component is not functional but pulls, pushes, or both pushes and pulls data from "the outside world". In a robot, these are provided by hardware, which is where the name comes from. Note that, for instance, a random generator would also need to be declared as a device even though there is no real "device" below.</p> <p>If you want to force syskit to instantiate the same component twice even though they have the same input connections, you therefore have to either make them device drivers and tie them to different devices, or assign different configurations. Note that they can be dummy configurations, i.e.</p> <pre><code>--- name:default
# Real configuration goes there
--- name:foo
# Empty dummy configuration to force double-instance
--- name:bar
# Empty dummy configuration to force double-instance

define 'foo', Composition.use(Component.use_conf('foo', 'default'))
define 'bar', Composition.use(Component.use_conf('bar', 'default'))
</code></pre> <p><strong>About the deployments</strong>: what I have described so far is how two components get merged or not. Another issue is that, once two instances of the same component model end up in the final network, they have to be deployed. Since they are from the same component model, syskit cannot choose for you (see P.S. below for more details). There is actually a tutorial on that later on in the tutorial series you are currently doing.</p> <p>P.S.: actually syskit <strong>could</strong> choose for you. The issue then is that you would not be able to easily understand what each component does when you look at rock-display. Indeed, the same component instance (let's say task1) could have different roles in the same network, depending on e.g. the order in which syskit would have assigned the tasks. Looking at a running system in rock-display and/or at logs with rock-replay would be hell. An integration of Syskit and Rock live display / log tools would solve that nicely, but we're not there yet.</p>
4370
2014-08-06T13:10:57.310
|rock|syskit|
<p>I've been going through the Syskit tutorials at rock-robotics.org. In the tutorials, e.g. <a href="http://rock-robotics.org/stable/documentation/system_management_tutorials/200_first_composition.htm" rel="nofollow">First composition</a>, there are two different components declared with:</p> <pre><code>add Controldev::JoystickTask, :as =&gt; "cmd"
add RockTutorial::RockTutorialControl, :as =&gt; "rock"
</code></pre> <p>I was wondering how I could add an additional <code>RockTutorialControl</code> into the composition, so that the instantiation would then create two separate instances of the same component?</p> <p>I've tried something like</p> <pre><code>add RockTutorial::RockTutorialControl, :as =&gt; "foo"
</code></pre> <p>but this apparently isn't the way to go. The <code>syskit instanciate</code> command shows only one instance of RockTutorialControl, but gives two roles to it (rock and foo). What is the meaning of "role" in this context?</p> <p>I've noticed that the tutorial explains how to make multiple instances of the same component when we're declaring our components as <a href="http://rock-robotics.org/stable/documentation/system_management_tutorials/800_devices.html" rel="nofollow">Devices</a>. But how can this be done with components that should not be considered devices?</p> <p>BR, Mathias</p> <p><strong>EDIT:</strong></p> <p>This was my first question on StackExchange, and I don't know what the policy is for adding additional information to the original question, but here we go:</p> <p>It seems that both the deployment and the configuration need to be different when there are two instances of the same component. I did some small-scale testing with two components:</p> <pre><code>using_task_library 'foobar_user'
using_task_library 'foobar_proxy'

module FooModule
  class FooControl &lt; Syskit::Composition
    add FoobarUser::Task, :as =&gt; "producer"
    add FoobarProxy::Task, :as =&gt; "proxy"
    add FoobarUser::Task, :as =&gt; "consumer"

    producer_child.connect_to proxy_child
    proxy_child.connect_to consumer_child
  end
end
</code></pre> <p>where <code>FoobarUser::Task</code> has an input &amp; output port of /std/string. <code>FoobarProxy::Task</code> has corresponding i&amp;o ports. <code>FoobarUser::Task</code> also has two configurations, called 'default' and 'other', and two deployments, 'foo_depl' and 'bar_depl'.</p> <p>In order to create a "pipeline" where data flows producer ==&gt; proxy ==&gt; consumer, I made this define line:</p> <pre><code>define 'conf_and_depl', FooModule::FooControl.use(
  'producer' =&gt; FoobarUser::Task.use_conf('other').prefer_deployed_tasks(/foo_depl/),
  'consumer' =&gt; FoobarUser::Task.use_conf('default').prefer_deployed_tasks(/bar_depl/))
</code></pre> <p>and then instantiated the network with</p> <pre><code>syskit instanciate scripts/03-nwtest.rb conf_and_depl_def!
</code></pre> <p>The component instantiation failed if either the <code>use_conf</code> or the <code>prefer_deployed_tasks</code> clause was left out. In both cases the produced error was "cannot deploy the following tasks... ...multiple possible deployments, choose one with #prefer_deployed_tasks".</p>
rock/syskit: How to add multiple instances of same component into a network
<p>The value of $\alpha$ is just some threshold Mahalanobis distance. </p> <p>Let's say you have four entries in your map. You take a measurement, then you calculate four predicted measurements (one for each map entry). You can calculate the Mahalanobis distance between your measurement and each of your predictions. You now have four Mahalanobis distances. The landmark that produced the smallest Mahalanobis distance is most likely the landmark that you have measured. However, what if this Mahalanobis distance is really large? Well then it's likely that your measurement actually was of a new, previously unmapped landmark. The value of $\alpha$ is simply the threshold Mahalanobis distance where you decide whether your smallest Mahalanobis distance is indeed a measurement of a landmark in your map, or it is actually a new landmark.</p> <p>If you decide it is a new landmark (the smallest Mahalanobis distance was larger than the threshold), you need to add it to your state. This step is not shown in the algorithm you posted. See <a href="https://robotics.stackexchange.com/a/5173/4320">my answer to another question</a> on how to do that.</p>
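<p>As a rough sketch of the gating step (this is illustrative C++, not code from the book): for a 2-D range/bearing innovation, the squared Mahalanobis distance and the $\alpha$-gate could be computed like this, where <code>nu</code> is the innovation $z - \hat{z}$ and <code>S</code> is the innovation covariance $\Psi_k$:</p> <pre><code>// Sketch only: 2x2 case of d_M^2 = nu^T S^-1 nu, plus the alpha gate.
double mahalanobisSq(const double nu[2], const double S[2][2]) {
    double det = S[0][0] * S[1][1] - S[0][1] * S[1][0];
    // inverse of a 2x2 matrix, written out explicitly
    double a =  S[1][1] / det, b = -S[0][1] / det;
    double c = -S[1][0] / det, d =  S[0][0] / det;
    return nu[0] * (a * nu[0] + b * nu[1]) + nu[1] * (c * nu[0] + d * nu[1]);
}

// Pick the landmark with the smallest distance; if even that one is
// above alpha, the measurement is treated as a new landmark.
int associate(const double d2[], int n, double alpha, bool &amp;isNew) {
    int best = 0;
    for (int k = 1; k &lt; n; ++k)
        if (d2[k] &lt; d2[best]) best = k;
    isNew = (d2[best] &gt; alpha);
    return best;
}
</code></pre>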
4392
2014-08-12T00:11:03.587
|slam|ekf|mapping|
<p>So far I have done EKF Localization (known and unknown correspondences) and EKF SLAM for only the known correspondences that are covered in <a href="http://www.probabilistic-robotics.org" rel="nofollow">Probabilistic Robotics</a>. Now I have moved to EKF SLAM with unknown correspondences. In the algorithm on page 322, </p> <blockquote> <p>16. &nbsp; &nbsp; $\Psi_{k} = H^{k} \bar{\Sigma}[H^{k}]^{T} + Q$</p> <p>17. &nbsp; &nbsp; $\pi_{k} = (z^{i} - \hat{z}^{k})^{T} \Psi^{-1}_{k}(z^{i} - \hat{z}^{k})$</p> <p>18. &nbsp; &nbsp; $endfor$</p> <p>19. &nbsp; &nbsp; $\pi_{N_{t+1}} = \alpha$</p> <p>20. &nbsp; &nbsp; $j(i) = \underset{k}{argmin} \ \ \pi_{k}$</p> <p>21. &nbsp; &nbsp; $N_{t} = max\{N_{t}, j(i)\}$</p> </blockquote> <p>I don't understand line 19. In the book, page 323, the authors state </p> <blockquote> <p>Line 19 sets the threshold for the creation of a new landmark: A new landmark is created if the Mahalanobis distance to all existing landmarks in the map exceeds the value $\alpha$. The ML correspondence is then selected in line 20. </p> </blockquote> <p>What is $\alpha$ in line 19, and how is it computed? Also, what is the Mahalanobis distance? I did some research on the Mahalanobis distance, but I still can't understand its role in EKF SLAM. </p> <hr> <p>Edit: I found another book in my university's library, <a href="http://rads.stackoverflow.com/amzn/click/B00BEZNHGY" rel="nofollow">Robotic Navigation and Mapping with Radar</a>. The authors state </p> <blockquote> <p>The Mahalanobis distance measure in SLAM is defined as $d^{2}_{M}(z^{j}_{k}, \hat{z}^{i}_{k})$, which provides a measure on the spatial difference between measurement $z^{j}_{k}$ and predicted feature measurement $\hat{z}^{i}_{k}$, given by $$ d^{2}_{M}(z^{j}_{k}, \hat{z}^{i}_{k}) = (z^{j}_{k} - \hat{z}^{i}_{k})^{T} S^{-1}_{k}(z^{j}_{k}, \hat{z}^{i}_{k}) $$ This value has to be calculated for all possible $(z^{j}_{k}, \hat{z}^{i}_{k})$ combinations, for which $$ d_{M}(z^{j}_{k},\hat{z}^{i}_{k}) \leq \alpha $$ Often referred to as a validation gate.</p> </blockquote> <p>This leaves me with the same question: what is $\alpha$?</p>
EKF SLAM and Mahalanobis distance?
<p>A "double shaft motor" has "two shafts" that exit the motor case. If you turn the motor on so the shaft spins and keep the motor case rigidly bolted in one position, there is one place where you can stand and see one shaft sticking out of the motor and pointed directly at you and turning clockwise. Meanwhile, there is another position someone else can stand on the opposite side of the motor; that person sees the shaft sticking out of the motor and turning counterclockwise. (Some people prefer to think of those two shafts as two ends of a single shaft -- they have a good point, since both shafts are part of a single rigid cylinder of metal).</p> <p><img src="https://i.stack.imgur.com/t3kCO.jpg" alt="a photo of a double shaft motor with gears on both output shafts"></p> <p>(double shaft motor <a href="http://www.myrcstation.com/webshaper/store/viewProd.asp?pkProductItem=5797" rel="nofollow noreferrer">http://www.myrcstation.com/webshaper/store/viewProd.asp?pkProductItem=5797</a> )</p> <p>A "single shaft motor" has a shaft that only sticks out in one direction.</p> <p>Often the only difference between some "double shaft motor" and a closely related "single shaft motor" is how far out the "other" shaft sticks out. Often there is a hole in the case of a single shaft motor, where the other shaft would have stuck out, if it were a double shaft motor. In the single-shaft version of that motor, you can typically you can see, through that hole, the end the shaft turning -- but the shaft doesn't stick out far enough to do anything useful.</p> <p>Often people put some sort of drive wheel on one shaft of a double shaft motor, and put a position-measuring encoder on the other shaft.</p> <p>Most applications only require a single shaft motor. Often such single-shaft applications work fine with a double shaft motor (with the unused shaft not connected to anything). So when I see that it's the same price either way, I generally choose a double shaft motor -- that way I can use that motor in both single-shaft applications and double-shaft applications.</p> <p>.... battery operated (BO) motor ...</p>
4394
2014-08-12T19:33:27.200
|motor|line-following|
<p>I am trying to make a line follower robot and I need help regarding the type of DC motor to use. There is a single-shaft BO motor and a double-shaft BO motor. Can anyone help me understand what the difference between the two is? Here's the link for the single-shaft BO motor: <a href="http://www.evelta.com/industrial-control/motors-and-accessories/100-rpm-l-type-single-shaft-bo-motor" rel="nofollow">http://www.evelta.com/industrial-control/motors-and-accessories/100-rpm-l-type-single-shaft-bo-motor</a></p> <p>Double-shaft BO motor: <a href="http://www.evelta.com/industrial-control/motors-and-accessories/100-rpm-l-type-double-shaft-bo-motor" rel="nofollow">http://www.evelta.com/industrial-control/motors-and-accessories/100-rpm-l-type-double-shaft-bo-motor</a></p>
Single-shaft vs Double-shaft motors
<p>You're likely getting something analogous to "<a href="http://www.opencircuits.com/Debouncing_Microcontroller_Inputs" rel="nofollow">contact bounce</a>". When you slowly rotate the wheel, there are regions where channel A is Lo for a while, then channel A is Hi for a while. While it would be nice if the Arduino only saw one low-to-high transition between those regions, often a encoder will oscillate a few or a dozen times during the brief period when the encoder is balanced on the edge of the transition.</p> <p>Have you noticed that the original code always increments the counter, never decrements it? If you had code that handled both forward rotation (clockwise; incrementing the counter) and reverse rotation (counterclockwise; decrementing the counter), it would automatically fix the "bounce" problem.</p> <p>One popular approach: while B is low, low-to-high transitions on A increment the count, while high-to-low transitions decrement the count. Then if A bounces a few extra times while B is low -- instead of a crisp Lo-Hi (+1), it does Lo-Hi-Lo-Hi-Lo-Hi (+1, -1, +1, -1, +1) -- we still get the correct final count. In either case the final result is one count more. (While B is high, low-to-high transitions on A decrement the count, etc.)</p> <p>A quick Google search <a href="https://www.google.com/search?q=arduino+quadrature+encoder+library" rel="nofollow">https://www.google.com/search?q=arduino+quadrature+encoder+library</a> shows many quadrature encode libraries for the Arduino. Many of them are reviewed at <a href="http://playground.arduino.cc/Main/RotaryEncoders" rel="nofollow">http://playground.arduino.cc/Main/RotaryEncoders</a> .</p> <p>Could you maybe try one of those libraries and tell us how well it worked (or didn't work) ?</p>
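<p>To make that concrete, here is a minimal Arduino-style sketch of the "B decides the direction" rule described above. The pin numbers match the wiring in the question, and <code>digitalPinToInterrupt</code> assumes a reasonably recent Arduino IDE (older ones would use <code>attachInterrupt(0, ...)</code> for pin 2):</p> <pre><code>// On every change of channel A, channel B picks the direction,
// so bounce on A cancels out (+1, -1, +1, ...) instead of accumulating.
volatile long count = 0;

void setup() {
  pinMode(2, INPUT_PULLUP);   // channel A
  pinMode(3, INPUT_PULLUP);   // channel B
  attachInterrupt(digitalPinToInterrupt(2), onChanA, CHANGE);
  Serial.begin(115200);
}

void onChanA() {
  // A rising while B low, or A falling while B high: one direction;
  // the opposite combinations: the other direction.
  if (digitalRead(2) == digitalRead(3)) count--;
  else count++;
}

void loop() {
  noInterrupts(); long c = count; interrupts();
  Serial.println(c);          // print outside the ISR, never inside it
  delay(200);
}
</code></pre>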
4395
2014-08-12T19:37:10.910
|mobile-robot|quadrature-encoder|
<p>Simply, I have a Rover 5 with 2 DC motors and 2 quadrature encoders; I just want to use the encoders to measure the distance travelled by each wheel.</p> <p>To start with, I just want to determine the total counts per revolution. I read the article about quadrature encoders from <a href="http://letsmakerobots.com/node/24031" rel="nofollow">this broken link</a>.</p> <p>In the Rover 5, each encoder has four wires: red (5V or 3.3V), black (Ground), yellow (Signal 1) and white (Signal 2). I connected each wire in its right place on the Arduino Uno board, using the circuit:</p> <ul> <li>rotary encoder ChannelA attached to pin 2 </li> <li>rotary encoder ChannelB attached to pin 3</li> <li>rotary encoder 5V attached to 5V</li> <li>rotary encoder ground attached to ground </li> </ul> <p>For one encoder, I tested the code below to determine the total counts or ticks per revolution; the first program uses the loop and the second uses an interrupt.</p> <p>Unfortunately, when I run each program separately, rotating the wheel 360 degrees by hand, the outputs of these two programs are just "gibberish" and I don't know where the problem is. Could anyone help?</p> <p>Arduino programs posted below.</p> <p>First program:</p> <pre><code>// Constants
const int ChanAPin = 2;    // pin for encoder ChannelA
const int ChanBPin = 3;    // pin for encoder ChannelB

// Variables
int encoderCounter = -1;   // counter for the number of state changes
int ChanAState = 0;        // current state of ChanA
int ChanBState = 0;        // current state of ChanB
int lastChanAState = 0;    // previous state of ChanA
int lastChanBState = 0;    // previous state of ChanB

void setup() {
  // initialize the encoder pins as inputs:
  pinMode(ChanAPin, INPUT);
  pinMode(ChanBPin, INPUT);
  // Set the pullup resistors
  digitalWrite(ChanAPin, HIGH);
  digitalWrite(ChanBPin, HIGH);
  // initialize serial communication:
  Serial.begin(19200);
  Serial.println("Rotary Encoder Counter");
}

void loop() {
  // read the encoder input pins:
  ChanAState = digitalRead(ChanAPin);
  ChanBState = digitalRead(ChanBPin);
  // compare both channel states to the previous states
  if (ChanAState != lastChanAState || ChanBState != lastChanBState) {
    // if the state has changed, increment the counter
    encoderCounter++;
    Serial.print("Channel A State = ");
    Serial.println(ChanAState);
    Serial.print("Channel B State = ");
    Serial.println(ChanBState);
    Serial.print("State Changes = ");
    Serial.println(encoderCounter, DEC);
    // save the current state as the last state,
    // for next time through the loop
    lastChanAState = ChanAState;
    lastChanBState = ChanBState;
  }
}
</code></pre> <p>The second program (with interrupts):</p> <pre><code>static long s1_counter = 0;
static long s2_counter = 0;

void setup() {
  Serial.begin(115200);
  attachInterrupt(0, write_s1, CHANGE); /* attach interrupt to pin 2 */
  attachInterrupt(1, write_s2, CHANGE); /* attach interrupt to pin 3 */
  Serial.println("Begin test");
}

void loop() {
}

void write_s1() {
  s1_counter++;
  Serial.print("S1 change:");
  Serial.println(s1_counter);
}

void write_s2() {
  s2_counter++;
  Serial.print("S2 change:");
  Serial.println(s2_counter);
}
</code></pre>
Counts of Quadrature Encoder
<p>These are two different things which share a lot in common.</p> <p>Simply put, CV's task is to perform automatic image processing and then display the result to humans.</p> <p>MV has to do similar things, but you do not get an image as the result; you get data. For example: "There are 6 apples in this image" or "The image shows that the product has a malfunction".</p> <p>Sources: <a href="http://www.visiononline.org/vision-resources-details.cfm/vision-resources/Computer-Vision-vs-Machine-Vision/content_id/4585" rel="nofollow">http://www.visiononline.org/vision-resources-details.cfm/vision-resources/Computer-Vision-vs-Machine-Vision/content_id/4585</a></p>
4398
2014-08-13T03:00:14.923
|computer-vision|
<p>I'm trying to understand the core differences between the two topics. Is one simply a newer term? Connotations of automobile vs automation? Something with a screen vs without?</p> <p>I've only ever heard the term <code>computer vision</code> (tagged).</p>
Machine Vision vs Computer Vision?
<p>There are two main ways to power an Arduino: with 5 volts, or with 6.5 to 12 volts. Since your case falls under the 6.5 to 12 volt option, there are two main ways you can get power into an Arduino. The barrel jack can be plugged into a 6.5-12 V source, which works well. I have a habit of chopping the connectors off wall warts; then I use the power supplies for other things (or sentence them to a life of collecting dust). If you take this route, make sure the center pin is being fed positive voltage and the outside is ground. The easier option here is probably to feed your 6.5-12 V into the pin labeled Vin. It is right next to one of the grounds, and is electrically connected to the barrel jack.</p> <p>If your battery's JST connector is the right size, you might be able to use 0.1 inch headers to plug it into your Arduino (unlikely but possible). Otherwise I suggest you buy a JST breakout board. You said that you want to be able to take out and charge your battery, which rules out soldering stuff to it. Unless you want to put a barrel jack on your battery, I suggest investing in a JST breakout.</p> <p>The battery itself looks like a LiPo (lithium-ion polymer) or Li-ion (lithium-ion) battery, which means that you need to buy a charger for it. Lithium batteries have a bad habit of exploding when mistreated, so DIY chargers are for the suicidal. Just from the picture it looks like a Li-ion, but that's just a guess. You should really figure out what type of battery it is before going further with it. When you do figure out what type it is, buy a charger for it.</p>
4413
2014-08-15T18:30:02.483
|arduino|battery|
<p>I want to power my Arduino Uno, and I know I can do that either by connecting it with USB to a PC or with a DC power supply. But I want to connect it to a battery source (kindly see the image below), and I know it's a silly question, but how do I do it? The battery connector is not the regular DC jack but the one that's found in RC toys. So how do I power my Arduino with that battery? And also, how do I connect it to a DC power supply adapter to charge it once it's discharged? Please also mention the specifications of the DC power supply adapter that is to be used while charging this battery. <img src="https://i.stack.imgur.com/IFA2O.jpg" alt="Battery Image"></p>
Arduino Power Supply
<p>With Flightgear, which is open source, you can define and fly custom aircraft, with custom algorithms handling the automatic flight controller.</p> <p>Flightgear allows flying all around the world, and you could add / integrate sensor models into it.</p> <p>It also has quite a flexible API for integrating with existing code, models, and algorithms.</p>
4415
2014-08-16T04:15:14.337
|quadcopter|artificial-intelligence|machine-learning|
<p>Does anyone know of a robotics developer environment ideal for testing AI programs for drones (e.g. quadrocopters, planes, helicopters, etc.)? I would like something like Microsoft Robotics Developer Studio that includes a virtual environment (such as an outdoor environment with gravity, wind, etc.) to test out flight dynamics. I would like the options to add sensors to the virtual drone, such as gps, altimeter, gyros, etc. that the AI program can then use to steer the drone.</p>
Virtual Testing Environment for Drones
<p>Something I've heard said many times (by a WHOI scientist, I think) is that</p> <blockquote> <p>The problem with models is that they work.</p> </blockquote> <p>In other words, the only things that can go wrong in a model are <em>the things you put into the model ahead of time</em>. Your model may give you perfect results, but in the real world any number of strange things can go wrong; the world is a very complicated place. </p> <p>Watch this <a href="https://www.youtube.com/watch?v=yNu33w8Fmm4" rel="nofollow">entertaining presentation by Katy Levinson</a> for a few examples.</p>
4420
2014-08-19T21:08:30.477
|localization|theory|
<p>Sebastian Thrun says in his paper on particle filters that no model, however detailed, can fully represent the complexity of even the simplest robotic environment. What does he mean by this? Can someone please elaborate?</p>
Why models are not perfect to represent robotic environments?
<p>There is no "term" for this arrangement of wheels, just like there is no term for this arrangement:</p> <pre><code> // // </code></pre> <p>other than "turning right"</p>
4442
2014-08-27T06:35:10.940
|mobile-robot|wheeled-robot|wheel|
<p>I have a robotic simulator that enables a 6 wheel rover to perform spot turn. To prepare the rover to spot turn, I have to arrange/align the wheels in such a fashion:</p> <pre><code>&lt;front side&gt; // \\ || || \\ // &lt;rear side&gt; </code></pre> <p>What is the technical name of it? Circular wheel arrangement? Circular alignment? </p>
What is the technical name when robot wheels are aligned to perform spot turn?
<p>Multiple channels are time-sliced onto "one wire". The real wire count is usually three: typically there is a positive voltage wire (which may not be there, depending on whether both ends have their own power or not) and always a shared ground wire.</p> <p>Perhaps 40-100 times per second, the transmitter broadcasts the settings of all controls which map to servos on something. At the beginning of each broadcast, the voltage level on the output wire (the "one wire") is held low for a syncing low pulse. After that, each servo gets a time slot. Servo #1 gets the first time slot, servo #2 the second, and so on.</p> <p>Hope the illustration is clear. Although my drawing only shows 1.0, 1.5 and 2.0 ms time pulses, any value between 1.0 and 2.0 is valid and correlates to "proportional" values at the servo (hence the "proportional" term once used to describe these).</p> <p><a href="https://i.stack.imgur.com/cxR3x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cxR3x.png" alt="Basic RC Waveform"></a></p>
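<p>A rough Arduino-style sketch of decoding such a frame on one pin (the pin number, the number of channels and the 2.5 ms sync threshold are assumptions that depend on the transmitter): the time from rising edge to rising edge is the channel value, and a gap longer than the threshold marks the start of a new frame.</p> <pre><code>volatile unsigned long lastRise = 0;
volatile int channel = 0;
volatile unsigned int pulse[8];   // decoded channel widths in microseconds

void onRise() {
  unsigned long now = micros();
  unsigned long width = now - lastRise;
  lastRise = now;
  if (width &gt; 2500) {             // long gap = sync pulse: a new frame starts
    channel = 0;
  } else if (channel &lt; 8) {
    pulse[channel++] = width;     // 1000..2000 us, the "proportional" value
  }
}

void setup() {
  pinMode(2, INPUT);
  attachInterrupt(digitalPinToInterrupt(2), onRise, RISING);
}

void loop() {}
</code></pre>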
4456
2014-08-28T18:14:01.937
|rcservo|pwm|
<p>How are several channels multiplexed down to a single physical wire? If two channels are transmitting the same value in the same frame, won't there be an overlap of the pulses?</p>
Pulse Position Modulation as used in RC controls
<p>I think you're confusing two words that are spelled almost identically:</p> <p>Altitude: (with an L) is the distance from the vehicle to the earth. It is given in terms of meters.</p> <p>Attitude: (with a T) refers to the orientation of the vehicle with respect to a reference frame. It is usually given in terms of roll, pitch, and yaw, which are all essentially angles of rotation along the x, y, or z axis.</p>
4465
2014-08-29T23:27:54.503
|imu|
<p>I have a big misconception about yaw and attitude.</p> <p>Don't both represent "how far is the quad from earth"?</p> <p>Also, if you could, please post how to calculate them from an IMU (gyro + accelerometer + magnetometer).</p>
What's the difference between Yaw and Attitude in Quad Rotor
<p>When talking about vehicles (such as aircraft), <strong>attitude</strong> is just a fancy word for "orientation": the combination of yaw, pitch, and roll. These would be easy to calculate if the plane was just standing still; you'd use a compass to get the yaw, and a <a href="http://en.wikipedia.org/wiki/Plumb_bob" rel="nofollow">plumb bob</a> to measure the pitch and roll. However, the acceleration or deceleration of the aircraft would severely alter these measurements. </p> <ul> <li><a href="http://www.sensorwiki.org/doku.php/sensors/gyroscope" rel="nofollow">Gyroscopes measure angular velocity</a> but can't measure position, velocity, or acceleration. (Fortunately, they aren't affected by it either.)</li> <li><a href="http://www.sensorwiki.org/doku.php/sensors/accelerometer" rel="nofollow">Accelerometers</a> can measure the force on an object (either real forces like gravity or perceived forces like centrifugal force). This measurement is often integrated to estimate velocity, or double-integrated to estimate position (both of which become increasingly inaccurate with time).</li> <li><a href="http://www.sensorwiki.org/doku.php/sensors/compass_magnetoresistive" rel="nofollow">Magnetometers</a> measure the magnitude and direction of magnetic fields.</li> </ul> <p>Regarding the other terms: IMUs produce some subset of MARG data (some have more sensors than others). MARG data is used by an AHRS system to compute attitude (roll, pitch, yaw). MARG data (optionally combined with other sensor data) is used by an INS system to compute both attitude and position. </p>
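<p>As a toy illustration of how the sensors in the list above are typically fused (this is a generic single-axis complementary filter, not code from any particular AHRS; the 0.98/0.02 weighting and the axis convention are arbitrary choices):</p> <pre><code>#include &lt;cmath&gt;

// Integrate the gyro (smooth but drifting) and slowly pull the estimate
// toward the accelerometer's gravity direction (noisy but drift-free).
float pitch = 0.0f;  // radians

void updatePitch(float gyroRateY, float ax, float az, float dt) {
    float pitchFromAccel = std::atan2(-ax, az);   // gravity direction
    pitch = 0.98f * (pitch + gyroRateY * dt)      // gyro integration
          + 0.02f * pitchFromAccel;               // accelerometer correction
}
</code></pre>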
4467
2014-08-30T07:25:00.390
|navigation|
<p>It's been a while since I started reading about INS, orientation and so on for quadrotors.</p> <p>I came across the following terms: AHRS - attitude - yaw, pitch and roll - MARG sensors.</p> <p>I know, for example, how to calculate yaw, pitch and roll, but is that related to attitude?</p> <p>What is attitude anyway, and how is it calculated?</p> <p>AHRS ("Attitude and Heading Reference System"): is it formed from yaw, pitch and roll?</p> <p>MARG (Magnetic, Angular Rate, and Gravity): how is it related to the other terms?</p> <p>What about INS (Inertial Navigation Systems)?</p> <p>My questions here are about these concepts and their meaning: how they cooperate with each other, how they are calculated, and which sensors suit which purpose.</p>
Need to clear some concepts: AHRS - Attitude - Yaw,Pitch and Roll - MARG sensors -INS
<p>In the photoshop sense, image resolution refers to the quality of the printed image: how much physical space each image pixel will take up when it is printed.</p> <p>In the computer vision sense, it's reversed: how many pixels that each unit of physical space takes up when it is captured by the camera. (This measurement is entirely dependent on the distance between the camera and the subject.)</p>
4469
2014-08-30T16:43:36.747
|computer-vision|
<p>I read somewhere that in the case of photoshop for example, the size refers to the number of pixels an image contains, but resolution involves the pixel's size, I don't know whether this definition goes for all the other fields. In computer vision, what's the difference between image size and image resolution?</p>
image size vs image resolution
<p>After reading the AVR and CrossPack docs, I've found it is really easy to do this.</p> <p>To set up the AVR development environment, do the following:</p> <ol> <li>Install <a href="https://developer.apple.com/xcode/" rel="nofollow">xCode</a> on your Mac; choose the latest version.</li> <li>Now install <a href="http://www.obdev.at/crosspack/" rel="nofollow">CrossPack</a>. </li> </ol> <p>This will do all the environment setup. CrossPack installs the required libs to support USBasp as well. </p> <p>To create a project using xCode:</p> <ol> <li>Create a new project</li> <li>Select "xAVR project" on the next screen</li> <li>Next, set the MCU you are using, the programmer to USBasp, and the frequency as needed.</li> </ol> <p>There you go. The first build may fail, but try again and it will work smoothly. </p> <p>You can use <strong>avrdude</strong> directly from the Terminal:</p> <pre><code>avrdude -c usbasp -p &lt;partno&gt; -P usb -v
</code></pre>
4472
2014-08-31T11:08:01.753
|usb|embedded-systems|avr|
<p>I am new to embedded development, starting with <strong>AVR programming using C</strong>. I am working on <strong>Mac OS 10.9.4</strong>; so far I am using <strong>avrdude</strong> and <strong>xCode</strong> as the IDE. It works very well; for now I am testing my code using <strong>Proteus</strong>.</p> <p>But now I want to burn my <strong>.hex</strong> to an <strong>AVR ATmega16</strong> board. I have a <strong>USBasp</strong>, which I am able to connect, and it lights up the board. After searching on the internet, I think my Mac is not detecting the board. I have checked the <strong>/dev</strong> directory, but no USB device is found. </p> <p>So I am not sure what to do next, or how to make the Mac detect my board and burn my .hex to it. I've found this: <a href="http://www.fischl.de/usbasp/" rel="nofollow">http://www.fischl.de/usbasp/</a>, but I have no idea how to use it, or whether it's required or not.</p> <p>So the question stands: <strong>how do I make a Mac detect an AVR board using USBasp and burn a program to it?</strong></p> <p>FYI: I've installed <strong>CrossPack</strong> on the Mac. </p>
how to make Mac detect AVR board using USBasp and burn program to it?
<p>It is pretty common to take a simple and cheap solenoid valve and use PWM at the input.</p> <p>TI has a pretty nice driver (DRV102) which takes an analog input and provides enough power to switch most valves. You don't even need an MCU; a variable resistor is enough. Then just take a cheap valve and the job is done.</p>
4474
2014-08-31T16:47:15.497
|otherservos|valve|
<p>I am trying to build a servo-controlled water valve. Max pressure 150 psi, valve size 1/2".</p> <p>Can anyone recommend a suitable 1/4-turn valve, either ceramic, ball valve, or anything else that is <em>easy</em> to turn, even under pressure? It must require very little torque to turn, so a standard servo can rotate it with a small lever attached.</p>
Servo controlled valve
<p>An aircraft's <a href="http://en.wikipedia.org/wiki/Flight_dynamics_(fixed-wing_aircraft)" rel="nofollow">attitude</a> can be calculated from an IMU, but a full-fledged <a href="http://en.wikipedia.org/wiki/Attitude_and_heading_reference_system" rel="nofollow">Attitude and Heading Reference System (AHRS)</a> needs a bit more data -- magnetometers and GPS sensors can help. Your question is a little vague as to what you're hoping to accomplish.</p> <p>Here is <a href="http://www.camelsoftware.com/firetail/blog/c/imu-maths/" rel="nofollow">an article describing some of the code you need in order to do this</a> with <a href="http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation" rel="nofollow">quaternions</a>, but in general there's not much I can suggest besides searching for "IMU attitude estimation".</p>
4477
2014-08-31T20:27:26.573
|imu|
<p>How do I calculate attitude from an IMU? For example, what are the mathematical equations?</p>
How to calculate Attitude from IMU?
<p>The problem is that you cannot control both the voltage and the current. You apply a voltage, and the motor draws whatever current it wants (subject to your ability to supply that current). Alternatively, you make a current controller which automatically adjusts the voltage to maintain the desired current.</p> <p>An analogy would be pushing an object through honey. Voltage is equivalent to the force you're applying, and current is equivalent to the speed of the object.</p> <p>The object moves at a speed which depends on how hard you're pushing, and the dimensions of the object. You cannot choose to push gently and quickly, and you cannot choose to push really hard and slowly. If you want the object to move at a certain speed, you have no choice but to adjust the amount of force you apply until it moves at that speed. If it's moving too fast, you reduce the force. Too slow, and you increase the force.</p> <p>This is how a motor is controlled. PWM 'magic' is just a way to control the voltage that doesn't cause the voltage controller to get really hot. (The alternative is a linear voltage source, which will get really hot).</p> <p>Before we get into what's happening in the motor controller, it would be worth looking at a different graph:</p> <p><img src="https://i.stack.imgur.com/yDxYC.gif" alt="Motor current torque graph"></p> <p>Here we can see that the torque produced by the motor is purely a function of the current flowing through the windings, and it's pretty linear. If you want to produce a certain torque at the motor, all you need to do is look up on the graph for the current required, then tell your current controller to deliver this current. It does this by constantly measuring the actual current, and adjusting the voltage at the motor (using PWM magic). </p> <p>Now we have a very nice situation for our robot's controller. Assuming a world with no friction, motor torque is proportional to acceleration. If you can control the acceleration, then you can easily control the velocity and position of the motor.</p> <p>The motor position controller knows the trajectory it needs from the motor, and can calculate how much torque it needs at every point during this trajectory (because it knows the acceleration at every point in the trajectory). It's also looking at the actual position of the motor, which won't quite be correct because of friction, and uses that position error to adjust the amount of torque it wants. It then converts the torque demand into a current demand, and gives that to the current controller.</p> <p>And there you have a servo.</p>
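<p>A rough sketch of that cascade in C++ (every constant here is a made-up placeholder, and a real drive would add current limits, anti-windup and a proper PWM stage):</p> <pre><code>// Outer position loop produces a torque demand; torque maps to current
// through the torque constant; the inner loop turns the current error
// into a voltage, which is applied via the PWM duty cycle.
const float Kt    = 0.05f;                 // torque constant, N*m per A (datasheet)
const float KpPos = 20.0f, KdPos = 1.0f;   // outer position loop gains
const float KpCur = 2.0f,  KiCur = 50.0f;  // inner current loop gains
float iSum = 0.0f;

float controlStep(float posRef, float pos, float vel, float iMeasured, float dt) {
    float torqueDemand  = KpPos * (posRef - pos) - KdPos * vel;
    float currentDemand = torqueDemand / Kt;   // torque is proportional to current

    float iErr = currentDemand - iMeasured;
    iSum += iErr * dt;
    return KpCur * iErr + KiCur * iSum;        // voltage; duty = voltage / supply
}
</code></pre>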
4492
2014-09-03T09:25:52.203
|motor|control|
<p>I am having some trouble understanding how to practically use the speed-torque curve of a DC motor.</p> <p>I understand that the gradient of the speed-torque curve is defined by the design of the motor, with the exact position of the curve depending on the voltage applied. So if the voltage is changed, the speed-torque curve is also changed, but remains parallel to the initial curve from before the voltage was changed. See the figure below.</p> <p><img src="https://i.stack.imgur.com/eIjJF.jpg" alt="enter image description here"></p> <p>So my intuitive guess is that when using the motor at a given desired operating point (desired speed and desired torque), the corresponding speed-torque curve <em>Cd</em> has the gradient specified in the data sheet of the motor and passes through the operating point. This curve <em>Cd</em> is obtained at a corresponding voltage <em>Vd</em>. See the diagram below.</p> <p><img src="https://i.stack.imgur.com/wlmxm.jpg" alt="enter image description here"></p> <p>So my next guess is that in order to have the motor operate at this desired operating point, you have to set the voltage applied to the motor to <em>Vd</em>, and apply a current <em>Id</em> (computed using the torque and the torque constant).</p> <p>Now, from what I read, this is not what is done in DC motor controllers. These seem to only drive the motor using current and some sort of PWM magic, as is shown in the following diagram by Maxon.</p> <p><img src="https://i.stack.imgur.com/rRxj2.jpg" alt="enter image description here"></p> <p>Does anyone know why voltage is not used in DC motor control and only current is? I do not understand how you can set the speed if you do not modify the voltage. And what is PWM useful for?</p> <p>I have looked for hours on the internet and could not find anything relevant.</p> <p>Thanks,</p> <p>Antoine.</p>
DC motor control - speed-torque curve
<p>You don't need the positional IK to solve this problem. All that is required is that your Jacobian is <em>invertible</em>, i.e. keep away from singular joint configurations. Consider how the end effector's velocity is formed:</p> <p>$$ \begin{bmatrix} v_{trans} \\ v_{rot} \end{bmatrix} = J(\boldsymbol{q})\cdot \boldsymbol{\dot{q}} $$</p> <p>So, if I understand correctly, you want to solve the above equation with $v_{rot} = 0$ for $\boldsymbol{\dot{q}}$:</p> <p>$$ \boldsymbol{\dot{q}} = J(\boldsymbol{q})^{-1}\begin{bmatrix} v_{trans} \\ 0 \end{bmatrix} $$</p> <p>Keep in mind that -- in general -- you cannot nicely separate the joints into those which command translation and those which command rotation of the end effector. After all, how joints are mapped to the end effector's position and angle is determined by the mechanism's structure, which dictates the Jacobian's structure (and its singularities). For example, an arm with only rotational joints (like a typical industrial robot) needs to move all its motors in some kind of "compensating" way to produce pure end effector translation.</p>
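<p>For illustration, here is one way to carry out that solve in C++ with Eigen for a 6x7 Jacobian. This uses a damped least-squares variant rather than the plain pseudoinverse (the damping factor is an arbitrary small value; setting it to zero recovers $J^T(JJ^T)^{-1}$):</p> <pre><code>#include &lt;Eigen/Dense&gt;

// Solve J * qdot = [v_trans; 0] via J^T (J J^T + lambda^2 I)^-1 v,
// which stays well-behaved near singular configurations.
Eigen::Matrix&lt;double, 7, 1&gt; jointVelocities(const Eigen::Matrix&lt;double, 6, 7&gt;&amp; J,
                                            const Eigen::Vector3d&amp; vTrans) {
    Eigen::Matrix&lt;double, 6, 1&gt; v;
    v &lt;&lt; vTrans, Eigen::Vector3d::Zero();   // zero rotational task velocity

    const double lambda = 1e-3;             // damping, arbitrary small value
    Eigen::Matrix&lt;double, 6, 6&gt; JJt =
        J * J.transpose()
        + lambda * lambda * Eigen::Matrix&lt;double, 6, 6&gt;::Identity();
    return J.transpose() * JJt.ldlt().solve(v);
}
</code></pre>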
4496
2014-09-03T15:27:34.707
|kinematics|robotic-arm|jacobian|
<p>I have a 7 DOF arm that I am controlling with joint velocities computed from the Jacobian in the standard way. For example: $$ {\Large J} = \begin{bmatrix} J_P \\J_O \end{bmatrix} $$ $$ J^{\dagger} = J^T(JJ^T)^{-1} $$ $$ \dot{q}_{trans} = J^{\dagger}_P v_{e_{trans}} $$ $$ \dot{q}_{rot} = J^{\dagger}_O v_{e_{rot}} $$ $$ \dot{q} = \dot{q}_{trans} + \dot{q}_{rot} $$</p> <p>However, when specifying only translational velocities, the end-effector also rotates. I realized that I might be able to compute how much the end-effector would rotate from the instantaneous $\dot{q}$, then put this through the Jacobian and subtract out its joint velocities. </p> <p>So I would do this instead of using the passed-in $v_{e_{rot}}$:</p> <p>$$ v_{e_{rot}} = R(q) - R(q+\dot{q}_{trans}) $$</p> <p>Where $R(q)$ computes the end-effector rotation for those joint angles. </p> <p>Is this OK to do, or am I way off base? Is there a simpler way? </p> <p>I am aware that I could also just compute the IK for a point a small distance from the end-effector with no rotation, then pull the joint velocities from the delta joint angles, and that this would be more exact. However, I wanted to go the Jacobian route for now because I think it will fail more gracefully.</p> <p>A side question: how do I compute $R(q) - R(q+\dot{q}_{trans})$ to get the global end-effector angular velocity? My attempts at converting a delta rotation matrix to Euler angles yield wrong results. I did some quick tests and implemented the above procedure to achieve pure end-effector rotation while maintaining the global position. (This is easier because $T(q) - T(q+\dot{q}_{rot})$ is vector subtraction.) And it did kind of work.</p>
How to get pure end-effector translation through Jacobian?
<p>If you don't know the locations of the obstacles and want to apply your filter, I see two solutions:</p> <ol> <li><ul> <li>Do a quick mapping using a simple, easy-to-implement algorithm like the occupancy grid algorithm (with this you localize the obstacles relative to your robot; see the sketch after this answer)</li> <li>Apply your particle filter</li> <li>Move the robot</li> <li>Relocalize your robot using odometry</li> <li>Correct odometry errors by using another sensor's measurements if possible</li> </ul></li> <li><p>Use a SLAM algorithm to simultaneously localize and map the obstacles. SLAM algorithms are more robust to sensor and odometry errors than the occupancy grid algorithm, but are more difficult to implement and generally consume more CPU.</p></li> </ol> <p>N.B: you must understand that there is no "optimal" solution, since this problem is still a very active research domain in robotics; the two solutions have advantages and drawbacks. Consider also the speed of your processor or microcontroller.</p>
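<p>As a rough illustration of the occupancy-grid step in option 1, a log-odds update for a single range beam might look like this in C++ (the cell size, grid size and log-odds increments are arbitrary choices):</p> <pre><code>#include &lt;cmath&gt;

const float L_OCC = 0.9f, L_FREE = -0.4f;  // log-odds increments
const float CELL  = 0.1f;                  // 10 cm cells: a 10 m x 10 m world
float grid[100][100];                      // log-odds, 0 = unknown

void integrateBeam(float rx, float ry, float angle, float range) {
    // cells along the beam (before the hit) become more likely to be free
    for (float d = 0.0f; d &lt; range; d += CELL) {
        int i = (int)((rx + d * std::cos(angle)) / CELL);
        int j = (int)((ry + d * std::sin(angle)) / CELL);
        if (i &gt;= 0 &amp;&amp; i &lt; 100 &amp;&amp; j &gt;= 0 &amp;&amp; j &lt; 100) grid[i][j] += L_FREE;
    }
    // the cell where the beam ended becomes more likely to be occupied
    int i = (int)((rx + range * std::cos(angle)) / CELL);
    int j = (int)((ry + range * std::sin(angle)) / CELL);
    if (i &gt;= 0 &amp;&amp; i &lt; 100 &amp;&amp; j &gt;= 0 &amp;&amp; j &lt; 100) grid[i][j] += L_OCC;
}
</code></pre>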
4513
2014-09-05T21:33:11.617
|localization|particle-filter|
<p>I am trying to implement a particle filter for a robot in Java. The robot has a range sensor. The world has 6 obstacles: 3 at the top and 3 at the bottom. I am calculating the distance of the robot from each obstacle's center and then performing the same computation for each particle. Then I calculate the difference between the robot's and the particles' measurements. To the particles for which the difference from the robot's measured distance is small, I give a higher probability in resampling.</p> <p>But the problem with this approach, as pointed out by my friend, is that I am assuming that I already know the locations of the obstacles, which makes this whole process useless. How should I approach it instead, given that I don't know the obstacles? How can the particle filter be implemented then? How will the particle filter work in case I don't know the obstacles' locations? An example of the process would be a great help. Thanks</p> <p><img src="https://i.stack.imgur.com/VQ8tz.jpg" alt="Screenshot of my application"></p>
Whats the logic to implement a particle filter for a robot with range sensor?
<p>I can't quite see how you're doing the right-hand wall following. When I do it, this is what I get:</p> <p><img src="https://i.stack.imgur.com/NlnVg.png" alt="Right hand wall following"></p> <p>For the Pledge algorithm, it's working. It found an exit to the maze! It just so happens that the exit is the same as the start. If you want it to find the other exit, you'll have to block off the top exit after you enter the maze.</p>
4514
2014-09-06T10:01:58.103
|mobile-robot|
<p>I saw this maze and tried to apply the Pledge algorithm on it, but I am not able to solve this maze using the algorithm. What am I missing? What am I doing wrong?</p> <p>PLEDGE ALGORITHM: in both cases we don't get to the exit. <img src="https://i.stack.imgur.com/rvcN1.png" alt="enter image description here"></p> <p>You can read about these algorithms at:</p> <p><a href="http://en.wikipedia.org/wiki/Maze_solving_algorithm" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Maze_solving_algorithm</a></p> <p><a href="http://www.astrolog.org/labyrnth/algrithm.htm" rel="nofollow noreferrer">http://www.astrolog.org/labyrnth/algrithm.htm</a></p>
Pledge algorithm for maze solving robots
<p>I found the reason: in my original code, I used just a single-stage PID. The right structure comprises 2 cascaded PIDs: a rate PID and a stabilize PID.</p> <p><img src="https://i.stack.imgur.com/wBGwx.jpg" alt="enter image description here"></p>
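<p>A minimal C++ sketch of that cascaded structure, per axis (all gains are placeholders; a real implementation would add integral limits and derivative filtering):</p> <pre><code>// The angle ("stabilize") PID's output becomes the setpoint of the rate PID.
struct Pid {
    float kp, ki, kd;
    float iSum, prevErr;
    float step(float err, float dt) {
        iSum += err * dt;
        float d = (err - prevErr) / dt;
        prevErr = err;
        return kp * err + ki * iSum + kd * d;
    }
};

Pid anglePid = {4.5f, 0.0f, 0.0f,  0.0f, 0.0f}; // angle error: desired rate
Pid ratePid  = {0.7f, 0.1f, 0.02f, 0.0f, 0.0f}; // rate error: motor correction

float stabilize(float angleRef, float angle, float gyroRate, float dt) {
    float rateRef = anglePid.step(angleRef - angle, dt); // outer (stabilize) loop
    return ratePid.step(rateRef - gyroRate, dt);         // inner (rate) loop
    // the returned correction is added to / subtracted from the base throttle,
    // which is why the tuning no longer depends on the base throttle itself
}
</code></pre>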
4526
2014-09-10T02:59:34.720
|pid|
<p>I am tuning the PID for a quadcopter. The problem I have is that with different base throttle values, it seems that I have to adjust different PID gains in order for the quadcopter to balance!</p>
PID Tuning Quadcopter Problem
<p>The ServoShock module works by itself; you just have to wire the connections. If you use the shield, it's plug and play. You can pull the button/axis states and write servo position outputs using the SPI bus; there's an Arduino library on the ServoShock documentation page at www.servoshock.com.</p>
4545
2014-09-12T19:18:33.887
|arduino|wireless|
<p>I'm currently building a hexapod bot, built from an <a href="http://arduino.cc/en/Main/arduinoBoardMega" rel="nofollow">Arduino Mega</a> board and a <a href="http://www.lynxmotion.com/p-395-ssc-32-servo-controller.aspx" rel="nofollow">USB SSC-32</a> (from <a href="http://www.lynxmotion.com/" rel="nofollow">Lynxmotion</a>). Now I want to add a PS3 wireless controller to move my hexapod. I have done some searching but found nothing really interesting. Maybe the <a href="http://www.servoshock.com/" rel="nofollow">Servoshock module</a>, but it seems to work only with the ServoshockShield, a kind of Arduino card with servo outputs. </p> <ul> <li>Can I use the ServoShock module alone? </li> <li>Can I connect it to the Rx/Tx port of the Arduino Mega board?</li> <li>Do you have another solution for me? A board with documentation and source code?</li> </ul> <p>Thank you all</p>
How to control an Arduino Board with wireless PS3 controler?
<p>I assume that your 2 channels control forward/backward and left/right. But even if the 2 channels control forward/backward in each wheel (differential-drive style), it should still be possible to do what you are suggesting <em>electrically</em> instead of mechanically. </p> <p>You should be able to read the input signal to the motors, decide whether those signals are commanding a "reverse" movement, and trigger your snow plow lift motor accordingly.</p>
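<p>For example, a minimal Arduino-style sketch of that idea (the pin numbers and the 1400 us reverse threshold are assumptions that depend on your receiver and wiring): tap the throttle channel between the receiver and the drive controller, measure the pulse width, and drive a relay for the plow lift whenever a reverse command is seen.</p> <pre><code>const int CH_PIN   = 2;    // tap on the receiver's throttle channel
const int LIFT_PIN = 7;    // drives a relay / motor driver for the plow lift

void setup() {
  pinMode(CH_PIN, INPUT);
  pinMode(LIFT_PIN, OUTPUT);
}

void loop() {
  // Standard RC pulse: ~1500 us is neutral; below that is usually reverse.
  unsigned long width = pulseIn(CH_PIN, HIGH, 25000);
  digitalWrite(LIFT_PIN, (width &gt; 0 &amp;&amp; width &lt; 1400) ? HIGH : LOW);
}
</code></pre>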
4555
2014-09-15T13:51:32.273
|wheeled-robot|mechanism|
<p>I have made an RC robot from a wheelchair and I'm planning to attach a snow plow. I'm wondering if there is any mechanism that would be able to lift the plow when reversing. I have only a 2-channel transmitter, so I can't control the plow's movement through it; that's why I was thinking of some mechanical lift that triggers when reversing.</p> <p>Do you guys know about something I could use for it? Thanks.</p>
Reverse lift mechanism
<p>In the end I discovered <a href="http://ctms.engin.umich.edu/CTMS/index.php?example=Introduction&amp;section=SimulinkModeling" rel="nofollow noreferrer">Simulink</a> and the possibility of writing a nonlinear system with it. Simply take the differential equations in the paper and put them as they are into Simulink. Really straightforward. After I wrote a PID controller for my system, it seems to work really well.</p> <p>The idea of implementing a linearized system was good, but for some reason it didn't show the expected reaction during simulation.</p> <p>Anyway, many thanks for all the explanations and suggestions that other people gave me.</p> <p>Regards</p>
4558
2014-09-15T21:34:28.627
|control|quadcopter|
<p>I have a big problem trying to stabilize a quadrotor with a PD controller. The model and the program have been written in C++, and the model dynamics have been taken from a source on the internet.</p> <p>In my code I wrote the model as in its equation system (see eq. 3.30 on page 21):</p> <pre><code>/* Calculate the acceleration about all 6 axis */
body_pos_current_.x_dot_2 = ( thrust_.total / masse_ ) * ( sin( body_ang_current_.theta ) * cos( body_ang_current_.phi ) * cos( body_ang_current_.psi ) + sin( body_ang_current_.psi ) * cos( body_ang_current_.phi ) );
body_pos_current_.y_dot_2 = ( thrust_.total / masse_ ) * ( sin( body_ang_current_.theta ) * sin( body_ang_current_.psi ) * cos( body_ang_current_.phi ) - cos( body_ang_current_.psi ) * sin( body_ang_current_.phi ) * cos( body_ang_current_.psi ) );
body_pos_current_.z_dot_2 = ( thrust_.total / masse_ ) * ( cos( body_ang_current_.theta ) * cos( body_ang_current_.phi ) ) - 9.81;
body_ang_current_.phi_dot_2 = ( torque_.phi / Jxx_ );
body_ang_current_.theta_dot_2 = ( torque_.theta / Jyy_ );
body_ang_current_.psi_dot_2 = ( torque_.psi / Jzz_ );
</code></pre> <p>where <code>body_ang_current.&lt;angle&gt;</code> and <code>body_pos_current_.&lt;position&gt;</code> are structures defined in a class to store the positions, velocities and accelerations of the model, given the 4 motor velocities about all 3 axes.</p> <p>$$ \large \cases{ \ddot X = ( \sin{\psi} \sin{\phi} + \cos{\psi} \sin{\theta} \cos{\phi}) \frac{U_1}{m} \cr \ddot Y = (-\cos{\psi} \sin{\phi} + \sin{\psi} \sin{\theta} \cos{\phi}) \frac{U_1}{m} \cr \ddot Z = -g + (\cos{\theta} \cos{\phi}) \frac{U_1}{m} \cr \dot p = \frac{I_{YY} - I_{ZZ}}{I_{XX}}qr - \frac{J_{TP}}{I_{XX}} q \Omega + \frac{U_2}{I_{XX}} \cr \dot q = \frac{I_{ZZ} - I_{XX}}{I_{YY}}pr - \frac{J_{TP}}{I_{YY}} p \Omega + \frac{U_3}{I_{YY}} \cr \dot r = \frac{I_{XX} - I_{YY}}{I_{ZZ}}pq + \frac{U_4}{I_{ZZ}} } $$</p> <p>Once I get the accelerations above, I integrate them to get velocities and positions as well:</p> <pre><code>/* Get position and velocities from accelerations */
body_pos_current_.x_dot = body_pos_current_.x_dot_2 * real_duration + body_pos_previous_.x_dot;
body_pos_current_.y_dot = body_pos_current_.y_dot_2 * real_duration + body_pos_previous_.y_dot;
body_pos_current_.z_dot = body_pos_current_.z_dot_2 * real_duration + body_pos_previous_.z_dot;
body_ang_current_.phi_dot = body_ang_current_.phi_dot_2 * real_duration + body_ang_previous_.phi_dot;
body_ang_current_.theta_dot = body_ang_current_.theta_dot_2 * real_duration + body_ang_previous_.theta_dot;
body_ang_current_.psi_dot = body_ang_current_.psi_dot_2 * real_duration + body_ang_previous_.psi_dot;

body_pos_current_.x = 0.5 * body_pos_current_.x_dot_2 * pow( real_duration, 2 ) + ( body_pos_previous_.x_dot * real_duration ) + body_pos_previous_.x;
body_pos_current_.y = 0.5 * body_pos_current_.y_dot_2 * pow( real_duration, 2 ) + ( body_pos_previous_.y_dot * real_duration ) + body_pos_previous_.y;
body_pos_current_.z = 0.5 * body_pos_current_.z_dot_2 * pow( real_duration, 2 ) + ( body_pos_previous_.z_dot * real_duration ) + body_pos_previous_.z;
body_ang_current_.phi = 0.5 * body_ang_current_.phi_dot_2 * pow( real_duration, 2 ) + ( body_ang_previous_.phi_dot * real_duration ) + body_ang_previous_.phi;
body_ang_current_.theta = 0.5 * body_ang_current_.theta_dot_2 * pow( real_duration, 2 ) + ( body_ang_previous_.theta_dot * real_duration ) + body_ang_previous_.theta;
body_ang_current_.psi = 0.5 * body_ang_current_.psi_dot_2 * pow( real_duration, 2 ) + ( body_ang_previous_.psi_dot * real_duration ) + body_ang_previous_.psi;

/* Copy the new value into the previous one (for the next loop) */
body_pos_previous_.x = body_pos_current_.x;
body_pos_previous_.y = body_pos_current_.y;
body_pos_previous_.z = body_pos_current_.z;
body_pos_previous_.x_dot = body_pos_current_.x_dot;
body_pos_previous_.y_dot = body_pos_current_.y_dot;
body_pos_previous_.z_dot = body_pos_current_.z_dot;
body_ang_previous_.phi = body_ang_current_.phi;
body_ang_previous_.theta = body_ang_current_.theta;
body_ang_previous_.psi = body_ang_current_.psi;
body_ang_previous_.phi_dot = body_ang_current_.phi_dot;
body_ang_previous_.theta_dot = body_ang_current_.theta_dot;
body_ang_previous_.psi_dot = body_ang_current_.psi_dot;
</code></pre> <p>The model seems to work well but, as reported in many papers, it is very unstable and needs some control.</p> <p>My first approach was to create a (PD) controller to keep the height constant without moving the quadcopter: just set a value (for example 3 meters) and see how it reacts.</p> <p>Here is the small piece of code I tried:</p> <pre><code>/* PD Controller */
double e = ( 3.0 - body_pos_current_.z ); // 3.0 is just a try value!!!
thrust_.esum = thrust_.esum + e;
thrust_.total = 1.3 * e + 0.2 * real_duration * thrust_.esum;
</code></pre> <p>The problem, as you can see in the video, is that the copter starts falling down to the ground instead of reaching the desired altitude (3.0 meters). Then it comes back up again and again, like an undamped spring.</p> <p>Another strange thing is that it <em>always</em> settles at a negative point under the ground, even if I change the desired height (negative or positive).</p> <p>What's wrong in my code? Could you please point me to some documents or code that are understandable and well documented, so I can get started?</p> <p>Thanks</p> <p><strong>EDIT:</strong> Many thanks for your suggestions. I was really surprised to learn that my code had lots of potential problems and was not very efficient. So I reworked the code following your explanations and implemented an RK4 for the integration. After reading these articles, <a href="http://gafferongames.com/game-physics/integration-basics/" rel="nofollow">here</a> and <a href="http://buttersblog.com/runge-kutta/" rel="nofollow">here</a>, I got an idea of RK and its advantages for simulations and graphics PCs. As an example I rewrote the whole code again:</p> <pre><code>/* Calculate the acceleration about all 6 axis */
pos_.dVel.x = ( ( thrust_.total / masse_ ) * ( -sin( body_position_.angle.theta ) * cos( body_position_.angle.phi ) * cos( body_position_.angle.psi ) - sin( body_position_.angle.phi ) * sin( body_position_.angle.psi ) ) );
pos_.dVel.y = ( ( thrust_.total / masse_ ) * ( sin( body_position_.angle.phi ) * cos( body_position_.angle.psi ) - cos( body_position_.angle.phi ) * sin( body_position_.angle.theta ) * sin( body_position_.angle.psi ) ) );
pos_.dVel.z = ( ( thrust_.total / masse_ ) * ( -cos( body_position_.angle.phi ) * cos( body_position_.angle.theta ) ) - 9.81 );
pos_.dOmega.phi = ( torque_.phi / Jxx_ );
pos_.dOmega.theta = ( torque_.theta / Jyy_ );
pos_.dOmega.psi = ( torque_.psi / Jzz_ );

/* Get position and velocities from accelerations */
body_position_ = RKIntegrate( body_position_, real_duration );
</code></pre> <p>which is much clearer and easier to debug.
Here are some useful functions I implemented:</p> <pre><code>QuadrotorController::State QuadrotorController::evaluate( const State &amp;initial, const Derivative &amp;d, double dt )
{
    State output;

    output.position.x = initial.position.x + d.dPos.x * dt;
    output.position.y = initial.position.y + d.dPos.y * dt;
    output.position.z = initial.position.z + d.dPos.z * dt;

    output.velocity.x = initial.velocity.x + d.dVel.x * dt;
    output.velocity.y = initial.velocity.y + d.dVel.y * dt;
    output.velocity.z = initial.velocity.z + d.dVel.z * dt;

    output.angle.phi = initial.angle.phi + d.dAngle.phi * dt;
    output.angle.theta = initial.angle.theta + d.dAngle.theta * dt;
    output.angle.psi = initial.angle.psi + d.dAngle.psi * dt;

    output.omega.phi = initial.omega.phi + d.dOmega.phi * dt;
    output.omega.theta = initial.omega.theta + d.dOmega.theta * dt;
    output.omega.psi = initial.omega.psi + d.dOmega.psi * dt;

    return output;
};

QuadrotorController::Derivative QuadrotorController::sampleDerivative( double dt, const State &amp;sampleState )
{
    Derivative output;

    output.dPos = sampleState.velocity;

    output.dVel.x = pos_.dVel.x;
    output.dVel.y = pos_.dVel.y;
    output.dVel.z = pos_.dVel.z;

    output.dAngle = sampleState.omega;

    output.dOmega.phi = pos_.dOmega.phi;
    output.dOmega.theta = pos_.dOmega.theta;
    output.dOmega.psi = pos_.dOmega.psi;

    return output;
};

QuadrotorController::State QuadrotorController::RKIntegrate( const State &amp;state, double dt )
{
    const double C1 = 0.0f;
    const double C2 = 0.5f, A21 = 0.5f;
    const double C3 = 0.5f, A31 = 0.0f, A32 = 0.5f;
    const double C4 = 1.0f, A41 = 0.0f, A42 = 0.0f, A43 = 1.0f;
    const double B1 = 1.0f/6.0f, B2 = 1.0f/3.0f, B3 = 1.0f/3.0f, B4 = 1.0f/6.0f;

    Derivative k1 = sampleDerivative( 0.0f, state );
    Derivative k2 = sampleDerivative( C2 * dt, evaluate( state, k1 * A21, dt ) );
    Derivative k3 = sampleDerivative( C3 * dt, evaluate( state, k1 * A31 + k2 * A32, dt ) );
    Derivative k4 = sampleDerivative( C4 * dt, evaluate( state, k1 * A41 + k2 * A42 + k3 * A43, dt ) );

    const Derivative derivativeSum = k1 * B1 + k2 * B2 + k3 * B3 + k4 * B4;

    return evaluate( state, derivativeSum, dt );
}
</code></pre> <p>Now I'm really lost, because the simulated quadrotor shows the same behavior as before. Even though I've implemented the same PD algorithm as discussed in the paper, it stabilizes on Z (height) but gets really erratic due to the unstable behavior. So... I don't know what is wrong in my code or my implementation. And above all, I cannot find any source on the internet with a good, self-explanatory dynamic model of a quadrotor.</p> <p>Regards</p>
PD Algorithm for a Quadrotor [Simulation]
<p>I've implemented this algorithm before, but I found a different paper easier to read. Try to find the paper by Hamel and Mahony (with Hamel listed as the first author). </p> <p>From that paper, which I don't have available at the moment, I believe the first equation should be $$ \begin{align} \dot{\hat{R}} &amp;= \hat{R} \left ( \Omega_{IMU} - \hat{b} - \alpha \right )_\times \end{align} $$ but in that paper $\alpha$ ($\omega$ in the paper) was a vector.</p> <p>Another thing to check in the equations you have listed is whether the second equation shouldn't have the operator transforming $\alpha$ from its skew-symmetric matrix form to a vector on $\mathbb{R}^3$.</p>
4562
2014-09-16T09:08:32.490
|control|quadcopter|
<p>While reading the paper <a href="http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=6289431&amp;abstractAccess=no&amp;userType=inst" rel="nofollow">"Multirotor Aerial Vehicles: Modeling, Estimation, and Control of Quadrotor" by Mahony, Kumar and Corke</a>, I stumbled across the following equations for a non-linear attitude observer, which I would like to implement, but I believe there is something wrong.</p> <p>$\dot{\hat{R}} := \hat{R} \left( \Omega_{IMU} - \hat{b} \right)_\times - \alpha \\ \dot{\hat{b}} := k_b \alpha \\ \alpha := \left( \frac{k_a}{g^2}((\hat{R}^T \vec z) \times a_{IMU}) + \frac{k_m}{|^Am|^2} ((\hat{R}^T {^Am}) \times m_{IMU}) \right)_\times + k_E \mathbb{P}_{so(3)} (\hat{R} R_E^T)$</p> <p>Where $\hat{R}$ and $\hat{b}$ are estimates of orientation and gyroscope bias, $\Omega_{IMU}, a_{IMU}, m_{IMU}, R_E^T$ are measurements and $k_X$ are scalar gains, which may be set to 0 for measurements that are not available.</p> <p>Now $\dot{\hat{R}}$ and $\alpha$ need to be matrices $\in \mathbb{R}^{3\times 3}$ due to their definitions. $\hat{b}$ and thus $\dot{\hat{b}}$ need to be vectors $\in \mathbb{R}^3$. But then what is the correct version of the second equation $\dot{\hat{b}} := k_b \alpha$?</p>
Non-linear complementary filter on so3: Corrected equations?
<p>The term PID itself refers to Proportional plus Integral plus Derivative control, which is a very common control technique. The IMU (inertial measurement unit) measures the orientation of the quadrotor, and the roll/pitch/yaw values are used in the PID controller to calculate motor commands.</p>
4568
2014-09-16T21:46:37.263
|quadcopter|arduino|pid|microcontroller|
<p>I'm trying to make a Quadcopter from scratch, I have a fair amount of experience with adruinos, and I'm trying to understand how to necessary systems work, and I can't seem to figure out what PID means, is it a method of regulating pitch and roll? like a stabilizer? I think from what I've read that its a system that detects orientation of the craft and tries to correct it</p>
What is a PID as it relates to quadcopters
<p>Performance-wise, first version wins as the motor's own weight is not added to the payload for motion. </p> <p>Efficiency-wise, it depends on the elasticity and material properties of the drive belt, versus the material of the powered roller surface, gripping force of rollers, and other energy loss factors. </p> <p>The belt drive will be far more vibration-damped: Note the spring-loaded tensioning pin through which the belt passes in the top photo. This will flex and absorb a lot of the vibration - acting like a low-pass filter for mechanical energy. </p> <p>With regard to "these motors vibrate a lot": If vibration reduction is important, a stepper motor might be the worst possible choice. Use a linear encoder on the slider for closed-loop position feedback, and drive the unit using a BLDC motor - effectively making a linear servo out of the whole thing. BLDCs are much less vibration prone: Note the smoothness of rotation of a CDROM drive main motor. Of course, for meaningful traversal speeds, the BLDC would be geared down significantly, with the side effect of even more vibration reduction. For that matter, even a brushed DC motor with suitable gearing down would work great, again with the linear encoder.</p> <ol> <li>The first version will require slightly less power, and will suffer marginally less inertia effects: The motor in the photograph shown most likely weighs under 1 Kg, that will add to the payload weight for movement energy as well as starting and stopping inertia. However, the flexing drive belt will eat up some (maybe a lot) of the improvement in efficiency.</li> <li>Pulling versus pushing force, not sure how that applies. The first uses a closed loop belt with tensioner, i.e. pulling force is all that applies. The second uses rotation-translation, no pushing there either.</li> <li>A belt drive with a spring loaded tensioner will significantly damp vibration of the motor, except in the odd chance of the motor hitting a resonant harmonic frequency of the belt assembly (can't quite see that happening).</li> <li>Lower inertia = lower stress on motor. So clearly the first version is better, see above.</li> <li>For vertical pulling, a belt drive instinctively feels safer: Dust getting between rollers and slide, or the powered rollers wearing out, may cause the entire assembly to slide down under gravity. Of course, this safety is illusory, as the drive belt could break too. Adding an auto-brake design to protect against slippage or breakage (like some elevators have) just adds complication. </li> </ol>
4580
2014-09-18T04:58:21.613
|control|design|stepper-motor|motion|
<p>I'm currently designing a linear camera slider, that will be used to hold camera equipment weighing just about 15 Kgs including all of the lenses and monitors and everything else.</p> <p>For those who don't know what a camera slider is, it's a linear slider on top of which a camera is mounted and then the camera is slided slowly to create some nice footage <a href="https://www.youtube.com/watch?v=hLvP2Dr2FWQ" rel="nofollow noreferrer">like this</a>.</p> <p><strong>The problem</strong></p> <p>Now, looking at the commercially available camera sliders out there, there seems to be two ways in which the motor maybe mounted on these sliders:</p> <ol> <li><p>Motor mounted on the side:</p> <p><img src="https://i.stack.imgur.com/6TMcu.jpg" alt="enter image description here" /></p> </li> <li><p>Motor mounted directly on the carriage:</p> <p><img src="https://i.stack.imgur.com/PE1PW.jpg" alt="enter image description here" /></p> </li> </ol> <p>I would like to know which option would be optimal - Performance-wise (this slider maybe used vertically too, to create bottom to top slide shots), efficiency-wise and which one of these two will be resistant to motor vibration (these motors vibrate a lot, the effects of which may sometimes leak into the produced footage).</p> <p><strong>Additional Questions</strong></p> <ol> <li><p>Motor mounted on the carriage directly <em>maybe, just maybe</em> more efficient, but it also has to carry it's own weight in addition to the 15kg camera load?</p> </li> <li><p>Pulling force is greater than pushing force (I have no idea why, would be great if someone explained why, atleast in this case?), so a motor mounted in the end should be able to lift vertically with ease?</p> </li> <li><p>Does a belt setup as shown in the first figure above really dampen the motor vibrations? Will/won't the motor vibrating on the end get amplified (because, the whole setup will be attached to a single tripod in the exact center of the slider)</p> </li> <li><p>Which design will be less stressful for the motor, taking inertia into consideration for both cases?</p> </li> <li><p>Which one of these designs will be best suitable for vertical pulling of load against gravity?</p> </li> </ol> <p>Manufacturers use both designs interchangeably, so it's hard to predict which design is better than which.</p> <p>Any help would be much appreciated!</p> <p>Please note, this question has been migrated from the Stackexchange Physics (and Electrical) forum by me because the mods thought it would be appropriate here.</p>
Linear slider motor mount location - Pros/cons
<p>The problem is simple if you are using a coil and a pin (solenoid) but the size (diameter) of the coil would be the limiting factor to how closely your pins could be nested. Pneumatics (pressure= up and vacuum = down) could also be used and achieve a tighter nesting. The valve configuration would be the limiting factor with pneumatics.</p>
4587
2014-09-19T18:50:58.900
|actuator|
<p>I need two state linear actuator. You can have a look at the picture to understand what I mean. </p> <p>Don't care about the hand !</p> <p><a href="https://i.stack.imgur.com/Qh9s1.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qh9s1.jpg" alt="mit inform project"></a><br> <sub>(source: <a href="https://www.robaid.com/wp-content/gallery/mit/mit-inform-system-1.jpg" rel="nofollow noreferrer">robaid.com</a>)</sub> </p> <p>I need to electrically move the things like this squares up and down. Bidirectional linear actuators are needed. What is the cheapest and tiniest actuator (or sth else) that I can use to move this squares up and down. There are just two states ('up','down'). Don't care how much higher a square rises, when it is up.</p>
Two State Linear Actuator
<p>Your function <code>calculateRange()</code> uses a variable <code>robot_orientation</code> which I cannot see from your code snippets that it is set or changed. </p> <p>I would expect something like:</p> <pre><code>double particle_orientation = particleListOrientation.get(index); </code></pre> <p>inside your function <code>measurementProbability()</code> and then call <code>calculateRange()</code> with that additional parameter.</p> <pre><code>private int calculateRange(double x, double y, double robot_orientation) { double Vx = x, double Vy = y; int pace = 2; int counter = 0; while(robotIsWithinBoundary(Vx, Vy)) { Vx += pace* Math.sin(Math.toRadians(robot_orientation)); Vy += pace* Math.cos(Math.toRadians(robot_orientation)); counter++; Line2D line1 = new Line2D.Double(x,y,Vx,Vy); if(line1.intersects(obst1)) { break; } if(line1.intersects(obst2)) { break; } } return counter*pace; } </code></pre>
4589
2014-09-20T18:03:04.427
|localization|particle-filter|
<p>I am implementing a particle filter in Java. The problem with my particle filter implementation is that the particles suddenly go away from the robot i.e the resampling process is choosing particles which are away from robot more than those which are near.It is like particles chase the robot, but always remain behind it. I am trying to find the root cause, but to no luck. Can anyone please help me where I am going wrong?</p> <p>I am adding all the imp. code snippets and also some screenshots in consecutive order to make it more clear. </p> <p><a href="https://i.stack.imgur.com/Tul8Vm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tul8Vm.jpg" alt="1"></a> <a href="https://i.stack.imgur.com/t04Wz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t04Wzm.jpg" alt="2"></a> <a href="https://i.stack.imgur.com/5EZJZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5EZJZm.jpg" alt="3"></a> <a href="https://i.stack.imgur.com/bNqBV.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bNqBVm.jpg" alt="4"></a> <a href="https://i.stack.imgur.com/tibqD.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tibqDm.jpg" alt="5"></a></p> <p>Details:</p> <p>I am using a range sensor which only works in one direction i.e. its fixed and tells the distance from the obstacle in front. If there is no obstacle in its line of vision, then it tells the distance to boundary wall. </p> <p>Code: Calculating Range</p> <pre><code>/* * This method returns the range reading from the sensor mounted on top of robot. * It uses x and y as the actual position of the robot/particle and then creates Vx and Vy as virtual x and y. * These virtual x and y loop from the current position till some obstruction is there and tell us distance till there. */ private int calculateRange(double x, double y, double Vx, double Vy, int counter, int loop_counter) { while(robotIsWithinBoundary(Vx, Vy)) { int pace = 2; Vx += pace* Math.sin(Math.toRadians(robot_orientation)); Vy += pace* Math.cos(Math.toRadians(robot_orientation)); counter++; Line2D line1 = new Line2D.Double(x,y,Vx,Vy); if(line1.intersects(obst1)) { //System.out.println("Distance to obst1:"+counter); loop_counter++; break; } if(line1.intersects(obst2)) { //System.out.println("Distance to obst2:"+counter); loop_counter++; break; } } return counter; } /* * This method tells us whether the robot/particle is within boundary or not. */ private boolean robotIsWithinBoundary(double x, double y) { boolean verdict = true; if(x&gt;680||x&lt;0) { verdict = false; } if(y&lt;0||y&gt;450) { verdict = false; } return verdict; } /* * This method returns the range reading from the sensor mounted on top of robot. * It uses x and y as the actual position of the robot/particle and then creates Vx and Vy as virtual x and y. * These virtual x and y loop from the current position till some obstruction is there and tell us distance till there. 
</code></pre> <p>Calculating Weights</p> <pre><code>/* * This method calculates the importance weights for the particles based on the robot_range which is * the reading of the range sensor for the robot. */ private double measurementProbability(int index) { double probability=1; double particle_x_position=particleListX.get(index); double particle_y_position=particleListY.get(index); double particle_Vx=particle_x_position; double particle_Vy=particle_y_position; int range_counter=0; int loop_counter=0; int distance = calculateRange(particle_x_position, particle_x_position, particle_Vx, particle_Vy ,range_counter, loop_counter); probability *= calculateGaussianDistance(distance, senseNoise, robot_range); //System.out.println(probability); return probability; } private double calculateGaussianDistance(double mu, double sigma, double x ) { double gDistance=Math.exp(-(((Math.pow((mu - x),2))/(Math.pow(sigma,2)) / 2.0) / (Math.sqrt(2.0 * Math.PI * (Math.pow(sigma,2)))))); return gDistance; } </code></pre> <p>Resampling</p> <pre><code>/* * This method provides a resampled particle back to the list. It chooses a particle randomly * from the list based on the weights with replacement. */ private int giveResampledParticle() { int index = randomInteger(0, n-1); double sample =0.0; double maxWeight = maximumWeight(); sample += randomDouble(0, maxWeight); while(sample &gt; particleListProbability.get(index)) { sample -= particleListProbability.get(index); index = (index +1) % n; } return index; } </code></pre>
Particles not behaving correctly in the implementation of particle filter
<p>It really depends on the economic details of the project you are working on. If it's a one time project with limited budget I would say a DC brush motor with or without position feedback would do the job. Sure you will need a controller. If you aim to get a consistently high mechanical work output, precision positioning, and compact size you might draw your attention to servo motors. Some of them already come with a gear and a controller integrated. If you still feel embarassed about making the right choice, you could find some hints in here.<a href="https://rozum.com/robot-servo/" rel="nofollow noreferrer">What you need to know about robot servos before purchase</a></p>
4592
2014-09-21T12:16:38.743
|motor|brushless-motor|servomotor|
<p>Simply, when to use brushless dc motor and when to use Servo Motor ? </p> <p>what are the differences , specially when adding an encoder to the dc motor you can have the position and it will be similar to Servo Motor ? </p>
For robot wheel control : Brushless DC motor or Servo Motor?
<p>Another idea is to use polarized light beacons. If you have a few light sources polarized at different angles, then you can use some simple light sensors (or cameras) similarly polarized. If the beacons are at known world locations, you should be able to figure out your location through triangulation or trilateration. the polarization lets you distinguish between the different light sources.</p> <p>This robot competition uses polarized light beacons to designate the goal areas: <a href="http://www.botlanta.org/cube-quest" rel="nofollow">http://www.botlanta.org/cube-quest</a>. And here is some detail on the beacons they use: <a href="http://www.botlanta.org/contests/cube-quest/beacon-details" rel="nofollow">http://www.botlanta.org/contests/cube-quest/beacon-details</a>.</p>
4595
2014-09-21T20:33:06.203
|localization|electronics|laser|rangefinder|
<p>I am on a robotics team that plans to compete in a competition where one of the rules is that no sort of sonic sensor is allowed to be used. I guess that limits it to some sort of EM frequency right?</p> <p>Ideally, my team is looking for a simple beacon system, where beacon A would be attached to the robot, while beacon B would be attached to a known point on the competition space. Then, beacon A can give information about how far away B is. After some searching, I could only turn up laser rangefinders that required pointing at the target. I am a CS student, so I'm not familiar with the terminology to aid searches.</p> <p>Another nice property would be if the beacons also gave the angle of beacon A in beacon B's field of view, although this is not necessary, since multiple beacons could be used to obtain this information.</p> <p>We have an Xbox 360 Kinect working, and able to track things and give distances, but it looses accuracy over distance quickly (the arena is about 6 meters long), and this beacon should be as simple as possible. We ONLY need it for a relative position of our robot.</p> <p><strong><em>Alternate Solution:</em></strong> Another way to solve this would be for an omni-directional beacon to only give angle information, two of these could be used to triangulate, and do the job just as well.</p>
Do simple, non-sonic, omni-directional rangefinding beacons exist?
<p>You can use the Jacobians of the inverse observation model to initialize the new row/column of the covariance matrix.</p> <p>Suppose your observation model is <span class="math-container">$g(\mathbf{x})$</span>, which maps your state <span class="math-container">$\mathbf{x}$</span> to a predicted observation <span class="math-container">$\hat{\mathbf{z}}$</span>. The inverse observation model <span class="math-container">$g^{-1}(\mathbf{x}, \tilde{\mathbf{z}})$</span> maps an observation <span class="math-container">$\tilde{\mathbf{z}}$</span> to a new entry of your state. For example, if <span class="math-container">$\tilde{\mathbf{z}}$</span> is a range and bearing measurement, <span class="math-container">$g^{-1}$</span> might determine the global <span class="math-container">$(x,y)$</span> coordinates of a newly observed landmark given your current state estimate.</p> <p>Let <span class="math-container">$\mathbf{P}$</span> be the covariance of your state and let <span class="math-container">$\mathbf{G}_x$</span> and <span class="math-container">$\mathbf{G}_z$</span> be the Jacobians of <span class="math-container">$g^{-1}$</span> with respect to <span class="math-container">$\mathbf{x}$</span> and <span class="math-container">$\tilde{\mathbf{z}}$</span>. Then the updated state and covariance matrix after adding the new entry is</p> <p><span class="math-container">$\mathbf{x}\gets \begin{bmatrix} \mathbf{x} \\ g^{-1}(\mathbf{x}, \tilde{\mathbf{z}}) \end{bmatrix}, \qquad \mathbf{P}_{k+1} = \begin{bmatrix} \mathbf{P}_k &amp; \mathbf{P}_k\mathbf{G}_x^\top \\ \mathbf{G}_x\mathbf{P}_k &amp; \mathbf{G}_x\mathbf{P}_k\mathbf{G}_x^\top + \mathbf{G}_z\mathbf{R}_k\mathbf{G}_z^\top \end{bmatrix}$</span>,</p> <p>where <span class="math-container">$\mathbf{R}_k$</span> is the variance (1D) or covariance of observation <span class="math-container">$\mathbf{z}$</span>.</p>
4599
2014-09-22T16:17:19.073
|slam|ekf|errors|mapping|
<p>In EKF-SLAM (based-feature map) once the robot senses a new landmark, it is augmented to state vector. As a result, the size of the state vector and the covariance matrix are expanded. My question is about the uncertainty of the new landmark and its correlation with other pairs of the covariance matrix. How should I assign them? When I assign them to be zero, the error of the estimation this landmark won't change as time goes. If I assign them with very large value, the estimation is getting better every time the robot reobserves this landmark however, the error approaches to fixed value not to zero. I assume the problem id with assigning the uncertainty. Any suggestions?</p>
The uncertainty of initializing a new landmark in EKF-SLAM
<p>You're correct that measurement and modeling is the right way to go about this.</p> <p>For your PID to work properly, you need to be able to make a somewhat linear conversion of error (desired roll vs actual roll) into corrective force (in this case, provided by the <a href="http://en.wikipedia.org/wiki/Flight_control_surfaces" rel="nofollow">control surfaces</a> -- the <a href="http://en.wikipedia.org/wiki/Aileron" rel="nofollow">aileron</a> angle, influenced by air speed and other factors). </p> <p>The $k_d$ term of your PID should account for the inertia of the plane in rolling from side to side, so don't worry about that in your measurements.</p> <p>What you <em>should</em> measure in your wind tunnel tests is the torque on the longitudinal axis of the plane, in response to airspeed (in both X and Y axes, if you have on-board sensors for that) and aileron angle. That will provide data on the following relationship:</p> <p>$$ \tau_{actual} = f(\theta_{\text{aileron}}, \text{airspeed}_x, \text{airspeed}_y, \text{[other measurable factors]}) $$</p> <p>You are going to approximate the inverse of that function -- finding $\theta_{\text{aileron}}$ given $\tau_{desired}$. Whether you do that with a neural network, wolfram alpha, multivariate regression, or a good knowledge of physics is up to you. In the end, you want to produce a modeling function in this form: $$ \theta_{\text{aileron}} = f(\tau_{desired}, \text{airspeed}_x, \text{airspeed}_y, \text{[other measurable factors]}) $$</p> <p>The PID will give you $\tau_{desired}$, and your sensors will give you the other factors to plug into this function.</p>
4608
2014-09-23T08:48:34.900
|pid|
<p>I'm trying to control a plane via roll using PID controller , </p> <p>I had a problem finding the transfer function thus I used the following method :- </p> <blockquote> <p>Fix the plane in an air tunnel </p> <p>change the motor that controls the roll in fixed steps and check the roll </p> <p>thus I will have a table of roll/motor degree </p> <p>next is to deduce the nonlinear function using wolfram alpha or approximation neural network .</p> </blockquote> <p>Is this a correct method or should I try another method ? </p>
Deducing a single-wing plane transfer function, a.k.a. transfer function estimation through a set of points
<p>The smaller the range, the more sensitive the device will be. So, in that sense, a smaller range is better. However, the downside of a smaller range is that movement of the robot will be more likely to exceed the range of the sensor.</p> <p>You should select the smallest range which you're sure will never be exceeded by normal operation of your robot.</p> <p>Or, the smallest range within which you want the robot to be under control. I.E. the range might be exceeded during a crash, but at that point control is pretty irrelevant.</p>
4609
2014-09-23T08:53:09.560
|sensors|imu|accelerometer|gyroscope|
<p>I've been using mpu6050 IMU unit ( gyro + accelerometer ) </p> <p>I found that I can set acc range to +/- 2g or 4g till 16 g </p> <p>and same for gyro +/- 250 deg/sec , 500 deg/sec and so </p> <p>I know that they are low cost and full noise , so which settings to the range are best to ensure higher accuracy ? </p>
Is increasing gyro/accelerometer sensor range good or bad? How does it affect the accuracy?
<p>Just gonna add one here as I researched this topic thoroughly. The best one I found so far is the fully functioning AERobot: 20$ without shipping and programmable:</p> <p><a href="https://www.seeedstudio.com/AERobot-p-2531.html" rel="nofollow noreferrer">https://www.seeedstudio.com/AERobot-p-2531.html</a></p>
4625
2014-09-25T14:48:03.043
|mobile-robot|
<p>I was playing the old "confuse the cat with a flash-light" game, when I thought that I might like to program a confuse-a-cat robot.</p> <p>Something, probably with tracks, which can right itself if he flips it over, and which I can program to move randomly around a room, turning at walls, making an occasional sound or flashing a light.</p> <p>Since I am on a <strong><em>very</em></strong> tight budget, I wondered if there is some cheap kit which I can program ...</p> <p>Arduino, Raspberry Pi, any platform, so long as it is programmable.</p> <p>Thanks in advance for your help</p>
Seeking dirt cheap, wheeled, programmable robot
<p>Whether a single 3-way, 2-position pneumatic valve (typically with a work port, an input port, and an exhaust port ‒ see page 3 of <a href="http://www.nationalpneumatic.com/img/PDF/Anglais/Technical%20notes/TOPRING/Topring-Understanding-air-control-valves.pdf" rel="nofollow">nationalpneumatic.com's pdf about valves</a>) will suffice depends on information not given in the question. For example, if you can turn the compressor on or off at will, and if it will hold pressure when off, you can attach the balloon to the work port, the compressor to the input port, and leave the exhaust port open (or put an exhaust silencer or rate control on the exhaust port).</p> <p>If, however, you are using a reservoir and need three settings – balloon fill, balloon hold, and balloon exhaust – you will need to use either a 5-way, 3-position valve with center off (see eg <a href="http://www.youtube.com/watch?v=vY_bguSNgSc" rel="nofollow">youtube.com about Closed Center Valves</a>), or a pair of 3-2 or 5-2 valves. (The 5-2 valves would be used as overqualified 3-2 valves, with extra ports either blocked or left open as necessary.) Using two 3-2 valves X and Y, on valve X attach the balloon to the work port and leave the exhaust port open (or silenced, etc). Connect X's input port to Y's work port. Connect Y's input port to the compressor, and block Y's exhaust port.</p> <p>The “<a href="http://www.robotshop.com/en/viair-12v-120-psi-air-compressor.html" rel="nofollow">VIAIR 12V 120 psi Air Compressor</a>” mentioned in a comment is an oilless single-piston compressor. As well as being sold at the robotshop.com site, it is marketed as part of a roughly twice-as-expensive air system, on <a href="http://rads.stackoverflow.com/amzn/click/B002915HVK" rel="nofollow">Amazon</a> and <a href="http://www.suspensionconnection.com/10000-compressor.html" rel="nofollow">other sites</a>. A single 3-way, 2-position pneumatic valve should work ok for balloon air control if used per paragraph 1.</p> <p>Somewhat-less-expensive compressors that will work ok for initial experiments or for light duty production include Harbor Freight models <a href="http://www.harborfreight.com/12v-100-psi-high-volume-air-compressor-61788.html" rel="nofollow">61788</a>, <a href="http://www.harborfreight.com/12v-100-psi-high-volume-air-compressor-69284.html" rel="nofollow">69284</a>, and <a href="http://www.harborfreight.com/12-volt-250-psi-compact-air-compressor-4077.html" rel="nofollow">4077</a>.</p>
4635
2014-09-26T11:03:50.670
|arduino|wheeled-robot|mechanism|industrial-robot|valve|
<p>I am trying to make a simple robot with few functionality for someone, one of these functionality is inflating a balloon inside the robot, I know how to control a compressor using Arduino but the problem is that the requested task is bit different here: There must be an air exit and it must be controllable through arduino, so he can inflate the balloon to a certain pressure, and depress the air from another exit if needed (I don't know if it is possible to have a depression through the same pressure-in valvle.</p> <p>I think that it can be done somehow using a solenoid 3/2 valve or something but I am bit unfocused these days and I need some hints.</p> <p>Any thoughts?</p>
12V compressor and air pressure control
<p>Well over voltage at an ESC usually results in an instant magic smoke. I would never recommend this.</p> <p>If you have enough reserve power you can use the 3S version. If you are not sure or do not have enough power reserve then you should buy 4s ESCs.</p>
4656
2014-10-02T08:51:46.213
|motor|esc|
<p>This is for a battle robot in the hobby-weight class (5.44 Kg max)</p> <p>I want to drive the robot using 2 cordless drill motors rated at 14.4 volts. I have 4S LIPOs which means I have 4 x 3.7 volts or 14.8 volts. So far so good. </p> <p>The problem is that I bought 2 ESCs and only afterwards noticed that they are rated for 2-3S (or max of 11.1 volts).</p> <p>So my question is am I likely to damage the ESC if I use my 4S LIPOs instead of 3S LIPOs?</p> <p>Or should I just buy 3S LIPOs and live with the reduced performance?</p>
Over-voltage on a brushed electronic speed controller
<p>Like others said yes it is possible. I made it few times when i needed less height(stacked cells side-by-side), but <strong>SAFELY</strong> no. The only safely method would be to buy 2 or 3 s battery! You can damage your self. </p>
4658
2014-10-02T09:22:53.663
|battery|
<p>Newbie to robotics here! </p> <p>I bought a 5S LIPO but now realise that it is overkill. And these things are expensive!</p> <p>So, given that (as far as I know) the pack is apparently made up of individual cells of 3.7 volts each, is there any way in which I could somehow (safely) separate out the cells to get a 3S and a 2S or even single 1S cells?</p>
Can a 5S LIPO battery be changed to a 3S and a 2S?
<p>This problem can be conveniently addressed in the context of multi-rate Kalman estimation.</p> <p>See for example "<a href="http://www.researchgate.net/publication/223050722_Multi-rate_Kalman_filtering_for_the_data_fusion_of_displacement_and_acceleration_response_measurements_in_dynamic_system_monitoring" rel="nofollow">Multi-rate Kalman filtering for the data fusion of displacement and acceleration response measurements in dynamic system monitoring</a>" (Section 4).</p> <p>Essentially, you have to establish a time sample $T_s$ lower than the minimum time interval you can expect from two consecutive measurements. Then, you always perform the Kalman time update step each $T_s$ instant, whereas the measurement update is run only upon fresh data.</p>
4663
2014-10-02T14:20:44.360
|control|sensor-fusion|
<p>How can I periodically estimate the states of a discrete <a href="http://en.wikipedia.org/wiki/LTI_system_theory" rel="nofollow">linear time-invariant system</a> in the form $$\dot{\vec{x}}=\textbf{A}\vec{x}+\textbf{B}\vec{u}$$ $$\vec{y}=\textbf{C}\vec{x}+\textbf{D}\vec{u} $$if the measurements of its output $y$ are performed in irregular intervals? (suppose the input can always be measured).</p> <hr> <p>My initial approach was to design a <a href="http://en.wikipedia.org/wiki/State_observer" rel="nofollow">Luenberger observer</a> using estimates $\hat{\textbf{A}}$, $\hat{\textbf{B}}$, $\hat{\textbf{C}}$ and $\hat{\textbf{D}}$ of the abovementioned matrices, and then update it periodically every $T_s$ seconds according the following rule:</p> <blockquote> <p>If there has been a measurement of $y$ since the last update: $$\dot{\hat{x}}=\hat{\textbf{A}}\hat{x}+\hat{\textbf{B}}\hat{u}+\textbf{L}(y_{measured}-\hat{\textbf{C}}\hat{x})$$ If not: $$\dot{x}=\hat{\textbf{A}}\hat{x}+\hat{\textbf{B}}\hat{u}$$</p> </blockquote> <p>(I have omitted the superscript arrows for clarity)</p> <p>I believe that there may be a better way to do this, since I'm updating the observer using an outdated measurement of $y$ (which is outdated by $T_s$ seconds in the worst case).</p>
How to periodically estimate the states of an LTI system if the output is measured irregularly?
<p>I have a suggestion. I have worked some time on px4 controller. For position control, it takes position as input and by feedback (PID) generates desired acceleration. Since quadcopter can generate acce. in Body Z axis , from this preliminary idea u can convert the desired thrust (vector) to attitude command and use the inner loop for attitude controller.</p>
4675
2014-10-05T19:30:35.140
|pid|quadcopter|
<p>I have a question regarding the implementation of a quadrotor's position controller. In my Matlab model the quadrotor takes 4 inputs: a desired altitude ($Z_{des}$) and desired attitude angles($\Phi_{des}$, $\Theta_{des}$, $\Psi_{des}$) which reflects the motion described by the differential equations of the model (see last picture). </p> <p><a href="https://i.stack.imgur.com/fzk2D.png" rel="noreferrer"><img src="https://i.stack.imgur.com/fzk2D.png" alt="theoretical loop controller for a quadrotor"></a></p> <p>Here an insight into the implemented Matlab dynamic model. As you can see it has a structure like an inner loop controler:</p> <p><a href="https://i.stack.imgur.com/IGZaA.png" rel="noreferrer"><img src="https://i.stack.imgur.com/IGZaA.png" alt="enter image description here"></a></p> <p>Anyway...it "hovers" perfectly on the starting point. (perfect graphs :) ) Now I just need to go over and implement a sort of position controller to let the quadrotor to get from a start to a goal point, defined as usual through 3 coordinates $[X_d, Y_d, Z_d]$. </p> <p>That's tricky because I don't have the same space state variables as input and output of the system. So the controller must take a vector of three coordinates and be able to output 3 different angles to get there. The only exception is the height because it will be simply bypassed by the controller and doesn't need another calculation loop. A different story is for the three angles... </p> <p>My first idea was to simply create a feedback between the position given at the output of the simulated system and the desired position as in the figure above. But that rises another question: my quadrotor model solves the following equation system:</p> <p>$$ \large \cases{ \ddot X = ( \sin{\psi} \sin{\phi} + \cos{\psi} \sin{\theta} \cos{\phi}) \frac{U_1}{m} \cr \ddot Y = (-\cos{\psi} \sin{\phi} + \sin{\psi} \sin{\theta} \cos{\phi}) \frac{U_1}{m} \cr \ddot Z = (-g + (\cos{\theta} \cos{\phi}) \frac{U_1}{m} \cr \dot p = \frac{I_{YY} - I_{ZZ}}{I_{XX}}qr - \frac{J_{TP}}{I_{XX}} q \Omega + \frac{U_2}{I_{XX}} \cr \dot q = \frac{I_{ZZ} - I_{XX}}{I_{YY}}pr - \frac{J_{TP}}{I_{YY}} p \Omega + \frac{U_3}{I_{YY}} \cr \dot r = \frac{I_{XX} - I_{YY}}{I_{ZZ}}pq - \frac{U_4}{I_{ZZ}} } $$</p> <p>that means that they expect (as in the matlab model above) the desired angles and height. But now I need right the inverse: given a desired position calculate the right angles!!! For the direction is the solution really simple, since I can write something like:</p> <pre><code>Psi = atan2( (yd - yactual), (xd - xactual) ); </code></pre> <p>where y and x lies on the horizontal plane. This is not so simple for the other two angles. So what can I do at this point? Just "invert" the given equations to get the desired angles?</p> <p>Another idea could be to implement a simple PD or PID controller. This is much more easier given the fact that I can experiment very quickly using Simulink and get very good results. But the problem is here again: how get I the desired angles from a desired position?</p>
Position Controller for a Quadrotor
<p>yaw can be measured by rate gyro and magnetometer not with accelerometer because accelerometer values depends on gravity component but on rotation in z axis only there is no change in gravity componets</p>
4677
2014-10-05T22:12:32.103
|sensors|quadcopter|kalman-filter|imu|accelerometer|
<p>I would like to estimate the <strong>yaw</strong> angle from <strong>accelerometer</strong> and <strong>gyroscope</strong> data. For <em>roll</em> and <em>pitch</em> estimate I've used the following trigonometric equations:</p> <pre><code>roll = atan2(Ax,Az) * RAD_TO_DEG pitch = atan2(Ay,Az) * RAD_TO_DEG </code></pre> <p>and a simpified version of the <strong>Kalman Filter</strong> to consider also angular rates. The roll and pitch estimates are <em>accurate</em> (accelerometer values need to be filtered in presence of chassis vibrations).</p> <p><img src="https://i.stack.imgur.com/5XwmO.png" alt="enter image description here"></p> <p>In order to get the <strong>Yaw</strong> angle I'm using the following equation:</p> <pre><code>yaw = atan2(Ax,Ay) * RAD_TO_DEG; </code></pre> <p>but the it doesn't work. Do you have any advice?</p>
How to estimate yaw angle from tri-axis accelerometer and gyroscope
<p>Not really. </p> <p>With careful planning, experimentation and testing to minimize effect of servo motor magnetic field affecting the sensor (by placing sensor as far away as possible and mathematically iron compensation via angle table), one should be able to achieve accuracy of a few degrees, which should be sufficient for the said application. (This excludes the case of sensor error detection via multiple sensors)</p>
4682
2014-10-06T23:11:39.717
|sensors|sensor-fusion|usv|
<p>I'm building an autonomous sail boat (ripped out the guts of an RC sail boat and replaced with my own mainboard etc.)</p> <p>The controller board I have can accommodate both an MPU9150 and an HMC5883. Is there any advantage is using both magnetometers for a tilt-compensated heading? I'm thinking that I could compute the unit vector with soft/hard iron offsets removed for both, and then average the two vectors to get one slightly better one?</p> <p>Not sure if it would yield a better result though.</p>
Is there an advantage to multiple magnetometers for heading computation
<p>You don't need to modify or mutilate a belt in any way to be able to use it as a ball elevator.</p> <p>Simply place the belt upside-down <em>above</em> the balls, and and use it to roll the balls uphill.</p> <p><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/30/BallBearing.gif/120px-BallBearing.gif" alt="ball bearing animation"> &lt;-- similar to what's happening in this part of the animation</p> <p>In this case, the inner ring represents your belt and the outer ring represents the ramp.</p>
4689
2014-10-08T08:59:27.827
|kinematics|manipulator|
<p>I'm looking for a way to transport balls (diameter 50mm) 220 mm up with over a slope with a length of 120 mm. Currently I'm considering the usage of a belt system but I cannot seem to find a good belt system.</p> <p>Because of space constraints within my robot, normally I would probably take a nylon belt and jam nails trough it to make little slots and then use that. However this would result in considerable reduction in available space as it means that I have to also take into account the extra space required for the nails on the way back. This means that ideally there would be a way to reduce the space used by the nails on the way back.</p> <p>Does anybody have a good solution for this?</p>
Short distance ball transport
<p>On the video you've linked the reference(x desire) is distance and the feedback variable is also distance, what is different is the input to the system which is force. The magic happens inside the PID where the fine tuning of the gains, which are commonly regarded as abstract values, makes the system behave the way you want. </p> <p>So you can only compare variables of the same units to input the error to the PID, the PID will take care of calculating the right value no matter what units the input to the system is in. </p>
4701
2014-10-09T21:24:42.240
|pid|
<p>I have a basic question because I'm trying to understand right now a concept that I thought it was obvious. Looking at this <a href="http://youtu.be/JJSgUNfZqgU?t=3m10s" rel="nofollow">video</a> he is going to feedback the variable state <strong><em>x</em></strong> with the input of the system, which is a force <strong><em>f</em></strong>. </p> <p>Now, if I'm correct it is only possibile to feedback variables which share the same units, so I expect to drive a meter through an input variable which is a meter and the difference will be then feed into the PID. Is the example in the video just to show up how to use simulink? Or I m wrong? </p>
A general question about PID Controller
<p>I have thought of that too, I am not very sure but I feel we can improve the battery life of a quadcopter by doing the following:</p> <ol> <li>We can have a secondary winding on the quadcopter motors.</li> <li>As the motor is powered to rotate even the secondary winding rotates too hence cuts magnetic field hence creating electricity (we need to make sure that the secondary winding is on primary winding but they shouldn't be connected).</li> <li>Later somehow we can drain the energy produced by the secondary coil from the motors using some wires and slip rings.</li> <li>The energy could either be used for led lighting on the quadcopter or for charging the battery, hence increasing the battery life. </li> </ol>
4704
2014-10-10T06:39:11.547
|quadcopter|
<p>I have this idea or a very curious question in my mind. I am no where near professional though, but i would like it to be answered. </p> <p>We all know how wind turbines can be used to generate electricity. So is it possible to create a quadcoptor that will start with some minimal power by small battery but in time will sustain and keep its system on by self generating electricity and keep on rotating its rotors on its own without other external supply? </p>
Self-Powered Quadcopter
<p>The stall amps are dependent on your type of motor, there are motors with stall currents into the 100's of amps, and some into a few milli amps. 3.2 Amps isn't a lot, but it isn't impossibly little either. The motor's used by quadcopters are brushless, they generally are way higher powered, into hundreds of watt's and more. </p> <p>Make sure your power supply isn't current limited to 3.2 A. You are probably fine with a motor driver capable of something like 4A. </p>
4705
2014-10-10T13:15:14.760
|motor|
<p>I am currently building a hobby-weight (5.44kg) robot and will be using 2 x 14.4 cordless driller motors for my wheels. </p> <p>The thing is I keep reading about high amperages when working with r/c models such as quadcopters BUT when I connect my cordless driller motor to my bench power supply and monitor current draw it never rises above 3.2 Amps even when I try to stop the motor by hand. </p> <p>Of course in the arena in the event of a stand off I have plastic wheels which will slip so I am not too concerned about stall currents. </p> <p>I am now left wondering whether I have mis-calculated or whether people make a lot of fuss about high currents for nothing. or do these currents only perhaps really apply to brush-less motors?</p>
Amperage on brushed motors
<p>The motor's aren't so much at risk here, poorly designed motor drivers are. The purpose of the diodes is to drain the exces energy generated (by disconnecting the motor) to the battery of your robot. When an electric motor is running by means of external power and that power is removed, it starts acting like a generator. If it can't deliver the power it is generating to something, the voltage over the motor terminals will keep rising until something releases it's magic smoke. </p> <p>If you look at the direction of the diodes in the attached image, you will see that the diodes enable the motor to drain it's exces power to the battery, thus saving your driver. By the way, the round things with the arrow in them, are the switches, they are called MOSFETs. <img src="https://i.stack.imgur.com/657xc.png" alt="adsf"></p> <p>The cap over the motor terminals is there to short out high frequency noise generated by power supply, the motor itself or something else. Make sure the cap is bipolar if you use one, otherwise it will blow if you reverse the motor. The size doesn't have to be huge, use something like 100nF.</p> <p>What motor driver are you using? If you could provide me with a type, I can tell you whether or not you need the diodes, a small cap is always a good idea. </p>
4706
2014-10-10T13:24:26.300
|motor|
<p>I am currently building a hobby-weight robot (5.44kg) and will be using 2 x 14.4v cordless drill brushed motors to drive my wheels. </p> <p>I have read somewhere that due to "induced currents" when I turn the motor off (or reverse it presumably?) I should protect it by using a diode or a capacitor across the terminals. </p> <p>Which should I use (capacitor or diode) and what are the parameters I need to consider for these components (voltage or current)? </p> <p>Some answers to a similar question discussed capacitors but not diodes. Are diodes relevant?</p> <p>Would I seriously damage the cordless drill (presumably quite tough) motor if I did nothing?</p> <p>And don't motor controllers have any form of inbuilt protection for the motors anyway?</p>
Diode or capacitor across terminals of brushed motor
<p>Well grinding machines in general use really high cutting speeds, so a very high revolution speed. At a metal lathe it is pretty easy to get the force of your tool onto the workpiece. In a robo-fight it's actually not that easy, so i suggest very high speed. More like an angle grinder</p>
4707
2014-10-10T13:34:09.170
|motor|battle-bot|
<p>I am building a Hobby-weight robot and my weapon of choice is a spinning disk at the front.</p> <p>As regards the disk I was thinking of buying commercial (grinder-type) disks and change type of disk depending on the "enemy's" chassis construction material. So for instance I would have an aluminum cutting disk if the enemy's chassis is made of aluminum and so on.</p> <p>First question is therefore; do such disks do the job in practise (or break, fail to cut?)</p> <p>Secondly, should I use a brushed or brush-less motor for the disk? I actually have ESCs for both but sort of feel a brushed motor will give me more torque while a brush-less motor might give me more speed. So which is more important speed or torque? </p> <p>I do know - from my uncle who uses metal lathes - that machines that cut metal usually spin at a slower speed (drills, cutting wheels etc)- indeed he likes to say that metal working machines are safer than wood-working ones partially for this reason. </p> <p>But I am a newbie and really would like to have an effective weapon if possible and breaking or not-cutting disks do not make such a weapon!</p> <p>Also is it normal practise to use one battery for everything (drive and weapon) or have two separate batteries?</p>
Spinning disk Weapon
<p>Why do you need to have the motor at the axis? Wouldn't it be much easier to have <del>several</del> at least one motors at the edge of the disc and the disc would lie on wheels (the smaller the more accurate)? There could still be a pivot in the middle but just with ball bearings instead of a motor. If the surfaces are clean this should be accurate and by far easier to build. </p> <p>If this is incomprehensible I'll do a drawing. </p> <p>[I'd have preferred for this to be a comment, but lack the reputation and wanted to be helpful]</p> <p>EDIT: Here have crappy drawings; I hope they bring the general idea across though. It would be quite important to have a force pulling (or pressing if it is convenient to have something above the disc) the disc down so that you have more friction; then normal rubber wheels would be enough I think (with the benefit of having smooth motion). I'm not sure what you mean by skate bearings as wheels; the wheels should have friction in contact with the disc. The non-motorized wheels (to support the disc) could have ball bearings as hubs. </p> <p>On a side note, the design you proposed in the question is quite likely to cause the disc to wobble even if there is a only a slight misalignment or vertical force on the disc. </p> <p>It might also be advisable to mount all those things on a material that doesn't change volume too much based on temperature/humidity. </p> <p><img src="https://i.stack.imgur.com/MxTBh.jpg" alt="seen from above, the disc is transparent"> <img src="https://i.stack.imgur.com/KUsPt.jpg" alt="seen from the side, the non-motorized wheel is transparent"></p>
4713
2014-10-11T11:36:21.880
|motor|stepper-motor|
<p>In a lab build I'm doing, I'm stuck at this problem, so I am fishing for suggestions.</p> <p>I'm creating a turn-table type setup where I need to make readings (with a nanotube-tip probe I've already designed, similar to an AFM probe) on the very edge/circumference of a 10 cm radius disk (substrate).</p> <p>The current hurdle is: I need to get the substrate disk to move circularly in steps of 0.1 mm displacement -- meaning, I occasionally need to STOP at certain 0.1mm-increment positions.</p> <p>What would be a way I can achieve this, assuming an accurate feedback system (with accuracy of say ~0.1 mm, e.g., with quadrature optical encoders) is available if needed for closed-loop control?</p> <p>Specs of commonly sold steppers don't seem to allow this kind of control. I'm at the moment trying to study how, e.g. hard disks achieve extreme accuracies (granted they don't have such large disks).</p> <p>Certainly, direct-drive like I'm currently building (see below image) probably doesn't help!</p> <p><img src="https://i.stack.imgur.com/P24mX.png" alt=""></p>
Rotate (and stop) a large disk in very tiny increments
<p>The programm you are searching for is <a href="http://www.drivecalc.de/" rel="nofollow">http://www.drivecalc.de/</a></p> <p>They have all the common motors with lot of data... some "special" motors you won't find there</p>
4722
2014-10-13T16:12:58.430
|brushless-motor|
<p>I am looking for some figures surrounding the specs of brushless motors and their relative efficiency (in power usage terms) for multi-copter use. </p> <p>There are 4 basic specs for motors themselves: - Motor width (EG 28mm) - Motor height (EG 30mm) - "KV" - RPM per volt supplied (EG 800KV) - wattage (eg 300w)</p> <p>This would then be a 28-30 800kv 300w motor. </p> <p>What i am looking for is a chart containing: - Motor spec - pack voltage (eg 14.8v) - Amps drawn @ various % throttle (10% to 100% say) - static thrust from various propellers (11x5, 12x6 etc etc)</p> <p>Does such information exist? I know its a BIT subjective as prop and motor designs vary slightly, but a baseline would be a start.</p>
Brush-less motor specs vs efficiency for multi-copters
<p>It sounds like a <a href="https://en.wikipedia.org/wiki/K-d_tree" rel="nofollow">KD-tree</a></p> <p>If it is a KD-tree, the four floats seem a little redundant, bounding boxes can be calculated on the fly when traversing the tree, you only need to record the splitting plane in the node.</p> <p>I would guess that begin and end describe range of indices of agents belonging to the current node and left and right are indices of left and right subtree of the current node.</p>
4725
2014-10-13T19:16:28.143
|multi-agent|
<p>I have been going through a code base for multi agent motion planning. And I came across a recursive tree building algorithm for the agents. I haven't been able to figure out the algorithm. Does anyone know what it is called? Or any other similar kinds of algorithms so I could read more about it?</p> <p>Here is what I got from the code: The node of the tree is as follows - </p> <pre><code>&gt; struct AgentTreeNode { &gt; int begin; &gt; int end; &gt; float minX, maxX, minY, maxY; &gt; int left; &gt; int right; }; </code></pre> <p>Each node has a max and min value for x and y. And also a begin, end, left and right value. </p> <p>Then the tree is split either horizontally or vertically based on which limits are longer (x or y). And an optimal value is found and agents are split. </p> <p>Then these split agents are recursively build again.</p> <p>Thank you very much. </p>
Recursive Tree Representation for Multi Agent Robots?
<p>While I am not really sure what you exactly mean by "motion primitive", having the ability to do the "crab-motion" enables you to translate regardless of your orientation and without changing it (assuming 180 degrees of steering) which would not be the case if you had only one pair of steerable wheels (like in cars, remember the parallel parking problem).</p> <p>Note however that this is still non-holomonic motion system unless the wheels are omniwheels (you cannot rotate and translate independently).</p>
4738
2014-10-15T00:50:16.347
|wheeled-robot|motion|
<p>I am simulating a wheeled robot of six-wheels and can be independently steered, like MER-Opportunity. The wheeled robot can perform throttling forward, </p> <pre><code>||---|| &lt;--wheel orientation || || ||---|| </code></pre> <p>crab-motion, </p> <pre><code>//---// &lt;--wheel orientation when heading is 45 // // //---// </code></pre> <p>and turning on the spot.</p> <pre><code>//---\\ &lt;--wheel orientation || || \\---// </code></pre> <p>My question is: <strong><em>Is it correct to say that I have 2 motion primitives?</em></strong> Throttling forward is basically crab-motion with heading zero.</p>
Wheeled Robot Motion Primitives: Is throttling forward and crab motion considered as one?
<p>This is a standard rotation matrix with pure rotation about the Z axis. See for more info: <a href="http://en.wikipedia.org/wiki/Rotation_matrix" rel="nofollow">http://en.wikipedia.org/wiki/Rotation_matrix</a></p>
4745
2014-10-15T15:33:14.460
|robotic-arm|industrial-robot|
<p>I've just started taking a robotics course and I am having a little problem. I need to rotate the $O_i-1$ coordinate system into a position, where $X_i-1$ will be parallel with $X_i$. </p> <p>The transformation matrix is given, but I have no idea how I can figure out this transformation matrix from the picture that can be found below.</p> <p>Actually, I know why the last vector is [0 0 0 1] and the previous vector is [0 0 1 0], but I can't figure out why the first vector is [$\cos q_i$ $\sin q_i$ 0 0] and the second [$-\sin q_i$ $\cos q_i$ 0 0].</p> <p><img src="https://i.stack.imgur.com/s3BlX.png" alt="The picture"></p>
How do I get this transformation matrix?
<p>A kinect mounted on your robot is enough for mapping and localization. There are a few different packages that will work:</p> <ul> <li><a href="http://wiki.ros.org/rgbdslam" rel="nofollow">rgbdslam</a> can create a 3d map using a kinect</li> <li>You can use <a href="http://wiki.ros.org/depthimage_to_laserscan" rel="nofollow">depthimage_to_laserscan</a> to take in a depth image from the kinect and output a laser scan message which you can then use with <a href="http://wiki.ros.org/gmapping" rel="nofollow">gmapping</a> for mapping, and the <a href="http://wiki.ros.org/navigation" rel="nofollow">nav stack</a> to navigate your robot.</li> </ul> <p>If you have a ring of ultrasonic rangers on your robot, you can use those for gmapping and navigation, though the kinect will certainly work better.</p>
4754
2014-10-16T03:06:56.800
|localization|ros|slam|raspberry-pi|ultrasonic-sensors|
<p>I've build a simple wheeled robot based on two continuous servos, controlled by Raspberry Pi running ROS-groovy, with a smart phone mounted on top to provide additional sensors. I'd like to situate the bot in a room and have it move to various points on command. I don't have laser ranger finder but do have a good ultrasonic ranger finder and kinect sensors.</p> <p>What are the typical ROS setup for this?</p> <p>The idea I'm thinking is to personally (e.g. manually) map my room using kinect and use this map using only the ultrasonic range finder sensors and IMU in the lightweight robot. Would this be possible?</p>
Build a ROS robot with SLAM without laser
<p>It can be negative, because these parameters are then used to describe the transformations that are needed in order to convert one reference frame to another. For example:</p> <p><a href="https://i.stack.imgur.com/0S7xQ.png" rel="nofollow noreferrer" title="Reference frames"><img src="https://i.stack.imgur.com/0S7xQ.png" alt="Reference frames" title="Reference frames" /></a></p> <p>So if we want to go from frame <span class="math-container">$i-1$</span> to frame <span class="math-container">$i$</span> we have to do the following:</p> <ul> <li>a rotation about <span class="math-container">$x_{i-1}$</span> for <span class="math-container">$a_{i-1}$</span> in order to make the <span class="math-container">$z$</span> axis parallel</li> <li>a transport in <span class="math-container">$x_{i-1}$</span> axis for <span class="math-container">$r_{i-1}$</span> in order to go to the <span class="math-container">$z_i$</span> axis</li> <li>a rotation about <span class="math-container">$z_i$</span> for <span class="math-container">$θ_{i}$</span> in order to make the <span class="math-container">$x$</span> axis parallel</li> <li>a transport in <span class="math-container">$z_{i}$</span> axis for <span class="math-container">$d_i$</span> in order to reach the origin of the <span class="math-container">$i$</span>-th frame</li> </ul> <p>Because all the transformations were done in relative frames, we multiply the homogenous transformations from the right.</p> <p>So the transformation matrix is:</p> <p><span class="math-container">$$ T_i^{i-1} = Rot_x(a_{i-1}) Trans(r_{i-1}) Rot_z( θ_i) Trans (d_i) $$</span></p> <p><strong>So the sign of the parameters are important.</strong></p> <p>Also, another way to see it is how we use them in the Transformation Matrix:</p> <p><span class="math-container">$$ T = \begin{bmatrix} R_i^{i-1} &amp; b^{i-1} \\ 0 &amp; 1 \end{bmatrix}$$</span></p> <p>where <span class="math-container">$b^{i-1}$</span> is the vector in the <span class="math-container">$i-1$</span> frame that points to the origin of <span class="math-container">$i$</span>. This vector is given (at least in the convention i use, taken from <a href="http://mathdep.ifmo.ru/wp-content/uploads/2018/10/John-J.Craig-Introduction-to-Robotics-Mechanics-and-Control-3rd-edition-Pearson-Education-Inc.-2005.pdf" rel="nofollow noreferrer">craig's book</a> pg.83):</p> <p><span class="math-container">$$b^{i-1} = [r_{i-1} \ -sin(a_{i-1})d_i \ cos(a_{i-1})d_i]^T$$</span></p> <p>For this vector, the sign of d_i is important. In the convention used by craig, you take the signed distance from <span class="math-container">$x_{i-1}$</span> to <span class="math-container">$x_i$</span>.</p>
4755
2014-10-16T05:51:47.650
|forward-kinematics|dh-parameters|
<p>By watching this <a href="http://youtu.be/rA9tm0gTln8" rel="nofollow">video</a> which explains how to calculate the classic Denavit–Hartenberg parameters of a kinematic chain, I was left with the impression that the parameter $r_i$ (or $a_i$) will always be positive. </p> <p>Is this true? If not, could you give examples where it could be negative? </p>
Parameter $r$ of Denavit-Hartenberg
<p>Given the scales we are talking about, I think the lumberjack technique would not work. Lumberjacks walk away from a tree until they visually measure an angle of 45 degrees between the bottom and the top of the tree: at that point they are standing at a distance from the tree which is equal to its height, and which they can easily measure by walking back to the tree. It only works because the lumberjack is small in comparison to the tree, so model approximations such as "the eyes of the lumberjack are at ground level" or "the ground is perfectly horizontal" hold true enough not to impact the final result too much. I don't think it would work well enough with a rover and a 7" rock. Note that if you wanted to go this route anyway, you could implement it with a camera or a laser rangefinder, and you could use any angle and trigonometry formulas.</p>

<p>Because the rock is small, I would use a mechanical trick. Maybe use a pole with a sonar pointing down that you would place directly above the rock. You could use a pendulum to make sure it's vertical even if the rover is on uneven ground. Or you could develop a device like a forklift that you would place above the rock and progressively lower until you hit the rock. You would then know the height based on the position of the stepper/servo driving the lifting part.</p>

<p>That's the way I would go with this challenge. I encourage others to post their answers if they have different ideas, because there might be plenty of other interesting solutions!</p>
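<p>If you do go the trigonometry route anyway, the math is short. A hedged Python sketch (the function name and inputs are my own; it assumes the rover can measure its horizontal distance to the rock, the upward tilt angle at which the sensor sees the top of the rock, and the sensor's mounting height):</p>

<pre><code>import math

def rock_height(distance_m, tilt_deg, sensor_height_m):
    """Height of the rock from the horizontal distance to it, the upward
    tilt angle at which the sensor sees the rock's top, and the height
    of the sensor above the ground."""
    return sensor_height_m + distance_m * math.tan(math.radians(tilt_deg))

# Example: sensor 0.10 m above ground, rock top seen at 12 deg from 0.40 m away
print(rock_height(0.40, 12.0, 0.10))   # ~0.185 m, roughly a 7" rock
</code></pre>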
4759
2014-10-16T21:50:43.743
|arduino|wheeled-robot|algorithm|
<p>So, I am designing a rover that will navigate to a rock and then calculate the height of the rock. Currently, my team's design involves using an ultrasonic rangefinder and lots of math. I am interested in what sensors you would use to solve this problem, and how you would go about it. Assume that the rover has already located the rock.</p>

<p>Additional info: We are using an Arduino Uno to control our rover. It is completely autonomous.</p>
How to find the height of a rock with a rover?
<p>Your question is an interesting one, as most of us go through the same thing when starting out with drones. I am a drone software developer, and I began as a student with enthusiasm for physics, mathematics, and hardware development. I didn't even know the types of differential equations that govern flight. Taking up a project (a professional one, with deadlines) helped a lot; this is not meant as a "look at me" story. Some of the resources that helped most were <a href="http://robots.stanford.edu/" rel="nofollow">robots.stanford.edu</a> and its papers, ranging from discriminative training of Kalman filters to the full-fledged Stanford driving software of Stanley, the car that won the DARPA Grand Challenge. The depth of understanding you need depends on your objective and the purpose for which you are developing drone applications. Delft University and the University of Pennsylvania have been actively researching drone applications as well.</p>

<p>From my personal experience, simulators like Gazebo (with ROS) and MAVProxy can aid development and research, while flying directly feels more like a nice hobby.</p>
4762
2014-10-17T08:12:09.320
|quadcopter|
<p>I am a web developer. I am fascinated by quadrocopters and am trying to learn how to build one; basically, I am trying to jump into the field of robotics. I don't have much knowledge of electric circuits and electronics, so I did some research on what it takes to build such a flying machine, and I started learning the basics of electronics from <a href="http://www.ibiblio.org/kuphaldt/electricCircuits/" rel="nofollow">Lessons In Electric Circuits by Tony R. Kuphaldt</a>.</p>

<p>The books are very interesting, but I have not found a way to practice what I learn from them. Basically I am just going through the material and understanding it little by little. What I want to know, from your experience, is the right and effective way to learn electronics and electric circuits, and what I should do now to speed up my learning so that I can achieve my goal.</p>

<p>While researching I also came across topics such as mathematical modelling: modelling a quadrocopter first and then implementing it for real. How can I gain the knowledge to model something mathematically and implement it in real life? How much mathematics, and which areas of mathematics, do I need to learn, and how can I learn them?</p>

<p>Now you have an idea of what I want to learn and achieve. Can you please suggest a road map, or the steps I need to take, to gain the knowledge and skills to build such flying machines on my own in the near future?</p>
Learning Materials for Beginners in Robotics and Quadrocopters
<p>The USB1 host port can deliver 500 mA if the board is powered from a 5 V, 2 A supply.</p>
4766
2014-10-17T11:04:26.970
|power|usb|beagle-bone|
<p>I'm building a robot that uses a BeagleBone Black, but I have several different USB devices that I want to connect to it (a microphone, a USB sound device, and some other things). I have heard that the USB output of the BeagleBone can't supply more than 0.1 A, and the combined draw of these USB devices is likely to exceed this by a fair margin. So I started looking for powered USB hubs to use instead. However, these tend to be powered from 220 V mains, while my robot currently only has a 12 V power supply and a converter to 5 V for the BeagleBone. Given the size, expense, and inefficiency of converting 12 V up to 220 V and then back down again, that doesn't seem like a good approach. Is there a good way to solve this?</p>
how to power extra usb devices for beaglebone black
<p>With fixed cameras, background subtraction will get you 90% of the way there (assuming you don't have lots of other moving things around your room).</p>

<p>You should consider tagging your robot with color blobs like they do for <a href="https://www.google.com/search?q=robocup%20ssl&amp;tbm=isch" rel="nofollow">RoboCup SSL</a>. Color segmentation is easy, and can be made to be <a href="http://www.cs.cmu.edu/~trb/papers/wirevision00.pdf" rel="nofollow">very fast</a>.</p>

<p>But then you need to write some custom code to recover the pose of your robot. If you put more standard fiducials on the robot like <a href="http://www.artag.net/" rel="nofollow">ARTags</a>, then you get this pose directly.</p>

<p>But then you need potentially large and ugly fiducials on the robot. If you go with a motion capture system like <a href="http://www.vicon.com/" rel="nofollow">Vicon</a> or <a href="https://www.naturalpoint.com/optitrack/" rel="nofollow">OptiTrack</a>, you only need to put some small IR retroreflective markers on the robot. Then you get the pose of your robot with mm accuracy at hundreds of Hz.</p>
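<p>As a starting point for the background-subtraction route, here is a minimal OpenCV (3.x or later) Python sketch; the camera index and the thresholds are assumptions to tune. It segments moving objects and reports the centroid of the largest blob in pixel coordinates:</p>

<pre><code>import cv2

cap = cv2.VideoCapture(0)   # assumed camera index of one fixed camera
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, None)   # remove speckle noise
    # [-2] keeps this working across OpenCV 3.x/4.x return signatures
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        blob = max(contours, key=cv2.contourArea)
        m = cv2.moments(blob)
        if m['m00'] &gt; 500:   # ignore tiny blobs (threshold is a guess)
            cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
            print('robot at pixel (%.0f, %.0f)' % (cx, cy))
</code></pre>

<p>To turn that pixel location into a room coordinate you still need either the camera's calibrated pose or the kinect depth at that pixel.</p>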
4771
2014-10-18T10:06:22.577
|arduino|computer-vision|kinect|
<p>I'm trying to track a simple robot (e.g. Arduino- or Raspberry Pi-based, even a toy) in a room using fixed-location kinect sensor(s) and cameras at different parts of the room. What might one usually use to do this?</p>

<p>Edit 1: More specifically, I want to know the position (and if possible, orientation) of a moving object in the room using one or more cameras or depth sensors. I'm new to the area, but one idea might be to use blob detection or Haar cascades to detect the moving object and get its location from the kinect depth map, and I'm trying to find what package I can use for that end. But for navigation to work I'd have to pre-map the room manually or with the kinect. I can put some sensors on the tracked moving object, e.g. an IMU or sonar, but not a kinect. I am allowed full PCs running ROS/OpenCV/Kinect SDK in the environment, and I can wirelessly communicate with the tracked object (which is presently a Raspberry Pi on wheels running ROS Groovy).</p>
tracking robot in a room
<p>For anyone who investigates this issue later:</p>

<p>If you create a model with two <code>link</code> elements but no <code>joint</code> between them, you will get a parse error:</p>

<pre><code>Error [parser_urdf.cc:XXXX] Unable to call parseURDF on robot model
Error [parser.cc:XXX] parse as old deprecated model file failed.
</code></pre>
4784
2014-10-20T10:45:30.827
|simulator|gazebo|simulation|
<p>I'm trying to import the tutorial robot given at <a href="http://gazebosim.org/tutorials?tut=build_robot&amp;cat=build_robot" rel="nofollow">this link</a>.</p>

<p>However, this gives the following error:</p>

<pre><code>Error [Param.cc:181] Unable to set value [1,0471975511965976] for key[horizontal_fov]
Error [Param.cc:181] Unable to set value [0,100000001] for key[near]
Error [parser_urdf.cc:2635] Unable to call parseURDF on robot model
Error [parser.cc:278] parse as old deprecated model file failed.
Error [parser_urdf.cc:2635] Unable to call parseURDF on robot model
Error [parser.cc:278] parse as old deprecated model file failed.
Error [parser.cc:278] parse as old deprecated model file failed.
</code></pre>

<p>This suggests something is wrong with the parsing, but it does not actually point to any line of my code (the example is only 103 lines long).</p>

<pre><code>&lt;link name='chassis'&gt;
  &lt;pose&gt;0 0 .1 0 0 0&lt;/pose&gt;
  &lt;collision name='collision'&gt;
    &lt;geometry&gt;
      &lt;box&gt;
        &lt;size&gt;.4 .2 .1&lt;/size&gt;
      &lt;/box&gt;
    &lt;/geometry&gt;
  &lt;/collision&gt;
  &lt;visual name='visual'&gt;
    &lt;geometry&gt;
      &lt;box&gt;
        &lt;size&gt;.4 .2 .1&lt;/size&gt;
      &lt;/box&gt;
    &lt;/geometry&gt;
  &lt;/visual&gt;
  &lt;collision name='caster_collision'&gt;
    &lt;pose&gt;-0.15 0 -0.05 0 0 0&lt;/pose&gt;
    &lt;geometry&gt;
      &lt;sphere&gt;
        &lt;radius&gt;.05&lt;/radius&gt;
      &lt;/sphere&gt;
    &lt;/geometry&gt;
    &lt;surface&gt;
      &lt;friction&gt;
        &lt;ode&gt;
          &lt;mu&gt;0&lt;/mu&gt;
          &lt;mu2&gt;0&lt;/mu2&gt;
          &lt;slip1&gt;1.0&lt;/slip1&gt;
          &lt;slip2&gt;1.0&lt;/slip2&gt;
        &lt;/ode&gt;
      &lt;/friction&gt;
    &lt;/surface&gt;
  &lt;/collision&gt;
  &lt;visual name='caster_visual'&gt;
    &lt;pose&gt;-0.15 0 -0.05 0 0 0&lt;/pose&gt;
    &lt;geometry&gt;
      &lt;sphere&gt;
        &lt;radius&gt;.05&lt;/radius&gt;
      &lt;/sphere&gt;
    &lt;/geometry&gt;
  &lt;/visual&gt;
&lt;/link&gt;
&lt;link name="left_wheel"&gt;
  &lt;pose&gt;0.1 0.13 0.1 0 1.5707 1.5707&lt;/pose&gt;
  &lt;collision name="collision"&gt;
    &lt;geometry&gt;
      &lt;cylinder&gt;
        &lt;radius&gt;.1&lt;/radius&gt;
        &lt;length&gt;.05&lt;/length&gt;
      &lt;/cylinder&gt;
    &lt;/geometry&gt;
  &lt;/collision&gt;
  &lt;visual name="visual"&gt;
    &lt;geometry&gt;
      &lt;cylinder&gt;
        &lt;radius&gt;.1&lt;/radius&gt;
        &lt;length&gt;.05&lt;/length&gt;
      &lt;/cylinder&gt;
    &lt;/geometry&gt;
  &lt;/visual&gt;
&lt;/link&gt;
&lt;link name="right_wheel"&gt;
  &lt;pose&gt;0.1 -0.13 0.1 0 1.5707 1.5707&lt;/pose&gt;
  &lt;collision name="collision"&gt;
    &lt;geometry&gt;
      &lt;cylinder&gt;
        &lt;radius&gt;.1&lt;/radius&gt;
        &lt;length&gt;.05&lt;/length&gt;
      &lt;/cylinder&gt;
    &lt;/geometry&gt;
  &lt;/collision&gt;
  &lt;visual name="visual"&gt;
    &lt;geometry&gt;
      &lt;cylinder&gt;
        &lt;radius&gt;.1&lt;/radius&gt;
        &lt;length&gt;.05&lt;/length&gt;
      &lt;/cylinder&gt;
    &lt;/geometry&gt;
  &lt;/visual&gt;
&lt;/link&gt;
&lt;joint type="revolute" name="left_wheel_hinge"&gt;
  &lt;pose&gt;0 0 -0.03 0 0 0&lt;/pose&gt;
  &lt;child&gt;left_wheel&lt;/child&gt;
  &lt;parent&gt;chassis&lt;/parent&gt;
  &lt;axis&gt;
    &lt;xyz&gt;0 1 0&lt;/xyz&gt;
  &lt;/axis&gt;
&lt;/joint&gt;
&lt;joint type="revolute" name="right_wheel_hinge"&gt;
  &lt;pose&gt;0 0 0.03 0 0 0&lt;/pose&gt;
  &lt;child&gt;right_wheel&lt;/child&gt;
  &lt;parent&gt;chassis&lt;/parent&gt;
  &lt;axis&gt;
    &lt;xyz&gt;0 1 0&lt;/xyz&gt;
  &lt;/axis&gt;
&lt;/joint&gt;
&lt;/model&gt;
&lt;/sdf&gt;
</code></pre>

<p>This is on Ubuntu 14.04. Is there any hint at what I'm doing wrong, or what information can I provide to help reach a solution?</p>
gazebo import robot gives error
<p><strong>Controller type</strong></p>

<p>A more mathematical approach to the error.</p>

<p><a href="https://i.stack.imgur.com/e06td.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e06td.jpg" alt="enter image description here"></a></p>

<p>Suppose you have a closed-loop system like the one above. The equation is:</p>

<p>$\hspace{2.5em}$ $Y(s) = \frac{G(s)C(s)}{1+G(s)C(s)} R(s)$ </p>

<p>The error equation is:</p>

<p>$\hspace{2.5em}$ $E(s) = R(s) - Y(s)$ </p>

<p>$\hspace{2.5em}$ $E(s) = \frac{1}{1+G(s)C(s)} R(s)$ $\hspace{2.5em}[1]$</p>

<p>The <a href="https://en.wikipedia.org/wiki/Final_value_theorem" rel="nofollow noreferrer">final value theorem</a> states that:</p>

<p>$\hspace{2.5em}$ $e(\infty) = \lim_{s \rightarrow 0}sE(s)$ $\hspace{2.5em}[2]$</p>

<p>Using $[1]$ in $[2]$:</p>

<p>$\hspace{2.5em}$ $e(\infty) = \lim_{s \rightarrow 0}s\frac{1}{1+G(s)C(s)} R(s)$</p>

<p>OK! Now we can conclude some useful things.</p>

<ul>
<li>Suppose $R(s) = \frac{1}{s}$ (a unit step):</li>
</ul>

<p>$\hspace{2.5em}$ $e(\infty) = \lim_{s \rightarrow 0}\frac{1}{1+G(s)C(s)}$ $\hspace{2.5em}$ From calculus, this is the same as:</p>

<p>$\hspace{2.5em}$ $e(\infty) = \frac{1}{1+\lim_{s \rightarrow 0} (G(s)C(s))}$</p>

<p>$\hspace{2.5em}$ $e(\infty) = \frac{1}{1+G(0)C(0)}$</p>

<p>If $C(s) = K$, where $K$ is a proportional gain, and $G(s)$ has a finite DC gain $G(0)$, then the denominator $1 + KG(0)$ is finite, so the steady-state error cannot be zero. However, you can add integral action, e.g. a pure integral controller $C(s) = \frac{K}{s}$:</p>

<p>$\hspace{2.5em}$ $C(0) = \lim_{s \rightarrow 0}\frac{K}{s} = \infty$ </p>

<p>Then:</p>

<p>$\hspace{2.5em}$ $e(\infty) = \frac{1}{1+G(0)\cdot\infty} = 0$</p>

<p>And that is, in summary, why you can't get zero steady-state error with only a proportional gain.</p>
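<p>If you want to double-check this result symbolically, here is a small sympy sketch (my own addition; the plant $G(s) = \frac{1}{s^2+s+1}$ is just an assumed example with finite DC gain) that evaluates the final value theorem for a pure gain versus a controller with an integrator:</p>

<pre><code>import sympy as sp

s, K = sp.symbols('s K', positive=True)
G = 1 / (s**2 + s + 1)   # assumed plant, G(0) = 1
R = 1 / s                # unit step reference

def steady_state_error(C):
    E = R / (1 + G * C)               # E(s) from equation [1]
    return sp.limit(s * E, s, 0)      # final value theorem, equation [2]

print(steady_state_error(K))          # proportional: 1/(K + 1), never zero
print(steady_state_error(K / s))      # integral action: 0
</code></pre>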
4793
2014-10-22T08:43:18.130
|control|dynamics|manipulator|
<p>I'm reading this <a href="http://www-lar.deis.unibo.it/people/cmelchiorri/Files_Robotica/FIR_05_Dynamics.pdf" rel="nofollow noreferrer">pdf</a>, in which the dynamic equation of a one-link arm is provided:</p>

<p>$$ l \ddot{\theta} + d \dot{\theta} + mgL sin(\theta) = \tau $$</p>

<p>where </p>

<p>$\theta$ : joint variable. </p>

<p>$\tau$ : joint torque</p>

<p>$m$ : mass</p>

<p>$L$ : distance between centre mass and joint. </p>

<p>$d$ : viscous friction coefficient</p>

<p>$l$ : inertia seen at the rotation axis. </p>

<p><img src="https://i.stack.imgur.com/CvrSY.png" alt="enter image description here"></p>

<p>I would like to use a P (proportional) controller for now:</p>

<p>$$ \tau = -K_{p} (\theta - \theta_{d}) $$</p>

<p>My Matlab code is </p>

<pre><code>clear all
clc

t = 0:0.1:5;

x0 = [0; 0];

[t, x] = ode45('ODESolver', t, x0);

e = x(:,1) - (pi/2); % Error theta1

plot(t, e);
title('Error of \theta');
xlabel('time');
ylabel('\theta(t)');
grid on
</code></pre>

<p>For solving the differential equation </p>

<pre><code>function dx = ODESolver(t, x)

dx = zeros(2,1);
%Parameters:

m = 2;
d = 0.001;
L = 1;
I = 0.0023;
g = 9.81;

T = x(1) - (pi/2);

dx(1) = x(2);

q2dot = 1/I*T - 1/I*d*x(2) - 1/I*m*g*L*sin(x(1));

dx(2) = q2dot;
</code></pre>

<p>The resulting error is </p>

<p><img src="https://i.stack.imgur.com/TIUNE.png" alt="enter image description here"></p>

<p>My question is: why does the error not approach zero as time goes on? This is a regulation task, so the error should approach zero.</p>
Proportional controller error doesn't approach zero
<p>The calibration in other ESCs is used to compensate for the different input ranges that different transmitters/receivers produce -- if you push the throttle all the way forward, your system will generate some particular width of PWM pulses. This might not be identical to full throttle on another system, so you calibrate to use the full range of the ESC.</p>

<p>It should definitely not be critical to calibrate the range.</p>
4794
2014-10-22T11:09:38.287
|motor|
<p>I bought 2 brushed motor controllers from China to use with my hobby-weight battle robot (<a href="http://www.banggood.com/ESC-Brushed-Speed-Controller-For-RC-Car-Truck-Boat-320A-7_2V-16V-p-915276.html" rel="nofollow">http://www.banggood.com/ESC-Brushed-Speed-Controller-For-RC-Car-Truck-Boat-320A-7_2V-16V-p-915276.html</a>). </p>

<p>These are intended for use with my 2 cordless drill motors, which will drive the left and right wheels respectively. The robot will therefore be steered in "tank mode" by varying the speed and direction of rotation of the 2 motors using the two joysticks on my Turnigy 9x transmitter.</p>

<p>My question is: I have seen videos on YouTube where people calibrate brushless motor controllers (ESCs) using some system of pushing the joystick on a standard transmitter forward and listening to tones, and then doing the same for reverse, and so on. </p>

<p>However, when I asked the suppliers about a similar procedure for these brushed controllers, all they could say is that they did not need calibration. The exact words were "It seems that you're talking about transmitter for copters,but this ESC is for RC car or boat. You pull the trigger, it goes forward, you push the trigger, it reverse. And you don't need to calibrate it, just plug it, then it can work." </p>

<p>My transmitter is not one of those gun-shaped ones used for cars. So am I in trouble with these controllers, or should they work correctly out of the box, as the supplier seems to be implying? </p>

<p>You may fairly ask why I have not just tried this out; the simple answer is that my LiPo charger has not yet arrived, so I cannot power anything up as yet. </p>
Motor controller calibration
<p>In order to do this, you need to have something on the robot that can intercept your "single joystick" signal from the remote control and translate it to left/right wheel speeds. Your arduino might be able to serve this purpose, with the appropriate shield.</p> <p>For that calculation, check out <a href="https://robotics.stackexchange.com/q/2011/350">this question on calculating left and right motor speeds based on desired throttle and direction</a>.</p>
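<p>For reference, the core of that mixing is only a few lines. A hedged Python sketch (the axis conventions and the [-1, 1] ranges are assumptions; rescale the outputs to whatever your motor controllers expect):</p>

<pre><code>def arcade_mix(throttle, turn):
    """Map one joystick (throttle: forward/back, turn: left/right,
    both in [-1, 1]) to left/right wheel commands in [-1, 1]."""
    left = throttle + turn
    right = throttle - turn
    # Normalize so neither wheel command exceeds full scale
    scale = max(1.0, abs(left), abs(right))
    return left / scale, right / scale

print(arcade_mix(1.0, 0.0))   # straight ahead: (1.0, 1.0)
print(arcade_mix(0.0, 1.0))   # spin in place:  (1.0, -1.0)
print(arcade_mix(0.5, 0.5))   # arc to one side: (1.0, 0.0)
</code></pre>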
4796
2014-10-22T11:15:52.033
|motor|differential-drive|
<p>I bought 2 brushed motor controllers from China to use within my hobby-weight battle robot (<a href="http://www.banggood.com/ESC-Brushed-Speed-Controller-For-RC-Car-Truck-Boat-320A-7_2V-16V-p-915276.html" rel="nofollow">http://www.banggood.com/ESC-Brushed-Speed-Controller-For-RC-Car-Truck-Boat-320A-7_2V-16V-p-915276.html</a>). </p>

<p>These are intended for use with my 2 cordless drill motors, which will drive the left and right wheels respectively. The robot will therefore be steered in "tank mode" by varying the speed and direction of rotation of the 2 motors using the two joysticks on my Turnigy 9x transmitter.</p>

<p>I am seeking to refine the setup and make it easier to operate. Does anyone know of a way to coordinate the two motors so that I get a single-joystick steering system? My transmitter has 9 available channels, so if using extra channels is part of a solution then I am fine with it. I also have an Arduino available if need be. </p>
Changing tank drive (differential) mode to single joystick drive mode
<p>First things first: the information on wikipedia refers to ESCs in general, and not a specific one. You should consult the datasheet on your particular ESC model to make sure that it does in fact use 50 Hz.</p> <p>You also seem to have a misconception about how the PID controller and ESC are coupled together; in fact, the PID controller can work at whatever frequency it wants. The output of the PID will be some value, and in order to send that value to the ESC you will need to convert it to a <a href="http://en.wikipedia.org/wiki/Pulse-width_modulation" rel="nofollow">pulse-width modulated (PWM)</a> signal on the frequency that the ESC uses for input.</p>
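<p>Concretely, the conversion is just a mapping from the PID output range onto the 1-2 ms pulse width. A hedged Python sketch (the output range is an assumption, and you should check your ESC's datasheet for its actual valid pulse range):</p>

<pre><code>def pid_output_to_pulse_us(u, u_min=-1.0, u_max=1.0):
    """Map a PID output u in [u_min, u_max] onto a 1000-2000 us
    servo-style pulse width, the range a typical ESC expects."""
    u = min(max(u, u_min), u_max)               # saturate first
    fraction = (u - u_min) / (u_max - u_min)    # 0..1
    return 1000.0 + 1000.0 * fraction           # microseconds

# The PID loop itself can run much faster (e.g. several hundred Hz);
# only the pulse train sent to the ESC is framed at 50 Hz.
print(pid_output_to_pulse_us(0.0))   # 1500.0 us, mid-range command
</code></pre>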
4808
2014-10-23T04:28:46.350
|pid|
<p>Based on the Wikipedia page on ESCs, an ESC generally accepts a nominal 50 Hz PWM servo input signal whose pulse width varies from 1 ms to 2 ms:</p>

<p><a href="http://en.wikipedia.org/wiki/Electronic_speed_control" rel="nofollow">http://en.wikipedia.org/wiki/Electronic_speed_control</a></p>

<p>For our project, we are integrating a flight controller, a Naza m-lite, on our UAV, and we want to implement position control. We already have localization, and we can control the quadrotor by applying servo pulse widths to the roll, pitch, yaw and thrust/throttle channels. Since the ESC only accepts 50 Hz, will the PID controller work at only 50 Hz?</p>
Is the input of ESC really limited at 50 Hz and will the PID controllers work properly?
<p>Try clamping the commanded joint velocity, something like:</p>

<pre><code>if (angle_dot &gt; max)
    angle_dot = max
else if (angle_dot &lt; min)
    angle_dot = min
end
</code></pre>

<p>where <code>angle_dot</code> is the joint velocity $\dot{y}$. Try 0.1 (and -0.1) for the maximum velocity of the servo; I came up with that value from the graph you showed.</p>
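<p>A quick way to test this idea is to simulate the $PT_2$ model with the velocity clamp in the loop. A hedged Python sketch using simple Euler integration (the parameter values are placeholders to be fitted against your recorded step response):</p>

<pre><code>import numpy as np

def simulate_pt2_rate_limited(K=1.0, T=0.05, d=0.7, v_max=0.1,
                              u=1.0, dt=0.001, t_end=1.0):
    """Step response of T^2 y'' + 2dT y' + y = K u with the joint
    velocity clamped to [-v_max, v_max]. Parameters are placeholders."""
    y, v = 0.0, 0.0
    ys = []
    for _ in range(int(t_end / dt)):
        a = (K * u - 2.0 * d * T * v - y) / T**2   # solve the ODE for y''
        v = np.clip(v + a * dt, -v_max, v_max)     # saturate the velocity
        y += v * dt
        ys.append(y)
    return np.array(ys)

response = simulate_pt2_rate_limited()
print(response[::100])   # shows the ramp-then-settle shape of the plot
</code></pre>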
4827
2014-10-26T12:03:32.700
|motor|
<p>I am trying to create a model for the <a href="http://en.wikipedia.org/wiki/Nao_(robot)" rel="nofollow noreferrer">NAO [robot]</a>'s motors. The figure below shows the step response for the knee motor. As far as I know, the NAO internally uses a PID controller to control the motor. I have no control over the PID or its parameters, so I would like to treat the motor including the PID as a black box. Theoretically it should be possible to model <em>PID+motor</em> as a $PT_2$ system, i.e. a second-order LTI system. A $PT_2$ system is defined by the following differential equation:</p>

<p>$$T^2\ddot{y}(t) + 2dT\dot{y}(t)+y(t) = Ku(t)$$</p>

<p>I tried fitting a $PT_2$ model but was unable to find good parameters.</p>

<p>Any idea what model to use for this kind of step response?</p>

<p><strong>edit</strong>: I tried modifying the equation to add a maximum joint velocity like this:</p>

<p>$$T^2\ddot{y}(t) + (\frac{2dT\dot{y}(t) + m - |2dT\dot{y}(t) - m|}{2})+y(t) = Ku(t)$$ where $m$ is the maximum velocity. The fraction should be equivalent to $\min(2dT\dot{y}(t), m)$.</p>

<p>However, I am not sure if this is the correct way to introduce a maximum joint velocity. The optimizer is unable to find good parameters for the limited-velocity formula. I am guessing that is because the min() introduces a region where parameter changes do not cause any change in the optimization error.</p>

<p><img src="https://i.stack.imgur.com/V6QWu.png" alt="Step response image"></p>
NAO motor model identification
<p>To control the power of a DC motor, you control the voltage applied to it. So no, the nominal value is not the only voltage you can apply to the motor. The nominal value is a reference so we know to what specs the motor was engineered. You can input more than the nominal voltage to get more power, but the motor may, for example, run a bit hotter than it is supposed to, as it wasn't designed to work continuously at a higher voltage. You won't kill a 12 V motor by making it work at 20 V for 2 minutes, especially Maxon motors, which are very well built.</p>
4828
2014-10-26T13:21:06.700
|motor|brushless-motor|
<p>I have just sized the DC motors I want to use (corresponding to my robot and its intended applications - my figures include a 50% uncertainty factor to account for friction in reducers and other losses). Now I need to actually choose the exact motors I want to buy from the manufacturer (I am targeting maxon motors as I am not an expert and want no problems). I have a few down-to-earth questions about linking the mechanical needs to the electrical characteristics, among them:</p>

<p><strong>Question #1:</strong></p>

<p>Maxon (or the other manufacturers) states a "nominal voltage" in the characteristic sheets. Is that the voltage you should apply to the motor? This may be a dumb question, but I have followed the full maxon e-learning course and read other tutorials on the web, and I could not find this information anywhere. Can anyone who knows about motors confirm?</p>

<p>I have followed some theoretical and practical courses on the web, but I find it hard to find answers to my down-to-earth questions...</p>
Is the nominal voltage of a motor the voltage to apply to the motor?
<p>The other answer is not quite right. Nominal (rated) torque is the torque at which the motor can run continuously without overheating. This is not the same as the torque at maximum efficiency. A well-designed motor will have its rated torque slightly higher than the torque at maximum efficiency. The maximum torque of a DC motor is the stall torque. </p>
4829
2014-10-26T13:29:40.147
|motor|servomotor|brushless-motor|
<p>I have just sized the DC motors I want to use (corresponding to my robot and its intended applications - my figures include a 50% uncertainty factor to account for friction in reducers and other losses). Now I need to actually choose the exact motors I want to buy from the manufacturer (I am targeting maxon motors as I am not an expert and want no problems). I have a few down-to-earth questions about linking the mechanical needs to the electrical characteristics, among them:</p>

<p><strong>Question #2:</strong></p>

<p>As far as I understand, the nominal torque corresponds to the maximum torque the motor can sustain continuously. So I guess, as a rule of thumb, I should find a motor whose nominal torque is equal to (or around) my maximum needed torque (after reduction). Right?</p>
Choosing DC motor: max needed torque vs nominal torque
<p>I think the better practice might be to start with what you need the transmission to do and work backwards to make your motor selection.</p>

<ol>
<li><p>Start with your max torque requirement (10.6 Nm). What kind of &quot;max&quot; is this? Is it a momentary maximum that the system will routinely reach, or is it an absolute maximum that may occur accidentally? Let's assume that it is the former and not the latter.</p>
</li>
<li><p>Design starting with the transmission (aka reducer, gearbox) rather than the motor. How &quot;sloppy&quot; can your gearing be? If you need very low backlash and/or low weight, then you will need something like a harmonic drive. If not, you might be able to use a lower precision solution like planetary gearing.</p>
</li>
<li><p>Given your output torque requirements, look through a guide like <a href="https://www.harmonicdrive.net/_hd/content/documents1/gearhead-catalog.pdf" rel="nofollow noreferrer">this</a>. I've linked to a guide from Harmonic Drive, but there are lots of other manufacturers. If you look through the specs starting on p. 19 you will see a spec for &quot;Limit for Repeated Peak Torque&quot;. Let's take this as your max torque spec. (If you meant a momentary peak torque that is not routinely hit, then look at &quot;Limit for Momentary Torque&quot;).</p>
</li>
<li><p>You are looking for a transmission that can support 10.6 Nm repeated peak torque. The tables indicate that a size 14 gear (planetary or harmonic) could handle this with margin.</p>
</li>
<li><p>Next you need to decide on your speed requirement. Do you care how fast your element can slew? If you are not sure, then make a table of implied max output speeds for each gear model/available ratio given the max input speed listed in the specs. For example, looking at the CSF models, you could support output speeds of 85, 106.25, and 170 rpm with the CSF-14-100, -80, and -50, respectively. Step back and look at these and decide which of the candidates are acceptable and which are not.</p>
</li>
<li><p>At this point you might have a mix of harmonic drive and planetary drive candidates. If you are not particularly concerned with backlash and weight, then you probably will want to use a planetary drive for cost. Otherwise you will select a harmonic drive. (You could also make this choice up front, but maybe it is best to enumerate all the candidates up front).</p>
</li>
<li><p>You will also want to add a column for &quot;nominal&quot; output torque in each case. Here you could pull in the &quot;Average Torque&quot; specs. You could estimate average speed by multiplying the max speed by the ratio of average torque rating to momentary peak torque rating.</p>
</li>
<li><p>Multiply the speeds in your table by the gear ratio, and divide the average and momentary peak torques by it. At this point you will have a table listing the average and maximum speeds and torques that your motor needs to support. You can go to your favorite motor supplier's catalog and select accordingly. A scripted version of these last steps is sketched below.</p>
</li>
</ol>

<p>I've just illustrated this with a set of planetary and harmonic drives, but of course you could throw other types of transmissions into the mix. These two are the most common for robot arms.</p>
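<p>Steps 5 through 8 are easy to script so you can compare many candidates at once. A hedged Python sketch (the numbers in the table below are made-up placeholders, not datasheet values; substitute the rated figures from the actual catalog):</p>

<pre><code># Hypothetical candidates: (name, ratio, max input rpm,
# average torque Nm, repeated peak torque Nm). Replace with datasheet values.
candidates = [
    ('CSF-14-50',  50,  8500.0,  6.0, 18.0),
    ('CSF-14-80',  80,  8500.0,  9.0, 23.0),
    ('CSF-14-100', 100, 8500.0, 11.0, 28.0),
]

for name, ratio, in_rpm, avg_tq, peak_tq in candidates:
    out_rpm_max = in_rpm / ratio
    out_rpm_avg = out_rpm_max * avg_tq / peak_tq   # rough step-7 estimate
    motor_avg_tq = avg_tq / ratio                  # step 8: reflect to motor
    motor_peak_tq = peak_tq / ratio
    print('%s: %.1f rpm max out, motor needs %.0f mNm avg / %.0f mNm peak'
          % (name, out_rpm_max, motor_avg_tq * 1e3, motor_peak_tq * 1e3))
</code></pre>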
4831
2014-10-26T13:43:17.317
|motor|servomotor|brushless-motor|
<p>I have just sized the DC motors I want to use (corresponding to my robot and its intended applications - my figures include a 50% uncertainty factor to account for friction in reducers and other losses). Now I need to actually choose the exact motors I want to buy from the manufacturer (I am targeting maxon motors as I am not an expert and want no problems). I have a few down-to-earth questions about linking the mechanical needs to the electrical characteristics, among them:</p>

<p><strong>Question #4:</strong></p>

<p>The motor I chose (maxon brushed DC: 310005 <a href="http://www.maxonmotor.com/maxon/view/catalog/" rel="nofollow">found here</a>) has a nominal speed of 7630 rpm and a nominal torque of 51.6 mNm. My needs are a max speed of 50.42 rpm and a max torque of 10620 mNm. This means a reduction factor of 151 for speed and 206 for torque. Should I choose a gear closer to 151 or 206?</p>
Selecting a gear reduction: torque vs speed
<p>The problem is here:</p>

<blockquote>
  <p>Since, for PID <code>ERROR = SETPOINT - INPUT</code></p>
</blockquote>

<p>This is only true for linear systems. In your system, your error never exceeds $180^\circ$ no matter how many times you rotate -- modular arithmetic is affecting your calculation. (For the PID to work properly for angles, you would need to keep track of the <em>absolute</em> yaw value, which would indicate how many rotations you've done in one direction or the other. That would only be relevant to cases where the error corresponds to the amount of "twist" in a rope or cable that connects 2 rotating elements.)</p>

<p>The simple solution is to wrap the raw difference back into the range $[-180, 180]$:</p>

<pre><code>DIFF = SETPOINT - INPUT
if DIFF &gt; 180
    ERROR = DIFF - 360
else if DIFF &lt; -180
    ERROR = DIFF + 360
else
    ERROR = DIFF
end
</code></pre>

<p>However, this could result in some instability if your system has a lot of momentum (large $K_d$ term). So a more correct solution would account for whether <code>ERROR</code> combined with that momentum should take the "long way" around instead of attempting to stop, change directions, and rotate the "shorter" distance.</p>
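<p>In code, the wrap is a single line of modular arithmetic. A small Python sketch demonstrating it on the numbers from the question:</p>

<pre><code>def angle_error(setpoint_deg, measured_deg):
    """Shortest signed angular error, wrapped into [-180, 180)."""
    return (setpoint_deg - measured_deg + 180.0) % 360.0 - 180.0

print(angle_error(170, -170))   # -20: rotate 20 degrees, not 340
print(angle_error(-170, 170))   #  20: same error seen from the other side
</code></pre>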
4836
2014-10-27T08:12:41.757
|pid|
<p>My yaw angle varies from -180 degrees to 180 degrees, with negative headings on the left and positive headings on the right:</p>

<pre><code>-170  170
-135  135
 -90   90
 -45   45
 -10   10
</code></pre>

<p>If my current heading is about 170 degrees and the wind rotates the craft to the left, to about -170 degrees, how can the PID controller rotate it back to the right, to 170 degrees?</p>

<p>Since, for PID, <code>ERROR = SETPOINT - INPUT</code>: in my case <code>SETPOINT = 170</code> and <code>INPUT = -170</code>, so <code>ERROR = 170 - (-170) = 340</code>. So instead of rotating to the right with <code>PWM = 20</code>, should it rotate to the left with <code>PWM = 340</code> to come back to the desired position, which is 170 degrees?</p>
How to control PID Yaw
<p>What you say ($\dot{z}_r$ = 0, $\ddot{z}_r=0$) is true in only the most basic systems. As a trivial counter-example, consider systems where the desired height is moving: then it should be obvious that $\dot{z}_r$ and $\ddot{z}_r$ will be non-zero.</p>

<p>Even for systems with $z_r = const$, if you want to improve performance you need to start thinking about $\dot{z}_r$ and $\ddot{z}_r$. Imagine you've just changed the set point, so that $|z-z_r| \gg 0$. Your quad can't jump instantly, so it will take some time, during which $\dot{z}$ will necessarily be non-zero. If you define a profile with consistent $z_r$, $\dot{z}_r$, and $\ddot{z}_r$, you get much better overall control than just leaving $\dot{z}_r=0$ and $\ddot{z}_r=0$.</p>

<p>To be clear, this doesn't uniquely define $\dot{z}_r$ and $\ddot{z}_r$. That is for the system designer to decide. For best performance, just make sure the reference is consistent, i.e. $\frac{d}{dt} z_r = \dot{z}_r$ and $\frac{d^2}{dt^2} z_r = \ddot{z}_r$.</p>
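<p>To illustrate what a consistent profile looks like, here is a hedged Python sketch of my own (not from the paper) that generates a smooth step in $z_r$ with a quintic polynomial, so that $\dot{z}_r$ and $\ddot{z}_r$ are the true derivatives of $z_r$ and start and end at zero:</p>

<pre><code>import numpy as np

def quintic_profile(z0, z1, T, t):
    """Reference z_r(t) and its consistent first and second derivatives,
    moving from z0 to z1 over duration T (minimum-jerk style)."""
    tau = np.clip(t / T, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5            # s(0)=0, s(1)=1
    s_d = (30 * tau**2 - 60 * tau**3 + 30 * tau**4) / T
    s_dd = (60 * tau - 180 * tau**2 + 120 * tau**3) / T**2
    dz = z1 - z0
    return z0 + dz * s, dz * s_d, dz * s_dd               # z_r, dz_r, ddz_r

for t in np.linspace(0.0, 2.0, 5):
    z_r, dz_r, ddz_r = quintic_profile(0.0, 1.0, 2.0, t)
    print('t=%.1f  z_r=%.3f  dz_r=%.3f  ddz_r=%.3f' % (t, z_r, dz_r, ddz_r))
</code></pre>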
4847
2014-10-27T22:17:31.727
|quadcopter|
<p>I really want to understand and implement such a controller (sliding mode) for a quadrotor. I've found <a href="http://waset.org/Publication/modeling-and-control-of-a-quadrotor-uav-with-aerodynamic-concepts/7686" rel="nofollow">this interesting document explaining the topic</a>. If you scroll down to page 381 (don't be scared, the document is just 6-7 pages), you can find the following height control law (equation 19):</p>

<p>$$ U_1 = \frac{m}{\cos{\phi}\cos{\theta}}[c_1(\dot z_r - \dot z) + \ddot z_r + \epsilon_1 sgn(s_1) + k_1 s_1 + g] $$</p>

<p>The explanation of most of the terms should be quite easy, but let's focus on the variable <em>z</em>, the height (or altitude, if absolute) of the quadrotor. Notice that the control law requires not only the goal height (through $s_{1}$) but also the reference vertical speed $\dot z_{r}$ and vertical acceleration $\ddot z_{r}$ (<em>r</em> here means <em>reference</em>).</p>

<p>Now, it is not clear to me whether those variables are setpoints that must be tracked until the quadrotor reaches its predefined height, or whether they are just part of the mathematical formalism and will be zero most of the time (because I want to reach the target height with $z = z_{r}$ but $\dot z_{r} = \ddot z_{r} = 0$).</p>

<p>I hope my question is clear. Even though I put "sliding control" in the title, I think the answer may be helpful for other types of controllers. Regards</p>
Understanding a sliding mode controller for quadrotors
<p>To be clear, this is a single-point range sensor, correct? It is possible to do Monte Carlo localization with such a sensor, but it will take a <em>very</em> long time to converge if it does so at all, and will be easily confused. Each additional point you can add when you're doing localization will improve performance, so if you can get a few more of these sensors, you'll do better.</p>
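<p>For reference, the measurement update for such a sensor reduces to weighting each particle by how well the range predicted from its pose matches the measured one. A hedged Python sketch (the ray-cast into your map is left as a stub, and the noise level sigma is an assumption):</p>

<pre><code>import numpy as np

def expected_range(x, y, theta, world_map):
    """Stub: ray-cast from pose (x, y, theta) through your map and return
    the distance to the first obstacle or to the boundary wall."""
    raise NotImplementedError   # map-representation specific

def measurement_update(particles, weights, z_measured, world_map, sigma=0.1):
    """Reweight particles by the likelihood of one range reading;
    sigma (metres) is an assumed sensor noise level."""
    for i, (x, y, theta) in enumerate(particles):
        z_hat = expected_range(x, y, theta, world_map)
        weights[i] *= np.exp(-0.5 * ((z_measured - z_hat) / sigma) ** 2)
    return weights / np.sum(weights)
</code></pre>

<p>With a single beam, many different poses predict the same range, so the weights stay nearly uniform for a long time, which is exactly why convergence is slow.</p>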
4855
2014-10-28T20:54:56.997
|localization|particle-filter|
<p>I am trying to implement Monte Carlo localization (particle filter localization) with a simple range sensor. The range sensor only sees in the direction the robot is heading and returns the distance to any obstacle in its line of sight. If there is no obstacle, the sensor returns the distance to the boundary wall, i.e. there is no maximum range for the sensor.</p>

<p>The problem is that I am not able to recover the robot's position. I am now wondering whether this is because the sensor is not informative enough. Is it feasible to do localization with such a sensor, or should I change the sensor type?</p>

<p>Please tell me what you think.</p>
Is a simple range sensor described below sufficient to implement particle filter localization?